Columns: Prompt (string), Chosen (string), Rejected (string), Title (string), Tags (list)
Is it possible to exit a C++ loop based on keyboard input without actually having to input something each iteration? For instance ``` while(checkkeyboardinput != 'q') { do work } ``` I feel that this is very easy, but google isn't helping me, and I can't remember how to do this. Thanks for the help. EDIT: I'm using VS2008
Try \_kbhit(). As far as I know it checks if there is any keyboard input waiting in the buffer. <http://msdn.microsoft.com/en-us/library/58w7c94c%28VS.80%29.aspx> > \_kbhit > > Checks the console for keyboard input. > > int \_kbhit( void ); > > **Return Value** > > \_kbhit returns a nonzero value if a key has been pressed. Otherwise, it > returns 0. > > **Remarks** > > The \_kbhit function checks the console > for a recent keystroke. If the > function returns a nonzero value, a > keystroke is waiting in the buffer. > The program can then call \_getch or > \_getche to get the keystroke.
If you are using ncurses, you can do this very easily with `getch()`. However, there is no standard way to do what you want.
c++ exit loop based on keyboard input
[ "c++", "loops" ]
So basically I have an application that works with just one user, but I'd like to make it multi-user. This would involve me logging a user in, and keeping the user ID in session, plus adding a user id column to my database tables. No biggy. I'd like to change my dbml, or use its partial, so that any query I throw through it also gets ``` .WHERE(user=>user.id=MYPARAM) ``` added to it, to return just records for the logged in user. Is there a quick win to be had? A single place where I can specify this in the .dbml?
I would perhaps create a lambda expression dynamically. The `Where` clause takes an object of type `Expression<Func<User, bool>>`. You can then create an expression using the following. ``` Expression<Func<User,bool>> func = null; ``` Then ``` ParameterExpression xParam = Expression.Parameter(typeof(User), "u"); ``` You would then create an expression for `u.id == "test"` by using a binary expression. ``` BinaryExpression uidTest = Expression.Equal(Expression.Property(xParam, "id"), Expression.Constant("test")); ``` and then attaching it to the Expression as follows: ``` func = Expression.Lambda<Func<User, bool>>(uidTest, xParam); ``` In effect this is building the lambda expression `u => u.id == "test"`. The func object can then be used in the `.Where` as follows: `.Where(func)` Obviously, you can dynamically build this to any criteria you need at any time in your application.
I am not sure if there is something like that. If there isn't, you can expose already filtered properties for all the tables in the partial class. You would then receive the user id as a parameter to the data context. You can combine that with a dynamic expression so that those properties are as simple as possible to define.
DataContext. Is there a single method where I can add an extra WHERE?
[ "c#", "asp.net", "linq", "linq-to-sql" ]
I have a `DataTable resultSet;` - I'm trying to check fields for null, but get an '{}' (empty-set ?) object back. Searches involving "{}" aren't yielding any appropriate solutions. This is the code that isn't working as expected when the "fk\_id" field is null: ``` if (resultSet.Rows[0].ItemArray[resultSet.Columns.IndexOf("fk_id")] == null) { //never reaches here } ``` Note: using an int index instead of the `Columns.IndexOf()` isn't the issue. Also does the "{}" have some other name in C#?
To check a column for DBNull in a DataSet, you can use the [IsNull](http://msdn.microsoft.com/en-us/library/system.data.datarow.isnull.aspx) method: ``` if (resultSet.Rows[0].IsNull("fk_id")) ``` Your comparison against `null` is probably failing because DataSets don't use `null` to represent a "database NULL" value - they use [DBNull.Value](http://msdn.microsoft.com/en-us/library/system.dbnull.value.aspx). If you need your code to work the way you've presented it, try this: ``` if (resultSet.Rows[0].ItemArray[resultSet.Columns.IndexOf("fk_id")] == DBNull.Value) ```
``` try { if (DT.Rows[0][0] != null) { //your code } } catch { MessageBox.Show("AUTHENTICATION FAILED."); } ```
C# DataTable ItemArray returns '{}' - how can I test for null value?
[ "c#", "datatable", "null", "datarow" ]
I need to calculate the difference of a column between two lines of a table. Is there any way I can do this directly in SQL? I'm using Microsoft SQL Server 2008. I'm looking for something like this: ``` SELECT value - (previous.value) FROM table ``` Imagining that the "previous" variable references the latest selected row. Of course with a select like that I will end up with n-1 rows selected in a table with n rows; that's not a problem, actually it is exactly what I need. Is that possible in some way?
SQL has no built in notion of order, so you need to order by some column for this to be meaningful. Something like this: ``` select t1.value - t2.value from table t1, table t2 where t1.primaryKey = t2.primaryKey - 1 ``` If you know how to order things but not how to get the previous value given the current one (EG, you want to order alphabetically) then I don't know of a way to do that in standard SQL, but most SQL implementations will have extensions to do it. Here is a way for SQL server that works if you can order rows such that each one is distinct: ``` select rank() OVER (ORDER BY id) as 'Rank', value into temp1 from t select t1.value - t2.value from temp1 t1, temp1 t2 where t1.Rank = t2.Rank - 1 drop table temp1 ``` If you need to break ties, you can add as many columns as necessary to the ORDER BY.
Use the [lag](http://www.postgresql.org/docs/current/static/functions-window.html) function: ``` SELECT value - lag(value) OVER (ORDER BY Id) FROM table ``` Sequences used for Ids can skip values, so Id-1 does not always work.
Access the "previous row" value in a SELECT statement
[ "sql", "sql-server", "sql-server-2008" ]
I'm working on a homework project and i'm trying to store inventory data into a file. The inventory data size shouldn't be too large cause technically no one is going to really use it. **I need to write these contents to a file:** • Item Description • Quantity on Hand • Wholesale Cost • Retail Cost • Date Added to Inventory **I am going to need to make an interface that allows me to do this:** • Add new records to the file • Display any record in the file • Change any record in the file Struct would be the easiest way to go about this imo. If I can just figure out how to read / write structs to a file this should be really easy. If you could provide a small example on how to do this I would really appreciate it. Thanks!
**IOStream library** does it > The [ofstream](http://www.cplusplus.com/reference/iostream/ofstream/) class provides the interface to write data to files as output streams. > > The [ifstream](http://www.cplusplus.com/reference/iostream/ifstream/) class provides the interface to read data from files as input streams Edit- [Example](http://www.linuxtopia.org/online_books/programming_books/c++_practical_programming/c++_practical_programming_079.html)
Ask your teacher whether you may use the Boost library. If yes, read the Boost serialization tutorial; it contains simple examples: <http://www.boost.org/doc/libs/1_38_0/libs/serialization/doc/tutorial.html> But if you want to understand how to work with files, you should do this work yourself, without Boost or any other help. If you want to work with std::[io]fstreams you should decide what format you will support: - text - for this case the best way is to define operator<< and operator>> and use them for writing the structure to a file and reading it back; - binary - your structure should be POD ( plain old data ) and should not contain pointers - and you will use the streams' read and write methods. example for binary file: <http://www.codeguru.com/forum/showthread.php?t=269648>
Serializing structs
[ "c++", "serialization", "struct" ]
Basically I want to do this: ``` obj = 'str' type ( obj ) == string ``` I tried: ``` type ( obj ) == type ( string ) ``` and it didn't work. Also, what about the other types? For example, I couldn't replicate `NoneType`.
``` isinstance() ``` In your case, `isinstance("this is a string", str)` will return `True`. You may also want to read this: <http://www.canonical.org/~kragen/isinstance/>
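A quick sketch of why `isinstance` is usually preferred over comparing `type()` results, and of how to handle `None`:

```python
obj = 'str'
print(isinstance(obj, str))   # True
print(type(obj) is str)       # True

# isinstance respects inheritance; a direct type comparison does not
class MyStr(str):
    pass

s = MyStr('hello')
print(isinstance(s, str))     # True
print(type(s) is str)         # False

# No need to name NoneType: None is a singleton, so test identity
x = None
print(x is None)              # True
```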
First, avoid all type comparisons. They're very, very rarely necessary. Sometimes, they help to check parameter types in a function -- even that's rare. Wrong type data will raise an exception, and that's all you'll ever need. All of the basic conversion functions will map as equal to the type function. ``` type(9) is int type(2.5) is float type('x') is str type(u'x') is unicode type(2+3j) is complex ``` There are a few other cases. ``` isinstance( 'x', basestring ) isinstance( u'u', basestring ) isinstance( 9, int ) isinstance( 2.5, float ) isinstance( (2+3j), complex ) ``` None, BTW, never needs any of this kind of type checking. None is the only instance of NoneType. The None object is a Singleton. Just check for None ``` variable is None ``` BTW, do not use the above in general. Use ordinary exceptions and Python's own natural polymorphism.
How to compare type of an object in Python?
[ "python", "types", "compare" ]
Recently I was talking with a friend of mine who had started a C++ class a couple months ago (his first exposure to programming). We got onto the topic of C# and .NET generally, and he made the point to me that he felt it was 'doomed' for all of the commonly-cited issues (low speed, breakable bytecode, etc). I agreed with him on all those issues, but I held back in saying it was doomed, only because I felt that, in time, languages like C# could instead become native code (if Microsoft so chose to change the implementation of .NET from a bytecode, JIT runtime environment to one which compiles directly to native code like your C++ program does). My question is, am I out to lunch here? I mean, it may take a lot of work (and may break too many things), but there isn't some type of magical barrier which prevents C# code from being compiled natively (if one wanted to do it), right? There was a time where C++ was considered a very high-level language (which it still is, but not as much as in the past) yet now it's the bedrock (along with C) for Microsoft's native APIs. The idea that .NET could one day be on the same level as C++ in that respect seems only to be a matter of time and effort to me, not some fundamental flaw in the design of the language. **EDIT**: I should add that if native compilation of .NET is possible, why does Microsoft choose not to go that route? Why have they chosen the JIT bytecode path?
Java uses bytecode. C#, while it uses IL as an intermediate step, has *always* compiled to native code. IL is never directly interpreted for execution as Java bytecode is. You can even pre-compile the IL before distribution, if you really want to (hint: performance is normally better *in the long run* if you don't). The idea that C# is slow is laughable. Some of the *winforms components* are slow, but if you know what you're doing C# itself is a very speedy language. In this day and age it generally comes down to the algorithm anyway; language choice won't help you if you implement a bad bubble sort. If C# helps you use more efficient algorithms from a higher level (and in my experience it generally does) that will trump any of the other speed concerns. --- Based on your edit, I also want to explain the (typical) compilation path again. C# is compiled to IL. This IL is distributed to local machines. A user runs the program, and that program is then JIT-compiled to native code for that machine *once*. The next time the user runs the program on that machine they're running a fully-native app. There is also a JIT optimizer that can muddy things a bit, but that's the general picture. The reason you do it this way is to allow individual machines to make compile-time optimizations appropriate to that machine. You end up with faster code on average than if you distributed the same fully-compiled app to everyone. --- Regarding decompilation: The first thing to note is that you can pre-compile to native code before distribution if you really want to. At this point you're close to the same level as if you had distributed a native app. However, that won't stop a determined individual. It also largely misunderstands the economics at play. Yes, someone might perhaps reverse-engineer your work. But this assumes that all the value of the app is in the technology. 
It's very common for a programmer to over-value the code, and undervalue the execution of the *product*: interface design, marketing, connecting with users, and on-going innovation. If you do all of that right, a little extra competition will help you as much as it hurts by building up demand in your market. If you do it wrong, hiding your algorithm won't save you. If you're more worried about your app showing up on warez sites, you're even more misguided. It'll show up there anyway. A much better strategy is to [engage those users](https://stackoverflow.com/questions/651291/securing-a-net-application). --- At the moment, the biggest impediment to adoption (imo) is that the framework redistributable has become mammoth in size. Hopefully they'll address that in a relatively near release.
Are you suggesting that the fact that C# is managed code is a design flaw??
Could managed code (specifically .NET) ever become 'unmanaged'?
[ "c#", ".net", "windows" ]
Like the title says; how do I load every file in a directory? I'm interested in both C++ and Lua. Edit: For Windows I'd be glad for some real working code, especially for Lua. I can make do with boost::filesystem for C++.
For Lua, you want the module [Lua Filesystem](http://www.keplerproject.org/luafilesystem/). As observed by Nick, accessing the file system itself (as opposed to individual files) is outside the scope of the C and C++ standards. Since Lua itself is (with the exception of the dynamic loader used to implement require() for C modules) written in standard C, the core language lacks many file system features. However, it is easy to extend the Lua core since (nearly) any platform that has a file system also supports DLLs or shared libraries. Lua File system is a portable library that adds support for directory iteration, file attribute discovery, and the like. With lfs, emulating some of the capability of DIR in Lua is essentially as simple as: ``` require "lfs" dot = arg[1] or "." for name in lfs.dir(dot) do local fqn = dot.."/"..name local attr = lfs.attributes(fqn) print(name, attr.mode, os.date("%Y-%m-%d %H:%M",attr.modification), attr.size) end ``` Which produces output that looks like: ``` E:...>t-lfs.lua . directory 2009-04-02 13:23 0 .. directory 2009-04-02 13:18 0 foo.txt file 2009-02-23 01:56 0 t-lfs.lua file 2009-04-02 13:18 241 E:...> ``` If your copy of Lua came from [Lua for Windows](http://www.keplerproject.org/luafilesystem/), then you already have lfs installed, and the above sample will work out of the box. **Edit:** Incidentally, the Lua solution might also be a sensible C or C++ solution. The Lua core is not at all large, provides a dynamic, garbage-collected language, and is easy to interact with from C either as a hosting application or as an extension module. 
To use lfs from a C application, you would link with the Lua DLL, initialize a Lua state, and get the state to execute the `require"lfs"` either via `luaL_dostring()` or by using the C API to retrieve the `require()` function from the global table, push the string `"lfs"`, and call the Lua function with something like `lua_pcall(L,1,1,0)`, which leaves the `lfs` table on the top of the Lua stack. This approach probably makes the most sense if you already had a need for an embedded scripting language, and Lua meets your requirements.
For a C++ solution, have a look at the [Boost.Filesystem](http://www.boost.org/doc/libs/release/libs/filesystem/index.html) library.
How to load all files from a directory?
[ "c++", "file", "input", "lua", "io" ]
If I am using `EventWaitHandle` (or `AutoResetEvent`, `ManualResetEvent`) to synchronise between threads then do I need to call the `Close()` or `Dispose()` methods on that event handle when I am done with it? `EventWaitHandle` inherits from `WaitHandle`, which implements `IDisposable`. And FxCop complains if I don't implement `IDisposable` on any class that contains an `EventWaitHandle`. So this suggests that I do need to call it. However none of these MSDN usage examples call `Dispose()` or `Close()`: <http://msdn.microsoft.com/en-us/library/system.threading.eventwaithandle(VS.80).aspx> <http://msdn.microsoft.com/en-us/library/system.threading.manualresetevent(VS.80).aspx> <http://msdn.microsoft.com/en-us/library/system.threading.autoresetevent(VS.80).aspx> Is this just an example of Microsoft ignoring their own advice?
The disposable resource of an `EventWaitHandle` is actually a `SafeHandle` (wrapped in a `SafeWaitHandle`). `SafeHandle` implements a finalizer, which eventually makes sure the necessary resource is released, so it should be safe to let the garbage collector / finalizer thread handle it *in this case*. However, it is always a good idea to explicitly call `Dispose()` when the resource is no longer needed. The threading chapter in [C# 3.0 in a Nutshell](http://www.albahari.com/nutshell/) states > This practice is (arguably) acceptable > with wait handles because they have a > light OS burden (asynchronous > delegates rely on exactly this mechanism > to release their `IAsyncResult`'s wait > handle).
You need to dispose them explicitly. Close() is more appropriate for them as it does call Dispose().
Do I need to Dispose() or Close() an EventWaitHandle?
[ "c#", ".net", "multithreading", "synchronization" ]
Right now I have an app that allows a user to pick a ring from a drop down (select) box. There can be many to choose from, and it creates a long drop down that isn't very nice to use. I would like to maybe use jQuery to make this a little nicer. Like maybe have a tab on the top where they can choose "white gold", "yellow gold", etc. to narrow the options; I am not sure. I have never used jQuery and I'm not sure what it can do for me here, so I am open to any suggestions. I only have one requirement: right now each < option > has a value attribute indicating the stock number for the selected item, so when the submit button is clicked my processing page receives the stock number of the item they want; I need to keep that functionality. Can anyone tell me a cool solution for this? Thanks!
Hmm, something like [FlexBox](http://www.fairwaytech.com/flexbox/flexbox-demos/)?
You might want to look at the jQuery [autocomplete](http://plugins.jquery.com/project/autocompletex) plugin. You can constrain the autocompleted input so that it must be chosen from your autocomplete list. If you set the number of characters required to 0, it might be able to bring up your entire list when the text box gets focus, then narrow down as the user starts typing.
use jQuery instead of long listed < SELECT > box?
[ "javascript", "jquery" ]
I am building the documentation for our C# API containing: 1. A general overview and description of the current state as a doc/pdf file. 2. A class library API in a .chm file using Sandcastle. ### Questions: 1. Should I merge these two into the same .chm file? What is a good way to merge them? 2. I need to exclude certain classes/packages. How can I specify that in SandCastle? 3. It generates documentation for the VB code and the Visual C++ code. How can I change this? Or should I leave it, knowing that I am only using safe code? 4. Where can I find HTML Help 2.x Viewer Path on my system? ### Edit: The comments I make above methods, fields and classes are not generated in the documentation. What should I do?
I recommend that you use [Sandcastle Help File Builder](http://www.codeplex.com/SHFB) from Codeplex. You can easily include and exclude namespaces, but I am unsure how to go about excluding a single class. You can set the option to only generate documentation for public/protected classes, but I do not know if that will fit your scenario. You can also target a specific language in SHFB, as to your third question. Additionally, you can use MAML within SHFB for conceptual documentation, such as you mention as being in the doc/pdf file. You should be able to use [Doc2Maml](http://www.codeplex.com/DocProject/Release/ProjectReleases.aspx?ReleaseId=12723) to migrate your existing documentation. Doc2Maml is a part of DocProject, but it appears that you might be able to run it standalone. **Edit in response to comment:** Directions are for SHFB 1.8.0.1. I do not remember the exact way to do it in 1.7, but I believe it is similar: 1. Under "Comments" group in the Project Properties tab, click the ellipsis to the right of "NamespaceSummaries". 2. In the checkbox list in the top left, uncheck any namespace you want to exclude. This is also the screen where you put namespace summaries in.
In addition to Sandcastle as mentioned above, I would also recommend looking at [FxCop](http://msdn.microsoft.com/en-us/library/bb429476.aspx) and [StyleCop](http://code.msdn.microsoft.com/sourceanalysis) to help make sure your code and documentation are up to CLS compliance standards.
How should I document my C# code?
[ "c#", ".net", "documentation", "sandcastle" ]
I'm looking for a class in Java that has key-value association, but without using hashes. Here is what I'm currently doing: 1. Add values to a `Hashtable`. 2. Get an iterator for the `Hashtable.entrySet()`. 3. Iterate through all values and: 1. Get a `Map.Entry` for the iterator. 2. Create an object of type `Module` (a custom class) based on the value. 3. Add the class to a JPanel. 4. Show the panel. The problem with this is that I do not have control over the order that I get the values back, so I cannot display the values in a given order (without hard-coding the order). I would use an `ArrayList` or `Vector` for this, but later in the code I need to grab the `Module` object for a given Key, which I can't do with an `ArrayList` or `Vector`. Does anyone know of a free/open-source Java class that will do this, or a way to get values out of a `Hashtable` based on when they were added? Thanks!
I suggest a [`LinkedHashMap`](http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html) or a [`TreeMap`](http://docs.oracle.com/javase/7/docs/api/java/util/TreeMap.html). A `LinkedHashMap` keeps the keys in the order they were inserted, while a `TreeMap` is kept sorted via a `Comparator` or the natural `Comparable` ordering of the keys. Since it doesn't have to keep the elements sorted, `LinkedHashMap` should be faster for most cases; `TreeMap` has `O(log n)` performance for `containsKey`, `get`, `put`, and `remove`, according to the Javadocs, while `LinkedHashMap` is `O(1)` for each. If your API only expects a predictable sort order, as opposed to a specific sort order, consider using the interfaces these two classes implement, [`NavigableMap`](http://docs.oracle.com/javase/7/docs/api/java/util/NavigableMap.html) or [`SortedMap`](http://docs.oracle.com/javase/7/docs/api/java/util/SortedMap.html). This will allow you to avoid leaking specific implementations into your API, and to switch to either of those specific classes or a completely different implementation at will afterwards.
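A small sketch contrasting the two (the keys and values here are invented for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapOrderDemo {
    public static void main(String[] args) {
        // LinkedHashMap: iteration follows insertion order
        Map<String, Integer> byInsertion = new LinkedHashMap<String, Integer>();
        byInsertion.put("banana", 2);
        byInsertion.put("apple", 1);
        byInsertion.put("cherry", 3);
        System.out.println(byInsertion.keySet()); // [banana, apple, cherry]

        // TreeMap: iteration follows the keys' natural (sorted) order
        Map<String, Integer> byKey = new TreeMap<String, Integer>(byInsertion);
        System.out.println(byKey.keySet()); // [apple, banana, cherry]
    }
}
```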
LinkedHashMap will return the elements in the order they were inserted into the map when you iterate over the keySet(), entrySet() or values() of the map. ``` Map<String, String> map = new LinkedHashMap<String, String>(); map.put("id", "1"); map.put("name", "rohan"); map.put("age", "26"); for (Map.Entry<String, String> entry : map.entrySet()) { System.out.println(entry.getKey() + " = " + entry.getValue()); } ``` This will print the elements in the order they were put into the map: ``` id = 1 name = rohan age = 26 ```
Java Class that implements Map and keeps insertion order?
[ "java", "dictionary", "key-value" ]
what is the simplest way to parse the lat and long out of the following xml fragment. There is no namespace etc. It is in a string variable. not a stream. ``` <poi> <city>stockholm</city> <country>sweden</country> <gpoint> <lat>51.1</lat> <lng>67.98</lng> </gpoint> </poi> ``` everything I have read so far is waaaaay too complex for what should be a simple task e.g. [Link](https://web.archive.org/web/20160829021356/http://geekswithblogs.net:80/kobush/archive/2006/04/20/75717.aspx) I've been looking at the above link Surely there is a simpler way to do this in .net?
``` using System.IO; using System.Xml; using System.Xml.XPath; ``` . . . ``` string xml = @"<poi> <city>stockholm</city> <country>sweden</country> <gpoint> <lat>51.1</lat> <lng>67.98</lng> </gpoint> </poi>"; XmlReaderSettings set = new XmlReaderSettings(); set.ConformanceLevel = ConformanceLevel.Fragment; XPathDocument doc = new XPathDocument(XmlReader.Create(new StringReader(xml), set)); XPathNavigator nav = doc.CreateNavigator(); Console.WriteLine(nav.SelectSingleNode("/poi/gpoint/lat")); Console.WriteLine(nav.SelectSingleNode("/poi/gpoint/lng")); ``` You could of course use xpath `SelectSingleNode` to select the `<gpoint>` element into a variable.
Using Linq for XML: ``` XDocument doc= XDocument.Parse("<poi><city>stockholm</city><country>sweden</country><gpoint><lat>51.1</lat><lng>67.98</lng></gpoint></poi>"); var points=doc.Descendants("gpoint"); foreach (XElement current in points) { Console.WriteLine(current.Element("lat").Value); Console.WriteLine(current.Element("lng").Value); } Console.ReadKey(); ```
simple xml parsing
[ "c#", ".net", "xml", "parsing" ]
Is there any rational reason, why [native properties](http://tech.puredanger.com/java7#property) will not be part of Java 7?
Doing properties "right" in Java will not be easy. Rémi Forax's work especially has been valuable in figuring out what this might look like, and uncovering a lot of the "gotchas" that will have to be dealt with. Meanwhile, Java 7 has already taken too long. The closures debate was a huge, controversial distraction that wasted a lot of mind-power that could have been used to develop features (like properties) that have broad consensus of support. Eventually, the decision was made to limit major changes to modularization (Project Jigsaw). Only "small change" is being considered for the language (under Project Coin). JavaFX has beautiful property support, so Sun clearly understands the value of properties and knows how to implement them. But having been spoiled by JavaFX properties, developers are less likely to settle for a half-baked implementation in Java. If they are worth doing, they are worth doing right.
There are some high-level reasons related to schedule and resources of course. Implementation of properties and understanding all of the ramifications and intersections with other language features is a large task similar to the size of various Java 5 language changes. But I think the real reason Sun is not pushing properties is the same as closures: 1) There is no consensus on what the implementation should look like. Or rather, there are many competing alternatives and people who are passionate about properties disagree about crucial parts of the implementation. 2) Perhaps more importantly, there is a significant lack of consensus about whether the feature is wanted at all. While many people want properties, there are also many people that don't think it's necessary or useful (in particular, I think server-side people see properties as far less crucial to their daily life than swing programmers). Properties history here: * <http://tech.puredanger.com/java7#property>
Why will there be no native properties in Java 7?
[ "java", "properties", "java-7" ]
*MySQL* Suppose you want to retrieve just a single record by some id, but you want to know what its position would have been if you'd encountered it in a large ordered set. Case in point is a photo gallery. You land on a single photo, but the system must know what its offset is in the entire gallery. I suppose I could use custom indexing fields to keep track of positions, but there must be a more graceful way in SQL alone.
So, first you create a virtual table with the position # ordered by whatever your ORDER BY is, then you select the highest one from that set. That's the position in the greater result set. You can run into problems if you don't order by a unique value/set of values... If you create an index on (photo\_gallery\_id, date\_created\_on) it may do an index scan (depending on the distribution of photos), which ought to be faster than a table scan (provided your gallery\_id isn't 90% of the photos or whatnot). ``` SELECT @row := 0; SELECT MAX( position ) FROM ( SELECT @row := @row + 1 AS position FROM photos WHERE photo_gallery_id = 43 AND date_created_on <= 'the-date-time-your-photo-was' ORDER BY date_created_on ) positions; ```
Not really. I think Oracle gives you a "ROWID" or something like that, but most databases don't give you one. A custom ordering, like a column in your database that tells you what position the entry has in the gallery, is good because you can never be sure that SQL will put things in the table in the order you think they should be in.
How to find the *position* of a single record in a limited, arbitrarily ordered record set?
[ "sql", "mysql" ]
I'm trying to find a way that will add/update an attribute using JavaScript. I know I can do it with the `setAttribute()` function, but that doesn't work in IE.
You can read [here](http://www.quirksmode.org/dom/w3c_core.html#attributes) about the behaviour of attributes in many different browsers, including IE. `element.setAttribute()` should do the trick, even in IE. Did you try it? If it doesn't work, then `element.attributeName = 'value'` might.
What seems easy is actually tricky if you want to be completely compatible. ``` var e = document.createElement('div'); ``` Let's say you have an id of 'div1' to add. ``` e['id'] = 'div1'; e.id = 'div1'; e.attributes['id'] = 'div1'; e.createAttribute('id','div1') ``` These will all work except the last in IE 5.5 (which is ancient history at this point but still is XP's default with no updates). But there are contingencies, of course. Will not work in IE prior to 8:`e.attributes['style']` Will not error but won't actually set the class, it must be className:`e['class']` . However, if you're using attributes then this WILL work:`e.attributes['class']` In summary, think of attributes as literal and object-oriented. In literal, you just want it to spit out x='y' and not think about it. This is what attributes, setAttribute, createAttribute is for (except for IE's style exception). But because these are really objects things can get confused. Since you are going to the trouble of properly creating a DOM element instead of jQuery innerHTML slop, I would treat it like one and stick with the e.className = 'fooClass' and e.id = 'fooID'. This is a design preference, but in this instance trying to treat is as anything other than an object works against you. It will never backfire on you like the other methods might, just be aware of class being className and style being an object so it's style.width not style="width:50px". Also remember tagName but this is already set by createElement so you shouldn't need to worry about it. This was longer than I wanted, but CSS manipulation in JS is tricky business.
How to add/update an attribute to an HTML element using JavaScript?
[ "javascript", "html", "cross-browser" ]
Say I have classes `Foo` and `Bar` set up like this: ``` class Foo { public: int x; virtual void printStuff() { std::cout << x << std::endl; } }; class Bar : public Foo { public: int y; void printStuff() { // I would like to call Foo.printStuff() here... std::cout << y << std::endl; } }; ``` As annotated in the code, I'd like to be able to call the base class's function that I'm overriding. In Java there's the `super.funcname()` syntax. Is this possible in C++?
In C++ you have to explicitly name the base class when calling its method from the derived class. This can be done from any method of the derived class; an override is just the special case of a method with the same name. In Java there is no multiple inheritance, so you can use `super`, which uniquely names the base class. The C++ syntax is like this:

```
class Bar : public Foo {
  // ...
  void printStuff() override {  // help the compiler to check
    Foo::printStuff(); // calls base class' function
  }
};
```
Yes, ``` class Bar : public Foo { ... void printStuff() { Foo::printStuff(); } }; ``` It is the same as `super` in Java, except it allows calling implementations from different bases when you have multiple inheritance. ``` class Foo { public: virtual void foo() { ... } }; class Baz { public: virtual void foo() { ... } }; class Bar : public Foo, public Baz { public: virtual void foo() { // Choose one, or even call both if you need to. Foo::foo(); Baz::foo(); } }; ```
Can I call a base class's virtual function if I'm overriding it?
[ "", "c++", "virtual-functions", "overriding", "" ]
What is the Python equivalent of following Perl code? ``` hmac_md5_hex($login . "^" . $seq . "^" . $time . "^" . $amo . "^", $CryptoKey); ``` The Python hashlib.md5 doesn't seem to take an "cryptographic key" argument. It only accepts 1 argument.
You have to use the [hmac module](http://docs.python.org/library/hmac.html) together with md5 or sha. Per default it uses md5: ``` In [1]: import hmac, hashlib In [2]: hmac.new('key', 'msg').hexdigest() Out[2]: '18e3548c59ad40dd03907b7aeee71d67' In [3]: hmac.new('key2', 'msg').hexdigest() Out[3]: 'a4bde113179bc2a7c6ac9ad7309ea073' In [4]: hmac.new('key', 'msg', hashlib.sha256).hexdigest() Out[4]: '2d93cbc1be167bcb1637a4a23cbff01a7878f0c50ee833954ea5221bb1b8c628' ``` Your example would probably look something like: ``` hmac.new(CryptoKey, '^'.join([login, seq, time, amo]), hashlib.md5).hexdigest() ```
Take a look at [this python library documentation about hmac](http://docs.python.org/library/hmac.html) What you probably want is:

```
import hmac
hmac_object = hmac.new(crypto_key)
hmac_object.update('^'.join([login, seq, time, amo, '']))
print hmac_object.hexdigest()
```

It's probably best to use **.update()** since that way you don't have to instantiate the hmac class every time, and it's a serious performance boost if you need a lot of hex digests of the message.
How to set the crypto key for Python's MD5 module?
[ "", "python", "hash", "md5", "hmac", "" ]
Why wasn't the `java.lang.Object` class declared to be abstract ? Surely for an Object to be useful it needs added state or behaviour, an Object class is an abstraction, and as such it should have been declared abstract ... **why did they choose not to ?**
Ande, I think you are approaching this -- pun NOT intended -- with an unnecessary degree of abstraction. I think this (IMHO) unnecessary level of abstraction is what is causing the "problem" here. You are perhaps approaching this from a mathematical theoretical approach, where many of us are approaching this from a "programmer trying to solve problems" approach. I believe this difference in approach is causing the disagreements. When programmers look at practicalities and how to actually *implement* something, there are a number of times when you need some totally arbitrary Object whose actual instance is totally irrelevant. It just cannot be null. The example I gave in a comment to another post is the implementation of `*Set` (`*` == `Hash` or `Concurrent` or type of choice), which is commonly done by using a backing `*Map` and using the `Map` keys as the Set. You often cannot use `null` as the `Map` value, so what is commonly done is to use a static `Object` instance as the value, which will be ignored and never used. However, some non-null placeholder is needed. Another common use is with the `synchronized` keyword where *some* `Object` is needed to synchronize on, and you want to ensure that your synchronizing item is totally private to avoid deadlock where different classes are unintentionally synchronizing on the same lock. A very common idiom is to allocate a `private final Object` to use in a class as the lock. To be fair, as of Java 5 and `java.util.concurrent.locks.Lock` and related additions, this idiom is measurably less applicable. Historically, it has been quite useful in Java to have `Object` be instantiable. You could make a good point that with small changes in design or with small API changes, this would no longer be necessary. You're probably correct in this. And yes, the API could have provided a `Placeholder` class that extends `Object` without adding anything at all, to be used as a placeholder for the purposes described above. 
But -- if you're extending `Object` but adding nothing, what is the value in the class other than allowing `Object` to be abstract? Mathematically, theoretically, perhaps one could find a value, but pragmatically, what value would it add to do this? There are times in programming where you need an object, *some* object, *any* concrete object that is not null, something that you can compare via `==` and/or `.equals()`, but you just don't need any other feature to this object. It exists only to serve as a unique identifier and otherwise does absolutely nothing. `Object` satisfies this role perfectly and (IMHO) very cleanly. I would guess that **this** is part of the reason why `Object` was not declared abstract: It is directly useful for it not to be.
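The `*Set`-backed-by-`*Map` idiom described above can be sketched as follows. This is a minimal illustration in the spirit of how `HashSet` is implemented, not actual JDK source; the class and field names are mine:

```java
import java.util.HashMap;
import java.util.Map;

// A tiny Set-like class backed by a Map.
class MapBackedSet<E> {
    // One shared, plain Object instance: never inspected, never null -- a pure placeholder value.
    private static final Object PRESENT = new Object();

    private final Map<E, Object> map = new HashMap<E, Object>();

    public boolean add(E e)      { return map.put(e, PRESENT) == null; }
    public boolean contains(E e) { return map.containsKey(e); }
    public boolean remove(E e)   { return map.remove(e) == PRESENT; }
    public int size()            { return map.size(); }
}
```

An instantiable `Object` is exactly what makes the shared `PRESENT` placeholder possible.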
An `Object` is useful even if it does not have any state or behaviour specific to it. One example would be its use as a generic guard that's used for synchronization: ``` public class Example { private final Object o = new Object(); public void doSomething() { synchronized (o) { // do possibly dangerous stuff } } } ``` While this class is a bit simple in its implementation (it isn't evident here why it's useful to have an explicit object, you could just declare the method `synchronized`) there are several cases where this is *really* useful.
Java: Rationale of the Object class not being declared abstract
[ "", "java", "object", "specifications", "" ]
Is there a standard and reliable way of creating a temporary directory inside a Java application? There's [an entry in Java's issue database](http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4735419), which has a bit of code in the comments, but I wonder if there is a standard solution to be found in one of the usual libraries (Apache Commons etc.) ?
If you are using JDK 7 use the new [Files.createTempDirectory](http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#createTempDirectory%28java.nio.file.Path,%20java.lang.String,%20java.nio.file.attribute.FileAttribute...%29) class to create the temporary directory. ``` Path tempDirWithPrefix = Files.createTempDirectory(prefix); ``` Before JDK 7 this should do it: ``` public static File createTempDirectory() throws IOException { final File temp; temp = File.createTempFile("temp", Long.toString(System.nanoTime())); if(!(temp.delete())) { throw new IOException("Could not delete temp file: " + temp.getAbsolutePath()); } if(!(temp.mkdir())) { throw new IOException("Could not create temp directory: " + temp.getAbsolutePath()); } return (temp); } ``` You could make better exceptions (subclass IOException) if you want.
The Google Guava library has a ton of helpful utilities. One of note here is the [Files class](http://google.github.io/guava/releases/19.0/api/docs/com/google/common/io/Files.html). It has a bunch of useful methods including: ``` File myTempDir = Files.createTempDir(); ``` This does exactly what you asked for in one line. If you read the documentation [here](http://google.github.io/guava/releases/19.0/api/docs/com/google/common/io/Files.html#createTempDir()) you'll see that the proposed adaptation of `File.createTempFile("install", "dir")` typically introduces security vulnerabilities.
How to create a temporary directory/folder in Java?
[ "", "java", "file", "file-io", "directory", "temporary-directory", "" ]
**[background]** So, I've got a C# application that was written before I got here. I'm not in the dev org, at this time, but I am the tech lead in my sub-group within the internet marketing org. My responsibility is process automation, minimal desktop support, and custom apps that make our lives easier. **[/background]** **[app details]** We've got an app that creates a custom database file from a list of URLs. It was designed to have one input file, and two output files for the two applications that use these sort of db files. The rule for the difference between the two output files is compiled into the code. **[/app details]** Should an internal C# app be compiled with business logic that can't be changed without it being re-built?
Internal applications have one goal: support the process. If the rules for creating the output are simple, change every day and are put down by a user, compiling them into the binary is totally wrong, and an investment into a GUI and a new set of programmers could do much good. If the rules are complex, change once a year and are mandated by the management, having them compiled into the binary is a simple, cost-effective way to maintain them and keep users from fiddling with the internals. As always, the answer has to be "it depends".
If the logic is changed on a regular basis, you should avoid building it into the program. On the other hand, since it is internal, I'm guessing that the process required to rebuild the app is minimal or non-existent, so it may not make much of a difference.
Should an internal C# app be compiled with business logic?
[ "", "c#", "automation", "business-logic", "" ]
I have a page that display of all News in a database, which I get with function ``` IList<News> GetAll(); ``` I then bind the list to a repeater to display all the news. If a user clicks on news N, he is redirected to page.aspx?id=N If the QueryString["id"] is set, then I get one of my News like this: ``` News news = sNewsService.Get(int.Parse(id)); ``` Now, I would like to display this single news, but I cannot bind it to a Repeater as it does not implement IListSource or IEnumerable. Is there any other way to display the properties of one news instead of writing every property value to a different Label like lText = news.Text; lTitle = news.Title;... or wrapping the news in a List?
I converted the product to MVC and created custom DTO's/Views
The [DetailsView](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.detailsview.aspx) control is designed to display a single record. Not sure if that might help.
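As a rough sketch (the control name and page wiring here are my own assumptions, not from the documentation), binding the single record could look like:

```csharp
// Code-behind sketch: data-bound controls expect a list-like source,
// so the single object is wrapped in a one-element array.
News news = sNewsService.Get(int.Parse(id));
newsDetailsView.DataSource = new News[] { news };
newsDetailsView.DataBind();
```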
Display the selected record
[ "", "c#", ".net", "data-binding", "" ]
I am using this SQL query to order a list of records by date in a php page. ``` SELECT ARTICLE_NO, USERNAME, ACCESSSTARTS, ARTICLE_NAME FROM table WHERE upper(ARTICLE_NAME) LIKE % x % ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'); ``` This works fine. In a different php page, I want to be able to delete this record, and show the next one in the list. The query I am using to do that is: ``` SELECT ARTICLE_NO FROM auctions1 WHERE str_to_date( ACCESSSTARTS, '%d/%m/%Y %k:%i:%s' ) > (SELECT str_to_date( ACCESSSTARTS, '%d/%m/%Y %k:%i:%s' ) FROM table WHERE ARTICLE_NO =".$pk.") ORDER BY str_to_date( ACCESSSTARTS, '%d/%m/%Y %k:%i:%s' ) LIMIT 1"; ``` The problem I seem to be having, is because there are many records with the same date, any record from the group of records with the same date will be chosen, not the same one in the list. How can I select the next record returned from the same result set as the first query? The first query always returns the same order, so I am not sure why the second query seems to have a different order. edit: I have been trying to use Quassnoi advice. The first query I am now using is: ``` SELECT ARTICLE_NO, USERNAME, ACCESSSTARTS, ARTICLE_NAME, date_format(str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), '%d %m %Y' ) AS shortDate FROM AUCTIONS1 WHERE upper(ARTICLE_NAME) LIKE % x % ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no limit 0, 10 ``` And the second query, as suggested by Quassnoi is: ``` SELECT ARTICLE_NO FROM auctions1 WHERE (str_to_date( ACCESSSTARTS, '%d/%m/%Y %k:%i:%s' ), article_no) > ( SELECT str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no FROM auctions1 WHERE ARTICLE_NO = xxx ) ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no LIMIT 1 ``` I copied this query by echoing it out through my php page, and simply placed xxx in place of the article\_no that was present. 
This matches the first code example perfectly, however the results are the same as the code I was using in my original question. edit2: This is the query used to obtain the original result set:

```
SELECT ARTICLE_NO, USERNAME, ACCESSSTARTS, ARTICLE_NAME, date_format(str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), '%d %m %Y' ) AS shortDate
FROM auctions1
WHERE upper(ARTICLE_NAME) LIKE '%o%'
ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no
limit 0, 10;
```

Which results in this data, which is fine:

```
ARTICLE_NO USERNAME ACCESSSTARTS ARTICLE_NAME shortDate
160288212077 5864_australen 30/09/2008 05:22:30 DON ED HARDY TIGER JACKE WEISS XL 30 09 2008
220288566257 fashionticker1 01/10/2008 16:39:12 Ed Hardy Tank Top Lila Neu & OVP Gr. L 01 10 2008
280273115680 mulle15 01/10/2008 16:42:38 Ed Hardy, T-Shirt,Destroy, schwarz, Gr.L 01 10 2008
280273115991 mulle15 01/10/2008 16:43:54 Ed Hardy, T-Shirt,Destroy, schwarz, Gr.XL 01 10 2008
280273116224 mulle15 01/10/2008 16:44:59 Ed Hardy, T-Shirt,Destroy, schwarz, Gr.XXL 01 10 2008
280273118653 mulle15 01/10/2008 16:54:50 Ed Hardy, T-Shirt,King Snoopy,chocolate, Gr.M 01 10 2008
120312402767 lieschenjuli 01/10/2008 16:56:12 Badehose Shorts Ed Hardy L 01 10 2008
280273119206 mulle15 01/10/2008 16:56:47 Ed Hardy, T-Shirt,King Snoopy,chocolate, Gr.XL 01 10 2008
280273119489 mulle15 01/10/2008 16:57:49 Ed Hardy, T-Shirt,King Snoopy,chocolate, Gr.XXL 01 10 2008
160288777155 bonifatzius1 01/10/2008 16:58:33 Ed Hardy Bomberjacke Gr. L Jacke für Damen oder H... 01 10 2008
```

The problem is, if I perform this query:

```
SELECT ARTICLE_NO
FROM auctions1
WHERE (str_to_date( ACCESSSTARTS, '%d/%m/%Y %k:%i:%s' ), article_no) > (
    SELECT str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no
    FROM auctions1
    WHERE ARTICLE_NO =160288212077
)
ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no
LIMIT 1;
```

This returns 280273112610, when 220288566257 is what should be returned.
```
SELECT ARTICLE_NO
FROM auctions1
WHERE upper(ARTICLE_NAME) LIKE '% x %'
AND (str_to_date( ACCESSSTARTS, '%d/%m/%Y %k:%i:%s' ), article_no) > (
    SELECT str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no
    FROM auctions1
    WHERE ARTICLE_NO = @pk
)
ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no
LIMIT 1
```

Note that your original resultset:

```
SELECT ARTICLE_NO, USERNAME, ACCESSSTARTS, ARTICLE_NAME
FROM auctions1
WHERE upper(ARTICLE_NAME) LIKE % x %
ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s');
```

does not guarantee stable row order within one `ACCESSSTARTS`. You need to add the `PRIMARY KEY` to the `ORDER BY` clause, like this:

```
SELECT ARTICLE_NO, USERNAME, ACCESSSTARTS, ARTICLE_NAME
FROM auctions1
WHERE upper(ARTICLE_NAME) LIKE % x %
ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), article_no
```

(I'm assuming `ARTICLE_NO` is the `PRIMARY KEY` of your table) Your original rowset, depending on what access method it's using, returns rows either in table order or in index order. **You really, really, really need to change your original rowset to use the stable order.** But if you cannot do it for some reason, you can do the following:

```
SELECT ARTICLE_NO, USERNAME, ACCESSSTARTS, ARTICLE_NAME
FROM (
    SELECT @c := NULL
) vars, auctions1
WHERE upper(ARTICLE_NAME) LIKE % x %
AND CASE WHEN ARTICLE_NO = $PK THEN @c := 0 ELSE 0 END IS NOT NULL
AND (@c := @c + 1) = 2
ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s')
LIMIT 1
```

This query is less efficient and heavily relies on the fact that it will use exactly the same access method as your original query. You don't normally rely on this fact, as the access method can change at any time. If you want clear and sane code, just add the `ARTICLE_NO` into the `ORDER BY` and enjoy the query I posted first.
You could also try ordering your results by ARTICLE\_NO. Just do:

```
...
ORDER BY str_to_date(ACCESSSTARTS, '%d/%m/%Y %k:%i:%s'), ARTICLE_NO
...
```

This way, both result sets should have exactly the same order. In both cases, the next record will be the record with the lowest ARTICLE\_NO with a later date. Greetings Sacher
selecting the next record from a list ordered by date
[ "", "sql", "mysql", "" ]
C#: How do you disable a key from being pressed indefinitely in textbox's keydown event?
The standard way to handle that is to create an event handler for the `Textbox.KeyDown` event and then set `KeyEventArgs.SuppressKeyPress` to true if the key pressed matches the key you want to disable. Here is an example: ``` yourTextBox.KeyDown += delegate(Object sender, KeyEventArgs e) { e.SuppressKeyPress = (e.KeyCode == YOUR_KEY); } ```
Using e.SuppressKeyPress will prevent the key stroke from registering at all. Assuming you'd want the first keystroke to register, but not continually register that key as a keystroke when the user holds it down, wrap the e.SuppressKeyPress in a class level variable that registers when a key is being held down. ``` public class nonRepeatingTextBox : TextBox { private bool keyDown = false; protected override void OnKeyUp(KeyEventArgs e) { keyDown = false; } protected override void OnKeyDown(KeyEventArgs e) { if (keyDown) { e.SuppressKeyPress = true; } keyDown = true; } } ``` Use this class as your text box. You'd probably want to make exceptions in the OnKeyDown override, for arrow keys etc.
C#: How do you disable a key from being pressed indefinitely in textbox's keydown event?
[ "", "c#", "" ]
Is there a way to set up such enum values via Spring IoC at construction time? What I would like to do is to inject, at class load time, values that are hard-coded in the code snippet below: ``` public enum Car { NANO ("Very Cheap", "India"), MERCEDES ("Expensive", "Germany"), FERRARI ("Very Expensive", "Italy"); public final String cost; public final String madeIn; Car(String cost, String madeIn) { this.cost= cost; this.madeIn= madeIn; } } ``` Let's say that the application must be deployed in Germany, where Nanos are "Nearly free", or in India where Ferraris are "Unaffordable". In both countries, there are only three cars (deterministic set), no more no less, hence an enum, but their "inner" values may differ. So, this is a case of contextual **initialization** of immutables.
Do you mean setting up the `enum` itself? I don't think that's possible. You cannot instantiate enums because they have a `static` nature. So I think that Spring IoC can't *create* `enums` as well. On the other hand, if you need to set initialize something with a `enum` please check out the [Spring IoC chapter](http://static.springframework.org/spring/docs/2.5.x/reference/beans.html). (search for enum) There's a simple example that you can use.
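For example, the kind of configuration shown in that chapter lets you inject an enum *value* into a bean property; a sketch with invented bean and property names:

```xml
<!-- Spring converts the string "FERRARI" to the Car enum constant -->
<bean id="garage" class="com.example.Garage">
    <property name="favoriteCar" value="FERRARI"/>
</bean>
```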
I don't think it can be done from Spring's `ApplicationContext` configuration. But, do you really need it done by Spring, or can you settle for simple externalization using [ResourceBundle](http://java.sun.com/j2se/1.4.2/docs/api/java/util/ResourceBundle.html); like this:

```
public enum Car {
    NANO, MERCEDES, FERRARI;

    public final String cost;
    public final String madeIn;

    Car() {
        // Look the bundle up here: enum constants are constructed before
        // any later static field (such as a BUNDLE constant) would be initialized.
        ResourceBundle bundle = ResourceBundle.getBundle(...);
        this.cost = bundle.getString("Car." + name() + ".cost");
        this.madeIn = bundle.getString("Car." + name() + ".madeIn");
    }
}
```

In the properties file, one for each specific locale, enter the keys describing the possible internal enum values:

```
Car.NANO.cost=Very cheap
Car.NANO.madeIn=India
Car.MERCEDES.cost=Expensive
...
```

The only drawback of this approach is having to repeat the name of enum fields (cost, madeIn) in Java code as strings. **Edit**: And on the plus side, you can stack all properties of all enums into one properties file per language/locale.
Using Spring IoC to set up enum values
[ "", "java", "enums", "spring", "" ]
I consider myself a very experienced SQL person. But I'm failing to do these two things: * Reduce the size of the allocated log. * Truncate the log. DBCC sqlperf(logspace) returns: ``` Database Name Log Size (MB) Log Space Used (%) Status ByBox 1964.25 30.0657 0 ``` The following does not work with SQL 2008 ``` DUMP TRANSACTION ByBox WITH TRUNCATE_ONLY ``` Running the following does nothing either ``` DBCC SHRINKFILE ('ByBox_1_Log' , 1) DBCC shrinkdatabase(N'bybox') ``` I've tried backups. I've also tried setting the properties of the database - 'Recover Model' to both 'FULL' and 'SIMPLE' and a combination of all of the above. I also tried setting the compatibility to SQL Server 2005 (I use this setting as I want to match our production server) and SQL Server 2008. No matter what I try, the log remains at 1964.25 MB, with 30% used, which is still growing. I'd like the log to go back down near 0% and reduce the log file size to, say, 100 MB which is plenty. My database must hate me; it just ignores everything I ask it to do regarding the log. One further note. The production database has quite a few replicated tables, which I turn off when I perform a restore on my development box by using the following: ``` -- Clear out pending replication stuff exec sp_removedbreplication go EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1 go ``` Trying: ``` SELECT log_reuse_wait, log_reuse_wait_desc FROM sys.databases WHERE NAME='bybox' ``` Returns ``` log_reuse_wait log_reuse_wait_desc 0 NOTHING ``` How can I fix this problem? --- Looking at [this](http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/9f9ae5c5-3031-4963-bb16-c7ed05d9890a/) and setting the recovery model to FULL I have tried the following: ``` USE master GO EXEC sp_addumpdevice 'disk', 'ByBoxData', N'C:\<path here>\bybox.bak' -- Create a logical backup device, ByBoxLog. 
EXEC sp_addumpdevice 'disk', 'ByBoxLog', N'C:\<path here>\bybox_log.bak' -- Back up the full bybox database. BACKUP DATABASE bybox TO ByBoxData -- Back up the bybox log. BACKUP LOG bybox TO ByBoxLog ``` which returned: ``` Processed 151800 pages for database 'bybox', file 'ByBox_Data' on file 3. Processed 12256 pages for database 'bybox', file 'ByBox_Secondary' on file 3. Processed 1 pages for database 'bybox', file 'ByBox_1_Log' on file 3. BACKUP DATABASE successfully processed 164057 pages in 35.456 seconds (36.148 MB/sec). Processed 2 pages for database 'bybox', file 'ByBox_1_Log' on file 4. BACKUP LOG successfully processed 2 pages in 0.056 seconds (0.252 MB/sec). ``` Perfect! But it's not. And DBCC SHRINKFILE ('ByBox\_1\_Log' , 1) now returns with ``` DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages 7 2 251425 251425 251424 251424 ``` and DBCC SQLPERF(LOGSPACE) still reports 30% usage. I think I may have to resign myself to the fact there could well be a bug in SQL Server 2008, or that my log file has been corrupted in some manner. However, my database is in good working order, which leads me to think there is a bug *(shudders at the thought)*.
**Found the solution!** I added a load of data to the database, so the log was forced to expand. I then removed the uneeded data to get my database back to how it was. Backup and voila, a perfect 0% log. So the solution is to make the log expand.
In my situation, I had a 650 MB database with a 370 GB log file in SQL Server 2008. No matter what I tried, I could not get it to shrink down. I tried everything listed as answers here but still, nothing worked. Finally, I found a very short comment somewhere else that did work. It is to run this: ``` BACKUP LOG DatabaseName TO DISK = N'D:\Backup\DatabaseName_log.bak' GO DBCC SHRINKFILE('MyDatabase_Log', 1) GO ``` This caused the log file to shrink from 37 GB down to 1 MB. Whew!
SQL Server 2008 log will not truncate
[ "", "sql", "sql-server", "sql-server-2008", "" ]
In C++ templates, one can specify that a certain type parameter is a default. I.e. unless explicitly specified, it will use type T. Can this be done or approximated in C#? I'm looking for something like: ``` public class MyTemplate<T1, T2=string> {} ``` So that an instance of the type that doesn't explicitly specify `T2`: ``` MyTemplate<int> t = new MyTemplate<int>(); ``` Would be essentially: ``` MyTemplate<int, string> t = new MyTemplate<int, string>(); ``` Ultimately I am looking at a case wherein there is a template that is fairly widely used, but I am considering expanding with an additional type parameter. I could subclass, I guess, but I was curious if there were other options in this vein.
Subclassing is the best option. I would subclass your main generic class: `class BaseGeneric<T,U>` with a specific class `class MyGeneric<T> : BaseGeneric<T, string>` This makes it easy to keep your logic in one place (the base class), but also easy to provide both usage options. Depending on the class, there is probably very little extra work needed to make this happen.
One solution is subclassing. Another one I would use instead is factory methods (combined with the var keyword).

```
public class MyTemplate<T1,T2>
{
    public MyTemplate(..args..) { ... } // constructor
}

public static class MyTemplate{
    public static MyTemplate<T1,T2> Create<T1,T2>(..args..)
    {
        return new MyTemplate<T1, T2>(... params ...);
    }

    public static MyTemplate<T1, string> Create<T1>(...args...)
    {
        return new MyTemplate<T1, string>(... params ...);
    }
}

var val1 = MyTemplate.Create<int,decimal>();
var val2 = MyTemplate.Create<int>();
```

In the above example `val2` is of type `MyTemplate<int,string>` **and not** a type derived from it. A type `class MyStringTemplate<T>:MyTemplate<T,string>` is not the same type as `MyTemplate<T,string>`. This could pose some problems in certain scenarios. For instance, you can't cast an instance of `MyTemplate<T,string>` to `MyStringTemplate<T>`.
Is there a reasonable approach to "default" type parameters in C# Generics?
[ "", "c#", "generics", "types", "default", "parameters", "" ]
If I want to add an isEmpty method to all JavaScript arrays, I would use the following code ``` Array.prototype.isEmpty = function() { return this.length == 0; } ``` Assume this code is in a file foo.js. If I want isEmpty to be available on all pages of a web site, would I need to include foo.js in all the HTML files? In other words, do the prototypes get "reset" whenever the user navigates to a different page? Thanks, Don
Yes, you will need to include your code on each page load. Think of each page load as a compile/linking cycle. All the various bits of Javascript on the page are linked together1 and then executed as one giant program. The next time a page is loaded, the default Javascript objects start in a fresh state. --- 1. Linked together in a brain-dead "every piece of code shares the same global namespace" fashion
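So in practice `foo.js` simply has to be referenced from every page (e.g. with a `<script src="foo.js"></script>` tag), and the augmentation runs again on each load. A minimal `foo.js`:

```javascript
// foo.js -- must run on every page load, since each page starts with fresh built-ins
Array.prototype.isEmpty = function () {
  return this.length === 0;
};

console.log([].isEmpty());     // true
console.log([1, 2].isEmpty()); // false
```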
Yes, you will have to modify the prototype after each page load
JavaScript augmented prototype lifecycle
[ "", "javascript", "prototype", "" ]
In Java, I'm dynamically creating a set of files and I'd like to change the file permissions on these files on a linux/unix file system. I'd like to be able to execute the Java equivalent of `chmod`. Is that possible Java 5? If so, how? I know in Java 6 the `File` object has `setReadable()`/`setWritable()` methods. I also know I could make a system call to do this, but I'd like to avoid that if possible.
Full control over file attributes is available in Java 7, as part of the "new" New IO facility ([NIO.2](http://jcp.org/en/jsr/detail?id=203)). For example, POSIX permissions can be set on an existing file with [`setPosixFilePermissions()`,](http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setPosixFilePermissions%28java.nio.file.Path,%20java.util.Set%29) or atomically at file creation with methods like [`createFile()`](https://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#createFile(java.nio.file.Path,%20java.nio.file.attribute.FileAttribute...)) or [`newByteChannel()`.](https://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#newByteChannel(java.nio.file.Path,%20java.util.Set,%20java.nio.file.attribute.FileAttribute...)) You can create a set of permissions using `EnumSet.of()`, but the helper method [`PosixFilePermissions.fromString()`](https://docs.oracle.com/javase/7/docs/api/java/nio/file/attribute/PosixFilePermissions.html#fromString(java.lang.String)) uses a conventional format that will be more readable to many developers. For APIs that accept a `FileAttribute`, you can wrap the set of permissions with [`PosixFilePermissions.asFileAttribute()`.](https://docs.oracle.com/javase/7/docs/api/java/nio/file/attribute/PosixFilePermissions.html#asFileAttribute(java.util.Set))

```
Set<PosixFilePermission> ownerWritable = PosixFilePermissions.fromString("rw-r--r--");
FileAttribute<?> permissions = PosixFilePermissions.asFileAttribute(ownerWritable);
Files.createFile(path, permissions);
```

In earlier versions of Java, using native code of your own, or `exec`-ing command-line utilities are common approaches.
Prior to Java 6, there is no support for updating file permissions at the Java level. You have to implement your own native method or call `Runtime.exec()` to execute an OS-level command such as **chmod**. Starting from Java 6, you can use `File.setReadable()/File.setWritable()/File.setExecutable()` to set file permissions. But these don't model the POSIX file system, which allows setting permissions for different classes of users. File.setXXX() only lets you set permissions for the owner and for everyone else. Starting from Java 7, POSIX file permissions are introduced. You can set file permissions as you would on \*nix systems. The syntax is:

```
File file = new File("file4.txt");
file.createNewFile();

Set<PosixFilePermission> perms = new HashSet<>();
perms.add(PosixFilePermission.OWNER_READ);
perms.add(PosixFilePermission.OWNER_WRITE);
Files.setPosixFilePermissions(file.toPath(), perms);
```

This method can only be used on a POSIX file system, which means you cannot call it on a Windows system. For details on file permission management, I recommend you read [this post](http://www.pixelstech.net/article/1440837434-Set-file-permission-in-Java).
How do I programmatically change file permissions?
[ "", "java", "filesystems", "" ]
My development team has run into a design issue. I'm hoping someone can help me clean this part of the architecture up a bit. In my system, I have an enum with 250 members [one member represents a distinct drop down]. In order to populate the drop downs on any given window, that form sends in the enum members that relate to the drop downs needed, and the drop down information is returned. In other words, say for example, we have 3 windows. Window A has drop downs X, Y and Z. Window B has drop downs W, X and Y and Window C has drop downs T, U and W. My DropDownType enum would consist of T, U, W, X, Y, Y, and Z. So for a specified window, given the drop downs on that window, I query for the data to appear in those drop downs. This is a simplified example because my application consists of > 250 distinct drop downs. As you can imagine, I have a factory setup to return the data for each drop down. And this factory is called for each drop down requested. ``` switch (dropDownType) { case DropDownType.T: return (from t in dataContext.GetTable<TableOne>() select new DropDownDto { DropDownDisplayName = t.ColumnA, DropDownValue = t.ColumnB }).ToList(); case DropDownType.U: return (from u in dataContext.GetTable<TableTwo>() select new DropDownDto { DropDownDisplayName = u.ColumnC, DropDownValue = u.ColumnD }).ToList(); // etc... } ``` Since I have so many members in this enum, does anyone know of a more elegant way of coding this? Do you think transforming this into factory methods would be helpful (but then we'd have to worry about 250 separate files in our source...)? Is there another pattern out there that is more useful? Just having this HUGE switch statement is getting unmanageable. Any help is greatly appreciated. Thanks in advance!
You could create a `Dictionary<DropDownType, DropDownDtoDelegate>` with each entry containing an enum entry as a key and a delegate to retrieve the drop down data as the value. This would allow you to keep each method that returns a drop down list separate and out of the huge switch statement. You would then have one method to retrieve the delegate, execute it, and return the drop down list data.
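As an illustrative sketch of the same dispatch-table idea (in Python rather than C#; the enum members and query bodies below are placeholders, not the real DropDownType members or LINQ queries):

```python
from enum import Enum

class DropDownType(Enum):
    T = 1
    U = 2

# Each entry maps an enum member to a zero-argument function that
# builds the (display, value) pairs for that drop down.  The lambda
# bodies stand in for the real database queries.
DROP_DOWN_QUERIES = {
    DropDownType.T: lambda: [("Alpha", 1), ("Beta", 2)],
    DropDownType.U: lambda: [("Gamma", 3)],
}

def get_drop_down(drop_down_type):
    # One dictionary lookup replaces the entire switch statement.
    return DROP_DOWN_QUERIES[drop_down_type]()
```

Adding a new drop down is then a one-line registration in the dictionary instead of a new `case`.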
There are some solutions to such problems. 1. You might use a Dictionary to map from the drop downs to your data. 2. You may even consider moving this mapping data into the database. The amount of data may justify this decision. 3. To remove the additional keys in the form of your enum, think about using the names of the dropdowns as keys. Here is a really good blog dealing with related issues and suggesting similar solutions. [Back to Basics - Life After If, For and Switch - Like, a Data Structures Reminder](http://www.hanselman.com/blog/BackToBasicsLifeAfterIfForAndSwitchLikeADataStructuresReminder.aspx)
How to handle a massive factory in a cleaner fashion
[ "", "c#", "design-patterns", "oop", "factory", "" ]
I have an array of float values and want the value and more importantly the position of the maximum four values. I built the system originally to walk through the array and find the max the usual way, by comparing the value at the current position to a recorded max-so-far, and updating a position variable when the max-so-far changes. This worked well, an O(n) algo that was very simple. I later learned that I need to keep not only the top value, but the top three or four. I extended the same procedure and complicated the max-so-far into an array of four max-so-fars, and now the code is ugly. It still works and is still sufficiently fast because only a trivial amount of computation has been added to the procedure. It still effectively walks across the array and checks each value once. I do this in MATLAB with a sort function that returns two arrays, the sorted list and the accompanying original position list. By looking at the first few values I have exactly what I need. I am replicating this functionality into a C# .NET 2.0 program. I know that I could do something similar with a List object, and that the List object has a built-in sort routine, but I do not believe that it can tell me the original positions, and those are really what I am after. It has been working well, but now I find myself wanting the fifth max value and see that rewriting the max-so-far checker that is currently an ugly mess of if statements would only compound the ugliness. It would work fine and be no slower to add a fifth level, but I want to ask the SO community if there is a better way. Sorting the entire list takes many more computations than my current method, but I don't think it would be a problem, as the list is 'only' one or two thousand floats; so if there is a sort routine that can give back the original positions, that would be ideal. 
As background, this array is the result of a Fourier Transform on a kilobyte of wave file, so the max values' positions correspond to the sample data's peak frequencies. I had been content with the top four, but see a need to really gather the top five or six for more accurate sample classification.
I can suggest an alternative algorithm which you'll have to code :) Use a heap of size K, where K denotes the count of top elements you want to save. Initialize this to the first K elements of your original array. For the remaining N - K elements, walk the array, inserting as and when required. ``` proc top_k (array<n>, heap<k>) heap <- array<0..k-1> for each (array<k..n-1>) if array[i] > heap.min heap.erase(heap.min) heap.insert(array[i]) end if end for ```
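As an illustrative sketch (in Python rather than C#, using the standard `heapq` module), keeping (value, index) pairs in the heap also preserves the original positions the question asks for; `heapq` maintains a min-heap, so the smallest of the current top K is always at `heap[0]`:

```python
import heapq

def top_k_with_positions(values, k):
    # Keep (value, index) pairs so the original positions survive.
    heap = [(v, i) for i, v in enumerate(values[:k])]
    heapq.heapify(heap)
    for i in range(k, len(values)):
        if values[i] > heap[0][0]:
            # Pop the smallest of the current top k, push the new value.
            heapq.heapreplace(heap, (values[i], i))
    # Largest first; each entry is (value, original_index).
    return sorted(heap, reverse=True)
```

This is O(n log k) rather than the O(n log n) of a full sort, which matters little for a couple of thousand floats but keeps the position bookkeeping tidy.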
You could still use your list idea - the elements you put in the list could be a structure which stores both the index and the value; but sorts only on the value, for instance: ``` class IndexAndValue : IComparable<IndexAndValue> { public int index; public double value; public int CompareTo(IndexAndValue other) { return value.CompareTo(other.value); } } ``` Then you can stick them in the list, while retaining the information about the index. If you keep only the largest m items in the list, then your efficiency should be O(mn).
How to find the top several values from an array?
[ "", "c#", "algorithm", "arrays", "max", "" ]
I don't get it. I've been staring at the code for three hours and I can't see the problem. The class I'm creating, called TwoDayPackage, is derived from a class called Package. This is how I defined the constructor: ``` TwoDayPackage(string, string, string, string, int, string, string, string, string, int, float, float, float); ``` This is how I implement the constructor: ``` TwoDayPackage::TwoDayPackage(string sName, string sAddress, string sState, string sCountry, int sZIP, string rName, string rAddress, string rState, string rCountry, int rZIP, float weight, float cost, float flat) { Package::Package(sName, sAddress, sState, sCountry, sZIP, rName, rAddress, rState, rCountry, rZIP, weight, cost); flatRate = flat; } ``` This is how I use it in my main function. ``` TwoDayPackage pack2(senderName, senderAddress, senderState, senderCountry, senderZIP, receipientName, receipientAddress, receipientState, receipientCountry, receipientZIP, weight, cost, flat); ``` I know my argument list is pretty long, there's a reason for that. Thanks.
Should use: ``` TwoDayPackage::TwoDayPackage(string sName, string sAddress, string sState, string sCountry, int sZIP, string rName, string rAddress, string rState, string rCountry, int rZIP, float weight, float cost, float flat) :Package(sName, sAddress, sState, sCountry, sZIP, rName, rAddress, rState, rCountry, rZIP, weight, cost) { flatRate = flat; } ```
A default ctor is one that can be called with no arguments. At least with this code, you don't have one: a default ctor either has the signature ``` ClassName::ClassName(); ``` or every argument must have a default. That said, Dirk's point about the syntax is correct; if you want to invoke the parent class's ctor, you should do it after that colon. --- @dirkgently's answer shows the right syntax, but let's expand it a little bit. You've got two classes ``` class Package { // ... Package( /* that horrible ctor arg list */){/*...*/} // ... }; class TwoDayPackage : public Package { // ... TwoDayPackage(/* another horrible ctor */); // declaration only // ... }; ``` and then you come along to define it ``` TwoDayPackage::TwoDayPackage(string sName, string sAddress, string sState, string sCountry, int sZIP, string rName, string rAddress, string rState, string rCountry, int rZIP, float weight, float cost, float flat) { Package::Package(sName, sAddress, sState, sCountry, sZIP, rName, rAddress, rState, rCountry, rZIP, weight, cost); flatRate = flat; } ``` ... but that doesn't work? Why? Basically, because what you're telling C++ doesn't make sense: the `Package::Package` is just naming the superclass's ctor and not doing anything with it. You could create a new object of class Package by using the *new* operator, ``` Package* foo = new Package(sName, sAddress, sState, sCountry, sZIP, rName, rAddress, rState, rCountry, rZIP, weight, cost); ``` but that's still not what you want to do; what you *want* is to tell C++ to construct the Package parts of TwoDayPackage using that arg list. You don't need to have the fully-qualified name, because the compiler already knows what the parent class is. You could also just assign values in the child ctor, but that's inefficient, as it makes the compiler generate code for "multiple trips to the well". So C++ has a special syntax where initializers are put after a colon, as Dirk showed. 
One more thing: since you're just assigning a parameter to flat anyway, you can say ``` TwoDayPackage::TwoDayPackage(string sName, string sAddress, string sState, string sCountry, int sZIP, string rName, string rAddress, string rState, string rCountry, int rZIP, float weight, float cost, float flat) : Package(sName, sAddress, sState, sCountry, sZIP, rName, rAddress, rState, rCountry, rZIP, weight, cost), flatRate(flat) { } ``` Check [this section](http://www.parashift.com/c++-faq-lite/ctors.html) of the C++ FAQ Lite for more.
"No appropriate default constructor available" error in Visual C++
[ "", "c++", "constructor", "" ]
I'm trying to learn how to use JoGL, and for some reason I'm getting this error despite having all of these imported: ``` import javax.media.opengl.*; import javax.media.opengl.glu.*; import com.sun.opengl.util.*; import com.sun.opengl.util.j2d.*; ``` ``` public void display(javax.media.opengl.GLDrawable gLDrawable) { final GL gl = gLDrawable.getGL(); } ``` nor ``` public void display(GLDrawable gLDrawable) { final GL gl = gLDrawable.getGL(); } ``` work. How do I fix this? (edit: moved the next question up to here... you can edit your question or make a new question if things change with the answers you get) Okay, that worked for getGL(), but now I'm still having problems with "cannot find symbol method getGL**U**()" ``` public void reshape(GLAutoDrawable gLDrawable, int x, int y, int width, int height) { final GL gl = gLDrawable.getGL(); final GLU glu = gLDrawable.getGLU(); } ```
It's because GLDrawable actually *doesn't* have such a method - see [here](http://download.java.net/media/jogl/builds/archive/jsr-231-beta5/javadoc_public/javax/media/opengl/GLDrawable.html). It's an interface which is implemented by GLCanvas and GLJPanel, both of which do have such a method. I think what you need to do is pass in a GLAutoDrawable and use its getGL method. See [here](http://download.java.net/media/jogl/builds/archive/jsr-231-beta5/javadoc_public/javax/media/opengl/GLAutoDrawable.html). [This wikipedia article](http://en.wikipedia.org/wiki/Java_OpenGL) has some sample source which shows how it's done. In answer to your further query where you ask why `final GLU glu = gLDrawable.getGLU();` doesn't work, you're probably using the latest JSR-231 version of the API but basing your own code on older source code samples. Before that spec, you used to get the GLU from: ``` final GLU glu = glAutoDrawable.getGLU(); ``` With it, you now just use: ``` final GLU glu = new GLU(); ``` See [here](http://download.java.net/media/jogl/builds/archive/jsr-231-beta5/javadoc_public/javax/media/opengl/glu/GLU.html) for details on that particular object.
Okay, that worked for getGL(), but now I'm still having problems with "cannot find symbol method getGL**U**()" ``` public void reshape(GLAutoDrawable gLDrawable, int x, int y, int width, int height) { final GL gl = gLDrawable.getGL(); final GLU glu = gLDrawable.getGLU(); } ```
"cannot find symbol method getGL()" (and now getGLU() as well)
[ "", "java", "opengl", "jogl", "" ]
SVN doesn't commit a library like the MySQL connector, so when I commit my project it is not uploaded to the server. How can I fix this? And how do I sync other resources, like pictures?
SVN, just like any other source control management system, can handle binary files as well. This should not pose a problem. Check that the file is not covered by an ignore rule, such as the `svn:ignore` property or a similar ignore flag. Check that any other SVN properties that define the file are set correctly.
Subversion can support any type of file, what software are you using to make your commits? If you are using TortoiseSVN: make sure you have the "Show un-versioned files" box checked and that each file you want to commit is selected.
Committing binary files to SVN
[ "", "java", "svn", "netbeans", "resources", "" ]
The following program prints ``` A:C(A,B) B:C(A,B) ``` (as it should) ``` public interface I { string A(); } public class C : I { public string A() { return "A"; } public string B() { return "B"; } } public class A { public virtual void Print(C c) { Console.WriteLine("A:C(" + c.A() + "," + c.B() + ")"); } } public class B : A { public new void Print(C c) { Console.WriteLine("B:C(" + c.A() + "," + c.B() + ")"); } public void Print(I i) { Console.WriteLine("B:I(" + i.A() + ")"); } } class Program { public static void Main(string[] args) { A a = new A(); B b = new B(); C c = new C(); a.Print(c); b.Print(c); } } ``` however, if I change keyword 'new' to 'override' in class B like so: ``` public override void Print(C c) ``` all of a sudden program starts to print: ``` A:C(A,B) B:I(A) ``` Why?
This is to do with how overloaded methods are resolved. Effectively (simplified somewhat), the compiler first looks at the declared type of the expression (B) in this case and looks for candidate methods *which are first declared in that type*. If there are any methods which are appropriate (i.e. where all the arguments can be converted to the method's parameter types) then it *doesn't* look at any parent types. This means that overridden methods, where the initial declaration is in a parent type, don't get a look-in if there are any "freshly declared" appropriate methods in the derived type. Here's a slightly simpler example: ``` using System; class Base { public virtual void Foo(int x) { Console.WriteLine("Base.Foo(int)"); } } class Derived : Base { public override void Foo(int x) { Console.WriteLine("Derived.Foo(int)"); } public void Foo(double d) { Console.WriteLine("Derived.Foo(double)"); } } class Test { static void Main() { Derived d = new Derived(); d.Foo(10); } } ``` This prints `Derived.Foo(double)` - even though the compiler knows there is a matching method with a parameter of type `int`, and the argument is type `int`, and the conversion from `int` to `int` is "better" than the conversion from `int` to `double`, the fact that only the `Foo(double)` method is originally *declared* in `Derived` means the compiler ignores `Foo(int)`. This is highly surprising IMO. I can see why it would be the case if `Derived` didn't override `Foo` - otherwise introducing a new, more specific, method in the base class could change the behaviour unexpectedly - but clearly `Derived` here *knows* about `Base.Foo(int)` as it's overriding it. This is one of the (relatively few) points where I believe the C# designers made the wrong decision.
Ok, so ``` public new void Print(C c) { Console.WriteLine("B:C(" + c.A() + "," + c.B() + ")"); } public void Print(I i) { Console.WriteLine("B:I(" + i.A() + ")"); } ``` This declares a new method for Print. Now, because B inherits from A, you are simply calling the new method twice. When you override the method, this then changes the method signature when you call for A, but when you call the B signature, then it has its own method signature. I am not sure if I am explaining this clearly, but good question. Using new: A and B get the same implementation of the Print method. Using override: A has a different method signature to B, as you have not changed the method signature in B, only in A. Using the new, it basically ignores this: ``` public void Print(I i) { Console.WriteLine("B:I(" + i.A() + ")"); } ```
How method hiding works in C#? (Part Two)
[ "", "c#", "inheritance", "compiler-construction", "method-hiding", "" ]
Is it possible to set a Velocity reference to 'null' or 'undefined'? The [Velocity template language reference](http://velocity.apache.org/engine/devel/vtl-reference-guide.html) says > `#set` - Establishes the value of a reference > Format: > > `# [ { ] set [ } ] ( $ref = [ ", ' ]arg[ ", ' ] )` > > Usage: > > `$ref` - The LHS of the assignment must be a variable reference or a property reference. > > `arg` - The RHS of the assignment, arg is parsed if enclosed in double quotes, and not > parsed if enclosed in single quotes. *If the RHS evaluates to null, it is not assigned > to the LHS.* (emphasis mine) I cannot find an equivalent `#unset` macro.
Read on... > Depending on how Velocity is > configured, it is usually not possible > to remove an existing reference from > the context via this mechanism. (Note > that this can be permitted by changing > one of the Velocity configuration > properties) The VE default configuration has the property ``` directive.set.null.allowed = false ``` > if true, having a right hand side of a > #set() statement with an invalid reference or null value will set the > left hand side to null. If false, the > left hand side will stay the same Change it to `true` using `setProperty()` on `org.apache.velocity.app.Velocity` and you're ready to go.
You can set the reference to false. As a non null reference is considered true, you can then test if the reference is set. This is useful in loops. ``` #foreach ($obj in $list) #set ($x = false) #set ($x = $obj.maybeNull()) #if ($x) ... $x #end #end ```
Unsetting a variable in Velocity
[ "", "java", "templates", "velocity", "" ]
My understanding is that Dictionary does not have boxing issues and is [faster in performance](http://www.phase9studios.com/2008/01/08/DictionaryVSHashTable.aspx). Are there cases where the usage of Hashtable would be more advisable compared to Dictionary? Thanks
For .Net 2.0, you pretty much always want Dictionary. However, be warned that it's not just a "drop in replacement" for an existing Hashtable. There are some differences in the way they work (mostly how they handle nulls) that mean you do need to check your code first.
`Hashtable` is pretty much deprecated. It might be useful for interfacing with legacy code. `Dictionary` is a generic class introduced in .NET 2.0, along with other classes in `System.Collections.Generic` namespace. They supersede classes in `System.Collections` namespace.
Hashtable vs Dictionary
[ "", "c#", ".net", "" ]
My goal is to create a system monitoring application using Java. I would like to know when a user is doing activity on a Windows PC. The result would be something like this: 8:00 - 8:15 activity 9:12 - 10:29 activity 12:24 - 15:34 activity I'm not interested in any other information (which key was pressed, application used, etc.). Only user activity. Is this even possible in Java? I plan to run my java application as a service. But as for getting events when a user uses the computer, I have no idea where to start. --- [Edit] Further clarifications: I'm not interested in the *details* of the activity, only that a user has moved the mouse or pressed a key. I don't care which key was pressed, as long as I know that a key was pressed in an application somewhere. I also don't care for any other activity except key pressed and mouse movement (for example, I am not interested if a USB key is inserted in a USB port).
You cannot monitor user activity directly from a service. The service will be running in a different window station from the user's activities and so will have no way to hook into that activity (except through filter drivers that would need to be written in C). So you will need a client application that runs in the user's desktop and hooks into the keyboard and mouse activity. You would do that via two calls to the Windows API [SetWindowsHookEx](http://msdn.microsoft.com/en-us/library/ms644990(VS.85).aspx) (for low-level keyboard and mouse hooks) using JNI. To monitor activity the application would then need to process the keyboard and mouse hooks for messages. You could launch the application as auto-start by adding an entry to the registry's Run key, or you could have your service monitor for session log-on events and [launch the application](https://stackoverflow.com/questions/564829/launching-a-net-winforms-application-interactively-from-a-service/564874) from it. Then the user session application could either process the information itself or pass it to the service via a pipe or socket.
What exactly do you mean by 'user activity'? You need to define that term precisely first to even start thinking of a solution, especially as you say that "key pressed, application used, etc." are not activities.
How to detect user activity with a Java service running on Windows?
[ "", "java", "windows", "windows-services", "android-activity", "" ]
I am using `rss2email` for converting a number of RSS feeds into mail for easier consumption. That is, I *was* using it, until it broke in a horrible way today: On every run, it only gives me this backtrace: ``` Traceback (most recent call last): File "/usr/share/rss2email/rss2email.py", line 740, in <module> elif action == "list": list() File "/usr/share/rss2email/rss2email.py", line 681, in list feeds, feedfileObject = load(lock=0) File "/usr/share/rss2email/rss2email.py", line 422, in load feeds = pickle.load(feedfileObject) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) ``` The only helpful fact that I have been able to construct from this backtrace is that the file `~/.rss2email/feeds.dat` in which `rss2email` keeps all its configuration and runtime state is somehow broken. Apparently, `rss2email` reads its state and dumps it back using `cPickle` on every run. I have even found the line containing that `'sxOYAAuyzSx0WqN3BVPjE+6pgPU'` string mentioned above in the giant (>12MB) `feeds.dat` file. To my untrained eye, the dump does not appear to be truncated or otherwise damaged. What approaches could I try in order to reconstruct the file? The Python version is 2.5.4 on a Debian/unstable system. **EDIT** Peter Gibson and J.F. Sebastian have suggested directly loading from the pickle file and I had tried that before. 
Apparently, a `Feed` class that is defined in `rss2email.py` is needed, so here's my script: ``` #!/usr/bin/python import sys # import pickle import cPickle as pickle sys.path.insert(0,"/usr/share/rss2email") from rss2email import Feed feedfile = open("feeds.dat", 'rb') feeds = pickle.load(feedfile) ``` The "plain" pickle variant produces the following traceback: ``` Traceback (most recent call last): File "./r2e-rescue.py", line 8, in <module> feeds = pickle.load(feedfile) File "/usr/lib/python2.5/pickle.py", line 1370, in load return Unpickler(file).load() File "/usr/lib/python2.5/pickle.py", line 858, in load dispatch[key](self) File "/usr/lib/python2.5/pickle.py", line 1133, in load_reduce value = func(*args) TypeError: 'str' object is not callable ``` The `cPickle` variant produces essentially the same thing as calling `r2e` itself: ``` Traceback (most recent call last): File "./r2e-rescue.py", line 10, in <module> feeds = pickle.load(feedfile) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) ``` **EDIT 2** Following J.F. Sebastian's suggestion around putting "printf debugging" into `Feed.__setstate__` into my test script, these are the last few lines before Python bails out. ``` u'http:/com/news.ars/post/20080924-everyone-declares-victory-in-smutfree-wireless-broadband-test.html': u'http:/com/news.ars/post/20080924-everyone-declares-victory-in-smutfree-wireless-broadband-test.html'}, 'to': None, 'url': 'http://arstechnica.com/'} Traceback (most recent call last): File "./r2e-rescue.py", line 23, in ? feeds = pickle.load(feedfile) TypeError: ("'str' object is not callable", 'sxOYAAuyzSx0WqN3BVPjE+6pgPU', ((2009, 3, 19, 1, 19, 31, 3, 78, 0), {})) ``` The same thing happens on a Debian/etch box using python 2.4.4-2.
# How I solved my problem ## A Perl port of `pickle.py` Following J.F. Sebastian's comment about how simple the `pickle` format is, I went out to port parts of `pickle.py` to Perl. A couple of quick regular expressions would have been a faster way to access my data, but I felt that the hack value and an opportunity to learn more about Python would be be worth it. Plus, I still feel much more comfortable using (and debugging code in) Perl than Python. Most of the porting effort (simple types, tuples, lists, dictionaries) went very straightforward. Perl's and Python's different notions of classes and objects has been the only issue so far where a bit more than simple translation of idioms was needed. The result is a module called `Pickle::Parse` which after a bit of polishing will be published on CPAN. A module called `Python::Serialise::Pickle` existed on CPAN, but I found its parsing capabilities lacking: It spews debugging output all over the place and doesn't seem to support classes/objects. ## Parsing, transforming data, detecting actual errors in the stream Based upon `Pickle::Parse`, I tried to parse the `feeds.dat` file. After a few iteration of fixing trivial bugs in my parsing code, I got an error message that was strikingly similar to `pickle.py`'s original *object not callable* error message: ``` Can't use string ("sxOYAAuyzSx0WqN3BVPjE+6pgPU") as a subroutine ref while "strict refs" in use at lib/Pickle/Parse.pm line 489, <STDIN> line 187102. ``` Ha! Now we're at a point where it's quite likely that the actual data stream is broken. Plus, we get an idea *where* it is broken. It turned out that the first line of the following sequence was wrong: ``` g7724 ((I2009 I3 I19 I1 I19 I31 I3 I78 I0 t(dtRp62457 ``` Position 7724 in the "memo" pointed to that string `"sxOYAAuyzSx0WqN3BVPjE+6pgPU"`. From similar records earlier in the stream, it was clear that a `time.struct_time` object was needed instead. All later records shared this wrong pointer. 
With a simple search/replace operation, it was trivial to fix this. I find it ironic that I found the source of the error by accident through Perl's feature that tells the user its position in the input data stream when it dies. ## Conclusion 1. I will move away from `rss2email` as soon as I find time to automatically transform its pickled configuration/state mess to another tool's format. 2. `pickle.py` needs more meaningful error messages that tell the user about the position of the data stream (not the position in its own code) where things go wrong. 3. Porting parts of `pickle.py` to Perl was fun and, in the end, rewarding.
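As an aside, Python's standard library can also inspect a pickle stream directly: `pickletools.dis()` prints each opcode together with its byte offset, which is another way to locate the point where loading blows up (the feed data below is made up for illustration):

```python
import io
import pickle
import pickletools

# A stand-in for the real feeds.dat contents.
data = pickle.dumps({"feed": "http://example.invalid/rss"})

# dis() walks the opcode stream and writes each opcode with its
# byte offset, which helps pinpoint a corrupted record.
report = io.StringIO()
pickletools.dis(data, out=report)
listing = report.getvalue()
```

The offsets in the listing correspond to positions in the pickle file, so a hex editor can then be pointed straight at the broken spot.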
Have you tried manually loading the feeds.dat file using both cPickle and pickle? If the output differs it might hint at the error. Something like (from your home directory): ``` import cPickle, pickle f = open('.rss2email/feeds.dat', 'r') obj1 = cPickle.load(f) obj2 = pickle.load(f) ``` (you might need to open in binary mode 'rb' if rss2email doesn't pickle in ascii). Pete Edit: The fact that cPickle and pickle give the same error suggests that the feeds.dat file is the problem. Probably a change in the Feed class between versions of rss2email as suggested in the Ubuntu bug J.F. Sebastian links to.
How to recover a broken python "cPickle" dump?
[ "", "python", "rss", "pickle", "" ]
I have a JMeter test with 2 Thread Groups - the first is a single thread (which creates some inventory) and the second has multiple threads (which purchase all the inventory). I use BeanShell Assertions and XPath Extractors to parse the returned value (which is XML) and store variables (such as the ids of the items to be purchased). But, values that are created in the first Thread Group, whether extracted into standard `${jmeter}` type variables, or `${__BeanShell(vars.get("jmeter"))}` type vars, are not available in the second Thread Group. Is there anyway to create a variable in the first Thread Group and make it visible to the second?
I was not able to do this with variables (since those are local to individual threads). However, I was able to solve this problem with properties! Again, my first ThreadGroup does all of the set up, and I need some information from that work to be available to each of the threads in the second ThreadGroup. I have a BeanShell Assertion in the first ThreadGroup with the following: ``` ${__setProperty(storeid, ${storeid})}; ``` The ${storeid} was extracted with an XPath Extractor. The BeanShell Assertion does other stuff, like checking that storeid was returned from the previous call, etc. Anyway, in the second ThreadGroup, I can use the value of the "storeid" property in Samplers with the following: ``` ${__property(storeid)} ``` Works like a charm!
According to JMeter documentation: > 16.12 Sharing variables between threads and thread groups > > Variables are local to a thread; a variable set in one thread cannot be read in > another. This is by design. For variables that can be determined > before a test starts, see Parameterising Tests (above). If the value > is not known until the test starts, there are various options: > > 1. Store the variable as a property - properties are global to the > JMeter instance > 2. Write variables to a file and re-read them. > 3. Use the bsh.shared namespace - see [16.8.2 Sharing Variables](http://jmeter.apache.org/usermanual/best-practices.html#bsh_variables) > 4. Write your own Java classes Another way to pass a variable between the threads is to use jmeter-plugins, as [mentioned by Andrey Botalov below](https://stackoverflow.com/a/12871601/632951). I found it a bit confusing to use the first time, but it gives full control over the variable as it is passed from thread to thread. Follow my example with BeanShell usage and you'll see how easy it is: ![Project structure](https://i.stack.imgur.com/CCEaU.jpg) Next, referring to the sections in the picture below: (1.1) Here I created a custom variable in User Defined Variables (or you can do it with a BSF Processor - disabled in this example (1.2)) (2.1)(2.4) I successfully used the variable in the first thread - nothing special :) (2.2) Added a BeanShell PostProcessor and customized my variable (2.3) Added it to the queue (3.1) In the second thread, the variable is taken from the queue - with any name you want. But be careful to use the Timeout wisely, because this thread will wait until the previous one finishes so it can get the modified variable (experiment with some long response) (3.2)(3.3)(3.4) These repeat the steps of using and modifying the variable (3.5) The variable is sent once again in a new queue - so give it a new name (4.1)(4.2)(4.3) Grabbing the modified variable from the new queue works like a charm Warning 1. 
If you add more threads, then add a Counter to the Thread Group with a variable, and append this variable's name to the queue name - do the same in the Thread Group where you try to catch the queue, so the queue will have a unique name for each thread (write a comment if you need a clearer explanation) 2. If you have more than one HTTP Request in one Thread Group, then add the thread communication PreProcessor as a child of the last HTTP Request (or another one, if you want to achieve something custom) Play, modify, customize to get the best result :) Adding more threads can result in unwanted behavior, so you need to be watchful. ![Information about project structure](https://i.stack.imgur.com/lpki7.jpg)
How do I pass a variable from one Thread Group to another in JMeter
[ "", "java", "testing", "jmeter", "beanshell", "" ]
How do you test methods that fire asynchronous processes with JUnit? I don't know how to make my test wait for the process to end (it is not exactly a unit test, it is more like an integration test as it involves several classes and not just one).
IMHO it's bad practice to have unit tests create or wait on threads, etc. You'd like these tests to run in split seconds. That's why I'd like to propose a 2-step approach to testing async processes. 1. Test that your async process is submitted properly. You can mock the object that accepts your async requests and make sure that the submitted job has correct properties, etc. 2. Test that your async callbacks are doing the right things. Here you can mock out the originally submitted job and assume it's initialized properly and verify that your callbacks are correct.
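The two-step idea is language-agnostic; here is an illustrative sketch (in Python rather than Java/JUnit, with all names hypothetical) in which the executor is faked, so step 1 checks the submission and step 2 drives the callback synchronously, with no real threads or waiting:

```python
class FakeExecutor:
    """Records submitted jobs instead of running them."""
    def __init__(self):
        self.submitted = []

    def submit(self, job):
        self.submitted.append(job)

def start_fetch(executor, url, on_done):
    # Production code would hand this job to a real thread pool.
    executor.submit(lambda: on_done(url.upper()))

# Step 1: verify the async process was submitted properly.
executor = FakeExecutor()
results = []
start_fetch(executor, "http://example.invalid", results.append)
assert len(executor.submitted) == 1

# Step 2: run the callback directly and verify it did the right thing.
executor.submitted[0]()
assert results == ["HTTP://EXAMPLE.INVALID"]
```

Each step runs in a split second, and neither depends on timing.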
**TL;DR;** Unfortunately, there is no built-in solution **yet** (at time of writing, 2022), hence you are free to use and/or implement whatever fits your situation. ## Example An alternative is to use the [CountDownLatch](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/CountDownLatch.html) class. ``` public class DatabaseTest { /** * Data limit */ private static final int DATA_LIMIT = 5; /** * Countdown latch */ private CountDownLatch lock = new CountDownLatch(1); /** * Received data */ private List<Data> receiveddata; @Test public void testDataRetrieval() throws Exception { Database db = new MockDatabaseImpl(); db.getData(DATA_LIMIT, new DataCallback() { @Override public void onSuccess(List<Data> data) { receiveddata = data; lock.countDown(); } }); lock.await(2000, TimeUnit.MILLISECONDS); assertNotNull(receiveddata); assertEquals(DATA_LIMIT, receiveddata.size()); } } ``` **NOTE** you can't just use **synchronized** with a regular object as a lock, as fast callbacks can release the lock before the lock's wait method is called. See [this](http://joe.truemesh.com/blog/000279.html) blog post by Joe Walnes. **EDIT** Removed synchronized blocks around CountDownLatch thanks to comments from @jtahlborn and @Ring
How to use JUnit to test asynchronous processes
[ "", "java", "unit-testing", "asynchronous", "junit", "" ]
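The latch idea from the CountDownLatch answer above is not Java-specific. As a rough sketch of the same pattern in Python, `threading.Event` can stand in for `CountDownLatch(1)`; note that `get_data` and its callback here are invented stand-ins for the mocked async API, not part of any real library:

```python
import threading

received = []
done = threading.Event()  # plays the role of CountDownLatch(1)

def get_data(limit, on_success):
    # Invented stand-in for an async API: runs the callback on a worker thread.
    threading.Thread(target=lambda: on_success(list(range(limit)))).start()

def on_success(data):
    received.extend(data)
    done.set()  # the equivalent of lock.countDown()

get_data(5, on_success)

# The equivalent of lock.await(2000, TimeUnit.MILLISECONDS): wait() returns
# False if the timeout expires before set() is called, so the test fails
# fast instead of hanging forever.
completed = done.wait(timeout=2.0)
```

As in the Java version, the important details are that the latch is created before the call is submitted and that the wait carries a timeout.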
I wrote this on a simple ASP.NET page: ``` jQuery(document).ready(function() { jQuery("form").submit(function() { alert("kikoo"); return false }); }); ``` and when I submit (by clicking on a button, link, ...), I never see the alert box. Is there something in asp.net page that "bypass" the submit (I think about the dopostback javascript method) Can anyone tell me why?
I found the problem. When I click on some buttons, they call a JavaScript method that does form.submit directly. This is not caught by jQuery. I'll correct the behaviour of my buttons and it'll work fine. Thanks all for the help.
Is it possible you are confused about the usage of the [jQuery submit()](http://docs.jquery.com/Events/submit) function? It sounds like you're expecting the alert dialog to happen on page load because you think the submit function is actually submitting your form. But with the way you are using it, the submit() function is just attaching an event to your form so that when you actually **do** submit it, your function will run. To actually trigger the submit from your code, you have to use submit() without any arguments like this: ``` $("form").submit(); ``` Here's an expanded example, using Mike C's answer as the basis: ``` <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>Title</title> <script type="text/javascript" src="js/jquery-1.3.2.min.js"></script> <script type="text/javascript"> $(function() { $("form").submit(function() { alert("kikoo"); return false }); // you could also chain this onto the end of the previous function $("form").submit(); }); </script> </head> <body> <form><input type="submit" /></form> </body> </html> ```
Why is my submit event not being caught by jQuery in ASP.NET?
[ "", ".net", "asp.net", "javascript", "jquery", "" ]
I'm creating a dynamic expression, which will order items in a list by some rule (lambda exp.). This is the code: ``` Expression<Func<String, String>> exp = o => o; MethodCallExpression orderByExp = Expression.Call(typeof(Enumerable), "OrderBy", new Type[] { typeof(String), exp.Body.Type }, Expression.Parameter(typeof(IEnumerable<String>), "list"), exp); ``` Now I want to execute previously created expression on specific data to sort it, but it fails because of some strange exceptions like "Lambda Parameter not in scope" or "Argument expression is not valid". ``` var data = new String[] { "asdasdasd", "asdads", "123", "xcvxcvs", "ASDSD" }; // one of attempts: doesn't work var result = data.AsQueryable().Provider.CreateQuery<String>(orderByExp); ``` Can somebody help me with this?
This is the working code:

```
Expression<Func<String, String>> exp = o => o;
var list = Expression.Parameter(typeof(IEnumerable<String>), "list");
MethodCallExpression orderByExp = Expression.Call(typeof(Enumerable), "OrderBy",
    new Type[] { typeof(String), exp.Body.Type }, list, exp);

var lambda = Expression.Lambda<Func<IEnumerable<String>, IEnumerable<String>>>(orderByExp, list);

var data = new String[] { "asdasdasd", "asdads", "123", "xcvxcvs", "ASDSD" };
var result = lambda.Compile()(data);
```

1. To execute the MethodCallExpression you should wrap it in a lambda expression.
2. Be sure you use the same instance of the parameter expression ('list') when creating the MethodCallExpression and the LambdaExpression, and not two separate instances even with the same name; otherwise you will get the "Lambda Parameter not in scope" exception without much explanation.

Thanks, experts.
order any enumerable by a property(no reflection): ``` public static IOrderedEnumerable<T> OrderBy<T>(this IEnumerable<T> items, string property, bool ascending) { var MyObject = Expression.Parameter(typeof (T), "MyObject"); var MyEnumeratedObject = Expression.Parameter(typeof (IEnumerable<T>), "MyEnumeratedObject"); var MyProperty = Expression.Property(MyObject, property); var MyLamda = Expression.Lambda(MyProperty, MyObject); var MyMethod = Expression.Call(typeof(Enumerable), ascending ? "OrderBy" : "OrderByDescending", new[] { typeof(T), MyLamda.Body.Type }, MyEnumeratedObject, MyLamda); var MySortedLamda = Expression.Lambda<Func<IEnumerable<T>, IOrderedEnumerable<T>>>(MyMethod, MyEnumeratedObject).Compile(); return MySortedLamda(items); } ```
Dynamically created expressions
[ "", "c#", "lambda", "" ]
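As a loose cross-language analogy (not taken from the answers above), Python gets the same "sort by a property named at runtime" effect without expression trees, because `operator.attrgetter` builds the key function from a string; the `Item` class here is invented for illustration:

```python
from operator import attrgetter

class Item:
    def __init__(self, name):
        self.name = name

def order_by(items, prop, ascending=True):
    # attrgetter(prop) plays the role of the compiled lambda o => o.Prop
    return sorted(items, key=attrgetter(prop), reverse=not ascending)

data = [Item("xcvxcvs"), Item("123"), Item("asdads")]
ordered = [item.name for item in order_by(data, "name")]
descending = [item.name for item in order_by(data, "name", ascending=False)]
```

The whole "compile an expression for a property" dance collapses into one call because Python resolves attribute names at runtime anyway.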
I have a list of items in MySQL, connected through a column "parent\_id". Let's assume the columns are: id, name, parent\_id If one of my users deletes an item high in the hierarchy, I need to delete all of its children. So, two part question: 1) Is there an effective and efficient MySQL call that will return ID's for all items where its parent no longer exists? 2) In PHP, I'm thinking I can get these orphaned ID's from MySQL into an array, then run a foreach loop and delete each? Much thanks for your help.
I guess what the original author had in mind was something in the lines of hierarchical queries. Unfortunately, MySQL does not have native support for hierarchical queries (unlike, for example Oracle, where you could use `CONNECT BY` to achieve what you want). Probably the easiest way to remove all orphans would be to execute a query like: ``` SELECT t1.id FROM table t1 LEFT JOIN table t2 ON t2.id = t1.parent_id WHERE t2.id IS NULL ``` This would yield you all rows from `table` where their parent doesn't exist. Pair this with a PHP script that keeps executing the query and deleting any results, and after a couple of iterations your table should be free of orphans (this whole thing can probably be merged into one DELETE statement you could just execute in a while loop). You need to execute the select-delete multiple times because of transitivity -- consider a situation where an orphan is parent to another record; with the first iteration you'd remove the first orphan, making the next record in chain an orphan. Also, make sure you explicitly skip the head of your hierarchy, otherwise you'll end up with an empty table (as the head is, by definition, an orphan).
If you are using InnoDB, you should look into [Foreign Key Constraints](http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html) which take care of this for you by using the `ON DELETE CASCADE` option when you are setting your foreign key relationships. An example from the docs: ``` CREATE TABLE parent (id INT NOT NULL, PRIMARY KEY (id)) ENGINE=INNODB; CREATE TABLE child (id INT, parent_id INT, INDEX par_ind (parent_id), FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE) ENGINE=INNODB; ``` With that in place, if you were to do add a parent and a matching child row like this: ``` INSERT INTO parent (id) VALUES (1); INSERT INTO child (id, parent_id) VALUES (1,1); ``` And then removed the parent like this: ``` DELETE FROM parent WHERE id = 1; ``` You will find the matching child record to be gone. This is, in my opinion, the best way to do it. **EDIT**: To do this within 1 table, you would do something like this: ``` CREATE TABLE parent ( id INT NOT NULL, name varchar(250) not null, parent_id INT NULL, FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE, PRIMARY KEY (id) ) ENGINE=INNODB ``` Then if you add two rows, one referencing the other: ``` INSERT INTO parent (id,name,parent_id) VALUES ('1', 'Test 1', NULL), ('2', 'Test 2', '1') ``` Then delete the parent row of the two: ``` DELETE FROM parent WHERE id = 1; ``` You will find that it deletes the child row with `parent_id` of 1.
Remove Orphaned Items In A Hierarchy
[ "", "php", "mysql", "" ]
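The iterate-until-empty idea from the first answer above (re-run the orphan query until nothing is deleted, to handle orphan chains, and explicitly skip the head) can be sketched end-to-end with Python's built-in `sqlite3`; the table name and sample rows are invented for illustration, and the PHP version would follow the same shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER)")
# 1 is the head (parent_id NULL); 2 and 3 are its descendants;
# 4 points at a missing parent (99), and 5 points at 4.
conn.executemany("INSERT INTO items VALUES (?,?,?)", [
    (1, "head", None), (2, "a", 1), (3, "b", 2), (4, "orphan", 99), (5, "c", 4),
])

delete_orphans = """
    DELETE FROM items WHERE id IN (
        SELECT t1.id FROM items t1
        LEFT JOIN items t2 ON t2.id = t1.parent_id
        WHERE t2.id IS NULL AND t1.parent_id IS NOT NULL  -- skip the head
    )"""

# Repeat until no rows are deleted: removing 4 makes 5 an orphan on the
# next pass, which is exactly the transitivity issue the answer describes.
passes = 0
while True:
    cur = conn.execute(delete_orphans)
    passes += 1
    if cur.rowcount == 0:
        break

remaining = sorted(r[0] for r in conn.execute("SELECT id FROM items"))
```

Three passes are needed here: one for the direct orphan, one for its child, and a final empty pass to detect that the table is clean.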
A C program spits out consecutive doubles into a binary file. I wish to read them into Python. I tried using `struct.unpack('d',f.read(8))`

EDIT: I used the following in C to write a random double number

```
r = drand48();
fwrite((void*)&r, sizeof(double), 1, data);
```

The errors are now fixed, but I cannot read the first value: for an all 0.000.. number it reads it as 3.90798504668055, but the rest are fine.
I think you are actually reading the number correctly, but are getting confused by the display. When I read the number from your provided file, I get "`3.907985046680551e-14`" - this is almost but not quite zero (0.000000000000039 in expanded form). I suspect your C code is just printing it with less precision than python is. [Edit] I've just tried reading the file in C, and I get the same result (though slightly less precision: 3.90799e-14) (using printf("%g", val)), so I think if this value is incorrect, it's happened on the writing side, rather than the reading.
First, have you tried [pickle](http://www.python.org/doc/1.5.2p2/lib/module-pickle.html)? No one has shown any Python code yet... Here is some code for reading in binary in python: ``` import Numeric as N import array filename = "tmp.bin" file = open(filename, mode='rb') binvalues = array.array('f') binvalues.read(file, num_lon * num_lat) data = N.array(binvalues, typecode=N.Float) file.close() ``` Where the f here specified single-precision, 4-byte floating, numbers. Find whatever size your data is per entry and use that. For non binary data you could do something simple like this: ``` tmp=[] for line in open("data.dat"): tmp.append(float(line)) ```
What is the best method to read a double from a Binary file created in C?
[ "", "python", "c", "double", "" ]
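The accepted answer's diagnosis, that the value really is a tiny non-zero double that merely *prints* like zero, can be checked with a short round trip. This sketch uses only the standard library, with `io.BytesIO` standing in for the file the C program writes:

```python
import io
import struct

# Write a few doubles the way the C fwrite(&r, sizeof(double), 1, f) would.
values = [3.907985046680551e-14, 0.5, 1.0]
buf = io.BytesIO()
for v in values:
    buf.write(struct.pack("d", v))  # native byte order, 8 bytes each

# Read them back one at a time, as the question does.
buf.seek(0)
read_back = []
while True:
    chunk = buf.read(8)
    if len(chunk) < 8:
        break
    read_back.append(struct.unpack("d", chunk)[0])

# Formatted with few digits, the tiny value looks exactly like zero,
# which is the display confusion the answer describes.
first_as_5dp = f"{read_back[0]:.5f}"
```

The bytes survive the round trip exactly; only the low-precision formatting makes the first value look like 0.000....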
Are User Defined Data Types in SQL Server something that a intermediate SQL user should know and use? What are pros and cons of using UDTs?
Never use them is my advice. You are in a world of hurt if you ever have to change the definition. Perhaps this has improved since SQL Server 2000 and someone with more familiarity with the newer versions can tell you whether it is now safe to get in the water, but until I had confirmation of this and had checked it out myself with a test, I wouldn't put it on my production system. Check out this question for details: [How to change the base type of a UDT in Sql Server 2005?](https://stackoverflow.com/questions/349049/how-to-change-the-base-type-of-a-udt-in-sql-server-2005)
I do *not* use code-based UDTs because I don't think that the extra complexity warrants the advantages. I *do* use T-SQL UDTs because there's very little extra complexity so that the advantages are worth the effort. (Thanks go to Marc\_s for pointing out that my original post was incomplete!) **Regarding Code-based UDTs** Think of it this way: if your project has a managed code component (your app) and a database component (SQL Server) what real advantage do you gain from defining managed code in the database? In my experience? None. Deployment is more difficult because you'll have to add assemblies to your DB deployment and alter these assemblies, add files, etc. within SQL Server. You'll also have to turn on the CLR in SQL Server (not a big deal but no one's proven to me that this won't have a performance/memory penalty). In the end, you'll have exactly what you would have had if you had simply designed this into your application's code. There may be some performance enhancement but it really strikes me as a case of premature optimization - especially since I don't know if the *overall* performance suffers due to having the CLR on versus off. Note: I'm assuming that you would be using SQL Server's CLR to define your types. HLGEM talks about SQL Server 2000 but I'm not familiar with 2000 and thought it only had UDFs and not UDTs in externally-defined dlls (but don't quote me...I really am not familiar with it!). **Regarding T-SQL UDTs** T\_SQL UDTs can be defined in SQL alone (go to "Programmability | Types | User-defined Data Types" in SQL Server Management Studio). For standard UDTs I *would* in fact recommend that you master them. They are quite easy and can make your DDL more self-documenting and can enforce integrity constraints. For example, I define a "GenderType" (char(1), not nullable, holding "M" or "F") that ensures that only appropriate data is permitted in the Gender field. 
UDTs are pretty easy overall but [this article](http://www.mssqltips.com/tip.asp?tip=1628) gives a pretty good example of how to take it to the next level by defining a Rule to constrain the data permitted in your UDT. When I originally answered this question I was fixated on the idea of complex, code-defined types (*smacks palm to forehead*). So...thanks Marc.
How cool are User Defined Data Types in SQL Server?
[ "", "sql", "sql-server", "user-defined-types", "" ]
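SQLite has no UDTs, but the "GenderType" idea from the second answer above, a char(1) column restricted to 'M' or 'F', can be approximated with a `CHECK` constraint. A sketch using Python's built-in `sqlite3` (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE people (
        id INTEGER PRIMARY KEY,
        -- plays the role of the GenderType UDT plus its rule:
        -- a single character that must be 'M' or 'F'
        gender TEXT NOT NULL CHECK (gender IN ('M', 'F'))
    )""")

conn.execute("INSERT INTO people (gender) VALUES ('F')")  # allowed
try:
    conn.execute("INSERT INTO people (gender) VALUES ('X')")  # violates the rule
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

count = conn.execute("SELECT COUNT(*) FROM people").fetchone()[0]
```

The point of the UDT-plus-rule pattern survives the translation: the integrity constraint lives in the schema, so bad values never reach the table.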
I have a class that is annotated as the `@XmlRootElement` with `@XmlAccessorType(XmlAccessType.NONE)`. The problem that I am having is that the superclass's methods are being bound, when I do not want them to be bound, and cannot update the class. I am hoping there is an annotation that I can put on the root element class to prevent this from happening. Example: ``` @XmlRootElement @XmlAccessorType(XmlAccessType.NONE) public class Person extends NamedObject { @XmlElement public String getId() { ... } } ``` I would expect that only the methods annotated `@XmlElement` on `Person` would be bound and marshalled, but the superclass's methods are all being bound, as well. The resulting XML then has too much information. How do I prevent the superclass's methods from being bound without having to annotate the superclass, itself?
According to this StackOverflow post: [How can I ignore a superclass?](https://stackoverflow.com/questions/1386192/jax-b-how-can-i-ignore-a-superclass) It is not possible with JAX-B to ignore the superclass without modifying the superclass. Quoting the relevant portion of that post: > Update2: I found [a thread on java.net](http://forums.java.net/jive/thread.jspa?threadID=13932&tstart=0) > for a similar problem. That thread > resulted in an [enhancement request](https://jaxb.dev.java.net/issues/show%5Fbug.cgi?id=133), > which was marked as a duplicate of > [another issue](https://jaxb.dev.java.net/issues/show%5Fbug.cgi?id=216), which resulted in the > @XmlTransient annotation. The comments > on these bug reports lead me to > believe this is impossible in the > current spec.
Just add ``` @XmlAccessorType(XmlAccessType.NONE) ``` in front of EACH superclass declaration (and the class itself). In your case: ``` @XmlAccessorType(XmlAccessType.NONE) class NamedObject{ [ ... ] } ``` Remember that this has to be done really for each superclass, it is often forgotten when dealing with huge class dependency trees. Interfaces, of course, don't need any JAXB annotations.
How do I prevent JAXB from binding superclass methods of the @XmlRootElement when marshalling?
[ "", "java", "xml", "inheritance", "annotations", "jaxb", "" ]
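There is no direct Python equivalent of JAXB here, but the goal in the question, serializing only the fields the subclass itself declares while ignoring inherited ones, can be sketched by reading the class's *own* `__annotations__` rather than the inherited set. The class names below are invented to mirror the question:

```python
class NamedObject:
    name: str

    def __init__(self, name):
        self.name = name

class Person(NamedObject):
    id: str  # only this field should be marshalled

    def __init__(self, name, id):
        super().__init__(name)
        self.id = id

def marshal_own_fields(obj):
    # vars(cls).get("__annotations__") holds only the fields declared on the
    # class itself, not those inherited from superclasses, which mirrors
    # "bind only the annotated members of the root element class".
    own = vars(type(obj)).get("__annotations__", {})
    return {field: getattr(obj, field) for field in own}

doc = marshal_own_fields(Person("Alice", "p1"))
```

The inherited `name` never appears in the output, which is the behaviour the question wanted from JAXB without touching the superclass.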
I was wondering about StringBuilder and I've got a question that I was hoping the community would be able to explain. Let's just forget about code readability, which of these is **faster** and why? ### `StringBuilder.Append`: ``` StringBuilder sb = new StringBuilder(); sb.Append(string1); sb.Append("----"); sb.Append(string2); ``` ### `StringBuilder.AppendFormat`: ``` StringBuilder sb = new StringBuilder(); sb.AppendFormat("{0}----{1}",string1,string2); ```
It's impossible to say, not knowing the size of `string1` and `string2`. With the call to [`AppendFormat`](http://msdn.microsoft.com/en-us/library/system.text.stringbuilder.appendformat.aspx), it will preallocate the buffer just once given the length of the format string and the strings that will be inserted and then concatenate everything and insert it into the buffer. For very large strings, this will be advantageous over separate calls to [`Append`](http://msdn.microsoft.com/en-us/library/system.text.stringbuilder.append.aspx) which might cause the buffer to expand multiple times. However, the three calls to `Append` might or might not trigger growth of the buffer and that check is performed each call. If the strings are small enough and no buffer expansion is triggered, then it will be faster than the call to `AppendFormat` because it won't have to parse the format string to figure out where to do the replacements. **More data is needed for a definitive answer** It should be noted that there is little discussion of using the static [`Concat` method on the `String` class](http://msdn.microsoft.com/en-us/library/system.string.concat.aspx) ([Jon's answer](https://stackoverflow.com/questions/710504/stringbuilder-append-vs-stringbuilder-appendformat/710659#710659) using `AppendWithCapacity` reminded me of this). His test results show that to be the best case (assuming you don't have to take advantage of specific format specifier). `String.Concat` does the same thing in that it will predetermine the length of the strings to concatenate and preallocate the buffer (with slightly more overhead due to looping constructs through the parameters). It's performance is going to be comparable to Jon's `AppendWithCapacity` method. Or, just the plain addition operator, since it compiles to a call to `String.Concat` anyways, with the caveat that all of the additions are in the same expression: ``` // One call to String.Concat. 
string result = a + b + c; ``` **NOT** ``` // Two calls to String.Concat. string result = a + b; result = result + c; ``` --- **For all those putting up test code** You need to run your test cases in *separate* runs (or at the least, perform a GC between the measuring of separate test runs). The reason for this is that if you do say, 1,000,000 runs, creating a new [`StringBuilder`](http://msdn.microsoft.com/en-us/library/system.text.stringbuilder.aspx) in each iteration of the loop for one test, and then you run the next test that loops the same number of times, creating an *additional* 1,000,000 `StringBuilder` instances, the GC will more than likely step in during the second test and hinder its timing.
[casperOne is correct](https://stackoverflow.com/questions/710504/stringbuilder-append-vs-stringbuilder-appendformat/710522#710522). Once you reach a certain threshold, the `Append()` method becomes slower than `AppendFormat()`. Here are the different lengths and elapsed ticks of 100,000 iterations of each method: ## Length: 1 ``` Append() - 50900 AppendFormat() - 126826 ``` ## Length: 1000 ``` Append() - 1241938 AppendFormat() - 1337396 ``` ## Length: 10,000 ``` Append() - 12482051 AppendFormat() - 12740862 ``` ## Length: 20,000 ``` Append() - 61029875 AppendFormat() - 60483914 ``` When strings with a length near 20,000 are introduced, the `AppendFormat()` function will *slightly* outperform `Append()`. Why does this happen? See [casperOne's answer](https://stackoverflow.com/questions/710504/stringbuilder-append-vs-stringbuilder-appendformat/710522#710522). **Edit:** I reran each test individually under Release configuration and updated the results.
StringBuilder.Append Vs StringBuilder.AppendFormat
[ "", "c#", "string", "stringbuilder", "" ]
I'm executing a number of SQL queries as a batch and then getting all of the result sets back in a batch. The way that my code is currently put together, the first result set gets skipped. Now that I know this, I could simply throw in another statement outside of my loop that grabs the first result, however I'm wondering if there is a more elegant solution to this problem. Here is some pseudocode of what's going on:

```
DbDataReader reader= /*some stuff that returns a batch of results...*/;

while (reader.NextResult())
{
    while (reader.Read())
    {
        if (!reader.IsDBNull(0))
        {
            //do things with the data....
        }
    }
}
```

Now I would have expected that NextResult() advances you to the first result the first time you call it, which is what Read() seems to do. However what it actually seems to do is bring you to the second result on the first call. Am I misunderstanding how you're expected to use this method, or are you really expected to do something like the following:

```
DbDataReader reader= /*some stuff that returns a batch of results...*/;

//this deals with the rows in the very first result
while (reader.Read())
{
    if (!reader.IsDBNull(0))
    {
        //do things with the data....
    }
}

//this deals with the rest of the rows...
while (reader.NextResult())
{
    while (reader.Read())
    {
        if (!reader.IsDBNull(0))
        {
            //do exact same things with the data....
            //is this not pretty klugey?
        }
    }
}
```

This strikes me as rotten programming style, but I don't see a way around it. Does anyone know of a more elegant solution to this?
Simply put the NextResult at the end of the loop instead of the beginning: ``` do { while (reader.Read()) { if (!reader.IsDBNull(0)) { //do things with the data.... } } } while (reader.NextResult()); ```
In the great majority of cases, you will only be returning a single result set with any given call so it would not make sense for the designers to have required a "NextResultSet" every time you use a reader. Thus, your second example does indeed hold *if* you are pulling multiple result sets. The other thing that your post makes me wonder, though, is why you'd be doing the "exact same things with the data" if you are retrieving multiple result sets - wouldn't the structure of the data be different enough that you *wouldn't* be doing the exact same things? That is, your example makes me wonder if you don't have some kind of bug in your thinking about how the data management functions work.
How best to loop over a batch of Results with a C# DbDataReader
[ "", "c#", "dbdatareader", "" ]
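The accepted fix above is just a do/while restructure. Python has no do/while, but the same shape falls out of a `while True` loop; this sketch uses a fake reader class, invented here rather than any real ADO.NET API, that mimics the `Read`/`NextResult` positioning the question describes:

```python
class FakeReader:
    """Mimics DbDataReader: starts positioned on the first result set."""

    def __init__(self, result_sets):
        self._sets = result_sets
        self._set = 0
        self._row = -1

    def read(self):
        # Advance to the next row in the current result set.
        self._row += 1
        return self._row < len(self._sets[self._set])

    def next_result(self):
        # Advance to the next result set (skipping any unread rows).
        self._set += 1
        self._row = -1
        return self._set < len(self._sets)

    def value(self):
        return self._sets[self._set][self._row]

reader = FakeReader([[1, 2], [3], [4, 5]])
seen = []
while True:            # Python's stand-in for do { ... } while (NextResult())
    while reader.read():
        seen.append(reader.value())
    if not reader.next_result():
        break
```

Because the "advance to the next result set" check happens at the *bottom* of the loop, the first result set is processed instead of skipped, which is exactly why the do/while version works.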
The behavior of "this" when function `bar` is called is baffling me. See the code below. Is there any way to arrange for "this" to be a plain old js object instance when bar is called from a click handler, instead of being the html element? ``` // a class with a method function foo() { this.bar(); // when called here, "this" is the foo instance var barf = this.bar; barf(); // when called here, "this" is the global object // when called from a click, "this" is the html element $("#thing").after($("<div>click me</div>").click(barf)); } foo.prototype.bar = function() { alert(this); } ```
Welcome to the world of JavaScript! :D

You have wandered into the realm of JavaScript scope and closure.

**For the short answer:**

```
this.bar()
```

is executed under the scope of *foo* (as *this* refers to *foo*)

```
var barf = this.bar;
barf();
```

is executed under the global scope.

`this.bar` basically means: execute the function pointed to by `this.bar`, under the scope of *this* (foo).

When you copied `this.bar` to barf and ran barf, JavaScript understood it as: run the function pointed to by barf, and since there is no *this*, it just runs in the global scope.

To correct this, you can change

```
barf();
```

to something like this:

```
barf.apply(this);
```

This tells JavaScript to bind the scope of *this* to barf before executing it.

For jQuery events, you will need to use an anonymous function, or extend the bind function in prototype to support scoping.

For more info:

* [Good explanation on scoping](http://ryanmorr.com/understanding-scope-and-context-in-javascript/)
* [Extending jQuery bind to support scoping](http://www.sanghvilabs.com/?p=12)
There's a good explanation on `this` keyword in JavaScript available at [QuirksMode](http://www.quirksmode.org/js/this.html).
jQuery/JavaScript "this" pointer confusion
[ "", "javascript", "jquery", "this", "" ]
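For contrast (not taken from the answers above), Python sidesteps this particular trap: copying `obj.method` into a variable keeps the instance attached, because attribute access creates a *bound* method. Grabbing the function off the class loses it, much like the JavaScript example, and then you pass the receiver explicitly, which is the spiritual cousin of `barf.apply(this)`:

```python
class Foo:
    def __init__(self, label):
        self.label = label

    def bar(self):
        return self.label

f = Foo("hello")

bound = f.bar        # like `var barf = this.bar`, but self stays captured
unbound = Foo.bar    # a plain function: needs an explicit instance

r1 = bound()         # works: the receiver was bound at attribute access
r2 = unbound(f)      # explicit receiver, analogous to barf.apply(this)
```

The difference is the binding rule: JavaScript resolves `this` at *call* time, while Python captures the receiver at *attribute access* time.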
I have a three tables **Results:** TestID TestCode Value **Tests:** TestID TestType SysCodeID **SystemCodes** SysCodeID ParentSysCodeID Description The question I have is for when the user is entering data into the **results** table. The formatting code when the row gets the focus changes the value field to a dropdown combobox if the testCode is of type SystemList. The drop down has a list of all the system codes that have a parentsyscodeID of the test.SysCodeID. When the user chooses a value in the list it translates into a number which goes into the value field. The datatype of the Results.Value field is integer. I made it an integer instead of a string because when reporting it is easier to do calculations and sorting if it is a number. There are issues if you are putting integer/decimal value into a string field. As well, when the system was being designed they only wanted numbers in there. The users now want to put strings into the value field as well as numbers/values from a list and I'm wondering what the best way of doing that would be. --- Would it be bad practice to convert the field over to a string and then store both strings and integers in the same field? There are different issues related to this one but i'm not sure if any are a really big deal. Should I add another column into the table of string datatype and if the test is a string type then put the data the user enters into the different field. Another option would be to create a 1-1 relationship to another table and if the user types in a string into the value field it adds it into the new table with a key of a number. Anyone have any interesting ideas?
What about treating Results.Value as if it were a numeric ValueCode that becomes a foreign key referencing another table that contains a ValueCode and a string that matches it?

```
CREATE TABLE ValueCodes
(
    Value INTEGER NOT NULL PRIMARY KEY,
    Meaning VARCHAR(32) NOT NULL UNIQUE
);

CREATE TABLE Results
(
    TestID ...,
    TestCode ...,
    Value INTEGER NOT NULL FOREIGN KEY REFERENCES ValueCodes
);
```

You continue storing integers as now, but they are references to a limited set of values in the ValueCodes table. Most of the existing values appear as an integer such as 100 with a string representing the same value "100". New codes can be added as needed.
Are you saying that they want to do free-form text entry? If that's the case, they will ruin the ability to do meaningful reporting on the field, because I can guarantee that they will not consistently enter the strings. If they are going to be entering one of several preset strings (for example, grades of A, B, C, etc.) then make a lookup table for those strings which maps to numeric values for sorting, evaluating, averaging, etc. If they really want to be able to start entering in free-form text and you can't dissuade them from it, add another column along the lines of other\_entry. Have a predefined value that means "other" to put in your value column. That way, when you're doing reporting you can either roll up all of those random "other" values or you can simply ignore them. Make sure that you add the "other" into your SystemCodes table so that you can keep a foreign key between that and the Results table. If you don't already have one, then you should definitely consider adding one. Good luck!
Theory of storing a number and text in same SQL field
[ "", "sql", "types", "field", "" ]
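The lookup-table scheme from the first answer above can be exercised with Python's built-in `sqlite3`. Two caveats: SQLite needs foreign keys switched on explicitly, and its inline FK syntax (`REFERENCES table(column)`) differs slightly from the T-SQL in the answer; the sample codes and rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite ships with FKs off
conn.executescript("""
    CREATE TABLE ValueCodes (
        Value INTEGER NOT NULL PRIMARY KEY,
        Meaning VARCHAR(32) NOT NULL UNIQUE
    );
    CREATE TABLE Results (
        TestID INTEGER,
        Value INTEGER NOT NULL REFERENCES ValueCodes(Value)
    );
""")

# Numeric results keep a matching string; free text gets its own code.
conn.executemany("INSERT INTO ValueCodes VALUES (?,?)",
                 [(100, "100"), (101, "trace amount")])
conn.executemany("INSERT INTO Results VALUES (?,?)", [(1, 100), (2, 101)])

rows = conn.execute("""
    SELECT r.TestID, v.Meaning
    FROM Results r JOIN ValueCodes v ON v.Value = r.Value
    ORDER BY r.TestID
""").fetchall()

# An unknown code is rejected by the foreign key, keeping Value clean.
try:
    conn.execute("INSERT INTO Results VALUES (3, 999)")
    fk_rejected = False
except sqlite3.IntegrityError:
    fk_rejected = True
```

Reporting still sorts and aggregates on the integer column, while the join recovers whatever string the users typed.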
I am developing a Python module with several source files, each with its own test class derived from [unittest](http://docs.python.org/library/unittest.html) right in the source. Consider the directory structure: ``` dirFoo\ test.py dirBar\ __init__.py Foo.py Bar.py ``` To test either Foo.py or Bar.py, I would add this at the end of the Foo.py and Bar.py source files: ``` if __name__ == "__main__": unittest.main() ``` And run Python on either source, i.e. ``` $ python Foo.py ........... ---------------------------------------------------------------------- Ran 11 tests in 2.314s OK ``` Ideally, I would have "test.py" automagically search dirBar for any unittest derived classes and make one call to "unittest.main()". What's the best way to do this in practice? I tried using Python to call [execfile](http://docs.python.org/library/functions.html#execfile) for every \*.py file in dirBar, which runs once for the first .py file found & exits the calling test.py, plus then I have to duplicate my code by adding unittest.main() in every source file--which violates DRY principles.
I knew there was an obvious solution: ``` dirFoo\ __init__.py test.py dirBar\ __init__.py Foo.py Bar.py ``` Contents of dirFoo/test.py ``` from dirBar import * import unittest if __name__ == "__main__": unittest.main() ``` Run the tests: ``` $ python test.py ........... ---------------------------------------------------------------------- Ran 11 tests in 2.305s OK ```
As of Python 2.7, test discovery is automated in the unittest package. From the [docs](http://docs.python.org/library/unittest.html):

> Unittest supports simple test discovery. In order to be compatible
> with test discovery, all of the test files must be modules or packages
> importable from the top-level directory of the project (this means
> that their filenames must be valid identifiers).
>
> Test discovery is implemented in `TestLoader.discover()`, but can also
> be used from the command line. The basic command-line usage is:
>
> ```
> cd project_directory
> python -m unittest discover
> ```

By default it looks for files matching the pattern `test*.py`, but this can be changed, so you might use something like

```
python -m unittest discover --pattern=*.py
```

in place of your test.py script.
Python: How to run unittest.main() for all source files in a subdirectory?
[ "", "python", "unit-testing", "" ]
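The `from dirBar import *` trick in the accepted answer works because `unittest.main()` scans the caller's globals for TestCase classes. The same aggregation can be done explicitly, without star imports, by building a suite through a `TestLoader`; the two TestCase classes below are invented stand-ins for what Foo.py and Bar.py would define:

```python
import io
import unittest

# Stand-ins for the TestCase classes that Foo.py and Bar.py would contain.
class FooTests(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("foo".upper(), "FOO")

class BarTests(unittest.TestCase):
    def test_len(self):
        self.assertEqual(len("bar"), 3)

loader = unittest.TestLoader()
suite = unittest.TestSuite([
    # With real modules this would be loader.loadTestsFromModule(Foo), etc.
    loader.loadTestsFromTestCase(FooTests),
    loader.loadTestsFromTestCase(BarTests),
])

# Capture the runner's report in a StringIO instead of printing it.
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

`result` carries the pass/fail summary, so a driver script can exit with a status code or print its own report.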
I've tried to find a comprehensive guide on whether it is best to use `import module` or `from module import`. I've just started with Python and I'm trying to start off with best practices in mind. Basically, I was hoping if anyone could share their experiences, what preferences other developers have and what's the best way to avoid any *gotchas* down the road?
The difference between `import module` and `from module import foo` is mainly subjective. Pick the one you like best and be consistent in your use of it. Here are some points to help you decide. `import module` * **Pros:** + Less maintenance of your `import` statements. Don't need to add any additional imports to start using another item from the module * **Cons:** + Typing `module.foo` in your code can be tedious and redundant (tedium can be minimized by using `import module as mo` then typing `mo.foo`) `from module import foo` * **Pros:** + Less typing to use `foo` + More control over which items of a module can be accessed * **Cons:** + To use a new item from the module you have to update your `import` statement + You lose context about `foo`. For example, it's less clear what `ceil()` does compared to `math.ceil()` Either method is acceptable, but **don't** use `from module import *`. For any reasonable large set of code, if you `import *` you will likely be cementing it into the module, unable to be removed. This is because it is difficult to determine what items used in the code are coming from 'module', making it easy to get to the point where you think you don't use the `import` any more but it's extremely difficult to be sure.
There's another detail here, not mentioned, related to writing to a module. Granted this may not be very common, but I've needed it from time to time. Due to the way references and name binding works in Python, if you want to update some symbol in a module, say foo.bar, from outside that module, and have other importing code "see" that change, you have to import foo a certain way. For example: module foo: ``` bar = "apples" ``` module a: ``` import foo foo.bar = "oranges" # update bar inside foo module object ``` module b: ``` import foo print foo.bar # if executed after a's "foo.bar" assignment, will print "oranges" ``` However, if you import symbol names instead of module names, this will not work. For example, if I do this in module a: ``` from foo import bar bar = "oranges" ``` No code outside of `a` will see `bar` as "oranges" because my setting of `bar` merely affected the name "bar" inside module `a`, it did not "reach into" the `foo` module object and update its `bar`.
Use 'import module' or 'from module import'?
[ "", "python", "python-import", "" ]
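The rebinding gotcha in the second answer above can be reproduced without creating files by building a throwaway module object; a sketch:

```python
import types

# Stand-in for module foo containing: bar = "apples"
foo = types.ModuleType("foo")
foo.bar = "apples"

# Module a, style 1: `import foo`. Writes reach the shared module object.
foo.bar = "oranges"
seen_by_b = foo.bar            # what `import foo; foo.bar` sees elsewhere

# Module a, style 2: `from foo import bar`. This copies the *name*,
# so rebinding it touches only the local namespace, not the module.
bar = foo.bar
bar = "bananas"
still_in_module = foo.bar      # the module object is untouched
```

This is exactly why `foo.bar = "oranges"` is visible to other importers while `bar = "oranges"` is not.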
I'm getting the error:

```
the multi-part identifier "IC.industry" could not be bound
```

when making this SQL query from a JSP page via JDBC:

```
select C.company, C.shname, C.fullname, count(d_to_c.designer)
from companies C
left join ind_to_c IC on C.company = IC.company
left join d_to_c on C.company = d_to_c.company
where IC.industry = ?
group by C.company, C.shname, C.fullname
order by C.shname
```

and I'm trying to run it as a prepared statement, where I'm setting the parameter via (for example) `stmt.setObject(1, 7)` prior to running `stmt.executeQuery()`.

Now, what's weird is: If I execute this with the `?` and set the parameter as I just mentioned, I get the "could not be bound" error. If, however, I just change the query and hardcode the number 7 into the text of the query, it works! So it has *something* to do with binding that parameter. But I can't seem to figure out what. Anybody?

**UPDATE**: Per request, the table definition for `ind_to_c`:

```
industry - int(11)
company - int(11)
```

(it's just a table that defines the m2m relationship between industries and companies)

**UPDATE 2**: Also per request, the full JSP code. I had to pull this out of a call to an abstraction of the database connection (which we use to store prepared statements, etc.).

```
// conn has been initialized as the db connection object.
int parent_id = 7;
PreparedStatement ps = conn.prepareStatement("select C.company, C.shname, C.fullname, count(d_to_c.designer) from companies C left join ind_to_c IC on C.company = IC.company left join d_to_c on C.company = d_to_c.company where IC.industry = ? group by C.company, C.shname, C.fullname order by C.shname");
ResultSet rs = null;
rs = ps.executeQuery();
```
What's the data type for industry? Does it make a difference if you use the type-specific bind methods like `stmt.setInt(1,7)` instead? ***edit:*** also, not related to the question, but you should probably remove `C.cid` from the `SELECT`. Some variants of T-SQL will infer that you want to group by that column since it is not the subject of an aggregation function, even though you don't specify it in the `GROUP BY` clause. Back on topic, can you post the table definition for `ind_to_c`? The nature of the error would seem to indicate that it has no column called `industry`.
I upgraded to the newer v. 2.0 of the MS-SQL JDBC driver, and magically it worked.
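The accepted fix was a driver upgrade, but for reference the general mechanics of `?` placeholder binding — the JDBC `ps.setObject(1, 7)` pattern — can be sketched with Python's built-in `sqlite3` (a stand-in only; the table and values mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ind_to_c (industry INTEGER, company INTEGER)")
conn.executemany("INSERT INTO ind_to_c VALUES (?, ?)",
                 [(7, 1), (7, 2), (9, 3)])

# The ? is bound at execute time, equivalent to ps.setObject(1, 7)
rows = conn.execute(
    "SELECT company FROM ind_to_c WHERE industry = ? ORDER BY company",
    (7,)).fetchall()
```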
SQL Server/T-SQL via JSP: "The multi-part identifier XX.YY could not be bound"
[ "", "sql", "sql-server", "t-sql", "jsp", "jdbc", "" ]
Can you explain w3wp? Is attaching to w3wp while debugging ASP.NET internal layers a good approach?
Erm, what? :-) w3wp is just the worker process for an application pool in IIS. Every app pool has one or more worker processes so a failure in a single application pool doesn't kill the whole server (assuming you've used different app pools for each app :-)) What else is there to know? :-)
To attach a debugger set a break point in code that you know will be run (make sure that Debug="True" so that debug symbols are created) and in Visual Studio go Debug -> Attach to Process -> find your site's app pool's w3wp process. If you have several application pools active then w3wp will be listed more than once. To step into your DAL you will need that compiled with debug symbols as well.
W3wp against asp.net development server
[ "", "c#", "asp.net", "debugging", "w3wp", "" ]
Every time I access data in $_SESSION, does it immediately update the session file on the disk, or just once when the process goes down? Or every n bytes of data change (flush)? This question is not necessarily about the specific file session handler, but every handler. (Does every touch in session immediately invoke an I/O of any kind, besides the storing of a normal variable in memory).
As Matt wrote, it writes at the end of script execution by default. You can read about it here in [session_write_close()](http://php.net/manual/en/function.session-write-close.php) > Session data is usually stored after > your script terminated without the > need to call session_write_close(), > but as session data is locked to > prevent concurrent writes only one > script may operate on a session at any > time. When using framesets together > with sessions you will experience the > frames loading one by one due to this > locking. You can reduce the time > needed to load all the frames by > ending the session as soon as all > changes to session variables are done.
It writes it at the end of the process on my setup. I made a new `_session_write_method`: ``` public function _session_write_method($id, $sess_data) { var_dump(file_put_contents('/var/www/public_html/testing.txt', serialize($sess_data))); return(true); } ``` and then: ``` $_SESSION['foo'] = 'bar'; while(true) ``` I executed the script, waited a few seconds and then ran 'sudo kill' on the process id. It did not write the serialized data to the file. I ran it again without the infinite loop and I got: int(22) at the very bottom of the page and testing.txt was successfully written to and contained: s:14:"foo|s:3:"bar";";
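The behaviour both answers observe — mutations buffered in memory, one write when the session closes — can be modelled in a few lines (a toy sketch in Python, not PHP's actual handler):

```python
class Session:
    """Toy model of PHP's default behaviour: reads and writes touch only
    an in-memory dict; the backing store is flushed once, on close."""
    def __init__(self, store):
        self.store = store
        self.data = dict(store)        # single read at session_start()
        self.flushes = 0
    def __setitem__(self, key, value):
        self.data[key] = value         # no I/O here
    def close(self):                   # like session_write_close()
        self.store.update(self.data)
        self.flushes += 1

backing = {}                            # stands in for the session file
s = Session(backing)
s["foo"] = "bar"
s["count"] = 2
untouched_before_close = (backing == {})
s.close()
```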
Does every access to $_SESSION immediately involves an i/o with the file system?
[ "", "php", "session", "" ]
Suppose I had a set of alpha-character identifiers of a set length, e.g. always five letters, and they are assigned in such a way that they are always incremented sequentially (GGGGZ --> GGGHA, etc.). Now, if I get to ZZZZZ, since the length is fixed, I must "roll over" to AAAAA. I might have a contiguous block from ZZZAA through AAAAM. I want to write a sproc that will give me the "next" identifier, in this case AAAAN. If I didn't have this "rolling over" issue, of course, I'd just ORDER BY DESC and grab the top result. But I'm at a bit of a loss now -- and it doesn't help at all that SQL is not my strongest language. If I *have* to I can move this to my C# calling code, but a sproc would be a better fit. ETA: I would like to avoid changing the schema (new column *or* new table); I'd rather just be able to "figure it out". I might even prefer to do it brute force (e.g. start at the lowest value and increment until I find a "hole"), even though that could get expensive. If you have an answer that does not modify the schema, it'd be a better solution for my needs.
I think the lowest-impact solution for my needs is to add an identity column. The one thing I can guarantee is that the ordering will be such that entries that should "come first" will be added first -- I'll never add one with identifier BBBB, then go back and add BBBA later. If I didn't have that constraint, obviously it wouldn't work, but as it stands, I can just order by the identity column and get the sort I want. I'll keep thinking about the other suggestions -- maybe if they "click" in my head, they'll look like a better option.
Here's code that I think will give you your Next value. I created 3 functions. The table is just my simulation of the table.column with your alpha ids (I used MyTable.AlphaID). I assume that it's as you implied and there is one contiguous block of five-character uppercase alphabetic strings (AlphaID): ``` IF OBJECT_ID('dbo.MyTable','U') IS NOT NULL DROP TABLE dbo.MyTable GO CREATE TABLE dbo.MyTable (AlphaID char(5) PRIMARY KEY) GO -- Play with different population scenarios for testing INSERT dbo.MyTable VALUES ('ZZZZY') INSERT dbo.MyTable VALUES ('ZZZZZ') INSERT dbo.MyTable VALUES ('AAAAA') INSERT dbo.MyTable VALUES ('AAAAB') GO IF OBJECT_ID('dbo.ConvertAlphaIDToInt','FN') IS NOT NULL DROP FUNCTION dbo.ConvertAlphaIDToInt GO CREATE FUNCTION dbo.ConvertAlphaIDToInt (@AlphaID char(5)) RETURNS int AS BEGIN RETURN 1+ ASCII(SUBSTRING(@AlphaID,5,1))-65 + ((ASCII(SUBSTRING(@AlphaID,4,1))-65) * 26) + ((ASCII(SUBSTRING(@AlphaID,3,1))-65) * POWER(26,2)) + ((ASCII(SUBSTRING(@AlphaID,2,1))-65) * POWER(26,3)) + ((ASCII(SUBSTRING(@AlphaID,1,1))-65) * POWER(26,4)) END GO IF OBJECT_ID('dbo.ConvertIntToAlphaID','FN') IS NOT NULL DROP FUNCTION dbo.ConvertIntToAlphaID GO CREATE FUNCTION dbo.ConvertIntToAlphaID (@ID int) RETURNS char(5) AS BEGIN RETURN CHAR((@ID-1) / POWER(26,4) + 65) + CHAR ((@ID-1) % POWER(26,4) / POWER(26,3) + 65) + CHAR ((@ID-1) % POWER(26,3) / POWER(26,2) + 65) + CHAR ((@ID-1) % POWER(26,2) / 26 + 65) + CHAR ((@ID-1) % 26 + 65) END GO IF OBJECT_ID('dbo.GetNextAlphaID','FN') IS NOT NULL DROP FUNCTION dbo.GetNextAlphaID GO CREATE FUNCTION dbo.GetNextAlphaID () RETURNS char(5) AS BEGIN DECLARE @MaxID char(5), @ReturnVal char(5) SELECT @MaxID = MAX(AlphaID) FROM dbo.MyTable IF @MaxID < 'ZZZZZ' RETURN dbo.ConvertIntToAlphaID(dbo.ConvertAlphaIDToInt(@MaxID)+1) IF @MaxID IS NULL RETURN 'AAAAA' SELECT @MaxID = MAX(AlphaID) FROM dbo.MyTable WHERE AlphaID < dbo.ConvertIntToAlphaID((SELECT COUNT(*) FROM dbo.MyTable)) IF @MaxID IS NULL RETURN 'AAAAA' RETURN dbo.ConvertIntToAlphaID(dbo.ConvertAlphaIDToInt(@MaxID)+1) END GO SELECT * FROM dbo.MyTable ORDER BY dbo.ConvertAlphaIDToInt(AlphaID) GO SELECT dbo.GetNextAlphaID () AS 'NextAlphaID' ``` By the way, if you don't want to assume contiguity, you can do as you suggested and (if there's a 'ZZZZZ' row) use the first gap in the sequence. Replace the last function with this: ``` IF OBJECT_ID('dbo.GetNextAlphaID_2','FN') IS NOT NULL DROP FUNCTION dbo.GetNextAlphaID_2 GO CREATE FUNCTION dbo.GetNextAlphaID_2 () RETURNS char(5) AS BEGIN DECLARE @MaxID char(5), @ReturnVal char(5) SELECT @MaxID = MAX(AlphaID) FROM dbo.MyTable IF @MaxID < 'ZZZZZ' RETURN dbo.ConvertIntToAlphaID(dbo.ConvertAlphaIDToInt(@MaxID)+1) IF @MaxID IS NULL RETURN 'AAAAA' SELECT TOP 1 @MaxID=M1.AlphaID FROM dbo.Mytable M1 WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTable M2 WHERE AlphaID = dbo.ConvertIntToAlphaID(dbo.ConvertAlphaIDToInt(M1.AlphaID) + 1 ) ) ORDER BY M1.AlphaID IF @MaxID IS NULL RETURN 'AAAAA' RETURN dbo.ConvertIntToAlphaID(dbo.ConvertAlphaIDToInt(@MaxID)+1) END GO ```
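Both answers boil down to treating the identifier as a base-26 number; the conversion pair and the rolling-over increment look like this in Python (a language-neutral sketch of the same arithmetic the T-SQL functions perform):

```python
def alpha_to_int(s):
    """'AAAAA' -> 0, 'AAAAB' -> 1, ..., 'ZZZZZ' -> 26**5 - 1."""
    n = 0
    for ch in s:
        n = n * 26 + (ord(ch) - ord("A"))
    return n

def int_to_alpha(n, width=5):
    """Inverse of alpha_to_int for a fixed-width identifier."""
    out = []
    for _ in range(width):
        n, r = divmod(n, 26)
        out.append(chr(ord("A") + r))
    return "".join(reversed(out))

def next_id(s):
    """Increment with roll-over: ZZZZZ wraps back to AAAAA."""
    return int_to_alpha((alpha_to_int(s) + 1) % 26 ** len(s), width=len(s))
```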
Find the last value in a "rolled-over" sequence with a stored procedure?
[ "", "sql", "string", "series", "" ]
I tried to create a dropdown menu in jQuery, but it's proving quite difficult. My code is here: ``` $(document).ready(function(){ $('ul li').mouseover(function() { $(this).children("ul").show(); }); $('ul li ul').mouseover(function() { $('ul li ul').show(); }).mouseout(function(){ $('ul li ul').hide(); }); }); ``` Basically I want to hover over a list item and the sub ul to drop down, then I can hover over the list items and if the mouse goes off of the sub nav, the menu hides again. thanks, Keith UPDATE: I removed the border from the CSS and it works fine, so it appears the mouseout is triggered when I hover over the 1px border, quite weird.
Are you aware of [superfish](http://users.tpg.com.au/j_birch/plugins/superfish/)? It is a jQuery menu plug-in with excellent cross-browser support. It definitely doesn't have the problem you are experiencing. I haven't checked the source code, but the key difference is that it adds a delay on mouseout. This means that an action isn't triggered, unless the position of the cursor is steady for some time (default delay is 800ms). This will solve your problem and is also a good thing to implement, as it will make your menu more user-friendly.
you should use [jQuery's hover() function](http://docs.jquery.com/Events/hover) as it avoids all sorta browser specific issues .. Without a lick of testing I'd imagine the code would look something more like: ``` $('.clearfix li').hover(function() { $('ul', this).show(); }, function() { $('ul', this).hide(); }); ```
How to fix this JQuery dropdown menu?
[ "", "javascript", "jquery", "menu", "drop-down-menu", "" ]
Can IIS supporting ASP.NET and WAMP supporting PHP coexist on the same server? We already have a WAMP stack setup on a Windows Server 2003 box to support some internal PHP applications, and I want to also setup CI Factory on that box which will try to configure IIS to support its ASP.NET-based dashboard. I want to make sure that there isn't a big chance of fubarring the WAMP stack that is already there. Will it be smart enough to handle *.php through Apache and *.aspx through IIS? Edit: Is there a way to get this to work on the same port?
For example IIS on non-standard port (e.g. 8080) and Apache redirecting traffic to IIS via [mod\_proxy](http://httpd.apache.org/docs/2.2/mod/mod_proxy.html). Separate vhosts: ``` <VirtualHost lamp.example.com> # standard vhost configuration </VirtualHost> <VirtualHost aspx.example.com> ProxyPass / aspx.example.com:8080 </VirtualHost> ``` One vhost: ``` <VirtualHost www.example.com> ProxyPassMatch ^/(.*\.aspx) www.example.com:8080/$1 # ... standard vhost configuration for LAMP </VirtualHost> ```
Yes. Both will run under IIS in a Windows server.
Can IIS supporting ASP.NET and WAMP supporting PHP coexist on the same server?
[ "", "php", "asp.net", "apache", "iis", "wamp", "" ]
I want to change my website's header only it if's not the homepage. Is there a *tal:condition* expression for that? I've been reading [this](http://plone.org/documentation/tutorial/zpt) and can't find what I'm looking for... thanks!
The best way is to use two really handy plone views that are intended just for this purpose. The interface that defines them is at <https://svn.plone.org/svn/plone/plone.app.layout/trunk/plone/app/layout/globals/interfaces.py>, in case you want to check it out. ``` <tal:block tal:define="our_url context/@@plone_context_state/canonical_object_url; home_url context/@@plone_portal_state/portal_url;" tal:condition="python:our_url == home_url"> HERE GOES YOUR STUFF </tal:block> ``` The great thing about @@plone_context_state and @@plone_portal_state is that they handle all sorts of weird edge cases. context/@@plone_context_state/canonical_object_url also returns the right, most basic, object's url even when you're viewing the default page in the portal root with a query string appended :-)
I use something similar to ax: ``` <tal:block define="global currentUrl request/getURL" condition="python: u'home' not in str(currentUrl)"> <!-- whatever --> </tal:block> ```
How do I check if it's the homepage in a Plone website using ZPT?
[ "", "python", "plone", "zope", "template-tal", "zpt", "" ]
First I will give a specific case, and then I would like to see if it can be applied to a general problem. Say I have a map, and I want to get all the keys meeting a certain criterion. For example all keys that contain "COL". My naive implementation will be ``` template<typename T> void Filter (map<string, T> & m, std::set<string> & result, const std::string& condition) { for (typename map<string, T>::const_iterator iter = m.begin(); iter != m.end(); ++iter) { std::string key = iter->first; size_t found = key.find(condition); if (found != string::npos) result.insert(key); } } ``` what is the good way to implement this? Also, what is a good way to implement the general problem when I want to filter a map using algos?
I think your solution is rather good: it is clear, and except if you can "guess" hash values based on the condition, I don't think you could be much more performant. However, you could change your function to make it more generic: ``` template<typename TKey, typename TValue, typename Predicate> void filter (const map<TKey, TValue> & m, set<TKey> & result, Predicate & p) { typename map<TKey,TValue>::const_iterator it = m.begin(); typename map<TKey,TValue>::const_iterator end = m.end(); for( ; it != end ; ++it) { TKey key = it->first; if (p(key)) result.insert(key); } } ``` Your example can then be writen using a functor as predicate: ``` struct Contains { Contains(const string & substr) : substr_(substr) {} bool operator()(const string & s) { return s.find(substr_) != string::npos; } string substr_; }; ``` The call to filter will then look like this: ``` map<string, Obj> m; // Insert in m set<string> res; filter(m, res, Contains("stringToFind")); ```
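For comparison, the predicate-driven version of the same filter is compact in Python, which is roughly what the functor buys you in C++ (a sketch using a set comprehension):

```python
def filter_keys(m, predicate):
    """Return the set of keys of m satisfying predicate."""
    return {key for key in m if predicate(key)}

# Sample data mirroring the question's "keys that contain COL" case
m = {"COL_A": 1, "COL_B": 2, "ROW_C": 3}
matching = filter_keys(m, lambda k: "COL" in k)
```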
That looks like a candidate for `remove_copy_if`. I've written something using `boost` that probably looks more than disgusting, but provides a generalization of your algorithm. ``` #include <boost/iterator/transform_iterator.hpp> #include <boost/bind.hpp> #include <boost/function.hpp> #include <algorithm> #include <map> #include <set> #include <string> struct filter_cond : std::unary_function<std::string, bool> { filter_cond(std::string const &needle):needle(needle) { } bool operator()(std::string const& arg) { return (arg.find(needle) == std::string::npos); } std::string needle; }; int main() { std::set<std::string> result; typedef std::map<std::string, int> map_type; map_type map; std::remove_copy_if( boost::make_transform_iterator(map.begin(), boost::bind(&map_type::value_type::first, _1)), boost::make_transform_iterator(map.end(), boost::bind(&map_type::value_type::first, _1)), std::inserter(result, result.end()), filter_cond("foo") ); } ``` I would probably prefer the manual loop. C++1x will make look that really much better with lambda expressions.
returning a set of keys in the map matching the criteria
[ "", "c++", "stl", "" ]
I have a table like so: ``` object_id | vote 1 | 2 1 | -1 1 | 5 2 | 3 2 | 1 3 | 4 3 | -2 ``` I want this result (for this particular example, object_ids 1 and 2 are part of a group, defined elsewhere, and I'm looking for the normalized_score so that the sum always = 1. object_id 3 is part of an unused group.): ``` object_id | normalized_score 1 | 6/10 2 | 4/10 ``` [added 3:05PM] 10 here is the sum of the votes for object_id in (1,2). There's a whole other set of logic to come up with the (1,2), I was just trying to give the cleanest question so people don't have to worry about that part. [added 3:10PM] As pointed out in the comments, if the score for one of the objects is below 0, a problem arises. Here is the rule, "IF the score for any outcome_id is -x, AND that is the minimum score for the set, ADD x to all scores in order to zero-out the minimum score". I can do this on my own time though outside of SQL - so it's a bonus only if somebody has the cojones to try to tackle it in SQL. If I do a self join, I can get the sum. I can't figure out how to get the normalized sum. Ideally this will work in both MySQL 5.x and Sqlite3. Otherwise, I can do this with two separate queries and just do the work in post-processing.
The solution without compensating for negative votes (I include this one because it's much easier to read/understand): ``` SELECT object_id , SUM(vote) + '/' + total AS normalized_score FROM tabel , ( SELECT sum(vote) AS total FROM tabel ) GROUP BY object_id, total ``` Full solution: ``` SELECT object_id , SUM(vote + minvote) + '/' + (total + minvote * count) AS normalized_score FROM tabel , ( SELECT sum(vote) AS total , CASE WHEN MIN(vote) < 0 THEN -MIN(vote) ELSE 0 END AS minvote , COUNT(*) AS count FROM tabel ) GROUP BY object_id, total, minvote, count ``` (I don't have access to MySQL, so I wrote a query in Oracle and replaced || with +. Hope it works in MySQL or at least helps :))
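Since the asker allowed doing the work in post-processing, the whole rule — including the zero-out shift for a negative minimum — can be sketched outside SQL (Python, using the question's sample group):

```python
def normalize(rows):
    """rows: iterable of (object_id, vote) for one group.
    Applies the question's rule: if the minimum per-object total is -x,
    add x to every total before normalizing, so scores sum to 1."""
    totals = {}
    for oid, vote in rows:
        totals[oid] = totals.get(oid, 0) + vote
    shift = max(0, -min(totals.values()))
    shifted = {oid: t + shift for oid, t in totals.items()}
    grand = sum(shifted.values())
    return {oid: t / grand for oid, t in shifted.items()}

# The group {1, 2} from the question: totals 6 and 4, grand total 10
scores = normalize([(1, 2), (1, -1), (1, 5), (2, 3), (2, 1)])
```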
``` -- SQL solution SELECT object_id AS ObjectID, (SUM(CASE SIGN(vote) WHEN 1 THEN vote ELSE 0 END) - ((CASE SIGN(MIN(vote)) WHEN -1 THEN MIN(vote) ELSE 0 END) * (COUNT(1)))) + '/10' AS NormalizedVote FROM table GROUP BY object_id ```
Rollup Column for Normalized Sums (SQL) - Part 1
[ "", "sql", "mysql", "" ]
I'm trying to install Tomcat + Solr on my Ubuntu machine. I was using the Ubuntu repos: ``` http://packages.ubuntu.com/intrepid/web/solr-tomcat5.5 http://packages.ubuntu.com/intrepid/tomcat5.5 ``` When I launch Tomcat, Solr does not work: ``` sudo service tomcat5.5 start ``` The web panel does not find Solr, and gives me this error: ``` HTTP Status 404 - /solr/admin ``` Does anyone have any idea what this may be? Thank you in advance
I found this link useful since I couldn't get anything done from the Ubuntu repositories: <http://wiki.apache.org/solr/SolrTomcat#Simple_Example_Install>
The Ubuntu-packaged versions of Tomcat and Solr aren't known for working very well. I'd highly recommend going outside the package manager and getting them from the source. This guide should give you everything you need to know: <http://justin-hayes.com/2009-04-08/installing-apache-tomcat-6-and-solr-nightly-on-ubuntu-804>
Installing Tomcat + Solr problem
[ "", "java", "web-services", "tomcat", "ubuntu", "solr", "" ]
A quick Google search reveals that there are a good number of Bayesian classifiers implemented as Python modules. If I want wrapped, high-level functionality similar to [dbacl](http://dbacl.sourceforge.net/), which of those modules is right for me? Training ``` % dbacl -l one sample1.txt % dbacl -l two sample2.txt ``` Classification ``` % dbacl -c one -c two sample3.txt -v one ```
I think you'll find the [nltk](http://www.nltk.org) helpful. Specifically, the [classify module](http://nltk.googlecode.com/svn/trunk/doc/api/nltk.classify-module.html).
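For reference, the train/classify shape of dbacl's interface maps onto a tiny multinomial naive Bayes with add-one smoothing (a sketch of the idea only — not dbacl's or nltk's actual model):

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal multinomial naive Bayes with add-one smoothing."""
    def __init__(self):
        self.counts = {}                       # category -> word Counter
    def learn(self, category, text):           # like: dbacl -l one sample1.txt
        self.counts.setdefault(category, Counter()).update(text.lower().split())
    def classify(self, text):                  # like: dbacl -c one -c two -v
        vocab = len(set().union(*self.counts.values()))
        words = text.lower().split()
        def score(cat):
            c = self.counts[cat]
            total = sum(c.values())
            return sum(math.log((c[w] + 1) / (total + vocab)) for w in words)
        return max(self.counts, key=score)

nb = NaiveBayes()
nb.learn("one", "apples and oranges and pears")
nb.learn("two", "pythons and cobras and vipers")
```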
If you're trying to detect language [this](http://cogscicoder.blogspot.com/2009/03/automatic-language-identification-using.html) works fine even with pretty short texts. The api is pretty close to yours but I don't know if it is called a Bayesian classifier.
Which Python Bayesian text classification modules are similar to dbacl?
[ "", "python", "text", "classification", "bayesian", "" ]
I'm trying to figure out what's gone wrong with my json serializing, comparing the current version of my app with an old one, and am finding some surprising differences in the way JSON.stringify() works (Using the JSON library from json.org). In the old version of my app: ``` JSON.stringify({"a":[1,2]}) ``` gives me this; ``` "{\"a\":[1,2]}" ``` in the new version, ``` JSON.stringify({"a":[1,2]}) ``` gives me this; ``` "{\"a\":\"[1, 2]\"}" ``` any idea what could have changed to make the same library put quotes around the array brackets in the new version?
Since JSON.stringify has been shipping with some browsers lately, I would suggest using it instead of Prototype’s toJSON. You would then check for window.JSON && window.JSON.stringify and only include the json.org library otherwise (via `document.createElement('script')`…). To resolve the incompatibilities, use: ``` if(window.Prototype) { delete Object.prototype.toJSON; delete Array.prototype.toJSON; delete Hash.prototype.toJSON; delete String.prototype.toJSON; } ```
The function JSON.stringify() defined in [ECMAScript 5 and above (Page 201 - the JSON Object, pseudo-code Page 205)](http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262%20edition%205.1,%20June%202011.pdf), uses the function toJSON() when available on objects. Because Prototype.js (or another library that you are using) defines an Array.prototype.toJSON() function, arrays are first converted to strings using Array.prototype.toJSON() then string quoted by JSON.stringify(), hence the incorrect extra quotes around the arrays. The solution is therefore straight-forward and trivial (this is a simplified version of Raphael Schweikert's answer): ``` delete Array.prototype.toJSON ``` This produces of course side effects on libraries that rely on a toJSON() function property for arrays. But I find this a minor inconvenience considering the incompatibility with ECMAScript 5. It must be noted that the JSON Object defined in ECMAScript 5 is efficiently implemented in modern browsers and therefore the best solution is to conform to the standard and modify existing libraries.
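The symptom itself — an array serialized twice — is easy to reproduce in any JSON library; in Python terms, Prototype's `Array.prototype.toJSON` effectively pre-stringified the array before the outer serializer ran:

```python
import json

correct = json.dumps({"a": [1, 2]})
# Pre-serializing the array to a string, as Prototype's toJSON did,
# makes the outer pass quote it a second time:
double_encoded = json.dumps({"a": json.dumps([1, 2])})
```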
JSON.stringify() array bizarreness with Prototype.js
[ "", "javascript", "json", "prototypejs", "" ]
I want to make a web request to a page that needs authenticating. How would I go about doing this? I found something that said possibly to use the Credentials property, but I'm not sure how to use it.
Assign a new `NetworkCredential` instance to the `Credentials` property: ``` webClient.Credentials = new NetworkCredential("Mehrdad", "Password"); ```
Basic auth example: ``` public void SetBasicAuthHeader(WebRequest req, String userName, String userPassword) { string authInfo = userName + ":" + userPassword; authInfo = Convert.ToBase64String(Encoding.Default.GetBytes(authInfo)); req.Headers["Authorization"] = "Basic " + authInfo; } ``` <http://blog.kowalczyk.info/article/at3/Forcing-basic-http-authentication-for-HttpWebReq.html>
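The header construction in the second answer is language-neutral; the same Base64 scheme in Python (credentials are the RFC 2617 example pair; note this encodes as UTF-8 where the C# snippet used `Encoding.Default`):

```python
import base64

def basic_auth_header(user, password):
    """Build the value of an HTTP Basic 'Authorization' header."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token

header = basic_auth_header("Aladdin", "open sesame")
```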
Using HTTP Authentication with a C# WebRequest
[ "", "c#", "authentication", "webrequest", "" ]
An open source JavaScript project I work on includes code: ``` if (color) { tapeDiv.style.backgroundColor = color; // set color here if defined by event. Else use css } ``` A contributor wants to change it to ``` if (color != null) { // this line changed tapeDiv.style.backgroundColor = color; // set color here if defined by event. Else use css } ``` color is a string var. Only a string of more than 0 characters should be used to explicitly set the color. Since JS casts "" and null as boolean false, why would the comparison != null be needed? Am I missing something in thinking that the first form is just as good (and a bit shorter) than the second? I see comparisons with null quite often in JS source. Why are they needed when all JS simple objects have known results when cast as booleans? Thanks, Larry ps. I suppose if 0 (an integer) was a valid case, then if(0) would be false [a problem] and if(0 != null) would be true [allows the 0 case]. Any other reason? pps. Should have mentioned that the tapeDiv is newly created. So there's no point to resetting the style to "" since the div is brand new.
No, and your ps is correct. Null would evaluate to false, and if null needs to be distinguished from empty string or 0, then you'd do the null check. Or it could just be for clarity. It's more descriptive to indicate that you're specifically looking for null.
Evaluate the assignment with all possible falsy values and you'll get your answer: ``` tapeDiv.style.backgroundColor = false; // does nothing tapeDiv.style.backgroundColor = 0; // sets as "black", // but ignored by FF tapeDiv.style.backgroundColor = null; // resets the background-color // to use whatever is defined // in a stylesheet (if any), // but ignored by IE. tapeDiv.style.backgroundColor = ''; // resets the background-color // to use whatever is defined // in a stylesheet (if any). ``` The check for "`if (color)`" will not let any of them through. The check for "`if (color != null)`" will let 1), 2) and 4) through. 1) doesn't do anything, 2) won't work as expected in Firefox, and 4) will always work as expected. However, "works" is dependent on your context (which you did not provide). Hope that helps.
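The same truthy-versus-null distinction exists in Python, where the two guards are conventionally written like this (a sketch with a hypothetical default value):

```python
DEFAULT = "default"

def pick_color(color):
    if color:                    # rejects None, "" and 0 alike
        return color
    return DEFAULT

def pick_color_strict(color):
    if color is not None:        # lets "" and 0 through
        return color
    return DEFAULT
```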
Javascript: Why compare with null?
[ "", "javascript", "" ]
I've got a string in .NET which is actually a URL. I want an easy way to get the value from a particular parameter. Normally, I'd just use `Request.Params["theThingIWant"]`, but this string isn't from the request. I can create a new `Uri` item like so: ``` Uri myUri = new Uri(TheStringUrlIWantMyValueFrom); ``` I can use `myUri.Query` to get the query string...but then I apparently have to find some regexy way of splitting it up. Am I missing something obvious, or is there no built in way to do this short of creating a regex of some kind, etc?
Use static `ParseQueryString` method of `System.Web.HttpUtility` class that returns `NameValueCollection`. ``` Uri myUri = new Uri("http://www.example.com?param1=good&param2=bad"); string param1 = HttpUtility.ParseQueryString(myUri.Query).Get("param1"); ``` Check documentation at <http://msdn.microsoft.com/en-us/library/ms150046.aspx>
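`ParseQueryString` has direct analogues in other standard libraries; for instance, Python's `urllib.parse` splits the same URL like this:

```python
from urllib.parse import urlparse, parse_qs

uri = urlparse("http://www.example.com/?param1=good&param2=bad")
params = parse_qs(uri.query)   # values come back as lists
param1 = params["param1"][0]
```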
This is probably what you want ``` var uri = new Uri("http://domain.test/Default.aspx?var1=true&var2=test&var3=3"); var query = HttpUtility.ParseQueryString(uri.Query); var var2 = query.Get("var2"); ```
Get URL parameters from a string in .NET
[ "", "c#", ".net", "url", "parsing", "parameters", "" ]
I'm doing a read in on a file character by character using istream::get(). How do I end this function with something to check if there's nothing left to read in formatted in the file (eg. only whitespace) and set the corresponding flags (EOF, bad, etc)?
You can strip any amount of leading (or trailing, as it were) whitespace from a stream at any time by reading to [`std::ws`](http://en.cppreference.com/w/cpp/io/manip/ws). For instance, if we were reading a file from STDIN, we would do: ``` std::cin >> std::ws ``` Credit to [this comment](https://stackoverflow.com/questions/15674177/remove-whitespace-from-input-stream-by-only-means-of-istream-functions#comment31958751_15741326) on another version of this question, asked four years later.
Construct an `istream::sentry` on the stream. This will have [a few side effects](http://www.cplusplus.com/reference/istream/istream/sentry/), the one we care about being: > If its `skipws` format flag is set, and the constructor is not passed `true` as second argument (*noskipws*), all leading *whitespace characters* (locale-specific) are extracted and discarded. If this operation exhausts the source of characters, the function sets both the `failbit` and `eofbit` *internal state flags*
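For contrast, the same "consume leading whitespace, then report whether data remains" operation on a seekable Python text stream (a sketch, not part of any standard API):

```python
import io

def skip_ws(stream):
    """Consume leading whitespace; return True if non-whitespace data
    remains, False if the stream was exhausted (EOF reached)."""
    while True:
        pos = stream.tell()
        ch = stream.read(1)
        if ch == "":
            return False           # only whitespace (or nothing) was left
        if not ch.isspace():
            stream.seek(pos)       # push the character back
            return True

blank = skip_ws(io.StringIO("   \n\t "))
s = io.StringIO("  x y")
found = skip_ws(s)
rest = s.read()
```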
How do I set EOF on an istream without reading formatted input?
[ "", "c++", "" ]
I'm still learning C++; I was trying out how polymorphism works and I got a segmentation fault when calling a virtual method. (Note: I didn't mark the destructor as virtual, I was just trying out to see what happens.) Here's the code: ``` #include <iostream> using namespace std; class Base { protected: char *name; public: Base(char *name) { cout << name << ": Base class cons" << endl; } ~Base() { cout << name << ": Base class des" << endl; } virtual void disp(); }; void Base::disp() { cout << name << ": Base disp()" << endl; } class Child : public Base { public: Child(char *name): Base(name) { cout << name << ": Child class cons" << endl; } ~Child() { cout << name << ": Child class des" << endl; } virtual void disp() { cout << name << ": Child disp()" << endl; } }; int main() { //Base b; //b.disp(); Base c = Child("2"); c.disp(); } ``` Also, if you've any other tips regarding the usage of inheritance and polymorphism in general for someone who knows these concepts in Java, please let me know. Thank you!
name - is uninitialized in Base also you have another problem: ``` Base c = Child("2"); ``` I don't think it's what you want. Your code will create an instance of Base from a casted Child. But I think you want to work with the Child instance through the Base interface; you should instead write: ``` Base *c = new Child("2"); ``` also, to avoid future bugs, declare the destructor in Base as virtual.
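Since the asker knows these concepts from Java: the slicing pitfall has no counterpart in reference-based languages such as Java or Python, where assigning a Child to a Base-typed name copies nothing and the override always wins — a quick contrast:

```python
class Base:
    def __init__(self, name):
        self.name = name          # the C++ code forgot this assignment
    def disp(self):
        return self.name + ": Base disp()"

class Child(Base):
    def disp(self):
        return self.name + ": Child disp()"

c = Child("2")   # c is a reference; nothing is copied or sliced
result = c.disp()
```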
You never initialise the base member variable - your base constructor should be: ``` Base(char * aname) : name( aname ) { cout << name << ": Base class cons" << endl; } ``` As well as that, when you say ``` Base b = Child( "xxx" ); ``` then the Child instance will be sliced down to a Base, which is probably not what you want.
Why do I get a segmentation fault when calling a virtual method in this code?
[ "", "c++", "inheritance", "polymorphism", "segmentation-fault", "" ]
Many third-party Python modules have an attribute which holds the version information for the module (usually something like `module.VERSION` or `module.__version__`), however some do not. Particular examples of such modules are libxslt and libxml2. I need to check that the correct version of these modules is being used at runtime. Is there a way to do this? A potential solution would be to read in the source at runtime, hash it, and then compare it to the hash of the known version, but that's nasty. Is there a better solution?
I'd stay away from hashing. The version of libxslt being used might contain some type of patch that doesn't affect your use of it. As an alternative, I'd like to suggest that you don't check at run time (don't know if that's a hard requirement or not). For the python stuff I write that has external dependencies (3rd party libraries), I write a script that users can run to check their python install to see if the appropriate versions of modules are installed. For the modules that don't have a defined 'version' attribute, you can inspect the interfaces they contain (classes and methods) and see if they match the interface you expect. Then in the actual code that you're working on, assume that the 3rd party modules have the interface you expect.
Use [pkg\_resources](https://pythonhosted.org/setuptools/pkg_resources.html). Anything installed from PyPI at least should have a version number. ``` >>> import pkg_resources >>> pkg_resources.get_distribution("blogofile").version '0.7.1' ```
Checking a Python module version at runtime
[ "", "python", "module", "version", "" ]
I'm creating a PDF file on the fly using ITextSharp and ASP.NET 1.1. My process is as follows - * Create file on server * Redirect browser to newly created PDF file so it is displayed to user What I'd like to do is delete the PDF from the server as soon it is displayed in the users browser. The PDF file is large so it is not an option to hold it in memory, an initial write to the server is required. I'm currently using a solution that periodically polls for files then deletes them, but I'd prefer a solution that deletes the file immediately after it has been downloaded to the client machine. Is there a way to do this?
Instead of redirecting the browser to the created file you could serve the file yourself using you own HttpHandler. Then you could delete the file immediately after you served it or you could even create the file in memory. Write the PDF file directly to the Client: ``` public class MyHandler : IHttpHandler { public void ProcessRequest(System.Web.HttpContext context) { context.Response.ContentType = "application/pdf"; // ... PdfWriter.getInstance(document, context.Response.OutputStream); // ... ``` or read an already generated file 'filename', serve the file, delete it: ``` context.Response.Buffer = false; context.Response.BufferOutput = false; context.Response.ContentType = "application/pdf"; Stream outstream = context.Response.OutputStream; FileStream instream = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.Read); byte[] buffer = new byte[BUFFER_SIZE]; int len; while ((len = instream.Read(buffer, 0, BUFFER_SIZE)) > 0) { outstream.Write(buffer, 0, len); } outstream.Flush(); instream.Close(); // served the file -> now delete it File.Delete(filename); ``` *I didn't try this code. This is just how I think it would work ...*
Inspired by f3lix's answer (thanks f3lix!) I've come up with the following VB.NET code:

```
HttpContext.Current.Response.ClearContent()
HttpContext.Current.Response.ClearHeaders()
HttpContext.Current.Response.ContentType = "application/pdf"
HttpContext.Current.Response.TransmitFile(PDFFileName)
HttpContext.Current.Response.Flush()
HttpContext.Current.Response.Close()
File.Delete(PDFFileName)
```

This appears to work. Is the 'WriteFile' method I've used any less efficient than the stream methods used by f3lix? Is there a method available that's more efficient than either of our solutions?

EDIT (19/03/2009): Based on comments below, I've changed the 'WriteFile' method to 'TransmitFile', as it appears to send the file down to the client in chunks rather than writing the entire file to the web server's memory before sending. Further info can be found [here](http://forums.asp.net/t/1275581.aspx).
Delete dynamically generated PDF file immediately after it has been displayed to user
[ "", "c#", "asp.net", "vb.net", "pdf-generation", "" ]
I have a project in which I would like to make more use of smart pointers. Overall, I have been successful in this goal. However, I've come across one thing for which I'm not sure what the "best practice" is. Basically I would like to return a "pointer" from a function, but **require** that the user hold it in a smart pointer. Not only that, I don't want to mandate a particular smart pointer (shared vs. scoped). The problem is mostly that there doesn't seem to be a proper way to upgrade a `scoped_ptr` to a `shared_ptr` (that would be the ideal solution, I think). I understand why they didn't do this, as it would allow transferring of ownership, which can lead to some issues like those `std::auto_ptr` has. However, transferring of ownership seems like a good idea for *this* case. So my idea is like this:

```
// contrived example of factory pattern
std::auto_ptr<A> func()
{
    return std::auto_ptr<A>(new A);
}
```

This works "ok" since both `scoped_ptr` and `shared_ptr` have constructors which take ownership from a `std::auto_ptr`. So my question is: is this good practice? Is there a better solution? The only real alternative I've been able to come up with is using a template template parameter as the return value, like this:

```
// similar to above example
template <template <typename> class P>
P<A> func()
{
    return P<A>(new A);
}
```

which actually could work well, except that I think it would need some work to get it to work with a `scoped_ptr` too. Thoughts?
Using `std::auto_ptr` is **the** good practice; in fact [such an example](http://www.research.att.com/~bs/bs_faq2.html#memory-leaks) was suggested by Bjarne Stroustrup. The move semantics of `auto_ptr` give you the right tools to deal with this. For example:

```
auto_ptr<Foo> make_foo()
{
    return auto_ptr<Foo>(new Foo);
}

Foo *raw_pointer = make_foo().release();
shared_ptr<Foo> shared_pointer = make_foo();
auto_ptr<Foo> auto_pointer = make_foo();
```

If you return a `shared_ptr`, you can't fall back to a normal pointer; with `auto_ptr` you can. You can always upgrade an `auto_ptr` to a `shared_ptr`, but not in the other direction. Another important point: `shared_ptr` uses atomic reference counting, which is much slower than the simple and yet fully efficient job that `auto_ptr` does.

P.S.: `scoped_ptr` is just a poor man's version of `auto_ptr`: it is non-copyable and does not have a default constructor. It is more like a "less confusing" version of `auto_ptr`; in comparison to `shared_ptr`, it is not in TR1. Generally there are not many advantages to using `scoped_ptr` over `auto_ptr`.
If you build a factory, it's ok to simply return a pointer, and the user of your factory can make his own decision about how and where to put this pointer. If you need to enforce the use of a smart pointer, you have to restrict choice, as you don't want them to use the "wrong" ones. So: `boost::shared_ptr`. But it is better to typedef it then, to MyClassPtr or MyClass::ptr. Still, factories are like "new". When I want to, I put the result of new inside a std::auto\_ptr. But I don't want to be forced to call "release" all the time when I don't want a smart pointer.
returning a 'pointer' which is required to be held by a smart pointer
[ "", "c++", "boost", "tr1", "" ]
I have a Python program with two threads (let's name them 'source' and 'destination'). The source thread sometimes posts a message to the destination thread with some arguments. When the destination thread picks up a message, it must call a corresponding function with the arguments saved in the message.

This task can be solved multiple ways. The easy one is to put a big 'if...if..if' in the destination thread's message-pick cycle and call a function according to the received message type and saved arguments. But this will result in a huge amount of code (or a big lookup table), and adding new messages / handler functions will involve an additional step of writing code in the message-pick cycle.

Since Python treats functions as first-class objects and has tuples, I want to put a function and arguments inside a message, so that when the destination thread picks up a message it just calls the function saved within the message, without any knowledge of what function it is. I can write code for functions with a specified number of arguments:

```
from Queue import *
from thread import *
from time import *

q = Queue()

def HandleMsg( arg1, arg2 ) :
    print arg1, arg2

def HandleAnotherMsg( arg1, arg2, arg3 ) :
    print arg1, arg2, arg3

def DestinationThread( a ) :
    while True :
        (f, a, b) = q.get()
        f( a, b )

start_new_thread( DestinationThread, ( 0, ) )
print "start"
sleep( 1 )
q.put( (HandleMsg, 1, 2) )
sleep( 1 )
print "stop"
```

The question is: how do I modify the code so I can put() a function with any number of arguments in the queue? For example HandleAnotherMsg()? Using `q.put( (HandleAnotherMsg, 1, 2, 3) )` will raise an unpacking error in DestinationThread :(
So simple:

```
def DestinationThread( a ) :
    while True :
        items = q.get()
        func = items[0]
        args = items[1:]
        func(*args)
```
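Putting that dispatch pattern together into a runnable sketch (standard library only; the `None` shutdown sentinel and the names are my own additions):

```python
import threading
from queue import Queue  # the module is named 'Queue' on Python 2

q = Queue()
results = []

def handle_msg(a, b):
    results.append((a, b))

def handle_another_msg(a, b, c):
    results.append((a, b, c))

def destination_thread():
    while True:
        item = q.get()
        if item is None:          # sentinel: time to shut down
            break
        func, args = item[0], item[1:]
        func(*args)               # works for any number of arguments
        q.task_done()

worker = threading.Thread(target=destination_thread)
worker.start()
q.put((handle_msg, 1, 2))
q.put((handle_another_msg, 1, 2, 3))
q.put(None)
worker.join()
print(results)  # [(1, 2), (1, 2, 3)]
```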
Another interesting option is simply to pass in a lambda.

```
q.put(lambda: HandleMsg(1,2))
q.put(lambda: HandleAnother(8, "hello", extra="foo"))

def DestinationThread() :
    while True :
        f = q.get()
        f()
```
how to put a function and arguments into python queue?
[ "", "python", "multithreading", "" ]
How can I refer to a nested div by id when it has the same id as a div nested in a similarly named div? E.g.

```
<div id="obj1">
  <div id="Meta">
    <meta></meta>
  </div>
</div>
<div id="obj2">
  <div id="Meta">
    <meta></meta>
  </div>
</div>
```

I want to get the innerHTML of meta.

```
document.getElementById('obj1').getElementById('Meta').getElementsByTagName('meta')
```

doesn't work.
IDs should only be used when there is one of that item on the page, be it a SPAN, DIV or whatever. CLASS is what you should use when you may have a repeating element. The code there doesn't work because you're referring to an element by a supposedly unique ID, but have more than one on the page. (Also, `getElementById` exists only on `document`, not on individual elements, so it can't be chained the way you've written it.)
Id is supposed to be unique.
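One way to apply the advice from both answers is to switch the duplicated ids to classes and scope the lookup to the parent div; a sketch (the class name is my own, and `querySelector` requires IE8+ or any modern browser):

```html
<div id="obj1">
  <div class="meta"><meta></meta></div>
</div>
<div id="obj2">
  <div class="meta"><meta></meta></div>
</div>
<script>
  // look inside one specific parent, then grab its .meta child
  var meta = document.getElementById('obj1').querySelector('.meta');
  alert(meta.innerHTML);
</script>
```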
Referring to a div inside a div with the same ID as another inside another
[ "", "javascript", "html", "dom", "getelementbyid", "" ]
I need to implement email notifications for a C++ project. Basically, a user provides all the relevant information for their email account, and on certain events this component would fire off an email. Ideally I would like to find a small cross-platform open-source command-line project that I can exec from within my project and parse the output of. Something like [blat](http://sourceforge.net/projects/blat/), but one that also supports SSL connections and can be used on both Windows (XP and 2003) and Linux (Ubuntu 6.06 and 8.04). I could also use a library if it were simple enough and licensed under a commercial-friendly license, but I'm open to hearing all suggestions. Thank you very much in advance for any recommendations.
I ended up using the Perl script [sendEmail](http://caspian.dotconf.net/menu/Software/SendEmail/). A Windows binary was available, and building a new binary after modifying the Perl script was not too hard to do at all. The script also had no issues running in the LTS Ubuntu environments after the required Debian packages were installed.
To compose the email libmime (<http://www.mozilla.org/mailnews/arch/libmime-description.html>) can be helpful. To send the mail libsmtp (<http://libsmtp.berlios.de/>) can be used.
Simple email program / library recommendations
[ "", "c++", "email", "" ]
Okay, so I set this up so that I can print out the page that people came from, and then put dummy tags on certain pages. Some pages have commented-out "linkto" tags with text in between them. My problem is that some of my pages don't have "linkto" text. When I link to this page from there, I want it to grab everything between "title" and "/title". How can I change the eregi so that if it turns up empty, it then grabs the title? Here is what I have so far; I know I just need some kind of if/then, but I'm a rank beginner. Thank you in advance for any help:

```
<?php
$filesource = $_SERVER['HTTP_REFERER'];
$a = fopen($filesource,"r"); //fopen("html_file.html","r");
$string = fread($a,1024);
?>
<?php
if (eregi("<linkto>(.*)</linkto>", $string, $out)) {
    $outdata = $out[1];
}
//echo $outdata;
$outdatapart = explode( " " , $outdata);
echo $part[0];
?>
```
Here you go: if eregi() fails to match, the $outdata assignment will never happen, as the if block will not be executed. If it matches but there's nothing between the tags, $outdata will be assigned an empty string. In both cases, !$outdata will be true, so we can fall back to a second match on the title tag instead.

```
if(eregi("<linkto>(.*?)</linkto>", $string, $link_match)) {
    $outdata = $link_match[1];
}
if(!$outdata && eregi("<title>(.*?)</title>", $string, $title_match)) {
    $outdata = $title_match[1];
}
```

I also changed the (.\*) in the match to (.\*?). This means: don't be greedy. In the (.\*) form, if you had $string set to

```
<title>Page Title</title>
...
...
<iframe><title>A second title tag!</title></iframe>
```

the regex would match

```
Page Title</title>
...
...
<iframe><title>A second title tag!
```

because it tries to match as much as possible, as long as the text is between any `<title>` and any other `</title>`! In the (.\*?) form, the match does what you'd expect; it matches

```
Page Title
```

and stops as soon as it is able.

... As an aside, this thing is an interesting scheme, but why do you need it? Pages can link to other pages and pass parameters via the query string:

```
<a href="somescript.php?prevpage=Foo">...</a>
```

Then somescript.php can access the prevpage parameter via the $\_GET['prevpage'] superglobal variable. Would that solve your problem?
The POSIX regex extension (ereg etc.) will be deprecated as of PHP 5.3.0 and may be gone completely come PHP 6, you're better off using the PCRE functions (preg\_match and friends). The PCRE functions are also faster, binary safe and support more features like non-greedy matching etc. Just a pointer.
basic if/then with PHP
[ "", "php", "" ]
I have looked at [TinyMCE](http://tinymce.moxiecode.com/), [FCKeditor](http://www.fckeditor.net), [YUI Rich Text Editor](http://developer.yahoo.com/yui/editor/), [NicEdit](http://www.nicedit.com/) and some others, but I was wondering if there is one out there that utilises jQuery? I mean, I already use jQuery in all my projects, and it seemed it'd be useful to have a WYSIWYG editor that used it also. This should result in a smaller file size. I understand if one doesn't exist (as they were probably developed before JS frameworks were popular, and changing the codebase now is too much work), or simply for easy deployment (no dependency on any framework). So, does one exist, and is it any good?
[jquery wysiwyg](http://code.google.com/p/jwysiwyg/)
I've been using the editor from [Imperavi, Redactor](http://imperavi.com/redactor/), for some time. It is quick and easy; it may not be fully featured, but it is constantly evolving.
Is there a JavaScript WYSIWYG that leverages jQuery?
[ "", "javascript", "jquery", "wysiwyg", "" ]
I am trying to display some images containing special characters like ☻ ☺ ♥ or Chinese or Arabic characters in their names using JSP, but the images are not getting displayed!

```
<img src = "pipo².jpg" />
<img src = "pip☺☻♥o².jpg" />
```

What am I doing wrong?
You can percent-encode the URLs using `encodeURIComponent` in JavaScript, to give you

```
<img src="pip%C3%A2%C2%98%C2%BA%C3%A2%C2%98%C2%BB%C3%A2%C2%99%C2%A5o%C3%82%C2%B2.jpg">
```
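For illustration, here is what `encodeURIComponent` produces when the file name from the question is treated as UTF-8 (the exact escapes depend on the name's real bytes on disk, so they may differ from the ones shown above):

```javascript
// encodeURIComponent percent-encodes the UTF-8 bytes of each character,
// leaving ASCII letters, digits and a few marks (including '.') intact
const name = 'pip☺☻♥o².jpg';
const encoded = encodeURIComponent(name);
console.log(encoded);                              // pip%E2%98%BA%E2%98%BB%E2%99%A5o%C2%B2.jpg
console.log(decodeURIComponent(encoded) === name); // true
```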
Try encoding the filename using the [URLEncoder.encode()](http://java.sun.com/j2se/1.5.0/docs/api/java/net/URLEncoder.html) method before the HTML is sent to the page, e.g.

```
String encodedString = URLEncoder.encode(filename, "UTF-8");
```

This will convert the characters to percent-encoded sequences which can be safely used in the HTML.
Show Images with name containing special characters
[ "", "java", "html", "jsp", "encoding", "upload", "" ]
I'm trying to understand what kind of memory hit I'll incur by creating a large array of objects. I know that each object - when created - will be given space in the HEAP for member variables, and I think that all the code for every function that belongs to that type of object exists in the code segment in memory - permanently. Is that right?

So if I create 100 objects in C++, I can **estimate** that I will need space for all the member variables that object owns multiplied by 100 (possible alignment issues here), and then I need space in the code segment for a single copy of the code for each member function of that type of object (not 100 copies of the code). Do virtual functions, polymorphism, and inheritance factor into this somehow? What about objects from dynamically linked libraries? I assume DLLs get their own stack, heap, code and data segments.

Simple example (may not be syntactically correct):

```
// parent class
class Bar {
public:
    Bar() {};
    ~Bar() {};

    // pure virtual function
    virtual void doSomething() = 0;

protected:
    // a protected variable
    int mProtectedVar;
};

// our object class that we'll create multiple instances of
class Foo : public Bar {
public:
    Foo() {};
    ~Foo() {};

    // implement pure virtual function
    void doSomething() { mPrivate = 0; }

    // a couple public functions
    int getPrivateVar() { return mPrivate; }
    void setPrivateVar(int v) { mPrivate = v; }

    // a couple public variables
    int mPublicVar;
    char mPublicVar2;

private:
    // a couple private variables
    int mPrivate;
    char mPrivateVar2;
};
```

About how much memory should 100 dynamically allocated objects of type Foo take, including room for the code and all variables?
It's not necessarily true that "each object - when created - will be given space in the HEAP for member variables". Each object you create will take some nonzero space somewhere for its member variables, but where is up to how you allocate the object itself. If the object has automatic (stack) allocation, so too will its data members. If the object is allocated on the free store (heap), so too will be its data members. After all, what is the allocation of an object other than that of its data members? If a stack-allocated object contains a pointer or other type which is then used to allocate on the heap, that allocation will occur on the heap regardless of where the object itself was created. For objects with virtual functions, each will have a vtable pointer allocated as if it were an explicitly-declared data member within the class. As for member functions, the code for those is likely no different from free-function code in terms of where it goes in the executable image. After all, a member function is basically a free function with an implicit "this" pointer as its first argument. Inheritance doesn't change much of anything. I'm not sure what you mean about DLLs getting their own stack. A DLL is not a program, and should have no need for a stack (or heap), as objects it allocates are always allocated in the context of a program which has its own stack and heap. That there would be code (text) and data segments in a DLL does make sense, though I am not expert in the implementation of such things on Windows (which I assume you're using given your terminology).
Code exists in the text segment, and how much code is generated based on classes is reasonably complex. A boring class with no virtual inheritance ostensibly has some code for each member function (including those that are implicitly created when omitted, such as copy constructors) just once in the text segment. The size of any class instance is, as you've stated, generally the sum of the sizes of the member variables. Then it gets somewhat complex. A few of the issues are:

* The compiler can, if it wants or is instructed, `inline` code. So even though it might be a simple function, if it's used in many places and chosen for inlining, a lot of code can be generated (spread all over the program code).
* Virtual inheritance increases the size of each polymorphic instance. The VTABLE (virtual table) hides along with each instance of a class using a virtual method, containing information for runtime dispatch. This table can grow quite large if you have many virtual functions, or multiple (virtual) inheritance. Clarification: the VTABLE is per class, but pointers to the VTABLE are stored in each instance (depending on the ancestral type structure of the object).
* Templates can cause code bloat. Every use of a templated class with a new set of template parameters can generate brand new code for each member. Modern compilers try to collapse this as much as possible, but it's hard.
* Structure alignment/padding can cause simple class instances to be larger than you expect, as the compiler pads the structure for the target architecture.

When programming, use the `sizeof` operator to determine object size; never hard-code it. Use the rough metric of "sum of member variable sizes + some VTABLE (if it exists)" when estimating how expensive large groups of instances will be, and don't worry overly about the size of the code. Optimise later, and if any of the non-obvious issues come back to mean something, I'll be rather surprised.
In C++, where in memory are class functions put?
[ "", "c++", "memory", "dynamic", "" ]
I need to measure, in C++ on Linux (and other Unix-like systems), the CPU (not wall clock) time taken by a long computation, so I'm using clock(). Problem: on 32 bit systems, this wraps around after about 2000 or 4000 seconds. What's the recommended workaround for this?
You want to use `getrusage`, which fills the following struct:

```
struct rusage {
    struct timeval ru_utime; /* user time used */
    struct timeval ru_stime; /* system time used */
    ...
};
```

For completeness, `struct timeval`:

```
struct timeval {
    time_t      tv_sec;  /* seconds */
    suseconds_t tv_usec; /* microseconds */
};
```

As always, consult the manpage for details (`man getrusage`)
Another possible method is to use gettimeofday() twice as it returns ms accuracy.
32 bit Linux clock() overflow
[ "", "c++", "linux", "time", "clock", "" ]
wx (and wxPython) has two events I miss in PyQt:

* `EVT_IDLE`, which is sent to a frame. It can be used to update the various widgets according to the application's state
* `EVT_UPDATE_UI`, which is sent to a widget when it has to be repainted and updated, so I can compute its state in the handler

Now, PyQt doesn't seem to have these, and the PyQt book suggests writing an `updateUi` method and calling it manually. I even ended up calling it from a timer once every 0.1 seconds, in order to avoid many manual calls from methods that may update the GUI. Am I missing something? Is there a better way to achieve this?

---

An example: I have a simple app with a Start button that initiates some processing. The Start button should be enabled only when a file has been opened using the menu. In addition, there's a permanent widget on the status bar that displays information. My application has states:

1. Before the file is opened (in this state the status bar shows something special and the Start button is disabled)
2. File was opened and processing wasn't started: the Start button is enabled, and the status bar shows something else
3. The processing is running: the Start button now says "Stop", and the status bar reports progress

In wx, I'd have the update-UI event of the button handle its state: the text on it, and whether it's enabled, depending on the application state. The same for the status bar (or I'd use EVT\_IDLE for that). In Qt, I have to update the button in several methods that may affect the state, or just create an update\_ui method and call it periodically in a timer. What is the more "Qt"-ish way?
The use of EVT\_UPDATE\_UI in wxWidgets seems to highlight one of the fundamental differences in the way wxWidgets and Qt expect developers to handle events in their code. With Qt, you connect signals and slots between widgets in the user interface, either handling "business logic" in each slot or delegating it to a dedicated method. You typically don't worry about making separate changes to each widget in your GUI because any repaint requests will be placed in the event queue and delivered when control returns to the event loop. Some paint events may even be merged together for the sake of efficiency. So, in a normal Qt application where signals and slots are used to handle state changes, there's basically no need to have an idle mechanism that monitors the state of the application and update widgets because those updates should occur automatically. You would have to say a bit more about what you are doing to explain why you need an equivalent to this event in Qt.
I would send Qt signals to indicate state changes (e.g. fileOpened, processingStarted, processingDone). Slots in objects managing the start button and status bar widget (or subclasses) can be connected to those signals, rather than "polling" for current state in an idle event. If you want the signal to be deferred later on in the event loop rather than immediately (e.g. because it's going to take a bit of time to do something), you can use a "queued" signal-slot connection rather than the normal kind. <http://doc.trolltech.com/4.5/signalsandslots.html#signals> The connection type is an optional parameter to the connect() function: <http://doc.trolltech.com/4.5/qobject.html#connect> , <http://doc.trolltech.com/4.5/qt.html#ConnectionType-enum>
wx's idle and UI update events in PyQt
[ "", "python", "qt", "wxpython", "pyqt", "" ]
Do default parameters for methods violate Encapsulation? What was the rationale behind not providing default parameters in C#?
I would take [this](https://learn.microsoft.com/en-us/archive/blogs/csharpfaq/does-c-have-default-parameters) as the "official" answer from Microsoft. However, default (and named) parameters *will* most definitely be available in C# 4.0.
No, it doesn't affect encapsulation in any way. It simply is not often necessary. Often, creating an overload which takes fewer arguments is a more flexible and cleaner solution, so C#'s designer simply did not see a reason to add the complexity of default parameters to the language. Adding "Another way to do the same thing" is always a tradeoff. In some cases it may be convenient. But the more syntax you make legal, the more complex the language becomes to learn, and the more you may wall yourself in, preventing future extension. (Perhaps they'd one day come up with another extension to the language, which uses a similar syntax. Then that'd be impossible to add, because it'd conflict with the feature they added earlier)
Are default parameters bad practice in OOP?
[ "", "c#", "oop", "default", "" ]
I'm thinking about converting an app from ASP.NET to Python. I would like to know: what are the key comparisons to be aware of when moving an ASP.NET app to Python (insert framework)? Does Python have user controls? Master pages?
First, Python is a language, while ASP.NET is a web framework. In fact, you can code ASP.NET applications using [IronPython](http://www.codeplex.com/IronPython). If you want to leave ASP.NET behind and go with the Python "stack," then you can choose from several different web application frameworks, including [Django](http://en.wikipedia.org/wiki/Django_(web_framework)) and [Zope](http://en.wikipedia.org/wiki/Zope). Zope, for example, offers a pluggable architecture where you can "add on" things like wikis, blogs, and so on. It also has page templates, which are somewhat similar to the ASP.NET master page.
I second the note by Out Into Space on how python is a language versus a web framework; it's an important observation that underlies pretty much everything you will experience in moving from ASP.NET to Python. On a similar note, you will also find that the differences in language style and developer community between C#/VB.NET and Python influence the basic approach to developing web frameworks. This would be the same whether you were moving from web frameworks written in java, php, ruby, perl or any other language for that matter. The old "when you have a hammer, everything looks like a nail" adage really shows in the basic design of the frameworks :-) Because of this, though, you will find yourself with a few paradigm shifts to make when you substitute that hammer for a screwdriver. For example, Python web frameworks rely much less on declarative configuration than ASP.NET. Django, for example, has only a single config file that really has only a couple dozen lines (once you strip out the comments :-) ). Similarly, URL configuration and the page lifecycle are quite compact compared to ASP.NET, while being just as powerful. There's more "convention" over configuration (though much less so that Rails), and heavy use of the fact that modules in Python are top-level objects in the language... not everything has to be a class. This cuts down on the amount of code involved, and makes the application flow highly readable. As Out Into Space mentioned, zope's page templates are "somewhat" similar to ASP.NET master page, but not exactly. Django also offers page templates that inherit from each other, and they work very well, but not if you're trying to use them like an ASP.NET template. There also isn't a tradition of user controls in Python web frameworks a la .NET. The configuration machinery, request/response process indirection, handler complexity, and code-library size is just not part of the feel that python developers have for their toolset. 
We all argue that you can build the same web application, with probably less code, and more easily debuggable/maintainable using pythonic-tools :-) The main benefit here being that you also get to take advantage of the python language, and a pythonic framework, which is what makes python developers happy to go to work in the morning. YMMV, of course. All of which to say, you'll find you can do everything you've always done, just differently. Whether or not the differences please or frustrate you will determine if a python web framework is the right tool for you in the long run.
What should I be aware of when moving from asp.net to python for web development?
[ "", "asp.net", "python", "" ]
I would like to port my C/C++ apps to OS X. I don't have a Mac, but I have Linux and Windows. Is there any tool for this?
There appear to be [some scripts](http://www.sandroid.org/imcross/) that have been written to help get you set up for cross-compiling for the Mac; I can't say how good they are, or how applicable they are to your project. In the documentation, they refer to [these instructions](http://www.sandroid.org/imcross/) for cross-compiling for 10.4, and [these ones](http://devs.openttd.org/%7Etruebrain/compile-farm/apple-darwin9.txt) for cross-compiling for 10.5; those instructions may be more helpful than the script, depending on how well the script fits your needs. If your program is free or open source software, then you may wish instead to create a [MacPorts](http://www.macports.org/) portfile (documentation [here](http://guide.macports.org/)), and allow your users to build your program using MacPorts; that is generally the preferred way to install portable free or open source software on Mac OS X. MacPorts has been known to run on Linux in the past, so it may be possible to develop and test your Portfile on Linux (though it will obviously need to be tested on a Mac).
For Linux, there is a prebuilt GCC cross-compiler (from Apple's publicly available modified GCC sources).

<https://launchpad.net/~flosoft/+archive/cross-apple>

**Update for 2015**

1. After so many years, the industry-standard IDE now supports OSX/iOS/Android: <http://channel9.msdn.com/Events/Visual-Studio/Connect-event-2014/311>
2. Embarcadero's RadStudio also supports building OSX/iOS/Android apps on Windows.
3. This [answer](https://stackoverflow.com/a/19891283/1182653) by [Thomas](https://stackoverflow.com/users/1392778/thomas) also provides a cross-compilation tool.

For all these options you still need a real Mac/i-device to test the application.
How to Compile for OS X in Linux or Windows?
[ "", "c++", "c", "macos", "cross-compiling", "" ]
I know this is an elementary question for you PHP people out there: I put p tags around some output in PHP by doing this:

```
echo "<p>";
echo $VM_LANG->_('PHPSHOP_COUPON_ENTER_HERE') . '<br />';
echo "</p>";
```

It is a silly way to do it. So how can I put the p tags within the one 'echo'?
```
echo "<p>" . $VM_LANG->_('PHPSHOP_COUPON_ENTER_HERE') . '<br /></p>';
```

or

```
echo "<p>" , $VM_LANG->_('PHPSHOP_COUPON_ENTER_HERE') , '<br /></p>';
```

The latter is only possible with echo (not print) and theoretically saves some computation time, as the strings don't need to be concatenated together. It probably won't matter 99% of the time, but it's nice to know about.

The first says:

1. Attach (concatenate) "<p>" to $VM\_LANG->\_('PHPSHOP\_COUPON\_ENTER\_HERE')
2. Attach the result of 1 to '<br /></p>'
3. echo out the result of 2

While the latter says:

1. echo out "<p>"
2. echo out $VM\_LANG->\_('PHPSHOP\_COUPON\_ENTER\_HERE')
3. echo out '<br /></p>'

A single concatenation will almost always take less processing power than an echo
You can concatenate `<p>` and `</p>` onto the ends:

```
echo '<p>'.$VM_LANG->_('PHPSHOP_COUPON_ENTER_HERE').'<br /></p>';
```

The dot ('.') is PHP's [string concatenation](http://us.php.net/language.operators.string) operator and can be used to combine several string literals, variables, and string-returning functions. However, there is an alternative:

```
echo "<p>{$VM_LANG->_('PHPSHOP_COUPON_ENTER_HERE')}<br /></p>";
```

This uses [variable parsing](http://us.php.net/manual/en/language.types.string.php#language.types.string.parsing) to create the desired string.
put in p tags using php?
[ "", "php", "html", "" ]
I am referring to this post: <http://www.pinvoke.net/default.aspx/user32/RegisterHotKey.html>

```
#region fields
public static int MOD_ALT = 0x1;
public static int MOD_CONTROL = 0x2;
public static int MOD_SHIFT = 0x4;
public static int MOD_WIN = 0x8;
public static int WM_HOTKEY = 0x312;
#endregion

[DllImport("user32.dll")]
private static extern bool RegisterHotKey(IntPtr hWnd, int id, int fsModifiers, int vlc);

[DllImport("user32.dll")]
private static extern bool UnregisterHotKey(IntPtr hWnd, int id);

private static int keyId;

public static void RegisterHotKey(Form f, Keys key)
{
    int modifiers = 0;

    if ((key & Keys.Alt) == Keys.Alt)
        modifiers = modifiers | WindowsShell.MOD_ALT;

    if ((key & Keys.Control) == Keys.Control)
        modifiers = modifiers | WindowsShell.MOD_CONTROL;

    if ((key & Keys.Shift) == Keys.Shift)
        modifiers = modifiers | WindowsShell.MOD_SHIFT;

    Keys k = key & ~Keys.Control & ~Keys.Shift & ~Keys.Alt;

    Func ff = delegate()
    {
        keyId = f.GetHashCode(); // this should be a key unique ID, modify this if you want more than one hotkey
        RegisterHotKey((IntPtr)f.Handle, keyId, modifiers, (int)k);
    };
    f.Invoke(ff); // this should be checked if we really need it (InvokeRequired), but it's faster this way
}
```

My question is: how does the `RegisterHotKey` API know that `1`, `2`, `4`, `8` are the modifier keys? The key codes for Ctrl, Shift, and Menu (Alt) give back totally different values. And what exactly is going on in the `RegisterHotKey` function where it's checking:

```
if ((key & Keys.Control) == Keys.Control)
    modifiers = modifiers | WindowsShell.MOD_CONTROL;
```

and what is it doing here?

```
Keys k = key & ~Keys.Control & ~Keys.Shift & ~Keys.Alt;
```
`MOD_ALT`, `MOD_CONTROL`, etc. don't have any relationship to the keycodes of the associated keys. What you're seeing is a set of integer constants used as [flags](http://msdn.microsoft.com/en-us/library/system.enum.aspx). This is a particularly compact way of representing combinations of things (like modifier keys being simultaneously pressed, or file access permissions). When constants are defined this way, each bit of a variable can be used to indicate that a particular "flag" is set. ``` // Note that powers of 2 are used; each value has only a single bit set public static int MOD_ALT = 0x1; // If bit 0 is set, Alt is pressed public static int MOD_CONTROL = 0x2; // If bit 1 is set, Ctrl is pressed public static int MOD_SHIFT = 0x4; // If bit 2 is set, Shift is pressed public static int MOD_WIN = 0x8; // If bit 3 is set, Win is pressed // If we wanted to represent a combination of keys: int altAndControl = MOD_ALT | MOD_CONTROL; // == 3 int controlAndShift = MOD_CONTROL | MOD_SHIFT; // == 6 ``` This has two advantages: * The API doesn't have to take an individual boolean parameter for every modifier key * The API can be expanded to include additional modifier keys without changing the interface Bitwise `&`s and `|`s can be used to determine which flags are set in a value, and to set or unset a flag in a value. The code you've asked about does exactly this: ``` if ((key & Keys.Control) == Keys.Control) modifiers = modifiers | WindowsShell.MOD_CONTROL ``` is saying "If the key has the `Control` bit set, then set the control bit in `modifiers`" ``` Keys k = key & ~Keys.Control & ~Keys.Shift & ~Keys.Alt; ``` is saying "`k` is assigned `key` with the `Control`, `Shift` and `Alt` flags cleared" I'm not sure why the contributors to pinvoke chose to use constants; you can just as easily use a proper `[Flags]` enum: ``` [Flags] public enum Modifiers { None = 0, Alt = 1, Control = 2, // ... 
} ``` [My answer to a similar question](https://stackoverflow.com/questions/612072/question-on-operators/612116#612116) has some more details on how flags work, and more examples.
1, 2, 4, 8 are used to represent single bits. 1 means the least significant bit is on. Adding 2 turns on the second least significant bit, and so on. Thus you can set multiple flags independently of each other. ``` 1 = 0001 2 = 0010 4 = 0100 8 = 1000 ``` Thus if you want to set the flags represented by 2 and 4 to be true, you would use 0110 = 6.
When using RegisterHotKey api call, why does it accept 1, 2, 4, and 8 as modifiers?
[ "", "c#", "enums", "pinvoke", "flags", "" ]
Just want to know if there's a tutorial or a how-to for serializing objects, putting them into a stream over network, and deserialize it on the other side. I understand the principles of serialization, I/O, streams, sockets, and so on, I just would like an example of a client sending an object to a server to start with.
[This](http://www.cs.mcgill.ca/~adenau/teaching/cs303/lecture12.pdf) (pdf) is a useful tutorial which walks you through the basics of serialisation, and sockets, then ties the two concepts together (about halfway through the slides) to show how to serialise an object and send it from client to server (no RMI). I think that's precisely what you want.
It's pretty simple, actually. Just make your objects serializable, and create an ObjectOutputStream and ObjectInputStream that are connected to whatever underlying stream you have, say FileInputStream, etc. Then just write() whatever object you want to the stream and read it on the other side. Here's an [example](http://www.java2s.com/Tutorial/Java/0180__File/ReadingBasicDataFromanObjectStream.htm) for you.
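A minimal sketch of that pattern (class and method names here are illustrative; over a real network the byte-array streams would be replaced by `socket.getOutputStream()` / `socket.getInputStream()`):

```java
import java.io.*;

// Hypothetical payload class; anything sent this way must implement Serializable.
class Message implements Serializable {
    private static final long serialVersionUID = 1L;
    final String text;
    Message(String text) { this.text = text; }
}

public class SerializationSketch {
    // Serialize an object to bytes. On a real connection the ObjectOutputStream
    // would wrap socket.getOutputStream() instead of a byte array.
    static byte[] toBytes(Object obj) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    // Deserialize on the receiving side; would wrap socket.getInputStream().
    static Object fromBytes(byte[] data) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Message received = (Message) fromBytes(toBytes(new Message("hello")));
        System.out.println(received.text);
    }
}
```

Both sides need the same `Message` class (with a matching `serialVersionUID`) on their classpath for `readObject` to succeed.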
Java serialization over network
[ "", "java", "serialization", "network-programming", "" ]
I am working on an application which accepts any uploaded CSV data, stores it alongside other datasets which have been uploaded previously, and then produces output (CSV or HTML) based on the user selecting which columns/values they want returned. The database will be automatically expanded to handle new/different columns and datatypes as required. This is in preference to a entity-attribute-value model. Example - uploading these 2 sets to a blank database: dataset A: ``` name | dept | age ------+-------+------ Bob | Sales | 24 Tim | IT | 32 ``` dataset B: ``` name | dept | age | salary ------+-------+------+-------- Bob | Sales | 24 | £20,000 Tim | IT | 32 | £20,000 ``` Will programatically change the 'data' table so that importing dataset A results in 3 newly created columns (name,dept,age). Importing dataset B results in 1 newly created column (salary). At the moment, forget about whether the recordsets should be combined or not and that there's no normalisation. The issue I have is that some columns will also have lookup values - let's say that the Dept column will at some point in the future have associated values which give the address and phone numbers of that department. The same could be true for the Salary column, looking up tax groupings etc. The number of columns in this big table should not become too high (a few hundred) but will be high enough to want the user to administer the lookup table structure and values through an admin panel rather than have to involve developers each time. **The question** is whether to use individual lookup tables for each column (value, description), or a combined lookup table which references the column (column, value, description). Normally I would opt for individual lookup tables, but here the application will need to create them automatically (e.g. lookup\_dept, lookup\_salary) and then add a new join into the master SQL statement. 
This would be done at the request of the user rather than when the column's added (to avoid hundreds of empty tables). The combined lookup table on the other hand would need to be joined multiple times onto the data table, selecting on the column name each time. Individual lookups seems to make sense to me but I may be barking up completely the wrong tree.
I would agree that individual tables is preferable. It is more scalable and better for query optimisation. Also, if in future the users want more columns on a particular lookup then you can add them. Yes, the application will have to create tables and constraints automatically: I wouldn't normally do this, but then this application is already altering existing tables and adding columns to them, which I wouldn't normally do either!
Ah, the "[One true lookup table](http://www.dbazine.com/ofinterest/oi-articles/celko22)" idea. One of the rare times I agree with Mr Celko. [Google search too](http://www.google.com/search?q=one+true+lookup+table) Individual tables every time. It's "correct" in the database sense. My reason (no normalisation pedants please): each row in a table stores one entity only. eg Fruit names, car makes, phone brands. To mix them is nonsense. I could have a phone brand called "Apple". Er... wait a minute...
Dynamic creation of new lookup tables based on values in main data table
[ "", "sql", "dynamic", "schema", "lookup", "" ]
I'm working through the Composite Application Guidance and often come across instantiation of an object of type interface e.g.: ``` IShell shell = new Shell(); ``` instead of type class: ``` Shell shell = new Shell(); ``` * What are the differences between these two? * Why is the first way used at all (since the second shell object can be used anywhere an IShell interface is specified, right?)
You might want to do that if the class has an explicit implementation of an interface method. Consider this example: ``` public interface ISomething { void Action(); } public interface ISomethingElse { void Action(); } public class Something : ISomething { public void Action() { } void ISomething.Action() { } } public class Something2 : ISomething, ISomethingElse { void ISomething.Action() { } void ISomethingElse.Action() { } } ``` If you want `ISomething.Action` to be called on `Something`, then you have to call it through an `ISomething` variable. Even in `Something2`, the `Action` method is hidden if you don't go through the interface. That said, you usually want to avoid having implementations like that. I doubt a framework class would force you into that, but that would be the scenario to declare it with the interface. **Update 1:** To clear it up a bit, some extra code on how to get to the methods: ``` Something some = new Something(); some.Action(); //calls the regular Action ISomething isome = some; isome.Action(); //calls the ISomething.Action ((ISomething)some).Action(); //again, calls ISomething.Action Something2 some2 = new Something2(); some2.Action(); //compile error ((ISomething)some2).Action(); //calls ISomething.Action ((ISomethingElse)some2).Action(); // calls ISomethingElse.Action ```
The obvious difference is that the first allows you to use shell as an IShell only, while the second allows you to use all the features of Shell, which happen to include those of IShell. Perhaps you could take the view of a maintainer. The first is saying what we need is an instance of something supporting IShell, and we can change it to some other object if we like. The second is saying we must have specifically a Shell object for some feature it provides.
what is the difference between IShell shell = new... and Shell shell = new
[ "", "c#", "interface", "" ]
In the following code: ``` public class StringCache { private readonly object lockobj = new object(); private readonly Dictionary<int, string> cache = new Dictionary<int, string>(); public string GetMemberInfo(int key) { if (cache.ContainsKey(key)) return cache[key]; lock (lockobj) { if (!cache.ContainsKey(key)) cache[key] = GetString(key); } return cache[key]; } private static string GetString(int key) { return "Not Important"; } } ``` 1) Is ContainsKey thread safe? IOW, what happens if that method is executing when another thread is adding something to the dictionary? 2) For the first return cache[key], is there any chance that it could return a garbled value? TIA, MB
The inherent thread safety of ContainsKey doesn't matter, since there is no synchronization between ContainsKey & cache[key]. For example: ``` if (cache.ContainsKey(key)) // Switch to another thread, which deletes the key. return cache[key]; ``` MSDN is pretty clear on this point: > To allow the collection to be accessed > by multiple threads for reading and > writing, you must implement your own > synchronization. For more info, JaredPar posted a great blog entry at <http://blogs.msdn.com/jaredpar/archive/2009/02/11/why-are-thread-safe-collections-so-hard.aspx> on thread-safe collections.
No, ContainsKey is not thread-safe if you're writing values while you're trying to read. Yes, there is a chance you could get back invalid results -- but you'll probably start seeing exceptions first. Take a look at the ReaderWriterLockSlim for locking in situations like this -- it's built to do this kind of stuff.
ContainsKey Thread Safe
[ "", "c#", "multithreading", "dictionary", "" ]
Is there any way to check if a URL scheme is currently registered on the phone... with javascript?
No, not from a webpage.
Not seamlessly. But there is a way similar to checking if a pop-up was blocked or not. When you try a URL scheme which is not supported, Safari will warn the user that it doesn't know what to do with it and stay on the same page. So if you gave your app-call some time to activate, say 300 ms, and then do something else to respond to the non-existence of the scheme. It's not the prettiest but it works: ``` function startIThrown(){ document.location = 'ithrown://restart'; setTimeout(function(){ if(confirm('You do not seem to have iThrown installed, do you want to go download it now?')){ document.location = 'http://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=293049283&mt=8&uo=6'; } }, 300); } <a href="#" onclick="startIThrown()">Restart iThrown</a> ```
Check if URL scheme is supported in javascript
[ "", "javascript", "iphone", "safari", "url-scheme", "" ]
I would like to know if I'm safe against SQL injection when I use something like that with PostgresSQL: ``` CREATE or REPLACE FUNCTION sp_list_name( VARCHAR ) RETURNS SETOF v_player AS ' DECLARE v_start_name ALIAS FOR $1; r_player v_player%ROWTYPE; v_temp VARCHAR; BEGIN v_temp := v_start_name || ''%''; FOR r_player IN SELECT first_name, last_name FROM v_player WHERE last_name like v_temp LOOP RETURN NEXT r_player; END LOOP; RETURN; END; ' LANGUAGE 'plpgsql' VOLATILE; ``` I want to use this function to list player's name beginning with a letter. ``` select * from sp_list_name( 'A' ); ``` gives me players with last name beginning with A. I tried to inject sql with ``` select * from sp_list_name( 'A; delete from t_player;--' ); select * from sp_list_name( '''; delete from t_player;--' ); ``` Am I safe ? Which case I could be injected ? Regards
In terms of your procedure you seem safe, as the variable in the SP won't be expanded into code, but you can still expose yourself if you don't use a parameterized query like "*`SELECT * FROM sp_list_name(?);`*" in your *application code*. Something like "*`SELECT * FROM sp_list_name('$start_name');`*" could be subverted by a user passing a start name of "**`');delete from t_player where last_name NOT IN ('`**". So use a parameterized query or sanity check your inputs in your program. **NB:** To others, please note that a variable in a stored procedure will **not** expand into code even if it contains a ' or ; (excluding passing it to *EXECUTE*, for which you would use *`quote_literal`*, not hand-rolled *`replace`* functions), so replacing ; or ' is totally unnecessary (in the stored procedure; the application using it is a different story, of course) and would prevent you from ever finding the "`tl;dr`" or "`O'Grady`" teams. **Leo Moore, Karl, LFSR Consulting**: `v_temp_name` in the stored procedure will *NOT* be expanded into code in the SP (no *EXECUTE*); the check would need to be done in the application, not the SP (or the OP could just use a parameterized query in their app code instead). What others are suggesting is similar to worrying about ``` my $bar = "foo; unlink('/etc/password');"; my $baz = $bar; ``` actually running the unlink in the absence of an eval.
Rule #1 for preventing SQL injection: sanitize all input coming from anyone/anything you cannot trust or have no control over. The problem itself does not lie within the database code, but in the application that is executing those statements.
Am I safe against SQL injection
[ "", "sql", "security", "postgresql", "sql-injection", "" ]
I'm using UnitTest++ for unit testing C++ code. In my tests, there's a group of tests I repeat several times. What I'd like is for a utility function to perform these tests. In short, I'd like to take this: ``` TEST( foo ) { Foo one; Foo two; // init one & two // lots of CHECK_CLOSE(one.bar, two.bar, 1e-5); in repeating cycles } TEST( bar ) { Foo one; Foo two; // init one & two // lots of CHECK_CLOSE(one.bar, two.bar, 1e-5); in repeating cycles } ``` And use this: ``` void blah( const Foo& one, const Foo& two ) { // lots of CHECK_CLOSE(one.bar, two.bar, 1e-5); } TEST( foo ) { Foo one; Foo two; // init one & two blah(one, two); } TEST( bar ) { Foo one; Foo two; // init one & two blah(one, two); } ``` This doesn't work due to UnitTest++'s macro manipulations. What is the best way to approach this problem? **edit**: A couple of thoughts which I can't check out right now. 1. If I use a fixture, with the utility function in the `struct`, will I be able to call UnitTest++ macros from within this function? 2. I can write a macro to perform the common tests. I don't really like this, but at least I'll stary dry...
Write a macro to do it. Don't let the "macros are evil" mantra hide this fact from you: Macros are designed to generate repetitive code.
I would recommend against doing this, because you lose a lot of information about which test failed: the wrapper function now encloses the functionality of two or more tests, so when it fails you have to trace back to see what the reason for the failure was.
How can you use utility functions with UnitTest++
[ "", "c++", "unit-testing", "" ]
I would like to be able to change the locale in my Swing application at runtime and have all the text elements on the screen update themselves with localized text from a ResourceBundle of the new locale. Can this be done without customizing swing components or creating UIDelegates for all components that handle rendering localized text? If no, then what is a good solution I can consider implementing?
1. You have a method that is used to change the app locale (and probably persist the new value) and another one to get localized strings. 2. Create an interface: ``` interface LocaleChangeListener { void onLocaleChange(); } ``` Implement it in UI components that need to be able to change locale at runtime, and set the new values in the overridden `onLocaleChange()`. 3. Now, keep a list of listeners that will be notified of the locale change by the first method.
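As a sketch of those steps (the `LocaleManager` class name and the `"messages"` bundle base name are assumptions for illustration, not Swing API):

```java
import java.util.*;

// The interface from the steps above; components implement it to be told
// when the application locale changes.
interface LocaleChangeListener {
    void onLocaleChange();
}

public class LocaleManager {
    private static final List<LocaleChangeListener> listeners = new ArrayList<>();
    private static Locale current = Locale.getDefault();

    public static void addListener(LocaleChangeListener l) { listeners.add(l); }

    public static Locale getLocale() { return current; }

    // First method: change the application locale, then notify every
    // registered component so it can re-read its strings.
    public static void setLocale(Locale locale) {
        current = locale;
        for (LocaleChangeListener l : listeners) {
            l.onLocaleChange();
        }
    }

    // Second method: look up a localized string for the current locale.
    // Assumes messages*.properties bundles exist on the classpath.
    public static String tr(String key) {
        return ResourceBundle.getBundle("messages", current).getString(key);
    }
}
```

A component such as a `JLabel` subclass would register itself via `addListener` and call `setText(LocaleManager.tr(key))` inside its `onLocaleChange()`.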
Use `ResourceBundle.getBundle(BUNDLE_NAME).getString(key);` to access the strings. When updating the default locale, e.g. via `Locale.setDefault(Locale.GERMAN);`, clear the ResourceBundle cache: `ResourceBundle.clearCache();`. The next call of `ResourceBundle.getBundle(BUNDLE_NAME).getString(key);` should then return the localized string for the chosen locale.
Changing locale at runtime in Swing
[ "", "java", "swing", "internationalization", "" ]
I would like to access a PHP file whose name has UTF-8 characters in it. The file does not have a BOM in it. It just contains an echo statement that displays a few unicode characters. Accessing the PHP page from the browser (FireFox 3.0.8, IE7) results in HTTP error 500. There are two entries in the Apache log (file is /க.php; the letter க is a composite one and corresponds to the characters \xe0\xae\x95 in the log below): [Sat Apr 04 09:30:25 2009] [error] [client 127.0.0.1] PHP Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0 [Sat Apr 04 09:30:25 2009] [error] [client 127.0.0.1] PHP Fatal error: Unknown: Failed opening required 'D:/va/ROOT/\xe0\xae\x95.php' (include\_path='.;C:\php5\pear') in Unknown on line 0 The same page works when file and dir names are in English. In the same setup, there is no problem using SSI for these pages. *EDIT* Removed info on url rewriting since it does not seem to be a factor. When mod\_rewrite is removed, the PHP file still does not work. Works if the file is renamed to a non-UTF name. However, shtml works even with UTF characters in file and/or path name.
I have come across the same problem and done some research and conclude the following. This is for php5 on Windows; it is probably true on other platforms but I haven't checked. 1. ALL php file system functions (dir, is\_dir, is\_file, file, filemtime, filesize, file\_exists etc) only accept and return file names in ISO-8859-1, irrespective of the default\_charset set in the program or ini files. 2. Where a filename contains a unicode character dir->read will return it as the corresponding ISO-8859-1 character if there is one, otherwise it will substitute a question mark. 3. When referencing a file, e.g. in is\_file or file, if you pass in a UTF-8 file name the file will not be found when the name contains any two-byte or more characters. However, is\_file(utf8\_decode($filename)) etc will work providing the UTF-8 character is representable in ISO-8859-1. In other words, PHP5 is not capable of addressing files with multi-byte characters in their names at all. If a UTF-8 URL with multibyte characters is requested and this corresponds directly to a file, PHP won't be able to open the file because it cannot address it. If you simply want pretty URLs in your language the suggestion of using mod\_rewrite seems like a good one. But if you are storing and retrieving files uploaded and downloaded by users, this problem has to be resolved. One way is to use an arbitrary (non UTF-8) file name, such as an incrementing number, on the server and index the files in a database or XML file or some such. Another way is to store the files in the database itself as a BLOB. Another way (which is perhaps easier to see what is going on, and not subject to problems if your index gets corrupted) is to encode the filenames yourself - a good technique is to urlencode (sic) all your incoming filenames when storing on the server disk and urldecode them before setting the filename in the mime header for the download. 
All even vaguely unusual characters (except %) are then encoded as %nn and so any problems with spaces in file names, cross platform support and pattern matching are largely avoided.
* I know for a fact PHP itself *can* work with Unicode URLs, because I have tried using Unicode page names in MediaWiki (PHP-based, also runs WikiPedia) and it does work. Eg, URLs such as /index.php/Page\_name©. So PHP can handle it. But it may be a problem with Apache finding a file where the source file has a UTF-8 name. * The PHP.ini setting for character encoding should not be affecting this; it is the job of the web server to find a specific resource and then call PHP once it's determined to be a PHP file. It will mean that the web server, and the underlying file system itself, have to be able to deal with UTF-8 filenames. * Does it work without the mod\_rewrite rule? Ie, if you disable the rewrite engine with RewriteEngine off and then request va.in/utf\_dir/utf\_file.php? If so, then it may be a mod\_rewrite config issue or a problem with the rule. * Unicode in URLs may not be properly supported in some browsers when you just type an address in, such as older browsers. Older browsers may skip the UTF-8 encoding step. This should not prevent it from working if you are following a link on a page, where that page is UTF-8 encoded, though.
Can a PHP file name (or a dir in its full path) have UTF-8 characters?
[ "", "php", "apache", "unicode", "utf-8", "url-rewriting", "" ]
I am currently using an image manipulation script to do some work on uploaded images and I'm running into a problem in my dev environment. This same code works on my production server. The error is: ``` PHP Warning: imagecreatefromjpeg(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /path/to/script.php on line 75 ``` The line of code on 75 is: ``` $this->img = imagecreatefromjpeg(PHPTHUMB."?src=/".$this->image_path); ``` which creates an image that is loaded from another made by phpThumb, which is used for further manipulation. Any ideas on how to solve this? Can anyone shed some light on what the error means? Thank you, **Edit:** Just as a further bit of insight, if i visit `PHPTHUMB . "?src=/" . $this->image_path` in my browser, the image loads fine User agent in `php.ini` did not solve the problem as [indicated here](http://www.mydigitallife.info/2006/04/21/php-5-unable-to-open-http-request-stream-with-fopen-or-fsockopen-functions/) EDIT (SOLUTION): I had to add the INTERNAL 192.168.204.XXX IP into the hosts file, so that <http://dev.mysite.com> resolved correctly. Trying both 127.0.0.1 and the external IP yielded no result, but the internal works perfectly. Thanks for everyone's efforts,
I'm totally not sure about the cause of that error, but this is my best guess. There is a particular PHP Setting which enables you to treat URLs as files. Normally imagecreatefromjpeg accepts a filename, I think. Since you are passing a URL, you need to make sure the particular setting is enabled. I believe you can find more info about it here: <https://www.php.net/file> > filename > > ``` > Path to the file. > Tip > ``` > > A URL can be used as a filename with this function if the fopen > wrappers have been enabled. See > fopen() for more details on how to > specify the filename and List of > Supported Protocols/Wrappers for a > list of supported URL protocols. Fopen wrappers: <https://www.php.net/manual/en/filesystem.configuration.php#ini.allow-url-fopen> Your dev environment may not be setup with this, but your production environment could be. Edit This page lists your error: <http://bugs.php.net/bug.php?id=11058> and mentions that a possible fix is by modifying your hosts file to explicitly include the ip/dns of the server you're accessing. > i tryed to reconfigure my named > configuration, without any result, to > fix this. the problem was solved by > adding the following line to my > /etc/hosts: > > 194.97.55.147 mx.freenet.de Maybe you can try this?
Did you also check that allow\_url\_fopen is set to On in php.ini? I saw that they mentioned it in the article you referred to in your post. **EDIT:** Also, what OS and what PHP version are you running? Just read this on [php.net](http://se.php.net/manual/en/filesystem.configuration.php#ini.allow-url-fopen): "On Windows versions prior to PHP 4.3.0, the following functions do not support remote file accessing: include(), include\_once(), require(), require\_once() and the imagecreatefromXXX functions in the GD Functions extension." **EDIT2:** I just read what was written in the comments; your allow\_url\_fopen is set to On, sorry about that. But OS, PHP version and phpinfo() would help in finding the problem.
imagecreatefromjpeg() error - Why is my image manipulation script failing?
[ "", "php", "image-processing", "image-manipulation", "" ]
I am looking for a way to allow users to download a file from my site using ajax. The reason why i need to do this is because the files are quite big (50 megs normally). So when the user clicks the download button it actually looks as if the page is not responding because it has to stream the file from my Amazon S3 to the user. So i need to notify the user that the download is in progress and maybe even give them a progress bar. I have found the following [article](http://encosia.com/2007/02/23/ajax-file-downloads-and-iframes/) but i am not really keen on injecting iFrames into my webpage. Thanks in advance for any help.
I have decided to use the iframe method, as there is no other way that works seamlessly enough to make it worth the time.
I don't really see how you can allow a user to download a file using ajax. There won't be any way to get the file to their filesystem from javascript. What's wrong with targeting the file into an iframe or a new window and letting users manage the download like they are used to using their browser?
Allow User to Download File using Ajax
[ "", "c#", "asp.net", "ajax", "" ]
I'm trying to create a script to compile an Windows Forms C# 2.0 project from the command line (I know, I know.. I'm reinventing the wheel.. again.. but if somebody knows the answer, I'd appreciate it). The project is a standard Windows Forms project that has some resources and references a couple external assemblies. Here is a list of the files: ``` Program.cs // no need to expand on this on :) frmMain.cs // this is a typical C# windows forms file frmMain.designer.cs // .. and the designer code frmMain.resx // .. and the resource file MyClasses.cs // this contains a couple classes Properties\AssemblyInfo.cs // the Properties folder -- again pretty standard Properties\Resources.Designer.cs Properties\Resources.resz Properties\Settings.Designer.cs Properties\Settings.settings References\AnAssembly.dll // an assmebly that I reference from this application ``` So far I've identified the following programs/tools that I would need: ``` csc.exe // the C# compiler al.exe // the assembly linker resgen.exe // the resource compiler ``` And this is my script so far: ``` @echo off set OUT=Out set AL=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\al.exe set RESGEN="C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\resgen.exe" set COMPILER=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc.exe echo. echo Compiler: %COMPILER% echo. if "%1"=="/help" goto help :start echo Starting... set REFERENCES=.\References\AReferencedll set SRCFILES=Program.cs frmMain.cs frmMain.designer.cs MyClasses.cs Properties\AssemblyInfo.cs Properties\Resources.Designer.cs Properties\Settings.Designer.cs del /Q %OUT%\* %RESGEN% /compile frmMain.resx,%OUT%\frmMain.resources cd Properties %RESGEN% /compile Resources.resx,..\%OUT%\Resources.resources cd .. %COMPILER% /target:module /out:%OUT%\app.module %SRCFILES% /reference:%REFERENCES% %AL% %OUT%\app.module /embed:%OUT%\frmMain.resources /target:winexe /out:%OUT%\app.exe /main:App.Program.Main goto done :error echo Ooooops! :done echo Done!! 
``` In the end, no matter how I spin it I get different errors from the linker, or the final executable will just not run - it crashes. Please help (MSDN didn't help too much..)!
I like to cheat for these kinds of things. Take a working project and run the following command on it ``` msbuild Project.csproj /t:rebuild /clp:ShowCommandLine ``` The output of that will show you the command msbuild uses to compile the project, which you can then take and modify as you like.
Is there a reason why you are not using msbuild? It can compile any visual studio project/solution in one command... ``` msbuild.exe <solution filename> ```
Command Line Compiling a Win Forms C# Application
[ "", "c#", ".net", "compiler-construction", ".net-2.0", "cmd", "" ]
(I understand that someone else asked a similar question and it was closed as 'argumentative', but I'm really interested in understanding the arguments around this.) I know JavaScript really well. I've been writing it professionally for years. I've internalized a lot of the cross-browser incompatibilities and sketchiness, know DOM manipulation like the back of my hand, have worked with some of the best web developers in the industry & picked up a lot of their mojo. I've been checking out jQuery. I understand the point of a javascript library (how many times have I written animation, getElementsByClass, and hide/show functions?). But to be honest, it seems like a waste of time to learn an entirely new syntax that isn't less complex. It seems like I'd be bashing my head against a wall to learn an entirely new interface to the same old JavaScript. I'm not technically an engineer, so maybe I'm missing something. Could someone spell out the tradeoffs of jQuery? Is it really faster to learn and understand jQuery syntax than to just learn JavaScript?
There are a few big benefits to using a framework over homegrown/handwritten code: * **Abstractions**. I'm sure you're very proud of the fact that you've slung enough JS to be able to write animations from scratch. You should be! However, abstracting common functionality away from yourself is actually very liberating. You just call the method and know the innards will be executed, and executed well. Even if you're very fast at writing the low-level stuff yourself, that's still time you've spent on that instead of solving today's problems. * **Common language**. Using a common framework is like speaking a common language. You can collaborate with other developers very easily, and each can pick up where others left off without friction. (Compared to stepping into an application which uses a homegrown library for the things jQuery can do.) * **Experts**. The people working on jQuery are JavaScript *gods*. I am really, really good at JavaScript, and you probably are too, but there's a big difference between normal good and jQuery good. A team of insanely good people are constantly poring over a small set of common functionality - tuning it, tweaking it, enhancing it to make it the best it can possibly be. That represents a huge number of man-hours a single person like you or me simply cannot reproduce, no matter how good we are. And if you *are* as good as the jQuery guys, you can only benefit by combining your talent with theirs and contributing to the jQuery codebase. It's a melting pot of insane talent.
> I'm not technically an engineer, so maybe I'm missing something. Could someone spell out the tradeoffs of jQuery? Is it really faster to learn and understand jQuery syntax than to just learn JavaScript? jQuery is more than "just another interface" to Javascript. It allows you to express yourself in ways that are more compact and succincter than the corresponding Javascript implementation. At the same time, it's clearer and much more powerful. Specifically, the benefits you get include: * **Expressiveness.** jQuery is essentially a DSL for DOM manipulation and querying. This specificity is a major source of its utility and effectiveness. * **Cross-browser.** To a very large extent, jQuery is "write once, run anywhere". Even in 2009, this is still a surprisingly rare feat for web-based platforms. It's gratifying and relieving to know that you won't have to waste time debugging obscure problems on IE6 (well, most of the time). * **Highly complete documentation.** As a developer, I prize APIs and frameworks that have taken the time to spell out what all the moving pieces are supposed to be doing. Nothing is more encouraging than knowing that (1) things are stable enough that writing documentation isn't an attempt to hit a moving target and (2) things are useful enough that they've attracted enough people to flesh out the documentation fully. In sum, the difference between Javascript and jQuery is analogous to the difference between assembly and higher-order programming languages. Although they're both technically both "an interface to machine language" and can both do similar things, the latter is considered far more powerful because of how easy it is to represent things and develop rapidly.
Why would I want to use jQuery?
[ "", "javascript", "jquery", "" ]