*note, I am not asking for answers. I simply am curious regarding why things work* I need to implement a priority queue for a printer simulator for a class assignment. After looking at examples on the internet, I noticed that operator< was being overloaded in order to arrange the priority queue correctly. [code in question: java2s priority queue example](http://www.java2s.com/Code/Cpp/Data-Structure/Storingclassobjectsinapriorityqueue.htm) Why does operator< need to be overloaded? Where is '<' even used to make the comparison? Does implementing the operator overload change the way the STL queue works? This implementation doesn't seem intuitive to me at all: why isn't operator> being overloaded instead? How is one supposed to learn that operator< needs to be overloaded in order for the priority\_queue to work correctly?
STL containers use operator< by default to order the contents, for those containers that order the contents. You can override this by passing in a comparison functor to the constructor of the container, which allows you to decouple the sorting/ordering from the container object. Operator> could have been chosen, but one had to be picked and that was operator<, and is then used everywhere for consistency.
> Why does operator< need to be overloaded? The `Compare` function object in [`priority_queue<T, Sequence, Compare>`](http://www.sgi.com/tech/stl/priority_queue.html): > Compare induces a strict weak ordering, as defined in the LessThan Comparable requirements, on its argument type. [LessThanComparable documentation](http://www.sgi.com/tech/stl/LessThanComparable.html): > Notes > > [1](http://www.sgi.com/tech/stl/priority_queue.html) Only operator< is fundamental; the other inequality operators are essentially syntactic sugar. > > [2](http://www.sgi.com/tech/stl/LessThanComparable.html) Antisymmetry is a theorem, not an axiom: it follows from irreflexivity and transitivity. > > [3] Because of irreflexivity and transitivity, operator< always satisfies the definition of a partial ordering. The definition of a strict weak ordering is stricter, and the definition of a total ordering is stricter still. > > Where is '<' even used to make the comparison? `void push(const value_type& x)` which inserts a value in the queue. > Does implementing the operator overload change the way the queue STL works? Yes, of course. If you swap the order of elements in the comparison, your sort goes the other way.
Why does operator< need to be overloaded when implementing class-based priority queues in C++?
[ "", "c++", "operator-overloading", "priority-queue", "" ]
I have a JavaScript object like the following: ``` var p = { "p1": "value1", "p2": "value2", "p3": "value3" }; ``` How do I loop through all of `p`'s elements (`p1`, `p2`, `p3`...) and get their keys and values?
You can use the [`for-in`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in) loop as shown by others. However, you also have to make sure that the key you get is an actual property of the object, and doesn't come from the prototype. **Here is the snippet:** ``` var p = { "p1": "value1", "p2": "value2", "p3": "value3" }; for (var key in p) { if (p.hasOwnProperty(key)) { console.log(key + " -> " + p[key]); } } ``` **For-of with Object.keys() alternative:** ``` var p = { 0: "value1", "b": "value2", key: "value3" }; for (var key of Object.keys(p)) { console.log(key + " -> " + p[key]) } ``` Notice the use of [`for-of`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...of) instead of [`for-in`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in): `for-in` over the array returned by `Object.keys()` would iterate the array's indices, so `p[key]` would come back undefined for the named properties. [`Object.keys()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys) restricts iteration to the object's own properties, leaving out the whole prototype chain. **Using the new [`Object.entries()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/entries) method:** **Note:** This method is not supported natively by Internet Explorer. You may consider using a polyfill for older browsers. ``` const p = { "p1": "value1", "p2": "value2", "p3": "value3" }; for (const [key, value] of Object.entries(p)) { console.log(`${key}: ${value}`); } ```
Under ECMAScript 5, you can combine [`Object.keys()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys) and [`Array.prototype.forEach()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach): ``` var obj = { first: "John", last: "Doe" }; Object.keys(obj).forEach(function(key) { console.log(key, obj[key]); }); ``` ECMAScript 6 adds [`for...of`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...of): ``` for (const key of Object.keys(obj)) { console.log(key, obj[key]); } ``` ECMAScript 8 adds [`Object.entries()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/entries) which avoids having to look up each value in the original object: ``` Object.entries(obj).forEach( ([key, value]) => console.log(key, value) ); ``` You can combine `for...of`, destructuring, and `Object.entries`: ``` for (const [key, value] of Object.entries(obj)) { console.log(key, value); } ``` Both `Object.keys()` and `Object.entries()` iterate properties in the same order as a `for...in` loop *but ignore the prototype chain*. Only the object's own enumerable properties are iterated.
How do I loop through or enumerate a JavaScript object?
[ "", "javascript", "loops", "for-loop", "each", "" ]
I have come across the function below and I am wondering whether this is the right way of using try/catch error handling. ``` public function execute() { $lbReturn = false; $lsQuery = $this->msLastQuery; try { $lrResource = mysql_query($lsQuery); if(!$lrResource) { throw new MysqlException("Unable to execute query: ".$lsQuery); } else { $this->mrQueryResource = $lrResource; $lbReturn = true; } } catch(MysqlException $errorMsg) { ErrorHandler::handleException($errorMsg); } return $lbReturn; } ```
Code-wise it is correct and works. However, the power of try/catch shows when an exception is thrown from deep down in one of the functions you're calling, because execution stops mid-function and jumps all the way back to the catch block. In this case there are no deep-down exceptions, therefore I would write it like this (assuming there is a function `handleErrorMessage` in the ErrorHandler): ``` public function execute() { $lsQuery = $this->msLastQuery; $lrResource = mysql_query($lsQuery); if(!$lrResource) { ErrorHandler::handleErrorMessage("Unable to execute query: ".$lsQuery); return false; } $this->mrQueryResource = $lrResource; return true; } ``` Which I find more readable.
No. Throwing an exception in this case is simply a GOTO, but with a (slightly) prettier face.
Correct way of using throw/try/catch error handling
[ "", "php", "exception", "error-handling", "" ]
I have the following class hierarchy: ``` public abstract class BaseData { //some properties } public class CoData : BaseData { //some properties } ``` I am working with a method that requires the return type to be `List<BaseData>`. In the method, I have access to `List<CoData>` ``` public List<BaseData> Save() { List<CoData> listCoData = GetData(); return listCoData; } ``` If I understand correctly, I can upcast from a `CoData` to a `BaseData`. But, when I have a list, it errors out even if I explicitly try to typecast. Error: ``` Error 118 Cannot implicitly convert type 'System.Collections.Generic.List<CoData>' to 'System.Collections.Generic.List<BaseData>' ``` **EDIT:** mquander's conversion approach seems to work for me in 3.0. Is downcasting done the same way as well, i.e., can I do this - `List<CoData> listCoData = listBaseData.Cast<BaseData>().ToList();`?
Yes; welcome to variance. Ultimately, it **isn't** a list of `BaseData` - for example, if you had another subclass, a `List<BaseData>` would (at compile time) let you `.Add` it... but the runtime type wouldn't let you. The compiler is stopping you making a mistake. In some scenarios, generics can help here... I discuss this at the end of [this blog entry](http://marcgravell.blogspot.com/2009/02/what-c-40-covariance-doesn-do.html). Note that .NET 4.0 variance doesn't apply to lists.
This is called "covariance" with respect to the parameter of your collection, and C# doesn't support this kind of covariance right now. In C# 4.0, user-defined types and some built-in types like `IEnumerable<T>` will support this. In the meantime, you can work around it by creating a new enumeration with explicitly cast members (you might want to think about changing the return types of your stuff here to `IEnumerable<T>` so you don't have to actually cook up new collections all the time:) ``` public List<BaseData> Save() { List<CoData> listCoData = GetData(); return listCoData.Cast<BaseData>().ToList(); } ``` Depending on how GetData works, you might also consider a structure like this: ``` public List<T> Save<T>() where T : BaseData { return listCoData = GetData<T>(); } ``` **EDIT**: Actually, as Marc and others undoubtedly will point out, `List<T>` can't be covariant like this because you can add members in ways that would break type safety if it were. However, if you sent back `IEnumerable<T>`s instead, you could use covariance here. **EDIT 2**: In response to your additional question: > Can I do this? > `List<CoData> listCoData = > listBaseData.Cast<BaseData>().ToList();` No. `listBaseData.Cast<BaseData>().ToList();` returns an object of type `List<BaseData>` which can't be cast directly to a `List<CoData>`. That's why we had to go to this effort in the first place.
Upcasting and generic lists
[ "", "c#", ".net", "" ]
What in everyone's opinion is the best representation for a time-bound hierarchy in SQL? What I mean by this is: - On any given date you have a normal tree hierarchy - This hierarchy can change from day to day - Each child still only has one parent on any given date Day 1... ``` Business | |-Joe | |-Happy | |-Sneezy | |-Doc(*) | |-Moe |-Bashfull |-Sleepy ``` Day 2... ``` Business | |-Joe | |-Happy | |-Sneezy | |-Moe |-Doc(*) |-Bashfull |-Sleepy ``` At any time, a child can join the hierarchy for the first time, or leave the hierarchy completely. (For example, new employees, and retired employees.) The main considerations: * Updating the hierarchy * Viewing the whole hierarchy across a date range * Reporting on whole sub-trees within the hierarchy * Reporting on whole sub-trees across a date range I know how I do it at present, but am intrigued as to how other people may do it :) **EDIT** I naively assumed a few considerations so will be more explicit... * Each 'team' or 'person' will have a unique ID in a dimension table elsewhere * Other fact tables will use those IDs (storing performance metrics, for example) * The structure needs to facilitate historical reporting across date ranges * Use of ETL or triggers to maintain alternative structures is an option The generic nature is most important (forming just one part of a generic relational model), combined with ease of use for driving reports (for any part of the tree across any range of dates) and the ability to be updated reliably.
There are several different books of relevance here - one set is for 'temporal databases', and the other for 'hierarchical structures in RDBMS'. * Snodgrass "[Developing Time-Oriented Applications in SQL](http://www.cs.arizona.edu/%7Erts/publications.html)" (PDF available online at URL) * Date, Darwen and Lorentzos "[Temporal Data and the Relational Model](https://rads.stackoverflow.com/amzn/click/com/B001E4587Q)" * Celko "[Joe Celko's Trees and Hierarchies in SQL for Smarties](https://rads.stackoverflow.com/amzn/click/com/1558609202)" The tricky parts of your question, it seems to me, are: > * Viewing the whole hierarchy across a date range > * Reporting on whole sub-trees across a date range The other items are, if not straight-forward, then manageable using the techniques outlined in the books, and along the lines suggested in other answers. Part of the problem is understanding what those two bullet points mean. In one sense, they are 'the same'; the 'whole hierarchy' is just a special case of 'whole sub-trees'. But the deeper question is 'how do you want to demonstrate - visualize, represent - the changes in the hierarchy over time?' Are you seeking to compare the states at the start and end times, or are you seeking to see the intermediate changes too? How do you want to represent the moves of an individual within a hierarchy? More questions than answers - but I hope the pointers are some help.
A couple of flat tables can work here. For each row, we need columns ID, Name, ParentID, and InactivatedDatetime (which defaults to null). Set the datetime for the old Doc belonging to Joe indicating that that record is no longer valid and move it off to an archive table (for cleanliness), and then create a new row (a near copy of the original row) for a new Doc with Moe's ID as the ParentID. The drawback with this approach is that the person being moved must get a new ID, which may not be convenient.
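A minimal sketch of the flat-table layout described above (all table and column names are assumed):

```sql
-- Live rows; a NULL InactivatedDatetime means the row is current.
CREATE TABLE Node (
    ID INTEGER PRIMARY KEY,
    Name VARCHAR(100) NOT NULL,
    ParentID INTEGER REFERENCES Node(ID),
    InactivatedDatetime DATETIME NULL
);

-- Superseded rows moved here "for cleanliness", as suggested above.
CREATE TABLE NodeArchive (
    ID INTEGER NOT NULL,
    Name VARCHAR(100) NOT NULL,
    ParentID INTEGER,
    InactivatedDatetime DATETIME NOT NULL
);

-- The current tree: every row that has not been inactivated.
SELECT ID, Name, ParentID
FROM Node
WHERE InactivatedDatetime IS NULL;
```

Reporting across a date range then means querying `Node` and `NodeArchive` together, which is where this layout starts to strain — one reason the temporal-database techniques in the other answer may be worth the extra machinery.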
Best Relational Database Representation of Time-Bound Hierarchies
[ "", "sql", "database", "reporting", "hierarchy", "" ]
Hi, I've run into some problems with the Hibernate 2nd level cache. As the cache provider I use ehcache. Part of the config from persistence.xml: ``` <property name="hibernate.cache.use_second_level_cache" value="true"/> <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.EhCacheProvider" /> <property name="hibernate.cache.provider_configuration_file_resource_path" value="/ehcache.xml" /> ``` I configure my entities using annotations like so: ``` @Cache(region = "Kierunek", usage = CacheConcurrencyStrategy.READ_WRITE) public class Kierunek implements Serializable { ``` The imports for those annotations are: `import org.hibernate.annotations.Cache; import org.hibernate.annotations.CacheConcurrencyStrategy;` My ehcache.xml: ``` <diskStore path="java.io.tmpdir" /> <defaultCache maxElementsInMemory="10000" eternal="false" timeToIdleSeconds="120" timeToLiveSeconds="120" overflowToDisk="true" diskSpoolBufferSizeMB="30" maxElementsOnDisk="10000000" diskPersistent="false" diskExpiryThreadIntervalSeconds="120" memoryStoreEvictionPolicy="LRU" /> <cache name="Kierunek" maxElementsInMemory="1000" eternal="true" overflowToDisk="false" memoryStoreEvictionPolicy="LRU" /> ``` Any idea why I get the following error? ``` WARNING: Could not find a specific ehcache configuration for cache named [persistence.unit:unitName=pz2EAR.ear/pz2EJB.jar#pz2EJB.Kierunek]; using defaults. 19:52:57,313 ERROR [AbstractKernelController] Error installing to Start: name=persistence.unit:unitName=pz2EAR.ear/pz2EJB.jar#pz2EJB state=Create java.lang.IllegalArgumentException: Cache name cannot contain '/' characters. ``` The solution is to add another property to persistence.xml: ``` <property name="hibernate.cache.region_prefix" value=""/> ``` and that removes the faulty prefix. Big thx ruslan!
IMHO, you get the generated region name for your class. This generated name is "persistence.unit:unitName=pz2EAR.ear/pz2EJB.jar#pz2EJB.pl.bdsdev.seps.encje.Kierunek", and it's not defined in your ehcache.xml configuration. Also, it's looking for that predefined name, so it can't use the default region. As an option to solve this problem you can use the @Cache annotation properties to predefine some region name, like ``` @Cache(region = "Kierunek", usage = CacheConcurrencyStrategy.READ_WRITE) public class Kierunek implements Serializable { // .... } ``` And in ehcache.xml: ``` <cache name="Kierunek" maxElementsInMemory="1000" eternal="true" overflowToDisk="false" memoryStoreEvictionPolicy="LRU" /> ```
Hibernate adds a prefix to cache names based on the appname or on the value of the property `hibernate.cache.region_prefix`. If you set this property to "" (an empty string) then you get regions named exactly like the names in your Hibernate config.
Hibernate 2nd level cache
[ "", "java", "hibernate", "annotations", "ehcache", "" ]
I have a Google map and street view based on provided lat/lng coordinates, zoom, yaw and pitch. I need to invoke some JavaScript to update a hidden field for each of these values whenever any of the details change from their default or when a button is clicked. So whenever the map/street view is zoomed in, panned, tilted etc. it outputs the new details. How do I call the functions getPOV(), yawchanged(yaw:Number), pitchchanged(pitch:Number), and zoomchanged(zoom:Number) whenever the Street View is changed (similar to moveend for Maps)?
Not sure of the best way to compress this but this works to get the changed details: ``` GEvent.addListener(myPano, 'initialized', function(pano) { alert("newlng: " + pano.latlng.lng() + ", newlat: " + pano.latlng.lat()); }); GEvent.addListener(myPano, 'yawchanged', function(newyaw){ alert("yawchanged: " + newyaw); }); GEvent.addListener(myPano, 'pitchchanged', function(newpitch) { alert("pitchchanged: " + newpitch); }); GEvent.addListener(myPano, 'zoomchanged', function(newzoom) { alert("zoomchanged: " + newzoom); }); ```
I generally have found that "moveend" is the best event to use as a hook to get the state of the map when a user changes it. I will need to look up how to get the lat/lng, zoom, yaw, and pitch from the map instance when I have more time this afternoon. ``` // map is the instance of your GMap2 GEvent.addListener(map, 'moveend', function() { var center = map.getCenter(); var zoom = map.getZoom(); alert([center.lat(), center.lng(), zoom].join(',')); }); ```
Getting lat/lng/zoom/yaw/pitch from Google Street View
[ "", "javascript", "google-maps", "google-street-view", "" ]
> **Possible Duplicate:** > [Best way to determine if two path reference to same file in C/C++](https://stackoverflow.com/questions/562701/best-way-to-determine-if-two-path-reference-to-same-file-in-c-c) Given two file path strings with potentially different casing and slashes (`'\'` vs `'/'`), is there a quick way (that does not involve writing my own function) to normalize both paths to the same form, or at least to test them for equivalence? I'm restricted to WinAPI and standard C++. All files are local.
Depending on whether the paths could be relative, or contain "..", or junction points, or UNC paths this may be more difficult than you think. The best way might be to use the *GetFileInformationByHandle()* function as in [this answer](https://stackoverflow.com/questions/562701/best-way-to-determine-if-two-path-reference-to-same-file-in-c-c/562773#562773). **Edit:** I agree with the comment by RBerteig that this may become hard to impossible to do if the paths are not pointing to a local file. Any comment on how to safely handle this case would be greatly appreciated.
May I suggest [PathCanonicalize](http://msdn.microsoft.com/en-us/library/bb773569(VS.85).aspx)?
Normalize file path with WinAPI
[ "", "c++", "c", "windows", "winapi", "path", "" ]
I'm moving through the atmosphere with Microsoft Virtual Earth 3D and I can descend smoothly, but I don't know the math to ascend smoothly. I'm descending like this: ``` for(int curAlt = startAlt; curAlt > endAlt; curAlt--){ //do something curAlt -= curAlt/150; } ``` This works by decreasing the size of the jump the closer I get to the earth (lower altitude). I need a solution that would do similar, just in reverse, while still keeping the smaller jumps at the lower altitude. How can I do this? Or is what I am doing unacceptable and should be done differently (say with logarithms)?
An even better solution might be to use a function like the [Logistic function](http://en.wikipedia.org/wiki/Logistic_function). ``` Double minAlt = 0.0; Double maxAlt = 500000.0; Int32 numberSteps = 1000; Double boundary = +6.0; for (Int32 step = 0; step < numberSteps; step++) { Double t = -boundary + 2.0 * boundary * step / (numberSteps - 1); Double correction = 1.0 / (1.0 + Math.Exp(Math.Abs(boundary))); Double value = 1.0 / (1.0 + Math.Exp(-t)); Double correctedValue = (value - correction) / (1.0 - 2.0 * correction); Double curAlt = correctedValue * (maxAlt - minAlt) + minAlt; } ``` Because the current altitude is explicitly calculated you do not have to rely on an iterative calculation introducing all sorts of precision-related errors. See the sample code for how to tune the function shape. --- Here is a sample console application that displays the function. You can play a bit with the parameters to get a feeling for the behavior. ``` using System; namespace LogisticFunction { class Program { static void Main(string[] args) { Double minAlt = 5.0; Double maxAlt = 95.0; Int32 numberSteps = 60; // Keep maxAlt and numberSteps small if you don't want a giant console window. Console.SetWindowSize((Int32)maxAlt + 12, numberSteps + 1); // Positive values produce ascending functions. // Negative values produce descending functions. // Values with smaller magnitude produce more linear functions. // Values with larger magnitude produce more step like functions. // Zero causes an error.
// Try for example +1.0, +6.0, +20.0 and -1.0, -6.0, -20.0 Double boundary = +6.0; for (Int32 step = 0; step < numberSteps; step++) { Double t = -boundary + 2.0 * boundary * step / (numberSteps - 1); Double correction = 1.0 / (1.0 + Math.Exp(Math.Abs(boundary))); Double value = 1.0 / (1.0 + Math.Exp(-t)); Double correctedValue = (value - correction) / (1.0 - 2.0 * correction); Double curAlt = correctedValue * (maxAlt - minAlt) + minAlt; Console.WriteLine(String.Format("{0, 10:N4} {1}", curAlt, new String('#', (Int32)Math.Round(curAlt)))); } Console.ReadLine(); } } } ```
By the way, you should really make the ascension time dependent (framerate aware). All the answers here depend on the code being called at a specific interval, which it is not. If some process kicks in, if Virtual Earth comes under stress somehow, if you minimize Virtual Earth, or if something else affects the performance of Virtual Earth, the movement will be anything but smooth. Even if 'nothing' happens to Virtual Earth, sometimes your 3D card will stall, which means you will potentially get a jump every once in a while. In particular if the user has VSync turned off you will get some really nasty things cropping up: * On slow machines the ascension will take forever (even with VSync on). * On fast machines it will be so fast you might not even notice it. In your class: ``` private int lastTime; ``` In your loop/event: ``` if(lastTime == 0) { lastTime = Environment.TickCount; return; } int curTime = Environment.TickCount; // store this baby. int timeDiff = curTime - lastTime; if(timeDiff == 0) return; curAlt += (maxAlt - curAlt) * timeDiff / (150000); // TickCount reports // time in milliseconds lastTime = curTime; ``` If you want to get fancy you could plug in the code from the DX SDK. Environment.TickCount has a resolution of about 15ms (the reason I check for timeDiff being zero, because it easily could be). The managed DX SDK sample framework has a class called DxTimer (of sorts) that has a better resolution. [There is an article that uses the same API](http://www.kalme.de/index.php?option=com_content&task=view&id=18&Itemid=26&limit=1&limitstart=1 "Article").
Smooth movement to ascend through the atmosphere
[ "", "c#", "math", ".net-3.5", "virtual-earth", "" ]
I have a function which shall return a char\*. Since I have to concatenate some strings, I wrote the following line: ``` std::string other_text; // ... func(("text" + other_text).c_str()); ``` I know that I could avoid the question by naming the string I want to use. I just want to take the chance to ask a more general question: is it safe to call methods of temporary variables? Is it standard compliant?
It is safe to call methods of temporary variables, but not safe to return a char\* of a temporary variable for later use. This char\* points to a buffer that will be freed soon. Once it is freed you will have a pointer to an invalid region in memory. Instead please return an std::string object.
You are allowed to call methods on temporaries, however you have to be careful about object lifetimes -- in particular, if you have a function that returns `c_str()` called on a temporary `std::string`, that string object will be destroyed when the function returns. Your code above suffers from this problem.
Is it safe to call temporary object's methods?
[ "", "c++", "string", "stl", "g++", "" ]
I'm currently working on an RMI client that will talk to an RMI server (developed by a different division of the company I work for). The other team owns the interface, but IMO it's overly complex, with many different types being passed backwards and forwards, as well as an unnecessarily (IMO) complex exception hierarchy. I've expressed concern numerous times that this sort of unnecessary complexity is a sure-fire source of problems later on when we come to integrate, but I'm not getting much traction. IMO it will lead to an unnecessarily large amount of code sharing, plus every single class we share is an extra set of versioning requirements that needs to be watched. Does anyone know of any resources/arguments that I can use to bolster my argument? Alternatively, can anyone convince me that I'm barking up the wrong tree?
First of all, I would say that the problem you described pertains not only to RMI but to any kind of interface of a component, including plain Java interface, although in case of RMI a bad design may have additional caveats, e.g. performance. Not knowing the details, I can only guess by looking at my experience. Such an unnecessary complexity of an interface is often related to invalid or insufficient business requirements defined for the component. If that's the case, in future the guys at the other division will probably have to frequently modify the interface, trying to catch up with new features, which is usually a cause of pain for the users of the component. Although changes of the interface are of course natural over time, in this case they may result in a deep redesign. Furthermore, an overly complex interface usually means that the author exposes implementation details. Needless to say, this can lead to unnecessary interface changes due to the evolution of the implementation, switching to a different technology, or even optimization only. Last but not least, giving the users more than they need is a straight way to letting them use functionalities not even intended to be used, or even exist. In future, it may turn out that the users invoke the interface in an unexpected way. It makes the maintenance of the component a hell. To wrap it up, the key arguments for a simple interface are: clear business definition of a component, improved flexibility of the implementation, maintainability. And remember, all those profits are good for both the component developers and the users.
You'll need data to accomplish what you're doing. I agree with unforgiven3 -- it's a good fight, and you're not barking up the wrong tree -- but if you present the suggestion of cleaner code right now, without ammunition, it could fall on deaf ears, or worse, start a "my horse is bigger than your horse" kind of contest -- not productive. Just my suggestion: 1. Start documenting the bugs, or any other ticket item which relates or points to the inefficient interface 2. Start documenting the code reviews; put them in a wiki (a company-sanctioned wiki -- don't get into trouble now). Just document it for now -- it's not yet time to pass judgement, you're just gathering data. When you've got enough data from these 2, make a case on programmer productivity that is being lost or misused because of inefficient design decisions -- it's very difficult to argue when cost is involved. Hope it helps.
RMI interface design principles
[ "", "java", "interface", "rmi", "" ]
C#: How would you convert an image field's data to an int array? How would you write an int array to an image field? It has to be done in code, not stored procedures; I don't know which database type will be used. Edit: I am using a blob field to store a one-dimensional array of instrument readings.
This seems to work. I wonder how portable it is? ``` [TestMethod] public void TestMethod1() { int[] intArray = { 4, 8, 15, 16, 23, 42, 108, 342, 815, 1087, 6717 }; byte[] inputBytes = new byte[intArray.GetLength(0) * 4]; Buffer.BlockCopy(intArray, 0, inputBytes, 0, intArray.GetLength(0) * 4); int[] outputInts = new int[intArray.GetLength(0)]; Buffer.BlockCopy(inputBytes, 0, outputInts, 0, intArray.GetLength(0) * 4); Assert.IsTrue(Enumerable.SequenceEqual(intArray, outputInts)); } ```
Assuming you mean a SQL Server Image, typically you'll get this as a byte array or possibly as a stream. (You can wrap a byte array in a `MemoryStream`.) You could either use `BinaryReader` around a stream or `BitConverter` with a byte array to convert an integer at a time, or use `Buffer.BlockCopy` to copy the data from a `byte[]` into an `int[]`.
c#: how to convert image field to int[]?
[ "", "c#", "" ]
# The problem I've got a program that uses [storm 0.14](http://storm.canonical.com) and it gives me this error on Windows: ``` sqlite3.OperationalError: database table is locked ``` The thing is, under Linux it works correctly. I've got the impression that it happens only after a certain number of changes have been made, as it happens in some code that copies a lot of objects. Turning on the debug mode gives me this on Windows: ``` 83 EXECUTE: 'UPDATE regularorder_product SET discount=? WHERE regularorder_product.order_id = ? AND regularorder_product.product_id = ?', (Decimal("25.00"), 788, 274) 84 DONE 85 EXECUTE: 'UPDATE repeated_orders SET nextDate=? WHERE repeated_orders.id = ?', (datetime.date(2009, 3, 31), 189) 86 ERROR: database table is locked ``` On Linux: ``` 83 EXECUTE: 'UPDATE regularorder_product SET discount=? WHERE regularorder_product.order_id = ? AND regularorder_product.product_id = ?', (Decimal("25.00"), 789, 274) 84 DONE 85 EXECUTE: 'UPDATE repeated_orders SET nextDate=? WHERE repeated_orders.id = ?', (datetime.date(2009, 3, 31), 189) 86 DONE ``` # System info *Windows* * Windows XP SP 3 * Python 2.5.4 * NTFS partition *Linux* * Ubuntu 8.10 * Python 2.5.2 * ext3 partition # Some code ``` def createRegularOrderCopy(self): newOrder = RegularOrder() newOrder.date = self.nextDate # the exception is thrown on the next line, # while calling self.products.__iter__ # this happens when this function is invoked the second time for product in self.products: newOrder.customer = self.customer newOrder.products.add(product) return newOrder orders = getRepeatedOrders(date) week = timedelta(days=7) for order in orders: newOrder = order.createRegularOrderCopy() store.add(newOrder) order.nextDate = date + week ``` # The question Is there anything about sqlite3/python that differs between Windows and Linux? What could be the reason for this bug and how can I fix it?
# Another observation When adding a `COMMIT` at the place where the error happens, this error is thrown instead: `sqlite3.OperationalError: cannot commit transaction - SQL statements in progress` # Answers to answers I'm not using multiple threads / processes, therefore concurrency shouldn't be a problem and also I've got only one Store object.
I've solved the problem for the moment by replacing the sqlite3 DLL with the newest version. I'm still not sure if this was a bug in the Windows code of SQLite or if Python installed an older version on Windows than on Linux. Thanks for your help.
The "database table is locked" error is often a generic/default error in SQLite, so narrowing down your problem is not obvious. Are you able to execute *any* SQL queries? I would start there, and get some basic SELECT statements working. It could just be a permissions issue.
Why does windows give an sqlite3.OperationalError and linux does not?
[ "", "python", "windows", "linux", "sqlite", "" ]
Suppose I have the following HTML code, how can I pass the user's input to execute(str) JavaScript function as an argument? ``` <body> <input name="textbox1" type="text" /> <input name="buttonExecute" onclick="execute(//send the user's input in textbox1 to this function//)" type="button" value="Execute" /> </body> ```
You could either access the element’s value by its name: ``` document.getElementsByName("textbox1"); // returns a list of elements with name="textbox1" document.getElementsByName("textbox1")[0] // returns the first element in DOM with name="textbox1" ``` So: ``` <input name="buttonExecute" onclick="execute(document.getElementsByName('textbox1')[0].value)" type="button" value="Execute" /> ``` Or you assign an ID to the element that then identifies it and you can access it with `getElementById`: ``` <input name="textbox1" id="textbox1" type="text" /> <input name="buttonExecute" onclick="execute(document.getElementById('textbox1').value)" type="button" value="Execute" /> ```
As opposed to passing the text as a variable, you can use the DOM to retrieve the data in your function: ``` var text = document.getElementsByName("textbox1")[0].value; // getElementsByName returns a node list, so take the first match ```
How to pass text in a textbox to JavaScript function?
[ "", "javascript", "html", "" ]
In other words does this work as expected? ``` int32 i = INT_MAX-1; int64 j = i * i; ``` or do I need to cast the i to 64 bit first?
You need to cast at least one of the operands to the multiply. At the point the multiply is being done, the system doesn't know you're planning to assign to an int64. (Unless int64 is actually the native int type for your particular system, which seems unlikely)
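The effect of casting too late can be sketched in Python, which has arbitrary-precision integers: compute the exact product, then keep only the low 32 bits the way a 32-bit multiply would (an illustration of the truncation only; in C++, signed overflow is formally undefined behaviour rather than guaranteed wraparound):

```python
i = 2**31 - 2          # INT_MAX - 1 for a 32-bit signed int

exact = i * i          # what you get if one operand is widened first
low32 = (i * i) & 0xFFFFFFFF   # what a 32-bit multiply keeps

# Reinterpret the low 32 bits as a signed value.
if low32 >= 2**31:
    low32 -= 2**32

print(exact)   # 4611686009837453316
print(low32)   # 4 -- widening after the multiply is too late
```

The full product needs 62 bits, so everything above bit 31 is lost before the assignment to the wider type ever happens.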
It depends on what int32 and int64 are. In brief, all integers are promoted to at least 'int' size (which may be 64 bits) before any arithmetic operations, and to the size of the larger operand for binary operators if this is of greater rank than an int. How the result of an expression is used (whether or not it is stored to a wider type) has no bearing on the promotions of the constituent parts of the expression.
32bit int * 32bit int = 64 bit int?
[ "", "c++", "integer-overflow", "" ]
Why would this compile: ``` public Dictionary<ValueLineType, Func<HtmlHelper, string, object, Type, string>> constructor = new Dictionary<ValueLineType, Func<HtmlHelper, string, object, Type, string>>(); ``` but not this other one with one extra parameter in the Func (the boolean): ``` public Dictionary<ValueLineType, Func<HtmlHelper, string, object, Type, bool, string>> constructor = new Dictionary<ValueLineType, Func<HtmlHelper, string, object, Type, bool, string>>(); ``` Either I'm getting blind or there's something else I'm going to learn today :D
There is no such thing as `Func<T1,T2,T3,T4,T5,TResult>`. It only goes as far as 4 parameters (i.e. 5 *type* parameters, including one for the return value): ``` Func<T> Func<T1, TResult> Func<T1, T2, TResult> Func<T1, T2, T3, TResult> Func<T1, T2, T3, T4, TResult> SpinalTap<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11, TResult> ``` You can declare your own, of course: ``` public delegate TResult Func<T1, T2, T3, T4, T5, TResult> (T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5); ``` However, at that point I'd think really carefully about whether you might be able to encapsulate some of those parameters together. Are they completely unrelated?
FYI, the next version of the .NET libraries will include Func and Action generic types of more than four parameters.
Dictionaries and Lambdas fun
[ "", "c#", "dictionary", "lambda", "" ]
I have a question regarding vectors: If I have a std::vector<MyClass> will this vector "inherit" the MyClass member functions or not? If not, what would then be the best way to handle the MyClass members individually inside a loop? Would I have to create a new class object on each iteration and assign the current vector iterator to it? ``` class MyClass { public: void setMembers(std::string &string, int &num) { mystring = string; number = num; } string printString() { return mystring; } int printNumber() { return number; } private: std::string mystring; int number; }; MyClass test; std::vector<MyClass> vector; std::string string = "Test 1"; int a = 3; test.setMembers(string,a); vector.push_back(test); for(unsigned int i = 0; i<vector.size(); i++) { cout << "Vector #" << i << endl; // does vector inherit member functions here? cout << "mystring is: " << vector.printString()<< endl; cout << "number is : " << vector.printNumber() << endl; } ``` Thanks a lot in advance for your help.
No, the std::vector instance doesn't inherit your member functions. However, the objects *in* the vector do have those members, which you can access via operator[]. ``` for (size_t i = 0; i < vector.size(); i++) { cout << "Vector #" << i << endl; cout << "mystring is: " << vector[i].printString() << endl; cout << "number is : " << vector[i].printNumber() << endl; } ``` Notice we say vector[i] now, which returns a value of type MyClass, which does have the member functions printString() and printNumber(). You should go re-read the chapter on inheritance.
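The same distinction holds in other languages; for comparison, a Python sketch with a hypothetical class mirroring the question's: the list itself has no print_string method, but each stored object does.

```python
class MyClass:
    def __init__(self, mystring, number):
        self.mystring = mystring
        self.number = number

    def print_string(self):
        return self.mystring

    def print_number(self):
        return self.number

items = [MyClass("Test 1", 3)]

# items.print_string() would raise AttributeError: the container
# doesn't gain its elements' methods. Index into it first:
for i, obj in enumerate(items):
    print("Element #", i)
    print("mystring is:", obj.print_string())
    print("number is :", obj.print_number())
```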
The first part of your question has already been answered by others. No form of inheritance takes place. The vector behaves as a vector, and nothing else. There are two ways to manipulate containers. The first (obvious) one is through a for loop, like you said: ``` for(size_t i = 0; i<vector.size(); i++) { // technically, you should use size_t here, since that is the type returned by vector.size() cout << "Element #" << i << endl; // We're iterating through the elements contained in the vector, so printing "Vector #" doesn't make sense. There is only one vector cout << "mystring is: " << vector[i].printString()<< endl; // [i] to access the i'th element contained in the vector cout << "number is : " << vector[i].printNumber() << endl; } ``` The other approach is to use the algorithms defined in the standard library. As an introduction to those, I'm going to split it up into a few steps. First, every container also defines an iterator type. Iterators are conceptually like pointers that point to a location in the container. So instead of `vector[i].printString()`, you can call printString() on the element pointed to by any given iterator. (Assuming an iterator called `iter`, the syntax would be `iter->printString()`.) The reason for this is that it allows a common and generic way to traverse containers. Because lists, vectors, deques and all other container types all provide iterators, and these iterators use the same syntax, your code can take a pair of iterators, denoting the beginning and the end of the range of elements you want to process, and then the same code will work regardless of the underlying container type. 
So first, let's use a loop to run through the container again, but this time using iterators: ``` for (std::vector<MyClass>::iterator current = vector.begin(); current != vector.end(); ++current) { cout << "mystring is: " << current->printString() << endl; cout << "number is : " << current->printNumber() << endl; } ``` Not a huge improvement so far, although it does eliminate the `i` index variable, which often isn't necessary, except as a loop counter. The begin()/end() functions return an iterator pointing to the first element in the container, and another pointing *one past the end* of the container. So as we move the first iterator forward, we know we've reached the end when it equals the end iterator. In this way, two iterators can represent any range of elements. Now that we have iterators though, we can use a lot of other tricks. The C++ standard library comes with a number of algorithms for processing sequences of elements. They're located in the `<algorithm>` header. A simple one to get us started is `std::for_each`, which is almost a drop-in replacement for a for loop. It is simply a function which takes two iterators, denoting the range of elements it should process, and an action it should perform on each. So to call it, we need to define such an action: ``` void Print(const MyClass& obj) { cout << "mystring is: " << obj.printString() << endl; cout << "number is : " << obj.printNumber() << endl; } ``` (Note that this assumes printString() and printNumber() are declared const in MyClass; otherwise they can't be called through a const reference.) That's all. A function which takes the element type as a parameter, and does whatever needs to be done. Now we can call `for_each`: ``` std::for_each(vector.begin(), vector.end(), Print); ``` If you need to do this often, it saves a lot of typing. The Print function only has to be defined once, and then every for loop can be replaced with such a one-liner. Another nice trick with iterators is that they don't have to represent the entire range. 
We could skip the first five elements: ``` std::for_each(vector.begin() + 5, vector.end(), Print); ``` or take only the first three elements: ``` std::for_each(vector.begin(), vector.begin()+3, Print); ``` or any other manipulation you can think of. There are also algorithms such as copy (copy from one iterator range to another): ``` std::copy(vector.begin(), vector.end(), dest.begin()); ``` And `dest` may be any type of iterator as well, it doesn't have to be a vector iterator just because the source is. In fact we could even copy directly to std::cout if you wanted to print out the contents directly (unfortunately, since MyClass doesn't define `operator <<`, that would result in an error.) To work around this little problem with std::cout, we could use `std::transform`, which applies some transformation to each object, and then places the result into an output sequence. Since we can't directly print out a MyClass object, we could just transform it to a string, which *can* be printed out: ``` std::string ToString(const MyClass& obj) { std::ostringstream out; out << "mystring is: " << obj.printString() << "\nnumber is : " << obj.printNumber() << "\n"; return out.str(); } ``` Again, fairly simple code (std::ostringstream comes from the `<sstream>` header). We simply create a function which takes a MyClass object, and builds a string with the desired output. So let's copy this directly to std::cout: ``` std::transform(vector.begin(), vector.end(), std::ostream_iterator<std::string>(std::cout), ToString); ``` `std::ostream_iterator<std::string>` creates a special output stream iterator out of `std::cout` to allow it to function as an iterator. And once again, the actual "do this on everything in the vector" code became a single line. The actual action to perform is defined once, elsewhere, so it doesn't have to clutter up the code. So while a for loop is the immediately obvious way to process sequences of elements in a container, iterators are often a better solution in the long run. They offer a lot more flexibility, and even simplify your code quite a bit. 
I won't blame you if you prefer to stick with for loops for now, as they're a bit easier to grok. I simply wanted to show you that they're not the "ultimate" answer in C++.
Do templated classes inherit the members of the classes passed to them? (Specificly std::vector)
[ "", "c++", "vector", "" ]
There is an article out on [www.sqlservercentral.com](http://www.sqlservercentral.com/articles/System+Development+Life+Cycle/66133/) about unit testing your SQL. The TDD guy in me said: good, we can test the database stuff. The system architect in me said: what logic are we testing? There shouldn't be any logic in the database; the only things you should be doing in the database are selecting, updating, or inserting. So if you feel the need to unit test your SQL, are you just being really thorough, being overly pragmatic, or is it a sign of design smell?
I agree with the System Architect, too much business logic is making its way into databases these days.
In most living projects, the database is in some amount of flux between project milestones. Tables and columns are created, dropped, or changed. Lookup tables are updated. And you could be testing against multiple instances of the database, so it's good to have some validation of the state of metadata and data in the database, as part of your regression testing. There are several cases where I would suggest testing a database: * Tables & views: Verify the tables and views you expect to exist. Verify that these tables and views contain the columns you expect. You can also verify that tables, views, or columns you dropped in this milestone are in fact absent. * Constraints: Try to execute data changes that should be rejected. The constraints should prevent these changes. You can avoid later bugs if you catch cases where the constraints aren't working. * Triggers: Same as for constraints, and also triggers can be used for cascading effects, or to transform values, etc. Test these logic paths. * Stored Procedures: I support the caution against putting too much logic into the database, when the logic is more easily developed, debugged, and maintained in the application layer. But there are cases when there are compelling reasons to use stored procs. Often you see a performance bottleneck solved by putting complex logic into the database. So stored procs are not going away completely, and testing them is a good idea. * Bootstrap data: Lookup tables are an example of data that needs to be present even in an "empty" database. There can be other examples. Test that the database contains the required data. * Queries: Your application code is laced with SQL queries. Test them for proper functionality and also for performance. Especially performance -- because the same query can perform well one day and become a bottleneck the next day, as the volume of data changes, indexes grow imbalanced, etc. 
* ORM classes: Like triggers, ORM classes in your application can contain logic to validate, transform, or monitor database operations. These should be tested. These tests might not accurately be called "unit testing." Unit testing is a specific type of testing where each test is independent from other tests, and you try to test small units of code in isolation. I'd say testing the database the ways outlined above is an example of *functionality testing*.
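To make the "tables & views" and "constraints" points concrete, here is a minimal sketch of such a functionality test using Python's built-in sqlite3 with a hypothetical schema (the same idea translates to any database engine and test framework):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL NOT NULL)")

# Tables & views: the expected table must exist.
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
assert "orders" in tables

# Constraints: a change that should be rejected must be rejected.
try:
    conn.execute("INSERT INTO orders (id, total) VALUES (1, NULL)")
    raise AssertionError("NOT NULL constraint was not enforced")
except sqlite3.IntegrityError:
    pass  # expected -- the constraint did its job
```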
Is Unit Testing your SQL taking TDD Too far?
[ "", "sql", "unit-testing", "tdd", "" ]
I have a question regarding JVM memory management (at least for Sun's JVM). I would like to know how to control the fact that the JVM sends unused memory back to the OS (Windows in my case). I wrote a simple Java program to illustrate what I expect. Run it with the -Dcom.sun.management.jmxremote option so that you can also monitor the heap, with jconsole for example. With the following program: ``` package fr.brouillard.jvm; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; import java.util.LinkedList; import java.util.List; public class MemoryFree { private BufferedReader reader = new BufferedReader(new InputStreamReader(System.in)); private List<byte[]> usedMemory = new LinkedList<byte[]>(); private int totalMB = 0; private int gcTimes = 0; public void allocate(int howManyMB) { usedMemory.add(new byte[howManyMB * 1024 * 1024]); totalMB += howManyMB; System.out.println(howManyMB + "MB allocated, total allocated: " + totalMB + "MB"); } public void free() { usedMemory.clear(); } public void gc() { System.gc(); System.out.println("GC " + (++gcTimes) + " times" ); } public void waitAnswer(String msg) { System.out.println("Press [enter]" + ((msg==null)?"":msg)); try { reader.readLine(); } catch (IOException e) { } } public static void main(String[] args) { MemoryFree mf = new MemoryFree(); mf.waitAnswer(" to allocate memory"); mf.allocate(20); mf.allocate(10); mf.allocate(15); mf.waitAnswer(" to free memory"); mf.free(); mf.waitAnswer(" to GC"); mf.gc(); mf.waitAnswer(" to GC"); mf.gc(); mf.waitAnswer(" to GC"); mf.gc(); mf.waitAnswer(" to GC"); mf.gc(); mf.waitAnswer(" to exit the program"); try { mf.reader.close(); } catch (IOException e) {} } } ``` The internal heap is freed once the first GC is done (which is expected), but the memory is only sent back to the OS starting from the third GC. After the fourth, the full allocated memory is sent back to the OS. How do I set up the JVM to control this behaviour? 
In fact my problem is that I need to run several Citrix client sessions on a server, but I would like the JVMs running on the server to free memory as soon as possible (I have only a few memory-intensive functions in my application). If this behaviour cannot be controlled, can I leave it as it is and instead increase the OS virtual memory, letting the OS use it as it sees fit without big performance issues? For example, would there be issues in running 10 Java processes of 1GB each (with only 100MB of objects actually allocated in each heap) on a 4GB server, with enough virtual memory of course? I guess other people have already faced such questions/problems. Thanks for your help.
To control return of heap to the OS, from Java 5 onward, use the `-XX:MaxHeapFreeRatio` option, as described in the [tuning guide](http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html#0.0.0.%20Total%20Heap|outline). If you feel your question is meaningfully different from [this one](https://stackoverflow.com/questions/324499/java-still-uses-system-memory-after-deallocation-of-objects-and-garbage-collectio), please point out how.
First of all, System.gc() might as well do nothing. You really can't rely on it to do a garbage collection the way you are suggesting. Second, you'll want to monitor what is actually going on the with GC by using ``` -verbosegc -XX:+PrintGCDetails ``` in your invocation of java. Or by using JConsole, which it sounds like you're doing. But that System.gc() has me scared that you're counting the wrong thing... I suspect that when you say that the second or third garbage collection is when it frees the memory, you're just miscounting the garbage collections. A request to GC is not a GC! So check the logs ([interpret them this way](http://java.sun.com/docs/hotspot/gc1.4.2/example.html)) that PrintGCDetails prints out. > In fact my problem is that I need to run several CITRIX clients sessions on a server, but I would like the running JVMs on the server to free the memory as soon as possible (I have only few high consuming memory functions in my application). While your problem is valid, the solution you're going for is a little shady. The JVM needs a heap size for exactly this reason - so that it can be guaranteed this space to run in. It seems like you're leaning towards launching an app, then waiting for the JVM to size its heap down, then launching another such that you're overbooking the resources on the machine. Don't do that, because it will all blow up once an app is taking more memory than you figured it would, but which it is entitled to. I fully believe that you don't want to be micro managing Java's heap this way. Read enough of <http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html> to understand the generations and what the trade offs of a larger/smaller heap are.
JVM sending back memory to OS
[ "", "java", "memory-management", "jvm", "" ]
I am able to parse strings containing date/time with **time.strptime** ``` >>> import time >>> time.strptime('30/03/09 16:31:32', '%d/%m/%y %H:%M:%S') (2009, 3, 30, 16, 31, 32, 0, 89, -1) ``` How can I parse a time string that contains milliseconds? ``` >>> time.strptime('30/03/09 16:31:32.123', '%d/%m/%y %H:%M:%S') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.5/_strptime.py", line 333, in strptime data_string[found.end():]) ValueError: unconverted data remains: .123 ```
Python 2.6 added a new strftime/strptime macro `%f`. The docs are a bit misleading as they only mention microseconds, but `%f` actually parses *any* decimal fraction of seconds with *up to* 6 digits, meaning it also works for milliseconds or even centiseconds or deciseconds. ``` time.strptime('30/03/09 16:31:32.123', '%d/%m/%y %H:%M:%S.%f') ``` However, [time.struct\_time](https://docs.python.org/3/library/time.html?highlight=struct_time#time.struct_time) doesn't actually store milliseconds/microseconds. You're better off using `datetime`, like this: ``` >>> from datetime import datetime >>> a = datetime.strptime('30/03/09 16:31:32.123', '%d/%m/%y %H:%M:%S.%f') >>> a.microsecond 123000 ``` As you can see, `.123` is correctly interpreted as `123 000` microseconds.
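A short demonstration of the "up to 6 digits" point: `%f` scales whatever fraction it finds out to microseconds, so three digits (milliseconds) and six digits parse to the same value.

```python
from datetime import datetime

fmt = "%d/%m/%y %H:%M:%S.%f"

# Three fractional digits (milliseconds)...
ms = datetime.strptime("30/03/09 16:31:32.123", fmt)
# ...and six (microseconds) parse to the same instant.
us = datetime.strptime("30/03/09 16:31:32.123000", fmt)

print(ms.microsecond)  # 123000
assert ms == us
```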
I know this is an older question but I'm still using Python 2.4.3 and I needed to find a better way of converting the string of data to a datetime. The solution if datetime doesn't support %f and without needing a try/except is: ``` (dt, mSecs) = row[5].strip().split(".") dt = datetime.datetime(*time.strptime(dt, "%Y-%m-%d %H:%M:%S")[0:6]) mSeconds = datetime.timedelta(microseconds = int(mSecs)) fullDateTime = dt + mSeconds ``` This works for the input string "2010-10-06 09:42:52.266000"
How can I parse a time string containing milliseconds in it with python?
[ "", "python", "date", "time", "datetime-parsing", "" ]
``` List<int> a = new List<int> { 1, 2, 3 }; List<int> b = new List<int> { 2, 4, 5 }; // desired output: 1, 3, 4, 5 ```
The trick is to use [Except](http://msdn.microsoft.com/en-us/library/bb300779.aspx) with the intersection of the two lists. This should give you the list of non-intersecting elements: ``` var nonIntersecting = a.Union(b).Except(a.Intersect(b)); ```
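This "union minus intersection" is exactly the symmetric difference from set theory; the same computation sketched in Python for comparison (like LINQ's set operators, sets discard duplicates):

```python
a = [1, 2, 3]
b = [2, 4, 5]

# union minus intersection...
non_intersecting = (set(a) | set(b)) - (set(a) & set(b))

# ...is the built-in symmetric difference
assert non_intersecting == set(a) ^ set(b)

print(sorted(non_intersecting))  # [1, 3, 4, 5]
```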
Tried and tested: ``` List<int> a = new List<int>(){1, 2, 3}; List<int> b = new List<int>(){2, 4, 5}; List<int> c = a.Except(b).Union(b.Except(a)).ToList(); ```
find non intersecting data set with linq
[ "", "c#", "linq", "" ]
What is the performance difference between using an iterator to loop through an STL map, versus a vector? I'd like to use the map key for insertion, deletion, and some accesses, but I also need to do regular accesses to *every* element in the map.
With both map and vector, iterating through the entire collection is O(N). However (like list vs vector), a vector stores its elements contiguously, so accessing the next element is much cheaper because it uses the cache optimally, whereas a map won't. But since you *need* lookup based on keys, there isn't really an alternative. You could use a vector of pairs sorted on the first element, but if the collection needs to be mutable this is going to be very slow. Just use a map.
Iterating through every element of a map takes O(n) time. [wikipedia](http://en.wikipedia.org/wiki/Map_(C%2B%2B_container))
Iterator access performance for STL map vs. vector?
[ "", "c++", "performance", "stl", "dictionary", "iterator", "" ]
This is the property declaration in question: ``` [RangeValidator(1,RangeBoundaryType.Inclusive,255,RangeBoundaryType.Inclusive,MessageTemplate = "StartFlexibility is out of range")] public byte StartFlexibility { get; set; } ``` When the validate method is called, a FormatException is thrown telling me that the value type needs to be Int32. How to fix, please?
Well... the quick, obvious fix would be to change the property type to short or int. Also note that the RangeValidator infers the type of the range from the parameters you pass, and integer literals are Int32, which is probably why it complains about the byte property. So try casting: ``` [RangeValidator((byte) 1, ... ```
As Jhonny says, cast to byte... but more like this ``` [RangeValidator(typeof(Byte), "1", RangeBoundaryType.Inclusive, "255", RangeBoundaryType.Inclusive, MessageTemplate = "Some message")] ``` The other option would be to call the range validator in a SelfValidation message and cast there.
Using RangeValidator with byte
[ "", "c#", ".net", "enterprise-library", "rangevalidator", "" ]
I'm building a website that fetches text from another page and inserts it into the database. The problem is that all the special characters are saved in the database using HTML encoding, so then I need to convert the output using: `<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1" />` I mean, what I have right now is that instead of just saving the character " ' ", the HTML version " `&#x27;` " is saved in the database. This also happens when Spanish or other special characters are saved. Instead of the letter " ñ " for example, I get " `&ntilde;` " saved. This wastes space in the database and I also need to convert the output later using content-type, so: How can I convert or set the charset before the data is saved, or just let MySQL convert it? In case you need to know, here's how I connect to the database: ``` function dbConnect() { $conn = new mysqli(DB_SERVER, DB_USER, DB_PASSWORD, DB_NAME) or die ('Error.'); return $conn; } $conn = dbConnect(); $stmt = $conn->stmt_init(); ``` Hope you can help me!! Thanks.
You can use [`html_entity_decode()`](https://www.php.net/manual/en/function.html-entity-decode.php) to convert from HTML to a (real) character encoding. ``` <? echo html_entity_decode("&ntilde;", ENT_COMPAT, "UTF-8"); ?> ñ ``` Please note that "HTML" isn't a character encoding in the usual sense, so isn't understood by libraries such as `iconv`, nor by MySQL itself. I'd also recommend (per example above) having the whole application use UTF-8. Single character encodings such as ISO8859 are effectively obsolete now that Unicode is so widely supported.
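If any part of the scraping pipeline happens to be in Python (an assumption, since the question's code is PHP), the counterpart of html_entity_decode() is the standard library's html.unescape, which handles both named and numeric entities:

```python
import html

# Decode entities back to real characters before storing, so the
# database holds the character itself rather than its HTML form.
decoded = html.unescape("ma&ntilde;ana &#x27;quoted&#x27;")
print(decoded)  # mañana 'quoted'
```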
I suggest using UTF-8 if there are any non-English characters. You can run the SQL ``` SET NAMES utf8 ``` to put your database connection in UTF-8, just after you connect to the database. When you do this, you shouldn't use "htmlspecialchars" or "htmlentities" while saving the data.
PHP/MySQL: Insert data into database character set problem
[ "", "php", "mysql", "character-encoding", "" ]
Someone here recently [brought up](https://stackoverflow.com/questions/755347/are-constiterators-faster/755371#755371) the article from Scott Meyers that says: * Prefer `iterators` over `const_iterators` ([pdf link](http://www.aristeia.com/Papers/CUJ%5FJune%5F2001.pdf)). Someone else was commenting that the article is probably outdated. I'm wondering what your opinions are? Here is mine: One of the main points of the article is that you cannot erase or insert on a `const_iterator`, but I think it's funny to use that as an argument against `const_iterators`. I thought the whole point of `const_iterators` is that you do not modify the range at all: neither the elements themselves by substituting their values, nor the range by inserting or erasing. Or am I missing something?
I totally agree with you. I think the answer is simple: Use const\_iterators where const values are the right thing to use, and vice versa. Seems to me that those who are against const\_iterators must be against const in general...
Here's a slightly different way to look at it. `Const_iterator` almost never makes sense when you are passing it as a pointer into a specific collection **and** you are passing the collection as well. Mr. Meyers was specifically stating that `const_iterator` cannot be used with most member functions of a collection instance. In that case, you will need a plain-old `iterator`. However, if you don't have a handle to the collection, the only difference between the two is that you can modify what is *pointed to* by an `iterator` and you can't modify the object referenced by a `const_iterator`. So... you want to use `iterator` whenever you are passing a collection and position into the collection to an algorithm. Basically, signatures like: ``` void some_operation(std::vector<int>& vec, std::vector<int>::const_iterator pos); ``` don't make a whole lot of sense. The implicit statement is that `some_operation` is free to modify the underlying collection but is not allowed to modify what `pos` references. That doesn't make much sense. If you really want this, then `pos` should be an offset instead of an iterator. On the flip side, most of the algorithms in the STL are based on ranges specified by a pair of iterators. The collection itself is never passed, so the difference between `iterator` and `const_iterator` is whether the value in the collection can be modified through the iterator or not. Without a reference to the collection, the separation is pretty clear. Hopefully that made things as clear as mud ;)
Should I prefer iterators over const_iterators?
[ "", "c++", "stl", "iterator", "const-iterator", "" ]
I have a project that I'm working on that requires changing a 'BaseSortedCollection' class to allow duplicates. The class currently implements IEnumerable, IDisposable, ICollection, and ISerializable. The 'BaseSortedCollection' stores Items that have an ItemID (Int64), which is used as the key when accessing the collection. I need to have two identical items (same ItemID) exist in the collection at the same time as well as be able to be retrieved. We are using the 2.0 framework. Any suggestions? Thanks in advance!
I guess you will have to extend a regular ArrayList and override the Add method to call Sort if you need the auto-sorting. However, I can't seem to wrap my head around the idea of two items with the same (what should be unique) identification number?! Edit: or maybe NameValueCollection (in System.Collections.Specialized) is more appropriate? Extend it and add your own sorting method...
Each item in your BaseSortedCollection could be a List(T), so if you have two items with the same key, you will have a List(T) containing two items for the entry corresponding to that key.
Need Duplicates Allowed In SortedCollection (C#, 2.0)
[ "", "c#", "collections", "duplicates", "icollection", "" ]
I want to make a List and add derived classes to this list. I see here that this is not possible by design in .NET: <http://msdn.microsoft.com/en-us/library/aa479859.aspx#fundamentals_topic12> So what is the best practice solution to this? I guess I can box my derived classes to make them look like my superclass but that feels a little unwieldly. I suppose I should have designed my classes differently....but how?
If I understand what you are trying to do, you should be able to do it just fine. ``` List<SuperClass> myList = new List<SuperClass>(); SubClass item1 = new SubClass(); SuperClass item2 = new SuperClass(); myList.Add(item1); myList.Add(item2); ``` This is valid code, and then you can easily retrieve the elements from the list and use the basic rules of polymorphism when working with the objects.
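The same pattern in Python for comparison (hypothetical classes; duck typing makes the base-typed declaration implicit, but the polymorphic dispatch at retrieval works the same way):

```python
class SuperClass:
    def name(self):
        return "super"

class SubClass(SuperClass):
    def name(self):
        return "sub"

# A single list holding both base and derived instances.
my_list = [SubClass(), SuperClass()]

# The overridden method is dispatched on each object's runtime type.
print([item.name() for item in my_list])  # ['sub', 'super']
```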
As [Brian notes](https://stackoverflow.com/questions/628783/workaround-for-creating-a-generic-list-of-typesuperclass-and-populate-it-with-der/628808#628808), this should work fine. The only common limitation people see in this area is covariance of lists as method arguments - i.e. they find that they can't pass a `List<Dog>` to a method that accepts a `List<Animal>`. The answer in that case is generics: ``` public void Foo<T>(IList<T> list) where T : Animal {...} .... List<Dog> dogs = ... Foo(dogs); // implicit <Dog> ``` For more discussion on covariance (including the C# 4.0 changes that *doesn't* change the above scenario, [see here](http://marcgravell.blogspot.com/2009/02/what-c-40-covariance-doesn-do.html)).
Workaround for creating a generic list of Type<SuperClass> and populate it with derived classes?
[ "", "c#", "generics", "inheritance", "" ]
Has anyone spent any time comparing the various Objective C bridges and associated Cocoa wrappers for Mono? I want to port an existing C# application to run on OS X. Ideally I'd run the application on Mono, and build a native Cocoa UI for it. I'm wondering which bridge would be the best choice. In case it's useful to anyone, here are some links to bridges I've found so far: * **CocoSharp** - distributed with Mono on OS X - [www.cocoa-sharp.com](http://www.cocoa-sharp.com) * **Monobjc** - better documentation than the others (in my opinion) - [www.mono-project.com/CocoaSharp](http://www.mono-project.com/CocoaSharp) and [www.monobjc.net](http://www.monobjc.net) * **NObjective** - (apparently) faster than the others - [code.google.com/p/nobjective](http://code.google.com/p/nobjective) * **MObjc / MCocoa** - [code.google.com/p/mobjc](http://code.google.com/p/mobjc) and [code.google.com/p/mcocoa](http://code.google.com/p/mcocoa) * **ObjC#** - [www.mono-project.com/ObjCSharp](http://www.mono-project.com/ObjCSharp)
The Mono team have released a new bridge called MonoMac. It's essentially the desktop version of the MonoTouch framework they produced for the iPhone. Miguel announced the new bridge on his blog here: <http://tirania.org/blog/archive/2010/Apr-19.html> Once MonoMac matures, I suspect that it will become *the* bridge people use.
I've been trying out Monobjc, NObjective and MObjc/MCocoa with F# lately and I will go with MObjc/MCocoa. Monobjc is, as you write, the best documented and packaged, but compared to the other two it is very [slow](http://code.google.com/p/nobjective/) and does not "rethrow exceptions" from ObjC. I believe that NObjective and MObjc/MCocoa are the only two that rethrow exceptions from ObjC, and that's why I find them to be the only real alternatives. As discussed [here](http://www.nabble.com/NObjective-and-Inheritance-td21882576ef1367.html), NObjective uses structs with inheritance, which I find quite scary and which made me finally choose MObjc/MCocoa. (You might have a licence issue as well: Monobjc and NObjective are under LGPL, MObjc/MCocoa is under MIT ...)
Building Cocoa UIs for OS X with C# and Mono
[ "c#", "cocoa", "macos", "mono" ]
How can data written to a file **really** be flushed/synced with the block device by Java? I tried this code with NIO: ``` FileOutputStream s = new FileOutputStream(filename); FileChannel c = s.getChannel(); while (xyz) c.write(buffer); c.force(true); s.getFD().sync(); c.close(); ``` I supposed that c.force(true) together with s.getFD().sync() should be sufficient because the doc for [force](http://java.sun.com/j2se/1.5.0/docs/api/java/nio/channels/FileChannel.html#force(boolean)) states > Forces any updates to this channel's file to be written to the storage device that contains it. > If this channel's file resides on a local storage device then when this method returns it is guaranteed that all changes made to the file since this channel was created, or since this method was last invoked, will have been written to that device. This is useful for ensuring that critical information is not lost in the event of a system crash. The documentation to [sync](http://java.sun.com/j2se/1.4.2/docs/api/java/io/FileDescriptor.html#sync()) states: > Force all system buffers to synchronize with the underlying device. This method returns after all modified data and attributes of this FileDescriptor have been written to the relevant device(s). In particular, if this FileDescriptor refers to a physical storage medium, such as a file in a file system, sync will not return until all in-memory modified copies of buffers associated with this FileDescriptor have been written to the physical medium. sync is meant to be used by code that requires physical storage (such as a file) to be in a known state. These two calls should be sufficient. Are they? I guess they aren't. Background: I do a small performance comparison (2 GB, sequential write) using C/Java and the Java version is twice as fast as the C version and probably faster than the hardware (120 MB/s on a single HD). I also tried to execute the command line tool sync with Runtime.getRuntime().exec("sync") but that hasn't changed the behavior.
The C code resulting in 70 MB/s is (using the low-level APIs (open, write, close) doesn't change much): ``` FILE* fp = fopen(filename, "w"); while(xyz) { fwrite(buffer, 1, BLOCK_SIZE, fp); } fflush(fp); fclose(fp); sync(); ``` Without the final call to sync, I got unrealistic values (over 1 GB, aka main memory performance). Why is there such a big difference between C and Java? There are two possibilities: I don't sync the data correctly in Java, or the C code is suboptimal for some reason. Update: I have done strace runs with "strace -cfT cmd". Here are the results: C (Low-Level API): MB/s 67.389782 ``` % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 87.21 0.200012 200012 1 fdatasync 11.05 0.025345 1 32772 write 1.74 0.004000 4000 1 sync ``` C (High-Level API): MB/s 61.796458 ``` % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 73.19 0.144009 144009 1 sync 26.81 0.052739 1 65539 write ``` Java (1.6 SUN JRE, java.io API): MB/s 128.6755466197537 ``` % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 80.07 105.387609 3215 32776 write 2.58 3.390060 3201 1059 read 0.62 0.815251 815251 1 fsync ``` Java (1.6 SUN JRE, java.nio API): MB/s 127.45830221558376 ``` 5.52 0.980061 490031 2 fsync 1.60 0.284752 9 32774 write 0.00 0.000000 0 80 close ``` The time values seem to be system time only and are therefore pretty meaningless. Update 2: I switched to another server, rebooted, and I use a freshly formatted ext3. Now I get only 4% differences between Java and C. I simply don't know what went wrong. Sometimes things are strange. I should have tried the measurement with another system before writing this question. Sorry. Update 3: To summarize the answers: * Use c.force(true) followed by s.getFD().sync() for Java NIO and s.flush() and s.getFD().sync() for Java's stream API.
For the high-level API in C, don't forget to sync: fflush submits the data to the OS but doesn't bring your data to the block device. * Use strace to analyze the syscalls done by a command * Cross-check your results before posting a question. Update 4: Please note the following follow-up [question](https://stackoverflow.com/questions/5650327/are-filechannel-force-and-filedescriptor-sync-both-needed).
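A self-contained Java sketch of the Update 3 summary (the temp-file name and payload here are illustrative, not from the benchmark):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class SyncedWrite {
    // Write data, then force it to the device: channel force() plus FD sync(),
    // as summarized above. Returns the number of bytes on disk.
    static long writeAndSync(File file, byte[] data) throws IOException {
        try (FileOutputStream s = new FileOutputStream(file)) {
            FileChannel c = s.getChannel();
            c.write(ByteBuffer.wrap(data));
            c.force(true);    // flush channel data and metadata to the device
            s.getFD().sync(); // sync the underlying file descriptor as well
        }
        return file.length();
    }

    // Small self-check helper so the sketch can be exercised without arguments.
    static boolean demo() {
        try {
            File f = File.createTempFile("sync-demo", ".bin");
            f.deleteOnExit();
            long n = writeAndSync(f, new byte[] {1, 2, 3, 4});
            f.delete();
            return n == 4;
        } catch (IOException e) {
            return false;
        }
    }
}
```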
You need to tell us more about the hardware and operating system, also the specific Java version. How are you measuring this throughput? You're correct that force/sync should force the data out to the physical media. --- Here's a raw version of copy. Compiled with gcc 4.0 on an Intel Mac, should be clean. ``` /* rawcopy -- pure C, system calls only, copy argv[1] to argv[2] */ /* This is a test program which simply copies from file to file using * only system calls (section 2 of the manual.) * * Compile: * * gcc -Wall -DBUFSIZ=1024 -o rawcopy rawcopy.c * * If DIRTY is defined, then errors are interpreted with perror(3). * This is ifdef'd so that the CLEAN version is free of stdio. For * convenience I'm using BUFSIZ from stdio.h; to compile CLEAN just * use the value from your stdio.h in place of 1024 above. * * Compile DIRTY: * * gcc -DDIRTY -Wall -o rawcopy rawcopy.c * */ #include <fcntl.h> #include <sys/types.h> #include <sys/uio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #if defined(DIRTY) # if defined(BUFSIZ) # error "Don't define your own BUFSIZ when DIRTY" # endif # include <stdio.h> # define PERROR perror(argv[0]) #else # define CLEAN # define PERROR # if ! defined(BUFSIZ) # error "You must define your own BUFSIZ with -DBUFSIZ=<number>" # endif #endif char buffer[BUFSIZ]; /* by definition stdio BUFSIZ should be optimal size for read/write */ int main(int argc, char * argv[]) { int fdi, fdo ; /* Input/output file descriptors */ ssize_t len ; /* length to read/write */ if(argc != 3){ PERROR; exit(errno); } /* Open the files, returning perror errno as the exit value if fails. */ if((fdi = open(argv[1],O_RDONLY)) == -1){ PERROR; exit(errno); } if((fdo = open(argv[2], O_WRONLY|O_CREAT, 0644)) == -1){ PERROR; exit(errno); } /* copy BUFSIZ bytes (or total read on last block) fast as you can.
 */ while((len = read(fdi, (void *) buffer, BUFSIZ)) > 0){ if(write(fdo, (void*)buffer, len) == -1){ PERROR; exit(errno); } } if(len == -1){ PERROR; exit(errno); } /* close and fsync the files */ if(fsync(fdo) ==-1){ PERROR; exit(errno); } if(close(fdo) == -1){ PERROR; exit(errno); } if(close(fdi) == -1){ PERROR; exit(errno); } /* if it survived to here, all worked. */ exit(0); } ```
Actually, in C you want to just call `fsync()` on the one file descriptor, not `sync()` (or the "sync" command) which signals the kernel to `flush` all buffers to disk system-wide. If you `strace` (getting Linux-specific here) the JVM you should be able to observe an `fsync()` or `fdatasync()` system call being made on your output file. That would be what I'd expect the `getFD()`.`sync()` call to do. I assume `c.force(true)` simply flags to NIO that `fsync()` should be called after each write. It might simply be that the JVM you're using doesn't actually implement the `sync()` call? I'm not sure why you weren't seeing any difference when calling "sync" as a command: but obviously, after the first sync invocation, subsequent ones are usually quite a lot faster. Again, I'd be inclined to break out `strace` (truss on Solaris) as a "what's actually happening here?" tool.
Really force file sync/flush in Java
[ "java", "storage" ]
I have a **variable string** that contains well-formed and valid XML. I need to use JavaScript code to parse this feed. How can I accomplish this using (browser-compatible) JavaScript code?
> *Update: For a more correct answer see [Tim Down's answer](https://stackoverflow.com/a/8412989/2172).* Internet Explorer and, for example, Mozilla-based browsers expose different objects for XML parsing, so it's wise to use a JavaScript framework like [jQuery](http://jquery.com/) to handle the cross-browser differences. A really basic example is: ``` var xml = "<music><album>Beethoven</album></music>"; var result = $(xml).find("album").text(); ``` Note: As pointed out in comments, jQuery does not really do any XML parsing whatsoever; it relies on the DOM innerHTML method and will parse it like it would any HTML, so be careful when using HTML element names in your XML. But I think it works fairly well for simple XML 'parsing'; it's probably not suggested for intensive or 'dynamic' XML parsing, where you don't know upfront what XML will come down and whether everything will parse as expected.
**Updated answer for 2017** The following will parse an XML string into an XML document in all major browsers. Unless you need support for IE <= 8 or some obscure browser, you could use the following function: ``` function parseXml(xmlStr) { return new window.DOMParser().parseFromString(xmlStr, "text/xml"); } ``` If you need to support IE <= 8, the following will do the job: ``` var parseXml; if (typeof window.DOMParser != "undefined") { parseXml = function(xmlStr) { return new window.DOMParser().parseFromString(xmlStr, "text/xml"); }; } else if (typeof window.ActiveXObject != "undefined" && new window.ActiveXObject("Microsoft.XMLDOM")) { parseXml = function(xmlStr) { var xmlDoc = new window.ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async = "false"; xmlDoc.loadXML(xmlStr); return xmlDoc; }; } else { throw new Error("No XML parser found"); } ``` Once you have a `Document` obtained via `parseXml`, you can use the usual DOM traversal methods/properties such as `childNodes` and `getElementsByTagName()` to get the nodes you want. Example usage: ``` var xml = parseXml("<foo>Stuff</foo>"); alert(xml.documentElement.nodeName); ``` If you're using jQuery, from version 1.5 you can use its built-in [`parseXML()`](http://api.jquery.com/jQuery.parseXML/) method, which is functionally identical to the function above. ``` var xml = $.parseXML("<foo>Stuff</foo>"); alert(xml.documentElement.nodeName); ```
XML parsing of a variable string in JavaScript
[ "javascript", "xml", "parsing" ]
I want to retrieve my domain url in asp.net. For example, if my url is: ``` http://www.mydomain.com/blog/currentPage.aspx?id=156 ``` I just want the part ``` http://www.mydomain.com/blog/ ``` Can anyone help me out?
You should do some string manipulation on the result from this answer: [how to get url of the current page in c#](https://stackoverflow.com/questions/593709/how-to-get-url-of-the-current-page-in-c/593715#593715). In addition, take a look at [segments](http://msdn.microsoft.com/en-us/library/system.uri.segments.aspx).
You have many options: ``` string root = this.ResolveUrl("~") ``` Or ``` Uri requestUri = Context.Request.Url; string baseUrl = requestUri.Scheme + Uri.SchemeDelimiter + requestUri.Host + (requestUri.IsDefaultPort ? "" : ":" + requestUri.Port); ``` Or ``` string baseUrl = Request.Url.GetLeftPart(UriPartial.Authority); ``` If you want /blog appended to the last two, add ``` + Request.ApplicationPath ```
Retrieve domain url
[ "c#", "asp.net", "url" ]
Has anyone got a Java 1.6 applet working on Mac OS X? Check it on gemal.dk/browserspy/java.html. Using the object and applet tags, I get Java 1.5.0\_16 (1.5.0\_16-133) from Apple Inc. And you? On Windows and Linux, 1.6 works gracefully :S
It was a problem with OS X itself, which didn't support 1.6 for applets. Now everything is updated and it works, on 10.6.6 at least. I wrote it here so somebody doesn't confuse this problem with another.
On Leopard, open `/Applications/Utilities/Java/Java Preferences.app`, and select Java 6 as the default for your applets. This won't work, of course, if you don't have an Intel Mac running Leopard (at least, I think it requires Leopard... perhaps there's an installer for Tiger). In either event, not reliable if you need to deploy 1.6-based applets to users, but works tickety-boo for your own surfing.
How to get java 1.6 applet working on mac os x
[ "java", "applet", "java-6" ]
I am in need of a performance-oriented hash function implementation in C++ for a hash table that I will be coding. I looked around already and only found questions asking what's a good hash function "in general". I've considered CRC32 (but where to find a good implementation?) and a few cryptography algorithms. My table, though, has very specific requirements. Here's what the table will be like: ``` 100,000 items max 200,000 capacity (so the load is 0.5) hashing a 6-character string which is part of an English sentence examples: "become" "and he" ", not " ``` The *number one priority* of my hash table is quick search (retrieval). Quick insertion is not important, but it will come along with quick search. Deletion is not important, and re-hashing is not something I'll be looking into. To handle collisions, I'll probably be using *separate chaining* as described [here](http://eternallyconfuzzled.com/tuts/datastructures/jsw_tut_hashtable.aspx). I have already looked at [this article](http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx), but would like an opinion from those who have handled such a task before.
Now assuming you want a hash, and want something **blazing fast** that would work in your case, because your strings are just 6 chars long you could use this magic: ``` size_t precision = 2; //change the precision with this size_t hash(const char* str) { return (*(size_t*)str)>> precision; } ``` CRC is for slowpokes ;) **Explanation:** This works by casting the contents of the string pointer to "look like" a size\_t (int32 or int64 based on the optimal match for your hardware). So the contents of the string are interpreted as a raw number, no worries about characters anymore, and you then bit-shift this by the precision needed (you tweak this number to the best performance, I've found 2 works well for hashing strings in sets of a few thousand). Also the really neat part is any decent compiler on modern hardware will hash a string like this in 1 assembly instruction, hard to beat that ;)
This simple polynomial works surprisingly well. I got it from Paul Larson of Microsoft Research who studied a wide variety of hash functions and hash multipliers. ``` unsigned hash(const char* s, unsigned salt) { unsigned h = salt; while (*s) h = h * 101 + (unsigned) *s++; return h; } ``` `salt` should be initialized to some *randomly* chosen value before the hashtable is created to defend against [hash table attacks](http://www.georgevreilly.com/blog/2009/03/02/HashTableAttacks.html). If this isn't an issue for you, just use 0. The size of the table is important too, to minimize collisions. Sounds like yours is fine.
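For illustration, the same polynomial ported to Java (this port is my sketch, not Larson's code; behaviour matches the C version for ASCII input, modulo signed vs. unsigned overflow, which wraps identically modulo 2^32):

```java
public class LarsonHash {
    // h = h * 101 + c, folded over the characters; salt seeds the hash.
    static int hash(String s, int salt) {
        int h = salt;
        for (int i = 0; i < s.length(); i++) {
            h = h * 101 + s.charAt(i);
        }
        return h;
    }
}
```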
Have a good hash function for a C++ hash table?
[ "c++", "hash", "hashtable" ]
I have a `DropDownList` inside an `UpdatePanel` that is populated on postback from a `SqlDataSource`. It has a parameter which is another control. I sometimes need multiple postbacks, but what happens is that each time the update panel refreshes, items are added to the `DropDownList`. So the `DropDownList` ends up having data that is incorrect, or repeated data. I have the `AppendDataBoundItems` property set to `true` because I need the first item to be blank. How can I overcome this problem? Is there another way to have a blank first item? (This `DropDownList` is in an ASP.NET 2.0 web app, and codebehind is in C#)
Instead of using `AppendDataboundItems='true'` (which will cause the problem you are talking about), respond to the `DataBound` event for the `DropDownList` and then add your "blank" item to the top of the list. ``` <asp:DropDownList runat="server" ID="MyList" ondatabound="MyListDataBound"></asp:DropDownList> ``` Then in your code behind: ``` protected void MyListDataBound(object sender, EventArgs e) { MyList.Items.Insert(0, new ListItem("- Select -", "")); } ```
There are good answers here but I felt the need to include more information because there are multiple options that work and we need to decide which to use. First, we should understand `AppendDataBoundItems`. If `AppendDataBoundItems = "true"`, `ListItems` are added to the `DropDownList` without clearing out the old ones. Otherwise, the `DropDownList` is cleared out before the next `DataBind`. [MSDN AppendDataBoundItems doc](https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.listcontrol.appenddatabounditems(v=vs.110).aspx) There are basically 2 options covered by most of the answers: **1. Define a blank option in html and add the ListItems from the database to the DropDownList only once.** Notice 3 things here: * Blank `ListItem` is defined in html * `AppendDataBoundItems="true"` * `DataBind` is NOT called on postbacks or when the `DropDownList` item count is > 1 Source: ``` <asp:DropDownList ID="MyList" runat="server" AppendDataBoundItems="true" DataValueField="Id" DataTextField="Name" > <asp:ListItem Text="- Select One -" Value="" /> </asp:DropDownList> ``` Code behind: ``` protected void Page_Load(object sender, System.EventArgs e) { if (MyList.Items.Count <= 1) { MyList.DataSource = MyDataSource; MyList.DataBind(); } } ``` Note: I like the logic of checking the count vs checking `IsPostBack`. Though PostBacks are often the cause of duplicate databinding, it is possible to cause it in other ways. Checking the item count is basically just checking to see if it's already been loaded. OR (option to use `IsPostBack` instead) ``` protected void Page_Load(object sender, System.EventArgs e) { if (!IsPostBack) { MyList.DataSource = MyDataSource; MyList.DataBind(); } } ``` **2. Clear and reload the DropDownList on each page refresh.** Notice 3 differences from the first option: * `AppendDataBoundItems="false"` (if it is not defined then `false` is its default value) * Blank `ListItem` is added in code behind. 
We can't define it in html because with `AppendDataBoundItems="false"`, it would be cleared out. * `DataBind` is called on every `Page_Load` Source: ``` <asp:DropDownList ID="MyList" runat="server" DataValueField="Id" DataTextField="Name" OnDataBound="MyList_DataBound" > </asp:DropDownList> ``` Code behind: ``` protected void Page_Load(object sender, System.EventArgs e) { MyList.DataSource = MyDataSource; MyList.DataBind(); } protected void MyList_DataBound(object sender, EventArgs e) { MyList.Items.Insert(0, new ListItem("- Select One -", "")); } ```
DropDownList AppendDataBoundItems (first item to be blank and no duplicates)
[ "c#", "asp.net", "drop-down-menu", "webforms", "append" ]
What a great site this is; I have lurked on here reading others' questions for ages, but now I have one of my own. My workmate wrote a class very like the one below. As soon as I saw it I knew it wouldn't work, but I have no explanation for him as to why it doesn't work. What he expected when declaring it as a `ControlItem<Button>` is that the Draw(Button) method would be called when using the base to call Draw(). Instead we always end up throwing the exception. Is this a covariance issue? ``` public abstract class ControlItem { public ControlItem() { } abstract public void Draw(); } public class ControlItem<T> : ControlItem where T : Control, new() { public T MyControl { get; set; } private ControlItem() { } public ControlItem(T control) : base() { MyControl = control; } public override void Draw() { Draw(this.MyControl); } public void Draw(Control cntrl) { throw new NotImplementedException(); } public void Draw(Button button) { //Do some work } } ```
> Is this a covariance issue? No, it's a static versus dynamic dispatch issue. Static dispatch means an overloaded method call is bound to the appropriate type at compile time based on the type of the *variable* passed in: ``` class Base { } class Derived : Base { } class Foo { void Test() { Base a = new Base(); Overload(a); // prints "base" Derived b = new Derived(); Overload(b); // prints "derived" // dispatched based on c's declared type! Base c = new Derived(); Overload(c); // prints "base" } void Overload(Base obj) { Console.WriteLine("base"); } void Overload(Derived obj) { Console.WriteLine("derived"); } } ``` Dynamic dispatch means the function is bound at runtime based on the actual type of the object stored in the variable: ``` class Base { public virtual void Override() { Console.WriteLine("base"); } } class Derived : Base { public override void Override() { Console.WriteLine("derived"); } } class Foo { void Test() { Base a = new Base(); a.Override(); // prints "base" Derived b = new Derived(); b.Override(); // prints "derived" // dynamically dispatched based type of object stored in c! Base c = new Derived(); c.Override(); // prints "derived" } void Overload(Base obj) { Console.WriteLine("base"); } void Overload(Derived obj) { Console.WriteLine("derived"); } } ``` The last print shows the difference between the two. C#, like most class-based OOP languages, only supports dynamic dispatch for the `this` implicit parameter (referred to as "single dispatch). In other words *overridden* methods are dynamically dispatched, but *overloaded* methods are not. The typical solution to fake multiple dispatch in single dispatch languages is by using the [visitor pattern](http://en.wikipedia.org/wiki/Visitor_pattern), which would work for you here.
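The visitor pattern mentioned at the end can be sketched compactly (shown in Java for brevity, with illustrative names; the C# shape is the same): the virtual accept() call recovers the runtime type, so the correct draw() overload is chosen at runtime.

```java
// Double dispatch via a visitor: accept() is dynamically dispatched on the
// control, and inside it `this` has the concrete static type, so the right
// overload of draw() is bound.
public class VisitorDemo {
    interface Control { String accept(Drawer d); }

    static class Button implements Control {
        public String accept(Drawer d) { return d.draw(this); }
    }

    static class Label implements Control {
        public String accept(Drawer d) { return d.draw(this); }
    }

    interface Drawer {
        String draw(Button b);
        String draw(Label l);
    }

    static class SimpleDrawer implements Drawer {
        public String draw(Button b) { return "drew button"; }
        public String draw(Label l) { return "drew label"; }
    }
}
```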
This is because the compiler only knows for sure that the type will be a control, so it will always bind to the method with the Control argument. You need to add an explicit check in the Draw() method if you need to handle them differently: ``` public override void Draw() { Button btn = MyControl as Button; if (btn != null) { Draw(btn); } else { Draw(this.MyControl); } } ``` Note that this is not very "generic"... but it may do the trick in your special case.
Using generics its declared as a Button but is treated as a Control internal to class. Why?
[ "c#", "generics", ".net-2.0", "overriding" ]
I have gotten quite familiar with django's email sending abilities, but I haven't seen anything about it receiving and processing emails from users. Is this functionality available? A few google searches have not turned up very promising results. Though I did find this: [Receive and send emails in python](https://stackoverflow.com/questions/348392/receive-and-send-emails-in-python) Am I going to have to roll my own? If so, I'll be posting that app faster than you can say... whatever you say. thanks, Jim **update**: I'm not trying to make an email server, I just need to add some functionality where you can email an image to the site and have it pop up in your account.
There's an app called [jutda-helpdesk](http://code.google.com/p/jutda-helpdesk/) that uses Python's `poplib` and `imaplib` to process incoming emails. You just have to have an account somewhere with POP3 or IMAP access. This is adapted from their [get\_email.py](http://code.google.com/p/jutda-helpdesk/source/browse/trunk/management/commands/get_email.py): ``` def process_mail(mb): print "Processing: %s" % mb if mb.email_box_type == 'pop3': if mb.email_box_ssl: if not mb.email_box_port: mb.email_box_port = 995 server = poplib.POP3_SSL(mb.email_box_host, int(mb.email_box_port)) else: if not mb.email_box_port: mb.email_box_port = 110 server = poplib.POP3(mb.email_box_host, int(mb.email_box_port)) server.getwelcome() server.user(mb.email_box_user) server.pass_(mb.email_box_pass) messagesInfo = server.list()[1] for msg in messagesInfo: msgNum = msg.split(" ")[0] msgSize = msg.split(" ")[1] full_message = "\n".join(server.retr(msgNum)[1]) # Do something with the message server.dele(msgNum) server.quit() elif mb.email_box_type == 'imap': if mb.email_box_ssl: if not mb.email_box_port: mb.email_box_port = 993 server = imaplib.IMAP4_SSL(mb.email_box_host, int(mb.email_box_port)) else: if not mb.email_box_port: mb.email_box_port = 143 server = imaplib.IMAP4(mb.email_box_host, int(mb.email_box_port)) server.login(mb.email_box_user, mb.email_box_pass) server.select(mb.email_box_imap_folder) status, data = server.search(None, 'ALL') for num in data[0].split(): status, data = server.fetch(num, '(RFC822)') full_message = data[0][1] # Do something with the message server.store(num, '+FLAGS', '\\Deleted') server.expunge() server.close() server.logout() ``` `mb` is just some object to store all the mail server info, the rest should be pretty clear. You'll probably need to check the docs on `poplib` and `imaplib` to get specific parts of the message, but hopefully this is enough to get you going.
I know this question is pretty old now but just thought I'd add for future reference that you might want to give <http://cloudmailin.com> a go. We have quite a few django users using the system and it should be a little simpler than the proposed solution.
django to send AND receive email?
[ "python", "django", "email", "pop3", "django-email" ]
I've got a little problem. There is ``` List<List<UInt32>> temp = new List<List<UInt32>>(); ``` For example, there are two `List<UInt32>` records within the List temp; however, when I try to do something like `temp.RemoveAt(0);` it doesn't remove the first row (`List<UInt32>`). Why is that? Am I doing something wrong? **Update** Here is the code that doesn't work. I know there are 2 `List<UInt32>` within the main List and I know that the first `List<UInt32>.Count` is 1, but when I check the same position after this code, the first one's Count is still one. ``` int i = 0; bool boolA = true; while (boolA) { if (temp[i].Count == 1) { temp.RemoveAt(i); temps++; } if (i == temp.Count - 1) boolA = false; i++; } ```
You are removing items, then advancing i, therefore skipping the item that moved into the previous position that i represented. e.g. items = 5, i = 0 -> remove i => items = 4 -> i++ => i = 1 -> remove i => items = 3 -> i++ => i = 2 -> remove i => items = 2 -> i++ => i = 3 -> remove i => index out of range
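One common fix (illustrated here in Java; the question's code is C#, but the index arithmetic is identical) is to walk the list backwards, so a removal never shifts an element you have yet to visit:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveDemo {
    // Remove every inner list whose size is 1, iterating from the end so
    // that RemoveAt-style index shifting cannot skip elements.
    static List<List<Integer>> removeSingletons(List<List<Integer>> lists) {
        for (int i = lists.size() - 1; i >= 0; i--) {
            if (lists.get(i).size() == 1) {
                lists.remove(i);
            }
        }
        return lists;
    }

    // Builds [[1], [1, 2], [3]]; only [1, 2] should survive.
    static int demo() {
        List<List<Integer>> data = new ArrayList<>();
        data.add(new ArrayList<>(Arrays.asList(1)));
        data.add(new ArrayList<>(Arrays.asList(1, 2)));
        data.add(new ArrayList<>(Arrays.asList(3)));
        return removeSingletons(data).size();
    }
}
```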
Your question is unclear, but it sounds like your user interface is bound to the list. If so, you won't see changes because the list doesn't support change notification. You should use a `BindingList<T>` (for windows forms) or `ObservableCollection<T>` (for WPF) to get change notification.
How to remove the element from List in C#?
[ "c#", "generics", "list" ]
What's the simplest way to serialize a bean to a string using GWT? I prefer not to use *GWT.create()* invocations.
**Disclaimer:** Serializing a bean on the URL isn't such a great idea for GWT. I've learned that if you need to put data on the URL, it should be as little as possible and only what is necessary to restore the state of your page. Look at how Gmail uses its history tokens and you'll see it is pretty minimal. **With that disclaimer out of the way:** For a GWT project I worked on I simply wrote out values of the bean separated by a delimiter. When reading the values back in, I used the String.split() method to get an array. With that array I assign the values back to the right bean properties. In code: ``` public class Sample { private int a; private boolean b; private String c; //getters and setters for fields not shown public String toHistoryToken(){ return a+"/"+b+"/"+c; } public void fromHistoryToken(String token){ String[] values=token.split("/"); a=Integer.parseInt(values[0]); b=Boolean.parseBoolean(values[1]); c=values[2]; } } ``` For more complicated scenarios you may have to do more complicated things. For example, for nested objects, you have to write the code to pass the values to the child object(s). Also, be aware that you have to make sure that any values you use don't contain the delimiter. So if you know your Strings might contain "/" then you might have to do a replace() operation on them to escape any nested delimiters.
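The escaping idea from the last paragraph can be sketched like this (percent-style escaping is my choice here, not part of the original answer; the order of the replace() calls matters so that escaped text round-trips):

```java
// Escape the delimiter (and the escape character itself) before joining,
// so field values containing "/" survive the round trip.
public class TokenCodec {
    static String encode(String value) {
        return value.replace("%", "%25").replace("/", "%2F");
    }

    static String decode(String value) {
        // Reverse order of encode: restore "/" first, then "%".
        return value.replace("%2F", "/").replace("%25", "%");
    }

    static String toToken(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append('/');
            sb.append(encode(fields[i]));
        }
        return sb.toString();
    }

    static String[] fromToken(String token) {
        String[] parts = token.split("/", -1);
        for (int i = 0; i < parts.length; i++) {
            parts[i] = decode(parts[i]);
        }
        return parts;
    }
}
```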
I'm not sure I understand what you're ultimately trying to accomplish. If you really just want to send strings back and forth, you don't really need to do *anything*; you get that for free using the normal GWT RPC mechanism. Just create a method that returns a `java.lang.String`, create the accompanying implementation and Async interface, and you're done. If on the other hand you really want to send a bean, just make sure it has a zero-arg constructor and implements `Serializable` or `IsSerializable`.
Serialize Object in GWT
[ "java", "serialization", "gwt", "javabeans" ]
I currently have the following in JQuery 1.3.2 ``` for (i = 0; i < totalPages; i++) { var newDiv = $("<a href=\"#\">").append(i+1).click(function() { alert(i+1); }); $('#pageLinks').append(newDiv).append(" "); } ``` This outputs a list of numbers as required, but the alert when clicking the generated element is returning the totalPages value, not the value of i during the loop. How do I get the click event to alert the value of i instead? Thanks
Try this ``` for (i = 0; i < totalPages; i++) { var newDiv = $("<a href=\"#\">").append(i+1).click(function() { alert(this.text); }); $('#pageLinks').append(newDiv).append(" "); } ```
This code should sort your problem out: ``` for (i = 0; i < totalPages; i++) { var newDiv = $("<a href=\"#\">") .append(i+1) .click( function() { alert($(this).html()); } ); $('#pageLinks').append(newDiv).append(" "); } ```
JQuery Add HTML and Event
[ "javascript", "jquery", "jquery-1.3.2" ]
I realize that similar questions have been asked about this before here on SO, but let me describe exactly what I need to do: I have a set of tests which run a command line java application and I'd like to add memory profiling to them. One option I see would be to add code (possibly using 3rd party tools/libraries) to my application that would provide a memory snapshot. Another option would be to use a third party tool which manages/instruments my application and the JVM for me (and ideally does not require me to change my code). I'm thinking of something like Valgrind but for Java. Also open source if at all possible. What I'd really like to do is set up the memory tests so that my memory usage is being monitored at regular intervals, let's say every second, and dumped to a text file. That way I'd be able to see if the memory usage oscillates/increases/decreases over time. I'll also be able to calculate the max and min peaks. Has anyone here done anything like this? Thanks in advance.
With something like JProfiler all you need to do is add certain parameters to the JVM. It uses JVMTI. I think you should be reading up on profilers and exactly what they can do for you. I also suggest reading up on JVMTI. The JVM™ Tool Interface (JVM TI) is a new native programming interface for use by tools. It provides both a way to inspect the state and to control the execution of applications running in the Java virtual machine (JVM). JVM TI supports the full breadth of tools that need access to JVM state, including but not limited to: profiling, debugging, monitoring, thread analysis, and coverage analysis tools. Note: JVM TI replaces the Java Virtual Machine Profiler Interface (JVMPI) and the Java Virtual Machine Debug Interface (JVMDI). JVMPI and JVMDI will be removed in the next major release of J2SE™.
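Short of attaching a profiler agent, the regular-interval sampling the question asks for can be approximated in-process; a minimal sketch (the interval handling and what you do with the samples are assumptions, not from any particular tool):

```java
import java.util.ArrayList;
import java.util.List;

public class MemorySampler {
    // Currently-used heap, in bytes, as reported by the JVM itself.
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Poll the heap `count` times, `intervalMillis` apart; the caller can
    // dump the samples to a text file and compute min/max peaks from them.
    static List<Long> sample(int count, long intervalMillis) {
        List<Long> samples = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            samples.add(usedBytes());
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return samples;
    }
}
```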
[Yourkit](http://www.yourkit.com/overview/index.jsp) also has a pretty good profiler
Best way to profile memory usage in a Java application?
[ "", "java", "memory", "profiling", "profile", "" ]
If I transfer a file over the internet from computer A to computer B using C# using one of the many ways to do file transfers with .NET, What is the best way to tell if the file on computer A and computer B is the same? I am thinking that MD5 hashes would be a good way to tell. It seems like that is a widely accepted way to tell. However, I am just double checking to see if there is not a better way out there hidden in the .NET framework. Thank you Tony
MD5 is the way to go.
CRC32 or Adler32, which are a lot faster then MD5. You should use MD5 if you need to check if file was manipulated with malicious intent. If there is no need to, than it's overkill.
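The checksum-vs-hash trade-off described in the answers above is language-agnostic; as a purely illustrative sketch (in Python rather than C#, just to keep it short and runnable), one way to combine the two is a cheap CRC32 pre-check followed by an MD5 confirmation, reading in chunks so large files never have to be loaded whole:

```python
import hashlib
import zlib

CHUNK = 64 * 1024  # read in 64 KiB pieces so huge files never load whole

def md5_of_file(path):
    """Hex MD5 digest of a file, computed incrementally."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.hexdigest()

def crc32_of_file(path):
    """CRC32 of a file as an unsigned 32-bit integer."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def files_match(path_a, path_b):
    """Cheap CRC32 pass first; confirm with MD5 only when CRC32 agrees."""
    if crc32_of_file(path_a) != crc32_of_file(path_b):
        return False
    return md5_of_file(path_a) == md5_of_file(path_b)
```

The same chunked-read pattern applies to the .NET `MD5.ComputeHash` approach; the function names here are invented for the sketch.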
Best way to tell if two files are the same?
[ "", "c#", "file", "" ]
We are interfacing with a piece of hardware developed here, and allowing access via the web. The firmware for this device is written in C, and part of that firmware is about 5000 lines of code that parses huge binary files for config data, compiled to a .dll. Is there any way under heaven that I can access a .dll, written in C, from PHP? I'm betting it's a no, but before starting a quest to duplicate 5000 lines of C in PHP, I thought I'd give the experts here a shot. Thanks all. JH
Yes, you can use it as a PHP extension (extensions are written in C), but you might need some modifications, etc.: [<http://devzone.zend.com/node/view/id/1021>](http://devzone.zend.com/node/view/id/1021) I think [this question](https://stackoverflow.com/questions/705443) is related to yours.
You might want to check out [SWIG](http://www.swig.org/), a general tool for wrapping libraries so that they may be called from a variety of languages. PHP is supported by SWIG, as are Perl and Lua (the targets I've personally used). Quoting from the features list at the [SWIG website](http://www.swig.org/compare.html): > SWIG currently generates wrapper code > for eighteen different target > languages: > > * Allegro CL > * C# > * CFFI > * CLISP > * Chicken > * Guile > * Java > * Lua > * Modula-3 > * Mzscheme > * OCAML > * Octave > * Perl > * PHP > * Python > * R > * Ruby > * Tcl > * UFFI > > In addition to this, the parse tree > can be exported as XML and Lisp > s-expressions. Experimental work is > also available for a Pike module. Some of its features are dependent on back-end support in the per-language wrapper generators, but in general it provides easy to use wrappers for passing all plain-data value types in and out of functions. Where the target language has the concept, it can usually map object models as well.
Can I use a dll written in C with PHP?
[ "", "php", "c", "" ]
From this post [How to use ROW\_NUMBER in the following procedure?](https://stackoverflow.com/questions/706720/how-to-use-rownumber-in-the-following-procedure/706756#706756) There are two versions of answers where one uses a `sub-query` and the other uses a `CTE` to solve the same problem. Now then, what is the advantage of using a `CTE (Common Table Expression)` over a `sub-query` (and thus a more ***readable*** description of what the query is actually doing)? The only advantage of using a `CTE` over `sub-select` is that I can actually *name* the `sub-query`. Are there any other differences between those two *when a CTE is used as a simple (non-recursive) CTE?*
In the sub-query vs *simple* (non-recursive) CTE versions, they are probably very similar. You would have to use the profiler and actual execution plan to spot any differences, and that would be specific to your setup (so we can't tell you the answer in full). In *general*; A CTE can be used recursively; a sub-query cannot. This makes them especially well suited to tree structures.
The main advantage of the [Common Table Expression](http://msdn.microsoft.com/en-us/library/ms190766.aspx) (when not using it for [recursive queries](http://msdn.microsoft.com/en-us/library/ms186243.aspx)) is encapsulation, instead of having to declare the sub-query in every place you wish to use it, you are able to define it once, but have multiple references to it. However, this does *not* mean that it is executed only once (as per [previous iterations of this very answer](https://stackoverflow.com/revisions/706983/3), thank you to all those that have commented). The query definitely has the potential to be executed multiple times if referenced multiple times; the query optimizer ultimately makes the decision as to *how* the CTE should be interpreted.
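Since SQLite also supports non-recursive CTEs, the equivalence (and the naming benefit) is easy to demonstrate with a runnable sketch using Python's built-in `sqlite3` module; the table and column names below are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
    INSERT INTO orders (customer, amount) VALUES
        ('alice', 10.0), ('alice', 5.0), ('bob', 7.5);
""")

# Sub-query version: the derived table is spelled out inline where it is used.
subquery_sql = """
    SELECT customer, total
    FROM (SELECT customer, SUM(amount) AS total
          FROM orders
          GROUP BY customer) AS per_customer
    WHERE total > 6
"""

# CTE version: the same derived table, but named once up front.
cte_sql = """
    WITH per_customer AS (
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer)
    SELECT customer, total
    FROM per_customer
    WHERE total > 6
"""

rows_subquery = conn.execute(subquery_sql).fetchall()
rows_cte = conn.execute(cte_sql).fetchall()
```

Both statements return the same rows; as the answer notes, whether the named expression is inlined or evaluated repeatedly is left to the query optimizer.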
Difference between CTE and SubQuery?
[ "", "sql", "sql-server", "t-sql", "subquery", "common-table-expression", "" ]
I am using the zend framework to get info from the audioscrobbler api. The response format is like this: ``` <recenttracks user="RJ"> <track nowplaying="true"> <artist mbid="2f9ecbed-27be-40e6-abca-6de49d50299e">Aretha Franklin</artist> <name>Sisters Are Doing It For Themselves</name> <mbid/> <album mbid=""/> <url>www.last.fm/music/Aretha+Franklin/_/Sisters+Are+Doing+It+For+Themselves</url> <date uts="1213031819">9 Jun 2008, 17:16</date> <streamable>1</streamable> </track> ... </recenttracks> ``` I am accessing elements as such: ``` $track->name ``` How can I get the nowplaying value?
According to the [Zend Framework API Documentation](http://framework.zend.com/apidoc/core/Zend_Service/Audioscrobbler/Zend_Service_Audioscrobbler.html#artistGetTopTracks) you are getting a SimpleXML object. You can read an attribute of a `SimpleXMLElement` with its [attributes()](http://ch.php.net/manual/en/function.simplexml-element-attributes.php) method: ``` $track->attributes()->nowplaying ```
You can give these a try: ``` $track['nowplaying'] ``` or: ``` $track->getAttrib('nowplaying') ``` or: ``` $attributes = $track->attributes(); echo $attributes['nowplaying'] ``` I don't see it in the docs anywhere though.
How can I get the currently playing song from the audioscrobbler api?
[ "", "php", "zend-framework", "rest", "last.fm", "" ]
I have tried to keep up with C++ since they introduced 1998 ANSI/ISO C++. I absorbed the new concepts and tried to understand them. I learned about exception handling, templates, and namespaces. I've read about the new cast mechanisms and worked with the STL library. All of these concepts required a lot of energy. But now I am somewhat worried about the future of C++ when having a look at the new [C++0x](http://en.wikipedia.org/wiki/C%2B%2B0x) standard. Things are getting more and more complicated. This language is becoming a monster. I'm not sure that I want to keep up with the language anymore, since I don't do my day-to-day hacking in C++ anyway. I mostly use interpreted or bytecode languages. So why should I bother to learn this difficult, yet exceptionally powerful, language? I can do 95% of my business with Python et al. With the remaining 5%, I can deal with plain old C++ or C without hassle. What do you think?
Here's what Bruce Eckel { author of two of the **so-called** best [C++ books](http://www.google.com/url?sa=t&source=web&ct=res&cd=8&url=http%3A%2F%2Fwww.mindview.net%2FBooks%2FTICPP%2FThinkingInCPP2e.html&ei=7R_rSbTDEoyJsAbyvZyOBw&usg=AFQjCNH76VGwE3vo7pXHye_fXySGioXihQ) } [said about C++](http://www.artima.com/weblogs/viewpost.jsp?thread=252441) a few weeks ago: > That said, I hardly ever use C++ > anymore. When I do, it's either > examining legacy code, or to write > performance-critical sections, > typically as small as possible to be > called from other code (my preferred > approach is to quickly write an app in > Python, then profile it and if > necessary improve performance by > calling small portions of C++ using > Python's ctypes library). > > Because I was on the C++ Standards > Committee, I saw these decisions being > made. They were all extremely > carefully considered, far more so than > many of the decisions made in Java. > > However, as people have rightly > pointed out, **the resulting language > was complicated and painful to use and > full of weird rules that I forget as > soon as I'm away from it for a little > while** -- and I figured out those rules > from first principles while I wrote > books, not just by memorizing them. Additionally, you should read [this thread](https://stackoverflow.com/questions/352113/what-is-the-future-of-c-language-developers/) and [Danny Kalev's predictions on C++](https://stackoverflow.com/questions/352113/what-is-the-future-of-c-language-developers/518222#518222). **However, the growing complexity of C++ will create pressure towards splitting the language into quasi-official dialects. We can already see this trend today; it will probably intensify in the future.** **EDIT**: You should take a look at this discussion, too: [C++ - Anyone else feel like C++ is getting too complicated?](http://www.velocityreviews.com/forums/t676282-anyone-else-feel-like-c-is-getting-too-complicated.html)
***Everyone*** uses a subset of C++. For almost all application programming in C++, whether server or client side, that subset is manageable. In my opinion, the only folks that need to stay on top of absolutely every nuance of the language are the library writers -- people implementing Boost, STL, Loki, etc. But I would absolutely use the language that fits the task. If Python is more readable and more maintainable than C++ for your job, and you don't need what C++ offers, then certainly stick with Python.
Should I reject C++ because it's becoming a juggernaut?
[ "", "c++", "programming-languages", "c++11", "" ]
I have written a script in python that uses cookies and POST/GET. I also included proxy support in my script. However, when one enters a dead proxy, the script crashes. Is there any way to check if a proxy is dead/alive before running the rest of my script? Furthermore, I noticed that some proxies don't handle cookies/POST headers properly. Is there any way to fix this?
The simplest way is to simply catch the IOError exception from urllib: ``` try: urllib.urlopen( "http://example.com", proxies={'http':'http://example.com:8080'} ) except IOError: print "Connection error! (Check proxy)" else: print "All was fine" ``` Also, from [this blog post - "check status proxy address"](http://love-python.blogspot.com/2008/07/check-status-proxy-address.html) (with some slight improvements): **for python 2** ``` import urllib2 import socket def is_bad_proxy(pip): try: proxy_handler = urllib2.ProxyHandler({'http': pip}) opener = urllib2.build_opener(proxy_handler) opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib2.install_opener(opener) req=urllib2.Request('http://www.example.com') # change the URL to test here sock=urllib2.urlopen(req) except urllib2.HTTPError, e: print 'Error code: ', e.code return e.code except Exception, detail: print "ERROR:", detail return True return False def main(): socket.setdefaulttimeout(120) # two sample proxy IPs proxyList = ['125.76.226.9:80', '213.55.87.162:6588'] for currentProxy in proxyList: if is_bad_proxy(currentProxy): print "Bad Proxy %s" % (currentProxy) else: print "%s is working" % (currentProxy) if __name__ == '__main__': main() ``` **for python 3** ``` import urllib.request import socket import urllib.error def is_bad_proxy(pip): try: proxy_handler = urllib.request.ProxyHandler({'http': pip}) opener = urllib.request.build_opener(proxy_handler) opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib.request.install_opener(opener) req=urllib.request.Request('http://www.example.com') # change the URL to test here sock=urllib.request.urlopen(req) except urllib.error.HTTPError as e: print('Error code: ', e.code) return e.code except Exception as detail: print("ERROR:", detail) return True return False def main(): socket.setdefaulttimeout(120) # two sample proxy IPs proxyList = ['125.76.226.9:80', '25.176.126.9:80'] for currentProxy in proxyList: if is_bad_proxy(currentProxy): print("Bad Proxy %s" % (currentProxy)) else: print("%s is working" % (currentProxy)) if __name__ == '__main__': main() ``` Remember this could double the time the script takes, if the proxy is down (as you will have to wait for two connection-timeouts). Unless you specifically have to know the proxy is at fault, handling the IOError is far cleaner, simpler and quicker.
you can use the Proxy-checker library which is as simple as this ``` from proxy_checker import ProxyChecker checker = ProxyChecker() checker.check_proxy('<ip>:<port>') ``` output : ``` { "country": "United States", "country_code": "US", "protocols": [ "socks4", "socks5" ], "anonymity": "Elite", "timeout": 1649 } ``` with the possibility of generating your own proxies and check them with two lines of code
Proxy Check in python
[ "", "python", "http", "proxy", "" ]
I need to build a custom simple non-authoritative caching DNS server in C/C++. Any guidance? Links? Samples?
There's no such thing as a "simple" caching DNS server, particularly if you want decent security. Recent DNS attacks have shown that the caching function in recursive DNS servers is particularly vulnerable. Re-evaluate whether you actually need local caching of your own. If you don't, you're probably better off modifying existing DNS proxy code (such as 'dnsmasq'). If you do want to roll your own, there are good libraries such as [`ldns`](http://www.nlnetlabs.nl/projects/ldns/) which can provide access to the underlying DNS packets. I'm using `ldns` myself in conjunction with `libevent` to implement the Fuzzing DNS server I mentioned in an earlier question.
I wrote a basic DNS server for a job interview under a BSD license. Maybe someone could find it useful: <https://github.com/nephewtom/dns-server>
How to Build a custom simple DNS server in C/C++
[ "", "c++", "c", "dns", "" ]
So I'm in the middle of web-based filesystem abstraction layer development. Just like file browser, except it has some extra features like freaky permissions etc. I would like users to be notified somehow about directory changes. So, i.e. when someone uploads a new file via FTP, certain users should get a proper message. It is not required for the message to be extra detailed, I don't really need to show the exact resource changed. The parent directory name should be enough. What approach would you recommend?
If your server is Linux you can do this with something like [inotify](http://pyinotify.sourceforge.net/) If the only updates are coming from FTP, then another solution I've used in the past is to write an add-on module to [ProFTPD](http://www.proftpd.org/features.html) that performs the "notification" once upload is complete.
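When inotify isn't available (non-Linux servers, network mounts), a common fallback is polling: collapse the directory into a single checksum and compare across runs. A minimal sketch of that idea in Python (the snapshot format here is my own choice, not from any particular library):

```python
import hashlib
import os

def dir_checksum(root):
    """Collapse a directory tree into one digest.

    Hashes each file's relative path, size and mtime (not its contents),
    so any add, delete, rename or modification changes the digest while
    large trees stay cheap to scan.
    """
    digest = hashlib.md5()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # fix traversal order so the digest is deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            rel = os.path.relpath(path, root)
            digest.update(("%s|%d|%d" % (rel, st.st_size, int(st.st_mtime))).encode("utf-8"))
    return digest.hexdigest()
```

A notifier would then store the last digest per watched directory, recompute on a timer, and send the "parent directory changed" message whenever they differ.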
See this question: [How to quickly find added / removed files?](https://stackoverflow.com/questions/479762/how-to-quickly-find-added-removed-files) But if you can control the upload somehow (i.e. use HTTP POST instead of FTP), you could simply send a notification after the upload has completed. This has the additional benefit that it would be simple to make sure users never see a partial file.
Directory checksum with python?
[ "", "python", "file", "filesystems", "checksum", "" ]
I have a desktop application that talks SOAP to a server. The communication is very slow, so I want to download a lot of the data up front and query from my local database. I want this local data to persist after the application shuts down, so what is the best way of doing this? Are there any good products / examples to do this? There are about 25 tables that I am caching.
SQLite is perfect for this. A couple years ago I wrote a "smart client" application where I maintained a local cache so that users could still use the application offline. I used SQLite and had no problems, it is very small, persists quickly, and most modern ORMs support it. (I was using NHibernate) <http://www.sqlite.org/>
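The shape of such a local cache is small enough to sketch; shown here with Python's built-in `sqlite3` binding purely for brevity, since the schema, upsert and lookup pattern carries over directly to a .NET SQLite provider (the table and column names are invented for the example):

```python
import sqlite3

def open_cache(path):
    """Open (or create) the on-disk cache; the file survives app restarts."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS customer_cache (
                        id INTEGER PRIMARY KEY,
                        name TEXT NOT NULL,
                        fetched_at TEXT NOT NULL)""")
    return conn

def store(conn, rows):
    """Upsert rows pulled from the slow SOAP service into the local copy."""
    conn.executemany(
        "INSERT OR REPLACE INTO customer_cache (id, name, fetched_at) "
        "VALUES (?, ?, datetime('now'))", rows)
    conn.commit()

def lookup(conn, customer_id):
    row = conn.execute("SELECT name FROM customer_cache WHERE id = ?",
                       (customer_id,)).fetchone()
    return row[0] if row else None
```

Because the cache is an ordinary file, the data persists across application shutdowns with no extra work.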
[Berkeley DB](http://en.wikipedia.org/wiki/Berkeley_DB) is a good, standalone DB. You should also consider [ESENT](http://msdn.microsoft.com/en-us/library/ms684493(VS.85).aspx), which is built into windows. Here's a [write-up](http://ayende.com/Blog/archive/2008/12/23/hidden-windows-gems-extensible-storage-engine.aspx) by Ayende.
fast local database
[ "", "c#", "database", "caching", "" ]
I want to show data in a wpftoolkit datagrid where the data is a collection of ``` public class Thing { public string Foo { get; set; } public string Bar { get; set; } public List<Candidate> Candidates { get; set; } } public class Candidate { public string Name { get; set; } public CandidateType CandidateType { get; set; } } public enum CandidateType { Type1, Type2, Type42 } ``` where the number of candidates in Candidates list is configurable at runtime. Desired grid layout looks like this `Foo | Bar | Candidate 1 | Candidate 2 | ... | Candidate N` Thus it seems I cannot create a DataTemplate for the candidates in xaml as the binding expression will change. I add necessary columns in AutoGeneratedColumns event like so: ``` private void DataGrid_AutoGeneratedColumns(object sender, EventArgs e) { ViewModel vm = DataContext as ViewModel; for (int i = 0; i < vm.LotsOfThings.First().Candidates.Count; i++) { string assName = Assembly.GetExecutingAssembly().GetName().Name; ParserContext ctx = new ParserContext(); ctx.XamlTypeMapper = new XamlTypeMapper(new string[] { assName }); ctx.XamlTypeMapper.AddMappingProcessingInstruction("src", "WpfToolkitDataGridTester", assName); ctx.XmlnsDictionary.Add("", "http://schemas.microsoft.com/winfx/2006/xaml/presentation"); ctx.XmlnsDictionary.Add("x", "http://schemas.microsoft.com/winfx/2006/xaml"); ctx.XmlnsDictionary.Add("src", "clr-namespace:WpfToolkitDataGridTester;assembly=" + assName); var template = XamlReader.Parse(@"<DataTemplate> <DataTemplate.Resources> <src:FooConverter x:Key='fooConverter' /> </DataTemplate.Resources> <TextBlock Foreground='{Binding Candidates[" + i + @"].CandidateType,Converter={StaticResource fooConverter}}' Text='{Binding Candidates[" + i + @"].Name}' /> </DataTemplate>", ctx) as DataTemplate; dg.Columns.Add(new DataGridTemplateColumn { Header = "Candidate " + (i + 1), CellTemplate = template }); } } ``` This however fails with the following exception: The tag 'FooConverter' does not exist in XML namespace 'clr-namespace:WpfToolkitDataGridTester;assembly=WpfToolkitDataGridTester'. Line '3' Position '54'. Changing the StaticResource to DynamicResource makes no change. What am I missing? FWIW: a hardcoded datatemplate ``` <DataTemplate x:Key="candidateTemplate"> <DataTemplate.Resources> <src:FooConverter x:Key="fooConverter" /> </DataTemplate.Resources> <TextBlock Foreground="{Binding Candidates[0].CandidateType,Converter={StaticResource fooConverter}}" Text="{Binding Candidates[0].Name}" /> </DataTemplate> ``` and the template column defined like so ``` <wpftk:DataGridTemplateColumn CellTemplate="{StaticResource candidateTemplate}" /> ``` 'works' but obviously does not produce the desired result as Candidates[0] is hardcoded.
For whatever reason, it works as expected if I do like this... ``` string assName = Assembly.GetExecutingAssembly().GetName().Name; StringBuilder sb = new StringBuilder(); sb.Append("<DataTemplate "); sb.Append("xmlns='http://schemas.microsoft.com/winfx/2006/xaml/presentation' "); sb.Append("xmlns:x='http://schemas.microsoft.com/winfx/2006/xaml' "); sb.Append("xmlns:src='clr-namespace:WpfToolkitDataGridTester;assembly=" + assName + "' >"); sb.Append("<DataTemplate.Resources>"); sb.Append("<src:FooConverter x:Key='fooConverter' />"); sb.Append("</DataTemplate.Resources>"); sb.Append("<TextBlock "); sb.Append("Foreground='{Binding Candidates[" + i + "].CandidateType,Converter={StaticResource fooConverter}}' "); sb.Append("Text='{Binding Candidates[" + i + @"].Name}' />"); sb.Append("</DataTemplate>"); var template = (DataTemplate)XamlReader.Parse(sb.ToString()); ```
When the XAML files are compiled to BAML it references the *assembly* not the in memory source. Since the BAML is compiled into the same assembly the actual type isn't available yet. I've found that a short term workaround is to comment out the style temporarily, build the project, then restore the style. The more permanent solution however is to move the converter to another assembly.
dynamic datatemplate with valueconverter
[ "", "c#", "wpf", "xaml", "datatemplate", "valueconverter", "" ]
I would like to comment this with a line: ``` {% if something.property %} <table> <tr>... {% # this is a comment %} {% if something.property %} <table> <tr>... ```
As answered by Miles, `{% comment %}...{% endcomment %}` is used for multi-line comments, but you can also comment out text on the same line like this: ``` {# some text #} ```
Comment tags are documented at <https://docs.djangoproject.com/en/stable/ref/templates/builtins/#std:templatetag-comment> ``` {% comment %} this is a comment {% endcomment %} ``` --- Single line comments are documented at <https://docs.djangoproject.com/en/stable/topics/templates/#comments> ``` {# this won't be rendered #} ```
How to put comments in Django templates?
[ "", "python", "python-3.x", "django", "django-templates", "django-comments", "" ]
I have to test some C# code which sends simple HTML content with an attachment through an SMTP server. How can I make sure my code works? My machine runs Vista and has IIS-7; I don't want to connect to any external SMTP server.
Please see: [How can I save an email instead of sending when using SmtpClient?](https://stackoverflow.com/questions/567765/how-can-i-save-an-email-instead-of-sending-when-using-smtpclient) [Can I test SmtpClient before calling client.Send()?](https://stackoverflow.com/questions/372742/can-i-test-smtpclient-before-calling-client-send/372794) [SmtpClient sends email to junk](https://stackoverflow.com/questions/510748/c-smtpclient-sends-email-to-junk)
How about using a "Fake" SMTP Server like <http://ndumbster.sourceforge.net/default.html>
How can I test C# SmtpClient code in Vista
[ "", "c#", "iis-7", "smtp", "windows-vista", "" ]
I have a gridview that shows (50) rows of data per page. I have the Total showing for the Amount that is shown due to a previous request by the user. Now they want to have the total for the Entire result, regardless of what page they are on. How do I do that? I did the following for my Complete Total Count: After the Gridview result gets populated in table ``` Count = dsReport.DataTable1.Count; ``` My plan for the Complete Total that has so far failed: ``` for (int i = 0; i < Count; i++) { Total += dsPoint.DataTable1.Columns[3].Row[i]; ??? I don't know what to do } ``` What is the sum of the specific Decimal Column[3]? Even though I display 50 results, I need the Total for the ENTIRE column.
Something like: ``` Total = dsPoint.Tables["DataTable1"].Compute("SUM(columnName)", String.Empty); ``` You provide the `columnName` :)
It looks like you have it backwards. Typically to get to a value in a DataTable you'll want to use DataTable.Rows[i][j]. So assuming you have an integer in your column, you can run your total like this: ``` for (int i = 0; i < Count; i++) { Total += int.Parse(dsPoint.DataTable1.Rows[i][3].ToString()); } ```
How to get Total for a DataSet.DataTable on a GridView?
[ "", "c#", ".net", "asp.net", "" ]
First off, I'm not all that familiar with cookies but I know how they work. I've seen quite a few different tutorials with plain PHP code but I'm looking for a solid example of how to store arrays in a cookie using the symfony syntax: ``` $this->getResponse()->setCookie('myCookie', $data); ``` You can't just pass in an array since it expects a string. Is the only way to do it to serialize an array first? Are there any other options while storing data in a cookie?
You can transform the array into a set of key/value cookies if the array is the only thing you need to store: ``` $this->getResponse()->setCookie('myCookie[0]', $data1); $this->getResponse()->setCookie('myCookie[1]', $data2); ```
If you really need to store it in a cookie and not in a session, you can use serialization: ``` $this->getResponse()->setCookie('myCookie', serialize($data)); $data = unserialize($this->getRequest()->getCookie('myCookie')); ```
How do I save an array of data in a cookie with Symfony?
[ "", "php", "cookies", "symfony1", "" ]
I have a script that uses PHPMailer to send out an email. The email is sent with a direct SMTP connection to a remote mail server. The problem is that while everything seems to be okay when I run the script on my local machine, when the script is run on our live site using the same remote SMTP server the email never gets through. What's curious is that it seems to have worked at one point in time and has semi-recently decided that it just wasn't going to work anymore. What potential points of failure are there? I do not have access to the SMTP server in question so I can't check its log files without submitting a support ticket, and although I was just given sudo access to the web server that is sending the email, I'm not quite sure what I'm looking for in terms of logfiles to go through, etc. FWIW, my local machine is running Ubuntu 8.10, and the remote machine is running Red Hat Enterprise Linux ES release 4 (Nahant Update 1), so there is a bit of a difference in versions, but all I'm really looking for is a few pointers on where I should be looking for problems.
How about you log into your live site, try to open a raw session to the remote SMTP server, and attempt to send email using raw SMTP commands? Doing so will help narrow down the possible causes of the problem.
Is the remote machine behind a firewall? If it is, it might block the sending of e-mails. Or your SMTP server is disallowing access from the remote server.
PHPMailer and SMTP: Mail never shows up when sent from remote server, works fine on local machine
[ "", "php", "linux", "smtp", "phpmailer", "" ]
I have a variable that needs the first four lines stripped out before being displayed: ``` Error Report Submission From: First Last, email@example.com, 12345 Date: 2009-04-16 04:33:31 pm Eastern The content to be output starts here and can go on for any number of lines. ``` I need to remove the 'header' from this data before I display it as part of a 'pending error reports' view.
Mmm. I am sure someone is going to come up with something nifty/shorter/nicer, but how about: ``` $str = implode("\n", array_slice(explode("\n", $str), 4)); ``` If that is too unsightly, you can always abstract it away: ``` function str_chop_lines($str, $lines = 4) { return implode("\n", array_slice(explode("\n", $str), $lines)); } $str = str_chop_lines($str); ``` **EDIT**: Thinking about it some more, I wouldn't recommend using the `str_chop_lines` function unless you plan on doing this in many parts of your application. The original one-liner is clear enough, I think, and anyone stumbling upon `str_chop_lines` may not realize the default is 4 without going to the function definition.
`$content = preg_replace("/^(.*\n){4}/", "", $content);`
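For comparison, the same split/slice/rejoin idea used in both answers above is a one-liner in most languages; e.g. a quick Python sketch:

```python
def strip_header(text, lines=4):
    """Drop the first `lines` lines of text, mirroring the
    implode/array_slice (or preg_replace) approach."""
    return "\n".join(text.split("\n")[lines:])
```

As with the PHP version, the line count defaults to 4 but can be passed in for other header sizes.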
Delete first four lines from the top in content stored in a variable
[ "", "php", "lines", "" ]
I'm learning to program, and C++ is my first language. Don't bother using pointers to show me - I don't understand them yet, and won't bother until I have more free time to dedicate to this. ``` int mergeSort() { const int n = 9; int originalarray[n] = {1, 3, 5, 7, 9, 2, 4, 6, 8}; const int halfelements = (sizeof(originalarray) / sizeof(int)) / 2; int farray[halfelements]; int sarray[halfelements]; for (int i = 0; i < halfelements; i++) { farray[i] = originalarray[i]; } for (int i = halfelements, x = 0; i < (halfelements * 2); i++, x++) { sarray[x] = originalarray[i]; } ``` I was assigned (I'm not taking classes - just learning with a few friends helping me out) a merge sort algorithm, with the algorithm explained but not the implementation. I want to rewrite this so it will work for both odd and even integers. I tried adding this code: ``` if ((n % 2) != 0) int farray[halfelements + 1]; ``` So that I could use the same integer to iterate over both subsequent arrays. A sizeof(farray) is showing to be 16 bytes, or 4 integers. So it isn't resizing. What I want to know - is it possible to resize arrays after they initialized? Edit: How would I implement a vector? I don't understand how to use iterators in a loop to iterate over and copy the values.
C++ arrays are fixed in size. If you need a "resizable array", you'll want to use [std::vector](http://en.wikipedia.org/wiki/Vector_(STL)) instead of an array.
My advice is even stronger: use `std::vector<>` (et. al.) unless you have a very good reason to use a C-style array. Since you're learning C++, I doubt you have such a reason: **use `std::vector<>`**.
Can you resize a C++ array after initialization?
[ "", "c++", "arrays", "resize", "sizeof", "" ]
Say you are writing an app like Photoshop where you have effects (filters), etc, should one make each of these filters a separate assembly using a separate project? The main idea is to have each of these filters as nodes, so think of it like: ``` sourceImage -> Sharpen -> Darken -> Contrast -> Blur ... ``` It seems to me that it would make sense to have dll files like: ``` [Filters folder] Sharpen.dll Darken.dll Contrast.dll Blur.dll ``` But it would be hard to manage them like that, and that would prevent me to use the `internal` keyword for class members, right? So right now I only have 1 dll for all filters. What's the best practices for organizing assemblies?
I wouldn't restrict yourself to one filter per assembly. You may well want to group assemblies which implement similar functionality - e.g. colour/contrast together, while keeping them separate from very different kinds of filters (e.g. edge enhancing). Just one bit of anecdotal evidence: I've often seen applications be difficult to manage due to having too *many* assemblies. I can't remember ever seeing one which had problems because it hadn't split the assemblies up enough. I'm not saying it can't happen - just that I haven't seen it.
Patrick Smacchia author of NDepend tool suggest that the number of assemblies is kept low. Look [here](http://codebetter.com/blogs/patricksmacchia/archive/2008/12/08/advices-on-partitioning-code-through-net-assemblies.aspx). This also implies to a certain level that you use NDepend to manage dependencies between namespaces. Also, compilation is faster if you have less assemblies and deployment is easier. I would second Reed Copsey that DI (like [StructureMap](http://structuremap.sourceforge.net/Default.htm)) solution could provide you with extensibility and testability if that's what you are after.
Organizing assemblies (assembly structure) in C#
[ "", "c#", ".net", "" ]
``` var funcs = []; // let's create 3 functions for (var i = 0; i < 3; i++) { // and store them in funcs funcs[i] = function() { // each should log its value. console.log("My value:", i); }; } for (var j = 0; j < 3; j++) { // and now let's run each one to see funcs[j](); } ``` It outputs this: > My value: 3 > My value: 3 > My value: 3 Whereas I'd like it to output: > My value: 0 > My value: 1 > My value: 2 --- The same problem occurs when the delay in running the function is caused by using event listeners: ``` var buttons = document.getElementsByTagName("button"); // let's create 3 functions for (var i = 0; i < buttons.length; i++) { // as event listeners buttons[i].addEventListener("click", function() { // each should log its value. console.log("My value:", i); }); } ``` ``` <button>0</button> <br /> <button>1</button> <br /> <button>2</button> ``` … or asynchronous code, e.g. using Promises: ``` // Some async wait function const wait = (ms) => new Promise((resolve, reject) => setTimeout(resolve, ms)); for (var i = 0; i < 3; i++) { // Log `i` as soon as each promise resolves. wait(i * 100).then(() => console.log(i)); } ``` It is also apparent in `for in` and `for of` loops: ``` const arr = [1,2,3]; const fns = []; for (var i in arr){ fns.push(() => console.log("index:", i)); } for (var v of arr){ fns.push(() => console.log("value:", v)); } for (const n of arr) { var obj = { number: n }; // or new MyLibObject({ ... }) fns.push(() => console.log("n:", n, "|", "obj:", JSON.stringify(obj))); } for(var f of fns){ f(); } ``` What’s the solution to this basic problem?
Well, the problem is that the variable `i`, within each of your anonymous functions, is bound to the same variable outside of the function.

# ES6 solution: `let`

ECMAScript 6 (ES6) introduces new `let` and `const` keywords that are scoped differently than `var`-based variables. For example, in a loop with a `let`-based index, each iteration through the loop will have a new variable `i` with loop scope, so your code would work as you expect. There are many resources, but I'd recommend [2ality's block-scoping post](http://www.2ality.com/2015/02/es6-scoping.html) as a great source of information.

```
for (let i = 0; i < 3; i++) {
  funcs[i] = function() {
    console.log("My value: " + i);
  };
}
```

Beware, though, that IE9-IE11 and Edge prior to Edge 14 support `let` but get the above wrong (they don't create a new `i` each time, so all the functions above would log 3 like they would if we used `var`). Edge 14 finally gets it right.

---

# ES5.1 solution: forEach

With the relatively widespread availability of the `Array.prototype.forEach` function (in 2015), it's worth noting that in those situations involving iteration primarily over an array of values, `.forEach()` provides a clean, natural way to get a distinct closure for every iteration. That is, assuming you've got some sort of array containing values (DOM references, objects, whatever), and the problem arises of setting up callbacks specific to each element, you can do this:

```
var someArray = [ /* whatever */ ];
// ...
someArray.forEach(function(arrayElement) {
  // ... code code code for this one element
  someAsynchronousFunction(arrayElement, function() {
    arrayElement.doSomething();
  });
});
```

The idea is that each invocation of the callback function used with the `.forEach` loop will be its own closure. The parameter passed in to that handler is the array element specific to that particular step of the iteration. If it's used in an asynchronous callback, it won't collide with any of the other callbacks established at other steps of the iteration. If you happen to be working in jQuery, the `$.each()` function gives you a similar capability.

---

# Classic solution: Closures

What you want to do is bind the variable within each function to a separate, unchanging value outside of the function:

```
var funcs = [];

function createfunc(i) {
  return function() {
    console.log("My value: " + i);
  };
}

for (var i = 0; i < 3; i++) {
  funcs[i] = createfunc(i);
}

for (var j = 0; j < 3; j++) {
  // and now let's run each one to see
  funcs[j]();
}
```

Since there is no block scope in JavaScript - only function scope - by wrapping the function creation in a new function, you ensure that the value of "i" remains as you intended.
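The same late-binding pitfall exists in Python closures; a quick illustrative sketch (not part of the original answer) of the bug and two standard fixes, mirroring the default-binding and factory-function approaches above:

```python
funcs_broken = []
for i in range(3):
    funcs_broken.append(lambda: i)        # late binding: all share one `i`

funcs_default = []
for i in range(3):
    funcs_default.append(lambda i=i: i)   # fix 1: bind the current value

def make_func(i):                         # fix 2: a factory gives each closure
    return lambda: i                      # its own scope, like createfunc above

funcs_factory = [make_func(i) for i in range(3)]

print([f() for f in funcs_broken])   # [2, 2, 2]
print([f() for f in funcs_default])  # [0, 1, 2]
print([f() for f in funcs_factory])  # [0, 1, 2]
```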
Try:

```
var funcs = [];

for (var i = 0; i < 3; i++) {
  funcs[i] = (function(index) {
    return function() {
      console.log("My value: " + index);
    };
  }(i));
}

for (var j = 0; j < 3; j++) {
  funcs[j]();
}
```

**Edit** (2014): Personally I think @Aust's [more recent answer about using `.bind`](https://stackoverflow.com/a/19323214/918959) is the best way to do this kind of thing now. There's also lo-dash/underscore's `_.partial` when you don't need or want to mess with `bind`'s `thisArg`.
JavaScript closure inside loops – simple practical example
[ "", "javascript", "loops", "closures", "" ]
I am trying to track down the Weblogic 10.3 JAR that contains weblogic.rmi.RemoteException in order to solve a build path issue. Thanks in advance.
I finally found it in $BEA-HOME/modules/com.bea.core.weblogic.rmi.client\_1.4.0.0.jar. It seems in 10.3 (or 10g, as Oracle are branding it) they have moved a lot of what was in $BEA-HOME/wlserver\_10.x/server/lib/weblogic.jar into a separate modules directory in the root of the BEA install. I also had to include $BEA-HOME/modules/com.bea.core.weblogic.workmanager\_1.4.0.0.jar on my build path to use com.bea.core.weblogic.rmi.client\_1.4.0.0.jar. The script above is useful; a slightly simpler version, which will recurse through sub-directories searching each jar file it encounters for a specified class, is `find -name "*.jar" -exec grep "<classname>" {} \;` e.g. `find -name "*.jar" -exec grep "weblogic/rmi/RemoteException.class" {} \;` I found the tip courtesy of <http://snipplr.com/view/12702/find-in-which-jar-a-class-is-defined/>
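The same jar-scanning idea can also be sketched portably in Python with the standard `zipfile` module (a hypothetical helper, not from the original answer), which inspects each jar's entry list instead of grepping the compressed bytes:

```python
import zipfile
from pathlib import Path

def find_class(root, class_entry):
    """Return paths of the jars under `root` that contain `class_entry`."""
    hits = []
    for jar in sorted(Path(root).rglob("*.jar")):
        with zipfile.ZipFile(jar) as zf:
            # A jar is just a zip; namelist() gives entries like
            # "weblogic/rmi/RemoteException.class".
            if class_entry in zf.namelist():
                hits.append(jar)
    return hits

# e.g. find_class("/opt/bea/modules", "weblogic/rmi/RemoteException.class")
```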
I don't have WebLogic installed here, but I keep a shell/cygwin script around to find classes in jars: ``` #! /bin/sh target=$1 for jf in `find . -name '*.jar' -type f -print`; do jar tvf $jf | awk "/\/$target\.class/ { print \"$jf: \" \$NF }" done ``` Just call the script something like jarfind.sh and put it in your path somewhere. Then `jarfind.sh RemoteException` in your weblogic tree.
Which Weblogic 10.3 JAR contains weblogic.rmi.RemoteException
[ "", "java", "jar", "weblogic", "rmi", "" ]
### Duplicate:

> * [Best way to obfuscate an e-mail address on a website?](https://stackoverflow.com/questions/748780/best-way-to-obfuscate-an-e-mail-address-on-a-website/)
> * [Effective method to hide email from spam bots](https://stackoverflow.com/questions/483212/effective-method-to-hide-email-from-spam-bots)
> * [What are some ways to protect emails on websites from spambots?](https://stackoverflow.com/questions/308772/what-are-some-ways-to-protect-emails-on-websites-from-spambots)

---

What is the best way to prevent spammers from getting the email address from your mailto links? I'm under the impression that javascript could be a solution. I don't know if my current solution is fool proof, so that's why I'm asking. Here's what I'm currently doing:

```
<script language="JavaScript"><!--
var name = "emailusername";
var domain = "yahoo.com";
var text = "emailusername@yahoo.com";
document.write('<a href=\"mailto:' + name + '@' + domain + '\">');
document.write(text + '</a>');
// --></script>
```

Is there a better way? I don't like having to have this chunk of code everywhere I want to put a mailto link.
* [Best way to obfuscate an e-mail address on a website?](https://stackoverflow.com/questions/748780/best-way-to-obfuscate-an-e-mail-address-on-a-website) * [Effective method to hide email from spam bots](https://stackoverflow.com/questions/483212/effective-method-to-hide-email-from-spam-bots) * [What are some ways to protect emails on websites from spambots?](https://stackoverflow.com/questions/308772/what-are-some-ways-to-protect-emails-on-websites-from-spambots)
Javascript helps, but this won't help much. The email address is still visible in the html source using this type of script. The "best" options use client-side javascript to "build" the email address out of parts, so the entire email address is never visible in the HTML source in one piece. The browser puts it together for you on the client. Example:

```
<script language="JavaScript"><!--
var name = "emailusername";
var domain = "yahoo.com";
document.write('<a href=\"mailto:' + name + '@' + domain + '\">');
document.write(name + '@' + domain + '</a>');
// --></script>
```
How to stop spammers from getting the email address from a mailto link?
[ "", "javascript", "spam-prevention", "mailto", "" ]
If you're halfway through a transaction and perform a DDL statement, such as truncating a table, then the transaction commits. I was wondering whether this was always the case and by definition, or is there a setting hidden somewhere that would *rollback* the transaction instead of committing. Thanks. **Edit to clarify...** I'm not looking to rollback after a truncate. I just want to confirm that statements already carried out are *absolutely always* going to be committed before a DDL. Just want to make sure there isn't a system property somewhere that someone could set to wreck my code. I understand the need to commit before and after a DDL, but conceptually I'd have thought the same consistency requirement *could* be achieved with a rollback before the DDL and a commit after.
No, it will always commit. If you want to rollback, you'll have to do it before the DDL. If you want to isolate the DDL from your existing transaction, then you will have to execute it in its own, separate transaction.
Technically [DDL](http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7072180788422) does a commit BEFORE it executes and AFTER it executes. Yes, it's the same link from Cookie, but this is a different aspect of the same issue. It's crucial to understand that it's not just one commit: there are two, and they happen just before and just after.
Do DDL statements always give you an implicit commit, or can you get an implicit rollback?
[ "", "sql", "oracle", "transactions", "ddl", "" ]
I'm doing some research for a JavaScript project where the performance of drawing simple primitives (i.e. lines) is by far the top priority. The answers to [this question](https://stackoverflow.com/questions/221710/are-there-any-good-javascript-graphics-libraries) provide a great list of JS graphics libraries. While I realize that the choice of browser has a greater impact than the library, I'd like to know whether there are any differences between them, before choosing one. Has anyone done a performance comparison between any of these?
**Updated answer (2019):** The core advice is still the same: for maximal performance use thin wrappers or use raw browser API's, and also avoid the DOM or any DOM-like structure. In 2019 this means avoiding SVG (and any library built on top of it) because it may cause performance issues when trying to rapidly change the DOM. Canvas is the go-to solution for high performance web graphics, both for the 2d and 3d (webgl) contexts. Flash is dead so no longer an option, but even if it weren't its performance was eventually matched by the native browser API's. **Original answer (2009):** If you're not doing 3d, just use raw canvas with excanvas as an explorer fall-back. Your bottleneck will be javascript execution speed, not line rendering speed. Except for IE, which will bog down when the scene gets too complex because VML actually builds a dom. If you're really worried about performance though, definitely go with flash and write the whole thing in actionscript. You'll get an order of magnitude better performance, and with the flex sdk you don't even need to buy anything. There are several decent libraries for 3d in flash/flex available.
Raphael JavaScript Library <http://raphaeljs.com>
Which JavaScript graphics library has the best performance?
[ "", "javascript", "graphics", "" ]
I've been reading lots on how to trigger an application from inside a C# program (Process.Start()), but I haven t been able to find any information on how to have this new application run within a panel of my C# program. For example, I'd like a button click to open a notepad.exe WITHIN my application, not externally.
I don't know if this is still the recommended thing to use, but the "Object Linking and Embedding" framework allows you to embed certain objects/controls directly into your application. This will probably only work for certain applications; I'm not sure if Notepad is one of them. For really simple things like notepad, you'll probably have an easier time just working with the text box controls provided by whatever medium you're using (e.g. WinForms). Here's a link to OLE info to get started: <http://en.wikipedia.org/wiki/Object_Linking_and_Embedding>
Using the win32 API it is possible to "eat" another application. Basically you get the top window for that application and set its parent to be the handle of the panel you want to place it in. If you don't want the MDI style effect you also have to adjust the window style to make it maximised and remove the title bar. Here is some simple sample code where I have a form with a button and a panel:

```
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

namespace WindowsFormsApplication2
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Process p = Process.Start("notepad.exe");
            Thread.Sleep(500); // Allow the process to open its window
            SetParent(p.MainWindowHandle, panel1.Handle);
        }

        [DllImport("user32.dll")]
        static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);
    }
}
```

---

I just saw another example where they called WaitForInputIdle instead of sleeping. So the code would be like this:

```
Process p = Process.Start("notepad.exe");
p.WaitForInputIdle();
SetParent(p.MainWindowHandle, panel1.Handle);
```

The Code Project has a good article on the whole process: [Hosting EXE Applications in a WinForm project](http://www.codeproject.com/KB/miscctrl/AppControl.aspx)
How can I run another application within a panel of my C# program?
[ "", "c#", ".net", "windows", "winforms", "exe", "" ]
I am trying to import a large CSV file into a MySQL database. I have loaded the entire file into one flat table. I can select the data that needs to go into separate tables using select statements; my question is how do I copy the results of those select queries to different tables. I would prefer to do it completely in SQL and not have to worry about using a scripting language.
```
INSERT INTO anothertable (list, of, column, names, to, give, values, for)
SELECT list, of, column, names, of, compatible, column, types
FROM bigimportedtable
WHERE possibly you want a predicate or maybe not;
```
```
INSERT INTO new_table_1
SELECT * FROM existing_table
WHERE condition_for_table_1;

INSERT INTO new_table_2
SELECT * FROM existing_table
WHERE condition_for_table_2;
```
Copy Query Result to another mysql table
[ "", "sql", "mysql", "" ]
I have a problem in my web crawler where I am trying to retrieve images from a particular website. Problem is that often I see images that are exactly same but different in URL i.e. their address. Is there any Java library or utility that can identify if 2 images are exactly same in their content (i.e. at pixel level). My input will be URLs for the images where I can download them.
I've done something very similar to this before in Java and I found that the PixelGrabber class inside the [java.awt.image](http://java.sun.com/javase/6/docs/api/java/awt/image/package-summary.html) package of the API is extremely helpful (if not downright necessary). Additionally you would definitely want to check out the [ColorConvertOp class](http://java.sun.com/javase/6/docs/api/java/awt/image/ColorConvertOp.html), which performs a pixel-by-pixel color conversion of the data in the source image; the resulting color values are scaled to the precision of the destination image. The documentation goes on to say that the images can even be the same image, in which case it would be quite simple to detect if they are identical. If you were detecting similarity, you would need to use some form of averaging method, as mentioned in the answer to [this question](https://stackoverflow.com/questions/684264/how-does-googles-image-color-search-work/684700#684700). If you can, also check out Volume 2 chapter 7 of Horstmann's Core Java (8th ed) because there's a whole bunch of examples on image transformations and the like, but again, make sure to poke around the java.awt.image package because you should find you have almost everything prepared for you :) G'luck!
Depending on how detailed you want to get with it:

* download the image
* as you download it, generate a hash for it
* make a directory where the directory name is the hash value (if the directory does not exist)
* if the directory contains 2 or more files, then compare the file sizes
* if the file sizes are the same, then do a byte-by-byte comparison of the image to the bytes of the images in the file
* if the bytes are unique, then you have a new image

Regardless of whether you want to do all that or not, you need to:

* download the images
* do a byte-by-byte comparison of the images

No need to rely on any special imaging libraries; images are just bytes.
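The hash-then-byte-compare pipeline above can be sketched in Python (the helper name and use of SHA-256 are illustrative, not from the original answer); hashing routes each image into a bucket so the expensive exact comparison only runs on hash collisions:

```python
import hashlib

def dedupe(images):
    """Return the byte-wise unique blobs among `images` (each a bytes object)."""
    seen = {}     # sha256 hex digest -> distinct blobs that share the digest
    unique = []
    for data in images:
        bucket = seen.setdefault(hashlib.sha256(data).hexdigest(), [])
        # Byte-by-byte comparison only within the (almost always tiny) bucket.
        if not any(data == other for other in bucket):
            bucket.append(data)
            unique.append(data)
    return unique

imgs = [b"\x89PNG-one", b"\x89PNG-two", b"\x89PNG-one"]
print(len(dedupe(imgs)))  # 2
```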
Identifying 2 same images using Java
[ "", "java", "image", "" ]
I was curious, what is the scope of variables declared inside a class constructor which are not data members of that class? For example, if a constructor needs an iterating int `i`, will this variable be destroyed after the constructor finishes, or is it then global for the program?
In this sense a constructor is like any other function: any variable declared inside has the usual scope limitations, and all such variables go out of scope and get destroyed once the constructor has finished.
Like any other function, if it's a local variable it will be "destroyed" at the end of the function. Local scope.
What is the scope of variables declared in a class constructor?
[ "", "c++", "constructor", "scope", "" ]
I'm generating some charts that need a hex string for the colors. Example: ``` <dataseries name="ford" color="FF00FF" /> ``` I'm creating these dynamically, so I would like to generate the hex code for each dataseries randomly. What is the best way to do this?
The easiest way is to use `String.Format` and use the hexadecimal format for the argument.

```
var random = new Random();
var color = String.Format("#{0:X6}", random.Next(0x1000000)); // = "#A197B9"
```
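For reference, the equivalent trick in Python (illustrative only, not from the original answer): pick an integer in [0, 0xFFFFFF] and format it as six uppercase hex digits.

```python
import random

def random_color(rng=random):
    # {:06X} zero-pads to 6 uppercase hexadecimal digits.
    return "#{:06X}".format(rng.randrange(0x1000000))

print(random_color())  # e.g. "#A197B9"
```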
Samuel's answer is the best way to do this, just make sure that if you're generating the colors inside a loop that you don't instantiate a new `Random` object each time because `new Random()` seeds the generator using the system clock. Your loop is going to run faster than the clock can tick, so you'll end up generating several of the same colors over and over because `random` is being seeded with the same value. It should look something like this:

```
int numColors = 10;
var colors = new List<string>();
var random = new Random(); // Make sure this is out of the loop!

for (int i = 0; i < numColors; i++)
{
    colors.Add(String.Format("#{0:X6}", random.Next(0x1000000)));
}
```

instead of:

```
int numColors = 10;
var colors = new List<string>();

for (int i = 0; i < numColors; i++)
{
    var random = new Random(); // Don't put this here!
    colors.Add(String.Format("#{0:X6}", random.Next(0x1000000)));
}
```
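The seeding pitfall is language-agnostic; a Python sketch (arbitrary seed, illustrative only) showing why the generator must be created outside the loop:

```python
import random

# Wrong: a freshly seeded generator each pass repeats its first value forever.
repeated = [random.Random(42).randrange(0x1000000) for _ in range(3)]

# Right: create the generator once, outside the loop.
rng = random.Random(42)
varied = [rng.randrange(0x1000000) for _ in range(3)]

print(len(set(repeated)), len(set(varied)))
```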
How do I create a random hex string that represents a color?
[ "", "c#", "asp.net", "colors", "hex", "" ]
How can I call an external program with a python script and retrieve the output and return code?
Look at the [**subprocess**](http://docs.python.org/library/subprocess.html) module: a simple example follows...

```
from subprocess import Popen, PIPE

process = Popen(["ls", "-la", "."], stdout=PIPE)
(output, err) = process.communicate()
exit_code = process.wait()
```
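On Python 3.5+, the `subprocess.run` helper wraps the same Popen/communicate/wait sequence (a later addition, not part of the original answer); `sys.executable` is used here only so the example runs anywhere:

```python
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    capture_output=True,  # Python 3.7+; on 3.5/3.6 pass stdout=PIPE, stderr=PIPE
    text=True,            # decode stdout/stderr from bytes to str
)
print(result.stdout.strip(), result.returncode)  # hello 0
```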
Following Ambroz Bizjak's previous comment, here is a solution that worked for me:

```
import shlex
from subprocess import Popen, PIPE

cmd = "..."
process = Popen(shlex.split(cmd), stdout=PIPE)
process.communicate()
exit_code = process.wait()
```
How to call an external program in python and retrieve the output and return code?
[ "", "python", "return-value", "external-process", "" ]
If I have a private property in a class, I'm wondering what technically the difference is between the following three scenarios (memory usage, usability, best practice, etc.):

```
class testClass
{
    private string myString1 = "hello";

    private string myString2
    {
        get { return "hello"; }
    }

    private string myString3()
    {
        return "hello";
    }
}
```

Apart from obviously being able to set the value in myString1 and not in myString2 or myString3, I'm wondering more about how these differ in terms of efficiency.
All these methods are very different in terms of what they get compiled to, though very much similar in terms of use. I'll try to summarise the differences in brief: 1. This is a simple private instance variable. It's easily going to be the most efficient when referencing. 2. This is a read-only property (i.e. a get but no set accessor). 3. This is a normal parameterless function. I suspect you're just offering these examples purely as a point of comparison, and realise that such a function is totally useless (as are private properties, in almost all cases). The layout (i.e. everything on one line) is also rather horrible. Methods 2 and 3 are going to be equally inefficient compared to 1 in that they both involve the overhead of function calls. I don't know by memory the CIL code that they all compile to (maybe someone else can produce that), but they certainly involve a few more instructions, whereas referencing `myString1` ought to only require a single instruction in CIL. Not sure I can make a very useful comment on best practice without knowing more about the context, but method 2 (i.e. a private property) is generally seen as quite useless. The third method should never be used in my opinion (it's begging to be turned into a property). I think what you really want is just a plain old private variable, so definitely go for the first declaration. Public values should always be accessed as properties rather than variables in classes (i.e. a private/protected backing variable to a property), but that is slightly unrelated to your question. (You could find plenty of resources discussing the matter in a quick search anyway.) Finally, note that if your "property" is going to be read-only (i.e. not modified at any point), you really want to use a constant, i.e. `private const string myString1 = "hello";`. Hope that helps clarify things a bit.
I try to follow these rules where possible: * Fields should be kept private * Properties should be used to expose data * Methods should be used to perform an action There are obviously going to be some situations where every last drop of performance is important, but in general I would attempt to follow best-practice until profiling tells you that optimisation is needed. There's a good article here: [Why Properties Matter](http://csharpindepth.com/Articles/Chapter8/PropertiesMatter.aspx)
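For comparison, the same field/property/method split can be sketched in Python (names invented for illustration): a conventionally private attribute for storage, a read-only property to expose data, and a method to perform an action.

```python
class Account:
    def __init__(self, balance):
        self._balance = balance       # field: kept "private" by convention

    @property
    def balance(self):                # property: exposes data (read-only here)
        return self._balance

    def deposit(self, amount):        # method: performs an action
        self._balance += amount

a = Account(100)
a.deposit(50)
print(a.balance)  # 150
```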
using get vs property vs method
[ "", "c#", "" ]
I recently encountered this problem. I found many instances of people asking the question—[here, for example](http://www.codeguru.com/forum/showthread.php?p=982326)—but no concrete answers. Here's the sample code hoisted from that link:

```
class AFX_BASE_APPLICATION_APP_CLASS CFileExtension
{
public:
    CFileExtension ();
    virtual ~CFileExtension ();
};
```

The error this generates is: `c:\FileExtension.h(14) : error C2470: 'CFileExtension' : looks like a function definition, but there is no formal parameter list; skipping apparent body`
You've almost certainly missed the header which defines `AFX_BASE_APPLICATION_APP_CLASS`. In that case, it would be passed through unaltered and VC++ would assume that `CFileExtension` was a function that returned `class AFX_BASE_APPLICATION_APP_CLASS`. And, since it thinks it's a function, it also thinks it needs parentheses. You just need to find where `AFX_BASE_APPLICATION_APP_CLASS` is defined and `#include` that file.
I encountered the same compiler error. The problem was a **missing colon**. My code was something like:

```
int MyClass:doSomething() { }
```

It should have been (note '::' instead of ':'):

```
int MyClass::doSomething() { }
```
What does it mean when you get a compile error "looks like a function definition" for a class declaration?
[ "", "c++", "visual-studio", "" ]
While I can upcast a string to an object, I cannot upcast an IList of strings to an IList of objects. How come? What to do now other than copying all items to a new IList?

```
static void ThisWorks()
{
    IList<object> list = new List<object>();
    list.Add("I can add a string since string : object");
}

static void ThisDoesNotWork()
{
    // throws an invalid cast exception
    IList<object> list = (IList<object>) new List<string>();
    list.Add("I'm never getting here ... why?");
}
```
Look at it like this: while a banana is a fruit, a basket of bananas is not a basket of fruit, since you can add oranges to the latter, but not the former. Your `List<string>` has stronger constraints than a `List<object>`. Casting should always respect [Liskov](http://en.wikipedia.org/wiki/Liskov_substitution_principle). For containers and iterators which do not admit modification, such casting is safe, but once things can be changed, you are skating on thin ice.
This is not possible, as generics are invariant (as of C# 3.0). You can work around it with:

```
var objectList = list.Cast<object>().ToList();
```
C#: No casting within Generics?
[ "", "c#", "generics", "casting", "" ]
I am doing a little exploring simulation and I want to show the graphs to compare the performance among the algorithms during run-time. What library comes to your mind? I highly prefer those that come small as I'd love if it's easy for my instructor to compile my code. I've checked [gdchart](http://www.fred.net/brv/chart/) but it seems to be too heavy. I just want a simple x-y sort of timeline graph. Google chart is of course out of the question, in case you've read [this](https://stackoverflow.com/questions/82294/is-there-any-open-source-library-in-c-for-3d-chart) similar question. --- Related post [Scatter Plots in C++](https://stackoverflow.com/questions/215110/scatter-plots-in-c/).
My favourite has always been [gnuplot](http://www.gnuplot.info/). It's very extensive, so it might be a bit too complex for your needs though. It is cross-platform and there is [a C++ API](http://www.stahlke.org/dan/gnuplot-iostream/).
Honestly, I was in the same boat as you. I've got a C++ Library that I wanted to connect to a graphing utility. I ended up using [Boost Python](http://www.boost.org/doc/libs/1_38_0/libs/python/doc/index.html) and [matplotlib](http://matplotlib.sourceforge.net/). It was the best one that I could find.

*As a side note: I was also wary of licensing. matplotlib and the boost libraries can be integrated into proprietary applications.*

Here's an example of the code that I used:

```
#include <boost/python.hpp>
#include <pygtk/pygtk.h>
#include <gtkmm.h>

using namespace boost::python;
using namespace std;

// This is called in the idle loop.
bool update(object *axes, object *canvas)
{
    static object random_integers = object(handle<>(PyImport_ImportModule("numpy.random"))).attr("random_integers");
    axes->attr("scatter")(random_integers(0,1000,1000), random_integers(0,1000,1000));
    axes->attr("set_xlim")(0,1000);
    axes->attr("set_ylim")(0,1000);
    canvas->attr("draw")();
    return true;
}

int main()
{
    try
    {
        // Python startup code
        Py_Initialize();
        PyRun_SimpleString("import signal");
        PyRun_SimpleString("signal.signal(signal.SIGINT, signal.SIG_DFL)");

        // Normal Gtk startup code
        Gtk::Main kit(0,0);

        // Get the python Figure and FigureCanvas types.
        object Figure = object(handle<>(PyImport_ImportModule("matplotlib.figure"))).attr("Figure");
        object FigureCanvas = object(handle<>(PyImport_ImportModule("matplotlib.backends.backend_gtkagg"))).attr("FigureCanvasGTKAgg");

        // Instantiate a canvas
        object figure = Figure();
        object canvas = FigureCanvas(figure);
        object axes = figure.attr("add_subplot")(111);
        axes.attr("hold")(false);

        // Create our window.
        Gtk::Window window;
        window.set_title("Engineering Sample");
        window.set_default_size(1000, 600);

        // Grab the Gtk::DrawingArea from the canvas.
        Gtk::DrawingArea *plot = Glib::wrap(GTK_DRAWING_AREA(pygobject_get(canvas.ptr())));

        // Add the plot to the window.
        window.add(*plot);
        window.show_all();

        // On the idle loop, we'll call update(axes, canvas).
        Glib::signal_idle().connect(sigc::bind(&update, &axes, &canvas));

        // And start the Gtk event loop.
        Gtk::Main::run(window);
    }
    catch( error_already_set )
    {
        PyErr_Print();
    }
}
```
Free easy way to draw graphs and charts in C++?
[ "", "c++", "charts", "" ]
I need to delete a k/v pair from a dictionary in a loop. After getting `RuntimeError: dictionary changed size during iteration` I pickled the dictionary after deleting the k/v pair, and in one of the outer loops I try to reopen the newly pickled/updated dictionary. However, as many of you will probably know, I get the same error, I think when it reaches the top of the loop. I do not use my dictionary in the outermost loop. So my question is: does anyone know how to get around this problem? I want to delete a k/v pair from a dictionary and use that resized dictionary on the next iteration of the loop. To focus the problem, and use the solution from Cygil:

```
list=[27,29,23,30,3,5,40]
testDict={}
for x in range(25):
    tempDict={}
    tempDict['xsquared']=x*x
    tempDict['xinverse']=1.0/(x+1.0)
    testDict[(x,x+1)]=tempDict
for item in list:
    print 'the Dictionary now has',len(testDict.keys()), ' keys'
    for key in testDict.keys():
        if key[0]==item:
            del testDict[key]
```

I am doing this because I have to have some research assistants compare some observations from two data sets that could not be matched because of name variants. The idea is to throw up a name from one data set (say set A) and then, based on a key match, find all the names attached to that key in the other dataset (set B). Once a match has been identified I don't want to show the value from B again, to speed things up for them. Because there are 6,000 observations I also don't want them to have to start at the beginning of A each time they get back to work. However, I can fix that by letting them choose to enter the last key from A they worked with. But I really need to reduce B once the match has been identified.
Without code, I'm assuming you're writing something like:

```
for key in dict:
    if check_condition(dict[key]):
        del dict[key]
```

If so, you can write

```
for key in list(dict.keys()):
    if key in dict and check_condition(dict[key]):
        del dict[key]
```

`list(dict.keys())` returns a copy of the keys, not a view, which makes it possible to delete from the dictionary (you are iterating through a copy of the keys, not the keys in the dictionary itself, in this case.)
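A common alternative (not from the original answer) sidesteps mutation-during-iteration entirely by building a new dict with a comprehension; the sample condition here (drop values over 15) is invented:

```python
mydict = {"a": 10, "b": 20, "c": 5}

# Keep only the entries that should survive; nothing is mutated mid-iteration.
mydict = {k: v for k, v in mydict.items() if v <= 15}
print(mydict)  # {'a': 10, 'c': 5}
```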
Delete all keys whose value is > 15:

```
for k in mydict.keys(): # makes a list of the keys and iterate
                        # over the list, not over the dict.
    if mydict[k] > 15:
        del mydict[k]
```
How to work around needing to update a dictionary
[ "", "python", "dictionary", "runtime-error", "" ]
Am I able to access an array component directly, such as `String x = args[1];`, or do I have to use a loop and access `args[i]`?
Yes, you can access an array component directly. For example:

```
String[] test = new String[] { "first", "second" };
System.out.println(test[1]); // Prints second
```

(EDIT: For some reason I was sure this was about JavaScript not Java. Bizarre.)
Yes. You can do that, assuming args[1] exists; otherwise you'll get an ArrayIndexOutOfBoundsException.
Java array: direct access to component in array
[ "", "java", "arrays", "" ]
Does anyone know a library or at least some research on creating and using persistent data structures in Java? I don't refer to persistence as long term storage but persistence in terms of immutability (see [Wikipedia entry](http://en.wikipedia.org/wiki/Persistent_data_structure)). I'm currently exploring different ways to model an API for persistent structures. Using builders seems to be an interesting solution:

```
// create persistent instance
Person p = Builder.create(Person.class)
    .withName("Joe")
    .withAddress(Builder.create(Address.class)
        .withCity("paris")
        .build())
    .build();

// change persistent instance, i.e. create a new one
Person p2 = Builder.update(p).withName("Jack");

Person p3 = Builder.update(p)
    .withAddress(Builder.update(p.address())
        .withCity("Berlin")
        .build())
    .build();
```

But this still feels somewhat boilerplated. Any ideas?
I guess the obvious choices are:

o Switch to a transient data structure (builder) for the update. This is quite normal. `StringBuilder` for `String` manipulation, for example. As in your example.

```
Person p3 = Builder.update(p)
    .withAddress(
        Builder.update(p.address())
            .withCity("Berlin")
            .build()
    )
    .build();
```

o Always use persistent structures. Although there appears to be lots of copying, you should actually be sharing almost all state, so it is nowhere near as bad as it looks.

```
final Person p3 = p.withAddress(
    p.address().withCity("Berlin")
);
```

o Explode the data structure into lots of variables and recombine with one huge and confusing constructor.

```
final Person p3 = Person.of(
    p.name(),
    Address.of(
        p.house(), p.street(), "Berlin", p.country()
    ),
    p.x(), p.y(), p.z()
);
```

o Use callback interfaces to provide the new data. Even more boilerplate.

```
final Person p3 = Person.of(new PersonInfo() {
    public String name   () { return p.name(); }
    public Address address() {
        return Address.of(new AddressInfo() {
            private final Address a = p.address();
            public String house  () { return a.house()  ; }
            public String street () { return a.street() ; }
            public String city   () { return "Berlin"   ; }
            public String country() { return a.country(); }
        });
    }
    public Xxx x() { return p.x(); }
    public Yyy y() { return p.y(); }
    public Zzz z() { return p.z(); }
});
```

o Use nasty hacks to make fields transiently available to code.

```
final Person p3 = new PersonExploder(p) {{
    a = new AddressExploder(a) {{ city = "Berlin"; }}.get();
}}.get();
```

(Funnily enough, I just put down a copy of Purely Functional Data Structures by Chris Okasaki.)
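As an aside, the "always use persistent structures" option maps closely onto frozen dataclasses in Python, where `dataclasses.replace` plays the role of the `withX(...)` methods and unchanged fields are shared, not copied (an illustrative parallel, not from the answer):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Address:
    city: str

@dataclass(frozen=True)
class Person:
    name: str
    address: Address

p = Person("Joe", Address("Paris"))
p2 = replace(p, name="Jack")                                # address shared with p
p3 = replace(p, address=replace(p.address, city="Berlin"))  # p itself untouched

print(p.name, p2.name, p3.address.city)  # Joe Jack Berlin
```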
Builders will make your code too verbose to be usable. In practice, almost all immutable data structures I've seen pass in state through the constructor. For what it's worth, here is a nice series of posts describing immutable data structures in C# (which should convert readily to Java): * [Part 1: Kinds of Immutability](http://blogs.msdn.com/ericlippert/archive/2007/11/13/immutability-in-c-part-one-kinds-of-immutability.aspx) * [Part 2: Simple Immutable Stack](http://blogs.msdn.com/ericlippert/archive/2007/12/04/immutability-in-c-part-two-a-simple-immutable-stack.aspx) * [Part 3: Covariant Immutable Stack](http://blogs.msdn.com/ericlippert/archive/2007/12/06/immutability-in-c-part-three-a-covariant-immutable-stack.aspx) * [Part 4: Immutable Queue](http://blogs.msdn.com/ericlippert/archive/2007/12/10/immutability-in-c-part-four-an-immutable-queue.aspx) * [Part 5: Lolz!](http://blogs.msdn.com/ericlippert/archive/2007/12/13/immutability-in-c-part-five-lolz.aspx) (included for completeness) * [Part 6: Simple Binary Tree](http://blogs.msdn.com/ericlippert/archive/2007/12/18/immutability-in-c-part-six-a-simple-binary-tree.aspx) * [Part 7: More on Binary Trees](http://blogs.msdn.com/ericlippert/archive/2007/12/19/immutability-in-c-part-seven-more-on-binary-trees.aspx) * [Part 8: Even More on Binary Trees](http://blogs.msdn.com/ericlippert/archive/2008/01/18/immutability-in-c-part-eight-even-more-on-binary-trees.aspx) * [Part 9: AVL Tree Implementation](http://blogs.msdn.com/ericlippert/archive/2008/01/21/immutability-in-c-part-nine-academic-plus-my-avl-tree-implementation.aspx) * [Part 10: Double-ended Queue](http://blogs.msdn.com/ericlippert/archive/2008/01/22/immutability-in-c-part-10-a-double-ended-queue.aspx) * [Part 11: Working Double-ended Queue Implementation](http://blogs.msdn.com/ericlippert/archive/2008/02/12/immutability-in-c-part-eleven-a-working-double-ended-queue.aspx) C# and Java are extremely verbose, so the code in these articles is quite scary. I recommend learning OCaml, F#, or Scala and familiarizing yourself with immutability in those languages. Once you master the technique, you'll be able to apply the same coding style to Java much more easily.
Persistent data structures in Java
[ "", "java", "data-structures", "immutability", "" ]
Does C# allows partial interface? i.e., in ManagerFactory1.cs class, I have ``` public partial interface IManagerFactory { // Get Methods ITescoManager GetTescoManager(); ITescoManager GetTescoManager(INHibernateSession session); } ``` and in ManagerFactory.cs class, I have: ``` public partial interface IManagerFactory { // Get Methods IEmployeeManager GetEmployeeManager(); IEmployeeManager GetEmployeeManager(INHibernateSession session); IProductManager GetProductManager(); IProductManager GetProductManager(INHibernateSession session); IStoreManager GetStoreManager(); IStoreManager GetStoreManager(INHibernateSession session); } ``` Both ManagerFactory and ManagerFactory1 are located in the same assembly.
The simplest way is just to try it :) But yes, partial interfaces are allowed. Valid locations for the `partial` modifier (with C# 3.0 spec references): * Classes (10.1.2) * Structs (11.1.2) * Interfaces (13.1.2) * Methods (C# 3.0+) (10.2.7; 10.6.8) Section 10.2 of the spec contains most of the general details for partial types. Invalid locations: * Enums * Delegates
### Yes, it does. [Partial Classes and Methods (C# Programming Guide)](http://msdn.microsoft.com/en-us/library/wa80x488.aspx) at MSDN ### Restrictions: * All partial-type interface definitions meant to be parts of the same type must be modified with `partial`. * The partial modifier can only appear immediately before the keyword `interface`. * All partial-type definitions meant to be parts of the same type must be defined in the same assembly and the same module (.exe or .dll file). Partial interfaces are primarily used when code generation is involved. For example, when one part of an interface is generated and the other one is user-written.
Partial Interface in C#
[ "", "c#", "" ]
I'm wondering if there's a way to count the words inside a div for example. Say we have a div like so: ``` <div id="content"> hello how are you? </div> ``` Then have the JS function return an integer of 4. Is this possible? I have done this with form elements but can't seem to do it for non-form ones. Any ideas? g
If you know that the DIV is *only* going to have text in it, you can [KISS](http://en.wikipedia.org/wiki/KISS_principle): ``` var count = document.getElementById('content').innerHTML.split(' ').length; ``` If the div can have HTML tags in it, you're going to have to traverse its children looking for text nodes: ``` function get_text(el) { var ret = ""; var length = el.childNodes.length; for(var i = 0; i < length; i++) { var node = el.childNodes[i]; // skip comment nodes (nodeType 8); recurse into element nodes (nodeType 1) if(node.nodeType != 8) { ret += node.nodeType != 1 ? node.nodeValue : get_text(node); } } return ret; } var words = get_text(document.getElementById('content')); var count = words.split(' ').length; ``` This is the same logic that the jQuery library uses to achieve the effect of its [`text()`](http://docs.jquery.com/Attributes/text) function. jQuery is a pretty awesome library that in this case is not necessary. However, if you find yourself doing a lot of DOM manipulation or AJAX then you might want to check it out. **EDIT**: As noted by Gumbo in the comments, the way we are splitting the strings above would count two consecutive spaces as a word. If you expect that sort of thing (and even if you don't) it's probably best to avoid it by splitting on a regular expression instead of on a simple space character. Keeping that in mind, instead of doing the above split, you should do something like this: ``` var count = words.split(/\s+/).length; ``` The only difference being on what we're passing to the [split](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/String/split) function.
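The regex refinement in the EDIT above is language-neutral. As a rough sketch of the same idea in Python (not the JavaScript used in this answer), matching maximal non-whitespace runs sidesteps the empty-string pitfalls entirely:

```python
import re

def count_words(text):
    # \S+ matches maximal runs of non-whitespace, so repeated spaces,
    # tabs, newlines, and leading/trailing whitespace never produce
    # phantom empty "words".
    return len(re.findall(r'\S+', text))
```

This counts `" hello  how are you? "` as 4 words regardless of the extra whitespace.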
Paolo Bergantino's second solution is incorrect for empty strings or strings that begin or end with whitespace. Here's the fix: ``` var count = !s ? 0 : (s.split(/^\s+$/).length === 2 ? 0 : 2 + s.split(/\s+/).length - s.split(/^\s+/).length - s.split(/\s+$/).length); ``` **Explanation:** If the string is empty, there are zero words; if the string contains only whitespace, there are zero words; otherwise, count the number of whitespace-separated groups, excluding the ones produced by whitespace at the beginning and the end of the string.
Javascript word-count for any given DOM element
[ "", "javascript", "" ]
Using JAXB in Java it is easy to generate from a xml schema file a set of Java classes that xml conforming to that schema can be deserialized to. Is there some C# equivalent of JAXB? I know that Linq can serialize and deserialize classes to/from xml files. But how can I generate C# classes from xml schema file and then use this classes with linq?
If you're using Visual Studio, try the [**XML Schema Definition Tool**](http://msdn.microsoft.com/en-us/library/x6c1kb0s(VS.80).aspx). It takes your schema definitions and produces C# classes -- or it can go the other way and produce schema definitions from classes. It also has a number of other XML-related transformations.
There is a better tool from Microsoft called [XsdObjectGen](http://msdn.microsoft.com/en-us/xml/bb190622.aspx), the XSD Object Code Generator. It is like xsd.exe, but better. Also free, it is not part of the .NET SDK, but is a separate download. Also see the SO question: [XSDObjectGen vs Xsd.exe](https://stackoverflow.com/questions/289335/xsdobjectgen-exe-vs-xsd-exe)
Jaxb equivalent in C#
[ "", "c#", "linq", "linq-to-xml", "jaxb", "" ]
In C#, what's the best way to get the 1st digit in an int? The method I came up with is to: * turn the int into a string, * get the 1st char of the string, and * turn the char into an int. Like: ``` int start = Convert.ToInt32(curr.ToString().Substring(0, 1)); ``` While this does the job, it feels like there is probably a good, simple, math-based solution to such a problem. String manipulation feels clunky. **Edit:** Irrespective of speed differences, `mystring[0]` instead of `Substring()` is still just string manipulation
Here's how ``` int i = Math.Abs(386792); while(i >= 10) i /= 10; ``` and `i` will contain what you need
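The same repeated-division idea, sketched in Python for quick experimentation (the answer's C# loop above is the version the question asked for):

```python
def first_digit(n):
    n = abs(n)       # mirror the Math.Abs call above
    while n >= 10:   # drop the last digit until only one remains
        n //= 10
    return n
```

Each division by 10 discards the least significant digit, so the loop terminates with the leading digit.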
# Benchmarks Firstly, you must decide on what you mean by "best" solution, of course that takes into account the efficiency of the algorithm, its readability/maintainability, and the likelihood of bugs creeping up in the future. Careful unit tests can generally avoid those problems, however. I ran each of these examples 10 million times, and the results value is the number of `ElapsedTicks` that have passed. Without further ado, from slowest to quickest, the algorithms are: ## Converting to a string, take first character ``` int firstDigit = (int)(Value.ToString()[0]) - 48; ``` Results: ``` 12,552,893 ticks ``` ## Using a logarithm ``` int firstDigit = (int)(Value / Math.Pow(10, (int)Math.Floor(Math.Log10(Value)))); ``` Results: ``` 9,165,089 ticks ``` ## Looping ``` while (number >= 10) number /= 10; ``` Results: ``` 6,001,570 ticks ``` ## Conditionals ``` int firstdigit; if (Value < 10) firstdigit = Value; else if (Value < 100) firstdigit = Value / 10; else if (Value < 1000) firstdigit = Value / 100; else if (Value < 10000) firstdigit = Value / 1000; else if (Value < 100000) firstdigit = Value / 10000; else if (Value < 1000000) firstdigit = Value / 100000; else if (Value < 10000000) firstdigit = Value / 1000000; else if (Value < 100000000) firstdigit = Value / 10000000; else if (Value < 1000000000) firstdigit = Value / 100000000; else firstdigit = Value / 1000000000; ``` Results: ``` 1,421,659 ticks ``` ## Unrolled & optimized loop ``` if (i >= 100000000) i /= 100000000; if (i >= 10000) i /= 10000; if (i >= 100) i /= 100; if (i >= 10) i /= 10; ``` Results: ``` 1,399,788 ticks ``` **Note:** each test calls `Random.Next()` to get the next `int`
How can you get the first digit in an int (C#)?
[ "", "c#", "integer", "substring", "tostring", "" ]
### Background Given that 'most' developers are Business application developers, the features of our favorite programming languages are used in the context of what we're doing with them. As a C# / ASP.NET Application developer, I tend to only use delegates when dealing with UI events. In fact (and this is part of my inexperience showing), I *don't even know a good context other than events to use delegates in!* This is quite scary; but I'm gathering that there are other developers in the same boat. **NB**: Answers should pertain to .NET 2.0. .NET 3.0 takes delegates to a different level entirely, and that'll likely be a separate question. ### Question: > Besides events, how useful are delegates, and in what Business Application contexts are they most useful? **Update**: [Jarrod Dixon](https://stackoverflow.com/users/3/jarrod-dixon) [helpfully linked](http://twitter.com/jarrod_dixon/status/1303756483) to the MSDN documentation [regarding delegate usage](http://msdn.microsoft.com/en-us/library/ms173173(VS.80).aspx), and I must admit that my [favorite Design Patterns Book](http://oreilly.com/catalog/9780596007126/) didn't bring up delegates at all, so I haven't really seen them in use other than for UI events. To expand this question (just a little bit!), What examples can you give for business applications (or really, any application having to deal with a relate-able problem) that would make it easier to digest the MSDN documentation on the subject?
I think this question reflects the many ways to skin a cat. I find delegates (and lambdas) nearly as fundamental as a "for" loop. Here's one context in which I used delegates recently (formatting and names changed for presentation purposes:) ``` protected T[] SortLines<T>(Func<T> createLine, IEnumerable<T> unsorted) where T : LineType { Func<IEnumerable<T>, IEnumerable<T>> sorter = (lines => lines); switch (settings.OrderSort) { case OrderSort.ByA: sorter = (lines => lines.OrderBy(x => x.A)); break; case OrderSort.ByB: sorter = (lines => lines.OrderBy(x => x.B)); break; // and so on... a couple cases have several levels of ordering } bool requiresSplit = // a complicated condition if (requiresSplit) { var positives = unsorted.Where(x => x.Qty >= 0); var negatives = unsorted.Where(x => x.Qty < 0); return sorter(negatives).Concat( new T[] { createLine.Invoke() }).Concat( sorter(positives)).ToArray(); } else return sorter(unsorted).ToArray(); } ``` So this sorts a group of items based on some criteria, and then it either returns the whole list sorted, or it breaks it in two, sorts both halves separately, and puts a separator in between them. Good luck doing this elegantly if you can't express the concept of "a way to sort something", which is what the delegate is for. EDIT: I guess Concat and OrderBy are 3.0-specific, but this is still the basic idea.
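The core idea above, that a delegate is just "a way to sort something" passed around as a value, is easy to see outside C# too. A hedged Python sketch with made-up line records (the field names `a`/`b` and the strategy keys are assumptions, not from the answer):

```python
# Hypothetical order lines; the "delegate" is a sorting strategy
# chosen at runtime from a lookup table, then applied like any value.
lines = [{"a": 3, "b": "x"}, {"a": 1, "b": "z"}, {"a": 2, "b": "y"}]

sorters = {
    "by_a": lambda ls: sorted(ls, key=lambda line: line["a"]),
    "by_b": lambda ls: sorted(ls, key=lambda line: line["b"]),
}

def sort_lines(order_sort, ls):
    # The sorting strategy is data: look it up, then invoke it.
    return sorters[order_sort](ls)
```

The switch statement in the C# answer plays the role of the `sorters` table here: in both cases the caller ends up holding a callable it can apply to any sequence.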
Other than GUI... 1. event dispatching; some of my business apps are quite complicated, talk to hardware devices, and rely on event queues to keep everything in synch. Delegates are used by these apps for event dispatching. 2. business rules; some of my business apps have a partial soft-coding ability, where certain events trigger certain rules that are kept in a database. Delegates (in a Dictionary) are used to execute the rules on the client side. (Plug-ins could be supported, but currently are not needed.) 3. general secondary threads (using the SafeThread class, of course!)
Delegate Usage : Business Applications
[ "", "c#", ".net", "asp.net", "delegates", "" ]
I have a priority\_queue of some object: ``` typedef priority_queue<Object> Queue; Queue queue; ``` From time to time, the priority of one of the objects may change - I need to be able to update the priority of that object in the queue in an efficient way. Currently I am using this method which works but seems inefficient: ``` Queue newQueue; while (!queue.empty()) { Object obj = queue.top(); queue.pop(); if (priorityHasChanged(obj)) newQueue.push(Object(new_priority)); else newQueue.push(obj); } newQueue.swap(queue); // this only works because I actually subclassed the priority_queue // class and exposed a swap method that swaps in the container ``` I implemented it this way because I was in kind of a hurry at the time and this was the quickest thing I could do that I could be sure would work ok. There has to be a better way than this though. Really what I want is a way to either: * extract out the instance with the changed priority and insert a new one with the new priority value * update the instance with the changed priority and then update the queue so that it is correctly sorted What is the best way to do this?
I think you are out of luck with standard priority queue because you can't get at the underlying deque/vector/list or whatever. You need to implement your own - it's not that hard.
I can suggest 2 choices to solve the problem, although neither performs a real update. 1. Use the `priority_queue` and push the element each time you would like to update it. Accept the fact that you will have stale entries in the queue. When popping the top value, check if it contains the up-to-date value. If not, ignore it and pop the next. This way you delay the removal of the updated element until it comes to the top. I have seen this approach used by top programmers implementing Dijkstra's algorithm. 2. Use `set`. It is also sorted, so you are able to extract the greatest element in logarithmic time. You are also able to remove the outdated element before inserting it again. So still no update operation, but removal and reinsertion is doable. The complexity of both approaches is the same.
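Choice 1 above (push duplicates, discard stale entries on pop) can be sketched with Python's `heapq`; this illustrates the pattern only, it is not the C++ STL API:

```python
import heapq

heap, current = [], {}   # current maps item -> its up-to-date priority

def push(item, priority):
    current[item] = priority
    heapq.heappush(heap, (priority, item))   # stale copies may linger in heap

def pop():
    # Pop until we find an entry whose priority matches the current one;
    # anything else is an outdated duplicate and is silently dropped.
    while heap:
        priority, item = heapq.heappop(heap)
        if current.get(item) == priority:
            del current[item]
            return item
    return None
```

An "update" is just another `push`; the superseded entry is ignored whenever it finally surfaces at the top.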
How to do an efficient priority update in STL priority_queue?
[ "", "c++", "stl", "priority-queue", "" ]
I currently have a VS Solution with 2 projects: a WPF application and a Windows Service. Now, I have managed to get ClickOnce working in installing my WPF application, but I also want a Windows Service to be installed (the one in the project) during this installation. I have found ways how to programmatically start a windows service with C# code, but is there any way to incorporate this in my ClickOnce installation (because I need ClickOnce's benefit of automatic updates and such)?
I don't think you can deploy a windows service via ClickOnce in a normal fashion. <http://social.msdn.microsoft.com/Forums/en-US/winformssetup/thread/1bb64760-9622-4ca6-a1a6-3ce53e641f21> [ClickOnce deploy a Windows Service?](https://stackoverflow.com/questions/460118/clickonce-deploy-a-windows-service)
Please check the [DDay Update](http://www.ddaysoftware.com/Pages/Projects/DDay.Update/) library; it might give you the ClickOnce-style automatic updates, but you've got to do the first install using another tool.
C#: Making an Installer that installs both a WPF application (ClickOnce) and a Windows Service
[ "", "c#", "wpf", "windows-services", "clickonce", "" ]
I am working on a group senior project for my university and I have run into a major hurdle in trying to get my code to work. The compiler that we have for our 8-bit Atmel microcontroller does not support the new or delete operators, and it does not support the C++ STL. I could program it in C, but I have to implement an A\* algorithm which I have never done before. While I tried C initially, I soon realized that I had never done pure C before. Trying to model objects with structs and functions is slowing me down since I am so used to the much cleaner C++ syntax. Regardless, the exact wording for my compiler's shortcomings can be found here: <http://www.nongnu.org/avr-libc/user-manual/FAQ.html#faq_cplusplus> To overcome them and still use C++ I have considered the following possibilities: 1) Don't allocate anything, just use templates to generate fixed arrays on the stack. 2) Allocate and find some hack to call the constructor for objects once I have allocated the space for them. Placement new isn't an option since new isn't an operator. 3) Just use C and suck it up, it's a microcontroller, why am I getting fancy? 4) Find a better compiler which will probably cost $$$. The second option is the hardest but it would have the biggest payoff in terms of how I can write this code. However, I imagine that debugging it could be a huge pain if I get it wrong. I'm thinking of creating objects on the stack, copying their bits into the allocated space, and then zeroing the bits in the object so it doesn't call its destructor. To do that I would access the bits directly with an unsigned char pointer and the sizeof operator to get the byte count. That sounds terrible and I don't know if it could work reliably, but I am considering it. I know vtables can be a problem but I don't intend to have any vtables since it is just an 8-bit microcontroller.
Just for the record, zeroing the bits in an object won't affect whether the destructor gets called (unless the compiler has a special quirk that enables this behaviour). Just write some logging statements in your destructor to test this out. Structuring your program not to allocate anything is probably the way the system was designed. I've not worked with embedded systems before, however I have read some experienced embedded shops that discourage use of dynamic memory because the runtime environment has scarce amounts of it. --- However, if you must, you can still use placement new. If you don't have the `<new>` header, here are the relevant lines directly from it on my version of GCC: ``` // Default placement versions of operator new. inline void* operator new(std::size_t, void* __p) throw() { return __p; } inline void* operator new[](std::size_t, void* __p) throw() { return __p; } // Default placement versions of operator delete. inline void operator delete (void*, void*) throw() { } inline void operator delete[](void*, void*) throw() { } ``` Stick that somewhere in a header file included by every source file that uses placement new/delete. Sample file that tests this: ``` #include <cstdio> #include <new> int main(int argc, char** argv) { typedef char const* cstr; char foobar[16]; cstr* str = new (&foobar) cstr(argc > 1 ? argv[1] : "Hello, world!"); std::puts(*str); str->~cstr(); } ``` On my version of GCC, this does not use `libstdc++` at all (if `-fno-exceptions` is used). --- Now, if you want to combine that with `malloc` (if your platform provides this), then you can do this: ``` #include <cstdio> #include <cstdlib> inline void* operator new (std::size_t n) {return std::malloc(n);} inline void* operator new[](std::size_t n) {return std::malloc(n);} inline void operator delete (void* p) {std::free(p);} inline void operator delete[](void* p) {std::free(p);} int main(int argc, char** argv) { typedef char const* cstr; cstr* str = new cstr(argc > 1 ? argv[1] : "Hello, world!"); std::puts(*str); delete str; } ``` This allows you to use the standard `new`/`delete` that you're familiar with, without requiring use of `libstdc++`. Good luck!
Don't fight your tools. If the only compiler you have for your embedded system is a C compiler, learn C - it's not difficult. Trying to produce some bastardised version of the two languages just to solve a fairly simple programming problem will only end in tears. To look at it another way, if your embedded platform didn't even support a C compiler, but only an assembler, would your first impulse be to sit down and write a C++ compiler in assembler? I hope not, I hope you would instead sit down and learn to use the assembler to complete your assignment - writing a C++ compiler (or even a C compiler) would be totally inappropriate use of your time, and would almost certainly result in failure.
How can you do C++ when your embedded compiler doesn't have operator new or STL support?
[ "", "c++", "c", "embedded", "memory-management", "microcontroller", "" ]
It appears that when you type in a number in Java, the compiler automatically reads it as an integer, which is why when you type in (long) `6000000000` (not in integer's range) it will complain that `6000000000` is not an integer. To correct this, I had to specify `6000000000L`. I just learned about this specification. Are there other number specifications like for short, byte, float, double? It seems like these would be good to have because (I assume) if you could specify the number you're typing in is a short then java wouldn't have to cast it - that is an assumption, correct me if I'm wrong. I would normally search this question myself, but I don't know what this kind of number specification is even called.
There are specific suffixes for `long` (e.g. `39832L`), `float` (e.g. `2.4f`) and `double` (e.g. `-7.832d`). If there is no suffix, and it is an integral type (e.g. `5623`), it is assumed to be an `int`. If it is not an integral type (e.g. `3.14159`), it is assumed to be a `double`. In all other cases (`byte`, `short`, `char`), you need the cast as there is no specific suffix. The Java spec allows both upper and lower case suffixes, but the upper case version for `long`s is preferred, as the upper case `L` is less easy to confuse with a numeral `1` than the lower case `l`. See the [JLS section 3.10](http://docs.oracle.com/javase/specs/jls/se7/html/jls-3.html#jls-3.10) for the gory details (see the definition of `IntegerTypeSuffix`).
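The boundary that forces the `L` suffix is simply the 32-bit two's-complement range. A quick Python check of which literal values would fit in a Java `int`:

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1   # Java's int range

def fits_in_int(n):
    # True when the value is representable as a 32-bit signed integer.
    return INT_MIN <= n <= INT_MAX
```

2147483647 still fits, so it needs no suffix; 6000000000 does not, which is why it must be written `6000000000L`.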
By default, any integral literal is treated as *int* by the Java compiler. For *byte* and *short*, as long as the value assigned to them is in their range, there is no problem and no suffix is required. If the value assigned to a *byte* or *short* exceeds its range, an explicit cast is required. Ex: ``` byte b = 130; // CE: range is exceeded. ``` To overcome this, perform a cast: ``` byte b = (byte)130; // valid, but there is a chance of losing data. ``` In the case of the long data type, it can accept an integer value without any hassle. Suppose we assign: ``` long l = 2147483647; //which is max value of int ``` In this case no suffix like L/l is required. By default the value 2147483647 is considered by the Java compiler to be of int type; the compiler converts it internally and the int is auto-promoted to long. ``` long l = 2147483648; //CE: value is treated as int but is out of range ``` Here we need to add the suffix L so that the Java compiler treats the literal 2147483648 as a long. So finally: ``` long l = 2147483648L; // works fine. ```
Java's L number (long) specification
[ "", "java", "numbers", "long-integer", "short", "specifications", "" ]
I want to create a custom annotation (using Java) which would accept other annotations as parameter, something like: ``` public @interface ExclusiveOr { Annotation[] value(); } ``` But this causes compiler error "invalid type for annotation member". Object[] also doesn't work. Is there a way to do what I want?
I myself hereby propose a workaround for the given problem: Well, what I wanted to make possible was something like that: ``` @Contract({ @ExclusiveOr({ @IsType(IAtomicType.class), @Or({ @IsType(IListType.class), @IsType(ISetType.class) }) }) }) ``` Proposed workaround: Define a class with a parameter-less constructor (which will be called by your own annotation processor later) in the following way: ``` final class MyContract extends Contract{ // parameter-less ctor will be handled by annotation processor public MyContract(){ super( new ExclusiveOr( new IsType(IAtomicType.class), new Or( new IsType(IListType.class), new IsType(ISetType.class) ) ) ); } } ``` usage: ``` @Contract(MyContract.class) class MyClass{ // ... } ```
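The essence of the workaround above is replacing nested annotations with ordinary object composition. The same shape in Python (hypothetical constraint classes, shown only to illustrate the composition; this has nothing to do with Java's annotation machinery):

```python
# Each constraint is a plain object exposing check(value).
class IsType:
    def __init__(self, t): self.t = t
    def check(self, value): return isinstance(value, self.t)

class Or:
    def __init__(self, *constraints): self.constraints = constraints
    def check(self, value):
        return any(c.check(value) for c in self.constraints)

class ExclusiveOr:
    def __init__(self, *constraints): self.constraints = constraints
    def check(self, value):
        # exactly one branch may hold
        return sum(1 for c in self.constraints if c.check(value)) == 1

# Composed exactly like the nested annotations in the question.
contract = ExclusiveOr(IsType(int), Or(IsType(list), IsType(set)))
```

Because the constraints are ordinary constructor arguments, arbitrary nesting comes for free, which is precisely what Java's annotation value rules forbid.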
The error is produced because you can't use interfaces as annotation values (change it to `Comparable` and you'll get the same error). From the [JLS](http://java.sun.com/docs/books/jls/third_edition/html/interfaces.html#9.6): > It is a compile-time error if the return type of a method declared in an annotation type is any type other than one of the following: one of the primitive types, `String`, `Class` and any invocation of `Class`, an enum type, an annotation type, or an array of one of the preceding types. It is also a compile-time error if any method declared in an annotation type has a signature that is override-equivalent to that of any `public` or `protected` method declared in class `Object` or in the interface `annotation.Annotation`. I'm afraid I don't know of a good workaround, but now at least you know why you get the error.
Annotation member which holds other annotations?
[ "", "java", "annotations", "" ]
I don't know how to extend the map object with prototype and hope you can help. I have something like this: ``` var map = {'one':1, 'two':2}; ``` and I would like to have a method to check for the existence of a key: ``` if (map.containsKey('one')){...} ``` How would I extend the map object?
**It's dangerous to modify Object.prototype**, because [it affects all objects and will usually break other libraries you may be using](http://erik.eae.net/archives/2005/06/06/22.13.54/). In general, if you want to add methods to a dictionary-like object, you should create an object for your hash instances to inherit from, [like the Prototype Hash object](http://www.prototypejs.org/api/hash). For this specific instance, you should really just use either `if (key in map)` or `if (map.hasOwnProperty(key))`.
There is already an operator to test the existence of a key inside an object. (In JavaScript, objects are implemented as maps so actually there is no real map.) ``` if( 'one' in map ) { alert(map['one']); } ```
Extending map objects
[ "", "javascript", "dictionary", "prototype", "extend", "" ]
I want to add some Java (actually [Clojure](http://en.wikipedia.org/wiki/Clojure)) based event handlers to a HUGE legacy C application. What is the most straight forward and easily maintained way to do this? I would like the Java classes to be running in the same process as the C code. Is this even possible?
The [Java Native Interface](http://java.sun.com/javase/6/docs/technotes/guides/jni/index.html) would seem to be designed for this. In particular, [this section](http://java.sun.com/javase/6/docs/technotes/guides/jni/spec/invocation.html#wp9502) of the spec seems relevant.
By the way, it is easy to make this "seem" correct, but much more difficult to actually do correctly. Why do you want them running in the same process? You might consider making a bridge through TCP/IP between your C code and a Java process. This would be much better because it would avoid the maintenance surprises. You're thinking "maintenance surprises, what maintenance surprises?" Ah! Today you want to call from C to Java, but tomorrow someone is going to want to go back the other way. That's where your trouble becomes REALLY painful. I've been here before. Take my advice: do the TCP/IP bridge. This will keep your Java from crashing. -Stosh
How to start Java from within a C process?
[ "", "java", "c", "multithreading", "clojure", "java-native-interface", "" ]
I have an XML column in SQL Server that is the equivalent of: ``` <Test foo="bar"> <Otherstuff baz="belch" /> </Test> ``` I want to get the value of the `foo` attribute of Test (the root element) as a varchar. My goal would be something along the lines of: ``` SELECT CAST('<Test foo="bar"><Otherstuff baz="belch" /></Test>' AS xml).value('@foo', 'varchar(20)') AS Foo ``` But when I run the above query, I get the following error: > Msg 2390, Level 16, State 1, Line 1 > XQuery [value()]: Top-level attribute > nodes are not supported
John Saunders has it almost right :-) ``` declare @Data XML set @Data = '<Test foo="bar"><Otherstuff baz="belch" /></Test>' select @Data.value('(/Test/@foo)[1]','varchar(20)') as Foo ``` This works for me (SQL Server 2005 and 2008) Marc
If you don't know the root element: ``` select @Data.value('(/*/@foo)[1]','varchar(20)') as Foo ```
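For readers who need the same data outside T-SQL, the equivalent "attribute of the root element" lookup with Python's standard ElementTree (an analogy only, not SQL Server syntax) looks like this:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring('<Test foo="bar"><Otherstuff baz="belch" /></Test>')

foo = root.get("foo")                      # root attributes are a plain dict
baz = root.find("Otherstuff").get("baz")   # same for child elements
```

Unlike the XQuery `.value()` method, no `[1]` indexing is needed because `get` already returns a single attribute value.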
How do I select a top-level attribute of an XML column in SQL Server?
[ "", "sql", "sql-server", "xml", "xpath", "xquery-sql", "" ]
I need to filter products where certain attributes are stored in a joined table that match all the required properties, i.e. users need to be able to gradually narrow down their search by adding requirements. The problem really just concerns the properties table I think, rather than the join, given the following (simplified) table of product properties: ``` id product_id property value --------------------------------- 1 1 color red 2 1 size small 3 2 color red 4 2 size large ``` how would I get all the product\_ids where value is both 'red' and 'small' ? A similar question [was asked](https://stackoverflow.com/questions/329822/mysql-strict-select-of-rows-involving-many-to-many-tables) before but not answered very fully. A solution involves a COUNT and HAVING to get the rows where there are as many rows in each group as required values e.g. ``` SELECT product_id, count(*) AS group_count FROM properties where value = 'red' OR value = 'small' GROUP BY product_id HAVING group_count = 2 ``` This works but I'm concerned about performance, it seems like there would be a better way. Eventually this would need to be joined with, or at least used to filter, the products table: ``` id name ------------- 1 Product 1 2 Product 2 ``` I forgot to mention that I have 2 of these properties tables joined to products that I need to filter on, one with regular attributes of a product, another with available configurable options (a bit like variants). The scenario is to allow users to filter products like: "show products where gender = 'male', brand = 'nike' and size = 'small'" where gender and brand are 'properties' and size is in options (configurable when adding to cart) The solution of using a group with a count works with the 2 joined tables still but it gets messy, the required group count is the number of required options on the first table multiplied by the number on the second. I could just fetch the ids from properties (and the other table) then just do a select where id IN(ids), matching a set of ids for both property tables, but I don't like the idea of doing this with a really long list of ids though.
Not sure this is faster, but joins from subqueries generated from your filter criteria would work: ``` Select p.name, p.id from product p, (select product_id from properties where value='red') colors, (select product_id from properties where value='small') sizes where p.id=colors.product_id and p.id=sizes.product_id ```
``` SELECT DISTINCT p1.product_id, pn.name FROM properties p1, properties p2, productNames pn WHERE p1.product_id = p2.product_id AND p1.property = 'size' AND p1.value = 'small' AND p2.property = 'color' AND p2.value = 'red' AND pn.id = p1.product_id ```
How do I select rows from a MySQL table grouped by one column with required values in another
[ "sql", "mysql", "database" ]
How do I call `WSAGetLastError()` from WinAPI so I get the valid text error?
``` [DllImport("ws2_32.dll", CharSet = CharSet.Auto, SetLastError = true)] static extern Int32 WSAGetLastError(); ``` Also, on [pinvoke.net](http://www.pinvoke.net/default.aspx/kernel32/GetLastError.html) it's said: > You should never PInvoke to GetLastError. Call Marshal.GetLastWin32Error instead! [System.Runtime.InteropServices.Marshal.GetLastWin32Error()](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshal.getlastwin32error.aspx)
`WSAGetLastError` is just a wrapper for the Win32 `GetLastError` function. If you're doing things with P/Invoke, you can use the `SetLastError` parameter to the `DllImport` attribute. It tells .NET that the imported function will call `SetLastError()`, and that the value should be collected. If the imported function fails, you can get at the last error with `Marshal.GetLastWin32Error()`. Alternatively, you can just `throw new Win32Exception()`, which uses this value automatically. If you're not doing things with P/Invoke, you're out of luck: there's no guarantee that the last error value will be preserved long enough to make it back through multiple layers of .NET code. In fact, I'll link to Adam Nathan: [never define a PInvoke signature for GetLastError](http://blogs.msdn.com/adam_nathan/archive/2003/04/25/56643.aspx).
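The warning above, that the last-error value may not survive intervening calls, can be modeled with a toy: one global "last error" slot that is clobbered by whatever runs next, so it must be read immediately. This is only an illustration of the failure mode, not real Win32 code:

```python
_last_error = 0  # stands in for the thread's GetLastError slot

def failing_call():
    """Simulates an API call that fails and records an error code."""
    global _last_error
    _last_error = 10035  # e.g. WSAEWOULDBLOCK

def unrelated_call():
    """Any other call may overwrite the slot with its own result."""
    global _last_error
    _last_error = 0  # this call succeeded, clobbering the previous code

failing_call()
captured = _last_error   # read immediately: correct
unrelated_call()
too_late = _last_error   # read after intervening work: clobbered
```

This is exactly why the marshaler captures the value right after the P/Invoke call and why `Marshal.GetLastWin32Error()` exists.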
How to get last error (WSAGetLastError)?
[ "c#", ".net", "winapi", "error-handling", "pinvoke" ]
Using the WebClient class I can get the title of a website easily enough: ``` WebClient x = new WebClient(); string source = x.DownloadString(s); string title = Regex.Match(source, @"\<title\b[^>]*\>\s*(?<Title>[\s\S]*?)\</title\>", RegexOptions.IgnoreCase).Groups["Title"].Value; ``` I want to store the URL and the page title. However when following a link such as: <http://tinyurl.com/dbysxp> I'm clearly going to want to get the Url I'm redirected to. **QUESTIONS** Is there a way to do this using the `WebClient` class? How would I do it using `HttpResponse` and `HttpRequest`?
If I understand the question, it's much easier than people are saying - if you want to let WebClient do all the nuts and bolts of the request (including the redirection), but then get the *actual* response URI at the end, you can subclass WebClient like this: ``` class MyWebClient : WebClient { Uri _responseUri; public Uri ResponseUri { get { return _responseUri; } } protected override WebResponse GetWebResponse(WebRequest request) { WebResponse response = base.GetWebResponse(request); _responseUri = response.ResponseUri; return response; } } ``` Just use MyWebClient everywhere you would have used WebClient. After you've made whatever WebClient call you needed to do, then you can just use ResponseUri to get the actual redirected URI. *You'd need to add a similar override for GetWebResponse(WebRequest request, IAsyncResult result) too, if you were using the async stuff.*
I know this is already an answered question, but this works pretty well for me: ``` HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://tinyurl.com/dbysxp"); request.AllowAutoRedirect = false; HttpWebResponse response = (HttpWebResponse)request.GetResponse(); string redirUrl = response.Headers["Location"]; response.Close(); //Show the redirected url MessageBox.Show("You're being redirected to: "+redirUrl); ``` Cheers! ;)
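The same idea, letting the client follow the redirect and then asking for the final address, also exists in Python's standard library, which makes it easy to demonstrate end to end against a throwaway local server (the server, port, and paths below are invented for the demo):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A throwaway local server: /short answers 302 -> /final.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/short":
            self.send_response(302)
            self.send_header("Location", "/final")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<title>Landed</title>")

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urlopen follows the redirect; geturl() reports the *final* address,
# the same information WebClient exposes via ResponseUri.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/short") as resp:
    final_url = resp.geturl()
    body = resp.read()

server.shutdown()
print(final_url)
```

After the call, `final_url` points at `/final` even though the request went to `/short`, and `body` holds the page whose title you would then scrape.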
Using WebClient in C# is there a way to get the URL of a site after being redirected?
[ "c#", ".net", "regex", "http" ]
So I have an example table called **items** with the following columns: * **item\_id** (int) * **person\_id** (int) * **item\_name** (varchar) * **item\_type** (varchar) - examples: "news", "event", "document" * **item\_date** (datetime) ...and a table **person** with the following columns: "person\_id", "person\_name". I was hoping to **display a list of the top 2 submitters (+ the COUNT() of items submitted) in a given time period for each item\_type**. Here's basically what I was hoping the MySQL output would look like: ``` person_name | item_type | item_count Steve Jobs | document | 11 Bill Gates | document | 6 John Doe | event | 4 John Smith | event | 2 Bill Jones | news | 24 Bill Nye | news | 21 ``` How is this possible without making a separate query for each item\_type? Thanks in advance!
``` SELECT item_type, person_name, item_count FROM ( SELECT item_type, person_name, item_count, @r := CASE WHEN @_item_type = item_type THEN @r + 1 ELSE 1 END AS rc, @_item_type := item_type FROM ( SELECT @r := 0, @_item_type := NULL ) vars, ( SELECT item_type, person_name, COUNT(*) AS item_count FROM items GROUP BY item_type, person_name ORDER BY item_type, item_count DESC ) vo ) voi WHERE rc < 3 ```
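What the query above computes, the top two submitters per item_type, can be cross-checked in plain Python. Here is a sketch over rows shaped like the inner GROUP BY output; "Jane Roe" and "Ann Lee" are made-up extras so the cut-off is visible:

```python
from collections import defaultdict

# (person_name, item_type, item_count) rows, as if produced by the
# inner GROUP BY query; order within a type does not matter here.
rows = [
    ("Steve Jobs", "document", 11), ("Bill Gates", "document", 6),
    ("Jane Roe", "document", 3),
    ("John Doe", "event", 4), ("John Smith", "event", 2),
    ("Bill Jones", "news", 24), ("Bill Nye", "news", 21),
    ("Ann Lee", "news", 5),
]

by_type = defaultdict(list)
for person, item_type, count in rows:
    by_type[item_type].append((count, person))

# Sort each group by count descending and keep the first two names.
top2 = {
    item_type: [p for _, p in sorted(group, reverse=True)[:2]]
    for item_type, group in by_type.items()
}
print(top2["news"])
```

The dictionary ends up matching the sample output in the question: two names per type, highest counts first.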
I think this should do it: ``` SELECT person_name,item_type,count(item_id) AS item_count FROM person LEFT JOIN items USING (person_id) GROUP BY person_id ``` The "item\_type" column is going to be dodgy though, each row represents multiple items, and you're only showing the item\_type from one of them. You can list all of them with "GROUP\_CONCAT", that's a lot of fun.
MySQL: multiple grouping
[ "sql", "mysql", "group-by" ]
Why does the first constructor in ClassA cause the compiler error 'cannot use "this" in member initializer'? ... or how can I get this to work? Thanks ``` public sealed class ClassA : IMethodA { private readonly IMethodA _methodA; public ClassA():this(this) {} public ClassA(IMethodA methodA) { _methodA = methodA; } public void Run(int i) { _methodA.MethodA(i); } public void MethodA(int i) { Console.WriteLine(i.ToString()); } } public interface IMethodA { void MethodA(int i); } ```
You can't use the `this` keyword when chaining constructors essentially because `this` refers to an object that hasn't been instantiated yet (creation of the object doesn't begin until some (the top-level or base) constructor block has been entered). Moreover, why exactly would you want to do this? It seems rather pointless when you have access to the `this` keyword everywhere. I recommend simply using independent constructors as such: ``` public sealed class ClassA : IMethodA { private readonly IMethodA _methodA; public ClassA() { _methodA = this; } public ClassA(IMethodA methodA) { _methodA = methodA; } } ``` Perhaps I misunderstand what you're trying to do, but hopefully that will solve the issue for you.
You are allowed to use the `this(...)` syntax to invoke another constructor at the same level - however, you cannot use `this` (the current instance) in this context. The easiest option here is to duplicate the assignment code (`_methodA = methodA`). Another option might be null-coalescing: ``` public ClassA():this(null) {} public ClassA(IMethodA methodA) { // defaults to "this" if null _methodA = methodA ?? this; } ```
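The null-coalescing "default to this" idea from the answer above translates directly to other languages, since by the time an ordinary constructor body runs the instance already exists. A Python sketch of the same pattern (class and method names are illustrative):

```python
class Runner:
    """Delegates to a handler, defaulting to itself when none is given."""

    def __init__(self, handler=None):
        # Mirrors `_methodA = methodA ?? this;` in the answer: inside the
        # constructor body, `self` already exists, so the fallback is safe.
        self._handler = handler if handler is not None else self
        self.calls = []

    def method_a(self, i):
        self.calls.append(i)

    def run(self, i):
        self._handler.method_a(i)

solo = Runner()          # no handler: delegates to itself
solo.run(7)

sink = Runner()
chained = Runner(sink)   # explicit handler: delegates to sink
chained.run(9)
```

The key point is that the fallback happens inside the constructor body, not in the chained-constructor argument list where the instance is not yet available.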
Constructor chaining with "this"
[ "c#", "c#-3.0" ]
I think we all understand the necessity of `delete` when reassigning a dynamically-allocated pointer in order to prevent memory leaks. However, I'm curious: to what extent does the C++ standard mandate the usage of `delete`? For example, take the following program ``` int main() { int* arr = new int[5]; return 0; } ``` For all intents and purposes no leak occurs here (since the program is ending and the OS will clean up all memory once it returns), but does the standard still require -- or recommend -- the usage of `delete[]` in this case? If not, would there be any other reason why you would `delete[]` here?
There is nothing in the standard that requires a `delete[]` here - however, I would say pairing every `new`/`new[]` with a `delete`/`delete[]` is a very good guideline to follow, even if the memory will be cleaned up at program termination. Many custom objects have a destructor that does more than just release memory; using `delete` guarantees that destruction runs in these cases. Also, if you ever move your routines around, you are less likely to cause memory leaks in other places in your code.
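The point about destructors doing more than freeing memory holds in any language. Here is a Python sketch, offered as an analogy rather than C++ semantics, in which skipping the explicit cleanup call leaves buffered output unwritten:

```python
import tempfile
from pathlib import Path

# The issue is not the memory itself but the *side effects* of cleanup:
# here, data sits in a userspace buffer until the file object is closed.
path = Path(tempfile.mkdtemp()) / "log.txt"

f = open(path, "w", buffering=65536)
f.write("important line\n")      # still only in the in-process buffer

leaked_view = path.read_text()   # nothing has reached the disk yet
f.close()                        # the "destructor" work: flush + release
flushed_view = path.read_text()
```

If the object's teardown were skipped (the analogue of never calling `delete`), the flush would depend on interpreter shutdown behavior instead of happening at a well-defined point.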
Dupe of [Is there a reason to call delete in C++ when a program is exiting anyway?](https://stackoverflow.com/questions/677812/is-there-a-reason-to-call-delete-in-c-when-a-program-is-exiting-anyway) Answer is that because of destructors that need to be run, it is sometimes necessary to delete the object before exiting the program. Also, many memory leak detection tools will complain if you don't do this, so to make it easier to find real memory leaks, you should try and delete all of your objects before exiting.
Must new always be followed by delete?
[ "c++", "new-operator" ]
If I have a DLL (that was built in release-mode) and the corresponding PDB file, is it possible to debug (step-into) classes/methods contained in that DLL? If so, what are the required steps/configuration (e.g. where to put the PDB file)? **Edit:** I have the PDB file in the same place as the DLL (in the bin/debug directory of a simple console test application). I can see that the symbols for the DLL are loaded (in the Output window and also in the Modules window), but still I cannot step into the methods of that DLL. Could this be the result of compiler optimizations (as described by Michael in his answer)?
I finally found what caused the problems debugging a DLL that was built in release configuration: First of all, it basically works as expected. That means: if I have a DLL built in release configuration plus the corresponding PDB file, then I can debug the classes/methods contained in that DLL. When I first tried this, I unfortunately tried to step into methods of a class which has the **DebuggerStepThroughAttribute**, e.g: ``` [System.Diagnostics.DebuggerStepThrough] public class MyClass { public void Test() { ... } } ``` In that case, it is of course not possible to step into the method from the debugger (as expected/intended). So everything works as intended. Thanks a lot for your answers.
The pdb is usually (for me at least) detected if it is next to the dll (like with the intellisense xml files). Alternatively, you'll need a break point after the module has loaded... At the break-point, bring up the "Modules" window (Ctrl+D,M - or Debug->Windows->Modules). Right-click on your dll: "Load symbols from", "Symbol path", etc.
Debugging a release version of a DLL (with PDB file)
[ "c#", "visual-studio-2008", "debugging", "pdb-files" ]
I have a `DataTemplate` defined in a xaml file that I want to access via C# code. Can anyone please tell me how I can access it? I added a new `ResourceDictionary` file and its name is **Dictionary1.xaml**. I have a data template such as: ``` <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <DataTemplate x:Key="mytemplate"> <TextBlock Text="Name:" Background="Blue"/> </DataTemplate> </ResourceDictionary> ``` Now I have a `ListBox` called listBox1 and I want to assign the template to its `ItemTemplate` property, but I can't work out how to do it.
Where exactly are you defining it? If you define it in the ResourceDictionary of your object, then ``` Application.Current.Resources[typeof(yourDataTemplateTargetType)] ``` should work. If you are defining it as a member of something else, like say, an ItemsControl, you need to get a handle to the ItemsControl instance and call the ItemTemplate property. Edit: Ok, I think we're getting somewhere. So you are defining a ResourceDictionary in its own file. Before you can use it in your UI and access it from your code behind, you need to merge that ResourceDictionary into your application. Are you doing this? If you are, then the next step is to get this resource. Each FrameworkElement has a method called [FindResource](http://msdn.microsoft.com/en-us/library/system.windows.frameworkelement.findresource.aspx). This method is great because it walks up the ResourceDictionary tree and attempts to locate the resource with the key. So, if you want to access this resource from a UserControl, you can do the following in the code behind: ``` FindResource(typeof(yourDataTemplateTargetType)); ``` If this doesn't work for you, please show us exactly how you are declaring this resource dictionary and how it is getting merged into your application's resources.
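FindResource's walk up the resource tree behaves like a chained dictionary lookup: the nearest scope wins and outer scopes act as fallbacks. A Python sketch of that lookup order (the keys and scope names are invented for the demo):

```python
from collections import ChainMap

# Innermost scope listed first, like control -> window -> application
# resources in WPF's resource-resolution walk.
app_resources = {"mytemplate": "app-level template", "brush": "blue"}
window_resources = {"brush": "red"}
control_resources = {}

lookup = ChainMap(control_resources, window_resources, app_resources)

nearest = lookup["brush"]        # window-level "red" shadows the app's
fallback = lookup["mytemplate"]  # not defined locally, found at app level
```

This is why merging the dictionary into `Application.Resources` is enough: any element's `FindResource` call eventually falls back to that outermost scope.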
Since `Application.Current` was null in my case, I've ended up using this: ``` var myResourceDictionary = new ResourceDictionary(); myResourceDictionary.Source = new Uri("/DllName;component/Resources/MyResourceDictionary.xaml", UriKind.RelativeOrAbsolute); ``` and then getting the specified key I needed by using `myResourceDictionary["KeyName"] as TypeOfItem` ([source](http://akashsoni7.blogspot.co.il/2012/12/wpf-how-to-use-resource-dictionary-in-c.html))
How can I access ResourceDictionary in wpf from C# code?
[ "c#", "wpf", "resources" ]
How do I get the general file type description based on extension, like Explorer does? So not MIME, but the information that the end-user sees, like: .doc = Microsoft Office Word 97 - 2003 Document .zip = ZIP File .avi = Video File. And how can I get the 'secondary' information that seems to be available, which I guess is not extension-based? Like on video files it can give you the 'Length' of the movie, or on doc files how many pages it has.. etc etc..
Thanks Dan, Alright.. This answers the first question I had. Sadly not the second. Note: Not everything prints.. Credits to [PInvoke.net](http://www.pinvoke.net/default.aspx/shlwapi/AssocQueryString.html) ``` using System; using System.Runtime.InteropServices; using System.Text; using System.Diagnostics; namespace WindowsFormsApplication1 { static class Program { [DllImport("Shlwapi.dll", SetLastError = true, CharSet = CharSet.Auto)] static extern uint AssocQueryString(AssocF flags, AssocStr str, string pszAssoc, string pszExtra, [Out] StringBuilder pszOut, [In][Out] ref uint pcchOut); /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { Debug.WriteLine(FileExtentionInfo(AssocStr.Command, ".doc"), "Command"); Debug.WriteLine(FileExtentionInfo(AssocStr.DDEApplication, ".doc"), "DDEApplication"); Debug.WriteLine(FileExtentionInfo(AssocStr.DDEIfExec, ".doc"), "DDEIfExec"); Debug.WriteLine(FileExtentionInfo(AssocStr.DDETopic, ".doc"), "DDETopic"); Debug.WriteLine(FileExtentionInfo(AssocStr.Executable, ".doc"), "Executable"); Debug.WriteLine(FileExtentionInfo(AssocStr.FriendlyAppName, ".doc"), "FriendlyAppName"); Debug.WriteLine(FileExtentionInfo(AssocStr.FriendlyDocName, ".doc"), "FriendlyDocName"); Debug.WriteLine(FileExtentionInfo(AssocStr.NoOpen, ".doc"), "NoOpen"); Debug.WriteLine(FileExtentionInfo(AssocStr.ShellNewValue, ".doc"), "ShellNewValue"); // DDEApplication: WinWord //DDEIfExec: Ñﻴ߾ // DDETopic: System // Executable: C:\Program Files (x86)\Microsoft Office\Office12\WINWORD.EXE // FriendlyAppName: Microsoft Office Word // FriendlyDocName: Microsoft Office Word 97 - 2003 Document } public static string FileExtentionInfo(AssocStr assocStr, string doctype) { uint pcchOut = 0; AssocQueryString(AssocF.Verify, assocStr, doctype, null, null, ref pcchOut); StringBuilder pszOut = new StringBuilder((int)pcchOut); AssocQueryString(AssocF.Verify, assocStr, doctype, null, pszOut, ref pcchOut); return pszOut.ToString(); } [Flags] public enum AssocF { Init_NoRemapCLSID = 0x1, Init_ByExeName = 0x2, Open_ByExeName = 0x2, Init_DefaultToStar = 0x4, Init_DefaultToFolder = 0x8, NoUserSettings = 0x10, NoTruncate = 0x20, Verify = 0x40, RemapRunDll = 0x80, NoFixUps = 0x100, IgnoreBaseClass = 0x200 } public enum AssocStr { Command = 1, Executable, FriendlyDocName, FriendlyAppName, NoOpen, ShellNewValue, DDECommand, DDEIfExec, DDEApplication, DDETopic } } } ```
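The double call to AssocQueryString in FileExtentionInfo is the classic Win32 "ask for the required size, then ask for the data" protocol. Below is a toy Python model of that contract; the fake function is invented purely to show the two-call shape:

```python
def fake_assoc_query_string(buffer, size):
    """Toy stand-in for AssocQueryString: if the caller's buffer is too
    small, report the required size instead of writing the value."""
    value = "Microsoft Office Word 97 - 2003 Document"
    needed = len(value) + 1           # include the terminating NUL
    if buffer is None or size < needed:
        return None, needed           # first call: just report the size
    return value, needed              # second call: actually copy

# Call 1: probe for the required buffer size (pcchOut in the C# code).
_, required = fake_assoc_query_string(None, 0)

# Call 2: allocate exactly that much and fetch the string.
result, _ = fake_assoc_query_string(bytearray(required), required)
```

The same shape appears all over the Win32 API, which is why the C# answer builds its `StringBuilder` only after the first call returns the needed length.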
My code, which includes checks to prevent some common errors... Hope it helps :-) ``` using System; using System.Diagnostics; using System.IO; using System.Runtime.InteropServices; using System.Text; namespace HQ.Util.Unmanaged { /// <summary> /// Usage: string executablePath = FileAssociation.GetExecFileAssociatedToExtension(pathExtension, "open"); /// </summary> public static class FileAssociation { /// <summary> /// /// </summary> /// <param name="ext"></param> /// <param name="verb"></param> /// <returns>Return null if not found</returns> public static string GetExecFileAssociatedToExtension(string ext, string verb = null) { if (ext[0] != '.') { ext = "." + ext; } string executablePath = FileExtentionInfo(AssocStr.Executable, ext, verb); // Will only work for 'open' verb if (string.IsNullOrEmpty(executablePath)) { executablePath = FileExtentionInfo(AssocStr.Command, ext, verb); // required to find command of any other verb than 'open' // Extract only the path if (!string.IsNullOrEmpty(executablePath) && executablePath.Length > 1) { if (executablePath[0] == '"') { executablePath = executablePath.Split('\"')[1]; } else if (executablePath[0] == '\'') { executablePath = executablePath.Split('\'')[1]; } } } // Ensure to not return the default OpenWith.exe associated executable in Windows 8 or higher if (!string.IsNullOrEmpty(executablePath) && File.Exists(executablePath) && !executablePath.ToLower().EndsWith(".dll")) { if (executablePath.ToLower().EndsWith("openwith.exe")) { return null; // 'OpenWith.exe' is the Windows 8 or higher default for unknown extensions; I don't want to report it as the associated file } return executablePath; } return executablePath; } [DllImport("Shlwapi.dll", SetLastError = true, CharSet = CharSet.Auto)] static extern uint AssocQueryString(AssocF flags, AssocStr str, string pszAssoc, string pszExtra, [Out] StringBuilder pszOut, [In][Out] ref uint pcchOut); private static string FileExtentionInfo(AssocStr assocStr, string doctype, string verb) { uint pcchOut = 0; AssocQueryString(AssocF.Verify, assocStr, doctype, verb, null, ref pcchOut); Debug.Assert(pcchOut != 0); if (pcchOut == 0) { return ""; } StringBuilder pszOut = new StringBuilder((int)pcchOut); AssocQueryString(AssocF.Verify, assocStr, doctype, verb, pszOut, ref pcchOut); return pszOut.ToString(); } [Flags] public enum AssocF { Init_NoRemapCLSID = 0x1, Init_ByExeName = 0x2, Open_ByExeName = 0x2, Init_DefaultToStar = 0x4, Init_DefaultToFolder = 0x8, NoUserSettings = 0x10, NoTruncate = 0x20, Verify = 0x40, RemapRunDll = 0x80, NoFixUps = 0x100, IgnoreBaseClass = 0x200 } public enum AssocStr { Command = 1, Executable, FriendlyDocName, FriendlyAppName, NoOpen, ShellNewValue, DDECommand, DDEIfExec, DDEApplication, DDETopic } } } ```
How do I get File Type Information based on extension? (not MIME) in c#
[ "c#", "file", "mime", "file-type" ]
I have this class for page titles: ``` class UI { private static $title; public static function getTitle() { return self::$title; } public static function setTitle($value) { self::$title = $value; } } ``` So on my front page I simply declare this: ``` UI::setTitle('Kiubbo.com :: Todas las Noticias, Fotos y Videos'); ``` And it works fine. The problem is getting the comment pages' titles to change depending on the content (the title of the linked article). I want the page title to be the same as this value (the method has the same name, but it belongs to another class; it's not the same getTitle): ``` <?php echo $article->getTitle(); ?> ``` which is used in this function: ``` function showAllComments($article_id, $param) { $article = Article::getById($article_id); if(!empty($article)) { ?> <div class="news_item"> <h2 class="news_item_title"><b><a href = "<?php echo $article->getUrl(); ?>"><?php echo $article->getTitle(); ?></a></b></h2> </div> ``` If anyone can help, I'd appreciate it. Thx
I'm not sure I completely understand what you want to do. If you want each object to have a separate title, then you need to make the title variable non-static (and the functions non-static as well). A static variable/function has only one instance per class.
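The static-versus-instance distinction in the answer above works the same way in Python: a class attribute is one shared slot, while an instance attribute is one slot per object. A minimal sketch:

```python
class UI:
    title = "default"           # one shared, "static" slot for the class

class Article:
    def __init__(self, title):
        self.title = title      # one slot per instance

UI.title = "Kiubbo.com :: front page"   # every reader sees this value
a = Article("First post")
b = Article("Second post")              # each article keeps its own title
```

So the pattern in the question still works: keep the site-wide title static on `UI`, and have each page call `UI::setTitle($article->getTitle())` with the per-article value.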
In `showAllComments()` maybe you could do ``` UI::setTitle($article->getTitle()); ```
Dynamic page Titles
[ "php", "mysql", "html" ]
I'm attempting to compile a working copy of the [MagickNet](http://midimick.com/magicknet/) class library (DLL) using the sources from the ImageMagick and MagickNet libraries. I was unable to obtain a copy of the MagickNet source files from the creator's homepage as it is currently down, so I was forced to obtain the files and C++ project file from [here](ftp://133.37.44.6/pub/graphics/ImageMagick/dot-net/), courtesy of a Google search. Following the instructions stated [here](http://www.imagemagick.org/script/install-source.php#windows) and [here](http://72.14.235.132/search?q=cache%3Ahttp%3A%2F%2Fmidimick.com%2Fmagicknet%2Fstart.html), I created a project using the "static multi-threaded DLL" option and compiled it, before moving to the MagickNet project file and compiling that as well, after making sure all the paths pointed to the right folders. Even so, I keep receiving this error upon compilation: ``` CORE_RL_magick_.lib(nt-base.obj) : error LNK2005: _DllMain@12 already defined in MSVCRT.lib(dllmain.obj) ``` I also receive 371 other errors, all of them related to an "unresolved external symbol xxxxxxxx", and a final 372nd error describing that I have "195 unresolved externals". I managed to solve the DllMain error above by commenting out the DllMain declaration from the nt-base.c source file from the CORE\_magick project in the ImageMagick solution, however the 371 other "unresolved externals" errors still remain. I had performed a (Google) search for people with similar issues, and [some](http://www.imagemagick.org/discourse-server/viewtopic.php?f=8&t=12585) have said that the author had offered a download of a pre-compiled MagickNet DLL which works 100%, however (as I mentioned earlier) his homepage appears to be inaccessible now. I'm currently seeking one of these solutions: 1. A solution to my compilation issue, as I may be making a mistake on my part since I'm not familiar with C++ at all, 2. A link to another MagickNet source files/project zip that is 100% confirmed to compile correctly with the latest version of ImageMagick, 3. A link to a 100% working precompiled copy of the MagickNet DLL, if anyone kept a copy from the author's homepage. It should be approximately 3MB as it contains the ImageMagick libraries as well. I would really appreciate any one of these solutions, as I desperately require ImageMagick's ability to manipulate and convert images, as well as MagickNet's ease of use. I can provide additional details if you need more information. Also, I am using Visual Studio 2008 to compile the source files, and the supplied projects convert with no issues at all.
I was just able to get past this and was successfully able to compile MagickNET against the latest version of ImageMagick. I had to do several things. 1. Configured ImageMagick to use StaticMTDll. 2. Edited magick-config.h to undefine X11 support. 3. Removed the CORE\_xlib project from the ImageMagick solution. 4. Clean/Rebuild of ImageMagick 5. Added the following lib files to the linker settings for MagickNet CORE\_DB\_bzlib\_.lib CORE\_DB\_coders\_.lib CORE\_DB\_jbig\_.lib CORE\_DB\_jpeg\_.lib CORE\_DB\_magick\_.lib CORE\_DB\_png\_.lib CORE\_DB\_tiff\_.lib CORE\_DB\_ttf\_.lib CORE\_DB\_wand\_.lib CORE\_DB\_wmf\_.lib CORE\_DB\_zlib\_.lib CORE\_DB\_libxml\_.lib CORE\_DB\_jp2\_.lib CORE\_DB\_lcms\_.lib Ws2\_32.lib 6. Built MagickNet I did have to make updates to MagickNet because methods have been deprecated since the source release on the MagickNet website. I'll try and put a repository online to share the entire solution.
I had the exact same problem and Bromanko's answer solved it ...with one minor addition: in ImageMagick's magick-config.h, in addition to commenting out the line: ``` #define MAGICKCORE_X11_DELEGATE ``` ...you need to also comment out the line: ``` #define ProvideDllMain ``` then you'll be cooking with gas. Thanks bromanko...
MagickNet C++ Source Compilation Failure
[ ".net", "c++", "imagemagick" ]