From a COM library (Microsoft Office Document Imaging aka MODI) I receive an image as an IPictureDisp which I would like to convert to a System.Drawing.Image object. What would be the best way to do that? Currently I'm using the code below, which, however, throws a NotImplementedException. ``` internal sealed class IPictureDispHost : AxHost { /// <summary> /// Default Constructor, required by the framework. /// </summary> private IPictureDispHost() : base(string.Empty) { } /// <summary> /// Convert the image to an ipicturedisp. /// </summary> /// <param name="image">The image instance</param> /// <returns>The picture dispatch object.</returns> public new static object GetIPictureDispFromPicture(Image image) { return AxHost.GetIPictureDispFromPicture(image); } /// <summary> /// Convert the dispatch interface into an image object. /// </summary> /// <param name="picture">The picture interface</param> /// <returns>An image instance.</returns> public new static Image GetPictureFromIPicture(object picture) { return AxHost.GetPictureFromIPicture(picture); } } ... // somewhere later the conversion gets called Image image = IPictureDispHost.GetPictureFromIPicture(picture); ``` This is the exception stack trace: ``` System.NotImplementedException: The method or operation is not implemented. at System.Windows.Forms.UnsafeNativeMethods.IPicture.GetHandle() at System.Windows.Forms.AxHost.GetPictureFromIPicture(Object picture) at DocumentViewer.IPictureDispHost.GetPictureFromIPicture(Object picture) ``` I have references to stdole, System.Windows.Forms and System.Drawing in my project. Am I missing something?
It seems the picture obtained from the Microsoft Office Document Imaging COM components is not a valid IPictureDisp object, and there appears to be no way to convert it.
Check out this [article](http://blogs.msdn.com/andreww/archive/2007/07/30/converting-between-ipicturedisp-and-system-drawing-image.aspx). It describes three different options; just pick the one you find easiest or "cleanest" for your purposes (including the one you say isn't working for you). --- Olivier Jacot-Descombes: The link above is broken. I've added the corresponding link from the Internet Archive [WayBackMachine](https://web.archive.org): [Converting between IPictureDisp and System.Drawing.Image](https://web.archive.org/web/20160114043055/http://blogs.msdn.com/b/andreww/archive/2007/07/30/converting-between-ipicturedisp-and-system-drawing-image.aspx) (MSDN Blogs > Andrew Whitechapel).
How to convert IPictureDisp to System.Drawing.Image
[ "", "c#", "interop", "" ]
Right now I am using a list, and was expecting something like: ``` verts = list(1000) ``` Should I use an array instead?
The first thing that comes to mind for me is: ``` verts = [None]*1000 ``` But do you really need to preinitialize it?
Not quite sure why everyone is giving you a hard time for wanting to do this - there are several scenarios where you'd want a fixed-size initialised list. And you've correctly deduced that arrays are sensible in these cases. ``` import array verts=array.array('i',(0,)*1000) ``` For the non-Pythonistas, the `(0,)*1000` term creates a tuple containing 1000 zeros. The comma forces Python to treat `(0,)` as a tuple; without it, `(0)` would be evaluated as just 0. I've used a tuple instead of a list because they generally have lower overhead.
Initializing a list to a known number of elements in Python
[ "", "python", "arrays", "list", "" ]
I'd like the community's take on some thoughts I've had about Linq to Sql and other ORM mappers. I like Linq to Sql and the idea of expressing data access logic (or CRUD operations in general) in your native development tongue rather than having to deal with the "impedance mismatch" between C# and SQL. For example, to return an ObjectDataSource-compatible list of Event instances for a business layer, we use: ``` return db.Events.Select(c => new EventData() { EventID = c.EventID, Title = c.Title }) ``` If I were to implement this using old SQL-to-C# constructs, I'd have to create a Command class, add the EventID parameter (using a string to describe the "@EventID" argument), add the SQL query string to the Command class, execute the command, and then use (cast-type)nwReader["FieldName"] to pull each returned field value and assign it to a member of a newly created instance of my EventData class (yuck). So, *that* is why people like Linq/SubSonic/etc. and I agree. However, in the bigger picture I see a number of things that are wrong. My sense is that Microsoft also sees something wrong and that is why they are [killing Linq to SQL](http://codebetter.com/blogs/david.hayden/archive/2008/10/31/linq-to-sql-is-dead-read-between-the-lines.aspx) and trying to move people to Linq to Entities. Only, I think that Microsoft is *doubling-down on a bad bet.* So, what is wrong? The problem is that there are [architecture astronauts](http://www.joelonsoftware.com/articles/fog0000000018.html), especially at Microsoft, who look at Linq to Sql and realize that it is not a true data management tool: there are still many things you cannot do easily or comfortably in C# and they aim to *fix it.* You see this manifested in the ambitions behind Linq to Entities, blog posts about the [revolutionary](http://blogs.msdn.com/adioltean/archive/2005/09/13/465471.aspx) nature of Linq and even the [LinqPad challenge](http://www.linqpad.net/Challenge.aspx).
And the problem with *that* is that it assumes that SQL is the problem. That is, in order to reduce a mild discomfort (impedance mismatch between SQL and C#), Microsoft has proposed the equivalent of a space suit (full isolation) when a band-aid (Linq to SQL or something similar) would do just fine. As far as I can see, developers are quite smart enough to master the relational model and then apply it intelligently in their development efforts. In fact, I would go one further and say that Linq to SQL, SubSonic, etc. are *already too complex:* the learning curve isn't that much different from mastering SQL itself. Since, for the foreseeable future, developers *must* master SQL and the relational model, we're now faced with learning *two* query / CRUD languages. Worse yet, Linq is often difficult to test (you don't have a query window), removes us one layer from the real work we are doing (it generates SQL), and has very clumsy support (at best) for SQL constructs like Date handling (e.g. DateDiff), "Having" and even "Group By". What is the alternative? Personally, I don't need a different model for data access like Linq to Entities. I'd prefer to simply pop up a window in Visual Studio, enter and validate my SQL, and then press a button to generate or supplement a C# class to encapsulate the call. Since you already know SQL, wouldn't you prefer to just enter something like this: ``` Select EventID, Title From Events Where Location=@Location ``` and end up with an EventData class that A) contains the EventID and Title fields as properties and B) has a factory method that takes a 'Location' string as an argument and that generates a List<EventData>? You'd have to think carefully about the object model (the above example obviously doesn't deal with that) but the fundamental approach of still using SQL while eliminating the impedance mismatch appeals to me a great deal. The question is: am I wrong? 
Should Microsoft rewrite the SQL infrastructure so that you don't have to learn SQL / relational data management any more? *Can* they rewrite the SQL infrastructure in this way? Or do you think that a very thin layer on top of SQL to eliminate the pain of setting up parameters and accessing data fields is quite sufficient? **Update** I wanted to promote two links to the top because I think that they capture important aspects of what I am after. First, CodeMonkey points out an article entitled ["The Vietnam of Computer Science."](http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx) It takes a while to get started but is a very interesting read. Second, AnSGri points to one of Joel Spolsky's more prominent pieces: [The Law of Leaky Abstractions](http://www.joelonsoftware.com/articles/LeakyAbstractions.html). It isn't exactly on topic but it is close and is a great read. Update 2: I've given the "answer" to ocdecio although there are many great answers here and the choice of the "right" answer is purely subjective. In this case, his answer squared with what I think is truly the best practice given the current state of technology. This is an area that I fully expect to evolve, however, so things may well change. I'd like to thank everyone who contributed, I've upvoted everyone who I think gave a thoughtful answer.
For at least 6 years I have been using my own ORM that is based on a very simple concept: projection. Each table is projected into a class, and SQL is generated on the fly based on the class definition. It still requires me to know SQL but it takes care of the 90% simple CRUD, and I never had to manage connections, etc - and it works for the major DB vendors. I'm happy with what I have and didn't find anything worth dropping it for.
Let me preface this by saying that I am a dyed-in-the-wool database guy. ***As a gross over-generalization***: Developers don't know SQL. Developers don't really *want* to know SQL. They can write it, they can design tables, but it makes them feel icky. They tend to do stupid things when the necessary query is more than a simple join. Not because the developers are stupid -- because they can't be bothered. They *like* living in a world where they only have to deal with one concept space; moving from objects to tables and back is a context switch the price for which they don't like paying. This doesn't mean they are bad, or wrong; it means there is an opportunity for improvement. If your customers (in this case, developers using your framework) don't like SQL and tables -- give them a layer of abstraction that lets them get away without dealing with the underlying mess. It's the same logic that makes garbage collection / automated memory management a big hit. Yes, developers can deal with it; yes, they can write code that is better optimized without it; but not having to deal with it makes them happier and more productive.
Doesn't Linq to SQL miss the point? Aren't ORM-mappers (SubSonic, etc.) sub-optimal solutions?
[ "", "sql", "linq-to-sql", "linq-to-entities", "" ]
View this code: ``` function testprecision(){ var isNotNumber = parseFloat('1.3').toPrecision(6); alert(typeof isNotNumber); //=> string } ``` I would have expected a number. If 'isNotNumber' should be a real number, recasting is the solution: ``` alert(typeof parseFloat(isNotNumber)) //=> number ``` [Edit] thanks for your answers. Precision is not so precise a term, I conclude. It can represent the *total number of digits* of a number, or the *number of fractional digits*. Most people in the Netherlands (where I come from) think of precision in the 'number of fractional digits' way. The javascript toPrecision method concerns the first representation, so this is confusing. Anyway, the method makes it possible to introduce 'false precision', am I right? For the second meaning we have toFixed, and the same goes for that (returns a string, possibility of false precision). Anyway, having made reinventing the wheel my main hobby, I played around to construct a javascript float object, using the knowledge I gathered here. Maybe it's useful for someone out there, or maybe one of you has better ideas? ``` function Float(f,nDec) { var Base = this,val; setPrecision( nDec || 2 ); set( f || 0, nDec || Base.precision ); Base.set = set; Base.ndec = setPrecision; /** public setprecision * sets a value for the number of fractional * digits (decimals) you would like getf to * return. NB: can't be more than 20. * Returns the Float object, so allows method * chaining * @param {Number} iPrecision */ function setPrecision(iPrecision) { var ix = parseInt(iPrecision,10) || 2; Base.precision = ix >= 21 ? 20 : ix; return Base; } /** public set * sets the 'internal' value of the object.
Returns * the Float object, so allows method chaining * @param {Number} f * @param {Number} ndec */ function set(f,ndec) { val = parseFloat(f) || 0; if (ndec) { setPrecision(ndec); } Base.val = val; return Base; } /** public get: * return number value (as a float) */ Base.get = function(){ var ndec = Math.pow(10,Base.precision), ival = parseInt(val*ndec,10)/ndec; Base.val = ival; return Base.val; }; /** public getf * returns formatted string with precision * (see Base.setPrecision) * if [hx] is supplied, it returns * the float as hexadecimal, otherwise * @param {Boolean} hx */ Base.getf = function(hx){ var v = Base.val.toFixed(Base.precision); return hx ? v.toString(16) : v; }; /** public add * adds [f] to the current value (if [f] is a * Float, otherwise returns current value) * optionally sets a new number of decimals * from parameter [ndec] * @param {Number} f * @param {Number} ndec */ Base.add = function(f,ndec){ if ( parseFloat(f) || val===0) { set(Base.val+parseFloat(f)); if (ndec) { setPrecision(ndec);} } return Base.get(); }; /** toString * returns the internal value of the Float object * functions like a getter (supposedly) */ Base.toString = Base.get; } ``` usage/example: ``` var xf = new Float(); //=> value now 0.0 xf.set(0.86/0.8765,17).add(3.459); alert(xf+'|'+xf.getf()); //=> 4.440175128351398|4.44017512835139800 ```
From the docs: "Returns a string representing the Number object to the specified precision." toPrecision() seems intended for formatting output, in which case a string is the most reasonable outcome. It represents the final output in a form that will not be mangled by further manipulation. If you are looking to do some truncation of precision for calculation reasons, I tend to multiply by 10^n where n is the digits I want to keep, take an integer from that and then divide again by the same. This isn't perfect though: in some situations you may invite an overflow. Frankly, I prefer to do more complex financial calculations on the server, where I have a currency, binary coded decimal or similar numeric types.
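That multiply/truncate/divide idea can be sketched as follows. This is a quick illustration, not the answerer's original code: the helper name `truncateTo` is mine, and `Math.trunc` is a later addition to the language.

```javascript
// Truncate x to n fractional digits while keeping it a number.
// Caveat (as the answer notes): multiplying by 10^n can overflow or
// lose precision for very large x or n.
function truncateTo(x, n) {
  var factor = Math.pow(10, n); // 10^n, where n is the digits to keep
  return Math.trunc(x * factor) / factor;
}

console.log(truncateTo(1.23456, 2));        // 1.23
console.log(typeof truncateTo(1.23456, 2)); // "number"
console.log(truncateTo(-1.999, 2));         // -1.99 (trunc rounds toward zero)
```

Unlike `toPrecision()`, the result stays a number, so it can feed directly into further arithmetic.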
Assume you have a number like '1.6'. If you format it to have 6 zeroes to the right, you would get a '1.600000'. To the computer, it is still the same number as 1.6, but to you and your website, it is not the same if all your numbers are of different lenghts (which could hurt a parser, for instance). So, as to avoid it, toPrecision returns a string, or else the interpreter would reformat the number to become '1.6' again.
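A quick illustration of that padding behaviour, assuming a standards-compliant JavaScript engine:

```javascript
var n = 1.6;

// toPrecision counts significant digits, so 7 gives six zeroes of padding.
console.log(n.toPrecision(7));        // "1.600000"
console.log(typeof n.toPrecision(7)); // "string"

// Converting back to a number drops the padding again, as described above.
console.log(Number(n.toPrecision(7))); // 1.6
```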
Why does toPrecision return a String?
[ "", "javascript", "floating-point", "types", "numbers", "" ]
I'm confused. Answers to my [previous question](https://stackoverflow.com/questions/516940/can-multiple-cpus-simultaneously-write-to-the-same-ram-location) seem to confirm my assumptions. But as stated [here](https://stackoverflow.com/questions/154551/volatile-vs-interlocked-vs-lock) volatile is not enough to assure atomicity in .Net. Either operations like incrementation and assignment in MSIL are not translated directly to a single native opcode, or many CPUs can simultaneously read and write to the same RAM location. To clarify: 1. I want to know if writes and reads are atomic on multiple CPUs? 2. I understand what volatile is about. But is it enough? Do I need to use interlocked operations if I want to get the latest value written by another CPU?
Herb Sutter recently wrote an article on `volatile` and what it really means (how it affects ordering of memory access and atomicity) in the native C++, .NET, and Java environments. It's a pretty good read: * [volatile vs. volatile](https://web.archive.org/web/20090421134427/http://www.ddj.com/hpc-high-performance-computing/212701484)
volatile in .NET does make access to the variable atomic. The problem is, that's often not enough. What if you need to read the variable, and if it is 0 (indicating that the resource is free), set it to 1 (indicating that it's locked, and other threads should stay away from it)? Reading the 0 is atomic. Writing the 1 is atomic. But between those two operations, anything might happen. You might read a 0, and then before you can write the 1, another thread jumps in, reads the 0, and writes a 1. However, volatile in .NET *does* guarantee atomicity of accesses to the variable. It just doesn't guarantee thread safety for operations relying on multiple accesses to it. (Disclaimer: volatile in C/C++ does not even guarantee this. Just so you know. It is much weaker, and occasionally a source of bugs because people assume it guarantees atomicity :)) So you need to use locks as well, to group together multiple operations as one thread-safe chunk. (Or, for simple operations, the `Interlocked` operations in .NET may do the trick)
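The answer above is about .NET, but the check-then-act race it describes is language-independent. As an illustrative analogue (not the original C#), here is the same pattern sketched in Java, where `AtomicInteger.compareAndSet` plays the role of .NET's `Interlocked.CompareExchange`:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFlag {
    // 0 = free, 1 = locked
    private final AtomicInteger flag = new AtomicInteger(0);

    // Broken: the read and the write are each atomic, but another
    // thread can slip in between them and also observe the 0.
    public boolean tryAcquireRacy() {
        if (flag.get() == 0) {   // atomic read
            flag.set(1);         // atomic write - but a race window sits between them
            return true;
        }
        return false;
    }

    // Correct: compare-and-set performs the read and the conditional
    // write as one indivisible operation.
    public boolean tryAcquire() {
        return flag.compareAndSet(0, 1);
    }

    public void release() {
        flag.set(0);
    }
}
```

The .NET equivalent of `compareAndSet(0, 1)` would be `Interlocked.CompareExchange(ref flag, 1, 0) == 0`.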
Why volatile is not enough?
[ "", "c#", "synchronization", "" ]
I am writing a fairly simple Django application where users can enter string queries. The application will then search through the database for this string. ``` Entry.objects.filter(headline__contains=query) ``` This query is pretty straightforward but not really helpful to someone who isn't 100% sure what they are looking for. So I expanded the search. ``` from django.utils import stopwords results = Entry.objects.filter(headline__contains=query) if not results: query = strip_stopwords(query) for q in query.split(' '): results += Entry.objects.filter(headline__contains=q) ``` I would like to add some additional functionality to this: searching for misspelled words, plurals, common homophones (sound the same, spelled differently), etc. I was just wondering if any of these things were built into Django's query language. It isn't important enough for me to write a huge algorithm for; I am really just looking for something built in. Thanks in advance for all the answers.
Django's ORM doesn't have this behavior out of the box, but there are several projects that integrate Django with search services, like: * Sphinx ([django-sphinx](http://github.com/dcramer/django-sphinx/tree/master)) * Solr, a search server built on Lucene ([djangosearch](https://code.google.com/p/django-solr-search/)) * Lucene ([django-search-lucene](http://code.google.com/p/django-search-lucene/)) I can't speak to how well options #2 and #3 work, but I've used django-sphinx quite a lot, and am very happy with the results.
You could try using Python's [difflib](http://docs.python.org/library/difflib) module. ``` >>> from difflib import get_close_matches >>> get_close_matches('appel', ['ape', 'apple', 'peach', 'puppy']) ['apple', 'ape'] >>> import keyword >>> get_close_matches('wheel', keyword.kwlist) ['while'] >>> get_close_matches('apple', keyword.kwlist) [] >>> get_close_matches('accept', keyword.kwlist) ['except'] ``` The problem is that to use difflib you must build a list of words from the database, which can be expensive. You could cache the list of words and only rebuild it once in a while. Some database systems support a search method to do what you want, like PostgreSQL's [`fuzzystrmatch`](http://www.postgresql.org/docs/8.3/static/fuzzystrmatch.html) module. If that is your case you could try calling it. --- **edit:** For your new "requirement", well, you are out of luck. No, there is nothing built into Django's query language.
Django "Did you mean?" query
[ "", "python", "django", "spell-checking", "" ]
What libraries/tools do you have in your Java Swing tool set?

* XUL
* Layout Managers
* Packagers/Installers
* Books
* etc.
Here is what I use: * **"Framework"**: [Swing Application Framework](http://appframework.dev.java.net), does not do much, but does it quite well (if you use it you may want to take a look at [one presentation](http://jfpoilpret.blogspot.com/2008/11/my-presentation-on-jsr-296-at-jazoon-08.html) I did last year) * **JTables**: handling tables is often a pain (lots of boilerplate code...); I generally use [GlazedLists](http://publicobject.com/glazedlists/) which simplifies the work a lot (and brings many improvements) * **[EventBus](https://eventbus.dev.java.net/)**: this was mentioned in another answer * **LayoutManager**: [DesignGridLayout](https://designgridlayout.dev.java.net) (shameless plug, this is one of my open source projects) * **Look & Feel**: [Substance](https://substance.dev.java.net/) is very good in some situations where you don't want to use the system look and feel * **Docking library**: if your application needs docking, you will find [MyDoggy](http://mydoggy.sourceforge.net) useful (and it has a well-written API). One problem it has is a bad integration with some third-party look and feels (like Substance) All these libraries above are open source. In addition to that, I have my own set of utility classes that, among other things, help integrating the GUI with a Dependency Injection library: I have a set of utilities for [HiveMind](http://hivemind.apache.org/) container (for the few developers that know it and still use it), and another -in preparation, soon open sourced- for [Guice](http://code.google.com/p/google-guice/). I have read no specific book about Swing development, but I have used Swing for about 10 years now (not continuously however). Hence I have no recommendation in terms of books (unfortunately, because I admit that this is one weak point of Swing). "Filthy Rich Clients" book is useful only if: 1. you know Swing well 2. you want to build "fancy" GUIs
[Spring Rich Client](http://www.springsource.org/spring-rcp) and [JGoodies](http://www.jgoodies.com/) are the base of my team's GUI applications; Spring remoting for connecting to the server, and Java Web Start for deployment.
Java Swing: Libraries, Tools, Layout Managers
[ "", "java", "swing", "devtools", "" ]
In the IntelliJ console, stack traces automatically contain hyperlinks that bring you to the relevant source files. The links appear at the end of each line in the format (Log4jLoggerTest.java:25). I can configure log4j to output text in a similar format. ``` log4j.appender.Console.layout.ConversionPattern=%d{ABSOLUTE} (%F:%L) - %m%n ``` In eclipse, the console automatically turned text like this into links. In IntelliJ, the stack traces are links but my own output in the same form remains un-linked. Is there any way to get IntelliJ to do the same?
Yes you can, try this pattern: ``` <param name="ConversionPattern" value="%-5p - [%-80m] - at %c.%M(%F:%L)%n"/> ```
There is a plugin for IntelliJ IDEA to get clickable links in your console called [Awesome Console](https://plugins.jetbrains.com/plugin/7677).
Can IntelliJ create hyperlinks to the source code from log4j output?
[ "", "java", "eclipse", "log4j", "intellij-idea", "" ]
I am upgrading my environment from Eclipse 3.3.1 and Java 1.4 to Eclipse 3.4.1 and Java 1.5. My unit tests are in JUnit 3. Eclipse Java version: 1.5.0_17. Standalone env version: 1.5.0_12 or 1.5.0_17; both work. I have a method on a class that writes an XML file to disk. It calls TransformerFactory tf = [javax.xml.transform.]TransformerFactory.newInstance(); When I run the code outside of Eclipse it runs fine. When I run the code in JUnit in Eclipse I get the stack trace below. The missing class is in the rt.jar of Java 1.4 and not in Java 5, but shouldn't that be abstracted from me? How can I make the test pass? I get the same error when I run the code in Eclipse from an application. ``` java.lang.NoClassDefFoundError: org/apache/xalan/processor/TransformerFactoryImpl at weblogic.xml.jaxp.RegistryTransformerFactory.(RegistryTransformerFactory.java:62) at weblogic.xml.jaxp.RegistrySAXTransformerFactory.(RegistrySAXTransformerFactory.java:12) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:494) at java.lang.Class.newInstance0(Class.java:350) at java.lang.Class.newInstance(Class.java:303) at javax.xml.transform.FactoryFinder.newInstance(FactoryFinder.java:100) at javax.xml.transform.FactoryFinder.findJarServiceProvider(FactoryFinder.java:278) at javax.xml.transform.FactoryFinder.find(FactoryFinder.java:185) at javax.xml.transform.TransformerFactory.newInstance(TransformerFactory.java:103) at com.bellsouth.snt.cnmp.sso.netcool.NetcoolAccessThread.writeXmlFile(NetcoolAccessThread.java:278) at com.bellsouth.snt.cnmp.sso.netcool.NetcoolAccessThreadTest.testWriteXmlFile(NetcoolAccessThreadTest.java:83) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at junit.framework.TestCase.runTest(TestCase.java:164) at junit.framework.TestCase.runBare(TestCase.java:130) at junit.framework.TestResult$1.protect(TestResult.java:106) at junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) ``` **update** I did some more research in the bowels of the stack trace. 
The working versions (outside eclipse) are returning an instance of com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl which is the fallback impl class name in javax.xml.transform.TransformerFactory.newInstance() ``` public static TransformerFactory newInstance() throws TransformerFactoryConfigurationError { try { return (TransformerFactory) FactoryFinder.find( /* The default property name according to the JAXP spec */ "javax.xml.transform.TransformerFactory", /* The fallback implementation class name, XSLTC */ "com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl"); } catch (FactoryFinder.ConfigurationError e) { throw new TransformerFactoryConfigurationError( e.getException(), e.getMessage()); } } ```
I added the following line to setup of the unittest ``` System.setProperty("javax.xml.transform.TransformerFactory", "com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl"); ``` I figured out what to do with a bit of RTFM. <http://java.sun.com/j2se/1.5.0/docs/api/javax/xml/transform/TransformerFactory.html#newInstance()>
You could check if you are running with the same Java version when running in Eclipse as when running outside of it (in Eclipse: Run As -> Run Configuration... -> JRE tab).
The XML code runs properly, but junit fails with NoClassDefFound
[ "", "java", "junit", "eclipse-3.4", "xsltc", "" ]
I have an IN/OUT ratio hit counting system on my site. When a user is sent to my site, I grab the referrer, strip the domain, look up the domain and +1 to hits\_in. Very simple. What I discovered, though, is that if a user refreshes the page, the referrer is resent to the site, and it counts as another +1. What's even worse is that if the user clicks on some link on the site, and then hits BACK to go to the original page, the referrer is resent, and it counts as another +1. So if a foreign site sends me 1 user, who clicks on a video link, views the video, hits BACK in his browser, and then does this 3 times, it will count as if the site sent me 4 users when in fact it's just 1. Is there any way I could prevent the 2 examples from happening without actually logging all IPs and checking access times for each IP before doing the +1?
I am not an expert in this, but can't you just use sessions? Temporarily store the referring URL in the session; when the user clicks back, check whether the session already contains the referral site, and if it does, don't count it.
Do something like this: if the user doesn't have the `'referer_logged'` cookie set, log their referer and set the cookie. This would log only one referer per user.
How can you work around the re-sending of the http referrer data on refresh in php?
[ "", "php", "http", "referrer", "" ]
I'm writing a simple document management system for my work. I'm only a hobbyist but have been programming for a while now. My problem is this. When I delete a number of documents through my app, I then need to remove the related database records. What's the best way to go about this without ending up with files with no database records, or records pointing to files that do not exist, if there is an exception thrown?
Store the files in the database. Then you can have referential integrity without adding complexity (MSMQ etc) to your application. Yes, this will increase the size of your db. But the files are already on your server, so it's just the same bits in a different place.
This is a kind of "distributed transaction" system. You probably want to write a consolidation routine and run it once in a while, you *will* have inconsistencies at some point.
Document Management
[ "", "c#", "sql-server", "database", "dms", "" ]
I've got a jQuery JSON request, and in that JSON data I want to be able to sort by unique values. So I have ``` { "people": [{ "pbid": "626", "birthDate": "1976-02-06", "name": 'name' }, { "pbid": "648", "birthDate": "1987-05-22", "name": 'name' }, ..... ``` So far, I have this ``` function(data) { $.each(data.people, function(i, person) { alert(person.birthDate); }) } ``` but I am at a total loss as to how to efficiently get only the unique birthDates and sort them by year (or sort by any other personal data). I'm trying to do this and be efficient about it (I'm hoping that is possible). Thanks
I'm not sure how performant this will be, but basically I'm using an object as a key/value dictionary. I haven't tested this; note that the iteration order of object keys isn't guaranteed, so you may want to sort the keys explicitly. ``` function(data) { var birthDates = {}; var param = "birthDate" $.each(data.people, function() { if (!birthDates[this[param]]) birthDates[this[param]] = []; birthDates[this[param]].push(this); }); for(var d in birthDates) { // add d to array here // or do something with d // birthDates[d] is the array of people } } ```
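Building on the same grouping idea, here is a small sketch of getting the unique dates in sorted order. The sample data and variable names are mine, and it relies on the fact that ISO-format dates sort correctly as plain strings:

```javascript
var people = [
  { pbid: "626", birthDate: "1976-02-06", name: "name" },
  { pbid: "648", birthDate: "1987-05-22", name: "name" },
  { pbid: "649", birthDate: "1976-02-06", name: "name" }
];

// Group people by birthDate, using an object as a dictionary.
var byDate = {};
people.forEach(function (person) {
  (byDate[person.birthDate] = byDate[person.birthDate] || []).push(person);
});

// The keys are the unique dates; sorting them explicitly avoids
// relying on object iteration order.
var uniqueDates = Object.keys(byDate).sort();
console.log(uniqueDates); // ["1976-02-06", "1987-05-22"]
```

`byDate[d]` then gives you the array of people sharing date `d`, so sorting by any other field works the same way with a different key.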
``` function(data){ var arr = new Array(); $.each(data.people, function(i, person){ if (jQuery.inArray(person.birthDate, arr) === -1) { alert(person.birthDate); arr.push(person.birthDate); } }); } ```
how to get distinct values from json in jquery
[ "", "javascript", "jquery", "json", "" ]
So from this string: "name[id]" I need this: "id" I used `str.split('[]')`, but it didn't work. Does it only take a single delimiter?
Use a regular expression: ``` import re s = "name[id]" re.search(r"\[(.*?)\]", s).group(1) # = 'id' ``` `str.split()` takes a string on which to split input. For instance: ``` "i,split,on commas".split(',') # = ['i', 'split', 'on commas'] ``` The `re` module also allows you to split by regular expression, which can be *very* useful, and I think is what you meant to do. ``` import re s = "name[id]" # split by either a '[' or a ']' re.split(r'\[|\]', s) # = ['name', 'id', ''] ```
Either ``` "name[id]".split('[')[1][:-1] == "id" ``` or ``` "name[id]".split('[')[1].split(']')[0] == "id" ``` or ``` re.search(r'\[(.*?)\]',"name[id]").group(1) == "id" ``` or ``` re.split(r'[\[\]]',"name[id]")[1] == "id" ```
How to split a string by using [] in Python
[ "", "python", "string", "" ]
It seems like in most mainstream programming languages, **returning multiple values from a function** is an extremely awkward thing. The typical solutions are to make either a **struct** or a plain old data **class** and return that, or to pass at least some of the parameters by reference or pointer instead of returning them. Using references/pointers is pretty awkward because it relies on side effects and means you have yet another parameter to pass. The class/struct solution is also IMHO pretty awkward because you then end up with a million little classes/structs that are only used to return values from functions, generating unnecessary clutter and verbosity. Furthermore, a lot of times there's one return value that is always needed, and the rest are only used by the caller in certain circumstances. Neither of these solutions allows the caller to ignore unneeded return values. The one language I'm aware of that handles multiple return values elegantly is Python. For those of you who are unfamiliar, it uses tuple unpacking: ``` a, b = foo(c) # a and b are regular variables. myTuple = foo(c) # myTuple is a tuple of (a, b) ``` Does anyone have any other good solutions to this problem? Both idioms that work in existing mainstream languages besides Python and language-level solutions you've seen in non-mainstream languages are welcome.
Pretty much all ML-influenced functional languages (which is most of them) also have great tuple support that makes this sort of thing trivial. For C++ I like boost::tuple plus boost::tie (or std::tr1 if you have it) ``` typedef boost::tuple<double,double,double> XYZ; XYZ foo(); double x,y,z; boost::tie(x,y,z) = foo(); ``` or a less contrived example ``` MyMultimap::iterator lower,upper; boost::tie(lower,upper) = some_map.equal_range(key); ```
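In Python itself, the tuple idiom from the question scales nicely with `collections.namedtuple`: callers can still unpack positionally, but they can also keep the whole result and read only the fields they care about. A small sketch (the `MinMax` type and `bounds` function are invented for illustration):

```python
from collections import namedtuple

# A lightweight result type: behaves exactly like a tuple,
# but the fields also have names.
MinMax = namedtuple("MinMax", ["lo", "hi"])

def bounds(xs):
    """Return the smallest and largest element of a non-empty sequence."""
    return MinMax(min(xs), max(xs))

lo, hi = bounds([3, 1, 4, 1, 5])   # tuple unpacking, as in the question
result = bounds([3, 1, 4, 1, 5])
print(result.lo, result.hi)        # access by name; ignore what you don't need
```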
A few languages, notably Lisp and JavaScript, have a feature called destructuring assignment or destructuring bind. This is essentially tuple unpacking on steroids: rather than being limited to sequences like tuples, lists, or generators, you can unpack more complex object structures in an assignment statement. For more details, see [here for the Lisp version](http://www.ai.mit.edu/projects/iiip/doc/CommonLISP/HyperSpec/Body/mac_destructuring-bind.html) or [here for the (rather more readable) JavaScript version](https://developer.mozilla.org/en/New_in_JavaScript_1.7#Destructuring_assignment). Other than that, I don't know of many language features for dealing with multiple return values generally. However, there are a few specific uses of multiple return values that can often be replaced by other language features. For example, if one of the values is an error code, it might be better replaced with an exception. While creating new classes to hold multiple return values feels like clutter, the fact that you're returning those values together is often a sign that your code will be better overall once the class is created. In particular, other functions that deal with the same data can then move to the new class, which may make your code easier to follow. This isn't universally true, but it's worth considering. (Cpeterso's answer about data clumps expresses this in more detail).
Elegant ways to return multiple values from a function
[ "", "python", "function", "language-agnostic", "language-design", "syntactic-sugar", "" ]
If you have a List how do you return the item if a specified property or collection of properties exists? ``` public class Testing { public string value1 { get; set; } public string value2 { get; set; } public int value3 { get; set; } } public class TestingList { public void TestingNewList() { var testList = new List<Testing> { new Testing {value1 = "Value1 - 1", value2 = "Value2 - 1", value3 = 3}, new Testing {value1 = "Value1 - 2", value2 = "Value2 - 2", value3 = 2}, new Testing {value1 = "Value1 - 3", value2 = "Value2 - 3", value3 = 3}, new Testing {value1 = "Value1 - 4", value2 = "Value2 - 4", value3 = 4}, new Testing {value1 = "Value1 - 5", value2 = "Value2 - 5", value3 = 5}, new Testing {value1 = "Value1 - 6", value2 = "Value2 - 6", value3 = 6}, new Testing {value1 = "Value1 - 7", value2 = "Value2 - 7", value3 = 7} }; //use testList.Contains to see if value3 = 3 //use testList.Contains to see if value3 = 2 and value1 = "Value1 - 2" } } ```
If you're using .NET 3.5 or better, LINQ is the answer to this one: ``` testList.Where(t => t.value3 == 3); testList.Where(t => t.value3 == 2 && t.value1 == "Value1 - 2"); ``` If not using .NET 3.5 then you can just loop through and pick out the ones you want.
You could use ``` testList.Exists(x=>x.value3 == 3) ```
List.Contains(item) with generic list of objects
[ "", "c#", "" ]
Does Python have extension methods like C#? Is it possible to call a method like: ``` MyRandomMethod() ``` on existing types like `int`? ``` myInt.MyRandomMethod() ```
You can add whatever methods you like on class objects defined in Python code (AKA monkey patching): ``` >>> class A(object): ...     pass ... >>> def stuff(self): ...     print(self) ... >>> A.test = stuff >>> A().test() ``` This does not work on builtin types, because their `__dict__` is not writable (it's a `dictproxy`). So no, there is no "real" extension method mechanism in Python.
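A runnable (non-REPL) version of the monkey-patching idea above, including what happens when you try the same trick on a builtin; the class and method names are illustrative:

```python
class A:
    pass

def stuff(self):
    return "patched method on {!r}".format(self)

# Attach the function after the class was defined (monkey patching).
A.test = stuff
print(A().test())

# Builtin types refuse this: their __dict__ is a read-only mappingproxy.
try:
    int.test = stuff
except TypeError as e:
    print("cannot patch int:", e)
```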
It can be done with Forbidden Fruit (<https://pypi.python.org/pypi/forbiddenfruit>) Install forbiddenfruit: ``` pip install forbiddenfruit ``` Then you can extend built-in types: ``` >>> from forbiddenfruit import curse >>> def percent(self, delta): ... return self * (1 + delta / 100) >>> curse(float, 'percent', percent) >>> 1.0.percent(5) 1.05 ``` Forbidden Fruit is fundamentally dependent on the C API; it works only on the CPython implementation and won't work on other Python implementations, such as Jython or PyPy.
Extension methods in Python
[ "", "python", "function", "extension-methods", "" ]
> **Possible Duplicate:** > [When should you use 'friend' in C++?](https://stackoverflow.com/questions/17434/when-should-you-use-friend-in-c) I was brushing up on my C++ (I'm a Java developer) and I came across the `friend class` keyword which I had forgotten about for a while. Is this one of those features that's just part of the kitchen sink, or is there a good reason for doing this rather than just a vanilla getter? I understand the difference in that it limits who can access the data, but I can't think of a scenario when this would be necessary. Note: I've seen a similar question, but specifically I'm asking, is this just an advanced feature that adds no real value except to confuse people looking at your code until they realize what you're doing?
I agree with the comments that say the friend keyword can improve encapsulation if used wisely. I'd just add that the most common (legitimate!) use for friend classes may be testing. You may want a tester class to have a greater degree of access than other client classes would have. A tester class could have a good reason to look at internal details that are deliberately hidden from other classes.
In my experience, the cases when friend (or mutable, which is a little similar) actually enhances encapsulation of data are rare compared with how often it's used to break encapsulation. It's rarely useful to me, but when I do use it, it's for cases in which I've had to split a class that was formerly a single class into two separate classes that need to access some common data/functionality. Edit to respond to Outlaw Programmer's comment: We absolutely agree on this. One other option apart from friend'ing classes after splitting them is to make public accessors, which sometimes break encapsulation! I think that some people think that friendly classes somehow break encapsulation because they've seen it used improperly a lot, and many people probably never see code where it's been used correctly, because it's a rare thing. I like your way of stating it though - friendliness is a good middle ground between not allowing you to split up your class and making EVERYTHING accessible to the public. Edit to respond to David Thornley: I agree that the flexibility that C++ allows you to do things like this is a result of the design decisions that went into C++. I think that's what makes it even more important to understand what things are generally good and bad style in flexible languages. Java's perspective is that you should never have friend classes so that these aren't provided, but as C++ programmers it's our responsibility as a community to define appropriate use of these very flexible but sometimes misused language constructs. Edit to respond to Tom: Mutable doesn't necessarily break encapsulation, but many of the uses of the mutable keyword that I've seen in real-life situations break encapsulation, because it's much more common to see people breaking encapsulation with mutable than to actually find and understand a proper use of mutable in the first place.
When to use friend class in C++
[ "", "c++", "keyword", "friend", "" ]
Here is part of a stack-trace from a recent run of an unreliable application written in Python which controls another application written in Excel: ``` pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2146788248), None) ``` Obviously something has gone wrong ... but what?[1] These COM error codes seem to be excessively cryptic. How can I decode this error message? Is there a table somewhere that allows me to convert this numerical error code into something more meaningful? [1] I actually know what went wrong in this case, it was attempting to access a Name property on a Range object which did not have a Name property... not all bugs are this easy to find!
You are not doing anything wrong. The first item in your stack trace (the number) is the error code returned by the COM object. The second item is the description associated with the error code, which in this case is "Exception Occurred". pywintypes.com\_error already called the equivalent of win32api.FormatMessage(errCode) for you. We'll look at the second number in a minute. *By the way, you can use the "Error Lookup" utility that comes in Visual Studio (C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\ErrLook.exe) as a quick launching pad to check COM error codes. That utility also calls FormatMessage for you and displays the result. Not all error codes will work with this mechanism, but many will. That's usually my first stop.* Error handling and reporting in COM is a bit messy. I'll try to give you some background. All COM method calls will return a numeric code called an HRESULT that can indicate success or failure. All forms of error reporting in COM build on top of that. The codes are commonly expressed in hex, although sometimes you will see them as large 32-bit numbers, like in your stack trace. There are all kinds of predefined return codes for common results and problems, or the object can return custom numeric codes for special situations. For example, the value 0 (called S\_OK) universally means "No error" and 0x80000002 is E\_OUTOFMEMORY. Sometimes the HRESULT codes are returned by the object, sometimes by the COM infrastructure. A COM object can also choose to provide much richer error information by implementing an interface called IErrorInfo. When an object implements IErrorInfo, it can provide all kinds of detail about what happened, such as a detailed custom error message and even the name of a help file that describes the problem. In VB6 and VBA, the `Err` object allows you to access all that information (`Err.Description`, etc.).
To complicate matters, late bound COM objects (which use a mechanism called COM Automation or IDispatch) add some layers that need to be peeled off to get information out. Excel is usually manipulated via late binding. Now let's look at your situation again. What you are getting as the first number is a fairly generic error code: DISP\_E\_EXCEPTION. *Note: you can usually figure out the official name of an HRESULT by googling the number, although sometimes you will have to use the hex version to find anything useful.* Errors that begin with DISP\_ are IDISPATCH error codes. The error loosely means "There was a COM exception thrown by the object", with more information packed elsewhere (although I don't quite know where; I'll have to look it up). From what I understand of pywintypes.com\_error, the last number in your message is the actual error code that was returned by the object during the exception. It's the actual numeric code that you would get out of VBA's `Err.Number`. Unfortunately, that second code -2146788248 (0x800A9C68) is in the range reserved for custom application-defined error messages (in VBA: `VbObjectError + someCustomErrorNumber`), so there is no centralized meaning. The same number can mean entirely different things for different programs. In this case, we have reached a dead end: > The error code is "custom", and the application needs to document what it is, except that Excel doesn't. Also, Excel (or the actual source of the error) doesn't seem to be providing any more information via IErrorInfo. Excel is notorious (to me at least) for cryptic error codes from automation and obscure situations that cause them. This is especially so for errors that one could consider "design-time errors" ("you should have known better than calling a method that doesn't exist in the object"). 
Instead of a nice "Could not read the Name property", you get "***Run-time error '1004': Application defined or object-defined error***" (which I just got by trying to access a Name property on a Range, from VBA in Excel). That is NOT very helpful. The problem is not rooted in Python or its interface to Excel. Excel itself doesn't explain what happened, even to VBA. However, the general procedure above remains valid. If you get an error from Excel in the future, you might get a better error message that you can track the same way. Good luck!
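As a small aid for the procedure described above, the severity/facility/code fields can be pulled out of a signed HRESULT with plain bit arithmetic. The helper below is my own sketch, not part of pywin32:

```python
def hresult_parts(code):
    """Split a signed 32-bit HRESULT into (severity, facility, code)."""
    u = code & 0xFFFFFFFF                 # reinterpret as unsigned
    return u >> 31, (u >> 16) & 0x7FF, u & 0xFFFF

# The outer wrapper error from the trace: DISP_E_EXCEPTION (0x80020009),
# severity 1 (failure), facility 2 (FACILITY_DISPATCH), code 9.
print(hresult_parts(-2147352567))        # (1, 2, 9)

# The inner, application-defined error: 0x800A9C68, facility 0xA
# (FACILITY_CONTROL, where VB/VBA runtime errors live), custom code 40040.
print(hresult_parts(-2146788248))        # (1, 10, 40040)
print(hex(-2146788248 & 0xFFFFFFFF))     # 0x800a9c68
```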
Do it like this: ``` try: [whatever code] except pythoncom.com_error as error: print(win32api.FormatMessage(error.excepinfo[5])) ``` More information on digesting the pythoncom.com\_error object here: <https://web.archive.org/web/20170831073447/http://docs.activestate.com/activepython/3.2/pywin32/com_error.html>
Is there a way to decode numerical COM error-codes in pywin32
[ "", "python", "windows", "excel", "com", "pywin32", "" ]
``` <?php $var = NULL; var_dump(isset($var)); // bool(false) var_dump(isset($unset_var)); // bool(false) ?> ``` isset($var) should return TRUE, since it has been set to NULL. Is there any way to check for this? Thanks!
use get\_defined\_vars() to get an array of the variables defined in the current scope and then test against it with array\_key\_exists(); Edited: if you wanted a function to test existence you would create one like so: ``` function varDefined($name,$scope) { return array_key_exists($name, $scope); } ``` and use like so in any given scope: ``` $exists = varDefined('foo',get_defined_vars()); ``` Should work for any scope.
Not very pretty, but... ``` array_key_exists('var', $GLOBALS); ``` (You can't use `@is_null($var)`, because it evaluates to true either way [and it's not really good practice to suppress errors using the @ operator...])
PHP: How to check if a var has been initialized? isset returns false when the var has been set to NULL
[ "", "php", "" ]
> Reading and writing of a single variable is atomic (language guarantee!), unless the variable is of type long or double. I was reading a course's slides and I found that written. The class was about concurrency. Can anyone explain to me why writing a long or a double is not an atomic operation? It really took me by surprise.
It's not atomic because it's a multiple-step operation at the machine code level. That is, longs and doubles are longer than the processor's word length.
Just to clarify the situation for Java, doubles and longs are not read or written to atomically unless they're declared `volatile` [JLS - Nonatomic Treatment of double and long](http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.7)
Writing long and double is not atomic in Java?
[ "", "java", "concurrency", "" ]
Given this C API declaration how would it be imported to C#? ``` int _stdcall z4ctyget(CITY_REC *, void *); ``` I've been able to get this far: ``` [DllImport(@"zip4_w32.dll", CallingConvention = CallingConvention.StdCall, EntryPoint = "z4ctygetSTD", ExactSpelling = false)] private extern static int z4ctygetSTD(ref CITY_REC args, void * ptr); ``` Naturally in C# the "void \*" doesn't compile. Some Googling indicates that it should be translated as "object." Which seems like it should work. But others indicate that "Void \* is called a function pointer in C/C++ terms which in C# terms is a delegate". That doesn't make a whole lot of sense here as what would it delegate to? Some similar calls for other APIs found through Googling use other functions in the respective API. But in this API no other call would make sense. The documentation for the call shows an example: ``` z4ctyget(&city, “00000”); ``` Which seems to show that even a static value could be passed. It will compile with object in place of the void \*. I don't know whether this is right and I haven't had an opportunity to test it (licensing issue).
For the void\* parameter you can just use an IntPtr ``` [DllImport(@"zip4_w32.dll", CallingConvention = CallingConvention.StdCall, EntryPoint = "z4ctygetSTD", ExactSpelling = false)] private extern static int z4ctygetSTD(ref CITY_REC args, IntPtr ptr); ```
You can also use void\* if you mark your class as unsafe. It really depends on what the API is looking for in that parameter. You can add IntPtr or Object\* to get past compiler, but you will still need to pass it the correct data when you call it.
How to import void * C API into C#?
[ "", "c#", "pinvoke", "dllimport", "" ]
I have a simple query: ``` select * from countries ``` with the following results: ``` country_name ------------ Albania Andorra Antigua ..... ``` I would like to return the results in one row, so like this: ``` Albania, Andorra, Antigua, ... ``` Of course, I can write a PL/SQL function to do the job (I already did in Oracle 10g), but is there a nicer, preferably non-Oracle-specific solution (or may be a built-in function) for this task? I would generally use it to avoid multiple rows in a sub-query, so if a person has more then one citizenship, I do not want her/him to be a duplicate in the list. My question is based on the similar question on [SQL server 2005](https://stackoverflow.com/questions/180032/how-can-i-combine-multiple-rows-into-a-comma-delimited-list-in-sql-server-2005). **UPDATE**: My function looks like this: ``` CREATE OR REPLACE FUNCTION APPEND_FIELD (sqlstr in varchar2, sep in varchar2 ) return varchar2 is ret varchar2(4000) := ''; TYPE cur_typ IS REF CURSOR; rec cur_typ; field varchar2(4000); begin OPEN rec FOR sqlstr; LOOP FETCH rec INTO field; EXIT WHEN rec%NOTFOUND; ret := ret || field || sep; END LOOP; if length(ret) = 0 then RETURN ''; else RETURN substr(ret,1,length(ret)-length(sep)); end if; end; ```
Here is a simple way without stragg or creating a function. ``` create table countries ( country_name varchar2 (100)); insert into countries values ('Albania'); insert into countries values ('Andorra'); insert into countries values ('Antigua'); SELECT SUBSTR (SYS_CONNECT_BY_PATH (country_name , ','), 2) csv FROM (SELECT country_name , ROW_NUMBER () OVER (ORDER BY country_name ) rn, COUNT (*) OVER () cnt FROM countries) WHERE rn = cnt START WITH rn = 1 CONNECT BY rn = PRIOR rn + 1; CSV -------------------------- Albania,Andorra,Antigua 1 row selected. ``` As others have mentioned, if you are on 11g R2 or greater, you can now use listagg which is much simpler. ``` select listagg(country_name,', ') within group(order by country_name) csv from countries; CSV -------------------------- Albania, Andorra, Antigua 1 row selected. ```
The `WM_CONCAT` function (if included in your database, pre Oracle 11.2) or `LISTAGG` (starting Oracle 11.2) should do the trick nicely. For example, this gets a comma-delimited list of the table names in your schema: ``` select listagg(table_name, ', ') within group (order by table_name) from user_tables; ``` or ``` select wm_concat(table_name) from user_tables; ``` [More details/options](http://www.oracle-base.com/articles/misc/StringAggregationTechniques.php) [Link to documentation](http://docs.oracle.com/cd/E11882_01/server.112/e10592/functions089.htm)
How can I combine multiple rows into a comma-delimited list in Oracle?
[ "", "sql", "oracle", "concatenation", "pivot", "string-aggregation", "" ]
I'm developing a JavaScript application that needs to run on the Google Android Phone OS. Is there a way to enable the JavaScript Error/Debug console within Safari on either the Android Emulator or an actual device? If so any instructions on how to do so would be appreciated.
A quick Google turns up this [blog post](https://web.archive.org/web/20120413212614/http://www.nanaze.com/2009/01/debugging-javascript-on-android.html) (posted after you asked your question), that should at least let you see any Javascript errors via the Android Debug Bridge using the command: ``` adb logcat WebCore:V *:S ``` Not quite the same as a full debug console though.
It looks like, with the Android 2.0.1 SDK you'll need to filter on "browser" instead of "WebCore"
Is there a way to enable the JavaScript Error/Debug Console for Safari within Android?
[ "", "javascript", "android", "safari", "" ]
I've got a philosophical programming problem. Let's say I have a class named Employees. Employees has business members that get set from a dataTable. In order to fill this, I use a method that takes an instance of the employee class, loops through a dataTable, and sets the members of the instance passed into it. For instance: ``` public void GetEmployees(Employees objEmployee) { //the function I am calling returns a DataTable of all the employees in the db. DataTable dt = dbEmployees.GetEmployees(); foreach(DataRow drow in dt.Rows) { objEmployee.Name = drow["Name"].ToString(); objEmployee.ID = drow["ID"].ToString(); } } ``` Then I would call the code like this in my UI logic: ``` public void GetEmployees() { Employees employee = new Employees(); employee.GetEmployees(employee); } ``` My question is, is it acceptable to pass in my class instance into a method and change the properties like I am doing, or would it be more object-oriented to do it through a function like this: ``` public Employees GetEmployees() { Employees objEmployee = new Employees(); //the function I am calling returns a DataTable of all the employees in the db. DataTable dt = dbEmployees.GetEmployees(); foreach(DataRow drow in dt.Rows) { objEmployee.Name = drow["Name"].ToString(); objEmployee.ID = drow["ID"].ToString(); } return objEmployee; } ``` And then I would call it like this: ``` private void GetEmployees() { Employees employee; employee = employee.GetEmployees(); } ``` Is there any advantage of using a function over a method? Thanks!
Both things are methods (also known as functions). The difference is that the first one "returns by reference" while the second one "returns a reference". There is no advantage in returning by reference in C# because in the simpler, natural case where you merely return a reference, no copying is done (unlike in C++). Returning a reference is, thus, to be always preferred as it's the easiest, and it allows great syntactic flexibility at the call site (such as nesting expressions: manager.Fire(GetEmployee()) without the need for a separate statement).
Sounds to me like you ought to make GetEmployees() a static method off of Employee. You shouldn't have to instantiate an employee to get a list of them. Also, your first example is only going to set your objEmployee to whatever comes up last in your data. While it loops through all the employees, it stops looping when it reaches the last one, which is the data you'll get returned. Also, does the "Employees" class refer to one employee or to many? Name it accordingly. If "Employees" represents one "Employee" then perhaps you should rename it to "Employee" and return a List from the GetEmployees method, which, as I stated above, ought to be static, so you can simply call something like "Employee.GetEmployees()". That being said, I'm never too fond of the architecture where you provide data access capabilities to your business object. It tends to couple the data access layer and the business object layer too tightly. You may want to consider creating a Data Access interface that accepts parameters for searching for employees and returns actual Employee objects as it's result. Once you do that, you would want to create an implementation of this Data Access Layer that you would then use to generate the actual instances. The advantage to this would be that you could quickly change your implementation of the Data Access Layer without having to change the business objects as well. You would program your business objects off of the Interface then, and you might be able to use dynamic assembly loading or some other method to dynamically determine the implementation of your data access.
Should I Use a Method with Parameters or a Function?
[ "", "c#", "methods", "" ]
I would like my program to email me a bug-report when it fails. Is there any way of doing this... safely? I have found I can use System.Net.Mail MailMessage and SmtpClient and such, but of course, I will have to provide a username and a password to it (unless someone knows of one that doesn't need it?). And putting that in code I find a bit... I don't know. Technically it would mean that anyone could look at the source code or the compiled IL code (or what it was called) and find that username and password and use it for spamming or other not so good activites. Which is not very good! Any ideas? Is there a better and/or different approach to this problem? Doesn't really have to be through email. But what I want is a way for the program to notify when something happens that I should fix. And to make that notification as little troublesome as possible to the user. Maybe even invisible (although a YesNo messagebox might be polite). Anyone?
Instead of sending mail you could set up a web service that would receive the reports. The web service could run over https if you want to protect the data. I did this for a customer once and it worked well. The only problem is if the program is running somewhere without internet access. Edit: Don't tell this to anyone, but we even took a screenshot of the program when it crashed and posted it together with all information about the error that we could gather. It was incredibly useful!
You don't need to provide your password to email to yourself, as you don't need other people's password to send email to them. You only need a password if you relay an email over a third party's `SMTP` server. If your `SMTP` client connects right to `example.com` on port `25` and sends an email to `test@example.com`, no password is needed. `example.com` above means an `MX` record, not an `A` record. This is a special type of record that holds the name of the server where all emails for `example.com` should go. There is no easy way to look it up from `.NET`, but if you are not going to change your `SMTP` server's address, you may hardcode it into `SmtpClient.Host` property. To find out your mail server's address, type `nslookup -q=MX example.com` at your command prompt. `SMTP` is not the best way to report errors, though. Home providers often block traffic on port `25` to all servers but their, to prevent spamming etc. You better make a web server, create an instance of [`System.Net.WebClient`](http://msdn.microsoft.com/en-us/library/system.net.webclient(VS.80).aspx) in your program and send bug reports over `HTTP`. It's more reliable and you can easily use your client's proxy settings.
C#: Safely send bug report from a program
[ "", "c#", "email", "bug-tracking", "" ]
I have an HTML form - with PHP, I am sending the data of the form into a MySQL database. Some of the answers to the questions on the form have checkboxes. Obviously, the user does not have to tick all checkboxes for one question. I also want to make the other questions (including radio groups) optional. However, if I submit the form with empty boxes, radio-groups etc, I received a long list of 'Undefined index' error messages for each of them. How can I get around this? Thanks.
Unchecked radio or checkbox elements are not submitted as they are not considered as [successful](http://www.w3.org/TR/REC-html40/interact/forms.html#h-17.13.2). So you have to check if they are sent using the `isset` or `empty` function. ``` if (isset($_POST['checkbox'])) { // checkbox has been checked } ```
I've used this technique from time to time: ``` <input type="hidden" name="the_checkbox" value="0" /> <input type="checkbox" name="the_checkbox" value="1" /> ``` **note:** This gets interpreted [differently](https://stackoverflow.com/questions/1809494/post-the-checkboxes-that-are-unchecked/8972025#8972025) in different server-side languages, so test and adjust if necessary. Thanks to [SimonSimCity](https://stackoverflow.com/users/517914/simonsimcity) for the tip.
Submit an HTML form with empty checkboxes
[ "", "php", "html", "forms", "checkbox", "" ]
Expanding upon my [earlier problem](https://stackoverflow.com/questions/489173/writing-xml-with-c), I've decided to (de)serialize my config file class which worked great. I now want to store an associative array of drive letters to map (key is the drive letter, value is the network path) and have tried using `Dictionary`, `HybridDictionary`, and `Hashtable` for this but I always get the following error when calling `ConfigFile.Load()` or `ConfigFile.Save()`: > There was an error reflecting type > 'App.ConfigFile'. [snip] > System.NotSupportedException: Cannot > serialize member > App.Configfile.mappedDrives [snip] From what I've read Dictionaries and HashTables can be serialized, so what am I doing wrong? ``` [XmlRoot(ElementName="Config")] public class ConfigFile { public String guiPath { get; set; } public string configPath { get; set; } public Dictionary<string, string> mappedDrives = new Dictionary<string, string>(); public Boolean Save(String filename) { using(var filestream = File.Open(filename, FileMode.OpenOrCreate,FileAccess.ReadWrite)) { try { var serializer = new XmlSerializer(typeof(ConfigFile)); serializer.Serialize(filestream, this); return true; } catch(Exception e) { MessageBox.Show(e.Message); return false; } } } public void addDrive(string drvLetter, string path) { this.mappedDrives.Add(drvLetter, path); } public static ConfigFile Load(string filename) { using (var filestream = File.Open(filename, FileMode.Open, FileAccess.Read)) { try { var serializer = new XmlSerializer(typeof(ConfigFile)); return (ConfigFile)serializer.Deserialize(filestream); } catch (Exception ex) { MessageBox.Show(ex.Message + ex.ToString()); return new ConfigFile(); } } } } ```
You can't serialize a class that implements IDictionary. Check out this [link](http://msdn.microsoft.com/en-us/library/ms950721.aspx). > Q: Why can't I serialize hashtables? > > A: The XmlSerializer cannot process > classes implementing the IDictionary > interface. This was partly due to > schedule constraints and partly due to > the fact that a hashtable does not > have a counterpart in the XSD type > system. The only solution is to > implement a custom hashtable that does > not implement the IDictionary > interface. So I think you need to create your own version of the Dictionary for this. Check this [other question](https://stackoverflow.com/questions/67959/c-xml-serialization-gotchas).
There is a solution at [Paul Welter's Weblog - XML Serializable Generic Dictionary](http://weblogs.asp.net/pwelter34/archive/2006/05/03/444961.aspx):

> For some reason, the generic Dictionary in .NET 2.0 is not XML serializable. The following code snippet is an XML serializable generic dictionary. The dictionary is serializable by implementing the IXmlSerializable interface.

```
using System;
using System.Collections.Generic;
using System.Text;
using System.Xml.Serialization;

[XmlRoot("dictionary")]
public class SerializableDictionary<TKey, TValue> : Dictionary<TKey, TValue>, IXmlSerializable
{
    public SerializableDictionary() { }
    public SerializableDictionary(IDictionary<TKey, TValue> dictionary) : base(dictionary) { }
    public SerializableDictionary(IDictionary<TKey, TValue> dictionary, IEqualityComparer<TKey> comparer) : base(dictionary, comparer) { }
    public SerializableDictionary(IEqualityComparer<TKey> comparer) : base(comparer) { }
    public SerializableDictionary(int capacity) : base(capacity) { }
    public SerializableDictionary(int capacity, IEqualityComparer<TKey> comparer) : base(capacity, comparer) { }

    #region IXmlSerializable Members

    public System.Xml.Schema.XmlSchema GetSchema()
    {
        return null;
    }

    public void ReadXml(System.Xml.XmlReader reader)
    {
        XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));
        XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue));

        bool wasEmpty = reader.IsEmptyElement;
        reader.Read();

        if (wasEmpty)
            return;

        while (reader.NodeType != System.Xml.XmlNodeType.EndElement)
        {
            reader.ReadStartElement("item");

            reader.ReadStartElement("key");
            TKey key = (TKey)keySerializer.Deserialize(reader);
            reader.ReadEndElement();

            reader.ReadStartElement("value");
            TValue value = (TValue)valueSerializer.Deserialize(reader);
            reader.ReadEndElement();

            this.Add(key, value);

            reader.ReadEndElement();
            reader.MoveToContent();
        }
        reader.ReadEndElement();
    }

    public void WriteXml(System.Xml.XmlWriter writer)
    {
        XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));
        XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue));

        foreach (TKey key in this.Keys)
        {
            writer.WriteStartElement("item");

            writer.WriteStartElement("key");
            keySerializer.Serialize(writer, key);
            writer.WriteEndElement();

            writer.WriteStartElement("value");
            TValue value = this[key];
            valueSerializer.Serialize(writer, value);
            writer.WriteEndElement();

            writer.WriteEndElement();
        }
    }

    #endregion
}
```
Serialize Class containing Dictionary member
[ "", "c#", ".net", "serialization", "dictionary", "" ]
I realise that this is a very basic question, but it is one which has always bothered me. As I understand things, if you declare a field private in Java then it is not visible outside of that class. If it is protected then it is available to inherited classes and anything in the same package (correct me if either of those definitions is incorrect). Does this mean it is not possible to declare a field that is accessible to only inherited classes and not other non-inherited classes in the same package? I appreciate that there are ways around this, but are there instances when you would want to have this sort of behaviour? Obviously the above question applies to methods as well as fields. Many thanks.
See: <http://java.sun.com/docs/books/tutorial/java/javaOO/accesscontrol.html>

Package access is broader than subclass access: you can never have a field visible only to subclasses but not to classes from the same package.
Basically:

* private: Accessible only by the class.
* public: Accessible by any class.
* protected: Accessible by the class, all inherited classes and the classes of the current package *(edited)*.
* no scope defined: Accessible by all classes of the current package.

More information [here](http://java.sun.com/docs/books/tutorial/java/javaOO/accesscontrol.html).
Access levels of java class members
[ "", "java", "encapsulation", "" ]
Is there a null coalescing operator in Javascript?

For example, in C#, I can do this:

```
String someString = null;
var whatIWant = someString ?? "Cookies!";
```

The best approximation I can figure out for Javascript is using the conditional operator:

```
var someString = null;
var whatIWant = someString ? someString : 'Cookies!';
```

Which is sorta icky IMHO. Can I do better?
**Update**

JavaScript now supports the [nullish coalescing operator (??)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Nullish_Coalescing_Operator). It returns its right-hand-side operand when its left-hand-side operand is `null` or `undefined`, and otherwise returns its left-hand-side operand.

**Old Answer**

Please check compatibility before using it.

---

The JavaScript equivalent of the C# null coalescing operator (`??`) is using a logical OR (`||`):

```
var whatIWant = someString || "Cookies!";
```

There are cases (clarified below) that the behaviour won't match that of C#, but this is the general, terse way of assigning default/alternative values in JavaScript.

---

## Clarification

Regardless of the type of the first operand, if casting it to a Boolean results in `false`, the assignment will use the second operand. Beware of all the cases below:

```
alert(Boolean(null)); // false
alert(Boolean(undefined)); // false
alert(Boolean(0)); // false
alert(Boolean("")); // false
alert(Boolean("false")); // true -- gotcha! :)
```

This means:

```
var whatIWant = null || new ShinyObject(); // is a new shiny object
var whatIWant = undefined || "well defined"; // is "well defined"
var whatIWant = 0 || 42; // is 42
var whatIWant = "" || "a million bucks"; // is "a million bucks"
var whatIWant = "false" || "no way"; // is "false"
```
```
function coalesce() {
    var len = arguments.length;
    for (var i = 0; i < len; i++) {
        if (arguments[i] !== null && arguments[i] !== undefined) {
            return arguments[i];
        }
    }
    return null;
}

var xyz = {};
xyz.val = coalesce(null, undefined, xyz.val, 5);
// xyz.val now contains 5
```

This solution works like the SQL coalesce function; it accepts any number of arguments, and returns null if none of them have a value. It behaves like the C# ?? operator in the sense that "", false, and 0 are considered NOT NULL and therefore count as actual values. If you come from a .net background, this will be the most natural feeling solution.
Is there a "null coalescing" operator in JavaScript?
[ "", "javascript", "operators", "null-coalescing-operator", "null-coalescing", "" ]
I am using fully qualified name of the enum inside a method in one of my class. But I am getting compiler warning which says *"warning C4482: nonstandard extension used: enum 'Foo' used in qualified name"*. In C++, do we need to use enums without the qualified name? But IMO, that looks ugly. Any thoughts?
Yes, enums don't create a new "namespace"; the values in the enum are directly available in the surrounding scope. So you get:

```
enum sample {
    SAMPLE_ONE = 1,
    SAMPLE_TWO = 2
};

int main()
{
    std::cout << "one = " << SAMPLE_ONE << std::endl;
    return 0;
}
```
To make it clean, replace:

```
enum Fruit {
    ORANGE = 0,
    BANANA = 1
};
```

with

```
namespace Fruit {
    enum { //no enum name needed
        ORANGE = 0,
        BANANA = 1
    };
};

...

int f = Fruit::BANANA; //No warning
```
Using enum inside types - Compiler warning C4482 C++
[ "", "c++", "enums", "compiler-warnings", "" ]
I asked about finding in subdirs with criteria. First answer was use FindFirstFileEx(). It seems the function is no good for this purpose or I'm using it wrong. So can someone explain how I would go about searching in a folder, and all it's subfolders for files that match (to give some sample criteria) *.doc;*.txt;\*.wri; and are newer than 2009-01-01? Please give a specific code example for those criteria so I know how to use it. If it isn't possible, is there an alternative for doing this not-at-all-obscure task??? I am becoming quite baffled that so far there aren't well known/obvious tools/ways to do this.
From [MSDN](http://msdn.microsoft.com/en-us/library/aa364419(VS.85).aspx), if you refer to the code fragment in that page:

```
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

void _tmain(int argc, TCHAR *argv[])
{
   WIN32_FIND_DATA FindFileData;
   HANDLE hFind;

   if( argc != 2 )
   {
      _tprintf(TEXT("Usage: %s [target_file]\n"), argv[0]);
      return;
   }

   _tprintf (TEXT("Target file is %s\n"), argv[1]);
   hFind = FindFirstFileEx(argv[1], FindExInfoStandard, &FindFileData,
                           FindExSearchNameMatch, NULL, 0);
   if (hFind == INVALID_HANDLE_VALUE)
   {
      printf ("FindFirstFileEx failed (%d)\n", GetLastError());
      return;
   }
   else
   {
      _tprintf (TEXT("The first file found is %s\n"), FindFileData.cFileName);
      FindClose(hFind);
   }
}
```

You'll see that you can call FindFirstFileEx, where argv[1] is a string (LPCSTR) pattern to look for, and &FindFileData is a data structure that contains the file info of the found data. hFind is the handle you use on subsequent calls with FindNextFile. I think you can also add more search parameters by using the fourth and sixth parameters to FindFirstFileEx.

Good luck!

EDIT: BTW, I think you can check a file or dir's attributes by using [GetFileAttributes()](http://msdn.microsoft.com/en-us/library/aa364944(VS.85).aspx). Just pass the filename found in FileFindData (the filename can refer to a file's name or a directory name, I think).

EDIT: MrVimes, here's what you could do (in pseudocode):

find the first file (match with \*)

* Check the file find data if it is "." or ".." (these are not really directories or files)
  + if check passed, check the file find data for the attributes you are looking for (i.e. check filename, file attributes, **even file creation time can be checked in the file find data**, and what not)
    - if check passed, do whatever you need to do with the file
  + if check failed, either call findnextfile or end, up to you

Something like that..
I think you use `FindFirstFile` to find **all** files and ignore the ones whose `WIN32_FIND_DATA` values don't match your search criteria.
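Whichever API does the directory walking, the filtering logic itself — match any of several wildcard patterns, keep only files newer than a cutoff — is language-agnostic. Below is a minimal sketch of that logic in Python (not the Win32 API the question asks about); the patterns and the 2009-01-01 cutoff come from the question, and the function names are illustrative:

```python
import fnmatch
import os
import time

PATTERNS = ["*.doc", "*.txt", "*.wri"]
CUTOFF = time.mktime((2009, 1, 1, 0, 0, 0, 0, 0, -1))  # 2009-01-01, local time

def wanted(name, mtime, patterns=PATTERNS, newer_than=CUTOFF):
    """True if the file name matches any pattern and was modified after the cutoff."""
    matches_pattern = any(fnmatch.fnmatch(name.lower(), p) for p in patterns)
    return matches_pattern and mtime > newer_than

def find_matching(root):
    """Recurse through root and collect paths that pass the 'wanted' check."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if wanted(name, os.path.getmtime(path)):
                hits.append(path)
    return hits
```

A Win32 implementation would perform the same two checks inside the FindFirstFileEx/FindNextFile loop, comparing the find data's last-write time against the cutoff.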
Example of using FindFirstFIleEx() with specific search criteria
[ "", "c++", "winapi", "filesystems", "search", "wildcard", "" ]
I am using ASP.NET 2.0 with C#. I want to redirect all requests for my web app with "www" to without "www":

www.example.com to example.com

or

example.com to www.example.com

Stackoverflow.com is already doing this. I know there is a premade mechanism for this in PHP (the .htaccess file), but how do I do it in ASP.NET?

Thanks
I've gone with the following solution in the past when I've not been able to modify IIS settings.

Either in an HttpModule (probably cleanest), or in global.asax.cs in Application\_BeginRequest, or in some BasePage-type event such as OnInit, I perform a check against the requested url, with a known string I wish to be using:

```
public class SeoUrls : IHttpModule
{
    #region IHttpModule Members

    public void Init(HttpApplication context)
    {
        context.PreRequestHandlerExecute += OnPreRequestHandlerExecute;
    }

    public void Dispose()
    {
    }

    #endregion

    private void OnPreRequestHandlerExecute(object sender, EventArgs e)
    {
        HttpContext ctx = ((HttpApplication) sender).Context;
        IHttpHandler handler = ctx.Handler;

        // Only worry about redirecting pages at this point -
        // static files might be coming from a different domain
        if (handler is Page)
        {
            if (ctx.Request.Url.Host != WebConfigurationManager.AppSettings["FullHost"])
            {
                UriBuilder uri = new UriBuilder(ctx.Request.Url);
                uri.Host = WebConfigurationManager.AppSettings["FullHost"];

                // Perform a permanent redirect - I've generally implemented this as an
                // extension method so I can use Response.PermanentRedirect(uri)
                // but expanded here for obviousness:
                ctx.Response.AddHeader("Location", uri.Uri.AbsoluteUri);
                ctx.Response.StatusCode = 301;
                ctx.Response.StatusDescription = "Moved Permanently";
                ctx.Response.End();
            }
        }
    }
}
```

Then register the class in your web.config:

```
<httpModules>
  [...]
  <add type="[Namespace.]SeoUrls, [AssemblyName], [Version=x.x.x.x, Culture=neutral, PublicKeyToken=933d439bb833333a]" name="SeoUrls"/>
</httpModules>
```

This method works quite well for us.
There's a Stackoverflow blog post about this: <https://blog.stackoverflow.com/2008/06/dropping-the-www-prefix/>

Quoting Jeff:

> Here's the IIS7 rule to remove the WWW prefix from all incoming URLs. Cut and paste this XML fragment into your web.config file under `<system.webServer> / <rewrite> / <rules>`:
>
> ```
> <rule name="Remove WWW prefix" >
>   <match url="(.*)" ignoreCase="true" />
>   <conditions>
>     <add input="{HTTP_HOST}" pattern="^www\.domain\.com" />
>   </conditions>
>   <action type="Redirect" url="http://domain.com/{R:1}" redirectType="Permanent" />
> </rule>
> ```
>
> Or, if you prefer to use the www prefix, you can do that too:
>
> ```
> <rule name="Add WWW prefix" >
>   <match url="(.*)" ignoreCase="true" />
>   <conditions>
>     <add input="{HTTP_HOST}" pattern="^domain\.com" />
>   </conditions>
>   <action type="Redirect" url="http://www.domain.com/{R:1}" redirectType="Permanent" />
> </rule>
> ```
How to redirect with "www" URL's to without "www" URL's or vice-versa?
[ "", "c#", "asp.net", "redirect", "canonical-name", "" ]
In Visual Studio .NET projects you can add a "Class Diagram" to the project which renders a visual representation of all namespaces, classes, methods, and properties. Is there any way to do this for Win32 (not .NET) C++ projects? Either through Visual Studio itself or with a 3rd party tool?
If you have a Visual Studio 2008 solution composed of multiple C++ projects, you can only generate one class diagram per project. For example, if you have one application project linking to 10 library projects, you'll have to generate 11 separate class diagrams.

There are two ways to work around this, neither of which is pleasant:

1. Cram all the source into a single project.
2. Create a class diagram for one project (the application, perhaps) and then drag files from all the other projects into the class diagram.

A more thorough exploration of the capabilities of the Visual Studio class designer is given in [Visual C++ Class Designer](http://www.developer.com/net/cplus/article.php/3702821).

Given the poor support for C++ class diagrams in Visual Studio, you're probably better off going with a commercial tool if you want anything more than a simple list of what classes you have. [WinTranslator](http://www.excelsoftware.com/wintranslator.html) from Excel Software might be worth looking at, and someone I work with uses [Source Insight](http://sourceinsight.com/).
Most UML tools should be able to do that. I know that Sparx Systems [Enterprise Architect](http://www.sparxsystems.com.au/products/mdg/int/vs/index.htm) does.
Creating a C++ Class Diagram
[ "", "c++", "class-diagram", "" ]
Using PHP and MySQL, I have a forum system I'm trying to build. What I want to know is, how can I set it so that when a user reads a forum entry, it shows as read JUST for that user, no matter what forum they are in, until someone else posts on it. Currently, for each thread, I have a table with a PostID, and has the UserID that posted it, the ThreadID to link it to, the actual Post (as Text), then the date/time it was posted. For the thread list in each forum, there is the threadID (Primary Key), the ThreadName, ForumID it belongs to, NumPosts, NumViews, LastPostDateTime, and CreateDateTime. Any help?
Maybe store in another table (UserID, ThreadID, LastReadDateTime) the time when the user read that thread.

```
if (LastPostDateTime > LastReadDateTime) you got an unread post.
```

Sadly you have a great overhead: on every read you'll have a write.
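The bookkeeping described above can be sketched in a few lines — this is an illustrative Python sketch (function names made up), where the dictionary stands in for the (UserID, ThreadID, LastReadDateTime) table:

```python
from datetime import datetime

def is_unread(last_post_time, last_read_time):
    """A thread is unread if the user never opened it, or posts arrived after the last visit."""
    if last_read_time is None:  # user has never opened this thread
        return True
    return last_post_time > last_read_time

def mark_read(read_times, user_id, thread_id, now=None):
    """Record (user, thread) -> last read time; this is the write that happens on every view."""
    read_times[(user_id, thread_id)] = now or datetime.now()
```

A new post simply moves the thread's LastPostDateTime forward, which makes `is_unread` true again for every user who read it earlier.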
The traditional solution is a [join table](http://en.wikipedia.org/wiki/Junction_table) something along the lines of:

```
CREATE TABLE topicviews (
    userid INTEGER NOT NULL,
    topicid INTEGER NOT NULL,
    lastread TIMESTAMP NOT NULL,
    PRIMARY KEY (userid, topicid),
    FOREIGN KEY (userid) REFERENCES users(id),
    FOREIGN KEY (topicid) REFERENCES topics(id)
);
```

with lastread updated every time a topic is read. When displaying the list of topics, if topics.lastupdated is > topicviews.lastread, there are new posts.

*The traditional solution is rubbish and will kill your database! Don't do it!*

The first problem is that a write on every topic view will soon bring the database server to its knees on a busy forum, especially on MyISAM tables which only have table-level locks. (Don't use MyISAM tables, use InnoDB for everything except fulltext search.)

You can improve this situation a bit by only bothering to write through the lastread time when there are actually new messages being read in the topic. If topic.lastupdated < topicviews.lastread you have nothing to gain by updating the value. Even so, on a heavily-used forum this can be a burden.

The second problem is a combinatorial explosion. One row per user per topic soon adds up: just a thousand users and a thousand topics and you have potentially a million topicview rows to store!

You can improve this situation a bit by limiting the number of topics remembered for each user. For example you could remove any topic from the views table when it gets older than a certain age, and just assume all old topics are 'read'. This generally needs a cleanup task to be done in the background.

Other, less intensive approaches include:

* only storing one lastread time per forum
* only storing one lastvisit time per user across the whole site, which would show as 'new' only things updated since the user's previous visit (session)
* not storing any lastread information at all, but including the last-update time in a topic's URL itself. If the user's browser has seen the topic recently, it will remember the URL and mark it as visited. You can then use CSS to style visited links as 'topics containing no new messages'.
Determining unread items in a forum
[ "", "php", "mysql", "forum", "" ]
Following this example, I can list all elements in a pdf file:

```
import pyPdf
pdf = pyPdf.PdfFileReader(open("pdffile.pdf"))
list(pdf.pages) # Process all the objects.
print pdf.resolvedObjects
```

Now, I need to extract a non-standard object from the pdf file. My object is the one named MYOBJECT and it is a string.

The piece printed by the python script that concerns me is:

```
{'/MYOBJECT': IndirectObject(584, 0)}
```

The pdf file is this:

```
558 0 obj
<</Contents 583 0 R/CropBox[0 0 595.22 842]/MediaBox[0 0 595.22 842]/Parent 29 0 R/Resources
<</ColorSpace <</CS0 563 0 R>>
/ExtGState <</GS0 568 0 R>>
/Font<</TT0 559 0 R/TT1 560 0 R/TT2 561 0 R/TT3 562 0 R>>
/ProcSet[/PDF/Text/ImageC]
/Properties<</MC0<</MYOBJECT 584 0 R>>/MC1<</SubKey 582 0 R>>
>>
/XObject<</Im0 578 0 R>>>>
/Rotate 0/StructParents 0/Type/Page>>
endobj
...
...
...
584 0 obj
<</Length 8>>stream
1_22_4_1   --->>>> this is the string I need to extract from the object
endstream
endobj
```

How can I follow the `584` value in order to refer to my string (under pyPdf of course)?
Each element in `pdf.pages` is a dictionary, so assuming it's on page 1, `pdf.pages[0]['/MYOBJECT']` should be the element you want. You can try to print that individually or poke at it with `help` and `dir` in a python prompt for more about how to get the string you want.

Edit: after receiving a copy of the pdf, I found the object at `pdf.resolvedObjects[0][558]['/Resources']['/Properties']['/MC0']['/MYOBJECT']` and the value can be retrieved via getData().

The following function gives a more generic way to solve this by recursively looking for the key in question:

```
import types
import pyPdf

pdf = pyPdf.PdfFileReader(open('file.pdf'))
pages = list(pdf.pages)

def findInDict(needle, haystack):
    for key in haystack.keys():
        try:
            value = haystack[key]
        except:
            continue
        if key == needle:
            return value
        if type(value) == types.DictType or isinstance(value, pyPdf.generic.DictionaryObject):
            x = findInDict(needle, value)
            if x is not None:
                return x

answer = findInDict('/MYOBJECT', pdf.resolvedObjects).getData()
```
An IndirectObject refers to an actual object (it's like a link or alias so that the total size of the PDF can be reduced when the same content appears in multiple places). The getObject method will give you the actual object.

If the object is a text object, then just doing a str() or unicode() on the object should get you the data inside of it.

Alternatively, pyPdf stores the objects in the resolvedObjects attribute. For example, a PDF that contains this object:

```
13 0 obj
<< /Type /Catalog
/Pages 3 0 R
>>
endobj
```

Can be read with this:

```
>>> import pyPdf
>>> pdf = pyPdf.PdfFileReader(open("pdffile.pdf"))
>>> pages = list(pdf.pages)
>>> pdf.resolvedObjects
{0: {2: {'/Parent': IndirectObject(3, 0), '/Contents': IndirectObject(4, 0), '/Type': '/Page', '/Resources': IndirectObject(6, 0), '/MediaBox': [0, 0, 595.2756, 841.8898]}, 3: {'/Kids': [IndirectObject(2, 0)], '/Count': 1, '/Type': '/Pages', '/MediaBox': [0, 0, 595.2756, 841.8898]}, 4: {'/Filter': '/FlateDecode'}, 5: 147, 6: {'/ColorSpace': {'/Cs1': IndirectObject(7, 0)}, '/ExtGState': {'/Gs2': IndirectObject(9, 0), '/Gs1': IndirectObject(10, 0)}, '/ProcSet': ['/PDF', '/Text'], '/Font': {'/F1.0': IndirectObject(8, 0)}}, 13: {'/Type': '/Catalog', '/Pages': IndirectObject(3, 0)}}}
>>> pdf.resolvedObjects[0][13]
{'/Type': '/Catalog', '/Pages': IndirectObject(3, 0)}
```
pyPdf for IndirectObject extraction
[ "", "python", "pdf", "stream", "pypdf", "" ]
I was going through a few queries I am maintaining, and a programmer had put "where 1=1" in the queries, which to me always seems to evaluate to true. Are there benefits to this?

**Duplicate:** [Why would someone use WHERE 1=1 AND in a SQL clause?](https://stackoverflow.com/questions/242822/why-would-someone-use-where-11-and-conditions-in-a-sql-clause)

That question isn't an answer to this question.

Where-clause:

```
select * from table
where 1=1
and sStatus not in ('status1','status2','status3')
```

No programming or if statements to push an and in there. A straight query. If you could un-close this, I would like to know whether there is a purpose, so that I may rewrite and remove the 1=1 if it is unnecessary.
Was it dynamic queries? Sometimes that's helpful when building dynamic queries based on parameters that are optional.
If you automatically want to add restrictions to your query, it makes your life easier:

```
string sql = "SELECT * FROM table WHERE 1=1";

if (someflag)
{
    sql += " AND valid = 1";
}

if (someotherflag)
{
    sql += " AND special = 1";
}

execute(sql);
```

Without `WHERE 1 = 1` you would in each case have to check if it's the first restriction you add (and then use `WHERE ...`) or if you already added some other restriction before (and then add `AND ...`).
Why would a sql query have "where 1 = 1"
[ "", "sql", "where-clause", "" ]
In our infrastructure, we have lots of little Java projects built by Maven2. Each project has its own pom.xml that ultimately inherits from our one company "master" parent pom.

We've recently started adding small profiles to our parent pom, disabled by default, that, when enabled, execute a single plugin in a conventional manner. Examples:

* The '***sources***' profile executes the *maven-source-plugin* to create the jar of project sources.
* The '***clover***' profile executes the *maven-clover2-plugin* to generate the Clover report. It also embeds our Clover license file so it need not be re-specified in child projects.
* The '***fitnesse***' profile executes the *fitnesse-maven-plugin* to run the fitnesse tests associated with the project. It contains the fitnesse server host and port and other information that need not be repeated.

This is being used to specify builds in our CI server like:

```
mvn test -P clover
mvn deploy site-deploy -P fitnesse,sources
```

and so on.

So far, this seems to provide a convenient composition of optional features. However, are there any dangers or pitfalls in continuing on with this approach (obvious or otherwise)? Could this type of functionality be better implemented or expressed in another way?
You seem slightly suspicious towards that approach, but you're not really sure why - after all, it is quite convenient. Anyway, that's what I feel about it: I don't really know why, but it seems somewhat odd.

Let's consider these two questions: a) what are profiles meant for? b) what are the alternative approaches we should compare your approach with?

Regarding a), I think profiles are meant for different build or execution environments. You may depend on locally installed software, where you would use a profile to define the path to the executable in the respective environments. Or you may have profiles for different runtime configurations, such as "development", "test", "production". More about this is found on <http://maven.apache.org/guides/mini/guide-building-for-different-environments.html> and <http://maven.apache.org/guides/introduction/introduction-to-profiles.html>.

As for b), ideas that come to my head:

1. Triggering the plug-ins with command line properties, such as `mvn -Dfitnesse=true deploy` - like the well known `-DdownloadSources=true` for the eclipse plugin, or `-Dmaven.test.skip=true` for surefire. But that would require the plugin to have a flag to trigger the execution. Not all the plug-ins you need might have that.
2. Calling the goals explicitly. You can call several goals on one command line, like `mvn clean package war:exploded`. When fitnesse is executed automatically (using the respective profile), it means its execution is bound to a lifecycle phase. That is, whenever that phase in the lifecycle is reached, the plugin is executed. Rather than binding plugin executions to lifecycle phases, you should be able to include the plugin, but only execute it when it is called explicitly. So your call would look like `mvn fitnesse:run source:jar deploy`.

The answer to question a) might explain the "oddness". It is just not what profiles are meant for. Therefore, I think alternative 2 could actually be a better approach.

Using profiles might become problematic when "real" profiles for different execution or build environments come into play. You would end up with a possibly confusing mixture of profiles, where profiles mean very different things (e.g. "test" would denote an environment while "fitnesse" would denote a goal). If you would just call the goals explicitly, I think that would be very clear and flexible. Remembering the plugin/goal names should not be more difficult than remembering the profile names.
The problem with this solution is that you may be creating a "pick and choose" model which is a bit un-mavenesque. In the case of the profiles you're describing you're sort of in-between; if each profile produces a decent result by itself you may be ok. The moment you start *requiring* specific combinations of profiles I think you're heading for troubles.

Individual developers will typically run into consistency issues because they forget which set of profiles should be used for a given scenario. Your mileage may vary, but we had real problems with this. Half your developers will forget the "correct" combinations after only a short time and end up wasting *hours* on a regular basis because they run the wrong combinations at the wrong time.

The practical problem you'll have with this is that AFAIK there's no way to have a set of "meta" profiles that activate a set of sub-profiles. If there had been a nice way to create an umbrella profile this'd be a really neat feature. Your "fitnesse" and "sources" profiles should really be private, activated by one or more meta-profiles. (You *can* activate a default set in settings.xml for each developer.)
Are lots of inheritied single-plugin profiles in Maven a good idea?
[ "", "java", "maven-2", "" ]
I have an application which may only have one instance of itself open at a time. To enforce this, I use this code:

```
System.Diagnostics.Process[] myProcesses = System.Diagnostics.Process.GetProcesses();
System.Diagnostics.Process me = System.Diagnostics.Process.GetCurrentProcess();

foreach (System.Diagnostics.Process p in myProcesses)
{
    if (p.ProcessName == me.ProcessName)
        if (p.Id != me.Id)
        {
            //if already running, abort this copy.
            return;
        }
}

//launch the application.
//...
```

It works fine. I would also like it to be able to focus the form of the already-running copy. That is, before returning, I want to bring the other instance of this application into the foreground.

How do I do that?

SetForegroundWindow works, to a point:

```
[System.Runtime.InteropServices.DllImport("user32.dll")]
public static extern bool SetForegroundWindow(IntPtr hWnd);

// ...

if (p.Id != me.Id)
{
    //if already running, focus it, and then abort this copy.
    SetForegroundWindow(p.MainWindowHandle);
    return;
}

// ...
```

This does bring the window to the foreground if it is not minimized. Awesome. If the window IS minimized, however, it remains minimized. It needs to un-minimize.

Solution via SwitchToThisWindow (works!):

```
[System.Runtime.InteropServices.DllImport("user32.dll")]
public static extern void SwitchToThisWindow(IntPtr hWnd, bool fAltTab);

[STAThread]
static void Main()
{
    System.Diagnostics.Process me = System.Diagnostics.Process.GetCurrentProcess();
    System.Diagnostics.Process[] myProcesses = System.Diagnostics.Process.GetProcessesByName(me.ProcessName);

    foreach (System.Diagnostics.Process p in myProcesses)
    {
        if (p.Id != me.Id)
        {
            SwitchToThisWindow(p.MainWindowHandle, true);
            return;
        }
    }

    //now go ahead and start our application ;-)
}
```
I had the same problem and [SwitchToThisWindow()](http://msdn.microsoft.com/en-us/library/ms633553.aspx) worked the best for me. The only limitation is that you must have XP sp1 installed. I played with SetForegroundWindow, ShowWindow, and they both had problems pulling the window into view.
C# equivalent of Tom Juergens's answer. Works like a charm for me.

```
private const int SW_SHOWNORMAL = 1;

[DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
private static extern bool ShowWindow(IntPtr hwnd, int nCmdShow);

[DllImport("user32.dll", SetLastError = true)]
private static extern bool SetForegroundWindow(IntPtr hwnd);

public void SetForeground()
{
    Process[] processes = Process.GetProcessesByName("process name");

    foreach (Process p in processes)
    {
        ShowWindow(p.MainWindowHandle, SW_SHOWNORMAL);
        SetForegroundWindow(p.MainWindowHandle);
    }
}
```
How do I focus a foreign window?
[ "", "c#", ".net", "focus", "" ]
While I understand this question is fairly vague, since I'm not giving you all as much detail as I'd like to, I'm hoping for some general improvements that can be made to my generation code or the reports themselves to speed them up. I've asked for more hardware, but have been denied.

```
public Stream GenerateReport(string reportName, string format)
{
    if (reportName == null)
        throw new ArgumentNullException("reportName");

    reportExecutionService.LoadReport(reportName, null);

    string extension;
    string encoding;
    string mimeType;
    ReportExecution2005.Warning[] warnings;
    string[] streamIDs;

    byte[] results = reportExecutionService.Render(format, null, out extension,
        out encoding, out mimeType, out warnings, out streamIDs);

    return new MemoryStream(results);
}
```

The reports themselves are taking 6-10 seconds each. I've narrowed down the bottleneck to Reporting Services itself. Where should I start looking to remove potential speed bottlenecks?

Note: some code has been removed to protect the innocent.
Although not directly related to the code you posted, here are a couple of generic enhancements you should always consider when writing reports in Reporting Services:

1. Pre-load report tables so that they already aggregate any data that would have been aggregated in the report. For instance, if the report data source summarizes thousands of rows of data and requires joining multiple tables together, then you should create a pre-aggregated table that joins all the data together and already summarizes the data at the required grain for the report.
2. If you are passing parameters into the data source, then the aggregated underlying table should have a clustered index that corresponds with how the table will be searched. For instance, if the report only displays data for an individual customer and for a given transaction date range, then the clustered index should be ordered on the customer and transaction date.
3. Filtering data should occur in the data source query and not in the report itself. Meaning, if you parameterize your report so that it filters data, then the parameters should be passed to the database so that it returns a smaller set of data. Do not return a large set of data and then filter the data. It is easy to make this mistake when using a multi-valued parameter, since the out-of-box instructions for using multi-value parameters are to filter the data AFTER the data has been returned to Reporting Services.

I hope you are already doing the above and that this is not a relevant post. :)
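Point 3 above boils down to pushing the report parameters into the query instead of filtering after the fact. A hypothetical sketch of building such a parameterized data-source query (table and column names are made up, Python used only for illustration):

```python
def build_report_query(customer_id, start_date, end_date):
    """Build a parameterized query so the database, not the report, filters rows.

    'SalesSummary' stands in for the pre-aggregated table from point 1;
    all names here are illustrative, not from the question.
    """
    sql = ("SELECT CustomerId, TransactionDate, SUM(Amount) AS Total "
           "FROM SalesSummary "
           "WHERE CustomerId = ? AND TransactionDate BETWEEN ? AND ? "
           "GROUP BY CustomerId, TransactionDate")
    return sql, (customer_id, start_date, end_date)
```

The key property is that the WHERE clause travels to the server with the parameters, so only the small filtered result set comes back to Reporting Services.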
If you've narrowed it down to Reporting Services solely based on your client code, I would review the queries / SPs that retrieve your data. I've encountered some pretty nasty queries in my day that looked fairly innocent.
What are the generic ways to make Reporting Services faster
[ "", "c#", "optimization", "reporting-services", "" ]
To make a long story short I have to use processing in a class and I'm completely blind. The ide is completely inaccessible so I use the JavaScript version of processing to create a page and have a sited person describe the output to me. Unfortunately if the code has errors the page appears blank and there is no explanation of what’s wrong. Given the following code is there a way for the JavaScript version of processing to tell me that I have a syntax error since I forgot ); ``` void setup() { size(500, 500); } void draw() { line(250, 200, 250, 40 } ```
Well, for debugging (when the code compiles) you can use the println() command.

I've been using Processing for 2 months now and I have the same problem and haven't been able to find a solution for it. Here on SO, from what I've seen, almost no one uses Processing or knows how to use it. You can try the processing forums; you'll normally get an answer in a day or two. You can find them here: <http://processing.org/discourse/>

Sorry for not being able to help. The only other solution I see is if you use their PDE (Processing development environment) and compile it there.

Edit: From previous experience asking processing questions, you won't even have many people looking into your question :(
A js editor with syntax highlighting (such as Aptana) will show you errors such as you have there. I don't know whether you'd find that any more useful than the Processing ide itself, though.
printing out error messages in the javascript version of processing
[ "", "javascript", "processing", "processing.js", "" ]
I have an interface called ICommand from which various classes inherit. These classes all do various different things with the Run() method, but they all need to have a Name property and a Description property that specifies what they do. The question is, how should I let each subclass specify those properties (Name and Description). At the moment I have put a property for both in ICommand and I get each subclass to implement those properties and just have a `return "Blah Blah Blah"` statement in them. Is this the best way to do it? Or should it be done another way? Sorry if this seems like a silly question - I'm just starting with this OOP design stuff and I want to check I'm doing it right.
I think defining a read-only property in the interface is the best way.
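For illustration, the same idea — each implementation supplies its name and description through accessors declared on the interface — can be sketched in Java (all class and method names here are hypothetical):

```java
// Each command exposes read-only metadata plus its behaviour.
interface Command {
    String getName();
    String getDescription();
    void run();
}

class GreetCommand implements Command {
    public String getName() { return "greet"; }
    public String getDescription() { return "Prints a greeting."; }
    public void run() { System.out.println("Hello!"); }
}

public class CommandDemo {
    // Callers can describe any command without knowing its concrete type.
    public static String describe(Command c) {
        return c.getName() + ": " + c.getDescription();
    }

    public static void main(String[] args) {
        System.out.println(describe(new GreetCommand()));
    }
}
```

Since the interface only declares getters, implementations stay free to return constants (as here) or compute the values.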
I have written something similar (plugins) where each class has a different name and description, and I ended up implementing it as Sebastian says, with a read-only property:

```
interface ICommand
{
    string Name { get; }
    string Description { get; }
    // ...
}

class Specific : ICommand
{
    public string Name { get { return "Specific"; } }
    public string Description { get { return "Specific description"; } }
    // ...
}
```
How to let subclass specify details such as name and description
[ "", "c#", "oop", "" ]
I have a linq2sql setup where objects are sent from the client side (Flex via FluorineFx) and attached to a new DataContext as seen below. I also have a "global" DataContext that is used throughout the session.

```
public static void Update(Enquiry enquiry) { OffertaDataContext db = new OffertaDataContext(); db.Enquiries.Attach(enquiry); db.Refresh(RefreshMode.KeepCurrentValues, enquiry); db.SubmitChanges(); } ```

This approach usually works fine, but after a while I get the error "Cannot add an entity with a key that is already in use".
I think this error happens if you `Attach` an *entity* to a `DataContext` that has already loaded it. Is the code that causes the error exactly like you show here? After creating the new `OffertaDataContext`, do you query anything before the `Attach`?
I was getting this error and it was because I had forgotten to set the Primary Key field in the database to "Identity Specification" (auto-increment). When I changed this I was good. Doh!
linq2sql: Cannot add an entity with a key that is already in use
[ "", "c#", "linq-to-sql", "" ]
[This is on an iSeries/DB2 database if that makes any difference] I want to write a procedure to identify columns that are left as blank or zero (given a list of tables). Assuming I can pull out table and column definitions from the central system tables, how should I check the above condition? My first guess is for each column generate a statement dynamically such as: ``` select count(*) from my_table where my_column != 0 ``` and to check if this returns zero rows, but is there a better/faster/standard way to do this? NB This just needs to handle simple character, integer/decimal fields, nothing fancy!
To check for columns that contain only NULLs on DB2:

1. Execute RUNSTATS on your database (<http://www.ibm.com/developerworks/data/library/techarticle/dm-0412pay/>)
2. Check the database statistics by querying SYSSTAT.TABLES and SYSSTAT.COLUMNS. Comparing SYSSTAT.TABLES.CARD and SYSSTAT.COLUMNS.NUMNULLS will tell you what you need. An example could be:

```
select t.tabschema, t.tabname, c.colname from sysstat.tables t, sysstat.columns c where ((t.tabschema = 'MYSCHEMA1' and t.tabname='MYTABLE1') or (t.tabschema = 'MYSCHEMA2' and t.tabname='MYTABLE2') or (...)) and t.tabschema = c.tabschema and t.tabname = c.tabname and t.card = c.numnulls ```

More on system stats e.g. here: <http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0001070.htm> and <http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0001073.htm> Similarly, you can use SYSSTAT.COLUMNS.AVGCOLLEN to check for empty columns (it just doesn't seem to work for LOBs). EDIT: And, to check for columns that contain only zeros, try comparing HIGH2KEY and LOW2KEY in SYSSTAT.COLUMNS.
Yes, typically, I would do something like this in SQL Server:

```
SELECT REPLACE(REPLACE(REPLACE( ' SELECT COUNT(*) AS [COUNT NON-EMPTY IN {TABLE_NAME}.{COLUMN_NAME}] FROM [{TABLE_SCHEMA}].[{TABLE_NAME}] WHERE [{COLUMN_NAME}] IS NOT NULL OR [{COLUMN_NAME}] <> 0 ' , '{TABLE_SCHEMA}', c.TABLE_SCHEMA) , '{TABLE_NAME}', c.TABLE_NAME) , '{COLUMN_NAME}', c.COLUMN_NAME) AS [SQL] FROM INFORMATION_SCHEMA.COLUMNS c INNER JOIN INFORMATION_SCHEMA.TABLES t ON t.TABLE_TYPE = 'BASE TABLE' AND c.TABLE_CATALOG = t.TABLE_CATALOG AND c.TABLE_SCHEMA = t.TABLE_SCHEMA AND c.TABLE_NAME = t.TABLE_NAME AND c.DATA_TYPE = 'int' ```

You can get a lot fancier by doing UNIONs of the entire query and checking IS_NULLABLE on each column, and obviously you might have different requirements for different data types, skipping identity columns, etc.
How can I identify unused/redundant columns given a list of tables?
[ "", "sql", "database", "" ]
I am writing a piece of code that to work would require an extensive amount of if/then statements. In order to eliminate the need for writing line upon line of if/then statements can I use a dictionary or a list? If so can someone direct me to a good web resource or show an instance where they have done it before? Edit Clarification: I have six inputs, each are to be combo boxes with a group of selections. Below is a detail of the inputs and the selections. (Amps) 1:1 - 1:12 (12 different selections) (Cable Size) 2:1 - 2:13 (13 different selections) Certain items in this list will be excluded by the selection of the first input. (Cable Type) 3:1 - 3:2 (2 different selections) (Temp Rating) 4:1 - 4:3 (3 different selections) (System Type) 5:1 - 5:2 (2 different selections) (Conduit Type) 6:1 - 6:2 (2 different selections) From the above input will come two outputs which will appear in two text boxes. (Cable Qty) 7:1 - 7:16 (16 different outputs) (Conduit Size) 8:1 - 8:8 (8 different outputs) I hope this serves to help and not hinder.
It looks like you're trying to map each combination of the 6 inputs (12 * 13 * 2 * 3 * 2 * 2 possibilities) to one of the (16 * 8) outputs. If that's the case, you'll still have a lot of typing to do - but moving to a collection will allow you to easily externalize the mapping. I would guess that this would probably be best suited for a database table:

```
Amps | CableSize | CableType | TempRating | SystemType | ConduitType | CableQty | ConduitSize ```

You'd put a primary key on the 6 input columns, and then just do a simple SELECT:

```
SELECT CableQty, ConduitSize FROM Table WHERE Amps = @amps AND CableSize = @cableSize...etc ```

To do this in quick and dirty code, arrays would work:

```
const int AMPS = 0; const int CABLE_SIZE = 1; const int TEMP_RATING = 2; // etc. var mappings = new Dictionary<int[], int[]>(12 * 13 * 2 * 3 * 2 * 2); mappings.Add( new int[] { 1, 1, 1, 1, 1, 1 }, // inputs new int[] { 1, 2 } //outputs ); // repeat...a lot // int[] keys compare by reference, so scan with First rather than indexing var outputs = mappings.First(kvp => kvp.Key[AMPS] == myAmps && kvp.Key[CABLE_SIZE] == myCableSize && kvp.Key[TEMP_RATING] == myTempRating // etc. ).Value; ```

It doesn't save you much typing - though you could use for loops and the like to populate the mappings if there's some sort of logic to it - but it's a hell of a lot more readable than 6 pages of if statements (I'd probably region off or partial-class the code loading the mappings).
You might want to give some idea of what you're doing with the if/then statements. If you're just obtaining a value from a key then, yes, a dictionary would probably work.

```
Dictionary<string,string> map = new Dictionary<string,string>(); ... populate the map with keys... ```

Then use it...

```
string value = "default value"; if (map.ContainsKey(key)) { value = map[key]; } ```
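For illustration, the same key-to-value lookup that replaces a chain of if/then branches can be sketched in Java (the keys and values here are hypothetical placeholders):

```java
import java.util.HashMap;
import java.util.Map;

public class LookupDemo {
    static final Map<String, String> MAP = new HashMap<>();
    static {
        // Each entry replaces one "if (key.equals(...))" branch.
        MAP.put("red", "#ff0000");
        MAP.put("green", "#00ff00");
    }

    // Replaces: if key is "red" ... else if key is "green" ... else default
    public static String lookup(String key) {
        return MAP.getOrDefault(key, "default value");
    }

    public static void main(String[] args) {
        System.out.println(lookup("red"));   // #ff0000
        System.out.println(lookup("blue"));  // default value
    }
}
```

Adding a new case then means adding one map entry instead of another branch.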
No more "If / Then"; Dictionary Use?
[ "", "c#", "dictionary", "" ]
I have a template class where I want to use objects of that class (along with the parameterized type) inside a map. So far this is the solution that I've been able to arrive at: ``` class IStatMsg; template <typename T> class ITier { public: // Methods ITier(TierType oType) : o_Type(oType){}; virtual ~ITier(){}; typename ITier<T> ParamITier; // line 60 ITier* Get(T oKey) { std::map<T, ParamITier*>::iterator it = map_Tiers.find(oKey); // line 64 if (it != map_Tiers.end()) return it->second; return NULL; } void Set(T oKey, ITier* pTier) { map_Tiers.insert(pair<T, ParamITier*>(oKey, pTier)); // line 74 } TierType GetType() { return o_Type; } protected: // Methods // Attributes std::map<T, ParamITier*> map_Tiers; // line 83 TierType o_Type; private: // Methods // Attributes }; ``` But when I try to compile this code I get a long list of errors: > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:60: > error: expected nested-name-specifier > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:60: > error: `ITier<T>' specified as > declarator-id > /home/gayanm/street/src/QueryServer_NEW/ITier.h:60: > error: perhaps you want`ITier' > for a constructor > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:60: > error: two or more data types in > declaration of `ITier<T>' > /home/gayanm/street/src/QueryServer_NEW/ITier.h:60: > error: expected`;' before > "ParamITier" > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:83: > error: `ParamITier' was not declared > in this scope > /home/gayanm/street/src/QueryServer_NEW/ITier.h:83: > error: template argument 2 is invalid > /home/gayanm/street/src/QueryServer_NEW/ITier.h:83: > error: template argument 4 is invalid > /home/gayanm/street/src/QueryServer_NEW/ITier.h:83: > error: ISO C++ forbids declaration of`map\_Tiers' with no type > /home/gayanm/street/src/QueryServer\_NEW/ITier.h: > In member function `ITier<T>* > ITier<T>::Get(T)': > /home/gayanm/street/src/QueryServer_NEW/ITier.h:64: > error:`ParamITier' undeclared (first > use this 
function) > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:64: > error: (Each undeclared identifier is > reported only once for each function > it appears in.) > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:64: > error: template argument 2 is invalid > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:64: > error: template argument 4 is invalid > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:64: > error: expected `;' before '::' token > /home/gayanm/street/src/QueryServer_NEW/ITier.h:66: > error:`it' undeclared (first use this > function) > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:66: > error: request for member `end' in`((ITier*)this)->ITier::map\_Tiers', > which is of non-class type `int' > /home/gayanm/street/src/QueryServer_NEW/ITier.h: > In member function`void > ITier::Set(T, ITier*)': > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:74: > error: request for member `insert' in`((ITier\*)this)->ITier::map\_Tiers', > which is of non-class type `int' > /home/gayanm/street/src/QueryServer_NEW/ITier.h:74: > error:`pair' undeclared (first use > this function) > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:74: > error: expected primary-expression > before ',' token > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:74: > error: `ParamITier' undeclared (first > use this function) > /home/gayanm/street/src/QueryServer_NEW/ITier.h:74: > error: expected primary-expression > before '>' token > /home/gayanm/street/src/QueryServer_NEW/ITier.h: > At global scope: > /home/gayanm/street/src/QueryServer_NEW/ITier.h:93: > error: base`ITier' with > only non-default constructor in class > without a constructor > /home/gayanm/street/src/QueryServer\_NEW/ITier.h:109: > error: expected class-name before '{' > token Could you please point out how to fix these? Thank You.
Line 60 does not access a dependent name. What you use is `ITier<T>`, which the compiler already knows is a template given an argument. Instead of `typename` you want `typedef` ;) Line 64 *does* access the dependent name `iterator`, which is a type name, so you have to put `typename` before `std::map`. I put the two disambiguations, *template* and *typename*, in this answer: [Disambiguations of dependent names](https://stackoverflow.com/questions/389797/gcc-template-issue#390201). Line 74 would be right if you fix the bug in line 60, as far as I can see. Line 83 is alright in itself, as far as I can see.
Also, I would recommend that you pass `const T&` to the functions instead of `T`, since you cannot be sure (it's a template parameter!) that it'll be a "cheap" copy.
Template class with "typename"
[ "", "c++", "" ]
``` public static List<Vertex<Integer>> petersenGraph() { List<Vertex<Integer>> v = new ArrayList<Vertex<Integer>>(); for (int i = 0; i < 10; i++) { v.add(new Vertex<Integer>(i)); } int[][] edges = {{0,1}, {1,0}, {1,2}, {2,1}, {2,3}, {3,2}, {3,4}, {4,3}, {4,0}, {0,4}, {5,6}, {6,5}, {6,7}, {7,6}, {7,8}, {8,7}, {8,9}, {9,8}, {9,5}, {5,9}, {5,0}, {0,5}, {6,2}, {2,6}, {7,4}, {4,7}, {8,1}, {1,8}, {9,3}, {3,9}}; for (int[] e : edges) v.get(e[0]).successors().add(v.get(e[1])); return v; } ``` I understand everything up to the point where there's the for which iterates over the edges. What is exactly is going on there? edit: why are they accessed using `e[0]` and `e[1]`? is `e[0]` the first number and `e[1]` the second?
Argh, that's ugly. `edges` is a two-dimensional array, i.e. an array of int arrays. In this particular definition, it is an array of pairs. The line `for (int[] e : edges)` simply means that in each iteration `e` becomes a different array of ints, so in each iteration it is a different pair. Then `e[0]` represents the first item in the pair and `e[1]` represents the other. So the first coordinate is used to look up a vertex, and the vertex looked up by the second coordinate is added to its successors.
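For illustration, here is a runnable Java sketch of the same iteration pattern over a smaller, hypothetical edge list — each `e` is one `{from, to}` pair:

```java
import java.util.Arrays;

public class EdgeDemo {
    // Count how many edges leave each vertex.
    public static int[] outDegrees(int[][] edges, int vertexCount) {
        int[] degree = new int[vertexCount];
        for (int[] e : edges) {   // e is one {from, to} pair per iteration
            degree[e[0]]++;       // e[0] = source vertex, e[1] = target vertex
        }
        return degree;
    }

    public static void main(String[] args) {
        // a 3-cycle, each edge listed in both directions (as in the question)
        int[][] edges = {{0,1},{1,0},{1,2},{2,1},{2,0},{0,2}};
        System.out.println(Arrays.toString(outDegrees(edges, 3))); // [2, 2, 2]
    }
}
```

The question's loop does the same walk, except that instead of counting it calls `v.get(e[0]).successors().add(v.get(e[1]))` for each pair.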
The wikipedia page on the graph it's creating is <http://en.wikipedia.org/wiki/Petersen_graph>. From the look of it, edges in the graph are represented by the Vertex.successors collection, and the **edges** array is used to construct the graph, using the first index as the *from* node and the second index as the *to* node for each edge. This would also explain why each pair is followed by its opposite, eg {0,1}, {1,0}, as the Peterson Graph is undirected, so connections between nodes must be represented in both directions.
Iterating over bidimensional Java arrays
[ "", "java", "arrays", "iterator", "" ]
I am developing a framework, and some of the objects have reaaally long names. I don't really like this, but I don't like acronyms either. I am trying to come up with a shorter name for "EventModelSocket", basically a wrapper around the .Net socket class that implements various events, and methods to send files, objects, etc. Some of the objects have really long names due to this, such as "EventModelSocketObjectReceivedEventArgs" for example. I've tried everything from a thesaurus, to a dictionary to sitting here for hours thinking. When you come upon situations like this, what is the best way to name something?
Push some of it into the namespace. For example: ``` EventModelSocketObjectReceivedEventArgs ``` becomes ``` EventModel.Sockets.ReceivedEventArgs ```
Well, are the long names hurting something?

(edit) Two other thoughts:

* use `var` in C# 3.0 - that'll save half the width
* if you are using the type multiple times in a file, consider a type alias if it is annoying you: `using Fred = Namespace.VeryLongNameThatIsBeingAnnoying;`
Shorter naming convention for types
[ "", "c#", "naming-conventions", "" ]
I am using jQuery in my web application. I want to use arrays, but I am not able to find out functions for arrays (add, remove or append elements in array) in jQuery. Is there any link related to jQuery array functions which will explain the jQuery array functions?
Have a look at <https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Global_Objects/Array> for documentation on JavaScript Arrays. jQuery is a library which adds some magic to JavaScript which is a capable and featurefull scripting language. The libraries just fill in the gaps - get to know the core!
jQuery has very limited array functions since JavaScript has most of them itself. But here are the ones they have: *[Utilities - jQuery API](http://docs.jquery.com/Utilities)*.
Array functions in jQuery
[ "", "javascript", "jquery", "arrays", "function", "" ]
I have a request for a contracting gig and one of the requirements in the first draft of the specs says the software (a GUI application for end-users) should run on Win 2000 and Mac OS 7.5. I have no idea why they would want to support such ancient systems, but I guess it leaves me with Java as the only option other than raw C, or doesn't it? So if it would be Java, are there restrictions on what Java version I can use on those targets? Also, though it wouldn't be strictly on topic, I'd appreciate comments on strategies for making software run on both targets. Actually, supporing those ancient systems as well as modern ones might even be harder than supporting Mac and Win, right? As another sideline, I'd also appreciate facts that could be used to talk the client out of this and make him go with OS X and XP. Like "hey, only 2% of all Macs in use today still use OS older than X". --- **Edit**: My main purpose here is to be well prepared technically to negotiate what the specs really should be. Things like that are often the result of some manager thinking "gee, my aunt still uses OS 9 and I bet, there's people even more old-fashioned, so let's just play it safe and write down 7.5". There's no technical judgement whatsoever involved, and that's OK. It's just that, in those cases, you have to explain carefully what tradeoffs there are and if you succeed, it usually gets you much more realistic specs. It's not even unlikely that they'd ditch Mac OS altogether if they have to bet money on it. With that kind of specs, if you don't actively help the client reshape them, what's going to happen is, you put the number in the offer that would pay for all the crazy stuff and then yet some, and less experienced competitors won't see all the implications and put a lower number in their offer, get the gig and it all ends in tears for everybody. You can go "heh heh, told you so", but don't get the cash, either. 
--- **Edit**: Thanks for still posting facts and advice although I already accepted an answer to my original question. I'll keep upvoting that stuff, and it certainly helps. Also thanks for empathizing with me and trying to save me from signing a bad contract! But don't worry, I'm not *actually going to code for Mac OS 7.5* ... ;-) Really, really overseeing all the implications would be well out of my depth anyway.
Java support only goes up to 1.1.8 for all of classic Mac OS (not X).
Unless this is a seriously lucrative contract, or you desperately need the money, I'd recommend running away from it as fast as possible. The chances are the client is not only targeting seriously old OS's, but also old hardware. That'll mean you'll have all sorts of problems with performance (for you can bet the entire value of the contract that they want an app with modern features and performance on this ancient kit). It's near guaranteed to end in tears...
What JVM can be run under Mac OS 7.5?
[ "", "java", "windows", "user-interface", "macos", "cross-platform", "" ]
Wondering if there's any not-too-hard way to edit non-form text in html 4. I know there's a contentEditable property that can be set in html 5, but I want better browser support than that will give me. It seems like gmail is doing something like this in their chat status indicator. This works properly on IE6 (which I don't believe supports html 5). Has anyone seen this done? thanks, -Morgan
HTML5's contentEditable is modelled after the existing property from IE, which is also supported by (at least) Firefox and Safari, so you don't have to wait for HTML5 to be ‘supported’. It's an incredibly ugly piece of design, but it works.
Have you considered using CSS to make a `<textarea>` look like something else? Just because something is a form control doesn't mean it has to look like one. You can change the background, shadows, borders, etc.
editable div content in html 4
[ "", "javascript", "html", "css", "" ]
I have code similar to this in my application: ``` class A { public: int b; } class C { public: int d; } void DoThings (void *arg1, MYSTERYTYPE arg2); A obj_a; C obj_c; DoThings(&obj_a, &A::b); DoThings(&obj_c, &C::d); ``` The question is - What should MYSTERYTYPE be? neither void\* nor int work, despite the value &A::b being printed just fine if you output it through a printf. Clarifications: Yes, &A::b is defined under C++. Yes, I am trying to get the offset to a class member. Yes, I am being tricky. Edit: Oh I can use offsetof(). Thanks anyway.
You have data member pointers into two unrelated classes, so you can't find a common type that can hold both pointers. A single parameter type only works if it is a pointer to a data member of the derived class, because the derived class is guaranteed to contain the member if a base contains it:

```
struct a { int c; };
struct b : a { };

int main() {
    int b::*d = &a::c;
}
```

---

**Update**: I think I should write why the above converts from `a::*` to `b::*` implicitly. After all, we usually convert `b*` to `a*`! Consider:

```
struct a { };
struct b : a { int c; };
struct e : a { };

int main() {
    int a::*d = &b::c;
    e e_;
    (e_.*d) = 10; /* oops! */
}
```

If the above were valid, you could really screw things up. The above is *not* valid, because the conversion from `b::*` to `a::*` is not implicit. As you see, we would have assigned a pointer to `b::c`, and then we could dereference it through a class that doesn't contain that member at all (`e`). The compiler enforces this order:

```
int main() {
    int b::*d = &b::c;
    e e_;
    (e_.*d) = 10; /* bug! */
}
```

This *fails* to compile, because `e` is not derived from `b`, the class the member pointer belongs to. Good!

The following, however, is perfectly valid and compiles, of course (classes `a` and `b` changed):

```
struct a { int c; };
struct b : a { };
struct e : a { };

int main() {
    int e::*d = &a::c;
    e e_;
    (e_.*d) = 10; /* works! */
}
```

---

To make it work for your case, you have to make your function a template:

```
template<typename Class>
void DoThings (int Class::*arg) {
    /* do something with arg... */
}
```

Now the compiler will deduce the right class that the given member pointer belongs to. You will have to pass the instance alongside the member pointer to actually make use of it:

```
template<typename Class>
void DoThings (Class & t, int Class::*arg) {
    /* do something with arg... */
    (t.*arg) = 10;
}
```

If you just want to set some member you already know at the time you write DoThings, the following suffices:

```
template<typename Class>
void DoThings (Class & t) {
    t.c = 10;
}
```
Are you simply trying to call a function with the address of an integer that happens to live inside an `A` or a `C` object? In that case, Jeff McGlynn's answer is the way to go. Otherwise, if you really are trying to do something tricky requiring C++'s weird pointer-to-member facility (and you almost certainly aren't): Since classes `A` and `C` are unrelated, you will need a template function to handle both: ``` template <typename T> void DoThings(int T::*x); ``` If `C` was actually derived from `A`, the following would work: ``` void DoThings(int A::*x); ```
How to cast member variable pointer to generic type in C++
[ "", "c++", "class", "pointers", "member-pointers", "" ]
I'm having a hard time figuring out how to architect the final piece of my system. Currently I'm running a Tomcat server that has a servlet that responds to client requests. Each request in turn adds a processing message to an asynchronous queue (I'll probably be using JMS via Spring or more likely Amazon SQS). The sequence of events is this: Sending side: 1. Take a client request 2. Add some data into a DB related to this request with a unique ID 3. Add a message object representing this request to the message queue Receiving side: 1. Pull a new message object from the queue 2. Unwrap the object and grab some information from a web site based on information contained in the msg object. 3. Send an email alert 4. update my DB row (same unique ID) with the information that operation was completed for this request. I'm having a hard figuring out how to properly deal with the receiving side. On one hand I can probably create a simple java program that I kick off from the command line that picks each item in the queue and processes it. Is that safe? Does it make more sense to have that program running as another thread inside the Tomcat container? I will not want to do this serially, meaning the receiving end should be able to process several objects at a time -- using multiple threads. I want this to be always running, 24 hours a day. What are some options for building the receiving side?
"On one hand I can probably create a simple java program that I kick off from the command line that picks each item in the queue and processes it. Is that safe?" What's unsafe about it? It works great. "Does it make more sense to have that program running as another thread inside the Tomcat container?" Only if Tomcat has a lot of free time to handle background processing. Often, this **is** the case -- you have free time to do this kind of processing. However, threads aren't optimal. Threads share common I/O resources, and your background thread may slow down the front-end. Better is to have a JMS queue between the "port 80" front-end, and a separate backend process. The back-end process starts, connects to the queue, fetches and executes the requests. The backend process can (if necessary) be multi-threaded.
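A minimal sketch of that multi-threaded back-end consumer, using a plain `BlockingQueue` as a local stand-in for the JMS/SQS queue (a real daemon would block on the broker with `take()` or a JMS listener instead of draining and exiting; all names here are hypothetical):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueWorker {
    // Drain whatever is currently on the queue with a small worker pool,
    // then shut down. Returns the number of messages handled.
    public static int processAll(BlockingQueue<String> queue, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        AtomicInteger handled = new AtomicInteger();
        String msg;
        while ((msg = queue.poll()) != null) {
            final String m = msg;
            pool.execute(() -> {
                // ...grab the web data, send the email alert,
                // update the DB row identified by m...
                handled.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return handled.get();
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        for (int i = 0; i < 5; i++) q.add("request-" + i);
        System.out.println(processAll(q, 3)); // 5
    }
}
```

The fixed-size pool is what makes the back-end process several messages at a time without letting it grab unbounded resources.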
If you are using JMS, why are you placing the tasks into a DB? You can use a durable Queue in JMS. This would keep tasks, even if the JMS broker dies, until they have been acknowledged. You can have redundant brokers so that if one broker dies, the second automatically takes over. This could be more reliable than using a single DB.
Whats the best way to process an asynchronous queue continuously in Java?
[ "", "java", "asynchronous", "queue", "jms", "" ]
I have a javascript function (function1) that checks some global variables (can the user connect, is the service available, etc) before calling another (legacy) function (function2) that actually pops a window and connects the user to our service. I want to prevent function2 from being called anywhere but from function1. Is this possible? As a cheap solution, I figured I could emit a variable from function1 and check for it in function2 before executing. Is there another way? Is there a way to find out the calling element/method in a javascript function?
You should check this: * [How do you find out the caller function in JavaScript?](https://stackoverflow.com/questions/280389/javascript-how-do-you-find-the-caller-function) and this: * <https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Function/caller>
Read here: [Crockford's method](http://www.crockford.com/javascript/private.html). Declare the function inside the first function.
How can I prevent a function from being called outside another function in Javascript?
[ "", "javascript", "" ]
I have a set of strings with numbers embedded in them. They look something like /cal/long/3/4/145:999 or /pa/metrics/CosmicRay/24:4:bgp:EnergyKurtosis. I'd like to have an expression parser that is * Easy to use. Given a few examples someone should be able to form a new expression. I want end users to be able to form new expressions to query this set of strings. Some of the potential users are software engineers, others are testers and some are scientists. * Allows for constraints on numbers. Something like '/cal/long/3/4/143:#>100&<1110' to specify that a string prefix with '/cal/long/3/4/143:' and then a number between (100,1110) is expected. * Supports '|' and *. So the expression '/cal/(long|short)/3/4/*' would match '/cal/long/3/4/1:2' as well as '/cal/short/3/4/1:2'. * Has a Java implementation available or would be easy to implement in Java. Interesting alternative ideas would be useful. I'm also entertaining the idea of just implementing the subset of regular expressions that I need plus the numerical constraints. Thanks!
I'm inclined to agree with Rex M, although your second requirement for numerical constraints complicates things. Unless you only allowed very basic constraints, I'm not aware of a way to succinctly express that in a regular expression. If there is such a way, please disregard the rest of my answer and follow the other suggestions here. :) You might want to consider a parser generator - things like the classic lex and yacc. I'm not really familiar with the Java choices, but here's a list: <http://java-source.net/open-source/parser-generators> If you're not familiar, the standard approach would be to first create a [lexer](http://en.wikipedia.org/wiki/Lexical_analysis) that turns your strings into tokens. Then you would pass those tokens on to a parser that applies your grammar to them and spits out some kind of result. In your case, I envision the parser resulting in a combination of a regular expression and additional conditions. For your numerical constraint example, it might give you the regular expression `/cal/long/3/4/143:(\d+)` and a constraint to apply to the first grouping (the `\d+` portion) that requires the number to lie between 100 and 1110. You'd then apply the RE to your strings for candidates, and apply the constraint to those candidates to find your matches. It's a pretty complicated approach, so hopefully there's a simpler way. I hope that gives you some ideas, at least.
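For illustration, a minimal Java sketch of the "regex finds candidates, then the numeric constraint is checked on the captured group" idea described above — here the pattern and bounds are hard-coded from the example, whereas the parser would generate them:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConstraintMatcher {
    // The RE selects candidate strings and captures the trailing number.
    static final Pattern P = Pattern.compile("/cal/long/3/4/143:(\\d+)");

    // A candidate matches only if its number lies strictly between min and max.
    public static boolean matches(String s, int min, int max) {
        Matcher m = P.matcher(s);
        if (!m.matches()) return false;
        int n = Integer.parseInt(m.group(1));
        return n > min && n < max;
    }

    public static void main(String[] args) {
        System.out.println(matches("/cal/long/3/4/143:999", 100, 1110)); // true
        System.out.println(matches("/cal/long/3/4/143:42", 100, 1110));  // false
    }
}
```

A generated matcher would just be a list of such (pattern, constraint) pairs produced from the user's expression.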
There's no reason to reinvent the wheel! The core of a regular expression engine is built on a strong foundation of mathematics and computer science; the reason we continue to use them today is they are principally sound and won't be improved in the foreseeable future. If you do find or create some alternative parsing language that only covers a subset of the possibilities Regex can, you will quickly have a user asking for a concept that can be expressed in Regex but your flavor just plain leaves out. Spend your time solving problems that haven't been solved instead!
Alternatives to Regular Expressions
[ "", "java", "regex", "parsing", "" ]
I have a MySQL database which contains a table of users. The primary key of the table is 'userid', which is set to be an auto increment field. What I'd like to do when I insert a new user into the table is to use the same value that the auto increment is creating in the 'userid' field in a different field, 'default_assignment'. e.g. I'd like a statement like this:

```
INSERT INTO users ('username','default_assignment') VALUES ('barry', value_of_auto_increment_field()) ```

so I create user 'Barry', the 'userid' is generated as 16 (for example), but I also want 'default_assignment' to have the same value of 16. Is there any way to achieve this please? Thanks! *Update:* Thanks for the replies. The default_assignment field isn't redundant. The default_assignment can reference any user within the users table. When creating a user I already have a form that allows selection of another user as the default_assignment, however there are cases where it needs to be set to the same user, hence my question. *Update:* Ok, I've tried out the trigger suggestion but still can't get this to work. Here's the trigger I've created:

```
CREATE TRIGGER default_assignment_self BEFORE INSERT ON `users` FOR EACH ROW BEGIN SET NEW.default_assignment = NEW.userid; END; ```

When inserting a new user, however, the default_assignment is always set to 0. If I manually set the userid then the default_assignment does get set to the userid. Therefore the auto increment value clearly gets assigned after the trigger takes effect.
there's no need to create another table, and max() will have problems according to the auto\_increment value of the table, so do this: ``` CREATE TRIGGER trigger_name BEFORE INSERT ON tbl FOR EACH ROW BEGIN DECLARE next_id INT; SET next_id = (SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='tbl'); SET NEW.field = next_id; END ``` I declare the next\_id variable because usually it will be used in some other way(\*), but you could assign directly with new.field=(select ...) ``` CREATE TRIGGER trigger_name BEFORE INSERT ON tbl FOR EACH ROW BEGIN SET NEW.field=(SELECT AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='tbl'); END ``` Also, when the SELECT returns a string field, you can CAST the value: ``` CREATE TRIGGER trigger_name BEFORE INSERT ON tbl FOR EACH ROW BEGIN SET NEW.field=CAST((SELECT aStringField FROM information_schema.TABLES WHERE TABLE_SCHEMA=DATABASE() AND TABLE_NAME='tbl') AS UNSIGNED); END ``` (\*) To auto-name an image: ``` SET NEW.field = CONCAT('image_', next_id, '.gif'); ``` (\*) To create a hash: ``` SET NEW.field = CONCAT( MD5( next_id ) , MD5( FLOOR( RAND( ) *10000000 ) ) ); ```
try this ``` INSERT INTO users (default_assignment) VALUES (LAST_INSERT_ID()+1); ```
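Both answers above revolve around reusing the generated key. As a hedged aside, the two-step alternative (insert the row, read the generated key back, then copy it into the second column) can be sketched in a self-contained way. The snippet below uses sqlite3 purely so it runs anywhere; on MySQL the same idea uses `LAST_INSERT_ID()` inside one transaction, and the table/column names are taken from the question:

```python
import sqlite3

# Two-step alternative to a BEFORE INSERT trigger: insert, read back the
# generated key, then copy it into default_assignment. sqlite3 stands in
# for MySQL here so the sketch is runnable as-is.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users ("
    " userid INTEGER PRIMARY KEY AUTOINCREMENT,"
    " username TEXT,"
    " default_assignment INTEGER)"
)

cur = conn.execute("INSERT INTO users (username) VALUES ('barry')")
new_id = cur.lastrowid  # the value the auto increment just generated
conn.execute(
    "UPDATE users SET default_assignment = ? WHERE userid = ?",
    (new_id, new_id),
)
conn.commit()

row = conn.execute(
    "SELECT userid, default_assignment FROM users WHERE username = 'barry'"
).fetchone()
print(row)  # userid and default_assignment now hold the same value
```

The trade-off versus the trigger is one extra statement per insert, in exchange for not depending on when the engine materialises the auto increment value.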
Can you access the auto increment value in MySQL within one statement?
[ "", "sql", "mysql", "insert", "auto-increment", "" ]
Ok so say I have 100 rows to insert and each row has about 150 columns (I know that sounds like a lot of columns, but I need to store this data in a single table). The inserts will occur at random (i.e. whenever a set of users decides to upload a file containing the data), about 20 times a month. However, the database will be under continuous load processing other functions of a large enterprise application. The columns are varchars, ints, as well as a variety of other types. Is the performance gain of wrapping these inserts in a transaction (as opposed to running them one at a time) going to be huge, minimal, or somewhere in between? Why? EDIT: This is for SQL Server 2005, but I'd be interested in 2000/2008 if there is something different to be said. Also I should mention that I understand the point about transactions being primarily for data consistency, but I want to focus on performance effects.
It can have an impact, actually. The point of transactions is not about how many you do, it's about keeping the data update consistent. If you have rows that need to be inserted together and are dependent on each other, those are the records you wrap in a transaction. Transactions are about keeping your data consistent. This should be the first thing you think about when using transactions. For example, if you have a debit (withdrawal) from your checking account, you want to make sure the credit (deposit) is also done. If either of those doesn't succeed, the whole "transaction" should be rolled back. Therefore, both actions MUST be wrapped in a transaction. When doing batch inserts, break them up into 3000 or 5000 records and cycle through the set. 3000-5000 has been a sweet number range for me for inserts; don't go above that unless you've tested that the server can handle it. Also, I will put GOs in the batch at about every 3000 or 5000 records for inserts. For updates and deletes I'll put a GO at about every 1000, because they require more resources to commit. If you're doing this from C# code, then in my opinion, you should build a batch import routine instead of doing millions of inserts one at a time through coding.
While transactions are a mechanism for keeping data consistent, they can actually have a massive impact on performance if they are used incorrectly or overused. I've just finished a [blog post](http://blog.staticvoid.co.nz/2012/04/making-dapper-faster-with-transactions.html) on the impact on performance of explicitly specifying transactions as opposed to letting them occur naturally. If you are inserting multiple rows and each insert occurs in its own transaction, there is a lot of overhead on locking and unlocking data. By encapsulating all inserts in a single transaction you can dramatically improve performance. Conversely, if you have many queries running against your database while large transactions are also occurring, they can block each other and cause performance issues. Transactions are definitely linked with performance, regardless of their underlying intent.
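The batching principle both answers describe can be demonstrated with a tiny self-contained script. This is a sketch, not the asker's stack: sqlite3 stands in for SQL Server purely so it runs anywhere, and the point is the shape of the comparison (commit per row vs one commit per batch), not the absolute numbers:

```python
import sqlite3
import time

# Commit-per-row pays transaction overhead once per insert; a single
# transaction around the whole batch amortises it.
ROWS = [("row-%d" % i, i) for i in range(500)]

def insert_per_row(conn):
    for name, n in ROWS:
        with conn:  # one transaction (and one commit) per insert
            conn.execute("INSERT INTO t (name, n) VALUES (?, ?)", (name, n))

def insert_one_txn(conn):
    with conn:  # a single transaction around the entire batch
        conn.executemany("INSERT INTO t (name, n) VALUES (?, ?)", ROWS)

timings = {}
for label, fn in [("per-row", insert_per_row), ("one-txn", insert_one_txn)]:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (name TEXT, n INTEGER)")
    start = time.perf_counter()
    fn(conn)
    timings[label] = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    print(label, count, "rows in %.4fs" % timings[label])
```

On a disk-backed database the gap is usually far larger than in memory, because each commit forces durable writes.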
How much does wrapping inserts in a transaction help performance on Sql Server?
[ "", "sql", "sql-server", "database", "performance", "transactions", "" ]
I want to do some timing tests on a Java application. This is what I am currently doing: ``` long startTime = System.currentTimeMillis(); doSomething(); long finishTime = System.currentTimeMillis(); System.out.println("That took: " + (finishTime - startTime) + " ms"); ``` Is there anything "wrong" with performance testing like this? What is a better way? **Duplicate**: [Is stopwatch benchmarking acceptable?](https://stackoverflow.com/questions/410437/is-stopwatch-benchmarking-acceptable)
The one flaw in that approach is that the "real" time `doSomething()` takes to execute can vary wildly depending on what other programs are running on the system and what its load is. This makes the performance measurement somewhat imprecise. One more accurate way of tracking the time it takes to execute code, assuming the code is single-threaded, is to look at the CPU time consumed by the thread during the call. You can do this with the JMX classes; in particular, with [`ThreadMXBean`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/management/ThreadMXBean.html). You can retrieve an instance of `ThreadMXBean` from [`java.lang.management.ManagementFactory`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/management/ManagementFactory.html), and, if your platform supports it (most do), use the [`getCurrentThreadCpuTime`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/management/ThreadMXBean.html#getCurrentThreadCpuTime()) method in place of `System.currentTimeMillis` to do a similar test. Bear in mind that `getCurrentThreadCpuTime` reports time in nanoseconds, not milliseconds. Here's a sample (Scala) method that could be used to perform a measurement: ``` def measureCpuTime(f: => Unit): java.time.Duration = { import java.lang.management.ManagementFactory.getThreadMXBean if (!getThreadMXBean.isThreadCpuTimeSupported) throw new UnsupportedOperationException( "JVM does not support measuring thread CPU-time") var finalCpuTime: Option[Long] = None val thread = new Thread { override def run(): Unit = { f finalCpuTime = Some(getThreadMXBean.getThreadCpuTime( Thread.currentThread.getId)) } } thread.start() while (finalCpuTime.isEmpty && thread.isAlive) { Thread.sleep(100) } java.time.Duration.ofNanos(finalCpuTime.getOrElse { throw new Exception("Operation never returned, and the thread is dead " + "(perhaps an unhandled exception occurred)") }) } ``` (Feel free to translate the above to Java!) 
This strategy isn't perfect, but it's less subject to variations in system load.
The code shown in the question is not good performance-measuring code: 1. The compiler might choose to optimize your code by reordering statements. Yes, it can do that. That means your entire test might fail. It can even choose to inline the method under test and reorder the measuring statements into the now-inlined code. 2. The HotSpot VM might choose to reorder your statements, inline code, cache results, delay execution... 3. Even assuming the compiler/HotSpot didn't trick you, what you measure is "wall time". What you should be measuring is CPU time (unless you use OS resources and want to include these as well, or you measure lock contention in a multi-threaded environment). The solution? Use a real profiler. There are plenty around, both free profilers and demos / time-locked trials of commercial-strength ones.
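The wall-time vs CPU-time distinction the answers draw can be seen directly with a short sketch. This is an illustration in Python rather than the Java in question (the stdlib clocks play the roles of `System.currentTimeMillis()` and `ThreadMXBean.getCurrentThreadCpuTime()`):

```python
import time

# perf_counter() is a wall clock: it advances while the thread sleeps or
# blocks. process_time() only advances while the process is actually
# computing, so a sleep barely registers on it.
wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.2)                              # blocked, near-zero CPU use
total = sum(i * i for i in range(100_000))   # genuine computation

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start
print("wall: %.3fs  cpu: %.3fs" % (wall_elapsed, cpu_elapsed))
```

The wall figure includes the whole sleep; the CPU figure is dominated by the loop alone, which is why stopwatch-style measurements swing with system load while CPU-time measurements are steadier.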
Java Performance Testing
[ "", "java", "performance", "performance-testing", "" ]
I'm looking at creating a basic ORM (purely for fun), and was wondering, is there a way to return the list of tables in a database and also the fields for every table? Using this, I want to be able to loop through the result set (in C#) and then say for each table in the result set, do this (e.g. use reflection to make a class that will do or contain xyz). Further to this, what are some good online blogs for SQL Server? I know this question is really about using system SPs and databases in Sql Server, and I am ok with general queries, so I'm interested in some blogs which cover this sort of functionality. Thanks
Is this what you are looking for: **Using OBJECT CATALOG VIEWS** ``` SELECT T.name AS Table_Name , C.name AS Column_Name , P.name AS Data_Type , C.max_length AS Size , CAST(P.precision AS VARCHAR) + '/' + CAST(P.scale AS VARCHAR) AS Precision_Scale FROM sys.objects AS T JOIN sys.columns AS C ON T.object_id = C.object_id JOIN sys.types AS P ON C.system_type_id = P.system_type_id WHERE T.type_desc = 'USER_TABLE'; ``` **Using INFORMATION SCHEMA VIEWS** ``` SELECT TABLE_SCHEMA , TABLE_NAME , COLUMN_NAME , ORDINAL_POSITION , COLUMN_DEFAULT , DATA_TYPE , CHARACTER_MAXIMUM_LENGTH , NUMERIC_PRECISION , NUMERIC_PRECISION_RADIX , NUMERIC_SCALE , DATETIME_PRECISION FROM INFORMATION_SCHEMA.COLUMNS; ``` Reference : My Blog - <http://dbalink.wordpress.com/2008/10/24/querying-the-object-catalog-and-information-schema-views/>
Tables :: ``` SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE='BASE TABLE' ``` columns :: ``` SELECT * FROM INFORMATION_SCHEMA.COLUMNS ``` or ``` SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='your_table_name' ```
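For the ORM-building loop the asker describes, the same walk (enumerate tables, then columns per table) can be sketched end to end. The snippet below uses sqlite3 so it is runnable as-is; on SQL Server you would issue the `INFORMATION_SCHEMA` queries from the answers above instead, and the sample tables here are made up for illustration:

```python
import sqlite3

# Build a {table: [(column, type), ...]} map from database metadata,
# the raw material a toy ORM would feed into code generation/reflection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)"
)

schema = {}
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
for (table,) in tables:
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    cols = conn.execute("PRAGMA table_info(%s)" % table).fetchall()
    schema[table] = [(name, ctype) for _, name, ctype, *_ in cols]

for table, columns in schema.items():
    print(table, columns)
```

The C# version would loop over a `DataTable` of the `INFORMATION_SCHEMA.COLUMNS` result in exactly the same shape.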
Getting list of tables, and fields in each, in a database
[ "", "sql", "t-sql", "" ]
I am looking for a good eCommerce CMS. I need to be able to sell services and products; it must be open source so that it can be customised wherever needed. I am very familiar with PHP and MySQL, and somewhat familiar with Python and Ruby, so a PHP solution would be preferred.
Some of the best PHP carts: [Magento](http://www.magentocommerce.com) - Full featured. Excellent code quality. Hard to learn. Requires lots of server resources. [PrestaShop](http://www.prestashop.com/) - PrestaShop is currently used by 250,000 shops worldwide and is available in 60 different languages. [OpenCart](http://opencart.com) - OpenCart comes with an inbuilt Affiliate system, where affiliates can promote specific products and get paid for this. [InterSpire](http://www.interspire.com/shoppingcart/) - Not Free. [FoxyCart](http://foxycart.com) - Not Free. Hosted checkout that uses your templates. Works well with a CMS like Modx or Expression Engine. [LemonStand](http://lemonstandapp.com/) - Not Free.
I have two recommendations for you. 1. Don't use OSCommerce. 2. Don't use XTCommerce, which is based on OSCommerce. To be more specific: **NEVER** **EVER** use OSCommerce for **ANY** project you want to extend at all. If you would run the project 100% out-of-the-box and just need some payment extension nobody else offers, consider it for two seconds, but then better DON'T. OSCommerce features code from 2001. OSCommerce has only one coding pattern, which is FIXHACKCOPYQUICKPASTEHERENOW. It's the negation of everything you think you know from software development and project management. If you use OSCommerce and try to extend it, your project will take twice as long and you will start hating web development. And yes, I know it sounds like it, but I am not kidding. Been there, done that. If anybody tells you to use OSCommerce - e.g. for all the existing extensions that are out there - stand up and leave the room. ### Extra Tips: Magento IS SLOW. * <http://www.prestashop.com/> - In the OSS world Prestashop has some popularity. * <http://www.interspire.com/shoppingcart/> has some reputation but is not OSS.
Ecommerce tool for selling services
[ "", "php", "mysql", "e-commerce", "" ]
The main problem is about changing the index of rows to 1,2,3.. where contact-id and type are the same. But all columns can contain exactly the same data, because an ex-employee messed up and updated all rows by contact-id and type. Somehow there are rows that aren't messed up but still share the same index values. It is total chaos. I tried to use an inner cursor with the variables coming from the outer cursor, but it seems to get stuck in the inner cursor. A part of the query looks like this: ``` Fetch NEXT FROM OUTER_CURSOR INTO @CONTACT_ID, @TYPE While (@@FETCH_STATUS <> -1) BEGIN IF (@@FETCH_STATUS <> -2) DECLARE INNER_CURSOR Cursor FOR SELECT * FROM CONTACTS where CONTACT_ID = @CONTACT_ID and TYPE = @TYPE Open INNER_CURSOR Fetch NEXT FROM INNER_CURSOR While (@@FETCH_STATUS <> -1) BEGIN IF (@@FETCH_STATUS <> -2) ``` What can be the problem? Is @@FETCH\_STATUS ambiguous or something? EDIT: everything looks fine if I don't use this code inside the inner cursor: ``` UPDATE CONTACTS SET INDEX_NO = @COUNTER where current of INNER_CURSOR ``` EDIT: here is the big picture: ``` BEGIN TRAN DECLARE @CONTACT_ID VARCHAR(15) DECLARE @TYPE VARCHAR(15) DECLARE @INDEX_NO SMALLINT DECLARE @COUNTER SMALLINT DECLARE @FETCH_STATUS INT DECLARE OUTER_CURSOR CURSOR FOR SELECT CONTACT_ID, TYPE, INDEX_NO FROM CONTACTS WHERE CONTACT_ID IN (SELECT CONTACT_ID FROM dbo.CONTACTS WHERE CONTACT_ID IN(...) 
GROUP BY CONTACT_ID, TYPE, INDEX_NO HAVING COUNT(*) > 1 OPEN OUTER_CURSOR FETCH NEXT FROM OUTER_CURSOR INTO @CONTACT_ID, @TYPE, @INDEX_NO WHILE (@@FETCH_STATUS <> -1) BEGIN IF (@@FETCH_STATUS <> -2) SET @COUNTER = 1 DECLARE INNER_CURSOR CURSOR FOR SELECT * FROM CONTACTS WHERE CONTACT_ID = @CONTACT_ID AND TYPE = @TYPE FOR UPDATE OPEN INNER_CURSOR FETCH NEXT FROM INNER_CURSOR WHILE (@@FETCH_STATUS <> -1) BEGIN IF (@@FETCH_STATUS <> -2) UPDATE CONTACTS SET INDEX_NO = @COUNTER WHERE CURRENT OF INNER_CURSOR SET @COUNTER = @COUNTER + 1 FETCH NEXT FROM INNER_CURSOR END CLOSE INNER_CURSOR DEALLOCATE INNER_CURSOR FETCH NEXT FROM OUTER_CURSOR INTO @CONTACT_ID, @TYPE, @INDEX_NO END CLOSE OUTER_CURSOR DEALLOCATE OUTER_CURSOR COMMIT TRAN ```
I don't fully understand what the problem was with the *"update current of cursor"*, but it is solved by using the fetch statement twice for the inner cursor: ``` FETCH NEXT FROM INNER_CURSOR WHILE (@@FETCH_STATUS <> -1) BEGIN UPDATE CONTACTS SET INDEX_NO = @COUNTER WHERE CURRENT OF INNER_CURSOR SET @COUNTER = @COUNTER + 1 FETCH NEXT FROM INNER_CURSOR FETCH NEXT FROM INNER_CURSOR END ```
You have a variety of problems. First, why are you using your specific @@FETCH\_STATUS values? It should just be @@FETCH\_STATUS = 0. Second, you are not selecting your inner Cursor *into* anything. And I cannot think of any circumstance where you would select all fields in this way - spell them out! Here's a sample to go by. Folder has a primary key of "ClientID" that is also a foreign key for Attend. I'm just printing all of the Attend UIDs, broken down by Folder ClientID: ``` Declare @ClientID int; Declare @UID int; DECLARE Cur1 CURSOR FOR SELECT ClientID From Folder; OPEN Cur1 FETCH NEXT FROM Cur1 INTO @ClientID; WHILE @@FETCH_STATUS = 0 BEGIN PRINT 'Processing ClientID: ' + Cast(@ClientID as Varchar); DECLARE Cur2 CURSOR FOR SELECT UID FROM Attend Where ClientID=@ClientID; OPEN Cur2; FETCH NEXT FROM Cur2 INTO @UID; WHILE @@FETCH_STATUS = 0 BEGIN PRINT 'Found UID: ' + Cast(@UID as Varchar); FETCH NEXT FROM Cur2 INTO @UID; END; CLOSE Cur2; DEALLOCATE Cur2; FETCH NEXT FROM Cur1 INTO @ClientID; END; PRINT 'DONE'; CLOSE Cur1; DEALLOCATE Cur1; ``` Finally, are you *SURE* you want to be doing something like this in a stored procedure? It is very easy to abuse stored procedures and often reflects problems in characterizing your problem. The sample I gave, for example, could be far more easily accomplished using standard select calls.
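As an aside to the cursor-based approaches above: the renumbering itself can usually be done set-based, with no cursor at all. Here is a runnable sketch using sqlite3 (table and data are invented for illustration); a correlated count over `rowid` plays the role that `ROW_NUMBER() OVER (PARTITION BY contact_id, type)` would play on SQL Server:

```python
import sqlite3

# Assign INDEX_NO = 1, 2, 3, ... within each (contact_id, type) group in a
# single UPDATE, by counting how many rows in the same group have a rowid
# less than or equal to the current row's.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE contacts (contact_id TEXT, type TEXT, index_no INTEGER)"
)
conn.executemany(
    "INSERT INTO contacts (contact_id, type, index_no) VALUES (?, ?, 0)",
    [("c1", "home"), ("c1", "home"), ("c1", "home"),
     ("c2", "work"), ("c2", "work")],
)

conn.execute(
    """
    UPDATE contacts SET index_no = (
        SELECT COUNT(*) FROM contacts AS c2
        WHERE c2.contact_id = contacts.contact_id
          AND c2.type = contacts.type
          AND c2.rowid <= contacts.rowid
    )
    """
)

rows = conn.execute(
    "SELECT contact_id, type, index_no FROM contacts"
    " ORDER BY contact_id, index_no"
).fetchall()
print(rows)
```

One statement replaces both nested loops, and the engine is free to optimise it.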
Cursor inside cursor
[ "", "sql", "sql-server", "t-sql", "sql-server-2005", "database-cursor", "" ]
Is there any benefit in using compile for regular expressions in Python? ``` h = re.compile('hello') h.match('hello world') ``` vs ``` re.match('hello', 'hello world') ```
I've had a lot of experience running a compiled regex 1000s of times versus compiling on-the-fly, and have not noticed any perceivable difference. Obviously, this is anecdotal, and certainly not a great argument *against* compiling, but I've found the difference to be negligible. EDIT: After a quick glance at the actual Python 2.5 library code, I see that Python internally compiles AND CACHES regexes whenever you use them anyway (including calls to `re.match()`), so you're really only changing WHEN the regex gets compiled, and shouldn't be saving much time at all - only the time it takes to check the cache (a key lookup on an internal `dict` type). From module re.py (comments are mine): ``` def match(pattern, string, flags=0): return _compile(pattern, flags).match(string) def _compile(*key): # Does cache check at top of function cachekey = (type(key[0]),) + key p = _cache.get(cachekey) if p is not None: return p # ... # Does actual compilation on cache miss # ... # Caches compiled regex if len(_cache) >= _MAXCACHE: _cache.clear() _cache[cachekey] = p return p ``` I still often pre-compile regular expressions, but only to bind them to a nice, reusable name, not for any expected performance gain.
For me, the biggest benefit to `re.compile` is being able to separate definition of the regex from its use. Even a simple expression such as `0|[1-9][0-9]*` (integer in base 10 without leading zeros) can be complex enough that you'd rather not have to retype it, check if you made any typos, and later have to recheck if there are typos when you start debugging. Plus, it's nicer to use a variable name such as num or num\_b10 than `0|[1-9][0-9]*`. It's certainly possible to store strings and pass them to re.match; however, that's *less* readable: ``` num = "..." # then, much later: m = re.match(num, input) ``` Versus compiling: ``` num = re.compile("...") # then, much later: m = num.match(input) ``` Though it is fairly close, the last line of the second feels more natural and simpler when used repeatedly.
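Both claims above (the module-level functions hit a cache, and a compiled pattern is just a nicer handle for the same machinery) are easy to check in a few lines:

```python
import re

# A named, precompiled pattern and the module-level function produce the
# same match; the pattern object also carries findall/search/etc. directly.
num = re.compile(r"0|[1-9][0-9]*")  # base-10 integer, no leading zeros

m1 = num.match("1024 bytes")
m2 = re.match(r"0|[1-9][0-9]*", "1024 bytes")
print(m1.group(0), m2.group(0))  # both match "1024"

# The no-leading-zeros rule shows up in how "042" is split:
print(num.findall("ids: 7, 042, 19"))
```

Note how `"042"` comes back as `'0'` followed by `'42'`: the first alternative `0` wins at the leading zero, exactly what the "no leading zeros" pattern intends.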
Is it worth using Python's re.compile?
[ "", "python", "regex", "" ]
Does javascript coding work better with any particular server language?
Presuming that you mean client-side JavaScript, no. By the time that JavaScript is executing, it's in an environment where server-side code isn't visible. In fact, it would be tricky for JavaScript to even determine what server-side environment created the document it's acting on. All it sees is the output.
No. Client Side JavaScript is unaffected by the language used to code the server side processing.
Does javascript coding work better with any particular server language?
[ "", "javascript", "" ]
I have 4 listboxes (lstStates, lstCounties, lstCities, and lstZipcodes). Here are a few constraints: * None of the listboxes are disabled. * Any listbox can be selected at anytime, meaning there is no specific order the user has to choose. * Filtering is forward and backwards. By this, I mean if a user selects a state from lstStates, it will filter lstCounties, lstCities, and lstZipcodes. If a user selects a zipcode from lstZipcodes, it will filter lstCities, lstCounties, and lstStates. * The listboxes allow multiple selections. Each listbox is bound to a datatable to get its initial data. The datatable is retrieved from a sqlserver stored procedure. Each listbox has its own stored procedure, for example, lstStates has one called GetStates which returns one column (State) and the ListBoxes DataValueField and DataTextField are both set to State. Similar to lstStates, lstCities is bound to a datatable which gets one column from a GetCities stored proc which is city. Another thing I want to point out is that I am connecting an ObjectDataSource to get the datatable.
Already been asked: [What is the most efficient way to filter listboxes based on selections of another in c#?](https://stackoverflow.com/questions/381396/what-is-the-most-efficient-way-to-filter-listboxes-based-on-selections-of-another) [edit] OK, what you need to do is add a handler for each [myListbox]\_SelectedIndexChanged event. When the selection is changed, you'll need to refresh all the other listboxes based on those selections. I assume that this will need to be handled by the database, since linking States to ZipCodes any other way would be ugly. So presumably your data for States<-->Zips<-->Counties relationships is in your db somewhere. So you'll need to have procs in your db (or LINQ middle layer) that get States by Zips and so on. On each selection changed event, send the new selection back to the db sproc and then rebind the listbox based on the return data. You should be able to make one sproc for each one that returns all states if no zip is passed in and so on. [/edit]
To clarify, on your initial load of the page, you are loading ALL zipcodes, ALL cities, ALL states and ALL countries? This seems a bit cumbersome to me. This is the type of requirement I would question. (Granted I don't know that you didn't already question it or that some good answer came from it).
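Setting the ASP.NET data-binding machinery aside, the bidirectional filtering rule itself is small enough to sketch language-neutrally. The sample rows and field names below are invented for illustration; the idea is: keep every record consistent with all current selections, then each list offers the surviving values for its own field:

```python
# Rows of (state, county, city, zip); selections map a field name to the
# set of values the user has picked in that listbox (empty set / missing
# key means "no filter on this field").
ROWS = [
    ("TX", "Travis", "Austin", "78701"),
    ("TX", "Harris", "Houston", "77002"),
    ("CA", "Alameda", "Oakland", "94601"),
]
FIELDS = ("state", "county", "city", "zip")

def filter_options(rows, selections):
    def keep(row):
        return all(
            not selections.get(field) or row[i] in selections[field]
            for i, field in enumerate(FIELDS)
        )
    surviving = [row for row in rows if keep(row)]
    return {field: sorted({row[i] for row in surviving})
            for i, field in enumerate(FIELDS)}

# Selecting a zip filters "backwards" to state/county/city:
opts = filter_options(ROWS, {"zip": {"78701"}})
print(opts)
```

In the ASP.NET version, each `SelectedIndexChanged` handler would recompute this (in SQL rather than in memory) and rebind the other three listboxes.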
How to filter the following listboxes in asp.net?
[ "", "c#", "asp.net", "listbox", "" ]
I'm trying to develop a GUI application for an embedded platform, without any windowing whatsoever, and I'm doing that with DirectFB, which suits my needs just fine. Since the embedded device I develop for is not that powerful, I would really like to develop on my own Ubuntu desktop. The problem is that the framebuffer conflicts with X.org, forcing me to leave the whole desktop and shut down X.org just to see the result of my changes. Is there a good framebuffer simulator that suits my needs? Qt has one, called QVFb, but it only works for developing Qt apps, and the VNC back-end of DirectFB always crashes. So, any ideas?
DirectFB has a X11 backend. ``` $ sudo apt-get install libdirectfb-extra # for Debian and Ubuntu, anyhow $ cat ~/.directfbrc system=x11 force-windowed ``` Also, DirectFB has a SDL backend, and SDL has a X11 backend. Also, SDL has a GGI backend, and GGI has an X backend. That's a bit circuitous, but it should work :) I tested it with ``` $ SDL_VIDEODRIVER=directfb ffplay some_movie.avi ``` and got a nice 640x480 window with media playing and DirectFB handling layering and input, so I'm sure this works.
The three previous answers are all good suggestions. I'd suggest trying ephemient's answer because it's the simplest. For more details on setting up your .directfbrc file, check out "man directfbrc". One other possibility would be to switch from X to another virtual terminal (using CTRL+ALT+F1), run your directfb program, and then switch back X (using CTRL+ALT+F7).
How to develop a DirectFB app without leaving X.11 environment
[ "", "c++", "linux", "user-interface", "framebuffer", "directfb", "" ]
I am new to python and am writing some scripts to automate downloading files from FTP servers, etc. I want to show the progress of the download, but I want it to stay in the same position, such as: output: > Downloading File FooFile.txt [47%] I'm trying to avoid something like this: ``` Downloading File FooFile.txt [47%] Downloading File FooFile.txt [48%] Downloading File FooFile.txt [49%] ``` How should I go about doing this? --- **Duplicate**: [How can I print over the current line in a command line application?](https://stackoverflow.com/questions/465348/how-can-i-print-over-the-current-line-in-a-command-line-application/465360#465360)
You can also use the carriage return: ``` sys.stdout.write("Download progress: %d%% \r" % (progress) ) sys.stdout.flush() ```
# Python 2 I like the following: ``` print 'Downloading File FooFile.txt [%d%%]\r'%i, ``` Demo: ``` import time for i in range(100): time.sleep(0.1) print 'Downloading File FooFile.txt [%d%%]\r'%i, ``` # Python 3 ``` print('Downloading File FooFile.txt [%d%%]\r'%i, end="") ``` Demo: ``` import time for i in range(100): time.sleep(0.1) print('Downloading File FooFile.txt [%d%%]\r'%i, end="") ``` # PyCharm Debugger Console with Python 3 ``` # On PyCharm Debugger console, \r needs to come before the text. # Otherwise, the text may not appear at all, or appear inconsistently. # tested on PyCharm 2019.3, Python 3.6 import time print('Start.') for i in range(100): time.sleep(0.02) print('\rDownloading File FooFile.txt [%d%%]'%i, end="") print('\nDone.') ```
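Both answers rest on the same carriage-return trick. As a small addendum, wrapping it in a helper that writes to an explicit stream makes the behaviour easy to inspect and test (names here are illustrative):

```python
import io
import sys

# '\r' rewinds the cursor to the start of the line without emitting a
# newline, so each call overwrites the previous progress text in place.
def show_progress(name, percent, stream=sys.stdout):
    stream.write("\rDownloading File %s [%d%%]" % (name, percent))
    stream.flush()  # force the partial line out; no newline flushes it

buf = io.StringIO()
for pct in (0, 47, 100):
    show_progress("FooFile.txt", pct, stream=buf)
print(repr(buf.getvalue()))  # three writes, three '\r', zero '\n'
```

The explicit `flush()` matters: without a trailing newline, buffered output may not appear until the download is already finished.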
How do I write output in same place on the console?
[ "", "python", "console-output", "" ]
I ran `gprof` on a C++ program that took `16.637s`, according to `time()`, and I got this for the first line of output: ``` % cumulative self self total time seconds seconds calls s/call s/call name 31.07 0.32 0.32 5498021 0.00 0.00 [whatever] ``` Why does it list `31.07%` of time if it only took `.32` seconds? Is this a per-call time? (Wouldn't that be self s/call?) This is my first time using `gprof`, so please be kind :) Edit: by scrolling down, it appears that gprof only thinks my program takes 1.03 seconds. Why might it be getting it so wrong?
The bottleneck turned out to be in file I/O (see [Is std::ifstream significantly slower than FILE?](https://stackoverflow.com/questions/477225/is-stdifstream-significantly-slower-than-file)). I switched to reading the entire file in a buffer and it sped up enormously. The problem here was that gprof doesn't appear to generate accurate profiling when waiting for file I/O (see <http://www.regatta.cs.msu.su/doc/usr/share/man/info/ru_RU/a_doc_lib/cmds/aixcmds2/gprof.htm>). In fact, `seekg` and `tellg` were not even on the profiling list, and they were the bottleneck!
**Self seconds** is the time spent in [whatever]. **Cumulative seconds** is the time spent in [whatever] and the calls above it (such as [whatever] + main) **Neither of those include time spent in functions called from [whatever].** That's why you're not seeing more time listed. If your [whatever] function is calling lots of printf's, for example, your gprof output is telling you that printf is eating the majority of that time.
Confusing gprof output
[ "", "c++", "optimization", "profiling", "gprof", "" ]
I'm trying to build a C/C++ static library using visual studio 2005. Since the selection of the runtime library is a compile option, I am forced to build four variations of my library, one for each variation of the runtime library: * /MT - static runtime library * /MD - DLL runtime library * /MTd - debug static runtime library * /MDd - debug DLL runtime library These are *compiler* options, not linker options. Coming from a Linux background, this seems strange. Do the different runtime libraries have different calling conventions or something? Why can't the different runtime libraries be resolved at link time, i.e. when I link the application which uses my static library?
One side effect of the C preprocessor definitions like `_DLL` and `_DEBUG` that zdan mentioned: Some data structures (such as STL containers and iterators) may be sized differently in the debug runtime, possibly due to features such as `_HAS_ITERATOR_DEBUGGING` and `_SECURE_SCL`. You must compile your code with [structure definitions that are binary-compatible with the library you're linking to](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=352680). If you mix and match object files that were compiled against different runtime libraries, you will get linker warnings such as the following: ``` warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs ```
These options may add defines (\_DLL and \_DEBUG, for example) that are used in the runtime library header files. One common thing to do is to add \_\_declspec(dllimport) to function declarations when linked dynamically. The compiler also seems to use these to assist the linker in linking to the correct libraries. This is explained in the [MSDN](http://msdn.microsoft.com/en-us/library/2kzt1wy3(VS.80).aspx).
Why is runtime library a compiler option rather than a linker option?
[ "", "c++", "c", "visual-studio", "linker", "" ]
Is there a good formatter for PL/SQL which is free and offline?
There is a desktop version for [SQLinForm](https://www.sqlinform.com)
I am not really sure what you mean by "offline". However, Oracle has a tool called SQL Developer that is free and can be downloaded from their website. It has some formatting options you can apply to your code.
Good free offline PL/SQL formatter
[ "", "sql", "plsql", "formatting", "offline", "" ]
I'm using the [jQuery autocomplete plugin](http://docs.jquery.com/Plugins/Autocomplete) to get a list of locations, which works fine. But if the user clicks the browser's back button after submitting the page with the autocomplete textbox, the textbox is empty. If I take the autocomplete off the textbox and submit & click back it remembers the text. Is there a way to stop the autocomplete from clearing the textbox when the page loads?
Well, I found the issue. It seems to be on Firefox, not IE, and it's not technically due to the autocomplete plugin. It's because the plugin adds the attribute `autocomplete="off"` to the textbox. This is so that the browser's autocomplete history doesn't conflict with jQuery's autocomplete, but in Firefox, fields that have this attribute don't get pre-populated when the user clicks the back button. I'm guessing there isn't a way around this; it appears to be the default browser behaviour. If anyone knows differently, please post a comment.
I had the same problem today, and found a way around this. ``` $("#search_form").submit(function() { $("#location")[0].removeAttribute("autocomplete"); }); ``` This code will remove the autocomplete attribute just before the form is submitted. You will have to change the selectors so that they match your form and input(s).
Stop the jQuery autocomplete plugin from forgetting text when the user clicks back
[ "", "javascript", "jquery", "browser", "autocomplete", "" ]
I've come across some SQL queries in Oracle that contain '(+)' and I have no idea what that means. Can someone explain its purpose or provide some examples of its use? Thanks
It's Oracle's synonym for `OUTER JOIN`. ``` SELECT * FROM a, b WHERE b.id(+) = a.id ``` gives same result as ``` SELECT * FROM a LEFT OUTER JOIN b ON b.id = a.id ```
The + is a shortcut for OUTER JOIN; depending on which side you put it on, it indicates a LEFT or RIGHT OUTER JOIN. Check the second entry in [this forum post](http://www.orafaq.com/forum/t/79187/2/) for some examples
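The equivalence the answers state can be checked on a tiny dataset. This sketch uses sqlite3 standing in for Oracle (sqlite only accepts the ANSI join form, not `(+)`), with made-up tables `a` and `b` matching the first answer's example:

```python
import sqlite3

# A LEFT OUTER JOIN keeps every row of `a`; where `b` has no match,
# its columns come back as NULL (None in Python).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, name TEXT);
    CREATE TABLE b (id INTEGER, note TEXT);
    INSERT INTO a VALUES (1, 'one'), (2, 'two');
    INSERT INTO b VALUES (1, 'has match');
""")

rows = conn.execute("""
    SELECT a.id, a.name, b.note
    FROM a LEFT OUTER JOIN b ON b.id = a.id
    ORDER BY a.id
""").fetchall()
print(rows)  # the id=2 row survives with None for b.note
```

In Oracle's legacy notation the same query is `SELECT ... FROM a, b WHERE b.id(+) = a.id`: the `(+)` marks the side allowed to be "missing".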
Meaning of (+) in SQL queries
[ "", "sql", "oracle", "" ]
What's the "Bad magic number" ImportError in python, and how do I fix it? The only thing I can find online suggests this is caused by compiling a .py -> .pyc file and then trying to use it with the wrong version of python. In my case, however, the file seems to import fine some times but not others, and I'm not sure why. The information python's providing in the traceback isn't particularly helpful (which is why I was asking here...), but here it is in case it helps: ``` Traceback (most recent call last): File "run.py", line 7, in <module> from Normalization import Normalizer ```
The magic number comes from UNIX-type systems where the first few bytes of a file held a marker indicating the file type. Python puts a similar marker into its `pyc` files when it creates them. Then the python interpreter makes sure this number is correct when loading it. Anything that damages this magic number will cause your problem. This includes editing the `pyc` file or trying to run a `pyc` from a different version of python (usually later) than your interpreter. If they are *your* `pyc` files (or you have the `py` files for them), just delete them and let the interpreter re-compile the `py` files. On UNIX type systems, that could be something as simple as: ``` rm *.pyc ``` or: ``` find . -name '*.pyc' -delete ``` If they are not yours, and the original `py` files are not provided, you'll have to either *get* the `py` files for re-compilation, or use an interpreter that can run the `pyc` files with that particular magic value. One thing that might be causing the intermittent nature. The `pyc` that's causing the problem may only be imported under certain conditions. It's highly unlikely it would import sometimes. You should check the actual full stack trace when the import fails. As an aside, the first word of all my `2.5.1(r251:54863)` `pyc` files is `62131`, `2.6.1(r261:67517)` is `62161`. The list of all magic numbers can be found in `Python/import.c`, reproduced here for completeness (current as at the time the answer was posted, has changed since then): ``` 1.5: 20121 1.5.1: 20121 1.5.2: 20121 1.6: 50428 2.0: 50823 2.0.1: 50823 2.1: 60202 2.1.1: 60202 2.1.2: 60202 2.2: 60717 2.3a0: 62011 2.3a0: 62021 2.3a0: 62011 2.4a0: 62041 2.4a3: 62051 2.4b1: 62061 2.5a0: 62071 2.5a0: 62081 2.5a0: 62091 2.5a0: 62092 2.5b3: 62101 2.5b3: 62111 2.5c1: 62121 2.5c2: 62131 2.6a0: 62151 2.6a1: 62161 2.7a0: 62171 ```
Deleting all .pyc files will fix the "Bad Magic Number" error. ``` find . -name "*.pyc" -delete ```
What's the bad magic number error?
[ "", "python", "" ]
I am looking to use a javascript obfuscator. What are some of the most popular and what impact do they have on performance?
Yahoo has a pretty good one. It's technically a minifier, but it does a nice job of obfuscating in the process. [YUI Compressor](http://developer.yahoo.com/yui/compressor/ "YUI Compressor")
Tested 8 different obfuscators (except www.javascriptobfuscator.com), and was amazed by how much they all suck. Ended up writing my own obfuscator using regular expressions. Enjoy: ``` static Dictionary<string, string> names = new Dictionary<string, string>(); static bool testing = false; static string[] files1 = @"a.js,b.js,c.js" .Split(new string[] { Environment.NewLine, " ", "\t", "," }, StringSplitOptions.RemoveEmptyEntries); static string[] ignore_names = @"sin,cos,order,min,max,join,round,pow,abs,PI,floor,random,index,http, __defineGetter__,__defineSetter__,indexOf,isPrototypeOf,length,clone,toString,split,clear,erase RECT,SIZE,Vect,VectInt,vectint,vect,int,double,canvasElement,text1,text2,text3,textSizeTester,target,Number number,TimeStep,images,solid,white,default,cursive,fantasy,". Split(new string[] { Environment.NewLine, " ", "\t", "," }, StringSplitOptions.RemoveEmptyEntries); static string[] extra_names = @"a,b,c".Split(new string[] { Environment.NewLine, " ", "\t", "," }, StringSplitOptions.RemoveEmptyEntries); static string src = @"C:\temp"; static string dest1 = src + "\\all1.js"; static string dest2 = src + "\\all2.js"; static void Main() { File.Delete(dest1); File.Delete(dest2); foreach (string s in files1) File.AppendAllText(dest1, File.ReadAllText(src + "\\" + s) + Environment.NewLine + Environment.NewLine + Environment.NewLine + Environment.NewLine + Environment.NewLine + Environment.NewLine, Encoding.UTF8); string all = File.ReadAllText(dest1); int free_index = 0; foreach (string s in extra_names) { free_index++; string free_name = "" + (char)('A' + (free_index % 25)) + (char)('A' + ((free_index / 25) % 25)); Debug.Assert(free_name != "AA"); names.Add(s, free_name); } Regex reg1 = new Regex("(var |function |\\.prototype\\.)([a-zA-Z0-9_]+)"); int startat = 0; while (startat < all.Length) { Match match = reg1.Match(all, startat); if (!match.Success) break; string key = all.Substring(match.Groups[2].Index, match.Groups[2].Length); if (!ignore_names.Contains(key)) { free_index++; string free_name = "" + (char)('A' + (free_index % 25)) + (char)('A' + ((free_index / 25) % 25)); Debug.Assert(free_name != "AA"); if (!names.ContainsKey(key)) names.Add(key, testing ? key + free_name : free_name); } startat = match.Groups[0].Index + match.Groups[0].Length; } Regex reg2 = new Regex(@"/\*.*\*/", RegexOptions.Multiline); Regex reg3 = new Regex("([^:\\\\])//.*\r\n"); Regex reg4 = new Regex("([a-zA-Z0-9_]+)"); Regex reg5 = new Regex("(\r\n)*[ \t]+"); Regex reg6 = new Regex("(\r\n)+"); all = reg2.Replace(all, eval2); all = reg3.Replace(all, eval3); all = reg4.Replace(all, eval4); all = reg5.Replace(all, eval5); all = reg6.Replace(all, eval6); File.WriteAllText(dest2, all); } public static string eval4(Match match) { return names.ContainsKey(match.Groups[1].Value) ? names[match.Groups[1].Value] : match.Groups[0].Value; } public static string eval5(Match match) { return string.IsNullOrEmpty(match.Groups[1].Value) ? " " : Environment.NewLine; } public static string eval6(Match match) { return Environment.NewLine; } public static string eval2(Match match) { return " "; } public static string eval3(Match match) { return match.Groups[1].Value + Environment.NewLine; } ```
What is the best javascript obfuscator?
[ "", "javascript", "obfuscation", "" ]
I often find the need to do something along these lines: ``` public class OperationResult { bool success; SomeOtherObject someobject; } public interface ICanDoSomethingWeird { OperationResult DoMyThing(); } ``` Where the OperationResult is really a class that belongs to the ICanDoSomethingWeird interface. I find it very annoying that I cannot place it in the namespace of the interface. I'm wondering how other people deal with this. Do you just stick it in the global namespace? Or just one namespace up from the namespace where the interface sits? My current approach is to rename the OperationResult class to ICanDoSomethingWeird\_OperationResult, but am not very impressed by how pretty that is :) Anybody have a better solution?
If `OperationResult` is a type used solely by `ICanDoSomethingWeird` and clients of `ICanDoSomethingWeird`, then it belongs in the same "context" as `ICanDoSomethingWeird`. That is, it *belongs* in the *same* namespace as `ICanDoSomethingWeird`: ``` namespace MyNamespace { public class OperationResult {} public interface ICanDoSomethingWeird { OperationResult DoMyThing(); } } ``` Think about the client code. Would you prefer this: ``` using MyNamespace; ICanDoSomethingWeird myWeirdThing = ...; ICanDoSomethingWeird.OperationResult result = myWeirdThing.DoMyThing(); ``` or this: ``` using MyNamespace; ICanDoSomethingWeird myWeirdThing = ...; OperationResult result = myWeirdThing.DoMyThing(); ``` The latter makes more sense to me, and is the only option where interfaces are concerned. You cannot declare inner types in interfaces. And do note the [general advice](http://en.csharp-online.net/.NET_Type_Design_Guidelines%E2%80%94Nested_Types) regarding nested types: > DO NOT use nested types if the type is > likely to be referenced outside of the > containing type.
An interface does not define any implementation. By including a class your interface is effectively including some implementation. Nested classes have the purpose of assisting the class implementation that defines it. Nested classes should not be used as if the class has merit outside of its use with the containing class. A class should not be used as a surrogate for a namespace to define other classes. If an interface defines some behaviour that requires the use of a class that as yet does not exist it would make sense to create that class in the same namespace to which the interface belongs.
How to deal with not being allowed to define types in an interface in c#
[ "", "c#", "" ]
How to make it so if one copy of a program is running another won't be able to open? Or better yet, how to make it so that if one copy is already running, then trying to run another copy will just act as if you maximized the original process?
Scott Hanselman wrote [a post on doing this sort of thing](http://www.hanselman.com/blog/TheWeeklySourceCode31SingleInstanceWinFormsAndMicrosoftVisualBasicdll.aspx)
This article [True Single instance application - WinForms.NET](http://www.openwinforms.com/single_instance_application.html) explains how to create a *true* single instance: > This article simply explains how you > can create a windows application with > control on the number of its instances > or run only single instance. This is > very typical need of a business > application. There are already lots of > other possible solutions to control > this. > > e.g. Checking the process list with > the name of our application. But this > methods don't seems to be a good > approach to follow as everything is > decided just on the basis on the > application name which may or may not > be unique all across. ``` using System; using Microsoft.VisualBasic.ApplicationServices; namespace Owf { public class SingleInstanceController : WindowsFormsApplicationBase { public SingleInstanceController() { // Set whether the application is single instance this.IsSingleInstance = true; this.StartupNextInstance += new StartupNextInstanceEventHandler(this_StartupNextInstance); } void this_StartupNextInstance(object sender, StartupNextInstanceEventArgs e) { // Here you get the control when any other instance is // invoked apart from the first one. // You have args here in e.CommandLine. // You custom code which should be run on other instances } protected override void OnCreateMainForm() { // Instantiate your main application form this.MainForm = new Form1(); } } } ``` Change your main function this way: ``` [STAThread] static void Main() { string[] args = Environment.GetCommandLineArgs(); SingleInstanceController controller = new SingleInstanceController(); controller.Run(args); } ```
How to make it so if one copy of a program is running another won't be able to open?
[ "", "c#", ".net", "" ]
I used the function: ``` document.getElementsByTagName('strong') ``` to get all the text in a page with that type of formatting. The HTML looks like: ``` <td align="center" valign="bottom"><H1><font size="+4"><strong>TEXT_HERE</strong></font> <br> ``` I would like to change "TEXT\_HERE" to maybe something else or remove it all together. How might I go about doing that? Thanks in advance for your help :)
With a for loop? ``` var strongElems = document.getElementsByTagName('strong'); var wantToHide = true || false; for (var i=0; i<strongElems.length; i++) { var thisElem = strongElems[i]; if (wantToHide) { thisElem.style.display = "none"; // hide it } else { thisElem.textContent = "something else"; // change it } } ```
``` // ==UserScript== // @name MyScript // @namespace http://example.com // @description Example // @include * // // @require http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js // ==/UserScript== var shouldHide = false; $('strong').each(function() { if(shouldHide) { $(this).hide(); } else { $(this).text("New Text"); } }); ```
Greasemonkey: Change text in a webpage?
[ "", "javascript", "greasemonkey", "" ]
Using the .Net Framework 1.1, what options are available for monitoring threads from other threads? I am attempting to deal with a shortcoming of the threading implementation in 1.1 wherein unhandled exceptions will cause threads to die silently. In 2.0 and later this has been corrected so that any unhandled exception on any thread will cause the entire application to die, I believe. When threads in my application die, I would like to retrieve as much context as possible from the main thread: the method they were executing, a stack trace, etc. I can determine when they die but retrieving context has proven difficult. I have tried registering a handler for the `AppDomain.CurrentDomain.UnhandledException` event. But I am not getting called back when events occur. This is probably due to a limitation with the API I'm developing for. Besides this event, what options are available to get context from threads as they die on another thread?
How are you creating these threads? Are you adding a delegate to the thread pool? If so, you could create a wrapper method that takes the delegate provided and wraps another delegate around it which takes care of your try / catch, and then adds that new delegate to the thread pool queue. That way you could put your error-handling code in that second delegate.
You could try adding a thread exception handler: the `System.Threading.ThreadExceptionEventArgs e` parameter will contain the information about the unhandled exception. ``` // Setup Default Thread Exception Handler Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(Application_ThreadException); static void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e) { // Insert Code } ``` Source: <http://msdn.microsoft.com/en-us/library/system.windows.forms.application.threadexception.aspx>
C# 1.1: Monitoring worker threads
[ "", "c#", ".net", "multithreading", "" ]
Should you declare the getters/setters of the class inside the .h file and then define them in .cpp Or do both in .h file. Which style do you prefer and why? I personally like the latter wherein all of them are in .h and only methods which have logic associated with it other than setters/getters in .cpp.
For me it depends on who's going to be using the .h file. If it's a file largely internal to a module, then I tend to put the tiny methods in the header. If it's a more external header file that presents a more fixed API, then I'll put everything in the .cpp files. In this case, I'll often use the [PIMPL Idiom](https://stackoverflow.com/questions/60570/usage-of-the-pimpl-idiom) for a full compilation firewall. The trade-offs I see with putting them in the headers are: * Less typing * Easier inlining for the compiler (although compilers can sometimes do inlining between multiple translation units now anyway.) * More compilation dependencies
I would say that header files should be about interface, not implementation. I'd put them in the .cpp.
Programming style of method declaration of get/set method variables in C++?
[ "", "c++", "coding-style", "" ]
I used the following query to find duplicates: ``` SELECT userID, COUNT(userID) AS NumOccurrences FROM userDepartments GROUP BY userID HAVING ( COUNT(userID) > 1 ) ``` I then tried adding an inner join so I could see the user names that match, which are stored in a different table. ``` SELECT userDepartments.userID, users.firstname, users.lastname, COUNT(userID) AS NumOccurrences FROM userDepartments INNER JOIN users ON userDepartments.userID = users.userID GROUP BY userID HAVING ( COUNT(userID) > 1 ) ``` But it gave me an error saying that users.firstname was not part of some aggregate function or something... Does anyone know how I can get the count, only show users with more than 1 department, and also get the first and last name out of the other table so I can get a list of users names who have more than one department assigned? EDIT: THIS IS THE QUERY THAT ENDED UP WORKING FOR ME... ``` SELECT firstname, lastname FROM tbl_users WHERE (userID IN (SELECT userID FROM tbl_usersDepts GROUP BY userID HAVING (COUNT(userID) > 1))) ```
I would rearrange the query a little bit.... ``` SELECT duplicates.NumOccurrences, duplicates.userID, users.firstname, users.lastname FROM ( SELECT userID, COUNT(userID) AS NumOccurrences FROM userDepartments GROUP BY userID HAVING COUNT(userID) > 1 ) duplicates INNER JOIN users ON duplicates.userID = users.userID ```
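The derived-table shape of this query can be exercised end-to-end in SQLite (used here purely as a stand-in for SQL Server; the sample data is made up, while the table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userID INTEGER, firstname TEXT, lastname TEXT)")
conn.execute("CREATE TABLE userDepartments (userID INTEGER, dept TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "Ann", "Smith"), (2, "Bob", "Jones")])
conn.executemany("INSERT INTO userDepartments VALUES (?, ?)",
                 [(1, "HR"), (1, "IT"), (2, "IT")])

# Only users assigned to more than one department survive the HAVING filter;
# the outer join then attaches first and last names.
rows = conn.execute("""
    SELECT d.NumOccurrences, d.userID, u.firstname, u.lastname
    FROM (SELECT userID, COUNT(userID) AS NumOccurrences
          FROM userDepartments
          GROUP BY userID
          HAVING COUNT(userID) > 1) AS d
    INNER JOIN users AS u ON d.userID = u.userID
""").fetchall()
print(rows)  # [(2, 1, 'Ann', 'Smith')]
```

Only user 1 (two departments) is returned; user 2 is filtered out by the `HAVING` clause before the join ever sees it.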
The SQL engine doesn't know that you only have one username per userid, so you have to group by firstname and lastname as well as by user id. ``` SELECT userDepartments.userID, users.firstname, users.lastname, COUNT(userID) AS NumOccurrences FROM userDepartments INNER JOIN users ON userDepartments.userID = users.userID GROUP BY userID, users.firstname, users.lastname HAVING ( COUNT(userID) > 1 ) ``` If you don't group by firstname and lastname, the engine doesn't know what it's supposed to do if it gets more than one value of firstname for a given userid. By telling it to group by all three values, it knows that if there is more than one row per userid, it should return all those rows. Even though this shouldn't happen, the engine isn't smart enough in this case to decide that on its own. You could also do it this way: ``` SELECT users.userId, users.firstname, users.lastname, departments.NumOccurrences FROM users INNER JOIN ( SELECT userId, count(userId) as NumOccurrences FROM userDepartments GROUP BY userID HAVING ( COUNT(userID) > 1 ) ) departments ON departments.userID = users.userID ```
Can anyone show me why my SQL query isn't working (see details)?
[ "", "sql", "count", "inner-join", "" ]
The code below compiles, but has different behavior for the char type than for the int types. In particular ``` cout << getIsTrue< isX<int8>::ikIsX >() << endl; cout << getIsTrue< isX<uint8>::ikIsX >() << endl; cout << getIsTrue< isX<char>::ikIsX >() << endl; ``` result in 3 instantiations of templates for three types: int8, uint8 and char. What gives? The same is not true for ints: int and uint32 which result in the same template instantiation, and signed int another. The reason seems to be that C++ sees char, signed char and unsigned char as three different types. Whereas int is the same as a signed int. Is this right or am I missing something? ``` #include <iostream> using namespace std; typedef signed char int8; typedef unsigned char uint8; typedef signed short int16; typedef unsigned short uint16; typedef signed int int32; typedef unsigned int uint32; typedef signed long long int64; typedef unsigned long long uint64; struct TrueType {}; struct FalseType {}; template <typename T> struct isX { typedef typename T::ikIsX ikIsX; }; // This int==int32 is ambiguous //template <> struct isX<int > { typedef FalseType ikIsX; }; // Fails template <> struct isX<int32 > { typedef FalseType ikIsX; }; template <> struct isX<uint32 > { typedef FalseType ikIsX; }; // Why isn't this ambiguous? char==int8 template <> struct isX<char > { typedef FalseType ikIsX; }; template <> struct isX<int8 > { typedef FalseType ikIsX; }; template <> struct isX<uint8 > { typedef FalseType ikIsX; }; template <typename T> bool getIsTrue(); template <> bool getIsTrue<TrueType>() { return true; } template <> bool getIsTrue<FalseType>() { return false; } int main(int, char **t ) { cout << sizeof(int8) << endl; // 1 cout << sizeof(uint8) << endl; // 1 cout << sizeof(char) << endl; // 1 cout << getIsTrue< isX<int8>::ikIsX >() << endl; cout << getIsTrue< isX<uint8>::ikIsX >() << endl; cout << getIsTrue< isX<char>::ikIsX >() << endl; cout << getIsTrue< isX<int32>::ikIsX >() << endl; cout << getIsTrue< isX<uint32>::ikIsX >() << endl; cout << getIsTrue< isX<int>::ikIsX >() << endl; } ``` I'm using g++ 4.something
Here is your answer from the standard: > 3.9.1 Fundamental types [basic.fundamental] > > Objects declared as characters (`char`) shall be large enough to store any member of the implementation's basic character set. If a character from this set is stored in a character object, the integral value of that character object is equal to the value of the single character literal form of that character. It is implementation-defined whether a `char` object can hold negative values. Characters can be explicitly declared `unsigned` or `signed`. ***Plain `char`, `signed char`, and `unsigned char` are three distinct types.*** A `char`, a `signed char`, and an `unsigned char` occupy the same amount of storage and have the same alignment requirements (*basic.types*); that is, they have the same object representation. For character types, all bits of the object representation participate in the value representation. For unsigned character types, all possible bit patterns of the value representation represent numbers. These requirements do not hold for other types. In any particular > implementation, a plain `char` object can take on either the same values as a `signed char` or an `unsigned char`; which one is implementation-defined.
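The value-level consequence of signed versus unsigned character types can be illustrated outside C++ as well. This small Python `struct` example (an illustration only, not part of the answer above) reinterprets the same byte both ways:

```python
import struct

raw = b"\xff"  # a single byte, all bits set
print(struct.unpack("b", raw)[0])  # read as signed char: -1
print(struct.unpack("B", raw)[0])  # read as unsigned char: 255
```

Same object representation, two different values, which is exactly why a plain `char` whose signedness is implementation-defined is a distinct type from both `signed char` and `unsigned char`.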
While most integral types like `short` and `int` default to being `signed`, `char` does not have a default signedness in C++. It is neither the type `signed char` nor `unsigned char`, so implementations may decide whether it is signed. It's a common mistake that C++ programmers run into when they use `char` as an 8-bit integer type.
char!=(signed char), char!=(unsigned char)
[ "", "c++", "char", "" ]
How would I refactor this to get it to return a string not a dataset? ``` [WebMethod] public DataSet GetPONumber(string Database) { SqlConnection sqlConn = new SqlConnection(); sqlConn.ConnectionString = GetConnString(Database); // build query string strSQL = @" A SELECT QUERY!!!!! "; SqlDataAdapter da = new SqlDataAdapter(strSQL, sqlConn); DataSet ds = new DataSet(); da.Fill(ds, "NEWPO"); return (ds); } ```
This is what I finished with and it is working, thank you for your input: ``` [WebMethod] public String GetPONumber(string Database) { //Create Object ready for Value object po = ""; //Set Connection SqlConnection Connection = new SqlConnection(GetConnString(Database)); //Open Connection Connection.Open(); //Set Query to string string Query = @" SQL QUERY GOES HERE!!!! "; //Run Query SqlCommand Command = new SqlCommand(Query, Connection); //Set Value from Query try { po = Command.ExecuteScalar(); } catch { //Error } //Clean up sql Command.Dispose(); Command = null; //Clean up connection Connection.Close(); Connection.Dispose(); Connection = null; //Return Value return po.ToString(); } ```
``` //Use an SqlCommand and the ExecuteScalar method. //Cast returnValue to known object. SqlCommand command = sqlConn.CreateCommand(); command.CommandType = CommandType.Text; command.CommandText = @" A SELECT QUERY!!!!! "; sqlConn.Open(); object returnValue = command.ExecuteScalar(); command.Dispose(); return returnValue.ToString(); ```
Return String from a webmethod instead of a dataset.
[ "", "c#", ".net", "web-services", "" ]
In your opinion, is it better to return a newly allocated memory stream from a function, or pass it into the function? For instance, ``` void Foo(MemoryStream m) { m.Write(somebuffer, 0, somebuffer.Length); } ``` or ``` MemoryStream Foo() { MemoryStream retval = new MemoryStream(); retval.Write(somebuffer, 0, somebuffer.Length); return retval; } ```
This is a bit like asking whether you should return a string from a method or take a StringBuilder and append to it. The answer depends on what the use case is. Is it likely that the caller will want to call your method with an existing stream containing some data? Might they want to call it several times using the same stream? If so, the version taking the MemoryStream would be more efficient. On the other hand, if they only want the data once, returning it as a MemoryStream (or, more simply, as a byte array) may well be more appropriate. Unfortunately from the description we can't really tell what's going on. Of course, you could implement both as overloads and call one from the other.
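The trade-off can be sketched outside C# as well. Here is a minimal Python analogue using `io.BytesIO`, with the caveat that it only illustrates the ownership pattern, not the .NET API: a caller-supplied stream can accumulate data across several calls, while a returned stream hands the caller a fresh object each time.

```python
import io

def fill(stream: io.BytesIO) -> None:
    # Caller supplies the stream; repeated calls append to it.
    stream.write(b"data")

def make() -> io.BytesIO:
    # Function owns creation; caller receives a fresh, rewound stream.
    s = io.BytesIO()
    s.write(b"data")
    s.seek(0)
    return s

buf = io.BytesIO()
fill(buf)
fill(buf)                # two calls accumulate into one buffer
print(buf.getvalue())    # b'datadata'
print(make().read())     # b'data'
```

As the answer says, which shape is right depends on whether the caller wants to reuse an existing stream or just consume the result once.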
**Passing** a memory stream into a function and **returning** a memory stream for a function shouldn't be used interchangeably. The methods you describe serve two different purposes. * Passing something to a function is for when you want the function to do something with the parameter. * Returning something from a function is when the caller is supposed to do something with the result. You are talking about two different things, apples and oranges.
Returning memory stream from function
[ "", "c#", "" ]
How do I simply get `GET` and `POST` values with JQuery? What I want to do is something like this: ``` $('#container-1 > ul').tabs().tabs('select', $_GET('selectedTabIndex')); ```
For GET parameters, you can grab them from `document.location.search`: ``` var $_GET = {}; document.location.search.replace(/\??(?:([^=]+)=([^&]*)&?)/g, function () { function decode(s) { return decodeURIComponent(s.split("+").join(" ")); } $_GET[decode(arguments[1])] = decode(arguments[2]); }); document.write($_GET["test"]); ``` For POST parameters, you can serialize the `$_POST` object in JSON format into a `<script>` tag: ``` <script type="text/javascript"> var $_POST = <?php echo json_encode($_POST); ?>; document.write($_POST["test"]); </script> ``` While you're at it (doing things on server side), you might collect the GET parameters on PHP as well: ``` var $_GET = <?php echo json_encode($_GET); ?>; ``` **Note:** You'll need PHP version 5 or higher to use the built-in `json_encode` function. --- **Update:** Here's a more generic implementation: ``` function getQueryParams(qs) { qs = qs.split("+").join(" "); var params = {}, tokens, re = /[?&]?([^=]+)=([^&]*)/g; while (tokens = re.exec(qs)) { params[decodeURIComponent(tokens[1])] = decodeURIComponent(tokens[2]); } return params; } var $_GET = getQueryParams(document.location.search); ```
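As a cross-check on the parsing rules the snippets above implement (`+` becomes a space, percent-escapes are decoded, repeated keys accumulate), the same logic exists in Python's standard library; this is an illustration only, unrelated to jQuery or PHP:

```python
from urllib.parse import parse_qs

qs = "?test=hello+world&x=1&x=2"
params = parse_qs(qs.lstrip("?"))
print(params["test"][0])  # 'hello world'
print(params["x"])        # ['1', '2'] — repeated keys collect into a list
```

The list-per-key shape is worth noting: the hand-rolled JavaScript versions above keep only the last value for a repeated key.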
There's a plugin for jQuery to get GET params called [.getUrlParams](http://www.mathias-bank.de/2007/04/21/jquery-plugin-geturlparam-version-2/) For POST the only solution is echoing the POST into a javascript variable using PHP, like Moran suggested.
how to get GET and POST variables with JQuery?
[ "", "javascript", "jquery", "" ]
Put the following into a file **hello.py** (and `easy_install paramiko` if you haven't got it): ``` hostname,username,password='fill','these','in' import paramiko c = paramiko.SSHClient() c.set_missing_host_key_policy(paramiko.AutoAddPolicy()) c.connect(hostname=hostname, username=username, password=password) i,o,e = c.exec_command('ls /') print(o.read()) c.close() ``` Fill in the first line appropriately. Now type ``` python hello.py ``` and you'll see some ls output. Now instead type ``` python ``` and then from within the interpreter type ``` import hello ``` and voila! It hangs! It will unhang if you wrap the code in a function `foo` and do `import hello; hello.foo()` instead. Why does Paramiko hang when used within module initialization? **How is Paramiko even aware that it's being used during module initialization in the first place?**
Paramiko uses separate threads for the underlying transport. You should *never* have a module that spawns a thread as a side effect of importing. As I understand it, there is a single import lock available, so when a child thread from your module attempts another import, it can block indefinitely, because your main thread still holds the lock. (There are probably other gotchas that I'm not aware of too) In general, modules shouldn't have side effects of any sort when importing, or you're going to get unpredictable results. Just hold off execution with the `__name__ == '__main__'` trick, and you'll be fine. [EDIT] I can't seem to create a simple test case that reproduces this deadlock. I still assume it's a threading issue with import, because the auth code is waiting for an event that never fires. This may be a bug in paramiko, or python, but the good news is that you shouldn't ever see it if you do things correctly ;) This is a good example why you always want to minimize side effects, and why functional programming techniques are becoming more prevalent.
As [JimB](https://stackoverflow.com/a/450895/1729555) pointed out it is an **import issue** when python tries to implicitly import the `str.decode('utf-8')` decoder on first use during an ssh connection attempt. See *Analysis* section for details. In general, one cannot stress enough that you should avoid having a module automatically spawning new threads on import. If you can, try to avoid magic module code in general as it almost always leads to unwanted side-effects. 1. The easy - and sane - fix for your problem - as already mentioned - is to put your code in a `if __name__ == '__main__':` body which will only be executed if you execute this specific module and won't be executed when this module is imported by other modules. 2. (not recommended) Another fix is to just do a dummy str.decode('utf-8') in your code before you call `SSHClient.connect()` - see analysis below. So what's the root cause of this problem? **Analysis (simple password auth)** *Hint: If you want to debug threading in python, import `threading` and set `threading._VERBOSE = True`* 1. `paramiko.SSHClient().connect(.., look_for_keys=False, ..)` implicitly spawns a new thread for your connection. You can also see this if you turn on debug output for `paramiko.transport`. `[Thread-5 ] [paramiko.transport ] DEBUG : starting thread (client mode): 0x317f1d0L` 2. this is basically done as part of `SSHClient.connect()`. When `client.py:324::start_client()` is called, a lock is created `transport.py:399::event=threading.Event()` and the thread is started `transport.py:400::self.start()`. Note that the `start()` method will then execute the class's `transport.py:1565::run()` method. 3. `transport.py:1580::self._log(..)` prints our log message "starting thread" and then proceeds to `transport.py:1584::self._check_banner()`. 4. `check_banner` does one thing. It retrieves the ssh banner (first response from server) `transport.py:1707::self.packetizer.readline(timeout)` (note that the timeout is just a socket read timeout), checks for a linefeed at the end and otherwise times out. 5. In case a server banner was received, it attempts to utf-8 decode the response string `packet.py:287::return u(buf)` and that's where the deadlock happens. The `u(s, encoding='utf-8')` does a str.decode('utf-8') and implicitly imports `encodings.utf8` in `encodings:99` via `encodings.search_function` ending up in an import deadlock. So a dirty fix would be to just import the utf-8 decoder once in order to not block on that specific import due to module import side effects. (`''.decode('utf-8')`) **Fix** **dirty fix** - *not recommended* ``` import paramiko hostname,username,password='fill','these','in' ''.decode('utf-8') # dirty fix c = paramiko.SSHClient() c.set_missing_host_key_policy(paramiko.AutoAddPolicy()) c.connect(hostname=hostname, username=username, password=password) i,o,e = c.exec_command('ls /') print(o.read()) c.close() ``` **good fix** ``` import paramiko if __name__ == '__main__': hostname,username,password='fill','these','in' c = paramiko.SSHClient() c.set_missing_host_key_policy(paramiko.AutoAddPolicy()) c.connect(hostname=hostname, username=username, password=password) i,o,e = c.exec_command('ls /') print(o.read()) c.close() ``` ref [paramiko issue tracker: issue 104](https://github.com/paramiko/paramiko/issues/104)
Why does Paramiko hang if you use it while loading a module?
[ "", "python", "multithreading", "ssh", "module", "paramiko", "" ]
I am working on a product that runs an SQL server which allows some applications to login and their logins are granted permission to run a stored procedure- AND NOTHING ELSE. The stored procedure is owned by an admin; the stored procedure takes a query and executes it, then the results are returned to the application. Unfortunately I can't figure out why the application can call the stored procedure to which it's granted access, but the stored procedure cannot execute the SQL statement which was passed into it. The stored procedure executes the passed in query when I'm logged in as an admin, but when I log in as the limited user it throws an exception in the execute statement. For example: ``` EXEC [Admin].[STORED_PROC] @SQL_STATEMENT = 'SELECT * FROM table_x' ``` the STORED\_PROC looks something like this: ``` BEGIN TRY EXEC (@SQL_STATEMENT) END TRY BEGIN CATCH -- some logging when an exception is caught, and the exception is caught here!!! END CATCH ``` There is nothing inside the the try catch statement except that EXEC... and the SQL\_STATEMENT works when I'm logged in as the Admin, but not when I'm logged in as the User. Can anybody help me figure out what permissions I need to set in order to allow the User to run queries through the stored proc only? --- So there have been some comments about allowing raw SQL statements to be executed via stored proc defeats the purpose of using a stored proc... but in reality what we're actually doing is we're passing an encrypted SQL statement into the stored proc and the stored proc gets the statement decrypted and THEN it executes it. So yes, in reality raw SQL statements are not secure and they defeat the purpose of stored procs, but I don't know how to encrypt SQL queries that are passed through ODBC and run against a pre-2005 SQL Server. In any case, I tried to put up some minimal safeguards to at least have some basic security.
Since you are using dynamic sql, SQL server can't tell which tables you are using, so you have to grant SELECT rights to all the tables as well
Users also need to have SELECT grant on the tables
User Granted Access to Stored Procedure but Can't Run Query
[ "", "sql", "sql-server", "stored-procedures", "database-permissions", "sql-grant", "" ]
I have a datatable which returns around 30,000 records. I am displaying these records in an ASP:GridView control. I have a dropdown control in which I want the user to be able to select the number of records to display in the GridView. The default should be All, but it could also have values such as 20, 50, 100 for example. I am not quite sure how to do this. What if I don't have Paging turned on. Will PageSize still work? I hardcoded GridView.PageSize = 1 and it still returned all the records.
For PageSize to work, you **have** to set the GridView's AllowPaging property to true. You also have to create a method to handle the PageIndexChanging event, like this: ``` protected void myGvw_OnPageIndexChanging(object sender, GridViewPageEventArgs e) { myGvw.PageIndex = e.NewPageIndex; // then recall the method that binds myGvw } ``` The DropDown should have its AutoPostBack property set to true, and in its OnSelectedIndexChanged handler you must set ``` myGvw.PageSize = Convert.ToInt32(myDropDown.SelectedValue); ```
Paging should always be dealt with as early as possible in the data selection. You don't want to retrieve 30,000 records from the DB to the app server to then only show 50 of them. Something like: (for selecting page 3, 50 per page) ``` select top 50 * from x where pk not in (select top 100 pk from x) ``` Which translates as: ``` CREATE PROCEDURE sspGetItems (@pageSize int, @pageNum int) AS SELECT TOP (@pageSize) * FROM x WHERE pk NOT IN ( SELECT TOP ((@pageNum - 1) * @pageSize) pk FROM x ) ``` Obviously more care will need to be taken around sorting etc., since TOP without an ORDER BY gives no guaranteed row order.
Show number of results in gridview based on dropdown?
[ "", "c#", "asp.net", "gridview", "datatable", "" ]
I have a program which only needs a NotifyIcon to work as intended. So I've been trying to get the main form to hide when the program starts. In frmMain\_Load, I tried both ``` this.Hide(); this.Visible = false; ``` without success. They work in other methods, like in the NotifyIcon\_MouseClick-method, but I want it to hide at Load. I saw in another question here at SO where Matias suggested this: ``` BeginInvoke(new MethodInvoker(delegate { Hide(); })); ``` This works, but when I launch the program I can see the form flashing real fast. It's better than nothing, but I wonder if there is any better solution to this. Thanks.
``` // In Your Program.cs Convert This static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } // To This static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Form1 TheForm = new Form1(); Application.Run(); } // Call Application.Exit() From Anywhere To Stop Application.Run() Message Pump and Exit Application ```
There is an easy way, if your program has the default Visual Studio generated Program.cs file: ``` [STAThread] static void Main() { Application.EnableVisualStyles (); Application.SetCompatibleTextRenderingDefault (false); Application.Run (new MainForm ()); } ``` the simple act of calling `Run` will, indeed, make the form visible. Try doing the following in the properties of your form: 1. Set `WindowState` to `Minimized` 2. Set `ShowInTaskbar` to `false` This should do the trick!
Hide form at launch
[ "", "c#", "winforms", "forms", "hide", "" ]
What I want to do is: ``` bool Convert( out Object output, Object source) { // find type of output. // convert source to that type if possible // store result in output. return success } ``` Is it possible? Obviously, there is a brute force massive "if" construct that could work, but that would require writing an if block for every conceivable data type. Even assuming we'll limit it to primitives and strings, it's still a huge chunk of code. I'm thinking about something a bit more reflective. Aside: While going through the API, I came across the Convert.IsDBNull() method, which will save me a lot of ``` if ( !DBNull.Value.Equals( databasefield ) ) ``` Why in the name of G-d is it in Convert? Why not DBNull.IsDBNull() ?
There is no holy grail of conversion. For m types you would need m\*(m-1) conversion routines to cover all the permutations. For primitive types, use Convert.ChangeType. If a type is convertible to/from a primitive, it can implement the IConvertible interface and be used from the Convert class. For everything else, @Brian Rudolf's answer is the best: register an explicit conversion method for each permutation you require.
Here is a sample that I use; you can inject other complex conversions into it by registering other type converters. ``` // requires: using System.ComponentModel; public static class Converter { public static T Convert<T>(object obj, T defaultValue) { if (obj != null) { if (obj is T) { return (T)obj; } TypeConverter converter = TypeDescriptor.GetConverter(typeof(T)); if (converter.CanConvertFrom(obj.GetType())) { return (T)converter.ConvertFrom(obj); } } return defaultValue; } } ```
How do I make a universal type conversion method
[ "", "c#", ".net", "" ]
Is there a difference between single and double quotes in Java?
Use single quotes for literal `char`s, double quotes for literal `String`s, like so: ``` char c = 'a'; String s = "hello"; ``` They cannot be used any other way around (like in Python, for example).
A char is a single UTF-16 code unit, that is a letter, a digit, a punctuation mark, a tab, a space or something similar. A char literal is either a single character enclosed in single quote marks like this ``` char myCharacter = 'g'; ``` or an escape sequence, or even a unicode escape sequence: ``` char a = '\t'; // Escape sequence: tab char b = '\177'; // Escape sequence, octal. char c = '\u03a9'; // Unicode escape sequence. ``` It is worth noting that Unicode escape sequences are processed very early during compilation, and hence using '\u000A' will lead to a compiler error: the escape becomes a real line terminator before the char literal is parsed. For special symbols it is better to use escape sequences instead, i.e. '\n' instead of '\u000A'. Double quotes being for `String`, you have to use a "double quote escape sequence" (`\"`) inside strings where it would otherwise terminate the string. For instance: ``` System.out.println("And then Jim said, \"Who's at the door?\""); ``` It isn't necessary to escape the double quote inside single quotes. The following line is legal in Java: ``` char doublequote = '"'; ```
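A small runnable sketch of the rules above (char literals in single quotes, String literals with an escaped double quote in double quotes); the class and method names here are illustrative only:

```java
public class QuoteDemo {
    // A char literal uses single quotes; a unicode escape works inside it.
    static char omega() { return '\u03a9'; }

    // A String literal uses double quotes; an inner double quote must be escaped.
    static String quoted() { return "And then Jim said, \"Who's at the door?\""; }

    public static void main(String[] args) {
        char doublequote = '"'; // no escape needed inside single quotes
        System.out.println(omega());
        System.out.println(quoted());
        System.out.println(doublequote);
    }
}
```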
Is there a difference between single and double quotes in Java?
[ "", "java", "syntax", "" ]
Is there an easy way in C# to read a properties file that has each property on a separate line followed by an equals sign and the value, such as the following: ``` ServerName=prod-srv1 Port=8888 CustomProperty=Any value ``` In Java, the Properties class handles this parsing easily: ``` Properties myProperties=new Properties(); FileInputStream fis = new FileInputStream (new File("CustomProps.properties")); myProperties.load(fis); System.out.println(myProperties.getProperty("ServerName")); System.out.println(myProperties.getProperty("CustomProperty")); ``` I can easily load the file in C# and parse each line, but is there a built in way to easily get a property without having to parse out the key name and equals sign myself? The C# information I have found seems to always favor XML, but this is an existing file that I don't control and I would prefer to keep it in the existing format as it will require more time to get another team to change it to XML than parsing the existing file.
No there is no built-in support for this. You have to make your own "INIFileReader". Maybe something like this? ``` var data = new Dictionary<string, string>(); foreach (var row in File.ReadAllLines(PATH_TO_FILE)) data.Add(row.Split('=')[0], string.Join("=",row.Split('=').Skip(1).ToArray())); Console.WriteLine(data["ServerName"]); ``` Edit: Updated to reflect Paul's comment.
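For comparison, the Java `Properties` behavior being emulated can be exercised on its own; this sketch (class name `PropsDemo` is illustrative) loads the same `key=value` lines from an in-memory reader instead of a file:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropsDemo {
    // Parses key=value lines the same way Properties.load(FileInputStream) would.
    static Properties load(String text) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(text));
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for an in-memory reader
        }
        return p;
    }

    public static void main(String[] args) {
        Properties p = load("ServerName=prod-srv1\nPort=8888\nCustomProperty=Any value\n");
        System.out.println(p.getProperty("ServerName"));
        System.out.println(p.getProperty("CustomProperty"));
    }
}
```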
Final class. Thanks [@eXXL](https://stackoverflow.com/questions/485659/can-net-load-and-parse-a-properties-file-equivalent-to-java-properties-class/862690#862690). ``` public class Properties { private Dictionary<String, String> list; private String filename; public Properties(String file) { reload(file); } public String get(String field, String defValue) { return (get(field) == null) ? (defValue) : (get(field)); } public String get(String field) { return (list.ContainsKey(field))?(list[field]):(null); } public void set(String field, Object value) { if (!list.ContainsKey(field)) list.Add(field, value.ToString()); else list[field] = value.ToString(); } public void Save() { Save(this.filename); } public void Save(String filename) { this.filename = filename; // StreamWriter creates the file if missing; calling File.Create first would leave an open handle and make the writer throw System.IO.StreamWriter file = new System.IO.StreamWriter(filename); foreach(String prop in list.Keys.ToArray()) if (!String.IsNullOrWhiteSpace(list[prop])) file.WriteLine(prop + "=" + list[prop]); file.Close(); } public void reload() { reload(this.filename); } public void reload(String filename) { this.filename = filename; list = new Dictionary<String, String>(); if (System.IO.File.Exists(filename)) loadFromFile(filename); else System.IO.File.Create(filename).Close(); } private void loadFromFile(String file) { foreach (String line in System.IO.File.ReadAllLines(file)) { if ((!String.IsNullOrEmpty(line)) && (!line.StartsWith(";")) && (!line.StartsWith("#")) && (!line.StartsWith("'")) && (line.Contains('='))) { int index = line.IndexOf('='); String key = line.Substring(0, index).Trim(); String value = line.Substring(index + 1).Trim(); if ((value.StartsWith("\"") && value.EndsWith("\"")) || (value.StartsWith("'") && value.EndsWith("'"))) { value = value.Substring(1, value.Length - 2); } try { //ignore duplicates list.Add(key, value); } catch { } } } } } ``` Sample use: ``` //load Properties config = new Properties(fileConfig); //get value with default value com_port.Text = config.get("com_port", "1"); //set value config.set("com_port", com_port.Text); //save config.Save(); ```
Can .NET load and parse a properties file equivalent to Java Properties class?
[ "", "c#", "configuration", "file-io", "load", "" ]
Are there any C++ networking libs that are very useful and robust? And libs that help them run better? Something like automatic endian conversion when using <<, blocking reads until the struct (or whatever you're reading) completely transfers, something to help debug your protocol, etc.
Have you had a look at [Boost.Asio](http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio.html)? It's a networking library supporting both asynchronous and synchronous operation. I've made some experiments with it in the past, and found it quite useful.
I like the [ADAPTIVE Communication Environment](http://www.cs.wustl.edu/~schmidt/ACE.html). It has built-in constructs for just about all the networking patterns. I particularly like ACE\_Task. It makes message passing **SO** much easier.
networking lib + helper (c++)
[ "", "c++", "networking", "" ]
I am trying to insert a .csv file into a database with unix linebreaks. The command I am running is: ``` BULK INSERT table_name FROM 'C:\file.csv' WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ) ``` If I convert the file into Windows format the load works, but I don't want to do this extra step if it can be avoided. Any ideas?
I felt compelled to contribute as I was having the same issue, and I need to read 2 UNIX files from SAP at least a couple of times a day. Therefore, instead of using unix2dos, I needed something with less manual intervention and more automatic via programming. As noted, the Char(10) works within the sql string. I didn't want to use an sql string, and so I used ''''+Char(10)+'''', but for some reason, this didn't compile. What did work very slick was: with (ROWTERMINATOR = '0x0a') Problem solved with Hex!
Thanks to all who have answered but I found my preferred solution. When you tell SQL Server ROWTERMINATOR='\n' it interprets this as meaning the default row terminator under Windows which is actually "\r\n" (using C/C++ notation). If your row terminator is really just "\n" you will have to use the dynamic SQL shown below. ``` DECLARE @bulk_cmd varchar(1000) SET @bulk_cmd = 'BULK INSERT table_name FROM ''C:\file.csv'' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = '''+CHAR(10)+''')' EXEC (@bulk_cmd) ``` Why you can't say BULK INSERT ...(ROWTERMINATOR = CHAR(10)) is beyond me. It doesn't look like you can evaluate any expressions in the WITH section of the command. What the above does is create a string of the command and execute that. Neatly sidestepping the need to create an additional file or go through extra steps.
Bulk insert, SQL Server 2000, unix linebreaks
[ "", "sql", "sql-server", "bulkinsert", "" ]
I have the following string (Japanese) " ユーザー名", where the first character is "like" whitespace but its Unicode code point is 12288 (U+3000), so if I do " ユーザー名".trim() I get the same string back (trim doesn't work). If I do trim in C++ it works OK. Does anyone know how to solve this issue in Java? Is there a special trim method for Unicode?
As an alternative to the `StringUtils` class mentioned by Mike, you can also use a Unicode-aware regular expression, using only Java's own libraries: ``` " ユーザー名".replaceAll("\\p{Z}", "") ``` Or, to really only trim, and not remove whitespace inside the string: ``` " ユーザ ー名 ".replaceAll("(^\\p{Z}+|\\p{Z}+$)", "") ```
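A runnable check of the regex trim above, using the ideographic space U+3000 (decimal 12288, the character from the question), which plain `trim()` leaves alone; the class name is illustrative:

```java
public class UnicodeTrim {
    // String.trim() only strips code points <= U+0020, so U+3000 survives it.
    // \p{Z} matches Unicode separator characters, including the ideographic space.
    static String trimUnicode(String s) {
        return s.replaceAll("(^\\p{Z}+|\\p{Z}+$)", "");
    }

    public static void main(String[] args) {
        String s = "\u3000abc\u3000";
        System.out.println(s.trim().length());        // still 5
        System.out.println(trimUnicode(s).length());  // 3
    }
}
```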
Have a look at [Unicode Normalization](http://www.unicode.org/charts/normalization/index.html) and the [Normalizer](http://java.sun.com/javase/6/docs/api/java/text/Normalizer.html) class. The class is new in Java 6, but you'll find an equivalent version in the [ICU4J](http://icu-project.org/) library if you're on an earlier JRE. ``` int character = 12288; char[] ch = Character.toChars(character); String input = new String(ch); String normalized = Normalizer.normalize(input, Normalizer.Form.NFKC); System.out.println("Hex value:\t" + Integer.toHexString(character)); System.out.println("Trimmed length :\t" + input.trim().length()); System.out.println("Normalized trimmed length:\t" + normalized.trim().length()); ```
Problem trimming Japanese string in java
[ "", "java", "string", "nlp", "" ]
I want to distribute a Java Web Start application which uses RMI. I want to avoid asking the user for the address of the RMI server, because the website that distributes the application is also the RMI server. Furthermore, I don't know the address of the server at build time, so the address must either be added to the JNLP with ad hoc configuration, or be provided at execution time. The latter is the preferred way: is it possible to do so?
I haven't used Java RMI exactly (I've only used Java Web Start with [Hessian](http://hessian.caucho.com/) binary protocol for doing something like RMI), but at least the part of passing the server address to the Web Start client app should be easy. When you generate the JNLP file in your application, add the address as a property: ``` <jnlp> [...] <resources> [...] <property name="serverAddress" value="..." /> </resources> </jnlp> ``` Then, in client code read that property: ``` String serverAddress = System.getProperty("serverAddress"); ``` I assume here the website that distributes the application knows its own address :) **Edit** (with the additional limitation of not knowing address at build time): Hmm, Is the website distributing the app a dynamic or static one? 1. dynamic: either generate whole JNLP dynamically (with a JSP page or dom4j or whatever), or read the "template" JNLP XML file and replace the real server address in place 2. static: I guess the person who deploys the site has to manually configure the right address in the JNLP file?
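On the client side, `System.getProperty` also accepts a default value, which is handy when the app is launched outside Web Start and the JNLP `<property>` was never injected; a minimal sketch, assuming the property name `serverAddress` from the snippet above:

```java
public class ServerAddress {
    // Falls back to localhost when the JNLP-provided property is absent.
    static String serverAddress() {
        return System.getProperty("serverAddress", "localhost");
    }

    public static void main(String[] args) {
        // Prints "localhost" unless the JNLP (or a -DserverAddress=... flag) set the property.
        System.out.println(serverAddress());
    }
}
```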
The JDK includes source code (in the samples/jnlp/servlet directory) for a servlet that inserts appropriate values into the JNLP file. You can use this to insert the host name into the JNLP file. * [JnlpDownloadServlet Guide](http://java.sun.com/javase/6/docs/technotes/guides/javaws/developersguide/downloadservletguide.html)
How to mix java webstart with RMI?
[ "", "java", "rmi", "java-web-start", "" ]
When `foreach`ing through a generic list I often want to do something different for the first element in the list: ``` List<object> objs = new List<object> { new Object(), new Object(), new Object(), new Object() }; foreach (object o in objs) { if (o == objs.First()) { System.Diagnostics.Debug.WriteLine("First object - do something special"); } else { System.Diagnostics.Debug.WriteLine("object Do something else"); } } ``` This will output: ``` First object - do something special object Do something else object Do something else object Do something else ``` This is all fine and dandy. However if my generic list is of a value type, this approach will fail. ``` List<int> ints = new List<int> { 0, 0, 0, 0 }; foreach (int i in ints) { if (i == ints.First()) { System.Diagnostics.Debug.WriteLine("First int - do something special"); } else { System.Diagnostics.Debug.WriteLine("int Do something else"); } } ``` This will output: ``` First int - do something special First int - do something special First int - do something special First int - do something special ``` Now I know I could recode this to add a `boolean` flag variable or traditional `for` loop, but I am wondering if there's any way to find out if a foreach loop is on the first iteration of its looping.
Well, you could code it using explicit iteration: ``` using(var iter = ints.GetEnumerator()) { if(iter.MoveNext()) { // do "first" with iter.Current while(iter.MoveNext()) { // do something with the rest of the data with iter.Current } } } ``` The bool flag option (with `foreach`) is probably easier though... that is what I (almost) always do! Another option would be LINQ: ``` if(ints.Any()) { var first = ints.First(); // do something with first } foreach(var item in ints.Skip(1)) { // do something with the rest of them } ``` The downside of the above is that it tries to look at the list 3 times... since we know it is a list, that is fine - but if all we had was an `IEnumerable<T>`, it would only be sensible to iterate it once (since the source might not be re-readable).
A while ago I wrote [SmartEnumerable](http://www.yoda.arachsys.com/csharp/miscutil/usage/smartenumerable.html) (part of MiscUtil) which lets you know if the current element is the first or last, as well as its index. That may help you... it's part of MiscUtil, which is open source - you can take just SmartEnumerable under the same licence, of course. Sample code (c'n'p from the web page): ``` using System; using System.Collections.Generic; using MiscUtil.Collections; class Example { static void Main(string[] args) { List<string> list = new List<string>(); list.Add("a"); list.Add("b"); list.Add("c"); list.Add("d"); list.Add("e"); foreach (SmartEnumerable<string>.Entry entry in new SmartEnumerable<string>(list)) { Console.WriteLine ("{0,-7} {1} ({2}) {3}", entry.IsLast ? "Last ->" : "", entry.Value, entry.Index, entry.IsFirst ? "<- First" : ""); } } } ``` EDIT: Note that while it works with reference types with distinct references, it'll still fail if you give it a list where the first reference crops up elsewhere in the list.
foreach with generic List, detecting first iteration when using value type
[ "", "c#", ".net", "" ]
I'm stuck at not being able to map a texture to a square in OpenGL ES. I'm trying to display a jpg image on the screen, and in order to do that, I draw a square that I want to then map the image onto. However all I get as output is a white square. I don't know what I am doing wrong, and this problem is preventing me from moving forward with my project. I'm using [Managed OpenGL ES wrapper](http://www.koushikdutta.com/search/label/Managed%20OpenGL%20ES) for Windows Mobile. I verified that the texture is loading correctly, but I can't apply it to my object. I uploaded a sample project that shows my problem [here](http://cid-af14228f1b5d4868.skydrive.live.com/self.aspx/.Public/TestTexture.zip). You would need VS2008 with the Windows Mobile 6 SDK to be able to run it. I'm also posting the code of the Form that renders and textures an object here. Any suggestions would be much appreciated, since I've been stuck on this problem for a while and can't figure out what I am doing wrong. ``` public partial class Form1 : Form { [DllImport("coredll")] extern static IntPtr GetDC(IntPtr hwnd); EGLDisplay myDisplay; EGLSurface mySurface; EGLContext myContext; public Form1() { InitializeComponent(); myDisplay = egl.GetDisplay(new EGLNativeDisplayType(this)); int major, minor; egl.Initialize(myDisplay, out major, out minor); EGLConfig[] configs = new EGLConfig[10]; int[] attribList = new int[] { egl.EGL_RED_SIZE, 5, egl.EGL_GREEN_SIZE, 6, egl.EGL_BLUE_SIZE, 5, egl.EGL_DEPTH_SIZE, 16 , egl.EGL_SURFACE_TYPE, egl.EGL_WINDOW_BIT, egl.EGL_STENCIL_SIZE, egl.EGL_DONT_CARE, egl.EGL_NONE, egl.EGL_NONE }; int numConfig; if (!egl.ChooseConfig(myDisplay, attribList, configs, configs.Length, out numConfig) || numConfig < 1) throw new InvalidOperationException("Unable to choose config."); EGLConfig config = configs[0]; mySurface = egl.CreateWindowSurface(myDisplay, config, Handle, null); myContext = egl.CreateContext(myDisplay, config, EGLContext.None, null); egl.MakeCurrent(myDisplay, mySurface,
mySurface, myContext); gl.ClearColor(0, 0, 0, 0); InitGL(); } void InitGL() { gl.ShadeModel(gl.GL_SMOOTH); gl.ClearColor(0.0f, 0.0f, 0.0f, 0.5f); gl.BlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA); gl.Hint(gl.GL_PERSPECTIVE_CORRECTION_HINT, gl.GL_NICEST); } public unsafe void DrawGLScene() { gl.MatrixMode(gl.GL_PROJECTION); gl.LoadIdentity(); gl.Orthof(0, ClientSize.Width, ClientSize.Height, 0, 0, 1); gl.Disable(gl.GL_DEPTH_TEST); gl.MatrixMode(gl.GL_MODELVIEW); gl.LoadIdentity(); Texture myImage; Bitmap Image = new Bitmap(@"\Storage Card\Texture.jpg"); using (MemoryStream ms = new MemoryStream()) { Image.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp); myImage = Texture.LoadStream(ms, false); } float[] rectangle = new float[] { 0, 0, myImage.Width, 0, 0, myImage.Height, myImage.Width, myImage.Height }; float[] texturePosition = new float[] { 0, 0, myImage.Width, 0, 0, myImage.Height, myImage.Width, myImage.Height }; //Bind texture gl.BindTexture(gl.GL_TEXTURE_2D, myImage.Name); gl.TexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR); gl.TexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR); gl.EnableClientState(gl.GL_TEXTURE_COORD_ARRAY); gl.EnableClientState(gl.GL_VERTEX_ARRAY); //draw square and texture it. 
fixed (float* rectanglePointer = &rectangle[0], positionPointer = &texturePosition[0]) { gl.TexCoordPointer(2, gl.GL_FLOAT, 0, (IntPtr)positionPointer); gl.VertexPointer(2, gl.GL_FLOAT, 0, (IntPtr)rectanglePointer); gl.DrawArrays(gl.GL_TRIANGLE_STRIP, 0, 4); } gl.DisableClientState(gl.GL_TEXTURE_COORD_ARRAY); gl.DisableClientState(gl.GL_VERTEX_ARRAY); } protected override void OnPaintBackground(PaintEventArgs e) { } protected override void OnPaint(PaintEventArgs e) { base.OnPaint(e); gl.Clear(gl.GL_COLOR_BUFFER_BIT); DrawGLScene(); egl.SwapBuffers(myDisplay, mySurface); gl.Clear(gl.GL_COLOR_BUFFER_BIT); } protected override void OnClosing(CancelEventArgs e) { if (!egl.DestroySurface(myDisplay, mySurface)) throw new Exception("Error while destroying surface."); if (!egl.DestroyContext(myDisplay, myContext)) throw new Exception("Error while destroying context."); if (!egl.Terminate(myDisplay)) throw new Exception("Error while terminating display."); base.OnClosing(e); } } ```
You need to enable texturing: ``` glEnable( GL_TEXTURE_2D ); ``` before rendering the square.
If you work with OpenGL|ES also take a look if the glDrawTexImage-Extension is supported (well - it should, it's a core-extension and required, but you never know...) It won't help you with your problem directly (e.g. you have to enable texturing as well), but glDrawTexImage is a hell lot more efficient than polygon rendering. And it needs less code to write as well.
Problem applying texture to square in OpenGL
[ "", "c#", ".net", "windows-mobile", "opengl-es", "" ]
I have some neural net code written in c# that would benefit from using [SIMD](http://go-mono.com/docs/index.aspx?tlink=0@N%3aMono.Simd) support. [Mono 2.2](http://www.mono-project.com/news/archive/2009/Jan-13.html) just came out that supports SIMD but Microsoft's c# does not support this yet. Being happy with my c# setup I was wondering if I could write a lib in mono for that piece and call it from .net. Edit: I guess what I really want to know is it possible to compile [mono](http://www.mono-project.com/Main_Page) down to something like a DLL that I then can call from dotnet. I heard [Miguel de Icaza](http://en.wikipedia.org/wiki/Miguel_de_Icaza) on a [podcast](http://www.hanselminutes.com/default.aspx?showID=157) saying that for the iphone the mono compiler would allow them to compile down to an exe for moonlight so it did not violate the terms of service for iphone so it got me thinking what else can you compile to. I heard Miguel de Icaza on another pod cast [Herding Code Episode 28](http://herdingcode.com/?p=109) say that you could use the mono complier to compile to an exe not just to intermediate code. What are the implications of this? This got my curiosity up so I thought that I would throw a bounty at it.
Essentially, if you write it with Mono.Simd and distribute the DLL with your code, it will use acceleration if the target VM supports it; if not, it doesn't break. So you can use the library and give a speed boost to any users of your program who run it under Mono. Microsoft has been said to be planning to add such support in the next release of its runtime, though I cannot find the link and don't have it handy right now; I can dig it out of a historic backup if anyone is interested enough.
From [Miguel de Icaza's blog](http://tirania.org/blog/archive/2008/Nov-03.html): > Our library provides C# fallbacks for > all of the accelerated instructions. > This means that if your code runs on a > machine that does not provide any SIMD > support, or one of the operations that > you are using is not supported in your > machine, the code will continue to > work correctly. > > This also means that you can use the > Mono.Simd API with Microsoft's .NET on > Windows to prototype and develop your > code, and then run it at full speed > using Mono. As I understand it, this means that you can write code that uses Mono.Simd, and will be able to run it under .Net, but it won't be any faster than regular code, because the .Net runtime doesn't support SIMD yet.
Calling mono c# code from Microsoft .net?
[ "", "c#", "compiler-construction", "mono", "" ]
I've found myself writing ``` for(int i=0;i<myvec.size();i++) myvec[i]->DoWhatever(param); ``` a lot, and I'd like to compress this into a `foreach` statement, but I'm not sure how to get `param` in there without going super-verbose. I've also got things like ``` for(int i=0;i<myvec.size();i++) if(myvec[i]->IsOK()) myvec[i]->DoWhatever(param); ``` and I'd like to rewrite that guy too. Any thoughts? Oh, also, for various reasons, I don't want to use boost.
``` #include <vector> #include <algorithm> #include <functional> class X { public: void doWhat(int x) {} bool IsOK() const {return true;} }; class CallWhatIfOk { public: CallWhatIfOk(int p): param(p) {} void operator()(X& x) const { if (x.IsOK()) {x.doWhat(param);}} private: int param; }; int main() { std::vector<X> myVec; std::for_each( myVec.begin(), myVec.end(), std::bind2nd(std::mem_fun_ref(&X::doWhat),4) ); std::for_each( myVec.begin(), myVec.end(), CallWhatIfOk(4) ); } ```
> Oh, also, for various reasons, I don't want to use boost. Valid decision, but most likely the wrong one. Consider Boost as an extension to the STL. C++ is a library-driven language. If you don't take this into account, your code will be qualitatively inferior. While `std::for_each` can be used here, the absence of lambda expressions in C++ until C++0x makes this tedious. I advocate using [Boost.ForEach](http://www.boost.org/doc/libs/1_37_0/doc/html/foreach.html)! It makes this *much* easier: ``` foreach (yourtype x, yourvec) if (x.IsOK()) x.Whatever(); ```
How to use std::foreach with parameters/modification
[ "", "c++", "stl", "loops", "enumeration", "" ]
I'm trying to create a linq query based on some dynamic/optional arguments passed into a method. ``` User [Table] -> zero to many -> Vehicles [Table] User [Table] -> zero to many -> Pets ``` So we want all users (including any vechile and/or pet info). Optional filters are * Vehicle numberplate * Pet name Because the vehicle and pet tables are zero-to-many, i usually have outer joins between the user table and the vehicle|pet table. *To speed up the query, i was trying to create the dynamic linq and if we have an optional argument provided, redfine the outer join to an inner join*. (The context diagram will have the two tables linked as an outer join by default.) Can this be done? I'm also not sure if [this SO post](https://stackoverflow.com/questions/389094/how-to-create-a-dynamic-linq-join-extension-method) can help me, either.
I think you are heading in the wrong direction. You can easily use the fact that LINQ queries are composable here. First, you would always use the outer join, and get all users with the appropriate vehicles and pets: ``` // Get all the users. IQueryable<User> users = dbContext.Users; ``` Then you would add the filters if necessary: ``` // If a filter on the pet name is required, filter. if (!string.IsNullOrEmpty(petNameFilter)) { // Filter on pet name. users = users.Where(u => u.Pets.Where( p => p.Name == petNameFilter).Any()); } // Add a filter on the license plate number. if (!string.IsNullOrEmpty(licensePlateFilter)) { // Filter on the license plate. users = users.Where( u => u.Cars.Where(c => c.LicensePlate == licensePlateFilter).Any()); } ``` Note that this will **not** filter out the pets or cars that don't meet the filter, as it is simply looking for the users that have pets with that name, or cars with that plate.
If you are trying to change tables or joins of a LINQ to SQL query at runtime you need to do that with reflection. LINQ expressions are not special; same as working with any other object call - you can change the value of properties and variables at runtime, but choosing which properties to change or which methods to call requires reflecting. I would add to that by pointing out dynamically creating LINQ expressions via reflection is probably a little silly for most (all?) cases, since under the hood the expression is essentially reflected back into SQL statements. Might as well write the SQL yourself if you are doing it on-the-fly. The point of LINQ is to abstract the data source from the developer, not the end-user.
Trying to create some dynamic linq
[ "", "c#", "linq", "" ]
I have a pointer-to-pointer array. I am assigning each row in the while loop below, and the printf inside the while loop shows each is assigned my id number 1-20. Afterwards, outside of the while loop, I iterate through the array and every element prints with id 20. Any help is greatly appreciated. (FYI- I am using the Template2doc library and example1.c, at the bottom here- <http://www.algonet.se/~thunberg/template2doc/c_version/docs/index.htm>) The code below only shows the problem area; I took out the rest. ``` char **tableData[500]={NULL}; char *myData[500][2]; while(rc == SQLITE_ROW){ tableData[r] = myData[r]; printf(*tableData[r]); <-- Displays id 1-20 fine r=r+1; }//end while tableData[r+1] = NULL;//null terminated array for (a=0; a<r; a++){ printf("\n"); printf(*tableData[a]); <--Displays 20 elements all of id 20? } outputFile=insertTableData(outputFile, dataMarker, fieldMarker, tableData); ```
You should create something that actually compiles and reproduces the problem. Not only will it help people help you, but in doing so you may very well find the problem yourself. In your code excerpts we have no idea: * What rc is, how its value is set, or how its value is ever going to change and therefore terminate the loop * What the initial value of r is * What the actual contents of myData are I created this code based on what you posted, which produces the same output from both loops. So either I've missed something in what you did post, or you left something important out. ``` int main( int argc, char** argv ) { #define STRING char * STRING dummy = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()"; STRING *tableData[500]={0}; STRING myData[500][2]; int r=0; while(r < 20){ myData[r][0] = dummy+2*r; myData[r][1] = dummy+r; tableData[r] = myData[r]; printf(*tableData[r]); printf("\n"); r=r+1; }//end while int a; for (a=0; a<r; a++){ printf(*tableData[a]); printf("\n"); } } ```
How are you populating myData? I don't see a clear bug in the code example given, but I suspect the problem is that you are assigning a pointer to a buffer in myData without actually copying the contents, so that myData[0 .. r] all point to the same buffer, which will only store the most recent value read. Try this: ``` while(rc == SQLITE_ROW){ tableData[r] = myData[r]; if (r > 0) printf(*tableData[r-1]); r=r+1; }//end while ``` That should print the ids from 1 to 19 fine. If it starts at id 2 instead of id 1, that suggests myData is not keeping a copy of the data, it's all pointing at the same location.
Array overwritten with last value assigned?
[ "c++", "arrays", "pointers" ]
What are the primary differences between LISP and C# with regards to functional programming? Specifically, if a LISP programmer were to switch to using C#, which features are they most likely to miss?
Doing functional programming in C# is technically possible (well, any language that has function pointers or a delegate equivalent can be "functional") -- but C# gets very very painful if you try to do much. Off the top of my head, in no particular order: * Type inference + Only exists for locals now + Should apply to pretty much everything + The #1 problem I have with C# is this. Particularly when you declare a local function... Func<...> = ouch. * Full first class functions + Delegates aren't the answer, since they aren't structurally equivalent. There's no canonical type to represent a function of a certain type. Ex: What is "increment"? Is it a Func? Is it a Converter? Is it something else? This in turn makes inference more complicated. * Automatic generalization + Sucks to have to calculate and specify all the generic type parameters and their constraints * Better support for immutability + Make it trivial to declare simple data types + Copy-with-modify type stuff (var x = oldX { SomeField = newVal }) * **Tuples** C# 7 * Discriminated unions (sum types) * **Pattern matching** C# 7 + Makes tuples and sum types much more valuable + Allows more expression-oriented code * General monad syntax + **Makes things like async code much easier to write** C# 5 + After you've nested 2+ layers of BeginXXX/EndXXX, it gets quite ugly. * Easy syntax for function blocks, so you don't end up with closing lines like "});});" *Edit:* One more: * Function composition + Right now it's painful to do much of any sort of function composition. Currying, chaining, etc. LINQ doesn't get as hurt here because extension methods take the first parameter like an instance method. * **C# should emit tail.call too.** Not needed, the JIT will add tail calls itself as appropriate. Items in **bold** have been addressed since this answer was written.
Support for immutability, primarily. It should be easy to create an immutable type and verify that it's immutable. It should be easy to *use* an immutable type - support like object and collection initializers, but for the immutable case. After that: better type inference capabilities, tuples, pattern matching, and supporting libraries (again, immutable lists etc). EDIT: I should say that I'm *not* a functional programmer, but I'm a C# programmer trying to learn more functional ways. I'm currently helping out (in a small way) with a functional programming book, so I'm learning lots of stuff there. However, you will hopefully be pleased with LINQ. It does make life *much* easier for working with sequences of data.
Functional Programming in C# vs LISP
[ "c#", "functional-programming", "programming-languages", "lisp" ]