I've gone through a few Java questions on SO, and I must say the content here is [pretty well written](https://stackoverflow.com/questions/457822/what-are-the-things-java-got-right) and the Java guys on SO can really pump out the answers. But what I always found was Java answers for Java people. Which is great on its own, but I'm a Java noob, so I don't really care about the workings of ["Joint union in type parameter variance"](https://stackoverflow.com/questions/15496/hidden-features-of-java/42686#42686). It's probably handy down the line, but for now it's not. So, Java for a noob (coming from PHP and Python): what are the cheat codes? If you could link to an SO answer (which is probably out there, but I couldn't find it) or write up the things Java does differently than other languages (on a basic level)? Some might call these the Java Gotchas (I couldn't find the official list, though).
There's a collection of semi-official "gotchas", known as the Java Puzzlers, and documented in a [book of the same name](http://www.amazon.co.uk/Java-Puzzlers-Traps-Pitfalls-Corner/dp/032133678X/ref=sr_1_1?ie=UTF8&s=books&qid=1241483349&sr=8-1); you might also find a few screencasts on the web. Periodically, Joshua Bloch and Neal Gafter like to present a bunch of innocuous-looking challenges and proceed to systematically remove any notion you had that you understood even the vaguest basics of the Java language. On a slightly less cynical note, if you're looking to avoid many of the common design and implementation cock-ups, you might take a look at [Effective Java](http://www.amazon.co.uk/Effective-Java-Second-Joshua-Bloch/dp/0321356683/ref=sr_1_1?ie=UTF8&s=books&qid=1241483503&sr=1-1), by the aforementioned Joshua Bloch, which has a wealth of decent advice on how to go about designing several of the important - but frequently badly written - aspects of writing components in Java, including a comprehensive explanation of how to properly implement the contract of `equals()` and `hashCode()`, and why you should avoid `clone()` like the plague. Oh, and don't compare strings with the `==` operator.
Saying that objects are passed by reference. Actually, methods work only with copies of object references, which are themselves passed by value. [**Java only works with pass by value.**](http://www.javaranch.com/campfire/StoryPassBy.jsp) Also worth reading: [Is Java pass-by-reference?](https://stackoverflow.com/questions/40480/is-java-pass-by-reference)
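The same semantics show up in Python, which also passes copies of object references by value, so a quick sketch there illustrates the distinction (the function names are only illustrative):

```python
def reassign(lst):
    lst = [99]          # rebinds only the local copy of the reference


def mutate(lst):
    lst.append(99)      # mutates the shared object; the caller sees this


items = [1, 2]
reassign(items)         # items is still [1, 2]
mutate(items)           # items is now [1, 2, 99]
```

Rebinding a parameter never affects the caller's variable; only mutation through the shared reference does, in Java exactly as here.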
What are the pitfalls of a Java noob?
[ "java" ]
Referring to [this question](https://stackoverflow.com/questions/863867/database-speed-optimization-few-tables-with-many-rows-or-many-tables-with-few-r), I've decided to duplicate the tables every year, creating tables that hold one year's data, something like, for example: ``` orders_2008 orders_2009 orders_2010 etc... ``` Well, I know that the speed problem could probably be solved with just 2 tables for each element, like orders\_history and orders\_actual, but I thought that once the handler code has been written, there will be no difference.. just many tables. Those tables will even have some children with foreign keys; for example, orders\_2008 will have the child items\_2008: ``` CREATE TABLE orders_2008 ( id serial NOT NULL, code character(5), customer text ); ALTER TABLE ONLY orders_2008 ADD CONSTRAINT orders_2008_pkey PRIMARY KEY (id); CREATE TABLE items_2008 ( id serial NOT NULL, order_id integer, item_name text, price money ); ALTER TABLE ONLY items_2008 ADD CONSTRAINT items_2008_pkey PRIMARY KEY (id); ALTER TABLE ONLY items_2008 ADD CONSTRAINT "$1" FOREIGN KEY (order_id) REFERENCES orders_2008(id) ON DELETE CASCADE; ``` So, my problem is: what do you think is the best way to replicate those tables every 1st of January and, of course, keep the table dependencies? A PHP/Python script that rebuilds the structure for the new year, query after query (called by a cron job)? Can PostgreSQL functions be used that way? If yes, how (a little example would be nice)? Right now I'm going for the first way (a .sql file containing the structure, and a PHP/Python script loaded by a cron job that rebuilds the structure), but I'm wondering if this is the best way. **edit:** I've seen the PostgreSQL CREATE TABLE ... LIKE syntax, but the foreign keys must be added in a second step.. or the new tables will keep referencing the old ones.
PostgreSQL has a feature that lets you create a table that inherits fields from another table. The documentation can be found in [their manual](http://www.postgresql.org/docs/8.3/static/ddl-inherit.html). That might simplify your process a bit.
You should look at [Partitioning in Postgresql](http://developer.postgresql.org/pgdocs/postgres/ddl-partitioning.html). It's the standard way of doing what you want to do. It uses inheritance as John Downey suggested.
PostgreSQL: best way to create new/duplicate existing tables every year
[ "php", "postgresql", "database" ]
I created this program: ``` #include <iostream> #include <fstream> using namespace std; int main () { fstream file; file.open("test.bin", ios::in | ios::out | ios::binary); if(!file.is_open()) { return -1; } int n = 5; int x; file.write(reinterpret_cast<char*>(&n), sizeof(n)); file.read(reinterpret_cast<char*>(&x), sizeof(x)); std::cout<<x; file.close(); std::cin.ignore(); return 0; } ``` that's supposed to write an integer "n" into a .bin file "test.bin", then read data from "test.bin" into an integer "x", then displays "x" to the screen. When I run the program, it displays not 5, but -842150451. Why does this occur, and how can I fix it?
Isn't the file.write() moving the current file pointer when you write it, causing you to read data from the first location AFTER the written data?
Insert `file.seekg(0);` between the write and read calls.
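The same file-position pitfall exists in any language: writing advances the position, so a read immediately afterwards starts past the data. A minimal Python sketch of the write-rewind-read cycle (using an in-memory buffer in place of the .bin file):

```python
import io
import struct

buf = io.BytesIO()                       # in-memory stand-in for test.bin
buf.write(struct.pack("i", 5))           # write advances the position past the int
buf.seek(0)                              # rewind, like file.seekg(0) in the C++ fix
x = struct.unpack("i", buf.read(4))[0]   # now reads back the 5 that was written
```

Without the `seek(0)`, the `read` would return no bytes at all, the C++ analogue of reading whatever garbage sits past the written data.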
Why doesn't this program read (or write?) correctly from a .bin file? (C++)
[ "c++", "file-io", "binary-data" ]
The original string is `"a,d,k"`. I want to remove all `,` and make it `"adk"`. I tried the code below but it doesn't work. ``` "a,d,k".replace(/,/,"") ```
You aren't assigning the result of the replace method back to your variable. When you call replace, it returns a new string without modifying the old one. For example, load this into your favorite browser: ``` <html><head></head><body> <script type="text/javascript"> var str1 = "a,d,k"; str1.replace(/\,/g,""); var str2 = str1.replace(/\,/g,""); alert (str1); alert (str2); </script> </body></html> ``` In this case, `str1` will still be `"a,d,k"` and `str2` will be `"adk"`. If you want to change `str1`, you should be doing: ``` var str1 = "a,d,k"; str1 = str1.replace (/,/g, ""); ```
Use [`String.replace()`](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/String/replace), e.g. ``` var str = "a,d,k"; str = str.replace( /,/g, "" ); ``` Note the `g` (global) flag on the regular expression, which matches all instances of ",".
how to remove "," from a string in javascript
[ "javascript", "string" ]
I need to download several files via http in Python. The most obvious way to do it is just using urllib2: ``` import urllib2 u = urllib2.urlopen('http://server.com/file.html') localFile = open('file.html', 'w') localFile.write(u.read()) localFile.close() ``` But I'll have to deal with the URLs that are nasty in some way, say like this: `http://server.com/!Run.aspx/someoddtext/somemore?id=121&m=pdf`. When downloaded via the browser, the file has a human-readable name, ie. `accounts.pdf`. Is there any way to handle that in python, so I don't need to know the file names and hardcode them into my script?
Download scripts like that tend to push a header telling the user-agent what to name the file: ``` Content-Disposition: attachment; filename="the filename.ext" ``` If you can grab that header, you can get the proper filename. There's [another thread](https://stackoverflow.com/questions/163009/urllib2-file-name) that has a little bit of code to offer up for `Content-Disposition`-grabbing. ``` remotefile = urllib2.urlopen('http://example.com/somefile.zip') remotefile.info()['Content-Disposition'] ```
Based on comments and @Oli's answer, I made a solution like this: ``` from os.path import basename from urlparse import urlsplit def url2name(url): return basename(urlsplit(url)[2]) def download(url, localFileName = None): localName = url2name(url) req = urllib2.Request(url) r = urllib2.urlopen(req) if r.info().has_key('Content-Disposition'): # If the response has Content-Disposition, we take the file name from it localName = r.info()['Content-Disposition'].split('filename=')[1] if localName[0] == '"' or localName[0] == "'": localName = localName[1:-1] elif r.url != url: # if we were redirected, take the real file name from the final URL localName = url2name(r.url) if localFileName: # we can force saving the file under a specified name localName = localFileName f = open(localName, 'wb') f.write(r.read()) f.close() ``` It takes the file name from Content-Disposition; if that header is not present, it uses the filename from the URL (if a redirect happened, the final URL is taken into account).
How to download a file using python in a 'smarter' way?
[ "python", "http", "download" ]
Can anyone recommend a good Java game engine for developing simple tile-based games? I'm looking for an engine that will allow me to build maps using something like Tiled (www.mapeditor.org). Slick (slick.cokeandcode.com) is exactly what I'm looking for, but I can't get it working on Vista 64-bit. The best I can manage is: "Can't load IA 32-bit .dll on a AMD 64-bit platform" (and this after downloading the latest LWJGL version). Can anyone suggest something similar that will run on 64-bit Vista?
I'd recommend purchasing the book "Developing Games in Java" by David Brackeen; it includes a tile-based game framework which seems excellent (I haven't implemented anything with it yet, though). [Link to Amazon](https://rads.stackoverflow.com/amzn/click/com/1592730051) You could also download the code without getting the book, but I'd recommend the book.
Check out this: <http://www.interactivepulp.com/pulpcore/>
Java 2D Game engine for tile-based Game
[ "java", "frameworks", "2d" ]
I require a SQL script to validate a VARCHAR field in a table in a SQL Server 2005 database that contains DateTime values, in the format `DD/MM/YYYY`, or `NULL` values. I would like to identify all invalid dates. Can anyone suggest a method? **UPDATE** * The answer has to make use of T-SQL; for performance reasons, I can't make use of SQLCLR. Thanks, MagicAndi
Use `ISDATE()` or `IS NULL`, but set the language first so the day-month-year order is recognised: ``` SET LANGUAGE british SELECT ISDATE('12/31/2009'), ISDATE('31/12/2009') SET LANGUAGE us_english SELECT ISDATE('12/31/2009'), ISDATE('31/12/2009') ``` Edit: As mentioned by @edosoft, you can use SET DATEFORMAT too. SET LANGUAGE implicitly sets DATEFORMAT; SET DATEFORMAT overrides SET LANGUAGE.
You should specify the dateformat when using ISDATE(). From Books Online: ``` SET LANGUAGE us_english; SET DATEFORMAT dmy; SELECT ISDATE('15/04/2008'); --Returns 1. ```
Validate DateTime String in SQL Server 2005
[ "sql", "sql-server", "sql-server-2005", "validation", "datetime" ]
I am developing a project using `Spring`, `Struts2` & `Hibernate`. Now I want to use `JasperReports` with `Struts2`, but I am totally new to `JasperReports`. Can anyone give a simple example, a tutorial, or any other link which can help me? Thanks in advance.
I think any jasper-report tutorial will be good for you. Struts2 won't have much influence on how you generate your JasperReport. You might want to take a look at "Stream Result": <http://struts.apache.org/2.x/docs/stream-result.html> **EDIT:** I just found this and I think this is what you are looking for: <http://struts.apache.org/2.x/docs/jasperreports-plugin.html> <http://struts.apache.org/2.x/docs/jasperreports-tutorial.html>
We use Struts2, Spring and Hibernate in our projects. There are two ways we do Jasper Reports, and they both use the Struts2-Jasper plugin: 1. Use Hibernate to retrieve the List of objects to be put in the report. This technique we try to avoid as much as possible, as the report is not portable and it forces the report to be generated in the same JVM as your application. 2. We use embedded SQL in the JRXML. We think this is the better approach because eventually we can upload the JRXML to a dedicated JasperServer. Prior to Struts 2.1.x this was difficult to implement because there was no way to pass the connection to the JRXML. The following is a tutorial on how to integrate a Jasper Report with embedded SQL with Struts 2: <http://yellow-jbox.blogspot.com/2011/04/jasper-report-with-embedded-sql-using.html>
Want to develop JasperReport with Struts2
[ "java", "struts2", "jasper-reports" ]
Django tends to fill up horizontal space when adding or editing entries in the admin, but in some cases this is a real waste of space: for example, when editing a date field that is 8 characters wide, or a CharField that is only 6 or 8 chars wide, the edit box still stretches to 15 or 20 chars. How can I tell the admin how wide a textbox should be, or the height of a TextField edit box?
You should use [ModelAdmin.formfield\_overrides](http://docs.djangoproject.com/en/dev/ref/contrib/admin/). It is quite easy - in `admin.py`, define: ``` from django.forms import TextInput, Textarea from django.db import models class YourModelAdmin(admin.ModelAdmin): formfield_overrides = { models.CharField: {'widget': TextInput(attrs={'size':'20'})}, models.TextField: {'widget': Textarea(attrs={'rows':4, 'cols':40})}, } admin.site.register(YourModel, YourModelAdmin) ```
**To change the width for a specific field.** Made via [ModelAdmin.get\_form](https://docs.djangoproject.com/en/1.5/ref/contrib/admin/#django.contrib.admin.ModelAdmin.get_form): ``` class YourModelAdmin(admin.ModelAdmin): def get_form(self, request, obj=None, **kwargs): form = super(YourModelAdmin, self).get_form(request, obj, **kwargs) form.base_fields['myfield'].widget.attrs['style'] = 'width: 45em;' return form ```
Resize fields in Django Admin
[ "python", "django", "django-models", "django-admin" ]
The only way I've found of retrieving MCC and MNC is by overriding an activity's onConfigurationChanged method, as such: ``` public void onConfigurationChanged(Configuration config) { super.onConfigurationChanged(config); DeviceData.MCC = "" + config.mcc; DeviceData.MNC = "" +config.mnc; } ``` However, I need this data as soon as the app starts and can't wait for the user to switch the phone's orientation or equivalent to trigger this method. Is there a better way to access the current Configuration object?
The [TelephonyManager](http://developer.android.com/reference/android/telephony/TelephonyManager.html "android.telephony.TelephonyManager") has a method to return the MCC+MNC as a String ([getNetworkOperator()](http://developer.android.com/reference/android/telephony/TelephonyManager.html#getNetworkOperator() "getNetworkOperator()")) which will do what you want. You can access it via: ``` protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); TelephonyManager tel = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE); String networkOperator = tel.getNetworkOperator(); if (!TextUtils.isEmpty(networkOperator)) { int mcc = Integer.parseInt(networkOperator.substring(0, 3)); int mnc = Integer.parseInt(networkOperator.substring(3)); } } ```
You do know there are two MCC/MNC's for an active phone? (One is the country code and carrier id for the Sim card, the other is for the network/cell tower in use.) If the `getResources().getConfiguration().mcc` is not empty in airplane mode, it's the `Sim` value `TelephonyManager.getSimOperator()`, not the `tower` value `TelephonyManager.getNetworkOperator()`. I don't know which the OP wants, but Answer 3 will give him different results than his original code if the `getConfiguration` is really the `Sim` values.
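Whichever operator string you read (Sim or network), the format is the same: a 3-digit MCC followed by a 2- or 3-digit MNC. A quick sketch of the split, in Python for brevity (the function name is illustrative):

```python
def split_plmn(operator):
    # the MCC is always the first 3 digits; the MNC is whatever follows (2 or 3 digits)
    if len(operator) not in (5, 6):
        raise ValueError("expected a 5- or 6-digit PLMN string")
    return int(operator[:3]), int(operator[3:])


mcc, mnc = split_plmn("310260")  # a sample US PLMN string
```

This mirrors the `substring(0, 3)` / `substring(3)` split in the Java answer above.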
how do you get the phone's MCC and MNC in Android?
[ "java", "android", "mobile" ]
* I'd like to avoid writing `errorCount += 1` in more than one place. * I'm looking for a better way than ``` success = False try: ... else: success = True finally: if success: storage.store.commit() else: storage.store.rollback() ``` * I'm trying to avoid `store.rollback()` in every except clause. Any ideas on how to do this? ``` count = 0 successCount = 0 errorCount = 0 for row in rows: success = False count += 1 newOrder = storage.RepeatedOrder() storage.store.add(newOrder) try: try: newOrder.customer = customers[row.customer_id] except KeyError: raise CustomerNotFoundError, (row.customer_id,) newOrder.nextDate = dates[row.weekday] _fillOrder(newOrder, row.id) except CustomerNotFoundError as e: errorCount += 1 print u"Error: Customer not found. order_id: {0}, customer_id: {1}".format(row.id, e.id) except ProductNotFoundError as e: errorCount += 1 print u"Error: Product not found. order_id: {0}, product_id: {1}".format(row.id, e.id) else: success = True successCount += 1 finally: if success: storage.store.commit() else: storage.store.rollback() print u"{0} of {1} repeated orders imported. {2} error(s).".format(successCount, count, errorCount) ```
This looks like a possible application of Python's `with` statement. It allows you to unwind operations and release resources securely no matter what outcome a block of code had. Read about it in [PEP 343](http://www.python.org/dev/peps/pep-0343/).
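A sketch of how that could look for the commit/rollback pattern in the question, assuming `store` exposes `commit()` and `rollback()` as in the original code (the context manager itself is hypothetical, not part of any library):

```python
from contextlib import contextmanager


@contextmanager
def transaction(store):
    # commit if the block finishes cleanly, roll back (and re-raise) otherwise
    try:
        yield
        store.commit()
    except Exception:
        store.rollback()
        raise
```

In the loop you would wrap the per-row work in `with transaction(storage.store):` and catch the two error types around it, so the rollback logic lives in exactly one place.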
My suggestion would be to write a `logError()` method that increments `errorCount` (make it a member variable) and prints the error. Since your exception code is similar, you could also shorten your code by doing this: ``` try: # something except (CustomerNotFoundError, ProductNotFoundError), e: logError(e) ``` You can print whatever you want based on `e`. Also, you don't need to track successes: `successCount = len(rows) - errorCount`
try... except... except... : how to avoid repeating code
[ "python", "try-catch", "dry" ]
Here's the issue: I have 2 data contexts that I would like to do a join on. Now I know that LINQ doesn't allow joins from one context to another, and I know that 2 possible solutions would be to either create a single data context or to have 2 separate queries (which is what I'm doing for now). However, what I would like to do is to "simulate" a join. Here's what I've tried. ``` using (var _baseDataContext = Instance) { var query = from a in _baseDataContext.Account.ACCOUNTs where a.STR_ACCOUNT_NUMBER.ToString() == accountID join app in _baseDataContext.Account.APPLICATIONs on a.GUID_ACCOUNT_ID equals app.GUID_ACCOUNT join l in GetLoans() on app.GUID_APPLICATION equals l.GUID_APPLICATION select l.GUID_LOAN; return query.Count() > 0 ? query.First() : Guid.Empty; } private static IQueryable<LOAN> GetLoans() { using (var _baseDataContext = Instance) { return (from l in _baseDataContext.Loan.LOANs select l).AsQueryable(); } } ``` At run time I get: System.InvalidOperationException: The query contains references to items defined on a different data context EDIT: Working Solution: ``` using (var _baseDataContext = Instance) { var query = from a in _baseDataContext.Account.ACCOUNTs where a.STR_ACCOUNT_NUMBER.ToString() == accountID join app in _baseDataContext.Account.APPLICATIONs on a.GUID_ACCOUNT_ID equals app.GUID_ACCOUNT join l in GetLoans() on app.GUID_APPLICATION equals l.GUID_APPLICATION select l.GUID_LOAN; return (query.Count() > 0) ? query.First() : Guid.Empty; } private static IEnumerable<LOAN> GetLoans() { using (var _baseDataContext = Instance) { return (from l in _baseDataContext.Loan.LOANs select l).AsQueryable(); } } ```
Maybe something like this can get you started in the right direction. I made a mock database with similar columns based on your column names and got some results. ``` class Program { static AccountContextDataContext aContext = new AccountContextDataContext(@"Data Source=;Initial Catalog=;Integrated Security=True"); static LoanContextDataContext lContext = new LoanContextDataContext(@"Data Source=;Initial Catalog=;Integrated Security=True"); static void Main() { var query = from a in aContext.ACCOUNTs join app in aContext.APPLICATIONs on a.GUID_ACCOUNT_ID equals app.GUID_ACCOUNT where app.GUID_APPLICATION.ToString() == "24551D72-D4C2-428B-84BA-5837A25D8CF6" select GetLoans(app.GUID_APPLICATION); IEnumerable<LOAN> loan = query.First(); foreach (LOAN enumerable in loan) { Console.WriteLine(enumerable.GUID_LOAN); } Console.ReadLine(); } private static IEnumerable<LOAN> GetLoans(Guid applicationGuid) { return (from l in lContext.LOANs where l.GUID_APPLICATION == applicationGuid select l).AsQueryable(); } } ``` Hope this helps!
This is the "workaround" that we have found... We manually built out the tables from the other database, and if the database is on the same server then we prefixed the table name with: ``` <DatabaseName>.<SchemaName>.<YourTableName> ``` If they are on a linked server then you have to prefix it with the server name as well: ``` <ServerName>.<DatabaseName>.<SchemaName>.<YourTableName> ``` This will allow you to do joins and still return a non-executed IQueryable... which is what we wanted. The other 2 ways involve joining in-memory IEnumerables, which means you pull all records for each side before doing the join (above), or doing an IQueryable join using a Contains method, which has limitations... Hopefully in the future the DataContext will be built smart enough to know that if the servers are linked then you can do joins between two different ones.
Simulating Cross Context Joins--LINQ/C#
[ "c#", "entity-framework", "linq", "datacontext" ]
I'm a web-game developer and I got a problem with random numbers. Let's say that a player has 20% chance to get a critical hit with his sword. That means, 1 out of 5 hits should be critical. The problem is I got very bad real life results — sometimes players get 3 crits in 5 hits, sometimes none in 15 hits. Battles are rather short (3-10 hits) so it's important to get good random distribution. Currently I use PHP `mt_rand()`, but we are just moving our code to C++, so I want to solve this problem in our game's new engine. I don't know if the solution is some uniform random generator, or maybe to remember previous random states to force proper distribution.
I agree with the earlier answers that real randomness in small runs of some games is undesirable -- it does seem too unfair for some use cases. I wrote a simple Shuffle Bag like implementation in Ruby and did some testing. The implementation did this: * If it still seems fair or we haven't reached a threshold of minimum rolls, it returns a fair hit based on the normal probability. * If the observed probability from past rolls makes it seem unfair, it returns a "fair-ifying" hit. It is deemed unfair based on boundary probabilities. For instance, for a probability of 20%, you could set 10% as a lower bound and 40% as an upper bound. Using those bounds, I found that with runs of 10 hits, **14.2% of the time the true pseudorandom implementation produced results that were out of those bounds**. About 11% of the time, 0 critical hits were scored in 10 tries. 3.3% of the time, 5 or more critical hits were landed out of 10. Naturally, using this algorithm (with a minimum roll count of 5), a much smaller share (0.03%) of the "Fairish" runs were out of bounds. Even if the implementation below is unsuitable (more clever things can be done, certainly), it is worth noting that noticeably often your users will feel that a real pseudorandom solution is unfair. Here is the meat of my `FairishBag` written in Ruby. The whole implementation and a quick Monte Carlo simulation [is available here (gist)](http://gist.github.com/118194). ``` def fire! hit = if @rolls >= @min_rolls && observed_probability > @unfair_high false elsif @rolls >= @min_rolls && observed_probability < @unfair_low true else rand <= @probability end @hits += 1 if hit @rolls += 1 return hit end def observed_probability @hits.to_f / @rolls end ``` **Update:** Using this method does increase the overall probability of getting a critical hit, to about 22% using the bounds above. You can offset this by setting the "real" probability a little bit lower: a probability of 17.5% with the fairish modification yields an observed long-term probability of about 20%, and keeps the short-term runs feeling fair.
> That means, 1 out of 5 hits should be critical. The problem is I got very bad real life results - sometimes players get 3 crits in 5 hits, sometimes none in 15 hits. What you need is a [shuffle bag](https://web.archive.org/web/20150324141028/http://kaioa.com/node/53). It solves the problem of true random being too random for games. The algorithm is about like this: You put 1 critical and 4 non-critical hits in a bag. Then you randomize their order in the bag and pick them out one at a time. When the bag is empty, you fill it again with the same values and randomize it. That way you will get in average 1 critical hit per 5 hits, and at most 2 critical and 8 non-critical hits in a row. Increase the number of items in the bag for more randomness. Here is an example of [an implementation](http://github.com/orfjackal/puzzle-warrior/blob/6a0455993984f5f2a9b617d7587c360393524e44/src/main/java/net/orfjackal/puzzlewarrior/ShuffleBag.java) (in Java) and [its test cases](http://github.com/orfjackal/puzzle-warrior/blob/6a0455993984f5f2a9b617d7587c360393524e44/src/test/java/net/orfjackal/puzzlewarrior/ShuffleBagSpec.java) that I wrote some time ago.
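A minimal sketch of the same idea in Python (this class is illustrative; it is not the linked Java implementation):

```python
import random


class ShuffleBag:
    def __init__(self, items):
        self._source = list(items)
        self._bag = []

    def next(self):
        # refill and reshuffle once the bag runs empty
        if not self._bag:
            self._bag = self._source[:]
            random.shuffle(self._bag)
        return self._bag.pop()


# 20% crit chance: every 5 draws contain exactly one critical hit
bag = ShuffleBag([True, False, False, False, False])
crits = sum(bag.next() for _ in range(25))  # exactly 5 crits in 25 hits
```

The guarantee holds per refill: with this bag a miss streak can never exceed 8, matching the bound stated above, while the long-run rate stays at exactly 1 in 5.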
Need for predictable random generator
[ "c++", "algorithm", "random" ]
I have a dynamic web page with JS. There is a `<textarea>` and a *Send* button, but no `<form>` tags. How do I make the *Send* button fire and the `<textarea>` get cleared when *Enter* is pressed in the `<textarea>`?
You could use a keyup handler for the textarea (although I would advise against it\*). ``` [SomeTextarea].onkeyup = function(e){ e = e || event; if (e.keyCode === 13) { // start your submit function } return true; } ``` \*Why not use a text input field for this? A textarea is especially suited for multiline input, a text input field for single-line input. With an enter handler you cripple the multiline input. I remember using it once for an XHR (aka AJAX) chat application (so the textarea behaved like an MSN input area), but re-enabled multiline input using the CTRL-enter key for new lines. Maybe that's an idea for you? The listener would be extended like this: ``` [SomeTextarea].onkeyup = function(e){ e = e || event; if (e.keyCode === 13 && !e.ctrlKey) { // start your submit function } return true; } ```
Hitting Enter in a `textarea` inserts a line break rather than submitting the parent form; this wouldn't work that way even with regular form tags. It would be inadvisable to attempt to work around this behaviour, as it would violate the user's expectation of how text area controls behave, both on other web sites and in other applications on their platform.
Submitting data from textarea by hitting "Enter"
[ "javascript", "html", "forms" ]
(I'm sure this is a FAQ, but also hard to google) Why does Python use abs(x) instead of x.abs? As far as I see everything abs() does besides calling `x.__abs__` could just as well be implemented in object.abs() Is it historical, because there hasn't always been a root class?
The official answer from Guido van Rossum, with additional explanation from Fredrik Lundh, is here: <http://effbot.org/pyfaq/why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list.htm> In a nutshell: 1. abs(x) reads more naturally than x.abs() for most such operations 2. you know that abs(x) is getting an absolute value, whereas a method x.abs() could mean something different depending on the class of x.
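Note that the function form is still extensible: `abs(x)` dispatches to `x.__abs__()`, so a class can customize it just as it could a method. A small sketch:

```python
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __abs__(self):
        # abs(v) calls this hook; here it returns the Euclidean length
        return (self.x ** 2 + self.y ** 2) ** 0.5


length = abs(Vector(3, 4))  # 5.0
```

So the debate is purely about call syntax, not about losing polymorphism.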
I think you are looking at a typical example where a language designer decided that readability and terseness trump purist constructs.
Why builtin functions instead of root class methods?
[ "python", "oop" ]
Well, not random, because it's the same every time, but ``` #include<iostream> using namespace std; int main() { char box[10][10]; for(int i=-1;i<11;i++) { cout<<"---------------------"<<endl<<"|"; for(int j=0;j<10;j++) { cout<<box[j][i]<<"|"; } cout<<endl; } int x; cin>>x; return 0; } ``` this outputs a series of international characters (well, not all of them are 'international' per se, but I get things like pi and a Spanish inverted question mark). Anyway, I know this is because the program accesses chars that have not been initialized, but why do particular values produce particular symbols, what are the ASCII values of those symbols (if they have ASCII values), and how can I get the symbols without glitching my program?
Your loop over i doesn't make sense... ``` for(int i=-1;i<11;i++) ``` This will hit two indices that aren't valid, -1 and 10, when you reference box here: ``` cout<<box[j][i]<<"|"; ``` It should be 0 to < 10 like the other loop. Also you haven't initialized the contents of box to anything, so you're printing uninitialized memory. You have to put something into your "box" before you can take anything out. The symbols themselves are probably extended ASCII, you can get at them through any extended ASCII table. [This one](http://www.cdrummond.qc.ca/cegep/informat/Professeurs/Alain/files/ascii.htm) came up first on google. For instance, you could do: ``` cout << "My extended ascii character is: " << (char)162 << endl; ``` to get a crazy international o.
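If you want to check which glyph a given byte maps to under the old IBM PC code page (the usual source of those "international" symbols on a Windows console), you can decode it explicitly; a quick Python check:

```python
# byte 162 (0xA2) in code page 437 is the accented 'o' mentioned above
glyph = bytes([162]).decode("cp437")
```

Which glyph you actually *see* in your own console depends on the code page it is configured to use, so treat the table lookup as the ground truth.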
For the same reason that ``` #include <iostream> using namespace std; int main() { int x; cout << x; } ``` displays a random value. Uninitialised variables (or arrays) contain garbage.
Why does this program display seemingly random characters? (C++)
[ "c++", "special-characters" ]
Imagine the following table (called `TestTable`): ``` id somedate somevalue -- -------- --------- 45 01/Jan/09 3 23 08/Jan/09 5 12 02/Feb/09 0 77 14/Feb/09 7 39 20/Feb/09 34 33 02/Mar/09 6 ``` I would like a query that returns a running total in date order, like: ``` id somedate somevalue runningtotal -- -------- --------- ------------ 45 01/Jan/09 3 3 23 08/Jan/09 5 8 12 02/Feb/09 0 8 77 14/Feb/09 7 15 39 20/Feb/09 34 49 33 02/Mar/09 6 55 ``` I know there are [various ways of doing this](https://web.archive.org/web/20200212211219/http://geekswithblogs.net:80/Rhames/archive/2008/10/28/calculating-running-totals-in-sql-server-2005---the-optimal.aspx) in SQL Server 2000 / 2005 / 2008. I am particularly interested in this sort of method that uses the aggregating-set-statement trick: ``` INSERT INTO @AnotherTbl(id, somedate, somevalue, runningtotal) SELECT id, somedate, somevalue, null FROM TestTable ORDER BY somedate DECLARE @RunningTotal int SET @RunningTotal = 0 UPDATE @AnotherTbl SET @RunningTotal = runningtotal = @RunningTotal + somevalue FROM @AnotherTbl ``` ... this is very efficient but I have heard there are issues around this because you can't necessarily guarantee that the `UPDATE` statement will process the rows in the correct order. Maybe we can get some definitive answers about that issue. But maybe there are other ways that people can suggest? edit: Now with a [SqlFiddle](http://sqlfiddle.com/#!3/c8880/4) with the setup and the 'update trick' example above
**Update**, if you are running SQL Server 2012 see: <https://stackoverflow.com/a/10309947> The problem is that the SQL Server implementation of the Over clause is [somewhat limited](http://wayback.archive.org/web/20090625062153/http://www.mydatabasesupport.com/forums/sqlserver-programming/189015-tsql-accmulations.html). Oracle (and ANSI-SQL) allow you to do things like: ``` SELECT somedate, somevalue, SUM(somevalue) OVER(ORDER BY somedate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal FROM Table ``` SQL Server gives you no clean solution to this problem. My gut is telling me that this is one of those rare cases where a cursor is the fastest, though I will have to do some benchmarking on big results. The update trick is handy but I feel its fairly fragile. It seems that if you are updating a full table then it will proceed in the order of the primary key. So if you set your date as a primary key ascending you will `probably` be safe. But you are relying on an undocumented SQL Server implementation detail (also if the query ends up being performed by two procs I wonder what will happen, see: MAXDOP): Full working sample: ``` drop table #t create table #t ( ord int primary key, total int, running_total int) insert #t(ord,total) values (2,20) -- notice the malicious re-ordering insert #t(ord,total) values (1,10) insert #t(ord,total) values (3,10) insert #t(ord,total) values (4,1) declare @total int set @total = 0 update #t set running_total = @total, @total = @total + total select * from #t order by ord ord total running_total ----------- ----------- ------------- 1 10 10 2 20 30 3 10 40 4 1 41 ``` You asked for a benchmark this is the lowdown. The fastest SAFE way of doing this would be the Cursor, it is an order of magnitude faster than the correlated sub-query of cross-join. The absolute fastest way is the UPDATE trick. My only concern with it is that I am not certain that under all circumstances the update will proceed in a linear way. 
There is nothing in the query that explicitly says so. Bottom line, for production code I would go with the cursor.

Test data:

```
create table #t ( ord int primary key, total int, running_total int)

set nocount on
declare @i int
set @i = 0
begin tran
while @i < 10000
begin
   insert #t (ord, total) values (@i, rand() * 100)
   set @i = @i + 1
end
commit
```

Test 1:

```
SELECT ord, total,
  (SELECT SUM(total)
    FROM #t b
    WHERE b.ord <= a.ord) AS b
FROM #t a

-- CPU 11731, Reads 154934, Duration 11135
```

Test 2:

```
SELECT a.ord, a.total, SUM(b.total) AS RunningTotal
FROM #t a
CROSS JOIN #t b
WHERE (b.ord <= a.ord)
GROUP BY a.ord, a.total
ORDER BY a.ord

-- CPU 16053, Reads 154935, Duration 4647
```

Test 3:

```
DECLARE @TotalTable table(ord int primary key, total int, running_total int)

DECLARE forward_cursor CURSOR FAST_FORWARD
FOR
SELECT ord, total
FROM #t
ORDER BY ord

OPEN forward_cursor

DECLARE @running_total int, @ord int, @total int

SET @running_total = 0

FETCH NEXT FROM forward_cursor INTO @ord, @total

WHILE (@@FETCH_STATUS = 0)
BEGIN
   SET @running_total = @running_total + @total
   INSERT @TotalTable VALUES(@ord, @total, @running_total)
   FETCH NEXT FROM forward_cursor INTO @ord, @total
END

CLOSE forward_cursor
DEALLOCATE forward_cursor

SELECT * FROM @TotalTable

-- CPU 359, Reads 30392, Duration 496
```

Test 4:

```
declare @total int
set @total = 0
update #t set running_total = @total, @total = @total + total

select * from #t

-- CPU 0, Reads 58, Duration 139
```
In SQL Server 2012 you can use [SUM()](http://msdn.microsoft.com/en-us/library/ms187810.aspx?ppud=4) with the [OVER()](http://msdn.microsoft.com/en-us/library/ms189461.aspx) clause.

```
select id,
       somedate,
       somevalue,
       sum(somevalue) over(order by somedate rows unbounded preceding) as runningtotal
from TestTable
```

[SQL Fiddle](http://sqlfiddle.com/#!6/62242/6)
Calculate a Running Total in SQL Server
[ "sql", "sql-server", "t-sql", "cumulative-sum" ]
Django has various numeric fields available for use in models, e.g. [DecimalField](http://docs.djangoproject.com/en/dev/ref/models/fields/#decimalfield) and [PositiveIntegerField](http://docs.djangoproject.com/en/dev/ref/models/fields/#positiveintegerfield). Although the former can be restricted to the number of decimal places stored and the overall number of characters stored, is there any way to restrict it to storing *only* numbers within a certain range, e.g. 0.0-5.0? Failing that, is there any way to restrict a PositiveIntegerField to only store, for instance, numbers up to 50? ***Update: now that Bug 6845 [has been closed](http://code.djangoproject.com/ticket/6845#comment:71), this StackOverflow question may be moot.***
You could also create a custom model field type - see <http://docs.djangoproject.com/en/dev/howto/custom-model-fields/#howto-custom-model-fields>

In this case, you could 'inherit' from the built-in IntegerField and override its validation logic.

The more I think about this, I realize how useful this would be for many Django apps. Perhaps an IntegerRangeField type could be submitted as a patch for the Django devs to consider adding to trunk.

This is working for me:

```
from django.db import models

class IntegerRangeField(models.IntegerField):
    def __init__(self, verbose_name=None, name=None, min_value=None, max_value=None, **kwargs):
        self.min_value, self.max_value = min_value, max_value
        models.IntegerField.__init__(self, verbose_name, name, **kwargs)
    def formfield(self, **kwargs):
        defaults = {'min_value': self.min_value, 'max_value': self.max_value}
        defaults.update(kwargs)
        return super(IntegerRangeField, self).formfield(**defaults)
```

Then in your model class, you would use it like this (fields being the module where you put the above code):

```
size = fields.IntegerRangeField(min_value=1, max_value=50)
```

OR for a range of negative and positive (like an oscillator range):

```
size = fields.IntegerRangeField(min_value=-100, max_value=100)
```

What would be really cool is if it could be called with the range operator like this:

```
size = fields.IntegerRangeField(range(1, 50))
```

But, that would require a lot more code since you can specify a 'skip' parameter - range(1, 50, 2) - Interesting idea though...
You can use [Django's built-in validators](https://docs.djangoproject.com/en/dev/ref/validators/#built-in-validators)—

```
from django.db.models import IntegerField, Model
from django.core.validators import MaxValueValidator, MinValueValidator

class CoolModelBro(Model):
    limited_integer_field = IntegerField(
        default=1,
        validators=[
            MaxValueValidator(100),
            MinValueValidator(1)
        ]
    )
```

**Edit**: When working directly with the model, make sure to call the model [full\_clean](https://docs.djangoproject.com/en/2.2/ref/models/instances/#django.db.models.Model.full_clean) method before saving the model in order to trigger the validators. This is not required when using `ModelForm` since the forms will do that automatically.
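Outside of Django, the validator contract described above is easy to see in plain Python: a validator is just a callable that raises an error for bad values, and `full_clean()` runs the field's list of them. The sketch below is illustrative only — the names mirror Django's, but this is not Django's implementation:

```python
# Plain-Python sketch of what Min/Max value validators enforce.
# Illustration only; not Django code.

class ValidationError(Exception):
    pass

def min_value_validator(limit):
    def validate(value):
        if value < limit:
            raise ValidationError(
                "Ensure this value is greater than or equal to %s." % limit)
    return validate

def max_value_validator(limit):
    def validate(value):
        if value > limit:
            raise ValidationError(
                "Ensure this value is less than or equal to %s." % limit)
    return validate

def run_validators(value, validators):
    # Django's full_clean() similarly runs each field's validator list;
    # the first failure raises.
    for validator in validators:
        validator(value)

validators = [min_value_validator(1), max_value_validator(100)]
run_validators(50, validators)   # in range: passes silently
```

The point of the sketch is that nothing enforces the range unless something actually calls the validators — which is exactly why skipping `full_clean()` on a plain `save()` lets out-of-range values through.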
How to limit the maximum value of a numeric field in a Django model?
[ "python", "django", "django-models", "limit", "django-validation" ]
Is it OK to use the conditional operator as a statement, like so?

```
(x == y) ? alert("yo!") : alert("meh!");
```

Or is it more correct to use it to assign a value, like so?

```
z = (x == y) ? "yo!" : "meh!";
```

If it's not incorrect to use it as a statement, then is it possible to add more than one line of code for execution, like so? Or is it more correct to use if/then and switch statements for multiple lines of code?

```
(x == y) ? (alert("yo!"), document.write("woot!")) : (alert("meh!"), document.write("blah!"));
```
Conditional operators are intentionally succinct and especially useful for assignments:

```
var a = x ? 1 : 2;
```

Using them to conditionally run functions, while possible, should, for the sake of readability, be done using IF/ELSE statements:

```
// This is possible but IMO not best practice:
X ? doSomething() : doSomethingElse();
```

While long-winded, most of the time, this is the better solution:

```
if (X) {
    doSomething();
} else {
    doSomethingElse();
}
```

One notable benefit to the IF/ELSE structure is that you can add additional tasks under each condition with minimal hassle.

Your last snippet is also possible but it looks somewhat long-winded and, again, might be better suited to a more conventional logical structure, like an IF/ELSE block.

That said, a conditional operator can still be readable, e.g.

```
(something && somethingElse > 2) ? doSomeLongFunctionName() : doSomeOtherLongFunctionName();
```

In the end, like many things, it's down to personal preference. Always remember that the code you're writing is not just for you; other developers might have to wade through it in the future; try and make it as readable as possible.
JavaScript won't prevent you from doing it, but it's a very unusual practice that will confuse anyone reading your code. The conditional operator is almost always used for selecting two alternative values, not statements. An `if` statement is preferred for conditional branching of statements.

As to your last question, yes, if you really must, you can abuse the `[]` construct:

```
(x == y) ? [alert("yo!"), document.write("woot!")] : otherstuff();
```

But please don't. 8-)
Conditional operators in JavaScript
[ "javascript", "operators", "conditional-operator", "conditional-statements" ]
When I try to send e-mail using C# with Gmail's SMTP server, I get this error:

**"The remote certificate is invalid according to the validation procedure".**

- SSL is enabled
- Port used is 587
- Server name used is "smtp.gmail.com"
- Username and password are correct
- Outlook Express works fine on the same PC with the same settings

The C# program also works fine in other places... we get this error only at the client's place. Would appreciate any help. Thanks

Edit:

@Andomar, where do I find the root certificates on the client? How do I fix this?

@Alnitak, how do I issue STARTTLS using the System.Net.Mail library though?

@David, what do I pass as parameters for "(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)"?

Thanks David. I've added those lines. But I'm still confused about what's going on, since this code doesn't have any direct connection with System.Net.Mail as far as I understand. Hope the problem goes away.
Also check that the root certificates are in the Client's Trusted Root Authority store. If this is from a service then adding the root certificates to the Local Machine store may also help.

To get a better grasp of the reason, I have found the following policy helpful...

```
public bool ValidateServerCertificate(
    object sender,
    X509Certificate certificate,
    X509Chain chain,
    SslPolicyErrors sslPolicyErrors)
{
    // No errors so continue…
    if (sslPolicyErrors == SslPolicyErrors.None)
        return true;

    // I’m just logging it to a label on the page,
    // this should be stored or logged to the event log at this time.
    lblStuff.Text += string.Format("Certificate error: {0} <BR/>", sslPolicyErrors);

    // If the error is a Certificate Chain error then the problem is
    // with the certificate chain so we need to investigate the chain
    // status for further info. Further debug capturing could be done if
    // required using the other attributes of the chain.
    if (sslPolicyErrors == SslPolicyErrors.RemoteCertificateChainErrors)
    {
        foreach (X509ChainStatus status in chain.ChainStatus)
        {
            lblStuff.Text += string.Format("Chain error: {0}: {1} <BR/>",
                status.Status, status.StatusInformation);
        }
    }

    // Do not allow this client to communicate
    // with unauthenticated servers.
    return false;
}
```

To add the policy in, use the following; this only needs to be done once for the application domain.

```
using System.Net;
...
ServicePointManager.ServerCertificateValidationCallback =
    new Security.RemoteCertificateValidationCallback(ValidateServerCertificate);
```

You can also use the policy to remove the error altogether but it would be better to fix the problem than do that.
Check that the proper root certificates are in the client's store, and that the client's system date is correct.
Problem sending E-mail using C#
[ "c#", "email", "smtp" ]
I have a row collection (DataRow[] rows). And I want to import all rows to another DataTable (DataTable dt). But how?

### Code

```
DataTable dt;
if (drs.Length>0)
{
    dt = new DataTable();
    foreach (DataRow row in drs)
    {
        dt.Columns.Add(row???????)
    }
    // If it possible, something like that => dt.Columns.AddRange(????????)

    for(int i = 0; i < drs.Length; i++)
    {
        dt.ImportRow(drs[i]);
    }
}
```
Assuming the rows all have the same structure, the easiest option is to clone the old table, omitting the data:

```
DataTable dt = drs[0].Table.Clone();
```

Alternatively, something like:

```
foreach(DataColumn col in drs[0].Table.Columns) {
    dt.Columns.Add(col.ColumnName, col.DataType, col.Expression);
}
```
If your DataRows are from a DataTable with columns defined in it:

```
DataRow[] rows;

DataTable table = new DataTable();
var columns = rows[0].Table.Columns;
table.Columns.AddRange(columns.Cast<DataColumn>().ToArray());

foreach (var row in rows)
{
    table.Rows.Add(row.ItemArray);
}
```
How to get columns from a DataRow?
[ "c#", "datatable", "datarowcollection" ]
I am having a problem accessing a static const variable defined in my class private member variable section. Specifically, the code written below can output the variable within the constructor, but when I try to access it through an accessor function, I get an error discussed below. If anyone knows why, I would appreciate your help.

```
#include <iostream>
using namespace std;

class TestStaticVariables {

    // Private member variable:
    static const double static_double_variable;

public:

    // Constructor:
    TestStaticVariables() {
        // Initialization:
        static const double static_double_variable = 20.0;
        cout << static_double_variable;
    }

    // Member Function:
    void test();
};

void TestStaticVariables::test() {
```

When this next line is uncommented I get the following error message: Line Location Tool:0: "TestStaticVariables::static\_double\_variable", referenced from:

```
    //cout << static_double_variable;
}

int main(int argc, char* const argv[]) {
    TestStaticVariables test_instance;
    return 0;
}
```
Try initializing the variable outside the class definition, here is a working example:

```
#include <iostream>

class Foo {
    static const double _bar;
public:
    Foo();
    void Bar();
};

const double Foo::_bar = 20.0;

Foo::Foo() {
    std::cout << Foo::_bar << std::endl;
}

void Foo::Bar() {
    std::cout << Foo::_bar << std::endl;
}

int main( int argc, char *argv[] ) {
    Foo f;
    f.Bar();
    return 0;
}
```
What you have marked as "// Initialization" is actually creating and initializing a second variable with the same name, in a different scope. The static\_double\_variable variable created inside the constructor is a local variable in the constructor, and does not refer to the class-level static variable with the same name. What you need to do to avoid this is to simply remove the type information so that it's a normal statement rather than an initialization, like so:

```
// Initialization:
static_double_variable = 20.0;
```

But of course this won't actually work because it's an assignment to a const variable, and you still have a second problem, which is what I think is actually causing the error you see.

When you write:

```
// Private member variable:
static const double static_double_variable;
```

You are declaring that such a variable will exist. However you are not actually defining that variable (i.e. instructing the compiler to create storage for it). In order to do that, and fix both your issues, outside of your `class { }` construct you would write:

```
const double TestStaticVariables::static_double_variable = 20.0;
```

This both defines the variable and gives it an initial, constant value.

In case that was unclear, this issue is also described succinctly in the C++ FAQ: <http://www.parashift.com/c++-faq-lite/ctors.html#faq-10.10>
Problem accessing static const variables through class member functions
[ "c++" ]
I've come across the following piece of JavaScript and would like to know what it's doing:

```
function flipString(aString) {
    var last = aString.length - 1;
    var result = new Array(aString.length)
    for (var i = last; i >= 0; --i) {
        var c = aString.charAt(i)
        var r = flipTable[c]
        result[last - i] = r != undefined ? r : c
    }
    return result.join('')
}
```
It looks like some sort of encryption / obfuscation. Without knowing what `flipTable` looks like it's hard to say.

```
function flipString(aString) {
    var last = aString.length - 1;
    // Create a new array with the length of the string
    var result = new Array(aString.length)
    // Walk through the string backwards
    for (var i = last; i >= 0; --i) {
        // Get the current character
        var c = aString.charAt(i)
        // Find the associated character in the flipTable
        var r = flipTable[c]
        // If the character wasn't in the flip table, use it as-is, else use the one we found
        // Store as (last-i) instead of (i) so it comes out backwards
        result[last - i] = r != undefined ? r : c
    }
    // Return the result as a string instead of an array
    return result.join('')
}
```
It walks through a string from the last character to the first, whilst storing the character found at each index in an array. For each character, if the array "flipTable" has an entry associated with it, it uses the flipTable entry rather than the character. The resultant array is then joined with '' to make a string.

In simpler terms, it reverses a string whilst simultaneously changing every character that is a key for flipTable to the entry associated with it. Why you'd do this, I have no idea without context.
Explaining a JavaScript function line by line
[ "javascript" ]
I have been working on a project and trying to find the source of a large slowdown in execution time and have narrowed it down to a single method which I have managed to optimise out of the logic. The problem is that my solution involves using a reference which makes another section of the code run quite slowly...

The question I'd like answered is: why does the inner loop take so much longer to evaluate when the map is a reference as opposed to a local variable?

Here's the old way prior to optimisation:

```
// old method: create an empty map, populate it
// and then assign it back to the path object later
map<int,float> screenline_usage;
for (int i=0; i<numCandidates; ++i)
{
    // timing starts here.
    map<int, float>& my_screenline_usage = path->get_combined_screenline_usage(legnr, stop_id);
    map<int, float>::iterator it = my_screenline_usage.begin();
    for (; it != my_screenline_usage.end(); ++it)
        screenline_usage[it->first] += usage * it->second;
    // timing ends here, this block evaluated 4 million times
    // for overall execution time of ~12 seconds
}

// This function call is evaluated 400k times for an overall execution time of ~126 seconds
path->set_zone_screenline_usage(access_mode, zone_id, screenline_usage);

// TOTAL EXECUTION TIME: 138 seconds.
```

New way after optimisation:

```
// new method: get a reference to internal path mapping and populate it
map<int, float>& screenline_usage = path->get_zone_screenline_usage(access_mode, zone_id);
screenline_usage.clear();
for (int i=0; i<numCandidates; ++i)
{
    // timing starts here
    map<int, float>& my_screenline_usage = path->get_combined_screenline_usage(legnr, stop_id);
    map<int, float>::iterator it = my_screenline_usage.begin();
    for (; it != my_screenline_usage.end(); ++it)
        screenline_usage[it->first] += usage * it->second;
    // timing ends here, this block evaluated 4 million times
    // for overall execution time of ~76 seconds
}

// New method...
no need to assign back to path object (0 seconds execution :)
// TOTAL EXECUTION TIME: 76 seconds (62 second time saving)
... but should be able to do it in just 12 seconds if the use of reference didn't add so much time :(
```

Here are the pertinent subroutines called from that code:

```
// This is the really slow routine, due to the copy assignment used.
void set_zone_screenline_usage(int access_mode, int zone_id, map<int,float>& screenline_usage)
{
    m_container[access_mode][zone_id] = screenline_usage;
}

map<int,float>& get_zone_screenline_usage(int access_mode, int zone_id)
{
    return m_container[access_mode][zone_id];
}
```

NOTES: Timing information is for a single run in which the above code is evaluated approximately 400k times. The timing is done using some classes that I built to access the RDTSC time stamp counter (yes, I know TSC means time stamp counter), the average value of numCandidates is 10, and the average number of elements put into the screenline\_usage map is 25.

---

UPDATE: Firstly, thanks to everyone who has gotten involved here. I think that in the end this had nothing to do with C++ references at all and had more to do with cache consistency. I have replaced the optimised code above with a vector& and a hash function implemented as a member variable map

```
// newest method: get a reference to internal path mapping (as vector) and populate it
// map<int,int> m_linkNum_to_SlNum declared in header and populated in constructor.
vector<float>& screenline_usage = path->get_zone_screenline_usage(access_mode, zone_id);
for (int i=0; i<numCandidates; ++i)
{
    // timing starts here
    map<int, float>& my_screenline_usage = path->get_combined_screenline_usage(legnr, stop_id);
    map<int, float>::iterator it = my_screenline_usage.begin();
    for (; it != my_screenline_usage.end(); ++it)
        screenline_usage[m_linkNum_to_SlNum[it->first]] += usage * it->second;
    // timing ends here, this block evaluated 4 million times
    // for overall execution time of ~9 seconds
}

// Newest method... again no need to assign back to path object (0 seconds execution :)
// TOTAL EXECUTION TIME: just 9 seconds (129 second time saving)
... this is even better than using a locally constructed map which took 12 seconds in the inner loop :)
```

It seems to me here that, given that the vector isn't local but is a contiguous block of memory and that the hashing function (m\_linkNum\_to\_SlNum) is a local member variable, this approach leads to code/data that is able to fit into cache without having to go out to main memory for data, resulting in the significant speed-up. Other conclusions given these findings are greatly appreciated.
As per my updates, I think this is most likely a cache-consistency problem rather than a C++ reference issue.
Maybe your C++ compiler is able to inline some code for the local map, but not when the map is a reference.
speed of references in C++
[ "c++", "optimization", "reference", "dictionary", "timing" ]
I had an issue playing with Gallery when I changed a setting. However, I noticed there is a pattern to the error:

The URLs look as such: main.php/d/number/name.jpg

**"number" is dynamic (i.e. "9496-2")**

**"name" is dynamic (i.e. "all+clad+7pc+b")**

Everything else is static. Unfortunately, when I made the setting change, the "number" portion then changed from "9496-2" to "9495-2".

How can I subtract the value "1" from variable "number"?

Jeff
This should do it.

```
RewriteEngine On
RewriteBase /
RewriteRule ^photos/ebay/main.php/d/([0-9]*)6-([0-9]*)/(.*).jpg /photos/ebay/main.php/d/$1\5-$2/$3.jpg [QSA,L]
```

I know you said you already got it, but here is a solution without an additional script. (And I actually tested this one to ensure that \5 works.)
Here is the solution that worked. It involved using a .htaccess file and a PHP script.

**.htaccess file**: Create the ".htaccess" file with the below content:

```
RewriteEngine On
RewriteBase /

# Redirect to PHP Script
RewriteRule ^main.php/d/([0-9]*)-([0-9])/(.*)$ scriptName.php?v1=$1&v2=$2&v3=$3 [R=301,QSA,L]
```

**PHP Script:** Create the "scriptName.php", then add in your logic. In my case, I needed to subtract "1" from v1.

*NOTE: make sure the .htaccess and PHP script are in the same path.*
mod-rewrite URL Change
[ "php", ".htaccess", "mod-rewrite" ]
For doing development, does it matter what OS PHP is running on? In other words would it be fine to develop a PHP application on Windows and deploy on Unix (or vice versa)? In other words, do PHP applications run identically on Windows and Unix?
Almost, but not quite. There are a couple of things you have to watch out for.

**1)** File names: Windows is a case-insensitive operating system. If you create a file Foo.php, you can include it using `include('Foo.php')` *OR* `include('foo.php')`. When you move your project to Linux/Unix, this will break if you don't have the right case.

**2)** There are some language-specific platform differences, generally when it comes to something that relies on integrated OS functionality. These rarely come up, but you might run into them occasionally. For example, the [checkdnsrr()](http://www.php.net/manual/en/function.checkdnsrr.php) function didn't exist in Windows PHP until version 5.3.0. I had one more too, but I think I need a coffee, my brain just stopped.

Edit: Oh yeah, I remember:

**3)** Installs. The PHP packages you get for Linux/Unix can vary widely in what they include in a default install compared to Windows. You need to make sure to test your app on a development box of the opposite platform just to be sure you have all the required libraries compiled/added in, or you'll get some nice fatal errors from an otherwise normal looking app.
The core PHP language runs identically, but with any language, you'll run into OS issues if you develop one one platform and deploy on another. There are differences in paths (file locations), web server configuration (what keys are available in the $\_SERVER superglobal), what extensions are available if they use OS functionality (some PHP extensions are platform specific). However, these issues are fairly specific. They can be frustrating, but you'll solve each one of them once and then move on. In practice most web developers end up using a development machine that has configuration differences from their production servers, and learning to work around those issues is helpful and results in a more portable application.
Hosting PHP
[ "php" ]
I have sets of strings in a database. Each set will have less than 500 members, there will be tens of thousands of sets, and the strings are natural language. I would like to detect duplicate strings within each set. New strings will be compared with an existing set, and added to the database if they are unique. Are there hashing algorithms that would be effective at finding (very) similar strings? For example, the strings probably would have the same number of words, but the encoding may be slightly different (UTF-8 vs Latin-1).
For starters, you should probably do some sort of normalization. You should probably convert all of your text to a single encoding (e.g. UTF-8). You may also want to do case-folding, other [Unicode normalizations](http://unicode.org/reports/tr15/) and perhaps also sorting each set (depending on how you're storing them).

It's unclear (to me) from your question whether you want to find exact matches or just string sets that are "similar". If you only care about exact matches once the normalization is taken into account, then you're pretty much done. Just have an index on the normalized forms of your string sets and you can look up new sets quickly by normalizing them as well.

If you want to find near matches then you'll probably want to do some sort of similarity hashing. The Wikipedia article on [Locality Sensitive Hashing](http://en.wikipedia.org/wiki/Locality_sensitive_hashing) describes a number of techniques.

The basic idea behind a number of these techniques is to compute a handful of very lossy hashes on each string, h[0] through h[n]. To look up a new string set you'd compute its hashes and look each of these up. Anything that gets at least one match is "similar", and the more matches the more similar it is (and you can choose what threshold to cut things off at).
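To make the lossy-hash idea concrete, here is a rough pure-Python sketch of one such scheme: a MinHash-style signature over word shingles, with the normalization step folded in. The shingle size, number of hashes, and use of MD5 are arbitrary choices for illustration, not recommendations:

```python
import hashlib
import unicodedata

def normalize(s):
    # Normalization first: a single Unicode form plus case-folding,
    # so encoding variants of the "same" text compare equal.
    return unicodedata.normalize("NFC", s).casefold()

def shingles(s, k=3):
    # Overlapping k-word windows; short strings become one shingle.
    words = normalize(s).split()
    if len(words) < k:
        return {" ".join(words)}
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def minhash(s, num_hashes=16):
    # One lossy hash per "band": the minimum salted digest over all
    # shingles.  Similar shingle sets tend to share the same minima.
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int(hashlib.md5((str(salt) + sh).encode("utf-8")).hexdigest(), 16)
            for sh in shingles(s)
        ))
    return sig

def similarity(a, b):
    # Fraction of matching bands approximates Jaccard similarity
    # of the shingle sets.
    ha, hb = minhash(a), minhash(b)
    return sum(x == y for x, y in zip(ha, hb)) / len(ha)
```

In a database setting you would index each of the bands, so a new string only has to be compared against the (few) stored strings that share at least one band, rather than against everything.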
If there are only 500 strings in the database, perhaps you can directly compare to each one. First convert to a standard representation (say UTF-16). The [Levenshtein distance](http://en.wikipedia.org/wiki/Levenshtein_distance) can be a nice way of comparing the similarity of two strings.
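For reference, the Levenshtein distance mentioned here has a classic dynamic-programming formulation; a compact (unoptimized) pure-Python version might look like this:

```python
def levenshtein(a: str, b: str) -> int:
    # prev[j] holds the edit distance between a[:i-1] and b[:j];
    # only one previous row is kept in memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(
                prev[j] + 1,         # deletion from a
                curr[j - 1] + 1,     # insertion into a
                prev[j - 1] + cost,  # substitution (or match)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

For "very similar" detection you would then accept any pair whose distance is below some small threshold; note the comparison is O(len(a) * len(b)) per pair, which is fine for a few hundred strings but motivates the hashing approaches for larger sets.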
Duplicate text detection / hashing
[ "python" ]
This doesn't work:

```
>>> pa = Person.objects.all()
>>> pa[2].nickname
u'arst'
>>> pa[2].nickname = 'something else'
>>> pa[2].save()
>>> pa[2].nickname
u'arst'
```

But it works if you take

```
p = Person.objects.get(pk=2)
```

and change the nick. Why so?
```
>>> type(Person.objects.all())
<class 'django.db.models.query.QuerySet'>
>>> pa = Person.objects.all() # Not evaluated yet - lazy
>>> type(pa)
<class 'django.db.models.query.QuerySet'>
```

DB queried to give you a Person object:

```
>>> pa[2]
```

DB queried again to give you yet another Person object:

```
>>> pa[2].first_name = "Blah"
```

Let's call this instance PersonObject1, which resides in memory. So it's equivalent to something like this:

```
>>> PersonObject1.first_name = "Blah"
```

Now let's do this:

```
>>> pa[2].save()
```

pa[2] again queries the DB and returns another instance of a Person object, say PersonObject2 for example. Which will be unchanged! So it's equivalent to calling something like:

```
PersonObject2.save()
```

But this has nothing to do with PersonObject1.
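The behaviour is easier to see with a plain-Python stand-in whose indexing builds a brand-new object on every access — a rough sketch of what the QuerySet is doing here, not Django's actual implementation:

```python
class FakeRow:
    # Stand-in for a model instance freshly loaded from the database.
    def __init__(self, pk):
        self.pk = pk
        self.nickname = "arst"

class FakeQuerySet:
    # Like a Django QuerySet, indexing runs a "query" and constructs
    # a brand-new object each time it is evaluated.
    def __getitem__(self, index):
        return FakeRow(index)

pa = FakeQuerySet()
pa[2].nickname = "something else"   # mutates a throwaway object
print(pa[2].nickname)               # a fresh object again: prints "arst"

p = pa[2]                           # keep ONE instance instead
p.nickname = "something else"
print(p.nickname)                   # prints "something else"
```

The fix in both the sketch and real Django is the same: hold the instance in a variable, mutate it, then call `save()` on that same variable.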
If you assigned your `pa[2]` to a variable, like you do with `Person.objects.get(pk=2)`, you'd have it right:

```
pa = Person.objects.all()

print pa[2].nickname
'Jonny'

pa[2].nickname = 'Billy'
print pa[2].nickname
'Jonny'

# when you assign it to some variable, your operations
# change this particular object, not something that is queried out each time
p1 = pa[2]
print p1.nickname
'Jonny'
p1.nickname = 'Billy'
print p1.nickname
'Billy'
```

This has nothing to do with the method you use to pull the objects from the database. And, btw, django numbers PrimaryKeys starting from 1, not 0, so

```
Person.objects.all()[2] == Person.objects.get(pk=2)
False
Person.objects.all()[2] == Person.objects.get(pk=3)
True
```
Django objects change model field
[ "python", "django", "django-models" ]
I'm creating a custom script control in ASP.NET. The purpose of the control is simply a server variant of the `<script>` tag, used to load JavaScript files.

The main purpose of this control, however, is to combine multiple scripts into one response, so on the client side they see one `<script>` tag for each location: all scripts registered in the DocumentTop location, for example, will be combined into a single tag. The exception is the "inline" location; all inline scripts are rendered individually where they exist in the markup.

I have also created an HttpHandler, js.ashx, that does the actual combining of the scripts.

Everything is working fine except for the "Head" location. For the two document locations I simply use the ClientScriptManager during PreRender, but for the Head location I have tried the following code during PreRender:

```
var scriptControl = new HtmlGenericControl("script");
scriptControl.Attributes["language"] = "javascript";
scriptControl.Attributes["type"] = "text/javascript";
scriptControl.Attributes["src"] = src;
Page.Header.Controls.Add(scriptControl);
```

and I get the following error:

The control collection cannot be modified during DataBind, Init, Load, PreRender or Unload phases.

Does anyone know how to add a control to the page header from within a custom control?

Incidentally, the control is used on a content page that has two nested masters and also has a ScriptManager registered on the root master. The project is an ASP.NET 3.5 web application project.
I've discovered an answer to my question. I don't quite understand the why, but the problem lies in when I am trying to add my script control into the head: doing it in the control's PreRender event causes my error, but if you add the control during the Page's PreRender event it all works fine and dandy, e.g.:

```
protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    this.Page.PreRender += new EventHandler(Page_PreRender);
}

void Page_PreRender(object sender, EventArgs e)
{
    var scriptControl = new HtmlGenericControl("script");
    Page.Header.Controls.Add(scriptControl);
    scriptControl.Attributes["language"] = "javascript";
    scriptControl.Attributes["type"] = "text/javascript";
    scriptControl.Attributes["src"] = "blah.js";
}
```
I don't know why you get this error, but how about using [ClientScript](http://msdn.microsoft.com/en-us/library/system.web.ui.page.clientscript.aspx) like this:

```
protected void Page_Load(object sender, EventArgs e)
{
    string scriptFile = "myscriptFile.js";
    if (!this.Page.ClientScript.IsClientScriptIncludeRegistered("myScript"))
    {
        this.Page.ClientScript.RegisterClientScriptInclude("myScript", scriptFile);
    }
}
```

[ClientScriptManager.RegisterClientScriptInclude Method](http://msdn.microsoft.com/en-us/library/2552td66.aspx)
Add a control to the page header in ASP.NET
[ "c#", "asp.net", "custom-controls" ]
I have a Django application I'm developing that must make a system call to an external program on the server. In creating the command for the system call, the application takes values from a form and uses them as parameters for the call. I suppose this means that one can essentially use bogus parameters and write arbitrary commands for the shell to execute (e.g., just place a semicolon and then `rm -rf *`). This is bad. While most users aren't malicious, it is a potential security problem. How does one handle these potential points of exploit? **EDIT** (for clarification): The users will see a form that is split up with various fields for each of the parameters and options. However some fields will be available as open text fields. All of these fields are combined and fed to `subprocess.check_call()`. Technically, though, this isn't separated too far from just handing the users a command prompt. This has got to be fairly common, so what do other developers do to sanitize input so that they don't get a [Bobby Tables](http://xkcd.com/327/).
Based on my understanding of the question, I'm assuming you aren't letting the users specify commands to run on the shell, but just arguments to those commands. In this case, you can avoid [shell injection](http://en.wikipedia.org/wiki/Code_injection#Shell_Injection) attacks by using the [`subprocess`](http://docs.python.org/library/subprocess.html) module and *not* using the shell (i.e., use the default `shell=False` parameter in the `subprocess.Popen` constructor).

Oh, and *never* use `os.system()` for any strings containing any input coming from a user.
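To make the distinction concrete, here is a short sketch; the `wc -l` command is just a stand-in for whatever external program the Django app shells out to:

```python
import shlex

user_input = "file.txt; rm -rf *"   # hostile form value

# DANGEROUS: handing one string to the shell lets ';' start a new command:
#   subprocess.check_call("wc -l " + user_input, shell=True)

# SAFE: with an argument list and shell=False (the default), the whole
# value reaches the program as a single argv entry; no shell ever
# parses it, so ';' and '*' have no special meaning:
#   subprocess.check_call(safe_cmd)
safe_cmd = ["wc", "-l", user_input]

# If you genuinely must build a shell string (e.g. for logging or
# display), quote each argument first:
logged = "wc -l " + shlex.quote(user_input)
print(logged)   # wc -l 'file.txt; rm -rf *'
```

The important habit is the argument list: each form field becomes exactly one list element, so nothing a user types can change how many commands run or what they are (though you should still validate the values themselves, e.g. against path traversal).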
By never trusting users. Any data coming from the web browser should be considered tainted. And absolutely do not rely on validating the data via JS or on limiting what can be entered in the form fields; you need to do the checks on the server before passing anything to your external application. **Update after your edit:** no matter how you present the form to users on your front-end, the backend should treat it as though it came from a set of text boxes with big flashing text around them saying "insert whatever you want here!"
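A hedged sketch of what that server-side check might look like: whole-string whitelist matching, rejecting rather than trying to "clean up" bad input. The allowed character class here is purely illustrative; tighten or loosen it to whatever the real parameters allow:

```python
import re

# Hypothetical whitelist -- adjust to what your parameters actually allow.
ALLOWED = re.compile(r"[A-Za-z0-9_.-]+")

def is_valid_param(value):
    # fullmatch requires the *entire* string to match, so a value that
    # merely contains some safe characters still gets rejected.
    return bool(ALLOWED.fullmatch(value))
```

Anything containing spaces, semicolons, globs, or other shell metacharacters fails the check and should cause the request to be rejected outright.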
How do I prevent execution of arbitrary commands from a Django app making system calls?
[ "python", "django", "security" ]
When I pass 'this' to an anonymous function like so: ``` MyClass.prototype.trigger = function(){ window.setTimeout(function(){this.onTimeout();},1000); } ``` I get a "this.onTimeout is not a function"-error. I guess that 'this' is no longer available at the time the anonymous function is executing? So I've been doing this: ``` MyClass.prototype.trigger = function(){ var me = this; window.setTimeout(function(){me.onTimeout();},1000); } ``` Is this really how you're supposed to do things? It kinda works, but it feels weird. Then we have this example: ``` $(function(){ function MyClass(){ this.queue = new Array(); } MyClass.prototype.gotAnswer = function(count){ $('body').append("count:"+count+"<br/>"); } MyClass.prototype.loadAll = function(){ var count = 0; var item; while(item = this.queue.pop()){ count++; var me = this; $.getJSON("answer.html",{},function(data){me.gotAnswer(count);}); } } var o = new MyClass(); o.queue.push(1); o.queue.push(2); o.loadAll(); }); ``` This outputs: ``` 2 2 ``` Shouldn't it output: ``` 1 2 ``` instead? Then I discovered that putting the $.getJSON-statement in another function makes it all work: ``` MyClass.prototype.loadAll = function(){ var count = 0; var item; while(item = this.queue.pop()){ count++; this.newRequest(count); } } MyClass.prototype.newRequest = function(count){ var me = this; $.getJSON("answer.html",null,function(data){ me.gotAnswer(count); }); } ``` This outputs: ``` 1 2 ``` (Or the other way around.) What's happening here? What is the right way to pass variables to an anonymous function? Sorry for the confusing and lengthy post.
What you are experiencing is the correct behavior - it's not a good behavior, but it's part of the language. The value of "this" is reset inside **every** function definition. There are four ways to call a function that have different ways of setting "this". 1. The regular function invocation ``` myFunc(param1, param2); ``` This way of calling a function will always reset "this" to the global object. That's what's happening in your case. 2. Calling it as a method ``` myObj.myFunc(param1, param2); ``` This unsurprisingly sets "this" to whatever object the method is being called on. Here, "this" == "myObj". 3. Apply method invocation ``` myFunc.apply(myObj, [param1, param2]) ``` This is an interesting one - here "this" is set to the object you pass as the first parameter to the apply method - it's like calling a method on an object that does not have that method (be careful that the function is written to be called this way). All functions by default have the apply method. 4. As a constructor (with "new") ``` myNewObj = new MyConstructor(param1, param2); ``` When you call a function this way, "this" is initialized to a new object that inherits methods and properties from your function's prototype property. In this case, the new object would inherit from MyConstructor.prototype. In addition, if you don't return a value explicitly, "this" will be returned. The solution you used is the recommended solution - assign the outside value of "this" to another variable that will still be visible inside your function. The only thing I would change is to call the variable "that" as Török Gábor says - that's sort of the de-facto standard and might make your code easier to read for other programmers.
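The call forms above can be condensed into a runnable sketch (Node-style; the names `whoAmI`, `holder` and `Tag` are invented for illustration, and the `undefined` check just makes the plain call behave the same in strict mode, where "this" is `undefined` rather than the global object):

```javascript
function whoAmI() {
  // Plain call: non-strict "this" is the global object
  // (undefined in strict mode -- treated the same here).
  if (this === undefined || this === globalThis) {
    return "global";
  }
  return this.label;
}

var holder = { label: "holder", whoAmI: whoAmI };

var plainCall = whoAmI();                            // "global"
var methodCall = holder.whoAmI();                    // "holder"
var applyCall = whoAmI.apply({ label: "borrowed" }); // "borrowed"

// Constructor call: "this" is a fresh object inheriting from Tag.prototype.
function Tag(label) { this.label = label; }
var constructed = new Tag("fresh");
```

The setTimeout problem in the question is case 1: the anonymous callback is invoked as a plain function, so its "this" is the global object, not the MyClass instance.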
You are confused about closures. For the first problem, yes, you are right, that is the way it can be done. The only difference is that, by convention, the variable that holds `this` is named `that`. ``` MyClass.prototype.trigger = function(){ var that = this; window.setTimeout(function(){that.onTimeout();},1000); } ``` There is already a nice thread about this on StackOverflow. Check the answers to the question [How does a javascript closure work?](https://stackoverflow.com/questions/111102/how-does-a-javascript-closure-work). Your second problem is an exact duplicate of [Javascript closure inside loops - simple practical example](https://stackoverflow.com/questions/750486/javascript-closure-inside-loops-simple-practical-example).
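For readers who don't follow the links, here is a minimal, self-contained sketch of the loop problem and the extra-function fix (plain callbacks stand in for `$.getJSON`, and all names are invented for illustration):

```javascript
function makeCallback(n) {
  // n is a fresh variable per call, so each callback remembers its
  // own value -- the same reason the newRequest version works.
  return function () { return n; };
}

function collect(fixed) {
  var callbacks = [];
  for (var count = 1; count <= 2; count++) {
    if (fixed) {
      callbacks.push(makeCallback(count));
    } else {
      // All of these closures share the single `count` variable.
      callbacks.push(function () { return count; });
    }
  }
  // By the time the callbacks run, the loop has already finished.
  return callbacks.map(function (cb) { return cb(); });
}
```

`collect(false)` yields `[3, 3]` (every callback reads the shared `count`, which has run past the loop bound), while `collect(true)` yields `[1, 2]`.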
How is data passed to anonymous functions in JavaScript?
[ "javascript", "closures" ]
I have a generic question that I will try to explain using an example. Say I have a table with the fields: "id", "name", "category", "appearances" and "ratio" The idea is that I have several items, each related to a single category and "appears" several times. The ratio field should include the percentage of each item's appearances out of the total number of appearances of items in the category. In pseudo-code what I need is the following: * **For each category** find the total sum of appearances for items related to it. For example it can be done with (`select sum("appearances") from table group by category`) * **For each item** set the ratio value as the item's appearances divided by the sum found for the category above Now I'm trying to achieve this with a single update query, but can't seem to do it. What I thought I should do is: ``` update Table T set T.ratio = T.appearances / ( select sum(S.appearances) from Table S where S.id = T.id ) ``` But MySQL does not accept the alias T in the update column, and I did not find other ways of achieving this. Any ideas?
Following the two answers I received (none of which was complete so I wrote my own), what I eventually did is as follows: ``` UPDATE Table AS target INNER JOIN ( select category, appearances_sum from Table T inner join ( select category as cat, sum(appearances) as appearances_sum from Table group by cat ) as agg where T.category = agg.cat group by category ) as source ON target.category = source.category SET target.probability = target.appearances / source.appearances_sum ``` It works very quickly. I also tried with correlated subquery but it was much slower (orders of magnitude), so I'm sticking with the join.
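The reason the join is needed at all is MySQL's restriction on reading the table being updated inside a subquery. As a hedged illustration of the intended arithmetic (table and column names invented, and using SQLite via Python's `sqlite3` purely because SQLite does *not* have that restriction), the per-category ratio update looks like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER, category TEXT, appearances INTEGER, ratio REAL)"
)
conn.executemany(
    "INSERT INTO items (id, category, appearances) VALUES (?, ?, ?)",
    [(1, "a", 1), (2, "a", 3), (3, "b", 5)],
)

# SQLite lets the correlated subquery read the table being updated, so the
# intended computation can be written directly; in MySQL this exact statement
# fails, which is what the join above works around.
conn.execute("""
    UPDATE items
    SET ratio = appearances * 1.0 /
        (SELECT SUM(i2.appearances) FROM items AS i2
         WHERE i2.category = items.category)
""")

ratios = [row[0] for row in conn.execute("SELECT ratio FROM items ORDER BY id")]
```

Category "a" has 1 + 3 = 4 appearances, so its rows get ratios 0.25 and 0.75; category "b" gets 1.0.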
Use a join right after `UPDATE`, as described in the [Reference Manual – 13.2.11 UPDATE Syntax](http://dev.mysql.com/doc/refman/5.5/en/update.html): ``` UPDATE table1 INNER JOIN table2 ON ... SET table1.foo = value WHERE table2.bla = someothervalue ``` With these kinds of things, always look at the manual. MySQL has a proper reference manual, so it shouldn't be that hard to get the right syntax ;)
MySQL/SQL: Update with correlated subquery from the updated table itself
[ "mysql", "sql", "sql-update", "correlated-subquery" ]
I have a 2d array that looks like this: ``` xx xx ``` What's the most efficient way to add an extra row and column: ``` xxy xxy yyy ``` For bonus points, I'd like to also be able to knock out single rows and columns, so for example in the matrix below I'd like to be able to knock out all of the a's leaving only the x's - specifically I'm trying to delete the nth row and the nth column at the same time - and I want to be able to do this as quickly as possible: ``` xxaxx xxaxx aaaaa xxaxx xxaxx ```
The shortest, in terms of lines of code, that I can think of is, for the first question: ``` >>> import numpy as np >>> p = np.array([[1,2],[3,4]]) >>> p = np.append(p, [[5,6]], 0) >>> p = np.append(p, [[7],[8],[9]],1) >>> p array([[1, 2, 7], [3, 4, 8], [5, 6, 9]]) ``` And for the second question: ``` >>> p = np.array(range(20)) >>> p.shape = (4,5) >>> p array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]) >>> n = 2 >>> p = np.append(p[:n],p[n+1:],0) >>> p = np.append(p[...,:n],p[...,n+1:],1) >>> p array([[ 0, 1, 3, 4], [ 5, 6, 8, 9], [15, 16, 18, 19]]) ```
**A useful alternative answer to the first question, using the examples from tomeedee’s answer, would be to use numpy’s `vstack` and `column_stack` methods:** Given a matrix p, ``` >>> import numpy as np >>> p = np.array([ [1,2] , [3,4] ]) ``` an augmented matrix can be generated by: ``` >>> p = np.vstack( [ p , [5 , 6] ] ) >>> p = np.column_stack( [ p , [ 7 , 8 , 9 ] ] ) >>> p array([[1, 2, 7], [3, 4, 8], [5, 6, 9]]) ``` These methods may be more convenient in practice than np.append(), as they allow 1D arrays to be appended to a matrix without any modification, in contrast to the following scenario: ``` >>> p = np.array([ [ 1 , 2 ] , [ 3 , 4 ] , [ 5 , 6 ] ] ) >>> p = np.append( p , [ 7 , 8 , 9 ] , 1 ) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/dist-packages/numpy/lib/function_base.py", line 3234, in append return concatenate((arr, values), axis=axis) ValueError: arrays must have same number of dimensions ``` **In answer to the second question, a nice way to remove rows and columns is to use logical array indexing as follows:** Given a matrix p, ``` >>> p = np.arange( 20 ).reshape( ( 4 , 5 ) ) ``` suppose we want to remove row 1 and column 2: ``` >>> r , c = 1 , 2 >>> p = p [ np.arange( p.shape[0] ) != r , : ] >>> p = p [ : , np.arange( p.shape[1] ) != c ] >>> p array([[ 0, 1, 3, 4], [10, 11, 13, 14], [15, 16, 18, 19]]) ``` Note - for reformed Matlab users - if you wanted to do these in a one-liner you need to index twice: ``` >>> p = np.arange( 20 ).reshape( ( 4 , 5 ) ) >>> p = p [ np.arange( p.shape[0] ) != r , : ] [ : , np.arange( p.shape[1] ) != c ] ``` This technique can also be extended to remove *sets* of rows and columns, so if we wanted to remove rows 0 & 2 and columns 1, 2 & 3 we could use numpy's **setdiff1d** function to generate the desired logical index: ``` >>> p = np.arange( 20 ).reshape( ( 4 , 5 ) ) >>> r = [ 0 , 2 ] >>> c = [ 1 , 2 , 3 ] >>> p = p [ np.setdiff1d( np.arange( p.shape[0] ), r ) , : ] >>> p = p [ : , np.setdiff1d( np.arange( p.shape[1] ) , c ) ] >>> p array([[ 5, 9], [15, 19]]) ```
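An addition not covered by either answer above: numpy also provides `np.delete`, which removes rows or columns by index directly. A minimal sketch on the same matrix:

```python
import numpy as np

p = np.arange(20).reshape(4, 5)

# np.delete returns a copy with the given index removed along an axis,
# so the result must be reassigned. Remove row 1, then column 2.
p = np.delete(p, 1, axis=0)
p = np.delete(p, 2, axis=1)
```

This leaves `[[0, 1, 3, 4], [10, 11, 13, 14], [15, 16, 18, 19]]`; a list of indices can be passed instead of a single index to knock out several rows or columns at once.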
What's the simplest way to extend a numpy array in 2 dimensions?
[ "python", "arrays", "math", "numpy" ]
New to javascript, but I'm sure this is easy. Unfortunately, most of the google results haven't been helpful. Anyway, I want to set the value of a hidden form element through javascript when a drop down selection changes. I can use jQuery, if it makes it simpler to get or set the values.
If you have HTML like this, for example: ``` <select id='myselect'> <option value='1'>A</option> <option value='2'>B</option> <option value='3'>C</option> <option value='4'>D</option> </select> <input type='hidden' id='myhidden' value=''> ``` All you have to do is [bind a function to the `change` event of the select](http://docs.jquery.com/Events/change#fn), and do what you need there: ``` <script type='text/javascript'> $(function() { $('#myselect').change(function() { // if changed to, for example, the last option, then // $(this).find('option:selected').text() == D // $(this).val() == 4 // get whatever value you want into a variable var x = $(this).val(); // and update the hidden input's value $('#myhidden').val(x); }); }); </script> ``` All things considered, if you're going to be doing a lot of jQuery programming, always have the [documentation](http://docs.jquery.com/Main_Page) open. It is very easy to find what you need there if you give it a chance.
Plain old Javascript: ``` <script type="text/javascript"> function changeHiddenInput (objDropDown) { var objHidden = document.getElementById("hiddenInput"); objHidden.value = objDropDown.value; } </script> <form> <select id="dropdown" name="dropdown" onchange="changeHiddenInput(this)"> <option value="1">One</option> <option value="2">Two</option> <option value="3">Three</option> </select> <input type="hidden" name="hiddenInput" id="hiddenInput" value="" /> </form> ```
Javascript to set hidden form value on drop down change
[ "javascript", "jquery" ]
Why does Java have **transient** fields?
The `transient` keyword in Java is used to indicate that a field should not be part of the serialization process (serialization meaning saved, for example to a file). From the [Java Language Specification, Java SE 7 Edition](http://docs.oracle.com/javase/specs/jls/se7/html/index.html), [Section 8.3.1.3. `transient` Fields](http://docs.oracle.com/javase/specs/jls/se7/html/jls-8.html#jls-8.3.1.3): > Variables may be marked `transient` to > indicate that they are not part of the > persistent state of an object. For example, you may have fields that are derived from other fields; that derivation should be done programmatically, rather than having the derived state persisted via serialization. Here's a `GalleryImage` class which contains an image and a thumbnail derived from the image: ``` class GalleryImage implements Serializable { private Image image; private transient Image thumbnailImage; private void generateThumbnail() { // Generate thumbnail. } private void readObject(ObjectInputStream inputStream) throws IOException, ClassNotFoundException { inputStream.defaultReadObject(); generateThumbnail(); } } ``` In this example, `thumbnailImage` is a thumbnail that is generated by invoking the `generateThumbnail` method. The `thumbnailImage` field is marked as `transient`, so only the original `image` is serialized, rather than persisting both the original image and the thumbnail image. This means that less storage is needed to save the serialized object. (Of course, this may or may not be desirable depending on the requirements of the system -- this is just an example.) At the time of deserialization, the [`readObject`](http://java.sun.com/javase/6/docs/api/java/io/ObjectInputStream.html#readObject()) method is called to perform any operations necessary to restore the state of the object back to the state at which the serialization occurred. 
Here, the thumbnail needs to be generated, so the `readObject` method is overridden so that the thumbnail will be generated by calling the `generateThumbnail` method. For additional information, the article [Discover the secrets of the Java Serialization API](http://www.oracle.com/technetwork/articles/java/javaserial-1536170.html) (which was originally available on the Sun Developer Network) has a section which discusses the use of the `transient` keyword and presents a scenario where it is used to prevent serialization of certain fields.
Before understanding the `transient` keyword, one has to understand the concept of serialization. If the reader already knows about serialization, please skip the first point. ### What is serialization? Serialization is the process of making the object's state persistent. That means the state of the object is converted into a stream of bytes to be used for persisting (e.g. storing the bytes in a file) or transferring (e.g. sending the bytes across a network). In the same way, we can use deserialization to bring back the object's state from those bytes. This is one of the important concepts in Java programming because serialization is widely used in network programming: objects that need to be transmitted through the network have to be converted into bytes. For that purpose, any class whose instances need to be serialized must implement the [`Serializable`](https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html) interface. It is a marker interface without any methods. ### Now what is the `transient` keyword and its purpose? By default, all of an object's variables get converted into the persistent state. In some cases, you may want to avoid persisting some variables because you don't need to persist them. So you can declare those variables as `transient`. If a variable is declared as `transient`, then it will not be persisted. That is the main purpose of the `transient` keyword. 
I want to explain the above two points with the following example (borrowed from [this article](http://www.javabeat.net/what-is-transient-keyword-in-java/)): > ``` > package javabeat.samples; > > import java.io.FileInputStream; > import java.io.FileOutputStream; > import java.io.IOException; > import java.io.ObjectInputStream; > import java.io.ObjectOutputStream; > import java.io.Serializable; > > class NameStore implements Serializable{ > private String firstName; > private transient String middleName; > private String lastName; > > public NameStore (String fName, String mName, String lName){ > this.firstName = fName; > this.middleName = mName; > this.lastName = lName; > } > > public String toString(){ > StringBuffer sb = new StringBuffer(40); > sb.append("First Name : "); > sb.append(this.firstName); > sb.append("Middle Name : "); > sb.append(this.middleName); > sb.append("Last Name : "); > sb.append(this.lastName); > return sb.toString(); > } > } > > public class TransientExample{ > public static void main(String args[]) throws Exception { > NameStore nameStore = new NameStore("Steve", "Middle","Jobs"); > ObjectOutputStream o = new ObjectOutputStream(new FileOutputStream("nameStore")); > // writing to object > o.writeObject(nameStore); > o.close(); > > // reading from object > ObjectInputStream in = new ObjectInputStream(new FileInputStream("nameStore")); > NameStore nameStore1 = (NameStore)in.readObject(); > System.out.println(nameStore1); > } > } > ``` And the output will be the following: > ``` > First Name : Steve > Middle Name : null > Last Name : Jobs > ``` *Middle Name* is declared as `transient`, so it will not be stored in the persistent storage.
Why does Java have transient fields?
[ "java", "field", "transient" ]
I know how to test intersection between a point and a triangle. ...But I don't get how I can move the starting position of the point onto the screen plane precisely by using my mouse coordinates, so that the point's direction changes depending on where the mouse cursor is on the screen. This should also work perfectly no matter which perspective angle I am using in my OpenGL application, so the point's direction would be different for different perspective angles... gluPerspective() is the function I'm talking about.
Well, gonna take a shot and guess what you mean. The guess is that you would like to pick objects with your mouse. Check out: [glUnProject](http://nehe.gamedev.net/data/articles/article.asp?article=13). This transforms the screen coordinates back into 3d world coordinates. [Google](http://www.google.se/search?hl=sv&rlz=1G1GGLQ_SVSE328&q=ray+picking+gluUnProject&btnG=S%C3%B6k&meta=) has more information if you run into problems. Cheers !
Yes, I want to move the point on the screen plane, so for example I could render a cube at the point where my mouse currently is, by using 3d coordinates, and then shoot a line from that position to where my mouse is pointing, so it would hit the triangle in my 3d world, and that's how I could select that object with the mouse. Sorry for being unclear :/ -- Edit: Yay, I got it working with that NeHe tutorial! Thanks, I didn't know it would be that easy! This is the code I'm using now and it works great: ``` void GetOGLPos(int x, int y, GLdouble &posX, GLdouble &posY, GLdouble &posZ){ GLint viewport[4]; GLdouble modelview[16]; GLdouble projection[16]; GLfloat winX, winY, winZ; glGetDoublev(GL_MODELVIEW_MATRIX, modelview); glGetDoublev(GL_PROJECTION_MATRIX, projection); glGetIntegerv(GL_VIEWPORT, viewport); winX = (float)x; winY = (float)viewport[3]-(float)y; glReadPixels(x, int(winY), 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ); } ```
Point-triangle intersection in 3d from mouse coordinates?
[ "c++", "opengl", "3d", "mouse", "intersection" ]
I'm wondering if there's a way to count lines inside a div for example. Say we have a div like so: ``` <div id="content">hello how are you?</div> ``` Depending on many factors, the div can have one, or two, or even four lines of text. Is there any way for the script to know? In other words, are automatic breaks represented in DOM at all?
I am convinced that it is impossible now. It was possible, though. IE7’s implementation of getClientRects did exactly what I wanted. Open [this page](http://home.arcor.de/martin.honnen/javascript/2008/08/test2008082301.html) in IE8, try refreshing it while varying the window width, and see how the number of lines in the first element changes accordingly. Here are the key lines of the javascript from that page: ``` var rects = elementList[i].getClientRects(); var p = document.createElement('p'); p.appendChild(document.createTextNode('\'' + elementList[i].tagName + '\' element has ' + rects.length + ' line(s).')); ``` Unfortunately for me, Firefox always returns one client rectangle per element, and IE8 does the same now. (Martin Honnen’s page works today because IE renders it in IE compat view; press F12 in IE8 to play with different modes.) This is sad. It looks like once again Firefox’s literal but worthless implementation of the spec won over Microsoft’s useful one. Or am I missing a situation where the new getClientRects may help a developer?
If the div's size is dependent on the content (which I assume to be the case from your description), then you can retrieve the div's height using: ``` var divHeight = document.getElementById('content').offsetHeight; ``` And divide by the font line height: ``` document.getElementById('content').style.lineHeight; ``` Or, to get the line height if it hasn't been set inline (note that `getPropertyValue` takes the hyphenated CSS property name): ``` var element = document.getElementById('content'); document.defaultView.getComputedStyle(element, null).getPropertyValue("line-height"); ``` You will also need to take padding and inter-line spacing into account. **EDIT** Fully self-contained test, explicitly setting line-height: ``` function countLines() { var el = document.getElementById('content'); var divHeight = el.offsetHeight; var lineHeight = parseInt(el.style.lineHeight); var lines = divHeight / lineHeight; alert("Lines: " + lines); } ``` ``` <body onload="countLines();"> <div id="content" style="width: 80px; line-height: 20px"> hello how are you? hello how are you? hello how are you? hello how are you? </div> </body> ```
How can I count text lines inside an DOM element? Can I?
[ "javascript", "html", "dom" ]
Is there an equivalent to 'intellisense' for Python? Perhaps I shouldn't admit it, but I find having intellisense really speeds up the 'discovery phase' of learning a new language. For instance, switching from VB.net to C# was a breeze due to snippets and intellisense helping me along.
[This](http://blog.dispatched.ch/2009/05/24/vim-as-python-ide/) blog entry explains setting Vim up as a Python IDE, he covers Intellisense-like functionality: [![Python Intellsense](https://i.stack.imgur.com/XG5NA.png)](https://i.stack.imgur.com/XG5NA.png) (source: [dispatched.ch](http://blog.dispatched.ch/wp-content/uploads/2009/05/omnicompletion.png)) This is standard in Vim 7. There are a number of other very useful plugins for python development in Vim, such as [Pyflakes](http://www.vim.org/scripts/script.php?script_id=2441) which checks code on the fly and [Python\_fn.vim](http://www.vim.org/scripts/script.php?script_id=30) which provides functionality for manipulating python indentation & code blocks.
Have a look at [python tools for visual studio](https://web.archive.org/web/20180126035502/http://pytools.codeplex.com:80/); they provide code completion (a.k.a. intellisense), debugging, etc. Below is a screenshot of the interactive shell for python showing code completion. ![enter image description here](https://i.stack.imgur.com/UhMnP.png)
Python and Intellisense
[ "python", "ide", "intellisense" ]
How would I set up my controls for the following situation? I have a parent container, for example a `GroupBox`. Inside this parent container I have two similar controls, for example `ListBox`es, next to each other. They both have the same size, so the border between the two of them is exactly in the middle of the `GroupBox`. Now when the `GroupBox` is resized, I want the `ListBox`es to be resized as well, but the two should always be the same size as each other, so the border between them stays in the middle of the `GroupBox`. So, how would I set up the properties of these three controls to achieve my desired behaviours?
You need another container; the `TableLayoutPanel` is the best solution. Use 1 row and 2 columns and dock it (Dock = Fill) in the group box. The width of both columns should be set to 50%. Next you can add your controls to the individual cells and dock them (Dock = Fill) as well.
Perhaps a `SplitContainer` with the two halves set evenly and `IsSplitterFixed` set to `true` (to stop the user moving it): ``` [STAThread] static void Main() { Application.EnableVisualStyles(); Application.Run(new Form { Controls = { new SplitContainer { Width = 200, IsSplitterFixed = true, SplitterDistance = 100, SplitterWidth = 1, Dock = DockStyle.Fill, Panel1 = { Controls = { new ListBox { IntegralHeight = false, Dock = DockStyle.Fill, BackColor = Color.Blue, Items = {"abc","def","ghi"} } } }, Panel2 = { Controls = { new ListBox { Dock = DockStyle.Fill, BackColor = Color.Red, IntegralHeight = false, Items = {"jkl","mno","pqr"} } } } }} }); } ```
c#/winforms: How can two controls dynamically share available space?
[ "c#", "winforms" ]
I have a date picker library written for MooTools that I want to port to Prototype. It's looking to be a long, arduous task and I'm wondering if anyone has seen or written guides on the differences between the two. A translation dictionary of sorts, where I can look up a Moo function and see the Prototype equivalent, or vice versa. The same would also be nice for jQuery to Prototype.
I've wanted to find something like this for a while. It got to the point where I wanted to make my own, and a few weeks ago I started [ArtLung Rosetta](http://github.com/artlung/Artlung-Rosetta/), an effort to have a "Hello World" page with various techniques in many major libraries. It's a work in progress. **However,** a few weeks ago I came across this EXCELLENT resource by **Matthias Schütz**: **[The JavaScript Library Comparison Matrix](http://matthiasschuetz.com/javascript-framework-matrix/en/)**. That reference site has syntax comparisons with links to relevant documentation for: [DOM Ready](http://matthiasschuetz.com/javascript-framework-matrix/en/dom-ready), [DOM Basics](http://matthiasschuetz.com/javascript-framework-matrix/en/dom-basics), [DOM Filtering](http://matthiasschuetz.com/javascript-framework-matrix/en/dom-filtering), [DOM Manipulation](http://matthiasschuetz.com/javascript-framework-matrix/en/dom-manipulation), [Effects](http://matthiasschuetz.com/javascript-framework-matrix/en/effects), [Transitions](http://matthiasschuetz.com/javascript-framework-matrix/en/transitions), [Events](http://matthiasschuetz.com/javascript-framework-matrix/en/events), [Custom Functions](http://matthiasschuetz.com/javascript-framework-matrix/en/custom-functions), [Ajax](http://matthiasschuetz.com/javascript-framework-matrix/en/ajax), and [Classes](http://matthiasschuetz.com/javascript-framework-matrix/en/classes). I enjoy comparing the various libraries' approaches -- I find I learn a great deal about the libraries themselves, and about my own programming style and how I can improve it, by carrying out the same task in multiple frameworks.
I'll take a stab at this. If you haven't found any resources after a thorough Google search, I'd say you're stuck with just opening up the APIs of each and consulting them back and forth and looking at the examples. I'm actually going from Prototype to jQuery on a project right now. (Aside: I'm much happier with jQuery, having used both for a while.) My basic workflow is just referencing the API as needed. I would be surprised if there was such a mapping of one to the other, though. In either framework, there are many ways of doing any given task, and on top of that they work in different and important ways. Sure, there's some 1:1 correspondence like "addClass" vs. "addClassName", but jQuery DOM manipulation typically works on sets of elements (which may be just a set of 1) while Prototype works on single elements (which may be wrapped by an each statement to work with sets). Fortunately, both have pretty good and easy-to-use/reference APIs. Good luck.
Are there any guides on converting between Javascript frameworks?
[ "javascript", "mootools", "prototypejs", "javascript-framework", "porting" ]
I'm trying to determine how to ***count*** the matching rows on a table using the EntityFramework. The problem is that each row might have many megabytes of data (in a Binary field). Of course the SQL would be something like this: ``` SELECT COUNT(*) FROM [MyTable] WHERE [fkID] = '1'; ``` I could load all of the rows and *then* find the Count with: ``` var owner = context.MyContainer.Where(t => t.ID == '1'); owner.MyTable.Load(); var count = owner.MyTable.Count(); ``` But that is grossly inefficient. Is there a simpler way? --- EDIT: Thanks, all. I've moved the DB from a private attached so I can run profiling; this helps but caused confusion I didn't expect. And my real data is a bit deeper: I'll use **Trucks** carrying **Pallets** of **Cases** of **Items** -- and I don't want the **Truck** to leave unless there is at least one **Item** in it. My attempts are shown below. The part I don't get is that CASE\_2 never accesses the DB server (MSSQL). ``` var truck = context.Truck.FirstOrDefault(t => (t.ID == truckID)); if (truck == null) return "Invalid Truck ID: " + truckID; var dlist = from t in ve.Truck where t.ID == truckID select t.Driver; if (dlist.Count() == 0) return "No Driver for this Truck"; var plist = from t in ve.Truck where t.ID == truckID from r in t.Pallet select r; if (plist.Count() == 0) return "No Pallets are in this Truck"; #if CASE_1 /// This works fine (using 'plist'): var list1 = from r in plist from c in r.Case from i in c.Item select i; if (list1.Count() == 0) return "No Items are in the Truck"; #endif #if CASE_2 /// This never executes any SQL on the server. var list2 = from r in truck.Pallet from c in r.Case from i in c.Item select i; bool ok = (list2.Count() > 0); if (!ok) return "No Items are in the Truck"; #endif #if CASE_3 /// Forced loading also works, as stated in the OP... bool ok = false; foreach (var pallet in truck.Pallet) { pallet.Case.Load(); foreach (var kase in pallet.Case) { kase.Item.Load(); var item = kase.Item.FirstOrDefault(); if (item != null) { ok = true; break; } } if (ok) break; } if (!ok) return "No Items are in the Truck"; #endif ``` And the SQL resulting from CASE\_1 is piped through **sp\_executesql**, but: ``` SELECT [Project1].[C1] AS [C1] FROM ( SELECT cast(1 as bit) AS X ) AS [SingleRowTable1] LEFT OUTER JOIN (SELECT [GroupBy1].[A1] AS [C1] FROM ( SELECT COUNT(cast(1 as bit)) AS [A1] FROM [dbo].[PalletTruckMap] AS [Extent1] INNER JOIN [dbo].[PalletCaseMap] AS [Extent2] ON [Extent1].[PalletID] = [Extent2].[PalletID] INNER JOIN [dbo].[Item] AS [Extent3] ON [Extent2].[CaseID] = [Extent3].[CaseID] WHERE [Extent1].[TruckID] = '....' ) AS [GroupBy1] ) AS [Project1] ON 1 = 1 ``` [*I don't really have Trucks, Drivers, Pallets, Cases or Items; as you can see from the SQL the Truck-Pallet and Pallet-Case relationships are many-to-many -- although I don't think that matters. My real objects are intangibles and harder to describe, so I changed the names.*]
Query syntax: ``` var count = (from o in context.MyContainer where o.ID == '1' from t in o.MyTable select t).Count(); ``` Method syntax: ``` var count = context.MyContainer .Where(o => o.ID == '1') .SelectMany(o => o.MyTable) .Count() ``` Both generate the same SQL query.
I think you want something like ``` var count = context.MyTable.Count(t => t.MyContainer.ID == '1'); ``` (edited to reflect comments)
How to COUNT rows within EntityFramework without loading contents?
[ "c#", "database", "entity-framework" ]
I have a form with a DataGridView. When the user starts entering a value for the first cell in the first row, they can also press F2, which submits that value, but I can't access the cell value unless the user hits Tab and goes to another cell. The following is my code for accessing the cell value when F2 is hit: ``` protected override bool ProcessCmdKey(ref Message msg, Keys keyData) { var key = new KeyEventArgs(keyData); ShortcutKey(this, key); return base.ProcessCmdKey(ref msg, keyData); } protected virtual void ShortcutKey(object sender, KeyEventArgs key) { switch (key.KeyCode) { case Keys.F2: MessageBox.Show(dataGridView1.SelectedCells[0].Value.ToString()); break; } } ``` `dataGridView1.SelectedCells[0].Value` returns null.
@BFree thanks, your code inspired me ;) Why not just call `this.dataGridView1.EndEdit();` before `MessageBox.Show(dataGridView1.SelectedCells[0].Value.ToString());`? This code works just fine: ``` protected override bool ProcessCmdKey(ref Message msg, Keys keyData) { var key = new KeyEventArgs(keyData); ShortcutKey(this, key); return base.ProcessCmdKey(ref msg, keyData); } protected virtual void ShortcutKey(object sender, KeyEventArgs key) { switch (key.KeyCode) { case Keys.F2: dataGridView1.EndEdit(); MessageBox.Show(dataGridView1.SelectedCells[0].Value.ToString()); break; } } ```
How about doing something like this instead. Hook into the DataGridView's "EditingControlShowing" event and capture the F2 there. Some code: ``` public partial class Form1 : Form { private DataTable table; public Form1() { InitializeComponent(); this.dataGridView1.EditingControlShowing += new DataGridViewEditingControlShowingEventHandler(HandleEditingControlShowing); this.table = new DataTable(); table.Columns.Add("Column"); table.Rows.Add("Row 1"); this.dataGridView1.DataSource = table; } private void HandleEditingControlShowing(object sender, DataGridViewEditingControlShowingEventArgs e) { var ctl = e.Control as DataGridViewTextBoxEditingControl; if (ctl == null) { return; } ctl.KeyDown -= ctl_KeyDown; ctl.KeyDown += new KeyEventHandler(ctl_KeyDown); } private void ctl_KeyDown(object sender, KeyEventArgs e) { var box = sender as TextBox; if (box == null) { return; } if (e.KeyCode == Keys.F2) { this.dataGridView1.EndEdit(); MessageBox.Show(box.Text); } } } ``` The idea is simple: you hook into the EditingControlShowing event. Every time a cell enters edit mode, that event gets fired. The cool thing is, it exposes the actual underlying control, so you can cast it to the actual WinForms control and hook into all its events as you normally would.
accessing Datagridview cell value while its value is being edited
[ "c#", "datagridview", "cell" ]
More WCF woes... :) All my workflows implement the same 3 methods. After a lot of copy and paste, I decided to make them inherit from the same interface: ``` [ServiceContract(Namespace = "http://schema.company.com/messages/")] public interface IBasicContract<TRequest, TResponse> where TRequest : class where TResponse : class { [OperationContract(Name = "GetReport", Action = "http://schema.company.com/messages/GetReport", ReplyAction = "http://schema.company.com/messages/GetReportResponse")] TResponse GetReport(TRequest inquiry); [OperationContract(Name = "GetRawReport", Action = "http://schema.company.com/messages/GetRawReport", ReplyAction = "http://schema.company.com/messages/GetRawReportResponse")] string GetRawReport(string guid); [OperationContract(Name = "GetArchiveReport", Action = "http://schema.company.com/messages/GetArchiveReport", ReplyAction = "http://schema.company.com/messages/GetArchiveReportResponse")] TResponse GetArchiveReport(string guid); } ``` I have also decided to create a common implementation of the service client: ``` public class BasicSvcClient<TRequest, TResponse> : ClientBase<IBasicContract<TRequest, TResponse>>, IBasicContract<TRequest, TResponse> where TRequest : class where TResponse : class { public BasicSvcClient() { } public BasicSvcClient(string endpointConfigurationName) : base(endpointConfigurationName) { } public BasicSvcClient(string endpointConfigurationName, string remoteAddress) : base(endpointConfigurationName, remoteAddress) { } public BasicSvcClient(string endpointConfigurationName, EndpointAddress remoteAddress) : base(endpointConfigurationName, remoteAddress) { } public BasicSvcClient(Binding binding, EndpointAddress remoteAddress) : base(binding, remoteAddress) { } public TResponse GetReport(TRequest inquiry) { return Channel.GetReport(inquiry); } public string GetRawReport(string guid) { return Channel.GetRawReport(guid); } public TResponse GetArchiveReport(string guid) { return Channel.GetArchiveReport(guid); } } 
``` The problem is when I try to use this: ``` using (var client = new BasicSvcClient<TRequest, TResponse>()) { var response = client.GetReport(inquiry); context.Response.ContentType = "text/xml"; context.Response.Write(response.AsXML()); } ``` I am always getting an error saying that it cannot find the configuration for contract IBasicContract, in that weird syntax that .NET uses: > Could not find default endpoint > element that references contract > 'BasicWorkflow.IBasicContract`2... I tried doing this: ``` using (var client = new BasicSvcClient<TRequest, TResponse>("myConfig")) ``` It doesn't help - it's still also looking for that specific contract. I know that the ServiceContract attribute has a ConfigurationName parameter, but I cannot use that at compile time, because I have *many* workflows I'm calling from the same program (and therefore many configuration entries). Is there a way to set the ConfigurationName at runtime? I thought that this is what the ClientBase constructor was supposed to do, but apparently not. [Edit] This is the endpoint in the .config file, I don't believe it's very helpful in this case: ``` <endpoint address="https://localhost/services/Contract.svc" binding="basicHttpBinding" bindingConfiguration="httpsDataEndpoint" contract="IContract" name="IContractSvc" /> ``` [Edit2] Ok... I found a way that's working, though I'm still not completely satisfied with it: ``` using (var wf = new BasicSvcClient<TRequest, TResponse>( new BasicHttpBinding("httpsDataEndpoint"), new EndpointAddress("https://localhost/services/Contract.svc"))) ``` The only problem I have now is that I would prefer to retrieve the endpoint address from the .config file (using the actual contract name, like IContract). Anybody who can help me with that part? [Edit3] Finally found the complete solution :) Long live Reflector! 
``` var cf = (ClientSection) ConfigurationManager.GetSection("system.serviceModel/client"); foreach (ChannelEndpointElement endpoint in cf.Endpoints) { if (endpoint.Name != "ContractSvc") continue; using (var wf = new BasicSvcClient<TRequest, TResponse>( new BasicHttpBinding("httpsDataEndpoint"), new EndpointAddress(endpoint.Address.ToString()))) { //... call wf.GetReport() } break; } ```
"that weird syntax that .NET uses" is actually the type name at runtime for a generic type bound to specific types. Typename`n[[Type],...] where n denotes the number of type arguments contained in your generic type. What does your endpoint configuration look like, then?
Why don't you specify a name for your contract in the ServiceContract attribute: ``` [ ServiceContract ( Namespace = "http://schema.company.com/messages/", Name="MyBasicContract" ) ] ``` If you don't explicitly specify a name, it will default to the qualified name of your interface in "that weird syntax that .NET uses".
Inheriting from a generic contract in WCF
[ "c#", "wcf", "generics" ]
I'm using Spring with Hibernate as a JPA provider and am trying to get a @OneToMany (a contact having many phone numbers) to save the foreign key in the phone numbers table. From my form I get a Contact object that has a list of Phone(numbers) in it. The Contact gets persisted properly (Hibernate fetches a PK from the specified sequence). The list of Phone(numbers) also gets persisted with a correct PK, but there's no FK to the Contacts table. ``` public class Contact implements Serializable { @OneToMany(mappedBy = "contactId", cascade = CascadeType.ALL, fetch=FetchType.EAGER) private List<Phone> phoneList; } public class Phone implements Serializable { @JoinColumn(name = "contact_id", referencedColumnName = "contact_id") @ManyToOne private Contact contactId; } @Repository("contactDao") @Transactional(readOnly = true) public class ContactDaoImpl implements ContactDao { @Transactional(readOnly = false, propagation = Propagation.REQUIRES_NEW) public void save(Contact c) { em.persist(c); em.flush(); } } @Controller public class ContactController { @RequestMapping(value = "/contact/new", method = RequestMethod.POST) public ModelAndView newContact(Contact c) { ModelAndView mv = new ModelAndView("contactForm"); contactDao.save(c); mv.addObject("contact", c); return mv; } } ``` Hopefully I got all of the relevant bits above, otherwise please let me know.
You have to manage the Java relationships yourself. For this kind of thing you need something like: ``` @Entity public class Contact { @Id private Long id; @OneToMany(cascade = CascadeType.PERSIST, mappedBy = "contact") private List<Phone> phoneNumbers; public void addPhone(PhoneNumber phone) { if (phone != null) { if (phoneNumbers == null) { phoneNumbers = new ArrayList<Phone>(); } phoneNumbers.add(phone); phone.setContact(this); } } ... } @Entity public class Phone { @Id private Long id; @ManyToOne private Contact contact; ... } ```
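The annotation-independent part of this answer, wiring *both* sides of the association inside `addPhone`, can be sketched in plain Java with the persistence annotations stripped out (a sketch only, not the JPA mapping itself):

```java
import java.util.ArrayList;
import java.util.List;

// The point of the helper: it links both sides of the bidirectional
// relationship, so the owning side (Phone.contact, which maps the FK
// column) is non-null by the time the object graph is persisted.
class Contact {
    private final List<Phone> phoneNumbers = new ArrayList<>();

    void addPhone(Phone phone) {
        if (phone != null) {
            phoneNumbers.add(phone);
            phone.setContact(this); // owning side: this is what would write the FK
        }
    }

    List<Phone> getPhoneNumbers() { return phoneNumbers; }
}

class Phone {
    private Contact contact;

    void setContact(Contact c) { contact = c; }
    Contact getContact() { return contact; }
}
```

If only the collection side is populated and `Phone.contact` stays null, the provider has nothing to put in the foreign-key column, which matches the symptom in the question.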
In reply to Cletus' answer, I would say that it's important to have the `@Column` annotation on the id fields, as well as all the sequence stuff. An alternative to using the mappedBy parameter of the `@OneToMany` annotation is to use the `@JoinColumn` annotation. As a kind of aside, your implementation of addPhone needs looking at. It should probably be something like this: ``` public void addPhone(PhoneNumber phone) { if (phone == null) { return; } else { if (phoneNumbers == null) { phoneNumbers = new ArrayList<Phone>(); } phoneNumbers.add(phone); phone.setContact(this); } } ```
JPA not saving foreign key to @OneToMany relation
[ "java", "spring", "jpa" ]
I want to have a tree in memory, where each node can have multiple children. I would also need to reference this tree as a flat structure by index. For example: ``` a1 b1 b2 b3 c1 d1 e1 e2 d2 f1 ``` would be represented as a flat structure as I laid out (i.e. a1=0, b1=1, d1=5, etc.). Ideally I would want lookup by index to be O(1), and support insert, add, remove, etc., with a bonus of it being thread-safe, but if that is not possible, let me know.
If you have a reasonably balanced tree, you can get indexed references in O(log n) time - just store in each node a count of the number of nodes under it, and update the counts along the path to a modified leaf when you do inserts, deletions, etc. Then you can compute an indexed access by looking at the node counts on each child when you descend from the root. How important is it to you that indexed references be O(1) instead of O(log n)? If modifications are infrequent with respect to accesses, you could compute a side vector of pointers to nodes when you are finished with a set of modifications, by doing a tree traversal. Then you could get O(1) access to individual nodes by referencing the side vector, until the next time you modify the tree. The cost is that you have to do an O(n) tree traversal after doing modifications before you can get back to O(1) node lookups. Is your access pattern such that this would be a good tradeoff for you?
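The subtree-count idea in this answer is easy to make concrete. Here is a minimal sketch (in Java as an illustration; the class and method names are mine) of a binary-tree node that maintains subtree sizes on insert and answers indexed lookups in O(height), i.e. O(log n) when the tree is balanced:

```java
final class IndexedNode {
    int value;
    IndexedNode left, right;
    int size = 1; // number of nodes in the subtree rooted here

    static int size(IndexedNode n) { return n == null ? 0 : n.size; }

    // Plain BST insert that keeps the subtree counts up to date on the
    // path from the root to the modified leaf.
    static IndexedNode insert(IndexedNode n, int value) {
        if (n == null) {
            IndexedNode leaf = new IndexedNode();
            leaf.value = value;
            return leaf;
        }
        if (value < n.value) n.left = insert(n.left, value);
        else                 n.right = insert(n.right, value);
        n.size = 1 + size(n.left) + size(n.right);
        return n;
    }

    // Indexed lookup: compare the index against the left subtree's size
    // and descend, so the cost is proportional to the tree's height.
    static IndexedNode get(IndexedNode n, int index) {
        int leftSize = size(n.left);
        if (index < leftSize) return get(n.left, index);
        if (index == leftSize) return n;
        return get(n.right, index - leftSize - 1);
    }
}
```

Indexing here follows in-order position; a self-balancing variant (red-black, AVL) would keep the height, and hence the lookup cost, logarithmic under arbitrary inserts and deletes.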
I use something similar to this in a generic red-black tree. Essentially, to start you need a wrapper class like Tree, which contains the actual nodes; this is based on being able to reference the tree by index. So you can do something like the following to set up a tree with a key and value: ``` class Tree<K, V> { //constructors and any methods you need //Access the Tree like an array public V this[K key] { get { //This works just like a getter or setter return SearchForValue(key); } set { //like a setter, you can use value for the value given if(SearchForValue(key) == null) { // node for index doesn't exist, add it AddValue(key, value); } else { /* node at index already exists... do something */ } } } } ``` This works on the assumption that you already know how to create a tree, but want to be able to do stuff like access the tree by index. Now you can do something like so: ``` Tree<string,string> t = new Tree<string,string>(); t["a"] = "Hello World"; t["b"] = "Something else"; Console.WriteLine("t at a is: {0}", t["a"]); ``` Finally, for thread safety, you can add an object to your Tree class and on any method exposed to the outside world simply call ``` lock(threadsafetyobject) { /*Code you're protecting */ } ``` Finally, if you want something cooler for thread safety, I use an object in my tree called a [ReaderWriterLockSlim](http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.aspx) that allows multiple reads, but locks down when you want to do a write, which is especially important if you're changing the tree's structure, like doing a rotation whilst another thread is trying to do a read. One last thing: I rewrote the code to do this from memory, so it may not compile, but it should be close :)
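The indexer-plus-reader/writer-lock pattern described in this answer translates directly to other languages; here is a rough Java analogue (class and field names are mine, with `ReentrantReadWriteLock` standing in for `ReaderWriterLockSlim` and `TreeMap` standing in for a hand-rolled red-black tree):

```java
import java.util.TreeMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of a key-indexed tree wrapper with reader/writer locking.
class ConcurrentIndexedTree<K extends Comparable<K>, V> {
    private final TreeMap<K, V> tree = new TreeMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public V get(K key) {
        lock.readLock().lock(); // many readers may hold this simultaneously
        try {
            return tree.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        lock.writeLock().lock(); // exclusive: rebalancing must not race with readers
        try {
            tree.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The payoff is the same as with `ReaderWriterLockSlim`: lookups proceed concurrently, while structural changes such as rotations run with exclusive access.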
How to implement a tree structure, but still be able to reference it as a flat array by index?
[ "c#", "tree" ]
When marshaling objects between AppDomains in .NET, the CLR will either serialize the object (if it has the `Serializable` attribute) or generate a proxy (if it inherits from `MarshalByRef`). With strings, however, the CLR will just pass the reference to the string object into the new AppDomain. The CLR still ensures integrity, since .NET strings are immutable and any change by the second AppDomain to the string will not affect the original object. Which brings me to my question: is there a way to tell the CLR that my custom type is immutable, and that when used in remoting it should just pass the reference to the object as it does with the string class?
Marshalling is actually [fairly tricky](http://blogs.msdn.com/cbrumme/archive/2003/06/01/51466.aspx). The behaviour you are describing is called "marshal-by-bleed"; the runtime uses it to marshal strings (sometimes) and to marshal System.Threading.Thread ALWAYS. As far as I can tell, you have no control over this (it's mentioned in the article that you can define custom marshalling behaviour, but I can not find any documentation on it). You could potentially pass an IntPtr around and use unsafe code to simulate this, but it smells like a huge hack to me.
I don't think so, no. I believe that this, like the primitives, is handled directly by the runtime.
Can I tell the CLR to marshal immutable objects between AppDomains by reference?
[ "c#", ".net", "clr", "remoting", "appdomain" ]
The code below is for a simple newsletter signup widget. I'm sure there's a way to make it more concise, any ideas? ``` var email_form = $('.widget_subscribe form'); var email_submit = $('.widget_subscribe .submit'); var email_link = $('.widget_subscribe .email'); // Hide the email entry form when the page loads email_form.hide(); // Show the form when the email link is clicked $(email_link).click( function () { $(this).toggle(); $(email_form).toggle(); return false; }); // Hide the form when the form submit is clicked $(email_submit).click( function () { $(email_link).toggle(); $(email_form).toggle(); }); // Clear/reset the email input on focus $('input[name="email"]').focus( function () { $(this).val(""); }).blur( function () { if ($(this).val() == "") { $(this).val($(this)[0].defaultValue); } }); ```
You have some similar code here. ``` // Show the form when the email link is clicked $(email_link).click( function () { $(this).toggle(); $(email_form).toggle(); return false; }); // Hide the form when the form submit is clicked $(email_submit).click( function () { $(email_link).toggle(); $(email_form).toggle(); }); ``` It could be refactored so the similarity is obvious. ``` // Show the form when the email link is clicked $(email_link).click( function () { $(email_link).toggle(); $(email_form).toggle(); return false; }); // Hide the form when the form submit is clicked $(email_submit).click( function () { $(email_link).toggle(); $(email_form).toggle(); }); ``` So you could wrap toggling the link and the form into a function. ``` var toggleEmailLinkAndForm = function () { $(email_link).toggle(); $(email_form).toggle(); } $(email_link).click(toggleEmailLinkAndForm); $(email_submit).click(toggleEmailLinkAndForm); ``` And as others have pointed out, you can drop the redundant $()s. ``` var toggleEmailLinkAndForm = function () { email_link.toggle(); email_form.toggle(); } email_link.click(toggleEmailLinkAndForm); email_submit.click(toggleEmailLinkAndForm); ```
It's already pretty concise; there's not much more you can do. Anywhere you have $(email\_submit) you can just have email\_submit, because you've already wrapped it in $() (which makes it a jQuery object). E.g.: ``` email_submit.click( function () { email_link.toggle(); email_form.toggle(); }); ```
How can I refactor this jQuery code?
[ "javascript", "jquery" ]
In WPF, where can I **save a value** when in one UserControl, then later in another UserControl **access that value** again, something like session state in web programming, e.g.: **UserControl1.xaml.cs:** ``` Customer customer = new Customer(12334); ApplicationState.SetValue("currentCustomer", customer); //PSEUDO-CODE ``` **UserControl2.xaml.cs:** ``` Customer customer = ApplicationState.GetValue("currentCustomer") as Customer; //PSEUDO-CODE ``` # ANSWER: Thanks, Bob, here is the code that I got to work, based on yours: ``` public static class ApplicationState { private static Dictionary<string, object> _values = new Dictionary<string, object>(); public static void SetValue(string key, object value) { if (_values.ContainsKey(key)) { _values.Remove(key); } _values.Add(key, value); } public static T GetValue<T>(string key) { if (_values.ContainsKey(key)) { return (T)_values[key]; } else { return default(T); } } } ``` To save a variable: ``` ApplicationState.SetValue("currentCustomerName", "Jim Smith"); ``` To read a variable: ``` MainText.Text = ApplicationState.GetValue<string>("currentCustomerName"); ```
Something like this should work. ``` public static class ApplicationState { private static Dictionary<string, object> _values = new Dictionary<string, object>(); public static void SetValue(string key, object value) { _values.Add(key, value); } public static T GetValue<T>(string key) { return (T)_values[key]; } } ```
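As an aside, the same pattern is easy to sketch outside .NET. Here is a rough Java analogue (names are mine) of the string-keyed application-state bag, with the unchecked cast playing the role of the generic `GetValue<T>`:

```java
import java.util.HashMap;
import java.util.Map;

// A process-wide, string-keyed bag of values, analogous to the
// static ApplicationState class in the answer above.
final class ApplicationState {
    private static final Map<String, Object> values = new HashMap<>();

    private ApplicationState() {}

    static void setValue(String key, Object value) {
        values.put(key, value); // put() replaces any existing entry
    }

    @SuppressWarnings("unchecked")
    static <T> T getValue(String key) {
        return (T) values.get(key); // unchecked: the caller asserts the stored type
    }
}
```

Note that a plain `Map.put` already overwrites existing keys, which is why the question's `Remove`-then-`Add` dance is needed in C# with `Dictionary.Add` but has no counterpart here.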
[The Application class](http://msdn.microsoft.com/en-us/library/ms743714.aspx#The_Application_Class) already has this functionality built in. ``` // Set an application-scope resource Application.Current.Resources["ApplicationScopeResource"] = Brushes.White; ... // Get an application-scope resource Brush whiteBrush = (Brush)Application.Current.Resources["ApplicationScopeResource"]; ```
How can I save global application variables in WPF?
[ "c#", "wpf", "global-variables" ]
I'm looking to be able to sort an array of associative arrays on more than one column. To further complicate it, I'd like to be able to set specific sort options per key/column. I have a set of data that is similar to a db query's result set, but it doesn't actually come from one so I need to sort it in PHP rather than SQL. ``` [ ['first_name' => 'Homer', 'last_name' => 'Simpson', 'city' => 'Springfield', 'state' => 'Unknown', 'zip' => '66735'], ['first_name' => 'Patty', 'last_name' => 'Bouvier', 'city' => 'Scottsdale', 'state' => 'Arizona', 'zip' => '85250'], ['first_name' => 'Moe', 'last_name' => 'Szyslak', 'city' => 'Scottsdale', 'state' => 'Arizona', 'zip' => '85255'], ['first_name' => 'Nick', 'last_name' => 'Riviera', 'city' => 'Scottsdale', 'state' => 'Arizona', 'zip' => '85255'], ]; ``` I would like to be able to sort it similar to what could be done with a DB query. Oh, and sometimes a column/key needs to be specified by number. What I had in mind was something similar to this: ``` $sortOptions = array( array( 'city', SORT_ASC, SORT_STRING), array( 'zip', SORT_DESC, SORT_NUMERIC), array( 2, SORT_ASC, SORT_STRING) // 2='last_name' ); $sorter = new MultiSort($data, $sortOptions); $sortedData = $sorter->getSortedArray(); print_r($jmsSorted); ``` What I would like to end up with is this: ``` Array ( [0] => Array ( [first_name] => Nick [last_name] => Riviera [city] => Scottsdale [state] => Arizona [zip] => 85255 ) [1] => Array ( [first_name] => Moe [last_name] => Szyslak [city] => Scottsdale [state] => Arizona [zip] => 85255 ) [2] => Array ( [first_name] => Patty [last_name] => Bouvier [city] => Scottsdale [state] => Arizona [zip] => 85250 ) [3] => Array ( [first_name] => Homer [last_name] => Simpson [city] => Springfield [state] => Unknown [zip] => 66735 ) ) ``` **UPDATE:** I think that ideally, a solution would result in dynamically creating ``` array_multisort( $city, SORT_ASC, SORT_STRING, $zip, SORT_DESC, SORT_NUMERIC, $last_name, SORT_ASC, SORT_STRING, 
$inputArray); ``` The problem is that **I don't want to have to "hard code" those key names in there**. I tried creating a solution based upon **Example #3 Sorting database results** from the [`array_multisort()`](http://us.php.net/manual/en/function.array-multisort.php) documentation that ended up using `array_multisort()` but I cannot seem to find a way to use my dynamically built argument list for `array_multisort()`. My attempt was to "chain" those arguments together into an array and then ``` call_user_func_array( 'array_multisort', $functionArgs); ``` That results in an > Warning: Parameter 2 to array\_multisort() expected to be a reference, value given in...
Here is what I finally settled on for being able to sort multi-dimensional arrays. Both of the answers above are good but I was also looking for something flexible. I definitely don’t think there is any one “right” answer, but this is what works for my needs and is flexible. As you can see from my `@link` in the comment of `_usortByMultipleKeys()` it was adapted from a comment in the PHP manual that currently doesn't seem to exist, but I believe <http://www.php.net/manual/en/function.usort.php#104398> is a new version of the original comment. I have not explored using that new suggestion. ``` /** * Sort the resultSet. * * Usage: $sortOptions = array( * 'section', // Defaults to SORT_ASC * 'row' => SORT_DESC, * 'retail_price' => SORT_ASC); * $results->sortResults($sortOptions); * * @param array $sortOptions An array of sorting instructions */ public function sortResults(array $sortOptions) { usort($this->_results, $this->_usortByMultipleKeys($sortOptions)); } /** * Used by sortResults() * * @link http://www.php.net/manual/en/function.usort.php#103722 */ protected function _usortByMultipleKeys($key, $direction=SORT_ASC) { $sortFlags = array(SORT_ASC, SORT_DESC); if (!in_array($direction, $sortFlags)) { throw new InvalidArgumentException('Sort flag only accepts SORT_ASC or SORT_DESC'); } return function($a, $b) use ($key, $direction, $sortFlags) { if (!is_array($key)) { //just one key and sort direction if (!isset($a->$key) || !isset($b->$key)) { throw new Exception('Attempting to sort on non-existent keys'); } if ($a->$key == $b->$key) { return 0; } return ($direction==SORT_ASC xor $a->$key < $b->$key) ? 
1 : -1; } else { //using multiple keys for sort and sub-sort foreach ($key as $subKey => $subAsc) { //array can come as 'sort_key'=>SORT_ASC|SORT_DESC or just 'sort_key', so need to detect which if (!in_array($subAsc, $sortFlags)) { $subKey = $subAsc; $subAsc = $direction; } //just like above, except 'continue' in place of return 0 if (!isset($a->$subKey) || !isset($b->$subKey)) { throw new Exception('Attempting to sort on non-existent keys'); } if ($a->$subKey == $b->$subKey) { continue; } return ($subAsc==SORT_ASC xor $a->$subKey < $b->$subKey) ? 1 : -1; } return 0; } }; } ```
This should work for the situation you describe. ``` usort($arrayToSort, "sortCustom"); function sortCustom($a, $b) { $cityComp = strcmp($a['city'],$b['city']); if($cityComp == 0) { //Cities are equal. Compare zips. $zipComp = strcmp($a['zip'],$b['zip']); if($zipComp == 0) { //Zips are equal. Compare last names. return strcmp($a['last_name'],$b['last_name']); } else { //Zips are not equal. Return the difference. return $zipComp; } } else { //Cities are not equal. Return the difference. return $cityComp; } } ``` You could condense it into one line like so (note the extra parentheses around each assignment, so the comparison result is tested rather than a not-yet-assigned variable): ``` function sortCustom($a, $b) { return (($cityComp = strcmp($a['city'],$b['city'])) ? $cityComp : (($zipComp = strcmp($a['zip'],$b['zip'])) ? $zipComp : strcmp($a['last_name'],$b['last_name']))); } ``` As far as having a customizable sort function, you're reinventing the wheel. Take a look at the [`array_multisort()`](https://www.php.net/manual/en/function.array-multisort.php) function.
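The cascading comparisons in this answer are exactly what comparator-chaining APIs formalize. For contrast, here is the same multi-key sort (city ascending, zip descending numerically, last name ascending) written in Java with the question's sample data; the class name and helper are mine:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class MultiKeySort {
    // Each thenComparing() only breaks ties left by the comparator before it,
    // mirroring the nested if/else cascade in the PHP version.
    static List<Map<String, String>> sortRows(List<Map<String, String>> rows) {
        List<Map<String, String>> out = new ArrayList<>(rows);
        out.sort(Comparator
            .comparing((Map<String, String> r) -> r.get("city"))
            .thenComparing((Map<String, String> r) -> Integer.parseInt(r.get("zip")),
                           Comparator.reverseOrder())
            .thenComparing(r -> r.get("last_name")));
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, String>> rows = List.of(
            Map.of("city", "Springfield", "zip", "66735", "last_name", "Simpson"),
            Map.of("city", "Scottsdale",  "zip", "85250", "last_name", "Bouvier"),
            Map.of("city", "Scottsdale",  "zip", "85255", "last_name", "Szyslak"),
            Map.of("city", "Scottsdale",  "zip", "85255", "last_name", "Riviera"));
        sortRows(rows).forEach(r -> System.out.println(r.get("last_name")));
    }
}
```

With that data the order comes out Riviera, Szyslak, Bouvier, Simpson, matching the desired result in the question.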
Sort array of associative arrays on multiple columns using specified sorting rules
[ "php", "arrays", "sorting", "multidimensional-array" ]
I want my unit tests to cover my POCOs. How should I test them? What if I add a new property? How do I make my test fail? I know how to test the properties and methods; the problem is how to make sure my tests fail if anything is added to my POCOs.
From the reading of your question, you either misunderstand what a POCO is, or you misunderstand unit testing. A POCO is just an old fashioned object. It has state and behavior. You unit test the state by putting (setting) values into the properties, and asserting that the value is what you expected. You unit test behavior by asserting expectations against methods. Here would be an oversimplified example of a POCO and its tests. Notice that there's more test code than implementation code. When unit testing is done right (TDD), this is the case. ``` public class Person { private Name name = Name.Empty; private Address address = Address.Empty; private bool canMove = true; public Name Name { set { name = value; } get { return name; } } public Address Address { private set { address = value; } get { return address; } } public bool CanMove { set { canMove = value; } get { return canMove; } } public bool MoveToNewAddress(Address newAddress) { if (!CanMove) return false; address = newAddress; return true; } } [TestFixture] public class PersonTests { private Person toTest; private readonly static Name NAME = new Name { First = "Charlie", Last = "Brown" }; private readonly static Address ADDRESS = new Address { Line1 = "1600 Pennsylvania Ave NW", City = "Washington", State = "DC", ZipCode = "20500" }; [SetUp] public void SetUp() { toTest = new Person(); } [Test] public void NameDefaultsToEmpty() { Assert.AreEqual(Name.Empty, toTest.Name); } [Test] public void CanMoveDefaultsToTrue() { Assert.AreEqual(true, toTest.CanMove); } [Test] public void AddressDefaultsToEmpty() { Assert.AreEqual(Address.Empty, toTest.Address); } [Test] public void NameIsSet() { toTest.Name = NAME; Assert.AreEqual(NAME, toTest.Name); } [Test] public void CanMoveIsSet() { toTest.CanMove = false; Assert.AreEqual(false, toTest.CanMove); } [Test] public void AddressIsChanged() { Assert.IsTrue(toTest.MoveToNewAddress(ADDRESS)); Assert.AreEqual(ADDRESS, toTest.Address); } [Test] public void AddressIsNotChanged() { 
toTest.CanMove = false; Assert.IsFalse(toTest.MoveToNewAddress(ADDRESS)); Assert.AreNotEqual(ADDRESS, toTest.Address); } } ``` In order to make the test fail first, stub the methods or properties, but do not implement any behavior. Run the tests, watch them fail, then add in behavior one line at a time until it passes. Once it passes, stop. Do not write any more code unless you write more tests (unless you're refactoring, in which case you do not add behavior).
Testing is about verifying whether what is written is able to do what it should do, nothing more, nothing less. So if you write some code, you do that for a reason. Your tests should reflect that the code indeed matches the reason you wrote the code for. That's it, there's nothing else. I.o.w.: if you write a bunch of classes, you should test whether the behavior you've written indeed is correct compared to what the behavior should do.
How can I make my unit tests fail if I add something to my code?
[ "c#", "unit-testing", "poco" ]
Hi, I just want to make sure I have these concepts right. Overloading in Java means that you can have a constructor or a method with a different number of arguments or different data types, e.g.: ``` public void setValue(){ this.value = 0; } public void setValue(int v){ this.value = v; } ``` How about this method? Would it still be considered overloading since it's returning a different data type? ``` public int setValue(){ return this.value; } ``` --- Second question is: what is overriding in Java? Does it relate to inheritance? Let's say I have the following: ``` public class Vehicle{ double basePrice = 20000; //constructor defined public double getPrice(){ return basePrice; } } public class Truck extends Vehicle{ double truckPrice = 14000; //constructor defined public double getPrice(){ return truckPrice; } } ``` So now let's say I have the following ``` Truck truck = new Truck(); ``` if I call ``` truck.super.getPrice() ``` this would return the price from the Vehicle class, 20,000 if I call ``` truck.getPrice() ``` this would return the price in the truck class, 14,000 **Is my knowledge correct for both questions?**
You are basically correct. Overloading is having multiple methods in a single class where the method has the same name. However, the return value is not seen as part of the *signature* of the method. Thus, you cannot overload a method by changing only the return value. You cannot have the following code, from your example: ``` public void setValue() { this.value = 0; } public int setValue() { return this.value; } ``` This will fail to compile. As Rob identified, I believe you mean **overriding**, and you have that correct. Note with overriding, you cannot change the return type. As of Java 5, you can return a derived type of what the base class method returned. Before Java 5, it must be the identical type. That is, you cannot do the below until Java 5 and later: ``` public class AnimalNoise {} public class Miaw extends AnimalNoise {} public class Animal { public AnimalNoise makeNoise() { return new AnimalNoise(); } } public class Cat extends Animal { public Miaw makeNoise() { return new Miaw (); } } ``` However, even in Java 5 and later, you *cannot* do the following: ``` public class Animal { public String makeNoise() { return "silence"; } } public class Cat extends Animal { public Miaw makeNoise() { return new Miaw (); } } public class Miaw {} ``` Finally, a big difference between overloading and overriding that is often overlooked is that overloading is decided at compile time and overriding is decided at runtime. This catches many people by surprise when they expect overloading to be decided at runtime.
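The compile-time versus runtime split described above is easy to demonstrate with a short, self-contained example (class and method names here are mine, not from the question):

```java
class Animal {
    String noise() { return "generic noise"; }
}

class Cat extends Animal {
    @Override
    String noise() { return "miaw"; } // override: chosen at runtime from the actual object
}

public class Dispatch {
    // Overloads: chosen at compile time from the declared (static) type of the argument.
    static String describe(Animal a) { return "some animal"; }
    static String describe(Cat c)    { return "a cat"; }

    public static void main(String[] args) {
        Animal pet = new Cat();
        System.out.println(describe(pet)); // prints "some animal": overload picked from the declared type
        System.out.println(pet.noise());   // prints "miaw": override dispatched on the runtime type
    }
}
```

Even though `pet` actually holds a `Cat`, `describe(pet)` binds to the `Animal` overload because the compiler only sees the declared type, while `pet.noise()` runs the `Cat` override because virtual dispatch looks at the object itself.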
Correct; overloading is providing multiple signatures for the same method. Overriding, which is what I think you mean by "overwriting" is the act of providing a different implementation of a method inherited from a base type, and is basically the point of polymorphism by inheritance, i.e. ``` public class Bicycle implements Vehicle { public void drive() { ... } } public class Motorcycle extends Bicycle { public void drive() { // Do motorcycle-specific driving here, overriding Bicycle.drive() // (we can still call the base method if it's useful to us here) } } ```
Java overloading vs overriding
[ "java", "overloading", "overwrite" ]
This Code: ``` Something = new Guid() ``` is returning: > 00000000-0000-0000-0000-000000000000 all the time and I can't tell why? So, why?
You should use [`Guid.NewGuid()`](http://msdn.microsoft.com/en-us/library/system.guid.newguid.aspx)
Just a quick explanation for why you need to call NewGuid as opposed to using the default constructor... In .NET all structures (value types like int, decimal, Guid, DateTime, etc) must have a default parameterless constructor that initializes all of the fields to their default value. In the case of Guid, the bytes that make up the Guid are all zero. Rather than making a special case for Guid or making it a class, they use the NewGuid method to generate a new "random" Guid.
How to use Guids in C#?
[ "c#", "guid" ]
> **Possible Duplicate:** > [Working with latitude/longitude values in Java](https://stackoverflow.com/questions/120283/working-with-latitude-longitude-values-in-java) ### Duplicate: * [How do I calculate distance between two latitude longitude points?](https://stackoverflow.com/questions/27928/) I need to calculate the distance between two points given by two coordinates. The project I am working on is a Java project, so Java code would be great, but pseudo-code can also be given and I can implement it myself :) As you probably know, there are three ways to represent coordinates: * Degrees:Minutes:Seconds (49°30'00"N, 123°30'00"W) * Degrees:Decimal Minutes (49°30.0', -123°30.0'), (49d30.0m,-123d30.0') * Decimal Degrees (49.5000°,-123.5000°), generally with 4-6 decimal places. My coordinates are given in the third format, so code for those values is preferred :)
Based on [another question on stackoverflow](https://stackoverflow.com/questions/120283/), I got this code.. This calculates the result in meters, not in miles :) ``` public static float distFrom(float lat1, float lng1, float lat2, float lng2) { double earthRadius = 6371000; //meters double dLat = Math.toRadians(lat2-lat1); double dLng = Math.toRadians(lng2-lng1); double a = Math.sin(dLat/2) * Math.sin(dLat/2) + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2)) * Math.sin(dLng/2) * Math.sin(dLng/2); double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)); float dist = (float) (earthRadius * c); return dist; } ```
You can use the [Java Geodesy Library for GPS](http://www.gavaghan.org/blog/free-source-code/geodesy-library-vincentys-formula-java/); it uses [Vincenty's formulae](http://en.wikipedia.org/wiki/Vincenty%27s_formulae), which take the Earth's ellipsoidal shape into account. The implementation goes like this: ``` import org.gavaghan.geodesy.*; ... GeodeticCalculator geoCalc = new GeodeticCalculator(); Ellipsoid reference = Ellipsoid.WGS84; GlobalPosition pointA = new GlobalPosition(latitude, longitude, 0.0); // Point A GlobalPosition userPos = new GlobalPosition(userLat, userLon, 0.0); // Point B double distance = geoCalc.calculateGeodeticCurve(reference, userPos, pointA).getEllipsoidalDistance(); // Distance between Point A and Point B ``` The resulting distance is in meters.
Calculate distance in meters when you know longitude and latitude in java
[ "java", "distance", "latitude-longitude" ]
I'm using regular expressions with a python framework to pad a specific number in a version number: 10.2.11 I want to transform the second element to be padded with a zero, so it looks like this: 10.02.11 My regular expression looks like this: ``` ^(\d{2}\.)(\d{1})([\.].*) ``` If I just regurgitate back the matching groups, I use this string: ``` \1\2\3 ``` When I use my favorite regular expression test harness (<http://kodos.sourceforge.net/>), I can't get it to pad the second group. I tried \1\20\3, but that interprets the second reference as 20, and not 2. Because of the library I'm using this with, I need it to be a one liner. The library takes a regular expression string, and then a string for what should be used to replace it with. I'm assuming I just need to escape the matching groups string, but I can't figure it out. Thanks in advance for any help.
What about removing the `.` from the regex? ``` ^(\d{2})\.(\d{1})[\.](.*) ``` replace with: ``` \1.0\2.\3 ```
How about a completely different approach? ``` nums = version_string.split('.') print ".".join("%02d" % int(n) for n in nums) ```
Pad an integer using a regular expression
[ "python", "regex" ]
Consider this - a base class A, class B inheriting from A, class C inheriting from B. What is a generic way to call a parent class initialiser in an initialiser? If this still sounds too vague, here's some code. ``` class A(object): def __init__(self): print "Initialiser A was called" class B(A): def __init__(self): super(B,self).__init__() print "Initialiser B was called" class C(B): def __init__(self): super(C,self).__init__() print "Initialiser C was called" c = C() ``` This is how I do it now. But it still seems a bit too non-generic - you still must pass a correct type by hand. Now, I've tried using `self.__class__` as a first argument to super(), but, obviously it doesn't work - if you put it in the initialiser for C - fair enough, B's initialiser gets called. If you do the same in B, "self" still points to an instance of C so you end up calling B's initialiser again (this ends in an infinite recursion). There is no need to think about diamond inheritance for now, I am just interested in solving this specific problem.
The way you are doing it is indeed the recommended one (for Python 2.x). The issue of whether the class is passed explicitly to `super` is a matter of style rather than functionality. Passing the class to `super` fits in with Python's philosophy of "explicit is better than implicit".
Python 3 includes an improved super() which allows use like this: ``` super().__init__(args) ```
Chain-calling parent initialisers in python
[ "python", "oop", "inheritance", "constructor" ]
I am trying to read a CSV file with accented characters with Python (only French and/or Spanish characters). Based on the Python 2.5 documentation for the csvreader (<http://docs.python.org/library/csv.html>), I came up with the following code to read the CSV file since the csvreader supports only ASCII. ``` def unicode_csv_reader(unicode_csv_data, dialect=csv.excel, **kwargs): # csv.py doesn't do Unicode; encode temporarily as UTF-8: csv_reader = csv.reader(utf_8_encoder(unicode_csv_data), dialect=dialect, **kwargs) for row in csv_reader: # decode UTF-8 back to Unicode, cell by cell: yield [unicode(cell, 'utf-8') for cell in row] def utf_8_encoder(unicode_csv_data): for line in unicode_csv_data: yield line.encode('utf-8') filename = 'output.csv' reader = unicode_csv_reader(open(filename)) try: products = [] for field1, field2, field3 in reader: ... ``` Below is an extract of the CSV file I am trying to read: ``` 0665000FS10120684,SD1200IS,Appareil photo numérique PowerShot de 10 Mpx de Canon avec trépied (SD1200IS) - Bleu 0665000FS10120689,SD1200IS,Appareil photo numérique PowerShot de 10 Mpx de Canon avec trépied (SD1200IS) - Gris 0665000FS10120687,SD1200IS,Appareil photo numérique PowerShot de 10 Mpx de Canon avec trépied (SD1200IS) - Vert ... ``` Even though I try to encode/decode to UTF-8, I am still getting the following exception: ``` Traceback (most recent call last): File ".\Test.py", line 53, in <module> for field1, field2, field3 in reader: File ".\Test.py", line 40, in unicode_csv_reader for row in csv_reader: File ".\Test.py", line 46, in utf_8_encoder yield line.encode('utf-8', 'ignore') UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 68: ordinal not in range(128) ``` How do I fix this?
The `.encode` method gets applied to a Unicode string to make a byte-string; but you're calling it on a byte-string instead... the wrong way 'round! Look at the `codecs` module in the standard library and `codecs.open` in particular for better general solutions for reading UTF-8 encoded text files. However, for the `csv` module in particular, you need to pass in utf-8 data, and that's what you're already getting, so your code can be much simpler: ``` import csv def unicode_csv_reader(utf8_data, dialect=csv.excel, **kwargs): csv_reader = csv.reader(utf8_data, dialect=dialect, **kwargs) for row in csv_reader: yield [unicode(cell, 'utf-8') for cell in row] filename = 'da.csv' reader = unicode_csv_reader(open(filename)) for field1, field2, field3 in reader: print field1, field2, field3 ``` PS: if it turns out that your input data is NOT in utf-8, but e.g. in ISO-8859-1, then you do need a "transcoding" (if you're keen on using utf-8 at the `csv` module level), of the form `line.decode('whateverweirdcodec').encode('utf-8')` -- but probably you can just use the name of your existing encoding in the `yield` line in my code above, instead of `'utf-8'`, as `csv` is actually going to be just fine with ISO-8859-\* encoded bytestrings.
### Python 2.X There is a [unicodecsv](https://github.com/jdunck/python-unicodecsv) library which should solve your problems, with the added benefit of not having to write any new csv-related code. Here is an example from their readme: ``` >>> import unicodecsv >>> from cStringIO import StringIO >>> f = StringIO() >>> w = unicodecsv.writer(f, encoding='utf-8') >>> w.writerow((u'é', u'ñ')) >>> f.seek(0) >>> r = unicodecsv.reader(f, encoding='utf-8') >>> row = r.next() >>> print row[0], row[1] é ñ ``` ### Python 3.X In Python 3 this is supported out of the box by the built-in `csv` module. See this example: ``` import csv with open('some.csv', newline='', encoding='utf-8') as f: reader = csv.reader(f) for row in reader: print(row) ```
Reading a UTF8 CSV file with Python
[ "python", "utf-8", "csv", "character-encoding" ]
I need to start developing applications using the Spring framework, and am wondering what tools I need to download to have me up and running. On the [SpringSource](http://www.springsource.com/download/community?project=) website I am seeing all these applications to download and I am wondering, do I really need all this? And what versions should I use, especially for Spring Framework? * Spring Framework * SpringSource dm Server Samples * Spring Security * Spring Web Flow * Spring Web Services * Spring Dynamic Modules * Spring Integration * Spring Batch * Spring.NET * Spring JavaConfig * Spring LDAP * Spring Extensions * Spring IDE * Spring BlazeDS Integration * SpringSource Bundlor * Spring ROO What other applications do I need to download (eg. Struts, Glassfish, Apache, etc.)?
This depends on what you want to use Spring for. Typically that's web applications. If so you only need two things: * Spring Framework (with minimal dependencies); and * A servlet container (eg Tomcat) or a full-blown application server (eg Glassfish, JBoss). Everything else is optional. I believe the only required dependency is Apache Commons Logging. Depending on what features you use, you may well need more. If so, there are countless tutorials around that walk through creating a barebones Spring MVC project, and plenty on other topics. It's entirely possible to use Spring in, say, a Swing application, in which case you obviously don't need a servlet container.
All you need from SpringSource is the Spring Framework. Spring 3.0 is on the way, but for now, use 2.5.6.SEC01, the current production release. You can get started with a simple servlet container (ie: Tomcat) rather than a full blown application server (eg: JBoss, Glassfish). The Spring Framework comes bundled with jars for web development - ie: spring-web and spring-webmvc. See [#117535](https://stackoverflow.com/questions/116978/can-anyone-recommend-a-simple-java-web-app-framework/117535#117535) for a simple example of using Spring MVC.
What do I need to download to start developing apps using the Spring framework?
[ "java", "spring" ]
Can anybody give me an example how to use the osgi framework classes? I haven't a clue how to use those classes ... BR, Markus
It depends on which OSGi implementation you are using. I use Eclipse Equinox and start the framework from within a regular Java class. The Eclipse jar (called org.eclipse.osgi\_longversion.jar) has a class called org.eclipse.core.runtime.adaptor.EclipseStarter. This will boot your OSGi framework. ``` Properties props = new Properties(); // add some properties to config the framework EclipseStarter.setInitialProperties(props); BundleContext context = EclipseStarter.startup(new String[]{},null); ``` You need some properties to configure the framework. You can see all the documented properties [here](http://help.eclipse.org/ganymede/index.jsp?topic=/org.eclipse.platform.doc.isv/reference/misc/runtime-options.html). Once you call startup, the BundleContext you receive is the System Bundle context, so you can install/start/stop bundles from here. If you set all the properties, you won't have to pass any arguments to startup(). You can download all Equinox and other bundles from the [Equinox website](http://download.eclipse.org/equinox/drops/R-3.4.2-200902111700/index.php).
In OSGi 4.1 this is standardized. Have a look at <http://njbartlett.name/2011/03/07/embedding-osgi.html> which explains how to embed any OSGi container.
How to run an OSGi framework within usual java-code?
[ "java", "osgi" ]
I know [similar questions](https://stackoverflow.com/questions/32899/how-to-generate-dynamic-unit-tests-in-python) have been asked before but they don't really have the information I'm looking for - I'm not asking about the mechanics of how to generate unit tests, but whether it's a good idea. I've written a module in Python which contains objects representing physical constants and units of measurement. A lot of the units are formed by adding on prefixes to base units - e.g. from `m` I get `cm`, `dm`, `mm`, `hm`, `um`, `nm`, `pm`, etc. And the same for `s`, `g`, `C`, etc. Of course I've written a function to do this since the end result is over 1000 individual units and it would be a major pain to write them all out by hand ;-) It works something like this (not the actual code): ``` def add_unit(name, value): globals()[name] = value for pfx, multiplier in prefixes: globals()[pfx + name] = multiplier * value add_unit('m', <definition of a meter>) add_unit('g', <definition of a gram>) add_unit('s', <definition of a second>) # etc. ``` The problem comes in when I want to write unit tests for these units (no pun intended), to make sure they all have the right values. If I write code that automatically generates a test case for every unit individually, any problems that are in the unit generation function are likely to also show up in the test generation function. But given the alternative (writing out all 1000+ tests by hand), should I just go ahead and write a test generation function anyway, check it really carefully and hope it works properly? Or should I only test, say, one series of units (`m`, `cm`, `dm`, `km`, `nm`, `um`, and all other multiples of the meter), just enough to make sure the unit generation function seems to be working? Or something else?
You're right to identify the weakness of automatically generating test cases. The usefulness of a test comes from taking two different paths (your code, and your own mental reasoning) to come up with what should be the same answer -- if you use the same path both times, nothing is being tested. In summary: **Never write automatically generated tests**, unless the algorithm for generating the test results is dramatically simpler than the algorithm that you are testing. (Testing of a sorting algorithm is an example of when automatically generated tests would be a good idea, since it's easy to verify that a list of numbers is in sorted order. Another good example would be a puzzle-solving program [as suggested by ChrisW](https://stackoverflow.com/questions/845887/unit-tests-for-automatically-generated-code-automatic-or-manual/845907#845907) in a comment. In both cases, auto-generation makes sense because it is much easier to verify that a given solution is correct than to generate a correct solution.) **My suggestion for your case:** Manually test a small, representative subset of the possibilities. [Clarification: Certain types of automated tests are appropriate and highly useful, e.g. [fuzzing](http://en.wikipedia.org/wiki/Fuzz_testing). I mean that that it is unhelpful to auto-generate unit tests for generated code.]
If you auto-generate the tests: * You might find it faster to read all the generated tests (to inspect them for correctness) than it would have been to write them all by hand. * They might also be more maintainable (easier to edit, if you want to edit them later).
Unit tests for automatically generated code: automatic or manual?
[ "python", "unit-testing", "code-generation" ]
I am consuming a cpp COM object from c# code. My c# code looks like this: ``` try { var res = myComServer.GetSomething(); } catch (Exception e) { } ``` However the exception never contains any of the details I set in cpp, in particular my error message. In my cpp side I have followed several examples I have found on the web: ``` ... ICreateErrorInfo *pcerrinfo; IErrorInfo *perrinfo; HRESULT hr; hr = CreateErrorInfo(&pcerrinfo); pcerrinfo->SetDescription(L"C++ Exception"); hr = pcerrinfo->QueryInterface(IID_IErrorInfo, (LPVOID FAR*) &perrinfo); if (SUCCEEDED(hr)) { SetErrorInfo(0, perrinfo); perrinfo->Release(); } pcerrinfo->Release(); return E_FAIL; // E_FAIL or other appropriate failure code ... ``` Am I missing anything? Is there anything else that could affect this, like marshaling, the interop creation or attributes of the com server itself?
Does your COM class support [ISupportErrorInfo](http://msdn.microsoft.com/en-us/library/ms221083.aspx) ?
Assuming that your class does implement `ISupportErrorInfo`, did you by any chance add the support *AFTER* you imported the library into your C# project from Visual Studio? Visual Studio generates the gunk that it needs to talk to a COM library only once, when you import the library. For this purpose, it builds a special translation DLL called "*originalDllName*.Interop.dll", based on the information available in the TypeLib of the DLL at the time of the import. You can make implementation changes as often as you want without trouble; but if you changed the library in any way (added new classes, changed the interface definitions, changed the iterfaces implemented by your classes...), you will have to remove the COM DLL from your References, and then re-import it, for the Interop DLL to be refreshed.
Exception details lost when thrown from C++ to C# through COM interop?
[ "c#", ".net", "com", "interop", "exception" ]
``` Notice: Undefined variable: username in C:\xampp\htdocs\test_class.php on line 20 Notice: Undefined variable: password in C:\xampp\htdocs\test_class.php on line 20 ``` I get the above error when i use this piece of code for checking down my username and password with my database. ``` <?php class test_class { public function __construct() { } public function doLogin() { include("connection.php"); if (isset($_POST['username'])) { $username= $_POST['username']; } if (isset($_POST['password'])) { $password= $_POST['password']; } $query = "SELECT * FROM users WHERE username = '$username' AND password = '$password'"; $result = mysql_fetch_array(mysql_query($query)); if(!$result) { return 'assa'; }else{ return 'assa112121212'; } } } ?> ```
This means, most likely, that your form hasn't been submitted. You should make sure that you only use the variables if they exist. Furthermore, you should [never ever](http://xkcd.com/327/) use the input from users without validating it. Try the following, for example: ``` if (isset($_POST['username']) && isset($_POST['password'])) { $username= $_POST['username']; $password= $_POST['password']; $query = "SELECT * FROM users WHERE username = '".mysql_real_escape_string($username)."' AND password = '".mysql_real_escape_string($password)."'"; $result = mysql_fetch_array(mysql_query($query)); # ... } else { return NULL; } ```
This is just a **notice** that the variables are being referenced in the query without being in scope. Define $username and $password at the top of doLogin() and initialize them to null or similar. Then check for them later. You also seem to be executing the query regardless of whether $username and $password are set. You should do something more like: ``` if( isset($_POST['username']) && isset($_POST['password'])){ //create vars, do query }else{ // Nothing to process } ``` Both errors occur on line 20, which I assume is the query string interpolation. The issues here are: 1. inconsistent scope/referencing (which sucks in PHP anyway) 2. Your ifs need to be a bit more orderly. This error is small, but worse ones will bite you in the bum later if you handle variables like this :) Also: **escape your variables before dumping them like hot coals into your SQL**; see PDO (which I would go for) or mysql\_escape\_string() **good luck**!
Undefined variable error in PHP
[ "php", "mysql" ]
Why does the following: ``` class A(object): def __init__(self, var=[]): self._var = var print 'var = %s %s' % (var, id(var)) a1 = A() a1._var.append('one') a2 = A() ``` result in: ``` var = [] 182897439952 var = ['one'] 182897439952 ``` I don't understand why it is not using a new instance of a list when using optional keyword arguments, can anyone explain this?
The empty list in your function definition is created once, at the time the function itself is created. It isn't created every time the function is called. If you want a new one each time, do this: ``` class A(object): def __init__(self, var=None): if var is None: var = [] self._var = var ```
This is simply wrong. You can't (meaningfully) provide a mutable object as a default value in a function declaration. ``` class A(object): def __init__(self, var=[]): self._var = var print 'var = %s %s' % (var, id(var)) ``` You must do something like this. ``` class A(object): def __init__(self, var=None): self._var = var if var is not None else [] print 'var = %s %s' % (var, id(var)) ```
Initialisation of keyword args in Python
[ "python", "initialization", "arguments", "instantiation" ]
I'm currently writing myself a little C# backup program. I'm using a standard Windows Forms interface, and am calling cmd.exe as a new process, and then using XCOPY from within this new process. Everything's working great, except for this last feature I want to add in, which is the ability to break the operation. From a native command prompt, I can do this cleanly with ctrl+c, but try as I might, I can't replicate this functionality using the WinForms and process approach. I've tried redirecting the standard input and using that to send consolespecialkeys.ControlC to the process; I've also tried sending 0x03 and "/x03", both of which I've read on other forum posts are hex codes for ctrl+c. Nothing I'm sending is registered though, and exiting the process kills the user interface, but leaves xcopy.exe working in the background. Killing xcopy.exe manually results in it leaving the file it was copying half copied and corrupted, not something that happens using ctrl+c in a command prompt. Am I missing something blindingly obvious? I'm new-ish to C#, so I'll hold my hands up and admit this is most likely me being slow, or misunderstanding how the process is working with cmd.exe. However, since processes support standard input redirection, it seems like something that should work... to me at least. I've put the basic outline of my code below, in case it helps identify where I'm messing up. ``` string XCopyArguments = "\"" + dir.FullName + "\" \"" + destination + "\" /D /S /I /E"; Process XCopyProcess = new Process(); ProcessStartInfo XCopyStartInfo = new ProcessStartInfo(); XCopyStartInfo.FileName = "CMD.exe "; XCopyStartInfo.RedirectStandardError = true; XCopyStartInfo.RedirectStandardOutput = true; XCopyStartInfo.RedirectStandardInput = true; XCopyStartInfo.UseShellExecute = false; XCopyStartInfo.CreateNoWindow = true; XCopyStartInfo.Arguments = " /D /c XCOPY " + XCopyArguments; XCopyProcess.EnableRaisingEvents = true; XCopyProcess.StartInfo = XCopyStartInfo; XCopyProcess.Start(); XCopyProcess.WaitForExit(15000); int ExitCode = XCopyProcess.ExitCode; if (ExitCode > 0 & !XCopyProcess.HasExited) { XCopyProcess.Kill(); } XCopyProcess.Dispose(); ``` Many thanks in advance for any help anyone can offer.
I don't want to be a besserwisser, but I think you'd be much better off doing the copying inside your program. Using File, Directory and the other classes in the System.IO namespace, it's really simple, and leaves you in full control to report progress, cancel operations etc.
Yes, doing the operation in .NET would be easier. BUT, I need to send ctrl-c to a process also and I don't have that option. So can we please get an answer to this question? **EDIT:** Do I have to post a duplicate to get an answer? And no, @j0rd4n didn't answer the question.
Send ctrl+c to a cmd.exe process in c#
[ "c#", ".net", "winforms", "cmd" ]
I'm writing my own anti-spam/bad-words filter and I need to know if it is possible to match (detect) only words formed by mixed characters, like fr1&nd$ but not friends. Is this possible with regex!? best regards!
Of course it's possible with regex! You're not asking to match nested parentheses! :P But yes, this is the kind of thing regular expressions were built for. An example: ``` /\S*[^\w\s]+\S*/ ``` This will match all of the following: ``` @ss as$ a$s @$s a$$ @s$ @$$ ``` It will *not* match this: ``` ass ``` Which I believe is what you want. How it works: `\S*` matches 0 or more non-space characters. `[^\w\s]+` matches only the symbols (it will match anything that isn't a word or a space), and matches 1 or more of them (so a symbol character is required.) Then the `\S*` again matches 0 or more non-space characters (symbols and letters). If I may be allowed to suggest a better strategy, in Perl you can store a regex in a variable. I don't know if you can do this in PHP, but if you can, you can construct a list of variables like such: ``` $a = /[aA@]/ # regex that matches all a-like symbols $b = /[bB]/ $c = /[cC(]/ # etc... ``` Or: ``` $regex = array( 'a' => /[aA@]/, 'b' => /[bB]/, 'c' => /[cC(]/, ... ); ``` So that way, you can match "friend" in all its permutations with: ``` /$f$r$i$e$n$d/ ``` Or: ``` /$regex['f']$regex['r']$regex['i']$regex['e']$regex['n']$regex['d']/ ``` Granted, the second one looks unnecessarily verbose, but that's PHP for you. I think the second one is probably the best solution, since it stores them all in a hash, rather than all as separate variables, but I admit that the regex it produces is a bit ugly.
You could build some regular expressions like the following: ``` \p{L}+[\d\p{S}]+\S* ``` This will match any sequence of one or more letters (`\p{L}+`, see [Unicode character preferences](http://docs.php.net/manual/regexp.reference.php#regexp.reference.unicode)), one or more digits or symbols (`[\d\p{S}]+`) and any following non-whitespace characters `\S*`. ``` $str = 'fr1&nd$ and not friends'; preg_match('/\p{L}+[\d\p{S}]+\S*/', $str, $match); var_dump($match); ```
Regex - Match ( only ) words with mixed chars
[ "php", "regex", "filter", "match", "profanity" ]
I have a class TContainer that is an aggregate of several STL collections of pointers to a TItem class. I need to create an iterator to traverse the elements in all the collections in my TContainer class, abstracting the inner workings away from the client. What would be a good way to do this? Should I create a class that extends an iterator (if so, which iterator class should I extend), or should I create an iterator class that is an aggregate of iterators? I only need a FORWARD\_ONLY iterator. I.e., if this is my container: ``` typedef std::vector <TItem*> ItemVector; class TContainer { std::vector <ItemVector *> m_Items; }; ``` What would be a good iterator to traverse all the items contained in the vectors of the m\_Items member variable?
When I did my own iterator (a while ago now) I inherited from std::iterator and specified the type as the first template parameter. Hope that helps. For forward iterators use forward_iterator_tag rather than input_iterator_tag in the following code. This class was originally taken from the istream_iterator class (and modified for my own use, so it may not resemble istream_iterator any more). ``` template<typename T> class <PLOP>_iterator :public std::iterator<std::input_iterator_tag, /* type of iterator */ T,ptrdiff_t,const T*,const T&> /* Info about iterator */ { public: const T& operator*() const; const T* operator->() const; <PLOP>_iterator& operator++(); <PLOP>_iterator operator++(int); bool equal(<PLOP>_iterator const& rhs) const; }; template<typename T> inline bool operator==(<PLOP>_iterator<T> const& lhs,<PLOP>_iterator<T> const& rhs) { return lhs.equal(rhs); } ``` Check this documentation on iterator tags: <http://www.sgi.com/tech/stl/iterator_tags.html> Having just re-read the information on iterators: <http://www.sgi.com/tech/stl/iterator_traits.html> This is the old way of doing things (iterator_tags); the more modern approach is to set up iterator_traits<> for your iterator to make it fully compatible with the STL.
If you have access to Boost, using [`iterator_facade`](http://www.boost.org/doc/libs/release/libs/iterator/doc/iterator_facade.html) is the most robust solution, and it's pretty simple to use.
Custom Iterator in C++
[ "c++", "stl", "iterator" ]
I'd like to select all rows from one table which match "one or more" rows in another table, in the most efficient way. ``` SELECT identity.id FROM identity INNER JOIN task ON task.identityid=identity.id AND task.groupid IN (78, 122, 345, 12, 234, 778, 233, 123, 33) ``` Currently if there are multiple matching tasks this returns the same identity multiple times (but the performance penalty of eliminating these later is not too bad). I'd like this to instead return only one row for each identity, that matches one or more of these task groups, and I was wondering if there was a more efficient way than to do DISTINCT or GROUP BY. The trouble with doing DISTINCT or GROUP BY is that the task table is still scanned for all groupid matches, then they are later reduced down to one by way of a temporary table (sometimes with filesort). I would rather it do some sort of short-circuit evaluation - do not pursue further any subsequent task matches for same identity after it has found one. I was thinking of doing an EXISTS subquery, but I don't know how these are optimised. I'd need for it to join the task table first, before the identity table, so I am not doing a full scan of the identity table which is very large and will have a lot of non-matches.
Just using "SELECT DISTINCT" with what you have should be efficient in mysql. You may need to put your values in a table and join to it, rather than using "IN ( ... )".
Does MYSQL support the TOP N syntax? If so: ``` SELECT TOP 1 identity.id FROM identity INNER JOIN task ON task.identityid=identity.id AND task.groupid IN (78, 122, 345, 12, 234, 778, 233, 123, 33) ```
Quickly select all rows with "1 or more" matching rows in another table
[ "sql", "mysql", "query-optimization" ]
Consider: ``` import java.awt.*; import javax.swing.*; import java.awt.event.*; import javax.crypto.*; import javax.crypto.spec.*; import java.security.*; import java.io.*; public class EncryptURL extends JApplet implements ActionListener { Container content; JTextField userName = new JTextField(); JTextField firstName = new JTextField(); JTextField lastName = new JTextField(); JTextField email = new JTextField(); JTextField phone = new JTextField(); JTextField heartbeatID = new JTextField(); JTextField regionCode = new JTextField(); JTextField retRegionCode = new JTextField(); JTextField encryptedTextField = new JTextField(); JPanel finishPanel = new JPanel(); public void init() { //setTitle("Book - E Project"); setSize(800, 600); content = getContentPane(); content.setBackground(Color.yellow); content.setLayout(new BoxLayout(content, BoxLayout.Y_AXIS)); JButton submit = new JButton("Submit"); content.add(new JLabel("User Name")); content.add(userName); content.add(new JLabel("First Name")); content.add(firstName); content.add(new JLabel("Last Name")); content.add(lastName); content.add(new JLabel("Email")); content.add(email); content.add(new JLabel("Phone")); content.add(phone); content.add(new JLabel("HeartBeatID")); content.add(heartbeatID); content.add(new JLabel("Region Code")); content.add(regionCode); content.add(new JLabel("RetRegionCode")); content.add(retRegionCode); content.add(submit); submit.addActionListener(this); } public void actionPerformed(ActionEvent e) { if (e.getActionCommand() == "Submit"){ String subUserName = userName.getText(); String subFName = firstName.getText(); String subLName = lastName.getText(); String subEmail = email.getText(); String subPhone = phone.getText(); String subHeartbeatID = heartbeatID.getText(); String subRegionCode = regionCode.getText(); String subRetRegionCode = retRegionCode.getText(); String concatURL = "user=" + subUserName + "&f=" + subFName + "&l=" + subLName + "&em=" + subEmail + "&p=" + subPhone + "&h=" + 
subHeartbeatID + "&re=" + subRegionCode + "&ret=" + subRetRegionCode; concatURL = padString(concatURL, ' ', 16); byte[] encrypted = encrypt(concatURL); String encryptedString = bytesToHex(encrypted); content.removeAll(); content.add(new JLabel("Concatenated User Input -->" + concatURL)); content.add(encryptedTextField); setContentPane(content); } } public static byte[] encrypt(String toEncrypt) throws Exception{ try{ String plaintext = toEncrypt; String key = "01234567890abcde"; String iv = "fedcba9876543210"; SecretKeySpec keyspec = new SecretKeySpec(key.getBytes(), "AES"); IvParameterSpec ivspec = new IvParameterSpec(iv.getBytes()); Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding"); cipher.init(Cipher.ENCRYPT_MODE, keyspec, ivspec); byte[] encrypted = cipher.doFinal(toEncrypt.getBytes()); return encrypted; } catch(Exception e){ } } public static byte[] decrypt(byte[] toDecrypt) throws Exception{ String key = "01234567890abcde"; String iv = "fedcba9876543210"; SecretKeySpec keyspec = new SecretKeySpec(key.getBytes(), "AES"); IvParameterSpec ivspec = new IvParameterSpec(iv.getBytes()); Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding"); cipher.init(Cipher.DECRYPT_MODE, keyspec, ivspec); byte[] decrypted = cipher.doFinal(toDecrypt); return decrypted; } public static String bytesToHex(byte[] data) { if (data == null) { return null; } else { int len = data.length; String str = ""; for (int i=0; i<len; i++) { if ((data[i]&0xFF) < 16) str = str + "0" + java.lang.Integer.toHexString(data[i]&0xFF); else str = str + java.lang.Integer.toHexString(data[i]&0xFF); } return str; } } public static String padString(String source, char paddingChar, int size) { int padLength = size-source.length() % size; for (int i = 0; i < padLength; i++) { source += paddingChar; } return source; } } ``` I'm getting an unreported exception: ``` java.lang.Exception; must be caught or declared to be thrown byte[] encrypted = encrypt(concatURL); ``` As well as: ``` .java:109: missing 
return statement ``` How do I solve these problems?
All your problems derive from this: ``` byte[] encrypted = cipher.doFinal(toEncrypt.getBytes()); return encrypted; ``` which is enclosed in a try/catch block. The problem is that if an exception is thrown, the method never returns anything. Put it like this (adapt it to your program logic): ``` public static byte[] encrypt(String toEncrypt) throws Exception{ try{ String plaintext = toEncrypt; String key = "01234567890abcde"; String iv = "fedcba9876543210"; SecretKeySpec keyspec = new SecretKeySpec(key.getBytes(), "AES"); IvParameterSpec ivspec = new IvParameterSpec(iv.getBytes()); Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding"); cipher.init(Cipher.ENCRYPT_MODE,keyspec,ivspec); byte[] encrypted = cipher.doFinal(toEncrypt.getBytes()); return encrypted; } catch(Exception e){ return null; // Must always return something } } ``` For the second error you must catch the Exception thrown by the *encrypt* method call, like this (again, adapt it to your program logic): ``` public void actionPerformed(ActionEvent e) . . . try { byte[] encrypted = encrypt(concatURL); String encryptedString = bytesToHex(encrypted); content.removeAll(); content.add(new JLabel("Concatenated User Input -->" + concatURL)); content.add(encryptedTextField); setContentPane(content); } catch (Exception exc) { // TODO: handle exception } } ``` The lessons to take from this: * A method with a return type must **always** return an object of that type, on every possible code path * All checked exceptions must **always** be handled
The problem is in this method: ``` public static byte[] encrypt(String toEncrypt) throws Exception{ ``` This is the ***method signature*** which pretty much says: * what the method name is: **encrypt** * what parameter it receives: a String named **toEncrypt** * its access modifier: **public static** * and whether it may or may not **throw** an exception when invoked. In this case the method signature says that when invoked this method "could" potentially throw an exception of type "Exception". ``` .... concatURL = padString(concatURL, ' ', 16); byte[] encrypted = encrypt(concatURL); <-- HERE!!!!! String encryptedString = bytesToHex(encrypted); content.removeAll(); ...... ``` So the compiler is saying: either you surround that call with a try/catch construct, or you declare the method (where it is being used) to throw "Exception" itself. The real problem is the "encrypt" method definition. No method should ever declare that it throws "Exception", because it is too generic and may hide some other [kinds of exception](https://stackoverflow.com/questions/528917/why-dont-you-have-to-explicitly-declare-that-you-might-throw-some-built-in-excep/528942#528942); better to throw a more specific exception. Try this: ``` public static byte[] encrypt(String toEncrypt) { try{ String plaintext = toEncrypt; String key = "01234567890abcde"; String iv = "fedcba9876543210"; SecretKeySpec keyspec = new SecretKeySpec(key.getBytes(), "AES"); IvParameterSpec ivspec = new IvParameterSpec(iv.getBytes()); Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding"); cipher.init(Cipher.ENCRYPT_MODE,keyspec,ivspec); byte[] encrypted = cipher.doFinal(toEncrypt.getBytes()); return encrypted; } catch ( NoSuchAlgorithmException nsae ) { // What can you do if the algorithm doesn't exist?? // this usually won't happen because you would test // your code before shipping.
// So in this case it is ok to transform it to another kind throw new IllegalStateException( nsae ); } catch ( NoSuchPaddingException nspe ) { // What can you do when there is no such padding ( whatever that means ) ?? // I guess not much; in either case you won't be able to encrypt the given string throw new IllegalStateException( nspe ); } // line 109 won't say it needs a return anymore. } ``` Basically, in this particular case you should make sure the cryptography package is available in the system. Java needs an extension for the cryptography package, so the exceptions are declared as "checked" exceptions, for you to handle when the package is not present. In this small program you cannot do anything if the cryptography package is not available, so you check that at "development" time. If those exceptions are thrown when your program is running, it is because you did something wrong during "development"; thus a RuntimeException subclass is more appropriate. The last line doesn't need a return statement anymore. In the first version you were catching the exception and doing nothing with it; that's wrong. ``` try { // risky code ... } catch( Exception e ) { // a bomb has just exploded // you should NOT ignore it } // The code continues here, but what should it do???
``` If the code is to fail, it is better to [Fail fast](http://en.wikipedia.org/wiki/Fail-fast) Here are some related answers: * [Catching Exceptions in Java](https://stackoverflow.com/questions/534718/catching-exceptions-in-java/534939#534939) * [When to choose checked and unchecked exceptions](https://stackoverflow.com/questions/27578/when-to-choose-checked-and-unchecked-exceptions) * [Why don’t you have to explicitly declare that you might throw some built in exceptions in Java?](https://stackoverflow.com/questions/528917/why-dont-you-have-to-explicitly-declare-that-you-might-throw-some-built-in-excep/528942#528942) * [Exception other than RuntimeException](https://stackoverflow.com/questions/462501/exception-other-than-runtimeexception/462745#462745)
Why do I get "Exception; must be caught or declared to be thrown" when I try to compile my Java code?
[ "", "java", "exception", "" ]
We are having another discussion here at work about using parametrized sql queries in our code. We have two sides in the discussion: Me and some others that say we should always use parameters to safeguard against sql injections and the other guys that don't think it is necessary. Instead they want to replace single apostrophes with two apostrophes in all strings to avoid sql injections. Our databases are all running Sql Server 2005 or 2008 and our code base is running on .NET framework 2.0. Let me give you a simple example in C#: I want us to use this: ``` string sql = "SELECT * FROM Users WHERE Name=@name"; SqlCommand getUser = new SqlCommand(sql, connection); getUser.Parameters.AddWithValue("@name", userName); //... blabla - do something here, this is safe ``` While the other guys want to do this: ``` string sql = "SELECT * FROM Users WHERE Name=" + SafeDBString(name); SqlCommand getUser = new SqlCommand(sql, connection); //... blabla - are we safe now? ``` Where the SafeDBString function is defined as follows: ``` string SafeDBString(string inputValue) { return "'" + inputValue.Replace("'", "''") + "'"; } ``` Now, as long as we use SafeDBString on all string values in our queries we should be safe. Right? There are two reasons to use the SafeDBString function. First, it is the way it has been done since the stone ages, and second, it is easier to debug the sql statements since you see the exact query that is run on the database. So then. My question is whether it really is enough to use the SafeDBString function to avoid sql injection attacks. I have been trying to find examples of code that breaks this safety measure, but I can't find any examples of it. Is there anybody out there that can break this? How would you do it? **EDIT:** To summarize the replies so far: * Nobody has found a way to get around the SafeDBString on Sql Server 2005 or 2008 yet. That is good, I think?
* Several replies pointed out that you get a performance gain when using parametrized queries. The reason is that the query plans can be reused. * We also agree that using parametrized queries give more readable code that is easier to maintain * Further it is easier to always use parameters than to use various versions of SafeDBString, string to number conversions and string to date conversions. * Using parameters you get automatic type conversion, something that is especially useful when we are working with dates or decimal numbers. * And finally: [Don't try to do security yourself](https://stackoverflow.com/questions/910465/avoiding-sql-injection-without-parameters/910694#910694) as JulianR wrote. The database vendors spend lots of time and money on security. There is no way we can do better and no reason we should try to do their job. So while nobody was able to break the simple security of the SafeDBString function I got lots of other good arguments. Thanks!
I think the correct answer is: **Don't try to do security yourself**. Use whatever trusted, industry-standard library there is available for what you're trying to do, rather than *trying* to do it yourself. Whatever assumptions you make about security might be incorrect. As secure as your own approach may look (and it looks shaky at best), there's a risk you're overlooking something, and do you really want to take that chance when it comes to security? Use parameters.
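To make the "use parameters" advice concrete: the question is C#, but the principle is identical in any driver. Here is a minimal sketch using Python's built-in sqlite3 module (the table and value are invented purely for illustration); the driver binds the value, so the apostrophe never needs hand-rolled escaping:

```python
import sqlite3

# Throwaway in-memory database, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# The value is bound by the driver; no SafeDBString-style escaping needed.
conn.execute("INSERT INTO users (name) VALUES (?)", ("O'Brien",))
row = conn.execute("SELECT name FROM users WHERE name = ?", ("O'Brien",)).fetchone()
print(row[0])  # O'Brien
```

The same shape applies to SqlCommand with `@name` placeholders in .NET; only the placeholder syntax differs.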
And then somebody goes and uses " instead of '. Parameters are, IMO, the only safe way to go. They also avoid a lot of i18n issues with dates/numbers; what date is 01/02/03? How much is 123,456? Do your servers (app-server and db-server) agree with each other? If the risk factor isn't convincing to them, how about performance? The RDBMS can re-use the query plan if you use parameters, helping performance. It can't do this with just the string.
Avoiding SQL injection without parameters
[ "", "c#", "asp.net", "sql-server", "sql-injection", "" ]
I have some clients who are not English speaking. They would like the JavaScript I write for them to be in another language. Can browsers understand other languages, or am I limited to non-English comments? ``` navigateur.nomApp.indice("Microsoft") ``` Instead of : ``` navigator.appName.indexOf("Microsoft") ```
You can get any object and assign it to a variable with a French name for instance ``` var nomAppDeNavigateur = navigator.appName; ``` Then use it wherever, it's just the keywords that are restricted to Javascriptish. It still has to make sense though, whatever language you are aiming for.
JavaScript isn't written in English, it's written in JavaScript.
Write JavaScript in Other Languages
[ "", "javascript", "html", "multilingual", "" ]
We want to serve ads on our site but the adserver we are in talks with has issues with delivering their advertising fast enough for us. The issue as I see it is that we are supposed to include a `<script src="http://advertiserurl/myadvertkey"></script>` where we want to display the ad and it will then download a script and use document.write to insert some html. Problem is that the call to the advertiser website is slowish and the code returned then downloads another file (the ad) which means the speed of rendering our pages slows while we wait for the request to be filled. Is there a way to take the output from the document.write call and write this in after the page has loaded? Basically I want to do this: ``` <html> <body> <script> function onLoad() { var urlToGetContentFrom = 'http://advertiserurl/myadvertkey'; // download js from above url somehow var advertHtml = // do something awesome to interpret document.write output $('someElement').innerHTML = advertHtml; } </script> </body> </html> ``` Or anything similar that will let me get the output of that file and display it.
To decouple the main page loading from the ad loading, you can put the ad in its own page in an iframe or, similarly, download the script file with AJAX and execute it whenever it comes down. If the former is not adequate, because of referring URI or whatever, the latter gives you some flexibility: you could use string replacement to rewrite "document.write" to something else, or perhaps temporarily replace it like "document.write = custom_function;".
If I understand correctly, you want to capture document.write to a variable instead of writing it to the document. You can actually do this: ``` var advertHtml = ''; var oldWrite = document.write; document.write = function(str) { advertHtml += str; } // Ad code here // Put back the old function document.write = oldWrite; // Later... ...innerHTML = advertHtml; ``` You still have the hit of loading the script file though.
Redirect document.write from javascript script
[ "", "javascript", "" ]
How to pass multiple checkboxes using jQuery ajax post? This is the ajax function: ``` function submit_form(){ $.post("ajax.php", { selectedcheckboxes:user_ids, confirm:"true" }, function(data){ $("#lightbox").html(data); }); } ``` and this is my form: ``` <form> <input type='checkbox' name='user_ids[]' value='1' id='checkbox_1' /> <input type='checkbox' name='user_ids[]' value='2' id='checkbox_2' /> <input type='checkbox' name='user_ids[]' value='3' id='checkbox_3' /> <input name="confirm" type="button" value="confirm" onclick="submit_form();" /> </form> ```
From the jquery docs for POST ([3rd example](http://api.jquery.com/jquery.post/#entry-examples)): ``` $.post("test.php", { 'choices[]': ["Jon", "Susan"] }); ``` So I would just iterate over the checked boxes and build the array. Something like ``` var data = { 'user_ids[]' : []}; $(":checked").each(function() { data['user_ids[]'].push($(this).val()); }); $.post("ajax.php", data); ```
You could use the following and then explode the POST result with `explode(",", $_POST['data']);` to give an array of results. ``` var data = new Array(); $("input[name='checkBoxesName']:checked").each(function(i) { data.push($(this).val()); }); ```
How to pass multiple checkboxes using jQuery ajax post
[ "", "javascript", "jquery", "html", "ajax", "" ]
I would like to open an OleDbConnection to an Excel file that's in memory as a FileStream. Does anyone know if this is possible and how to accomplish it? Big picture: I would like to pass the file bytes into a .NET CLR Stored Procedure and extract the data. Perhaps there's a better way to do this that I've not considered. Thanks in advance.
The VSTO solution JP offers is definitely viable. However, if you really want to use OleDb and some SQL, take a look at [this](http://support.microsoft.com/kb/321686) article: it's a MS knowledge base article that describes how to import data from Excel into SQL Server using OleDb.
You can use [SpreadsheetGear](http://www.spreadsheetgear.com/) to open a file from a byte array in memory or from any stream with SpreadsheetGear.Factory.GetWorkbookSet().Workbooks.OpenFromMemory(byte[]) / OpenFromStream(System.IO.Stream). Disclaimer: I own SpreadsheetGear LLC
In Memory OleDbConnection to Excel File
[ "", ".net", "sql", "sql-server-2005", "excel", "" ]
I'm writing an experimental language, which is very close to pseudocode, for fun and to learn more about C++. One of the problems is that I need to parse some grammar, and my search to find a good C++ grammar parser has been unsuccessful (I couldn't find anything). What I want to accomplish is this: ``` set a to 4 ``` And I want the parser to return a map containing something similar to: ``` command = "set" var = "a" value = 4 ``` based on a set of rules I give it. I ultimately would have to roll my own if I don't find anything useful, but I would like to prevent re-inventing the wheel as much as I can. So, does anyone here know of a good/sane grammar parser that generates C++ code? It's much better if it's widely used. Thanks!
Rolling your own can be as easy as writing the grammar in the first place! It's a great way to learn about parsing, glean a more intimate knowledge of your programming language, and most of all it's fun. The method is called [Recursive Descent](http://en.wikipedia.org/wiki/Recursive_descent_parser). The result usually comes out much simpler and more elegant than what a parser generator produces, and you'll never have to find a Yacc port/fork again :) For a great tutorial, check out Jack Crenshaw's [Let's Build a Compiler](http://compilers.iecc.com/crenshaw/) Otherwise, Lex and Yacc are the traditional tools while Boost::Spirit is C++ specific and more modern. I would recommend Boost::Spirit as it will help cement the C++ paradigms in your programming.
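The question targets C++, but the shape of a hand-rolled recursive-descent step is language-agnostic. Here is a minimal sketch in Python (the grammar rule and function name are invented for illustration) that turns the asker's `set a to 4` into the desired map:

```python
def parse_set(line):
    # Grammar rule: statement ::= "set" IDENT "to" NUMBER
    tokens = line.split()
    if len(tokens) != 4 or tokens[0] != "set" or tokens[2] != "to":
        raise SyntaxError("expected: set <var> to <value>")
    return {"command": "set", "var": tokens[1], "value": int(tokens[3])}

print(parse_set("set a to 4"))  # {'command': 'set', 'var': 'a', 'value': 4}
```

In a full recursive-descent parser, each grammar rule becomes one such function, and rules call each other for nested constructs.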
Check if [Boost::Spirit](http://www.boost.org/doc/libs/1_38_0/libs/spirit/classic/index.html) can be used. The Spirit framework enables a target grammar to be written exclusively in C++.
Is there a grammar parser (similar to yapps for python) for C++?
[ "", "c++", "parsing", "grammar", "" ]
I have a CSV file with several entries, and each entry has 2 unix timestamp formatted dates. I have a method called `convert()`, which takes in the timestamp and converts it to `YYYYMMDD`. Now, since I have 2 timestamps in each line, how would I replace each one with the new value? EDIT: Just to clarify, I would like to convert each occurrence of the timestamp into the `YYYYMMDD` format. This is what is bugging me, as `re.findall()` returns a list.
I assume that by "unix timestamp formatted date" you mean a number of seconds since the epoch. This assumes that every number in the file is a UNIX timestamp. If that isn't the case you'll need to adjust the regex: ``` import re, sys # your convert function goes here regex = re.compile(r'(\d+)') for line in sys.stdin: sys.stdout.write(regex.sub(lambda m: convert(int(m.group(1))), line)) ``` This reads from stdin and calls convert on each number found. The "trick" here is that `re.sub` can take a function that transforms from a match object into a string. I'm assuming your convert function expects an int and returns a string, so I've used a lambda as an adapter function to grab the first group of the match, convert it to an int, and then pass that resulting int to convert.
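A self-contained version of that `re.sub` trick, with a hypothetical `convert()` that renders epoch seconds as `YYYYMMDD` in UTC (the asker's real function may differ):

```python
import re
from datetime import datetime, timezone

def convert(ts):
    # Hypothetical converter: epoch seconds -> YYYYMMDD (UTC).
    return datetime.fromtimestamp(int(ts), tz=timezone.utc).strftime("%Y%m%d")

regex = re.compile(r"\d{9,10}")  # adjust if other numeric fields can match
line = "alice,1234567890,1234654290"
print(regex.sub(lambda m: convert(m.group(0)), line))  # alice,20090213,20090214
```

Both timestamps on a line get replaced in one pass, which sidesteps the `re.findall()` list problem entirely.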
If you know the replacement: ``` p = re.compile( r',\d{8},') p.sub( ','+someval+',', csvstring ) ``` if it's a format change: ``` p = re.compile( r',(\d{4})(\d\d)(\d\d),') p.sub( r',\3-\2-\1,', csvstring ) ``` EDIT: sorry, just realised you said python, modified above
Regex Substitution in Python
[ "", "python", "regex", "timestamp", "" ]
I'm writing a hangman game. I'm having a logic fail, both with myself and my game logic. Char guess (the letter the person guessed) isn't being added into the correct memory slot of the vector guessArray. Assume that word is an inputted word by the user. I assume this would work if guessArray were a raw array. Is there some reason this isn't working with a vector? ``` //assume that attempts is num of attempts left void coutWord(int attempts, std::string word, char guess) { std::vector<char> guessArray(word.length()); //this is supposed to add the guess to the correct area of guessArray; //It's not. for (int i = 0; i < word.length(); i++) { if (guess == word[i]) { guessArray[i] = guess; std::cout << " " << guessArray[i] << " "; continue; } std::cout << " _ "; } std::cout << std::endl << std::endl; } ``` EDIT: My objective with this code is to cout all the unguessed spaces AND the guessed spaces in the same for loop. I just need to "remember" previous guesses so that I get correct output. Given word = "Applesauce": ``` Input: a a _ _ _ _ _ a _ _ _ Input: p a p p _ _ _ a _ _ _ ``` etc.
A vector can be indexed with subscript notation `[]`, and it is stored in contiguous memory. It is an STL container so, like an array, you can have one of any type. A vector is automatically resized. An array is 'statically' sized, and cannot be easily resized (short of manually reallocating, e.g. with realloc). You can use the `push_back` function to handle this, and you can also `reserve()` memory ahead of time to save on reallocation. An array does not track its own size, whereas a vector has functions that can check this. If you're unsure of the size of a vector, just go ahead and use `push_back()` to add items and let it size itself automatically. If you size the vector up front with `resize()` (note that `reserve()` only changes the capacity; indexing past `size()` is still invalid) and then index into it, it's easy to use like an array, but you lose some of the syntactic benefit of using it as a dynamically-sized object.
There are fundamental logical flaws in your code, beyond the use of vector or arrays. There are two tasks you are trying to do here: * Update an array of guesses * Output the array of guesses It's easy to get mixed up while you're attempting to do both tasks in the one function. My suggestion to you is to put these into separate functions. Here's a basic code structure (using functions that you can implement): ``` int attempts = 0; std::vector<char> guessArray(word.length()); while ((attempts < maxAttempts) && !HasFoundWord(guessArray)) { char guess = InputGuess(); UpdateResults(guessArray, guess, word); OutputGuess(guessArray); ++attempts; } ``` The UpdateResults would have a function signature like: ``` void UpdateResults(std::vector<char>& guessArray, char guess, const std::string& word) ``` Once you have separated out the pieces of functionality, you'll find the problem a lot more straightforward to solve.
How similar is an std::vector to a raw array in C++?
[ "", "c++", "arrays", "logic", "" ]
I am trying to write a digg, hackernews, <http://collectivesys.com/> like application where users submit something and other users can vote up or down, mark items as favorite, etc. I was just wondering if there are some open source implementations in Django/Python that I could use as a starting point, instead of reinventing the wheel by starting from scratch.
I'd recommend taking a close look at the [django-voting project](http://code.google.com/p/django-voting/wiki/RedditStyleVoting) on Google Code. They claim to be a Django implementation of "Reddit Style Voting".
Check out [Pinax](http://pinaxproject.com/) and [Django Pluggables](http://djangoplugables.com/) for some pre-made Django apps to help you out.
Writing Digg like system in django/python
[ "", "python", "django", "digg", "" ]
I'm trying to deploy a Java applet on my website. I also need to sign it, because I need to access the clipboard. I've followed all the signing tutorials I could find but have not had any success. Here is what I've done so far: * Wrote an applet in NetBeans. It runs fine in the applet viewer. * Made a .jar file out of it. * Created a certificate by doing this: > ``` > keytool -genkey -keyalg rsa -alias myKeyName > keytool -export -alias myKeyName -file myCertName.crt > ``` * Signed it with jarsigner like this: > ``` > jarsigner "C:\my path\myJar.jar" myKeyName > ``` * Made an html file containing this: > ``` > <html> > <body> > <applet code="my/path/name/myApplet.class" archive="../dist/myJar.jar"/> > </body> > </html> > ``` When I open that html file, I never get the security confirmation dialog box (and thus get the "java.security.AccessControlException: access denied" error). This happens on all browsers. Am I missing a step?
Perhaps it's because you're opening some .class files *outside* the jar file? That way it may not display the warning. I tried doing it that way but it still showed me the certificate warning, and for a simple case it actually prevented me from accessing a class from the JAR with the separated class. Maybe your specific setup or file organization causes that behavior. If you can lay that out in more detail we could help better (or rather, try putting all those .class files in yet another signed jar and adding it to the applet's archive attribute: `archive="..., anotherJar.jar"`).
3 easy steps: 1. keytool -genkey -keystore myKeyStore -alias me 2. keytool -selfcert -keystore myKeyStore -alias me 3. jarsigner -keystore myKeyStore jarfile.jar me
How do I sign a Java applet for use in a browser?
[ "", "java", "browser", "applet", "certificate", "self-signed", "" ]
As far as I can tell, [JAXP by default supports W3C XML Schema and RelaxNG from Java 6](http://java.sun.com/javase/6/docs/api/javax/xml/validation/SchemaFactory.html). I can see a few APIs, mostly experimental or incomplete, on the [schematron.com links page](http://www.schematron.com/links.html). Is there an approach on validating schematron in Java that's complete, efficient and can be used with the JAXP API?
[Jing](http://code.google.com/p/jing-trang/) supports pre-ISO Schematron validation (note that Jing's implementation is based also on XSLT). There are also XSLT implementations that can be very easily invoked from Java. You need to decide what version of Schematron you are interested in and then get the corresponding stylesheet - all of them should be available from schematron.com. The process is very simple, involving basically 2 steps: * apply the skeleton XSLT on your Schematron schema to get a new XSLT stylesheet that represents your Schematron schema in XSLT * apply the obtained XSLT on your instance document or documents to validate them JAXP is just an API and it does not require support for Relax NG from an implementation. You need to check the specific implementation that you use to see if that supports Relax NG or not.
A pure Java Schematron implementation is located at <https://github.com/phax/ph-schematron/>. It brings support for both the XSLT approach and the pure Java approach.
How can I validate documents against Schematron schemas in Java?
[ "", "java", "validation", "schema", "jaxp", "schematron", "" ]
I would like to append one DataTable to another DataTable. I see the DataTable class has two methods; "Load(IDataReader)" and "Merge(DataTable)". From the documentation, both appear to 'merge' the incoming data with the existing DataTable if rows exist. I will be doing the merge in a data access layer. I could use an `IDataReader` and use the Load method to merge the DataTables. Or I could load a DataSet using the `IDataReader`, get the DataTable from the DataSet, and then use the Merge method to merge the DataTables. I was wondering if someone could tell me which is the proper method to use? Alternatively, let me know if you have a different suggestion on how to accomplish this task.
Merge takes a DataTable, Load requires an IDataReader - so depending on what your data layer gives you access to, use the required method. My understanding is that Load will internally call Merge, but not 100% sure about that. If you have two DataTables, use Merge.
The data types of columns with the same name must be equal. ``` dataTable1.Merge(dataTable2); ``` After that the result is: dataTable1 = dataTable1 + dataTable2
How to append one DataTable to another DataTable
[ "", "c#", ".net", "vb.net", "datatable", "" ]
Is there a way to batch copy certain Wikipedia articles (about 10,000) to my own MediaWiki site? EDIT: How do I do this without overwriting similarly named articles/pages? Also I don't plan on using illegal means (crawlers etc)
If you're looking to obtain a specific set of articles, then you may be able to use the Export page (<http://en.wikipedia.org/wiki/Special:Export>) to obtain an XML dump of the pages involved; you can export multiple pages at once, although you may wish to space out your requests. You can import the XML dumps into MediaWiki using `Special:Import` or one of the import scripts in `maintenance/`.
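A sketch of batching such export requests (the endpoint is the Special:Export page the answer mentions; the parameter names match its form fields, but verify them against your MediaWiki version, and do space out the requests as advised):

```python
from urllib.parse import urlencode

def export_urls(titles, batch_size=50):
    # Build one Special:Export URL per batch of page titles.
    base = "https://en.wikipedia.org/wiki/Special:Export"
    for i in range(0, len(titles), batch_size):
        batch = titles[i:i + batch_size]
        # "pages" takes newline-separated titles; "curonly" requests only
        # the latest revision of each page.
        yield base + "?" + urlencode({"pages": "\n".join(batch), "curonly": "1"})

for url in export_urls(["Stack Overflow", "MediaWiki"], batch_size=1):
    print(url)
```

The resulting XML dumps can then be fed to `Special:Import` or the `maintenance/` import scripts as described above.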
The Wikipedia database is available for [download](http://en.wikipedia.org/wiki/Wikipedia_database)
Copy chosen Wikipedia articles into own wiki?
[ "", "php", "mysql", "mediawiki", "wikipedia", "" ]
I have a class that had an inline member, but I later decided that I wanted to remove the implementation from the headers so I moved the members body of the functions out to a cpp file. At first I just left the inlined signature in the header file (sloppy me) and the program failed to link correctly. Then I fixed my header and it all works fine, of course. *But wasn't inline totally optional?* In code: First: ``` //Class.h class MyClass { void inline foo() {} }; ``` Next changed to (won't link): ``` //Class.h class MyClass { void inline foo(); }; //Class.cpp void MyClass::foo() {} ``` And then to (will work fine): ``` //Class.h class MyClass { void foo(); }; //Class.cpp void MyClass::foo() {} ``` I thought inline was optional, and imagined I might get by with a warning for my sloppiness, but didn't expect a linking error. What's the correct/standard thing a compiler should do in this case, did I deserve my error according to the standard?
Indeed, there is this one definition rule saying that an inline function *must* be defined in every translation unit in which it is used. Gory details follow. First `3.2/3`: > Every program shall contain exactly one definition of every non-inline function or object that is used in that program; no diagnostic required. The definition can appear explicitly in the program, it can be found in the standard or a user-defined library, or (when appropriate) it is implicitly defined (see 12.1, 12.4 and 12.8). > An inline function shall be defined in every translation unit in which it is used. And of course `7.1.2/4`: > An inline function shall be defined in every translation unit in which it is used and shall have exactly the same definition in every case (3.2). [Note: a call to the inline function may be encountered before its definition appears in the translation unit. ] If a function with external linkage is declared inline in one translation unit, it shall be declared inline in all translation units in which it appears; no diagnostic is required. An inline function with external linkage shall have the same address in all translation units. A static local variable in an extern inline function always refers to the same object. A string literal in an extern inline function is the same object in different translation units. However, if you define your function within the class definition, it is implicitly declared as an `inline` function. That will allow you to include the class definition containing that inline function body multiple times in your program. Since the function has `external` linkage, any definition of it will refer to the *same* function (or, more gory, to the same `entity`). Gory details about my claim. First `3.5/5`: > In addition, a member function, static data member, class or enumeration of class scope has external linkage if the name of the class has external linkage. Then `3.5/4`: > A name having namespace scope has external linkage if it is the name of [...]
a named class (clause 9), or an unnamed class defined in a typedef declaration in which the class has the typedef name for linkage purposes. This "name for linkage purposes" is this fun thing: ``` typedef struct { [...] } the_name; ``` Since now you have multiple definitions of the *same entity* in your programs, another thing of the ODR happens to restrict you. `3.2/5` follows with boring stuff. > There can be more than one definition of a class type (clause 9), enumeration type (7.2), inline function with external linkage (7.1.2) [...] in a program provided that each definition appears in a different translation unit, and provided the definitions satisfy the following requirements. Given such an entity named D defined in more than one translation unit, then > > * each definition of D shall consist of the same sequence of tokens; and > * in each definition of D, corresponding names, looked up according to 3.4, shall refer to an entity defined within the definition of D, or shall refer to the same entity, after overload resolution (13.3) and after matching of partial template specialization (14.8.3) [...] I cut off some unimportant stuff now. The above are the two important one to remember about inline functions. If you define an extern inline function multiple times, but do define it differently, or if you define it and names used within it resolve to different entities, then you are doing undefined behavior. The rule that the function has to be defined in every TU in which it is used is easy to remember. And that it is the same is also easy to remember. But what about that name resolution thingy? Here some example. Consider a static function `assert_it`: ``` static void assert_it() { [...] } ``` Now, since `static` will give it internal linkage, when you include it into multiple translation units, then each definition will define a *different entity*. 
This means that you are *not* allowed to use `assert_it` from an extern inline function that's going to be defined multiple times in the program: Because what happens is that the inline function will refer to one entity called `assert_it` in one TU, but to another entity of the same name in another TU. You will find that this all is boring theory and compilers won't probably complain, but i found this example in particular shows the relation between the ODR and entities. --- What follows is getting back to your particular problem again. Following are the same things: ``` struct A { void f() { } }; struct A { inline void f(); }; void A::f() { } // same TU! ``` But this one is different, since the function is non-inline. You will violate the ODR, since you have more than one definition of `f` if you include the header more than once ``` struct A { void f(); }; void A::f() { } // evil! ``` Now if you put `inline` on the declaration of `f` inside the class, but then omit defining it in the header, then you violate `3.2/3` (and `7.1.2/4` which says the same thing, just more elaborating), since the function isn't defined in that translation unit! Note that in C (C99), inline has different semantics than in C++. If you create an extern inline function, you should first read some good paper (preferably the Standard), since those are really tricky in C (basically, any used inline-definition of a function will need another, non-inline function definition in another TU. static inline functions in C are easy to handle. They behave like any other function, apart of having the usual "inline substitution" hint. static `inline` in both C and C++ serve only as a inline-substitution hint. Since static will already create a different entity any time it's used (because of internal linkage), `inline` will just add the inline-substitution hint - not more.
Whether or not the method is actually inlined is at the sole discretion of the compiler. However, the presence of the inline keyword will also affect the linkage of the method. C++ linkage is not my specialty, so I'll defer to the links for a better explanation.

* <http://publib.boulder.ibm.com/infocenter/zos/v1r9/index.jsp?topic=/com.ibm.zos.r9.cbclx01/inline_linkage.htm>
* <http://en.wikipedia.org/wiki/Inline_function>

Alternately you can just wait for [litb](https://stackoverflow.com/users/34509/litb) to provide the gory details in an hour or so ;)
If inlining is optional, why does removing 'inline' cause linker errors?
[ "c++", "inline", "one-definition-rule", "inline-functions" ]
I'm implementing a `TryParse(string s, out object result)` method. If the parse fails, I would like not to touch the out parameter so any previous result will remain intact. But VS2k8 won't let me; I have to set the value of the out object no matter what. Should I just put `result = result` for the sake of pleasing the compiler? Am I missing something?
Your suggestion of `result = result` won't work, because it's an `out` parameter - it's not definitely assigned to start with, so you can't read its value until *you've* assigned a value to it.

```
result = null;
```

is definitely the right way to go for an `object` out parameter. Basically use `default(T)` for whatever type `T` you've got. (The `default` operator is useful in generic methods - for non-generic code I'd normally just use `null`, 0, whatever.)

EDIT: Based on the comment from Boris, it may be worth elaborating on the difference between a `ref` parameter and an `out` parameter:

**Out parameters**

* *Don't* have to be definitely assigned by the caller
* Are treated as "not definitely assigned" at the start of the method (you can't read the value without assigning it first, just like a local variable)
* Have to be definitely assigned (by the method) before the method terminates normally (i.e. before it returns; it can throw an exception without assigning a value to the parameter)

**Ref parameters**

* *Do* have to be definitely assigned by the caller
* Are treated as "definitely assigned" at the start of the method (so you can read the value without assigning it first)
* Don't have to be assigned to within the method (i.e. you can leave the parameter with its original value)
Assign null (or default(T) more generally). You must assign a value, that's what 'out' means.
What should the out value be set to with an unsuccessful TryXX() method?
[ "c#", ".net", "tryparse" ]
I have a multi-select listbox which I am binding to a DataTable. The DataTable contains 2 columns, description and value. Here's the listbox populating code:

```
DataTable copytable = null;
copytable = GlobalTable.Copy(); // GlobalTable is a DataTable
copytable.Rows[0][0] = "--ALL--";
copytable.Rows[0][1] = "--ALL--";

breakTypeList.DataSource = copytable;
this.breakTypeList.DisplayMember = copytable.Columns[0].ColumnName; // description
this.breakTypeList.ValueMember = copytable.Columns[1].ColumnName;   // value
this.breakTypeList.SelectedIndex = -1;
```

I am setting description as the DisplayMember and value as the ValueMember of the ListBox. Now depending on what value is passed I need to set the selected item in the ListBox. Here's my code:

```
ListBox lb = c as ListBox;
lb.SelectedValue = valuePassedByUser;
```

which is not working. Hence I have to resort to the code below (where I loop through all the items in the list box):

```
for (int i = 0; i < lb.Items.Count; i++)
{
    DataRowView dr = lb.Items[i] as DataRowView;
    if (dr["value"].ToString() == valuePassedByUser)
    {
        lb.SelectedIndices.Add(i);
        break;
    }
}
```

I would like to know what is missing/erroneous in my code. Why is `lb.SelectedValue = valuePassedByUser;` selecting incorrect items?
Ok ... here comes a hard-to-digest answer, which I realized only yesterday. It's my mistake, though, that I didn't mention one important thing in my question because I felt it was irrelevant to the problem at hand: the data in the data table was not sorted. Hence I **had set the listbox's Sorted property to true**.

Later I realized that when the listbox's (or even combo box's) Sorted property is set to true, the value member does not get set properly. So if I write:

```
lb.SelectedValue = valuePassedByUser;
```

it sets some other item as selected rather than setting the one whose value is valuePassedByUser. In short, it messes with the indexes. For example, if my initial data is:

```
Index   ValueMember   DisplayMember
1       A             Apple
2       M             Mango
3       O             Orange
4       B             Banana
```

and I set Sorted = true, then the listbox items are:

```
Index   ValueMember   DisplayMember
1       A             Apple
2       B             Banana
3       M             Mango
4       O             Orange
```

Now if I want to set Banana as selected, I run the statement:

```
lb.SelectedValue = "B";
```

But instead of setting Banana as selected, it sets Orange as selected. Why? Because had the list not been sorted, the index of Banana would be 4. So even though after sorting the index of Banana is 2, it sets index 4 as selected, thus making Orange selected instead of Banana.

Hence for a sorted listbox, I am using the following code to set selected items:

```
private void SetSelectedBreakType(ListBox lb, string value)
{
    for (int i = 0; i < lb.Items.Count; i++)
    {
        DataRowView dr = lb.Items[i] as DataRowView;
        if (dr["value"].ToString() == value)
        {
            lb.SelectedIndices.Add(i);
            break;
        }
    }
}
```
I think the only way you'll be able to select multiple items is by using a foreach loop. The SelectedValue property only seems to return one item. If you want to select more than one item you'll have to use:

```
var tempListBox = c as ListBox;
if (tempListBox != null)
    tempListBox.SelectedItems.Add(tempListBox.Items[tempListBox.FindStringExact(fieldValue)]);
```

Also, FindStringExact doesn't search through the Value fields; it only looks through the displayed text. And to cut down on code, you might want to cast a new variable as a listbox so you don't keep casting `c` as a listbox.
Setting selected item in a ListBox without looping
[ "c#", "listbox", "asp.net-2.0" ]
I'm looking into adding some unit tests for some classes in my data access layer and I'm looking at an update routine that has no return value. It simply updates a row based on the id you provide at whichever column name you provide. Inside of this method, we collect the parameters and pass them to a helper routine which calls the stored procedure to update the table. Is there a recommended approach for how to do unit testing in such a scenario? I'm having a hard time thinking of a test that wouldn't depend on other methods.
Test the method that reads the data from the database, first. Then you can call the update function, and use the function that was tested above, to verify that the value that was updated is correct. I tend to use other methods in my unit tests as long as I have tests that also test those that were called. If your helper functions are in the database (stored procedures or functions) then just test those with a DatabaseUnitTest first, then test the visual basic code.
I would just use a lookup method to validate that the data was properly updated. Yes, technically this would rely on the lookup method working properly, but I don't think you necessarily have to avoid that dependency. Just make sure the lookup method is tested as well.
Unit Testing the Data Access Layer - Testing Update Methods?
[ "sql", "vb.net", "unit-testing", "stored-procedures", "nunit" ]
Maybe I'm missing it somewhere in the PHP manual, but what exactly is the difference between an error and an exception? The only difference that I can see is that errors and exceptions are handled differently. But what causes an exception and what causes an error?
Exceptions are [thrown](https://www.php.net/throw) - they are intended to be caught. Errors are generally unrecoverable.

Let's say, for instance, you have a block of code that will insert a row into a database. It is possible that this call fails (duplicate ID) - you will want to have an "error", which in this case is an "exception". When you are inserting these rows, you can do something like this:

```
try {
    $row->insert();
    $inserted = true;
} catch (Exception $e) {
    echo "There was an error inserting the row - ".$e->getMessage();
    $inserted = false;
}

echo "Some more stuff";
```

Program execution will continue - because you 'caught' the exception. An exception will be treated as an error unless it is caught. It will allow you to continue program execution after it fails as well.
I usually [`set_error_handler`](//php.net/set_error_handler) to a function that takes the error and throws an exception so that whatever happens I'll just have exceptions to deal with. No more [`@file_get_contents`](//php.net/file_get_contents) just nice and neat try/catch. In debug situations, I also have an exception handler that outputs an asp.net like page. I've cut and pasted some of my code together to make a sample. ``` <?php define( 'DEBUG', true ); class ErrorOrWarningException extends Exception { protected $_Context = null; public function getContext() { return $this->_Context; } public function setContext( $value ) { $this->_Context = $value; } public function __construct( $code, $message, $file, $line, $context ) { parent::__construct( $message, $code ); $this->file = $file; $this->line = $line; $this->setContext( $context ); } } /** * Inspire to write perfect code. everything is an exception, even minor warnings. **/ function error_to_exception( $code, $message, $file, $line, $context ) { throw new ErrorOrWarningException( $code, $message, $file, $line, $context ); } set_error_handler( 'error_to_exception' ); function global_exception_handler( $ex ) { ob_start(); dump_exception( $ex ); $dump = ob_get_clean(); // send email of dump to administrator?... // if we are in debug mode we are allowed to dump exceptions to the browser. if ( defined( 'DEBUG' ) && DEBUG == true ) { echo $dump; } else // if we are in production we give our visitor a nice message without all the details. 
{ echo file_get_contents( 'static/errors/fatalexception.html' ); } exit; } function dump_exception( Exception $ex ) { $file = $ex->getFile(); $line = $ex->getLine(); if ( file_exists( $file ) ) { $lines = file( $file ); } ?><html> <head> <title><?= $ex->getMessage(); ?></title> <style type="text/css"> body { width : 800px; margin : auto; } ul.code { border : inset 1px; } ul.code li { white-space: pre ; list-style-type : none; font-family : monospace; } ul.code li.line { color : red; } table.trace { width : 100%; border-collapse : collapse; border : solid 1px black; } table.thead tr { background : rgb(240,240,240); } table.trace tr.odd { background : white; } table.trace tr.even { background : rgb(250,250,250); } table.trace td { padding : 2px 4px 2px 4px; } </style> </head> <body> <h1>Uncaught <?= get_class( $ex ); ?></h1> <h2><?= $ex->getMessage(); ?></h2> <p> An uncaught <?= get_class( $ex ); ?> was thrown on line <?= $line; ?> of file <?= basename( $file ); ?> that prevented further execution of this request. </p> <h2>Where it happened:</h2> <? if ( isset($lines) ) : ?> <code><?= $file; ?></code> <ul class="code"> <? for( $i = $line - 3; $i < $line + 3; $i ++ ) : ?> <? if ( $i > 0 && $i < count( $lines ) ) : ?> <? if ( $i == $line-1 ) : ?> <li class="line"><?= str_replace( "\n", "", $lines[$i] ); ?></li> <? else : ?> <li><?= str_replace( "\n", "", $lines[$i] ); ?></li> <? endif; ?> <? endif; ?> <? endfor; ?> </ul> <? endif; ?> <? if ( is_array( $ex->getTrace() ) ) : ?> <h2>Stack trace:</h2> <table class="trace"> <thead> <tr> <td>File</td> <td>Line</td> <td>Class</td> <td>Function</td> <td>Arguments</td> </tr> </thead> <tbody> <? foreach ( $ex->getTrace() as $i => $trace ) : ?> <tr class="<?= $i % 2 == 0 ? 'even' : 'odd'; ?>"> <td><?= isset($trace[ 'file' ]) ? basename($trace[ 'file' ]) : ''; ?></td> <td><?= isset($trace[ 'line' ]) ? $trace[ 'line' ] : ''; ?></td> <td><?= isset($trace[ 'class' ]) ? 
$trace[ 'class' ] : ''; ?></td> <td><?= isset($trace[ 'function' ]) ? $trace[ 'function' ] : ''; ?></td> <td> <? if( isset($trace[ 'args' ]) ) : ?> <? foreach ( $trace[ 'args' ] as $i => $arg ) : ?> <span title="<?= var_export( $arg, true ); ?>"><?= gettype( $arg ); ?></span> <?= $i < count( $trace['args'] ) -1 ? ',' : ''; ?> <? endforeach; ?> <? else : ?> NULL <? endif; ?> </td> </tr> <? endforeach;?> </tbody> </table> <? else : ?> <pre><?= $ex->getTraceAsString(); ?></pre> <? endif; ?> </body> </html><? // back in php } set_exception_handler( 'global_exception_handler' ); class X { function __construct() { trigger_error( 'Whoops!', E_USER_NOTICE ); } } $x = new X(); throw new Exception( 'Execution will never get here' ); ?> ```
PHP: exceptions vs errors?
[ "php", "exception", "error-handling" ]
I was asked to write a servlet that collects clients' details such as IP, MAC address, etc. Getting the IP is pretty straightforward (`request.getRemoteAddr()`), but I don't find an elegant way to get the MAC address. It seems reasonable that the web server has access to data such as the MAC address, since it gets the TCP packets and all. Does this data register somewhere? Is it accessible? (I'm working on top of Tomcat 6)
You're probably not going to get what you want (the client's MAC address). If the server is close enough (directly connected via hub or maybe a switch) you can [ARP](http://en.wikipedia.org/wiki/Address_Resolution_Protocol) for the MAC address. If you do this for an IP across the Internet you're probably going to get the inside interface of the closest router or switch.

Because of the way TCP/IP works, the MAC address used in the 'frame' will get ripped off and re-assembled at each hop the information takes between the server and the host.

![alt text](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3b/UDP_encapsulation.svg/450px-UDP_encapsulation.svg.png)

[Encapsulation](http://en.wikipedia.org/wiki/Encapsulation_(networking))
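For completeness: from Java you can only get at MAC addresses the server itself can see, i.e. its own interfaces, via `java.net.NetworkInterface.getHardwareAddress()` - there is no servlet API for the remote client's MAC. A hedged sketch (the formatting helper and the sample bytes are invented for illustration):

```java
import java.net.InetAddress;
import java.net.NetworkInterface;

public class MacDemo {
    // Formats raw hardware-address bytes as the familiar AA-BB-CC-DD-EE-FF form.
    static String formatMac(byte[] mac) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < mac.length; i++) {
            sb.append(String.format("%02X", mac[i]));
            if (i < mac.length - 1) sb.append("-");
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Sample bytes, just to show the formatting:
        byte[] sample = {0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E};
        System.out.println(formatMac(sample));

        // Real lookup: only works for addresses bound to a local interface.
        // Loopback typically has no hardware address, so this prints "none".
        NetworkInterface nic =
            NetworkInterface.getByInetAddress(InetAddress.getLoopbackAddress());
        byte[] mac = (nic == null) ? null : nic.getHardwareAddress();
        System.out.println(mac == null ? "none" : formatMac(mac));
    }
}
```

If the client's IP happened to belong to one of the server's own interfaces (e.g. localhost testing), `getByInetAddress` would return it; for any genuinely remote client it returns null, which is the point made above.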
TCP/IP... You can't get the MAC address; that's too low a layer, AFAIK.
how to get a client's MAC address from HttpServlet?
[ "java", "tomcat", "servlets", "tcp" ]
In the late years of college, I had a course on Compilers. We created a compiler for a subset of C. I have always wondered how a pass-by-ref function call is compiled into assembly in C++. From what I remember, a pass-by-value function call follows this procedure:

* Store the address of the PP
* Push the arguments onto the stack
* Perform the function call
* In the function, pop the parameters from the stack

What's different for pass-by-reference? (`int void(int&);`)

**EDIT:** I may sound totally lost, but if you could help me I'd really appreciate it. Everyone's answer is basically that it passes the address instead of the value. I understood that to be basically what passing a pointer is. So how come these two functions behave differently?:

```
struct A
{
    int x;
    A(int v){ x = v; }
};

int byRef(A& v){
    v = A(3);
    return 0;
}

int byP (A* v){
    v = &A(4); //OR new A(4)
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    A a (1);
    A b (2);
    byRef(a);
    byP (&b);
    cout << a.x << " " << b.x;
    system("pause");
    return 0;
}
```

I know that in byP(A\*), v is being passed by value, thus it won't affect the caller's argument. Then, how would you implement byRef(A&) in terms of A\*?
```
int byRef(A& v){
    v = A(3);
    return 0;
}
```

This invokes the assignment of the temporary object to the object passed by reference; the object used in the function call is modified. A shallow copy will be performed if no assignment operator is provided.

```
int byP (A* v){
    v = &A(4); //OR new A(4)
    return 0;
}
```

This copies a pointer to a temporary object to the passed-in pointer value. No assignment function is called. The value of 'v' is changed, but the object v pointed to - the object whose address was passed as an argument - is unchanged.

If you did this:

```
struct A
{
    int x;
    A(int v){ x = v; }
    A &operator = (A &rhs){ cout << "assignment!"; return *this; }
};
```

then "assignment!" will be output in the `byRef` function but not the `byP` function.

Although the `&` is implemented using pointers 'under the hood', as others have said, references are treated as the object passed to the function by the language. So, to implement `byRef` using pointers:

```
int byRefUsingP (A *v)
{
    *v = A(3);
    // or you could do:
    //     v->operator = (A(3));
    // if an operator = is defined (don't know if it will work without one defined)
    return 0;
}
```
You pass a pointer to the referand, exactly as you would any other pointer, and the callee knows how to use it. Depending on the implementation it therefore might not be on the stack - some parameters are passed in registers in some calling conventions. There may be other ways to do it, since the C++ standard doesn't specify how references are implemented afaik, and even if they are implemented as pointers I suppose they might be distinguished in the calling convention. Pointers are the most obvious implementation, though.
How is C++ by-ref argument passing compiled in assembly?
[ "c++", "assembly", "compiler-construction" ]
I have 2 tables, Group and People. People has a GroupId that is linked to Group.GroupId (the primary key).

How can I select groups that don't have any people, in T-SQL and in LINQ?

Thank you
**Update**

I've run four different ways to do this through SQL Server 2005 and included the execution plan.

```
-- 269 reads, 16 CPU
SELECT *
FROM Groups
WHERE NOT EXISTS
(
    SELECT *
    FROM People
    WHERE People.GroupId = Groups.GroupId
);

-- 249 reads, 15 CPU
SELECT *
FROM Groups
WHERE
(
    SELECT COUNT(*)
    FROM People
    WHERE People.GroupId = Groups.GroupId
) = 0

-- 249 reads, 14 CPU
SELECT *
FROM Groups
WHERE GroupId NOT IN
(
    SELECT DISTINCT GroupId
    FROM People
)

-- 10 reads, 12 CPU
SELECT *
FROM Groups
LEFT JOIN People ON People.GroupId = Groups.GroupId
WHERE People.GroupId IS NULL
```

So the last one, while arguably the least readable of the four, performs the best. That comes as something of a surprise to me, and honestly I still prefer the WHERE NOT EXISTS syntax because I think it's more explicit - it reads exactly like what you're trying to do.
My preferred method is a left-anti-semi join: ``` SELECT g.* FROM Groups g LEFT JOIN People p ON g.GroupID = p.GroupID WHERE p.GroupID IS NULL ``` I find it most intitive, flexible, and performant. I wrote an entire article on various query strategies to search for the absence of data - have a look [here](http://code.msdn.microsoft.com/SQLExamples/Wiki/View.aspx?title=QueryBasedUponAbsenceOfData&referringTitle=Home) if you're interested.
select rows that do not have any foreign keys linked
[ "sql", "linq" ]
How can I get the applications installed on the system using C# code?
Iterating through the registry key "SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" seems to give a comprehensive list of installed applications.

Aside from the example below, you can find a similar version to what I've done [here](http://bytes.com/groups/net-c/276143-getting-list-installed-programs-windows-using-c).

This is a rough example; you'll probably want to do something to strip out blank rows like in the 2nd link provided.

```
string registry_key = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall";
using (Microsoft.Win32.RegistryKey key = Registry.LocalMachine.OpenSubKey(registry_key))
{
    foreach (string subkey_name in key.GetSubKeyNames())
    {
        using (RegistryKey subkey = key.OpenSubKey(subkey_name))
        {
            Console.WriteLine(subkey.GetValue("DisplayName"));
        }
    }
}
```

Alternatively, you can use WMI as has been mentioned:

```
ManagementObjectSearcher mos = new ManagementObjectSearcher("SELECT * FROM Win32_Product");
foreach (ManagementObject mo in mos.Get())
{
    Console.WriteLine(mo["Name"]);
}
```

But this is rather slower to execute, and I've heard it may only list programs installed under "ALLUSERS", though that may be incorrect. It also ignores the Windows components & updates, which may be handy for you.
I wanted to be able to extract a list of apps just as they appear in the start menu. Using the registry, I was getting entries that do not show up in the start menu. I also wanted to find the exe path and to extract an icon to eventually make a nice looking launcher. Unfortunately, with the registry method this is kind of hit and miss, since my observations are that this information isn't reliably available.

My alternative is based around the shell:AppsFolder, which you can access by running `explorer.exe shell:appsFolder` and which lists all apps, including store apps, currently installed and available through the start menu. The issue is that this is a virtual folder that can't be accessed with `System.IO.Directory`. Instead, you would have to use native shell32 commands. Fortunately, Microsoft published the [Microsoft.WindowsAPICodePack-Shell](https://www.nuget.org/packages/Microsoft.WindowsAPICodePack-Shell/) on Nuget, which is a wrapper for the aforementioned commands. Enough said, here's the code:

```
// GUID taken from https://learn.microsoft.com/en-us/windows/win32/shell/knownfolderid
var FOLDERID_AppsFolder = new Guid("{1e87508d-89c2-42f0-8a7e-645a0f50ca58}");
ShellObject appsFolder = (ShellObject)KnownFolderHelper.FromKnownFolderId(FOLDERID_AppsFolder);

foreach (var app in (IKnownFolder)appsFolder)
{
    // The friendly app name
    string name = app.Name;
    // The ParsingName property is the AppUserModelID
    string appUserModelID = app.ParsingName; // or app.Properties.System.AppUserModel.ID
    // You can even get the Jumbo icon in one shot
    ImageSource icon = app.Thumbnail.ExtraLargeBitmapSource;
}
```

And that's all there is to it. You can also start the apps using:

```
System.Diagnostics.Process.Start("explorer.exe", @"shell:appsFolder\" + appUserModelID);
```

This works for regular Win32 apps and UWP store apps. How about them apples.

Since you are interested in listing all installed apps, it is reasonable to expect that you might want to monitor for new apps or uninstalled apps as well, which you can do using the `ShellObjectWatcher`:

```
ShellObjectWatcher sow = new ShellObjectWatcher(appsFolder, false);
sow.AllEvents += (s, e) => DoWhatever();
sow.Start();
```

Edit: One might also be interested in knowing that the AppUserModelID mentioned above is the [unique ID Windows uses to group windows in the taskbar](https://learn.microsoft.com/en-us/windows/win32/shell/appids).

2022: Tested in Windows 11 and still works great. Windows 11 also seems to cache apps that aren't installed per se - portable apps that don't need installing, for example. They appear in the start menu search results and can be retrieved from `shell:appsFolder` as well.
Get installed applications in a system
[ "c#", ".net", "installation" ]
Is there an `IIf` equivalent in C#? Or similar shortcut?
C# has the `?` ternary operator, like other C-style languages. However, this is not perfectly equivalent to `IIf()`; there are two important differences.

To explain the first difference: the false-part argument for this `IIf()` call causes a `DivideByZeroException`, even though the boolean argument is `True`:

```
IIf(true, 1, 1/0)
```

`IIf()` is just a function, and like all functions all the arguments must be evaluated before the call is made. Put another way, `IIf()` does *not* short circuit in the traditional sense. On the other hand, this ternary expression does short-circuit, and so is perfectly fine:

```
(true)?1:1/0;
```

The other difference is that `IIf()` is not type safe. It accepts and returns arguments of type `Object`. The ternary operator *is* type safe. It uses type inference to know what types it's dealing with. Note you can fix this very easily with your own generic `IIF(Of T)()` implementation, but out of the box that's not the way it is.

If you really want `IIf()` in C#, you can have it:

```
object IIf(bool expression, object truePart, object falsePart)
{
    return expression ? truePart : falsePart;
}
```

or a generic/type-safe implementation:

```
T IIf<T>(bool expression, T truePart, T falsePart)
{
    return expression ? truePart : falsePart;
}
```

On the other hand, if you want the ternary operator in VB, Visual Studio 2008 and later provide a new `If()` *operator* that works like C#'s ternary operator. It uses type inference to know what it's returning, and it really is an operator rather than a function. This means there are no issues from pre-evaluating expressions, even though it has function semantics.
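The short-circuiting point is easy to see in any C-style language; here is a quick sketch in Java, whose `?:` behaves the same way as C#'s in this respect (the class name is invented):

```java
public class TernaryDemo {
    public static void main(String[] args) {
        boolean cond = args.length == 0;  // true when run with no arguments
        int denom = 0;
        // The false branch (1 / denom) is never evaluated while cond is true,
        // so no ArithmeticException is thrown - unlike an IIf()-style function,
        // which would have to evaluate both arguments before the call.
        int x = cond ? 1 : 1 / denom;
        System.out.println("result: " + x);
    }
}
```

Run with no arguments, this prints the true-branch value and never touches the division; a plain function with the same three arguments would blow up before entering its body.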
VB.NET:

```
If(someBool, "true", "false")
```

C#:

```
someBool ? "true" : "false";
```
Iif equivalent in C#
[ "c#", ".net", "conditional-operator", "iif-function" ]
Ok, so here's the problem I have to solve. I need to write a method in C# that will modify a table in SQL Server 2008. The table could potentially contain millions of records. The modifications include altering the table by adding a new column and then calculating and setting the value of the new field for every row in the table. Adding the column is not a problem. It's setting the values efficiently that is the issue. I don't want to read in the whole table into a DataTable and then update and commit for obvious reasons. I'm thinking that I would like to use a cursor to iterate over the rows in the table and update them one by one. I haven't done a whole lot of ADO.NET development, but it is my understanding that only read-only server side (firehose) cursors are supported. So what is the correct way to go about doing something like this (preferably with some sample code in C#)? Stored procedures or other such modifications to the DB are not allowed.
jpgoody,

Here is an example to chew on using the [NerdDinner](http://nerddinner.codeplex.com/) database and some SQLConnection, SQLCommand, and SQLDataReader objects. It adds one day to each of the Event Dates in the **Dinners** table.

```
using System;
using System.Data.SqlClient;

namespace NerdDinner
{
    public class Class1
    {
        public void Execute()
        {
            SqlConnection readerConnection = new SqlConnection(Properties.Settings.Default.ConnectionString);
            readerConnection.Open();
            SqlCommand cmd = new SqlCommand("SELECT DinnerID, EventDate FROM Dinners", readerConnection);
            SqlDataReader reader = cmd.ExecuteReader();

            SqlConnection writerConnection = new SqlConnection(Properties.Settings.Default.ConnectionString);
            writerConnection.Open();
            SqlCommand writerCommand = new SqlCommand("", writerConnection);

            while (reader.Read())
            {
                int DinnerID = reader.GetInt32(0);
                DateTime EventDate = reader.GetDateTime(1);
                writerCommand.CommandText =
                    "UPDATE Dinners SET EventDate = '" + EventDate.AddDays(1).ToString() +
                    "' WHERE DinnerID = " + DinnerID.ToString();
                writerCommand.ExecuteNonQuery();
            }
        }
    }
}
```
Your problem looks like something that you should be solving using T-SQL and not C#; unless there is some business rule that you are picking up dynamically and calculating the column values, T-SQL should be the way to go. Just write a stored procedure, or just open up Management Studio and write the code to make your changes.

If this does not help then please elaborate on what **exactly** you want to do to the table; then we can help you figure out if this can be done via T-SQL or not.

[EDIT] You can do something like this:

```
string sql = " USE " + paramDbName + "; ";
// add the column (datatype etc. as needed), with a semicolon to separate the commands
sql += " ALTER TABLE XYZ ADD " + param1 + " ...; ";
// then update it with whatever logic you need
sql += " UPDATE XYZ SET ColumnX = ... ";
cmd.CommandText = sql;
cmd.ExecuteNonQuery();
```

Get this executed on the required instance of SQL Server 2008. If you have too many lines of text then use a StringBuilder.
How to update a large table using ADO.NET
[ "c#", "ado.net" ]
Any reliable elegant way of detecting whether the current app is a win or web forms (or other) application? We have a common config class which needs to open either app.config or web.config. Currently, I'm catching ArgumentException when I try OpenExeConfiguration but it's not very elegant and might mask other issues.
I usually check if there's an HttpContext available (if it's a web application, since in a web service `HttpContext.Current` is null). To do this, you should add `System.Web` to your references.

```
if (HttpContext.Current != null)
    // It's a web application
else
    // It's a win application
```
Try using dependency injection so the config class doesn't have to do a switch statement.
Detect win or web forms
[ "c#", "configuration", "web-config", "app-config" ]
I am looking for a good way to log changes that occur on a particular set of tables in my SQL Server 2005 database. I believe the best way to do this is through a trigger that gets executed on updates and deletes. Is there any way to grab the actual statement that is being run? Once I have the statement I can easily log it someplace else (another DB table). However, I have not found an easy way (if it is possible at all) to grab the SQL statement that is being run.
Triggers are bad; I'd stay away from triggers. If you are trying to troubleshoot something, attach [Sql Profiler](http://msdn.microsoft.com/en-us/library/ms173757.aspx) to the database with specific conditions. This will log every query run for your inspection.

Another option is to change the calling program to log its queries. This is a very common practice.
If you just want to keep a **log of all transactions** (insert, update and delete) in some database tables, then you can run the following script: ```
IF NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'LogTable')
CREATE TABLE LogTable
(
    LogID [int] IDENTITY(1,1) NOT NULL,
    Type char(1),
    TableName varchar(128),
    PrimaryKeyField varchar(1000),
    PrimaryKeyValue varchar(1000),
    FieldName varchar(128),
    OldValue varchar(1000),
    NewValue varchar(1000),
    UpdateDate datetime DEFAULT (GetDate()),
    UserName varchar(128)
)
GO

DECLARE @sql varchar(8000), @TABLE_NAME sysname
SET NOCOUNT ON

SELECT @TABLE_NAME = MIN(TABLE_NAME)
FROM INFORMATION_SCHEMA.Tables
WHERE --query for table that you want to audit
    TABLE_TYPE = 'BASE TABLE'
    AND TABLE_NAME != 'sysdiagrams'
    AND TABLE_NAME != 'LogTable'
    AND TABLE_NAME != 'one table to exclude from the log';

WHILE @TABLE_NAME IS NOT NULL
BEGIN
    SELECT 'PROCESSING ' + @TABLE_NAME;

    EXEC('IF OBJECT_ID (''' + @TABLE_NAME + '_ChangeTracking'', ''TR'') IS NOT NULL DROP TRIGGER ' + @TABLE_NAME + '_ChangeTracking')

    SELECT @sql = 'create trigger ' + @TABLE_NAME + '_ChangeTracking on ' + @TABLE_NAME + ' for insert, update, delete
as

declare @bit int ,
        @field int ,
        @maxfield int ,
        @char int ,
        @fieldname varchar(128) ,
        @TableName varchar(128) ,
        @PKCols varchar(1000) ,
        @sql varchar(2000),
        @UpdateDate varchar(21) ,
        @UserName varchar(128) ,
        @Type char(1) ,
        @PKFieldSelect varchar(1000),
        @PKValueSelect varchar(1000)

select @TableName = ''' + @TABLE_NAME + '''

-- date and user
select @UserName = system_user ,
       @UpdateDate = convert(varchar(8), getdate(), 112) + '' '' + convert(varchar(12), getdate(), 114)

-- Action
if exists (select * from inserted)
    if exists (select * from deleted)
        select @Type = ''U''
    else
        select @Type = ''I''
else
    select @Type = ''D''

-- get list of columns
select * into #ins from inserted
select * into #del from deleted

-- Get primary key columns for full outer join
select @PKCols = coalesce(@PKCols + '' and'', '' on'') + '' i.'' + c.COLUMN_NAME
    + '' = d.'' + c.COLUMN_NAME
from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk ,
     INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
where pk.TABLE_NAME = @TableName
and CONSTRAINT_TYPE = ''PRIMARY KEY''
and c.TABLE_NAME = pk.TABLE_NAME
and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME

-- Get primary key fields select for insert (comma separated)
select @PKFieldSelect = coalesce(@PKFieldSelect+''+'','''') + '''''''' + COLUMN_NAME + '',''''''
from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk ,
     INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
where pk.TABLE_NAME = @TableName
and CONSTRAINT_TYPE = ''PRIMARY KEY''
and c.TABLE_NAME = pk.TABLE_NAME
and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME

-- Get primary key values for insert (comma separated, as varchar)
select @PKValueSelect = coalesce(@PKValueSelect+''+'','''') + ''convert(varchar(100), coalesce(i.'' + COLUMN_NAME + '',d.'' + COLUMN_NAME + ''))'' + ''+'''',''''''
from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk ,
     INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
where pk.TABLE_NAME = @TableName
and CONSTRAINT_TYPE = ''PRIMARY KEY''
and c.TABLE_NAME = pk.TABLE_NAME
and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME

if @PKCols is null
begin
    raiserror(''no PK on table %s'', 16, -1, @TableName)
    return
end

select @sql = ''insert LogTable(Type, TableName, PrimaryKeyField, PrimaryKeyValue, UserName)''
select @sql = @sql + '' select '''''' + @Type + ''''''''
select @sql = @sql + '','''''' + @TableName + ''''''''
select @sql = @sql + '','' + @PKFieldSelect
select @sql = @sql + '','' + @PKValueSelect
select @sql = @sql + '','''''' + @UserName + ''''''''
select @sql = @sql + '' from #ins i full outer join #del d''
select @sql = @sql + @PKCols
exec (@sql)
';

    SELECT @sql
    EXEC(@sql)

    SELECT @TABLE_NAME = MIN(TABLE_NAME)
    FROM INFORMATION_SCHEMA.Tables
    WHERE TABLE_NAME > @TABLE_NAME --query for table that you want to audit
    AND TABLE_TYPE = 'BASE TABLE'
    AND TABLE_NAME != 'sysdiagrams'
    AND TABLE_NAME != 'LogTable'
    AND TABLE_NAME != 'one table to exclude from the log';
END
```
Log changes to database table with trigger
[ "sql", "sql-server", "sql-server-2005", "logging" ]
In Java we use the System.setProperty() method to set some system properties. According to [this article](http://blogs.oracle.com/foo/entry/monitored_system_setproperty), the use of system properties is a bit tricky.

> System.setProperty() can be an evil call.
>
> * It is 100% thread-hostile
> * It contains super-global variables
> * It is extremely difficult to debug when these variables mysteriously change at runtime.

My questions are as follows.

1. What is the scope of the system properties? Are they specific to each Virtual Machine instance, or do they have a "super-global nature" that shares the same set of properties across every virtual machine instance? I guess option 1.
2. Are there any tools that can be used to monitor runtime changes to system properties? (Just for ease of problem detection.)
**Scope of the System properties**

At least from reading the API Specifications for the [`System.setProperties`](http://java.sun.com/javase/6/docs/api/java/lang/System.html#setProperties(java.util.Properties)) method, I was unable to get an answer as to whether the system properties are shared by all instances of the JVM or not.

In order to find out, I wrote two quick programs that set a system property via `System.setProperty`, using the same key but different values:

```
class T1 {
    public static void main(String[] s) {
        System.setProperty("dummy.property", "42");

        // Keep printing value of "dummy.property" forever.
        while (true) {
            System.out.println(System.getProperty("dummy.property"));
            try { Thread.sleep(500); } catch (Exception e) {}
        }
    }
}

class T2 {
    public static void main(String[] s) {
        System.setProperty("dummy.property", "52");

        // Keep printing value of "dummy.property" forever.
        while (true) {
            System.out.println(System.getProperty("dummy.property"));
            try { Thread.sleep(500); } catch (Exception e) {}
        }
    }
}
```

(Beware that running the two programs above will make them go into an infinite loop!)

It turns out that, when running the two programs in two separate `java` processes, the value of the property set in one JVM process does not affect the value in the other JVM process.

I should add that these are the results for Sun's JRE 1.6.0\_12, and since this behavior isn't defined in the API specifications (or at least I haven't been able to find it), it may vary.

**Are there any tools to monitor runtime changes**

Not to my knowledge.
However, if one does need to check whether there were changes to the system properties, one can hold onto a copy of the `Properties` at one point in time, and compare it with another call to `System.getProperties` -- after all, [`Properties`](http://java.sun.com/javase/6/docs/api/java/util/Properties.html) is a subclass of [`Hashtable`](http://java.sun.com/javase/6/docs/api/java/util/Hashtable.html), so comparison would be performed in a similar manner.

Following is a program that demonstrates a way to check if there have been changes to the system properties. Probably not an elegant method, but it seems to do its job:

```
import java.util.*;

class CheckChanges {
    private static boolean isDifferent(Properties p1, Properties p2) {
        Set<Map.Entry<Object, Object>> p1EntrySet = p1.entrySet();
        Set<Map.Entry<Object, Object>> p2EntrySet = p2.entrySet();

        // Check that the key/value pairs are the same in the entry sets
        // obtained from the two Properties.
        // If there is a difference, return true.
        for (Map.Entry<Object, Object> e : p1EntrySet) {
            if (!p2EntrySet.contains(e))
                return true;
        }
        for (Map.Entry<Object, Object> e : p2EntrySet) {
            if (!p1EntrySet.contains(e))
                return true;
        }

        return false;
    }

    public static void main(String[] s) {
        // System properties prior to modification.
        Properties p = (Properties)System.getProperties().clone();

        // Modification of system properties.
        System.setProperty("dummy.property", "42");

        // See if there was a modification. The output is "true",
        // since the live properties now hold an entry the clone doesn't.
        System.out.println(isDifferent(p, System.getProperties()));
    }
}
```

**Properties is not thread-safe?**

[`Hashtable`](http://java.sun.com/javase/6/docs/api/java/util/Hashtable.html) *is thread-safe*, so I was expecting that `Properties` would be as well, and in fact the API Specifications for the [`Properties`](http://java.sun.com/javase/6/docs/api/java/util/Properties.html) class confirm it:

> This class is thread-safe: multiple threads can share a single `Properties` object without the need for external synchronization.
System properties are per-process. This means that they are more global than static fields, which are per-classloader. So for instance, if you have a single instance of Tomcat running multiple Java webapps, each of which has a class `com.example.Example` with a static field named `globalField`, then the webapps will share system properties, but `com.example.Example.globalField` can be set to a different value in each webapp.
Scope of the Java System Properties
[ "java", "jvm", "system-properties" ]
I have some (small amount of) data that I'll need quick access to on initial load, but not after that. Right now, I have serialized the data (a generic List) to an XML file and I'm deserializing it on load as needed. My question is: should I use the XmlSerializer or the BinaryFormatter? I'm not worried about file size, but serialization speed.
`BinaryFormatter` is faster than `XmlSerializer`. It deals with a much less bloated format and has no string-parsing issues.
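If you'd rather measure it on your own data than take it on faith, a small timing harness is easy to write. A sketch (the 10,000-item payload is made up for illustration; this assumes .NET Framework -- note that in modern .NET, `BinaryFormatter` is obsolete for security reasons):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Xml.Serialization;

class SerializerTiming
{
    static void Main()
    {
        // Made-up payload; substitute your real list.
        List<string> data = new List<string>();
        for (int i = 0; i < 10000; i++)
            data.Add("item " + i);

        Stopwatch sw = Stopwatch.StartNew();
        using (MemoryStream ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, data);
        }
        Console.WriteLine("BinaryFormatter: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        using (MemoryStream ms = new MemoryStream())
        {
            // Constructing the XmlSerializer itself is expensive the first time.
            new XmlSerializer(typeof(List<string>)).Serialize(ms, data);
        }
        Console.WriteLine("XmlSerializer:   " + sw.ElapsedMilliseconds + " ms");
    }
}
```

One wrinkle relevant to your load-at-startup scenario: the `XmlSerializer` constructor generates a serialization assembly on first use, so it carries a one-time startup cost that `BinaryFormatter` doesn't.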
Interesting answers. I was going to suggest persisting the data as code, and building it into an assembly that would be referenced from the remainder of the application.
Best storage approach for small amount of 99% static data needed at startup?
[ "c#", "xml", "database", "serialization", "binary" ]
Forgive what might seem to some to be a very simple question, but I have this use case in mind:

```
struct fraction {
    fraction( size_t num, size_t denom )
        : numerator( num ), denominator( denom ) {};

    size_t numerator;
    size_t denominator;
};
```

What I would like to do is use statements like:

```
fraction f(3,5);
...
double v = f;
```

to have `v` now hold the value represented by my fraction. How would I do this in C++?
One way to do this is to define a conversion operator:

```
struct fraction {
    size_t numerator;
    size_t denominator;

    operator float() const { return ((float)numerator) / denominator; }
};
```

Most people prefer not to define an implicit conversion operator as a matter of style, because conversion operators tend to act "behind the scenes" and it can be difficult to tell which conversions are being used.

```
struct fraction {
    size_t numerator;
    size_t denominator;

    float as_float() const { return ((float)numerator) / denominator; }
};
```

In this version, you would call the `as_float` method to get the same result.
Assignment operators and conversion constructors are for initializing objects of *your* class from objects of other classes. What you need instead is a way to initialize an object of some other type from an object of your class. That's what a conversion operator is for:

```
struct fraction {
    //other members here...
    operator double() const { return (double)numerator / denominator; }
    //other members here...
};
```
Question about the assignment operator in C++
[ "c++", "operator-overloading" ]
If I change an element of an std::set, for example through an iterator, I know it is not "reinserted" or "resorted", but is there any mention of whether it triggers undefined behavior? For example, I would imagine insertions would screw up. Is there any mention of specifically what happens?
You should not edit the values stored in the set directly. I copied this from the MSDN documentation, which is somewhat authoritative:

> The STL container class set is used for the storage and retrieval of data from a collection in which the values of the elements contained are unique and serve as the key values according to which the data is automatically ordered. The value of an element in a set may not be changed directly. Instead, you must delete old values and insert elements with new values.

The reason is pretty easy to understand. The `set` implementation has no way of knowing you have modified the value behind its back. The normal implementation is a red-black tree. Having changed the value, the position in the tree for that instance will be wrong. You would expect to see all manner of wrong behaviour, such as existence queries returning the wrong result on account of the search going down the wrong branch of the tree.
The precise answer is platform-dependent, but as a general rule, a "key" (the thing you put in a set, or the first type of a map) is supposed to be "immutable". To put it simply, it should not be modified, and there is no such thing as automatic re-insertion.

More precisely, the *member variables* used to compare the key must not be modified.

The Windows VC compiler is quite flexible (tested with VC8) and this code compiles:

```
// creation
std::set<int> toto;
toto.insert(4);
toto.insert(40);
toto.insert(25);

// bad modif
(*toto.begin()) = 100;

// output
for (std::set<int>::iterator it = toto.begin(); it != toto.end(); ++it)
{
    std::cout << *it << " ";
}
std::cout << std::endl;
```

The output is **100 25 40**, which is obviously not sorted... Bad...

Still, such behavior is useful when you want to modify data not participating in the *operator <*. But you'd better know what you're doing: that's the price you pay for being too flexible.

Some might prefer gcc's behavior (tested with 3.4.4), which gives the error "assignment of read-only location". You can work around it with a const\_cast:

```
const_cast<int&>(*toto.begin()) = 100;
```

That now compiles on gcc as well, with the same output: **100 25 40**. But at least doing so will probably make you wonder what's happening, and then you'll go to Stack Overflow and see this thread :-)
what happens when you modify an element of an std::set?
[ "c++", "stl", "collections", "set" ]
I want to save and load my XML data using XmlReader, but I don't know how to use this class. Can you give me some sample code to start with?
Personally I have switched away from XmlReader to System.Xml.Linq.XDocument to manage my XML data files. This way I can easily pull data from XML into objects and manage them like any other object in my program. When I am done manipulating them, I can just save the changes back out to the XML file at any time.

```
//Load my xml document
XDocument myData = XDocument.Load(PhysicalApplicationPath + "/Data.xml");

//Create my new object
HelpItem newitem = new HelpItem();
newitem.Answer = answer;
newitem.Question = question;
newitem.Category = category;

//Find the parent node and then add the new item to it.
XElement helpItems = myData.Descendants("HelpItems").First();
helpItems.Add(newitem.XmlHelpItem());

//Then save it back out to the file system.
myData.Save(PhysicalApplicationPath + "/Data.xml");
```

If I want to use this data in an easily managed data set, I can bind it to a list of my objects:

```
List<HelpItem> helpitems =
    (from helpitem in myData.Descendants("HelpItem")
     select new HelpItem
     {
         Category = helpitem.Element("Category").Value,
         Question = helpitem.Element("Question").Value,
         Answer = helpitem.Element("Answer").Value,
     }).ToList<HelpItem>();
```

Now it can be passed around and manipulated with any inherent functions of my object class.

For convenience, my class has a function to create itself as an XML node:

```
public XElement XmlHelpItem()
{
    XElement helpitem = new XElement("HelpItem");

    XElement category = new XElement("Category", Category);
    XElement question = new XElement("Question", Question);
    XElement answer = new XElement("Answer", Answer);

    helpitem.Add(category);
    helpitem.Add(question);
    helpitem.Add(answer);

    return helpitem;
}
```
MSDN has a simple example to get you started [here](http://msdn.microsoft.com/en-us/library/cc189056(VS.95).aspx).

If you're interested in reading and writing XML documents, and not just specifically using the XmlReader class, there's [a nice article covering a few of your options here](http://www.c-sharpcorner.com/UploadFile/mahesh/ReadWriteXMLTutMellli2111282005041517AM/ReadWriteXMLTutMellli21.aspx).

But if you just want to get started and play around, try this:

```
XmlReaderSettings settings = new XmlReaderSettings();
settings.IgnoreWhitespace = true;
settings.IgnoreComments = true;

XmlReader reader = XmlReader.Create("file.xml", settings);
```
How to use XmlReader class?
[ "c#", "xml", "xmlreader" ]
It appears so, but I can't find any definitive documentation on the subject. What I'm asking is whether the result of this query:

```
from x in Db.Items
join y in Db.Sales on x.Id equals y.ItemId
group x by x.Id into g
orderby g.Count() descending
select g.First()
```

is ALWAYS THE SAME as the result of the following query:

```
from x in Db.Items
join y in Db.Sales on x.Id equals y.ItemId
group x by x.Id into g
select g.First()
```

Note that the second query lets Linq decide the ordering of the groups, which the first query sets as number sold, from most to least.

My ad-hoc tests seem to indicate that Linq automatically sorts groups this way, while the [documentation](http://msdn.microsoft.com/en-us/library/bb534501.aspx) seems to indicate that the opposite is true -- items are returned in the order they appear in the select. I figure if it comes sorted this way, adding the extra sort is pointless and wastes cycles, and would be better left out.
You're likely seeing this because the query result returned from SQL Server happens to be in the same order in your tests. However, this is a fallacy: by definition, sets in SQL have no order unless one is explicitly specified with an ORDER BY. So if your query doesn't have an ORDER BY, your results might look ordered, but that's not guaranteed; in edge cases the order may differ (e.g. when the server has to load pages of the table in a different order due to memory constraints or otherwise). So the rule of thumb is: if you want an order, you have to specify one.
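Concretely for the two queries in the question, that rule means the explicit `orderby` must stay -- it is the only thing that guarantees the groups come back sorted by sales count. A sketch reusing the `Db.Items`/`Db.Sales` names from the question:

```csharp
// The orderby is not redundant: without it, the order of the
// returned rows is unspecified and may change between runs.
var topSellers =
    from x in Db.Items
    join y in Db.Sales on x.Id equals y.ItemId
    group x by x.Id into g
    orderby g.Count() descending
    select g.First();
```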
LINQ grouping does not guarantee such a thing. While it might work for that specific circumstance, it might not work in another situation. ***Avoid relying on this side effect***. By the way, if the output is really intentionally sorted by SQL Server due to clustered index or something, adding an `ORDER BY` clause won't hurt because query optimizer should be smart enough to know that the result is already sorted, so you won't lose anything.
Are groups in Linq to Sql already sorted by Count() descending?
[ "c#", "linq-to-sql", "sorting", "grouping" ]
Can someone tell me why, when I cast a string of, say, 00332, I only get back 332? It removes the leading zeros and saves the data in the same format. Thanks

---

```
$this->_gate = (string) $this->_linkID->QuoteSmart($gate);
```
I'd say it's because leading zeros don't have any importance when cast to an integer. Maybe you should leave it as a string if you need those leading zeros, and only cast `(int)` for any math that needs to be performed (though you could just use the string there too; PHP will figure it out).

**edit** After seeing the example, I'd want to echo the value before you cast to string, so I can confirm something fishy isn't going on with your QuoteSmart method (or the value you're sending as a param, $gate).
Don't cast strings to strings, or any type to the same type for that matter. Typecasting is for changing something from one type to another.
casting removes leading zeros
[ "php", "casting" ]