Columns: Prompt, Chosen, Rejected, Title, Tags
I have a piece of code that looks something like this (ClearImportTable and InsertPage are stored procedures): ``` datacontext.ClearImportTable() //Essentially a DELETE FROM table for (int i = 1; i < MAX_PAGES; ++i){ datacontext.InsertPage(i); //Inserts data into the table } ``` This is a somewhat simplified version of my code, but the idea is that it clears the table before inserting records. The only problem is that if an error occurs after ClearImportTable, all of the data from the table is wiped. Is there any way to wrap this in a transaction so that if there are any errors, everything will be put back the way it was?
You can use a TransactionScope: ``` using (var transaction = new TransactionScope()) { // do stuff here... transaction.Complete(); } ``` If an exception occurs, or the using block is left without hitting transaction.Complete(), then everything performed within the using block is rolled back. You will need to reference the System.Transactions assembly.
As "Sailing Judo" mentioned above, I've used the TransactionScope blocks with great success when I need to call stored procs. However, there's one "gotcha" that I've run into where an exception gets thrown saying that the "transaction is in doubt." To get around this I had to call a nondeferred method on the proc to make it evaluate the result immediately. So instead of ``` using (var transaction = new TransactionScope()) { var db = new dbDataContext(); db.StoredProc(); transaction.Complete(); } ``` I had to call it like this... ``` using (var transaction = new TransactionScope()) { var db = new dbDataContext(); db.StoredProc().ToList(); transaction.Complete(); } ``` In this example, ToList() could be any nondeferred method which causes LINQ to immediately evaluate the result. I assume this is because LINQ's lazy (deferred) nature isn't playing well with the timing of the transaction, but this is only a guess. If anyone could shed more light on this, I'd love to hear about it.
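As a cross-language illustration of the rollback behavior both answers rely on, here is a minimal sketch of the same clear-then-reload pattern in Python with sqlite3. The table name and the simulated failure are hypothetical stand-ins for the stored procedures, not the asker's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE import_table (page INTEGER)")
conn.executemany("INSERT INTO import_table VALUES (?)", [(1,), (2,), (3,)])
conn.commit()

MAX_PAGES = 10

def reload_pages(conn, fail_at=None):
    # Clear and repopulate inside ONE transaction: `with conn` commits on
    # success and rolls back if any exception escapes the block.
    with conn:
        conn.execute("DELETE FROM import_table")  # the ClearImportTable step
        for i in range(1, MAX_PAGES):
            if i == fail_at:
                raise RuntimeError("simulated failure mid-import")
            conn.execute("INSERT INTO import_table VALUES (?)", (i,))  # InsertPage

try:
    reload_pages(conn, fail_at=5)  # fails partway through
except RuntimeError:
    pass

# The rollback restored the original three rows.
print(conn.execute("SELECT COUNT(*) FROM import_table").fetchone()[0])
```

The key point matches TransactionScope: nothing inside the block survives unless the block completes normally.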
How do I use a transaction in LINQ to SQL using stored procedures?
[ "c#", "sql-server", "sql-server-2005", "linq-to-sql", ".net-3.5" ]
What is a good way to distribute a small database on CD-ROM? The database has to be encrypted. It must run on WinXP and Vista. The application is written in C# and is also distributed on CD. Records are only read, not written. It's OK to run an installer, but we prefer not to. The DB has 100,000 records and performance is not the primary goal. EDIT: Yes, the user will have to enter a password to decrypt the database.
I would use [SQLite](http://sqlite.org) for this. Actually, re-reading your question, you could even use a flat file or an XML file, given that there is no need for performance or updates/adds. If there is any chance you will need to add/update this data source as your app matures, though, go with SQLite.
Honestly... if you can do it, I would use encrypted XML files. No install, runs anywhere (text files have no dependencies), fast, and since updates are not necessary, that makes it even easier.
Database to distribute on CD
[ "c#", "database", "cd" ]
I would like to create an initialisation method for a Java class that accepts 3 parameters: ``` Employee[] method( String[] employeeNames, Integer[] employeeAges, float[] employeeSalaries ) { Employee myEmployees[] = new Employee[SIZE]; // I don't know what size is for ( int count = 0; count < SIZE; count++) { myEmployees[count] = new Employee( employeeNames[count], employeeAges[count], employeeSalaries[count] ); } return myEmployees; } ``` You may notice that this code is wrong. The SIZE variable is not defined. My problem is that I would like to pass in 3 arrays, but I would like to know if I can ensure that the three arrays are ALL of the same array size. This way the for loop will not fail, as the constructor in the for loop uses all the parameters of the arrays. Perhaps Java has a different feature that can enforce a solution to my problem. I could accept another parameter called SIZE which will be used in the for loop, but that doesn't solve my problem if parameters 1 and 2 are of size 10 and the 3rd parameter is an array of size 9. How can I enforce that the 3 arguments are all arrays that contain the exact same number of elements? Using an extra parameter that specifies the array sizes isn't very elegant and is kind of dirty. It also doesn't solve the problem if the array parameters contain different-sized arrays.
You can't enforce that at compile-time. You basically have to check it at execution time, and throw an exception if the constraint isn't met: ``` Employee[] method(String[] employeeNames, Integer[] employeeAges, float[] employeeSalaries) { if (employeeNames == null || employeeAges == null || employeeSalaries == null) { throw new NullPointerException(); } int size = employeeNames.length; if (employeeAges.length != size || employeeSalaries.length != size) { throw new IllegalArgumentException ("Names/ages/salaries must be the same size"); } ... } ```
Since the arrays being passed in aren't generated until runtime, it is not possible to prevent the method call from completing depending upon the characteristics of the array being passed in as a compile-time check. As Jon Skeet has mentioned, the only way to indicate a problem is to throw an `IllegalArgumentException` or the like at runtime to stop the processing when the method is called with the wrong parameters. In any case, the documentation should clearly note the expectations and the "contract" for using the method -- passing in of three arrays which have the same lengths. It would probably be a good idea to note this in the Javadocs for the method.
Java Method with Enforced Array Size Parameters?
[ "java", "class", "architecture" ]
I am trying to insert a huge amount of data into SQL Server. My destination table has a unique index called "Hash". I would like to replace my SqlDataAdapter implementation with SqlBulkCopy. In SqlDataAdapter there is a property called "ContinueUpdateOnError"; when set to true, adapter.Update(table) will insert all the rows possible and tag the error rows with the RowError property. The question is how can I use SqlBulkCopy to insert data as quickly as possible while keeping track of which rows got inserted and which rows did not (due to the unique index)? Here is the additional information: 1. The process is iterative, often set on a schedule to repeat. 2. The source and destination tables can be huge, sometimes millions of rows. 3. Even though it is possible to check for the hash values first, it requires two transactions per row (first for selecting the hash from the destination table, then performing the insertion). I think in the adapter.Update(table) case, it is faster to check for the RowError than checking for hash hits per row.
SqlBulkCopy has very limited error handling facilities; by default it doesn't even check constraints. However, it's fast, really really fast. If you want to work around the duplicate key issue and identify which rows are duplicates in a batch, one option is: * start tran * Grab a tablockx on the table, select all current "Hash" values and chuck them in a HashSet. * Filter out the duplicates and report. * Insert the data * commit tran This process will work effectively if you are inserting huge sets and the size of the initial data in the table is not too huge. Can you please expand your question to include the rest of the context of the problem. **EDIT** Now that I have some more context, here is another way you can go about it: * Do the bulk insert into a temp table. * start serializable tran * Select all temp rows that are already in the destination table ... report on them * Insert the data in the temp table into the real table, performing a left join on hash and including all the new rows. * commit the tran That process is very light on round trips, and considering your specs, should end up being really fast.
A slightly different approach than already suggested: perform the `SqlBulkCopy` and catch the **SqlException** thrown: ``` Violation of PRIMARY KEY constraint 'PK_MyPK'. Cannot insert duplicate key in object 'dbo.MyTable'. **The duplicate key value is (17)**. ``` You can then remove all items from your source from ID 17, the first record that was duplicated. I'm making assumptions here that apply to my circumstances and possibly not yours; i.e. that the duplication is caused by the *exact* same data from a previously failed `SqlBulkCopy` due to SQL/network errors during the upload.
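The staging-table pattern from the accepted answer's edit can be sketched in miniature with Python and sqlite3 (hypothetical table and hash values, standing in for SQL Server and SqlBulkCopy):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dest (hash TEXT PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO dest VALUES ('h1', 'old')")

# Stage the incoming batch in a temp table instead of inserting row by row.
conn.execute("CREATE TEMP TABLE staging (hash TEXT, payload TEXT)")
incoming = [("h1", "dup"), ("h2", "new"), ("h3", "new")]
conn.executemany("INSERT INTO staging VALUES (?, ?)", incoming)

# Report the rows whose hash already exists in the destination...
dupes = conn.execute(
    "SELECT s.hash FROM staging s JOIN dest d ON d.hash = s.hash"
).fetchall()

# ...then insert only the genuinely new rows with one set-based statement.
conn.execute(
    """INSERT INTO dest
       SELECT s.hash, s.payload
       FROM staging s
       LEFT JOIN dest d ON d.hash = s.hash
       WHERE d.hash IS NULL"""
)
conn.commit()
print(dupes, conn.execute("SELECT COUNT(*) FROM dest").fetchone()[0])
```

The duplicate is reported rather than inserted, and the whole batch costs a fixed number of statements regardless of row count, which is the round-trip saving the answer describes.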
SqlBulkCopy Error handling / continue on error
[ "c#", "ado.net", "sqlbulkcopy" ]
I'm unable to do the scenario from the subject line. I have the DirectX 9 March 2009 SDK installed, which is version 9, "sub"-version c, but the "sub-sub"-version is 41, so the libs (d3dx9.lib, d3d9.lib) link their exports to d3dx9\_41.dll. What happens when I try to run my app on a machine which has DX 9.0c but not the redistributable from March 2009 is now obvious :), it fails because it cannot find d3dx9\_41.dll. What is the standard solution for this problem? How am I supposed to compile my app so it is supported by all machines having DX 9.0c? Is that even achievable? Thanks
You need to install the runtime that matches the SDK you use to compile. The only way to force this to work on ALL machines with DirectX9c installed is to use an old SDK (the first 9.0c SDK). However, I strongly recommend avoiding this. You are much better off just using March 09, and install the March runtimes along with your application installation.
The simplest solution is to link to the Microsoft DirectX end-user runtime updater on your download page and tell people to run this first to make sure that the runtime components are up to date before installing your application. After that, the next simplest thing is to bundle the necessary runtime updater with your application and have users run it before running your installer. All of this is documented in the SDK documentation.
Compiling DX 9.0c app against March09SDK => Cannot run with older DX 9.0c DLLs => Problem :)
[ "c++", "directx", "direct3d9" ]
Instead of the default "boxed" axis style I want to have only the left and bottom axis, i.e.: ``` +------+ | | | | | | ---> | | | | +------+ +------- ``` This should be easy, but I can't find the necessary options in the docs.
This is the suggested Matplotlib 3 solution from the official website [HERE](http://matplotlib.org/examples/ticks_and_spines/spines_demo.html): ``` import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 2*np.pi, 100) y = np.sin(x) ax = plt.subplot(111) ax.plot(x, y) # Hide the right and top spines ax.spines[['right', 'top']].set_visible(False) plt.show() ``` ![enter image description here](https://i.stack.imgur.com/3dwiV.png)
Alternatively, this ``` def simpleaxis(ax): ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() ``` seems to achieve the same effect on an axis without losing rotated label support. (Matplotlib 1.0.1; solution inspired by [this](http://www.shocksolution.com/2011/08/removing-an-axis-or-both-axes-from-a-matplotlib-plot/)).
How can I remove the top and right axis?
[ "python", "matplotlib" ]
I was having a debate on this with some colleagues. Is there a preferred way to retrieve an object in Django when you're expecting only one? The two obvious ways are: ``` try: obj = MyModel.objects.get(id=1) except MyModel.DoesNotExist: # We have no object! Do something... pass ``` And: ``` objs = MyModel.objects.filter(id=1) if len(objs) == 1: obj = objs[0] else: # We have no object! Do something... pass ``` The first method seems behaviorally more correct, but uses exceptions in control flow which may introduce some overhead. The second is more roundabout but won't ever raise an exception. Any thoughts on which of these is preferable? Which is more efficient?
[`get()`](https://docs.djangoproject.com/en/stable/ref/models/querysets/#django.db.models.query.QuerySet.get) is provided [specifically for this case](https://docs.djangoproject.com/en/stable/topics/db/queries/#retrieving-a-single-object-with-get). Use it. Option 2 is almost precisely how the `get()` method is actually implemented in Django, so there should be no "performance" difference (and the fact that you're thinking about it indicates you're violating one of the cardinal rules of programming, namely trying to optimize code before it's even been written and profiled -- until you have the code and can run it, you don't know how it will perform, and trying to optimize before then is a path of pain).
You can install a module called [django-annoying](http://bitbucket.org/offline/django-annoying/) and then do this: ``` from annoying.functions import get_object_or_None obj = get_object_or_None(MyModel, id=1) if not obj: #omg the object was not found do some error stuff ```
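To make the accepted answer's claim concrete, that `get()` is essentially option 2 plus exceptions, here is a rough plain-Python sketch of that logic. The exception names mirror Django's, but this is an illustration, not Django's actual source:

```python
class DoesNotExist(Exception):
    pass

class MultipleObjectsReturned(Exception):
    pass

def get_single(rows):
    # Roughly the shape of QuerySet.get(): run the filter, then insist on
    # exactly one result, raising on zero or many.
    if not rows:
        raise DoesNotExist("query returned no rows")
    if len(rows) > 1:
        raise MultipleObjectsReturned("query returned %d rows" % len(rows))
    return rows[0]

print(get_single(["only row"]))
```

Since the filter-and-check work happens either way, choosing `get()` buys clearer intent (and the multiple-rows safety check) rather than performance.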
.filter() vs .get() for single object? (Django)
[ "python", "django", "django-models", "backend", "django-queryset" ]
Hi, I have a C# WinForms application with a particular form populated with a number of textboxes. I would like to make it so that pressing the right arrow key mimics the same behaviour as pressing the tab key. I'm not really sure how to do this. I don't want to change the behaviour of the tab key at all, just get the right arrow key to do the same thing whilst on that form. Can anyone offer any suggestions?
You should override the OnKeyUp method in your form to do this... ``` protected override void OnKeyUp(KeyEventArgs e) { if (e.KeyCode == Keys.Right) { Control activeControl = this.ActiveControl; if(activeControl == null) { activeControl = this; } this.SelectNextControl(activeControl, true, true, true, true); e.Handled = true; } base.OnKeyUp(e); } ```
I think this will accomplish what you're asking: ``` private void form1_KeyDown(object sender, KeyEventArgs e) { if (e.KeyCode == Keys.Right) { Control activeControl = form1.ActiveControl; // may need to check for null activeControl form1.SelectNextControl(activeControl, true, true, true, true); } } ```
C# Winform Alter Sent Keystroke
[ "c#", "winforms", "key" ]
``` class Foo { public: Foo() { do_something = &Foo::func_x; } int (Foo::*do_something)(int); // function pointer to class member function void setFunc(bool e) { do_something = e ? &Foo::func_x : &Foo::func_y; } private: int func_x(int m) { return m *= 5; } int func_y(int n) { return n *= 6; } }; int main() { Foo f; f.setFunc(false); return (f.*do_something)(5); // <- Not ok. Compile error. } ``` How can I get this to work?
The line you want is ``` return (f.*f.do_something)(5); ``` (That compiles; I've tried it.) "`*f.do_something`" refers to the pointer itself: "f" tells us where to get the do\_something value *from*. But we still need to give an object that will be the this pointer when we call the function. That's why we need the "`f.`" prefix.
``` #include <cstdio> class A{ public: typedef int (A::*method)(); method p; A(){ p = &A::foo; (this->*p)(); // <- trick 1, inner call } int foo(){ printf("foo\n"); return 0; } }; int main() { A a; (a.*a.p)(); // <- trick 2, outer call } ```
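For comparison outside C++: the same "store the member, then supply the instance explicitly at the call site" shape appears in Python when the function is stored via the class, so the call ends up mirroring `(f.*f.do_something)(5)`. A small hypothetical sketch:

```python
class Foo:
    def __init__(self):
        # Stored via the class, so this is a plain function, not bound to self.
        self.do_something = Foo.func_x

    def set_func(self, e):
        self.do_something = Foo.func_x if e else Foo.func_y

    def func_x(self, m):
        return m * 5

    def func_y(self, n):
        return n * 6

f = Foo()
f.set_func(False)
# The instance must be passed explicitly, just like (f.*f.do_something)(5).
print(f.do_something(f, 5))
```

In both languages the stored member carries no instance of its own, so the caller names the object twice: once to find the stored function, once to serve as `this`/`self`.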
C++ function pointer (class member) to non-static member function
[ "c++", "function-pointers" ]
I am trying to run a file watcher over some server path using a Windows service. I am using my Windows login credential to run the service, and am able to access this "someServerPath" from my login. But when I do that from the FileSystemWatcher it throws the exception: > The directory name "\\someServerPath" is invalid ``` var fileWatcher = new FileSystemWatcher(GetServerPath()) { NotifyFilter=(NotifyFilters.LastWrite|NotifyFilters.FileName), EnableRaisingEvents=true, IncludeSubdirectories=true }; public static string GetServerPath() { return string.Format(@"\\{0}", FileServer1); } ``` Can anyone please help me with this?
I have projects using the FileSystemWatcher object monitoring UNC paths without any issues. My guess from looking at your code example is that you are pointing the watcher at the root share of the server (`\\servername\`), which may not be a valid file system share; I know it returns things like printers, scheduled tasks, etc. in Windows Explorer. Try pointing the watcher to a share beneath the root. Something like `\\servername\c$\` would be a good test example if you have remote administrative rights on the server.
With regards to the updated question, I agree that you probably need to specify a valid share, rather than just the remote server name. [Update] Fixed the previous question about the exception with this: specify the name as `@"\\someServerPath"`. Without the @, each `\\` in the literal is treated as an escape sequence and becomes a single `\`. When you prefix the string with an @ symbol, escape sequences are not processed.
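The escaping pitfall in this answer is not C#-specific; Python's raw strings play the same role as C#'s `@`-prefixed strings. A tiny illustration:

```python
escaped = "\\someServerPath"   # the \\ escape collapses to ONE backslash
raw = r"\\someServerPath"      # raw string keeps both, like C#'s @"\\..."

# Count backslash characters actually stored in each string.
print(escaped.count("\\"), raw.count("\\"))
```

So a UNC prefix written as `"\\server"` in a normal literal silently loses one backslash, which is exactly the kind of path corruption that produces "directory name is invalid" errors.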
FileSystemWatcher Fails to access network drive
[ "c#", "windows-services", "filesystemwatcher" ]
For unit testing purposes I'm trying to write a [mock object](http://en.wikipedia.org/wiki/Mock_object) of a class with no constructors. Is this even possible in Java, or is the class simply not extensible?
A class with no constructors has an implicit public no-argument constructor and yes, as long as it's not final, it can be sub-classed. If the class has only private constructors then no, it can't.
Question has been answered, but to add a comment. This is often a good time to propose that code be written to be somewhat testable. Don't be a pain about it, research what it takes (probably Dependency Injection at least), learn about writing mocks and propose a reasonable set of guidelines that will allow classes to be more useful. We just had to re-write a bunch of singletons to use DI instead because singletons are notoriously hard to mock. This may not go over well, but some level of coding for testability is standard in most professional shops.
Is it possible to extend a class with no constructors in Java?
[ "java", "inheritance", "constructor", "mocking" ]
I want to display a file tree similarly to [java2s.com 'Create a lazy file tree'](http://www.java2s.com/Tutorial/Java/0280__SWT/Createalazyfiletree.htm), but include the actual system icons - especially for folders. SWT does not seem to offer this (Program API does not support folders), so I came up with the following: ``` public Image getImage(File file) { ImageIcon systemIcon = (ImageIcon) FileSystemView.getFileSystemView().getSystemIcon(file); java.awt.Image image = systemIcon.getImage(); int width = image.getWidth(null); int height = image.getHeight(null); BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB); Graphics2D g2d = bufferedImage.createGraphics(); g2d.drawImage(image, 0, 0, null); g2d.dispose(); int[] data = ((DataBufferInt) bufferedImage.getData().getDataBuffer()).getData(); ImageData imageData = new ImageData(width, height, 24, new PaletteData(0xFF0000, 0x00FF00, 0x0000FF)); imageData.setPixels(0, 0, data.length, data, 0); Image swtImage = new Image(this.display, imageData); return swtImage; } ``` However, the regions that should be transparent are displayed in black. How do I get this working, or is there another approach I should take? **Update:** I think the reason is that `PaletteData` is not intended for transparency at all. For now, I fill the `BufferedImage` with `Color.WHITE` now, which is an acceptable workaround. Still, I'd like to know the real solution here...
You need a method like the following, which is a 99% copy from <http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.swt.snippets/src/org/eclipse/swt/snippets/Snippet156.java?view=co> : ``` static ImageData convertToSWT(BufferedImage bufferedImage) { if (bufferedImage.getColorModel() instanceof DirectColorModel) { DirectColorModel colorModel = (DirectColorModel)bufferedImage.getColorModel(); PaletteData palette = new PaletteData(colorModel.getRedMask(), colorModel.getGreenMask(), colorModel.getBlueMask()); ImageData data = new ImageData(bufferedImage.getWidth(), bufferedImage.getHeight(), colorModel.getPixelSize(), palette); for (int y = 0; y < data.height; y++) { for (int x = 0; x < data.width; x++) { int rgb = bufferedImage.getRGB(x, y); int pixel = palette.getPixel(new RGB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF)); data.setPixel(x, y, pixel); if (colorModel.hasAlpha()) { data.setAlpha(x, y, (rgb >> 24) & 0xFF); } } } return data; } else if (bufferedImage.getColorModel() instanceof IndexColorModel) { IndexColorModel colorModel = (IndexColorModel)bufferedImage.getColorModel(); int size = colorModel.getMapSize(); byte[] reds = new byte[size]; byte[] greens = new byte[size]; byte[] blues = new byte[size]; colorModel.getReds(reds); colorModel.getGreens(greens); colorModel.getBlues(blues); RGB[] rgbs = new RGB[size]; for (int i = 0; i < rgbs.length; i++) { rgbs[i] = new RGB(reds[i] & 0xFF, greens[i] & 0xFF, blues[i] & 0xFF); } PaletteData palette = new PaletteData(rgbs); ImageData data = new ImageData(bufferedImage.getWidth(), bufferedImage.getHeight(), colorModel.getPixelSize(), palette); data.transparentPixel = colorModel.getTransparentPixel(); WritableRaster raster = bufferedImage.getRaster(); int[] pixelArray = new int[1]; for (int y = 0; y < data.height; y++) { for (int x = 0; x < data.width; x++) { raster.getPixel(x, y, pixelArray); data.setPixel(x, y, pixelArray[0]); } } return data; } return null; } ``` Then you can call it like: ``` static Image getImage(File file) { ImageIcon systemIcon = (ImageIcon) FileSystemView.getFileSystemView().getSystemIcon(file); java.awt.Image image = systemIcon.getImage(); if (image instanceof BufferedImage) { return new Image(display, convertToSWT((BufferedImage)image)); } int width = image.getWidth(null); int height = image.getHeight(null); BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB); Graphics2D g2d = bufferedImage.createGraphics(); g2d.drawImage(image, 0, 0, null); g2d.dispose(); return new Image(display, convertToSWT(bufferedImage)); } ```
For files, you can use `org.eclipse.swt.program.Program` to obtain an icon (with correct set transparency) for a given file ending: ``` File file=... String fileEnding = file.getName().substring(file.getName().lastIndexOf('.')); ImageData iconData=Program.findProgram(fileEnding ).getImageData(); Image icon= new Image(Display.getCurrent(), iconData); ``` For folders, you might consider just using a static icon.
How to display system icon for a file in SWT?
[ "java", "swt", "transparency", "icons" ]
I want to add a custom php file to a WordPress site to do a simple action. So far I have in my theme `index.php` file: ``` <a href="myfile.php?size=md">link</a> ``` and the php is ``` <?php echo "hello world"; ?> <?php echo $_GET["size"]; ?> <?php echo "hello world"; ?> ``` The link, once clicked, displays: ``` hello world ``` Is WordPress taking over the `$_GET` superglobal, meaning I need to do some tricks to use it? What am I doing wrong? **Edit**: ``` <?echo "hello world";?> <? if (array_key_exists('size', $_GET)) echo $_GET['size']; ?> <?echo "end";?> ``` Outputs: ``` hello world ```
Not sure if this will show anything but try turning on error reporting with: ``` <?php error_reporting(E_ALL); ini_set('display_errors', true); ?> ``` at the top of your page before any other code. **Edit:** From the OP comments: > silly question, but are you sure you > are viewing the results of your latest > changes to the file and not a cached > copy of the page or something? Change > "hello world" to something else. > (Sorry grasping at straws, but this > happened to me before) – Zenshai > > ahaha, the person that > were doing the changes didn't changed > the correct file. It's working now – > marcgg > > peer programming fail ^^ – marcgg > > That would be an "or something", > can't tell you how many times i've > done something like that. Glad you > were able to figure it out in the end. > – Zenshai I usually discover errors like these only when they begin to defy everything I know about a language or an environment.
Here is the solution: in order to be able to add and work with your own custom query vars that you append to URLs (e.g. `www.site.com/some_page/?my_var=foo`, for example using `add_query_arg()`), you need to add them to the public query variables available to `WP_Query`. These are built up when `WP_Query` instantiates, but fortunately are passed through a filter, `query_vars`, before they are actually used to populate the `$query_vars` property of `WP_Query`. For your case: ``` function add_query_vars_filter( $vars ){ $vars[] = "size"; return $vars; } add_filter( 'query_vars', 'add_query_vars_filter' ); ``` and on your template page call the get method like this: ``` $size_var = (get_query_var('size')) ? get_query_var('size') : false; if($size_var){ // etc... } ``` **More at the Codex**: **<http://codex.wordpress.org/Function_Reference/get_query_var>** I hope it helps!
$_GET and WordPress
[ "php", "wordpress" ]
I want to show a pop-up on click of a button. The pop-up should have a file upload control, and I need to implement the upload functionality. The base page has nested forms; in total, three forms are nested inside. If I comment out the two inner forms then I am able to get the posted file from the Request object. But I am not supposed to comment out those two forms, and with the nested forms I am not getting the posted file from the Request object. I need some guidance on how to implement this. I am using C#. The pop-up was designed using jQuery. As suggested, I am posting the sample code here. ``` <form id="frmMaster" name="frmMaster" method="post" action="Main.aspx" Runat="server" enctype="multipart/form-data"> <form method='Post' name='frmSub'> <input type="hidden" name='hdnData' value=''> </form> // This form is driven dynamically from XSL <form method='Post' name='frmMainSub'> <input type="hidden" name='hdnSet' value=''> </form> </form> ``` --- ### Note: Commenting out the inner forms works fine, but as they are required for other functionality, I am not supposed to touch those forms. I have given this code for sample purposes; the actual LOC in this page is 1200, and the second form is loaded with lots of controls dynamically. I have been asked not to touch the existing forms. Is it possible to do this functionality with nested forms?
You could always try putting one of the inner forms onto another page and serving it up in an iframe. That way the inner form is not technically inside the outer form. This will require you to alter some of the html, but there's really no way around that.
You can have multiple HTML form tags in a page, but they cannot be nested within one another. You will need to remove the nesting for this to work. If you post some of your code, you're likely to get more help with some specific recommendations to address this. From your posted code, it's also unclear why you'd even be tempted to use multiple forms. Can you elaborate on why you think you need multiple forms here? You don't have explicit actions in your subforms, so it's hard to tell where you want them to post, but I'm guessing it's all posting to the same page. So, why multiple forms at all?
Nested Form Problem in ASP.NET
[ "c#", "asp.net", "jquery", "file-upload", "nested-forms" ]
Is there any difference (performance, best-practice, etc...) between putting a condition in the JOIN clause vs. the WHERE clause? For example... ``` -- Condition in JOIN SELECT * FROM dbo.Customers AS CUS INNER JOIN dbo.Orders AS ORD ON CUS.CustomerID = ORD.CustomerID AND CUS.FirstName = 'John' -- Condition in WHERE SELECT * FROM dbo.Customers AS CUS INNER JOIN dbo.Orders AS ORD ON CUS.CustomerID = ORD.CustomerID WHERE CUS.FirstName = 'John' ``` Which do you prefer (and perhaps why)?
The relational algebra allows interchangeability of the predicates in the `WHERE` clause and the `INNER JOIN`, so even `INNER JOIN` queries with `WHERE` clauses can have the predicates rearranged by the optimizer so that they **may already be excluded** during the `JOIN` process. I recommend you write the queries in the most readable way possible. Sometimes this includes making the `INNER JOIN` relatively "incomplete" and putting some of the criteria in the `WHERE` simply to make the lists of filtering criteria more easily maintainable. For example, instead of: ``` SELECT * FROM Customers c INNER JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID AND c.State = 'NY' INNER JOIN Accounts a ON ca.AccountID = a.AccountID AND a.Status = 1 ``` Write: ``` SELECT * FROM Customers c INNER JOIN CustomerAccounts ca ON ca.CustomerID = c.CustomerID INNER JOIN Accounts a ON ca.AccountID = a.AccountID WHERE c.State = 'NY' AND a.Status = 1 ``` But it depends, of course.
For inner joins I have not really noticed a difference (but as with all performance tuning, you need to check against your database under your conditions). However where you put the condition makes a huge difference if you are using left or right joins. For instance consider these two queries: ``` SELECT * FROM dbo.Customers AS CUS LEFT JOIN dbo.Orders AS ORD ON CUS.CustomerID = ORD.CustomerID WHERE ORD.OrderDate >'20090515' SELECT * FROM dbo.Customers AS CUS LEFT JOIN dbo.Orders AS ORD ON CUS.CustomerID = ORD.CustomerID AND ORD.OrderDate >'20090515' ``` The first will give you only those records that have an order dated later than May 15, 2009 thus converting the left join to an inner join. The second will give those records plus any customers with no orders. The results set is very different depending on where you put the condition. (Select \* is for example purposes only, of course you should not use this in production code.) The exception to this is when you want to see only the records in one table but not the other. Then you use the where clause for the condition not the join. ``` SELECT * FROM dbo.Customers AS CUS LEFT JOIN dbo.Orders AS ORD ON CUS.CustomerID = ORD.CustomerID WHERE ORD.OrderID is null ```
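The left-join difference described above is easy to reproduce with any SQL engine; here is a self-contained sketch using Python's sqlite3 with hypothetical customer/order rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, order_date TEXT);
    INSERT INTO customers VALUES (1, 'with order'), (2, 'no order');
    INSERT INTO orders VALUES (10, 1, '2009-06-01');
""")

# Condition in WHERE: filters AFTER the join, so unmatched customers
# (whose order columns are NULL) are discarded -- effectively an inner join.
in_where = conn.execute("""
    SELECT c.id FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    WHERE o.order_date > '2009-05-15'
""").fetchall()

# Condition in ON: applied DURING the join, so unmatched customers survive
# with NULL order columns.
in_on = conn.execute("""
    SELECT c.id FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id AND o.order_date > '2009-05-15'
""").fetchall()

print(len(in_where), len(in_on))
```

One row versus two: the WHERE version silently drops the customer with no qualifying order, exactly as the answer warns.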
Condition within JOIN or WHERE
[ "sql", "performance" ]
How do you remove a breakpoint from Safari 4's Javascript debugger?
Clicking it again will disable it (goes a lighter blue), but the line is always highlighted as a breakpoint (or breakpoint woz 'ere). The visual indication is handy if you find you need to set it again soon after. A disabled breakpoint behaves exactly like no breakpoint anyway.
Or, according to Apple... "A blue arrow in the gutter means that before that line of code is executed, execution will halt. To temporarily disable the breakpoint, click it; it turns from blue to gray to indicate that it’s disabled. To remove the breakpoint, drag it outside of the gutter."
How do you remove a breakpoint from Safari 4's Javascript debugger?
[ "javascript", "safari", "debugging" ]
A few days ago I posted some code like this: ``` StreamWriter writer = new StreamWriter(Response.OutputStream); writer.WriteLine("col1,col2,col3"); writer.WriteLine("1,2,3"); writer.Close(); Response.End(); ``` I was told that instead I should wrap StreamWriter in a using block in case of exceptions. Such a change would make it look like this: ``` using(StreamWriter writer = new StreamWriter(Response.OutputStream)) { writer.WriteLine("col1,col2,col3"); writer.WriteLine("1,2,3"); writer.Close(); //not necessary I think... end of using block should close writer } Response.End(); ``` I am not sure why this is a valuable change. If an exception occurred without the using block, the writer and response would still be cleaned up, right? What does the using block gain me?
Nope, the stream would stay open in the first example, since the error would prevent it from being closed. The using statement forces a call to Dispose(), which is supposed to clean the object up and close all open connections when it exits the block.
I'm going to give the dissenting opinion. The answer to the specific question "Is it necessary to wrap StreamWriter in a using block?" is actually **No.** In fact, you *should not* call Dispose on a StreamWriter, because its Dispose is badly designed and does the wrong thing. The problem with StreamWriter is that, when you Dispose it, it Disposes the underlying stream. If you created the StreamWriter with a filename, and it created its own FileStream internally, then this behavior would be totally appropriate. But if, as here, you created the StreamWriter with an existing stream, then this behavior is absolutely The Wrong Thing(tm). But it does it anyway. Code like this won't work: ``` var stream = new MemoryStream(); using (var writer = new StreamWriter(stream)) { ... } stream.Position = 0; using (var reader = new StreamReader(stream)) { ... } ``` because when the StreamWriter's `using` block Disposes the StreamWriter, that will in turn throw away the stream. So when you try to read from the stream, you get an ObjectDisposedException. StreamWriter is a horrible violation of the "clean up your own mess" rule. It tries to clean up someone else's mess, whether they wanted it to or not. *(Imagine if you tried this in real life. Try explaining to the cops why you broke into someone else's house and started throwing all their stuff into the trash...)* For that reason, I consider StreamWriter (and StreamReader, which does the same thing) to be among the very few classes where "if it implements IDisposable, you should call Dispose" is wrong. *Never* call Dispose on a StreamWriter that was created on an existing stream. Call Flush() instead. Then just make sure you clean up the Stream when you should. (As Joe pointed out, ASP.NET disposes the Response.OutputStream for you, so you don't need to worry about it here.) Warning: if you don't Dispose the StreamWriter, then you *do* need to call Flush() when you're done writing. 
Otherwise you could have data still being buffered in memory that never makes it to the output stream. My rule for StreamReader is, pretend it doesn't implement IDisposable. Just let it go when you're done. My rule for StreamWriter is, call Flush where you otherwise would have called Dispose. (This means you have to use a `try`..`finally` instead of a `using`.)
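The buffering pitfall described above is not C#-specific; any buffered text writer behaves the same way. A quick illustrative sketch in Python (the names and values here are for demonstration only, not taken from the answer):

```python
import io

# A text writer layered over a byte stream, analogous to StreamWriter over a
# Stream. write_through=False keeps a buffer in the wrapper, as StreamWriter does.
raw = io.BytesIO()
writer = io.TextIOWrapper(raw, encoding="utf-8", write_through=False)

writer.write("hello")
buffered_before_flush = raw.getvalue()  # still empty: text sits in the wrapper's buffer

writer.flush()
buffered_after_flush = raw.getvalue()   # now the bytes have reached the underlying stream

writer.close()  # and, like StreamWriter.Dispose, this closes the underlying stream too

print(buffered_before_flush, buffered_after_flush, raw.closed)
```

Both halves of the argument show up here: without the flush the underlying stream never sees the data, and closing the wrapper takes the underlying stream down with it.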
Is it necessary to wrap StreamWriter in a using block?
[ "c#" ]
I have recently written a LINQ query to get a `Dictionary` containing the last 6 month's placement amounts. It is returning a `Dictionary` of Month string - Decimal Amount pairs. It seems kind of cludgey. Any of you LINQ masters out there able to help me refactor this to make a bit cleaner? ``` /// <summary> /// Gets the last 6 months of Placement History totalled by Month /// for all Agencies /// </summary> /// <returns></returns> public Dictionary<string, decimal> getRecentPlacementHistory() { var placementHistoryByMonth = new Dictionary<string, decimal>(); using (DemoLinqDataContext db = new DemoLinqDataContext()) { for (int i = 0; i < 6; i++) { Decimal monthTotal = (from a in db.Accounts where (a.Date_Assigned.Value.Month == DateTime.Now.AddMonths(-i).Month && a.Date_Assigned.Value.Year == DateTime.Now.AddMonths(-i).Month) select a.Amount_Assigned).Sum(); String currentMonth = DateTime.Now.AddMonths(-i).ToString("MMM"); placementHistoryByMonth.Add(currentMonth, monthTotal); } return placementHistoryByMonth; } } ```
First problem: ``` where (a.Date_Assigned.Value.Month == DateTime.Now.AddMonths(-i).Month && a.Date_Assigned.Value.Year == DateTime.Now.AddMonths(-i).Month) ``` Shouldn't the latter expression end with .Year rather than .Month? Surely you'll rarely get a year with a value of 1-12... I would extract the idea of the "current month" as you're using it a lot. Note that you're also taking the current time multiple times, which could give odd results if it runs at midnight at the end of a month... ``` public Dictionary<string, decimal> getRecentPlacementHistory() { var placementHistoryByMonth = new Dictionary<string, decimal>(); using (DemoLinqDataContext db = new DemoLinqDataContext()) { DateTime now = DateTime.Now; for (int i = 0; i < 6; i++) { DateTime selectedDate = now.AddMonths(-i); Decimal monthTotal = (from a in db.Accounts where (a.Date_Assigned.Value.Month == selectedDate.Month && a.Date_Assigned.Value.Year == selectedDate.Year) select a.Amount_Assigned).Sum(); placementHistoryByMonth.Add(selectedDate.ToString("MMM"), monthTotal); } return placementHistoryByMonth; } } ``` I realise it's probably the loop that you were trying to get rid of. You could try working out the upper and lower bounds of the dates for the whole lot, then grouping by the year/month of `a.Date_Assigned` within the relevant bounds. It won't be much prettier though, to be honest. Mind you, that would only be one query to the database, if you could pull it off.
Use Group By ``` DateTime now = DateTime.Now; DateTime thisMonth = new DateTime(now.Year, now.Month, 1); Dictionary<string, decimal> dict; using (DemoLinqDataContext db = new DemoLinqDataContext()) { var monthlyTotal = from a in db.Accounts where a.Date_Assigned > thisMonth.AddMonths(-6) group a by new {a.Date_Assigned.Year, a.Date_Assigned.Month} into g select new {Month = new DateTime(g.Key.Year, g.Key.Month, 1), Total = g.Sum(a=>a.Amount_Assigned)}; dict = monthlyTotal.OrderBy(p => p.Month).ToDictionary(n => n.Month.ToString("MMM"), n => n.Total); } ``` No loop needed!
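The shape of the group-by rewrite is the same in any language: bucket the rows by (year, month), then sum each bucket. A hedged, illustrative sketch with invented sample data standing in for `db.Accounts`:

```python
from collections import defaultdict
from datetime import date

# Invented sample rows: (Date_Assigned, Amount_Assigned)
accounts = [
    (date(2009, 5, 3), 100.0),
    (date(2009, 5, 20), 50.0),
    (date(2009, 4, 11), 75.0),
]

# Group by (year, month) and sum each bucket -- the heart of the rewrite above.
totals = defaultdict(float)
for assigned, amount in accounts:
    totals[(assigned.year, assigned.month)] += amount

print(dict(totals))
```

With LINQ to SQL the grouping runs server-side as a single query; the sketch just makes the bucketing logic visible.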
How can I make this LINQ query cleaner?
[ "c#", "linq", "linq-to-sql", "refactoring" ]
[Drupal](http://drupal.org/) has a very well-architected, [jQuery](http://jquery.com/)-based [autocomplete.js](http://cvs.drupal.org/viewvc.py/drupal/drupal/misc/autocomplete.js?revision=1.23&view=markup&pathrev=DRUPAL-6). Usually, you don't have to bother with it, since its configuration and execution are handled by the Drupal form API. Now, I need a way to reconfigure it at runtime (with JavaScript, that is). I have a standard drop-down select box with a text field next to it, and depending on which option is selected in the select box, I need to call different URLs for autocompletion; for one of the options, autocompletion should be disabled entirely. Is it possible to reconfigure the existing autocomplete instance, or will I have to somehow destroy and recreate it?
Well, for reference, I've thrown together a hack that works, but if anyone can think of a better solution, I'd be happy to hear it. ``` Drupal.behaviors.dingCampaignRules = function () { $('#campaign-rules') .find('.campaign-rule-wrap') .each(function (i) { var type = $(this).find('select').val(); $(this).find('.form-text') // Remove the current autocomplete bindings. .unbind() // And remove the autocomplete class .removeClass('form-autocomplete') .end() .find('select:not(.dingcampaignrules-processed)') .addClass('dingcampaignrules-processed') .change(Drupal.behaviors.dingCampaignRules) .end(); if (type == 'page' || type == 'library' || type == 'taxonomy') { $(this).find('input.autocomplete') .removeClass('autocomplete-processed') .val(Drupal.settings.dingCampaignRules.autocompleteUrl + type) .end() .find('.form-text') .addClass('form-autocomplete'); Drupal.behaviors.autocomplete(this); } }); }; ``` This code comes from the [ding\_campaign module](http://github.com/kdb/ding/tree/master/sites/all/modules/ding_campaign). Feel free to check out the code if you need to do something similar. It's all GPL2.
Have a look at misc/autocomplete.js. ``` /** * Attaches the autocomplete behavior to all required fields */ Drupal.behaviors.autocomplete = function (context) { var acdb = []; $('input.autocomplete:not(.autocomplete-processed)', context).each(function () { var uri = this.value; if (!acdb[uri]) { acdb[uri] = new Drupal.ACDB(uri); } var input = $('#' + this.id.substr(0, this.id.length - 13)) .attr('autocomplete', 'OFF')[0]; $(input.form).submit(Drupal.autocompleteSubmit); new Drupal.jsAC(input, acdb[uri]); $(this).addClass('autocomplete-processed'); }); }; ``` The input's value attribute is used to create ACDB, which is a cache of values for that autocomplete path (uri). That is used in the Drupal.jsAC function to bind the element's keydown, keyup and blur events with triggers the autocomplete ajax operation (which caches its values in the ACDB object for that element), opens popups, etc. ``` /** * An AutoComplete object */ Drupal.jsAC = function (input, db) { var ac = this; this.input = input; this.db = db; $(this.input) .keydown(function (event) { return ac.onkeydown(this, event); }) .keyup(function (event) { ac.onkeyup(this, event); }) .blur(function () { ac.hidePopup(); ac.db.cancel(); }); }; ``` What you'll need to do is change the input's value and also reattach the behavior. You'll reattach the behavior by removing the '.autocomplete-processed' class on the autocomplete text field input element and then call Drupal.attachBehaviors(thatInputElement). This may not work. Things can go very badly if you attach the same behavior to the same element over and over again. It may be more sensible to create different autocomplete fields and simply hide and show them based on the value of the select. This would still require calling Drupal.attachBehaviors when you hide and display the widget, but the same behavior would remain attached if the switch happened more than once, and you wouldn't risk attaching the same behavior to the element multiple times.
How to dynamically reconfigure Drupal's jQuery-based autocomplete at runtime?
[ "javascript", "jquery", "drupal", "autocomplete", "drupal-6" ]
I need to craft a packet that has a header, a trailer, and a variable length payload field. So far I have been using a vector for the payload so my struct is set up like this: ``` struct a_struct{ hdr a_hdr; vector<unsigned int> a_vector; tr a_tr; }; ``` When I try to access members of the vector I get a seg fault and a sizeof of an entire structs give me 32 (after I've added about 100 elements to the vector. Is this a good approach? What is better? I found this post [Variable Sized Struct C++](https://stackoverflow.com/questions/688471/variable-sized-struct-c) He was using a char array, and I'm using a vector though.
The solution in the other SO answer is c-specific, and relies on the peculiarities of c arrays - and even in c, sizeof() won't help you find the "true" size of a variable size struct. Essentially, it's cheating, and it's a kind of cheating that isn't necessary in C++. What you are doing is fine. To avoid seg faults, access the vector as you would any other vector in C++: ``` a_struct a; for(int i = 0; i < 100; ++i) a.a_vector.push_back(i); cout << a.a_vector[22] << endl; // Prints 22 ```
Even though the vector type is inlined in the struct, the only data members inside the vector itself are likely a few pointers. Adding elements to the vector won't increase the size of the vector type, only the size of the heap memory it points to. That's why the reported size of the struct never grows past 32 bytes, and why treating those 32 bytes as if they held the payload leads to a seg fault. Usually when people want to make a variable-sized struct, they do so by adding an array as the last member of the struct and setting its length to 1. They will then allocate more memory for the structure than sizeof() reports, in order to "expand" the array. This is almost always accompanied by an extra member in the struct recording the size of the expanded array. The reason for using 1 is thoroughly documented on Raymond's blog * <http://blogs.msdn.com/oldnewthing/archive/2004/08/26/220873.aspx>
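For the packet use-case in the original question, the usual language-neutral recipe is to serialize a header carrying the payload length, then the payload, then the trailer, rather than relying on a struct's in-memory layout. A rough sketch (the field layout, magic number, and checksum are invented for illustration):

```python
import struct

def pack_packet(payload):
    """Header = 2-byte magic + 4-byte payload length; trailer = 1-byte checksum."""
    header = struct.pack("<HI", 0xBEEF, len(payload))  # little-endian, no padding
    trailer = struct.pack("<B", sum(payload) % 256)    # toy checksum over the payload
    return header + payload + trailer

def unpack_packet(data):
    magic, length = struct.unpack_from("<HI", data, 0)
    payload = data[6:6 + length]                        # 6 = size of the packed header
    (checksum,) = struct.unpack_from("<B", data, 6 + length)
    assert magic == 0xBEEF and checksum == sum(payload) % 256
    return payload

wire = pack_packet(bytes(range(10)))
print(unpack_packet(wire))
```

The length field in the header is what makes the "variable size" safe: the receiver never has to guess where the payload ends and the trailer begins.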
Method for making a variable size struct
[ "c++", "data-structures", "packets" ]
I'm using a simple web-based PHP application that outputs a table as a spreadsheet ``` header("Content-Disposition: attachment; filename=" . $filename . ".xls"); header("Content-Type: application/vnd.ms-excel"); //inserts tab delimited text ``` But I'm finding the downloaded spreadsheet opens as a read-only file and must be saved locally and (in Excel on Windows) the type changed to XLS (from HTML). Is there a way to set the attribute of filetype correctly so that doing a simple save doesn't require correcting the filetype? Is the file downloaded read-only by nature of security or is this not normal? Also I don't like the automatic borders created when opening the spreadsheet in Excel or OpenOffice (on Linux). I would prefer to have no border formatting. Is there a way in the file to specify no added formatting or is this built into those applications?
I don't know which version of Excel you're talking about, so I'll suppose that you're using a 2007 or newer version. The extension change problem probably depends on an Office feature called "[Extension Hardening](http://blogs.msdn.com/b/vsofficedeveloper/archive/2008/03/11/excel-2007-extension-warning.aspx)"; as far as I know, the only solution is to generate a real Excel file (for example by using the [PHPExcel](http://phpexcel.codeplex.com/) set of classes), and not an HTML file. The downloaded files are read-only for security reasons, since they are being opened in the so called "[Protected View](http://office.microsoft.com/en-us/excel-help/what-is-protected-view-HA010355931.aspx)": > Files from the Internet and from other potentially unsafe locations > can contain viruses, worms, or other kinds of malware, which can harm > your computer. To help protect your computer, files from these > potentially unsafe locations are opened in Protected View. By using > Protected View, you can read a file and inspect its contents while > reducing the risks that can occur. Finally, a word about borders and formatting: with the old Excel 2000 version, you could format the output by simply adding some XML tags in the header section of the HTML code; see the "[Microsoft Office HTML and XML Reference](http://msdn.microsoft.com/en-us/library/aa155477%28office.10%29.aspx)" for further details and examples, but keep in mind that it's quite obsolete, so I don't think that this technique still works with the more recent Excel versions. If you want to have more control over the generated ouput, you should not use simple HTML for creating the spreadsheet file. On [this post](https://stackoverflow.com/questions/3930975/alternative-for-php-excel) you can also find some alternatives to PHPExcel for writing Excel files.
AFAIK, at least on Windows, it depends on what the user does with the prompt shown by the browser when they follow the link. If the user chooses to save the file, it won't be read-only. If the user chooses to open it, the file will be saved to a temporary directory and the browser may remove it when it is closed. I am not sure how this mechanism works, but I am assuming there is a lock involved some place that makes the file read only.
PHP - Read-Only spreadsheet filetype?
[ "php", "formatting", "file-type", "spreadsheet", "readonly" ]
I'm trying to make a non-WSDL call in PHP (5.2.5) like this. I'm sure I'm missing something simple. This call has one parameter, a string, called "timezone": ``` $URL = 'http://www.nanonull.com/TimeService/TimeService.asmx'; $client = new SoapClient(null, array( 'location' => $URL, 'uri' => "http://www.Nanonull.com/TimeService/", 'trace' => 1, )); // First attempt: // FAILS: SoapFault: Object reference not set to an instance of an object $return = $client->__soapCall("getTimeZoneTime", array(new SoapParam('ZULU', 'timezone')), array('soapaction' => 'http://www.Nanonull.com/TimeService/getTimeZoneTime') ); // Second attempt: // FAILS: Generated soap Request uses "param0" instead of "timezone" $return = $client->__soapCall("getTimeZoneTime", array('timezone'=>'ZULU' ), array('soapaction' => 'http://www.Nanonull.com/TimeService/getTimeZoneTime') ); ``` Thanks for any suggestions -Dave
Thanks. Here's the complete example which now works: ``` $URL = 'http://www.nanonull.com/TimeService/TimeService.asmx'; $client = new SoapClient(null, array( 'location' => $URL, 'uri' => "http://www.Nanonull.com/TimeService/", 'trace' => 1, )); $return = $client->__soapCall("getTimeZoneTime", array(new SoapParam('ZULU', 'ns1:timezone')), array('soapaction' => 'http://www.Nanonull.com/TimeService/getTimeZoneTime') ); ```
@Dave C's solution didn't work for me. Looking around I came up with another solution: ``` $URL = 'http://www.nanonull.com/TimeService/TimeService.asmx'; $client = new SoapClient(null, array( 'location' => $URL, 'uri' => "http://www.Nanonull.com/TimeService/", 'trace' => 1, )); $return = $client->__soapCall("getTimeZoneTime", array(new SoapParam(new SoapVar('ZULU', XSD_DATETIME), 'timezone')), array('soapaction' => 'http://www.Nanonull.com/TimeService/getTimeZoneTime') ); ``` Hope this can help somebody.
PHP Soap non-WSDL call: how do you pass parameters?
[ "php", "web-services", "soap" ]
Using this code ``` <?php foreach (glob("*.txt") as $filename) { $file = $filename; $contents = file($file); $string = implode($contents); echo $string; echo "<br></br>"; } ?> ``` I can display the contents of any txt file in the folder. The problem is that all the formatting from the txt file is lost. The txt file looks like this:

```
#nipponsei @ irc.rizon.net presents:

Title: Ah My Goddess Sorezore no Tsubasa Original Soundrack
Street Release Date: July 28, 2006
------------------------------------
Tracklist:
1. Shiawase no Iro On Air Ver
2. Peorth
3. Anata ni Sachiare
4. Trouble Chase
5. Morisato Ka no Nichijou
6. Flying Broom
7. Megami no Pride
8. Panic Station
9. Akuryou Harai
10. Hore Kusuri
11. Majin Urd
12. Hild
13. Eiichi Soudatsusen
14. Goddess Speed
15. Kaze no Deau Basho
16. Ichinan Satte, Mata...
17. Eyecatch B
18. Odayaka na Gogo
19. Heibon na Shiawase
20. Kedarui Habanera
21. Troubadour
22. Awate nai de
23. Ninja Master
24. Shinobi no Okite
25. Skuld no Hatsukoi
26. Kanashimi no Yokan
27. Kousaku Suru Ishi
28. Dai Makai Chou Kourin
29. Subete no Omoi wo Mune ni
30. Invisible Shield
31. Sparkling Battle
32. Sorezore no Tsubasa
33. Yume no Ato ni
34. Bokura no Kiseki On Air Ver
------------------------------------
Someone busted in, kicked me and asked why there was no release of it.
I forgot! I'm forgetting a lot...sorry ;_;
minglong
```

but the result I get looks like this, with everything run together: ``` #nipponsei @ irc.rizon.net presents: Title: Ah My Goddess Sorezore no Tsubasa Original Soundrack Street Release Date: July 28, 2006 ------------------------------------ Tracklist: 1. Shiawase no Iro On Air Ver 2. Peorth 3. Anata ni Sachiare 4. Trouble Chase 5. Morisato Ka no Nichijou 6. Flying Broom 7. Megami no Pride 8. Panic Station 9. Akuryou Harai 10. Hore Kusuri 11. Majin Urd 12. Hild 13. Eiichi Soudatsusen 14. Goddess Speed 15. Kaze no Deau Basho 16. Ichinan Satte, Mata... 17. Eyecatch B 18. Odayaka na Gogo 19. Heibon na Shiawase 20. Kedarui Habanera 21. Troubadour 22. Awate nai de 23. Ninja Master 24. Shinobi no Okite 25. Skuld no Hatsukoi 26. Kanashimi no Yokan 27. Kousaku Suru Ishi 28. Dai Makai Chou Kourin 29. Subete no Omoi wo Mune ni 30. Invisible Shield 31. Sparkling Battle 32. Sorezore no Tsubasa 33. Yume no Ato ni 34. Bokura no Kiseki On Air Ver ------------------------------------ Someone busted in, kicked me and asked why there was no release of it. I forgot! I'm forgetting a lot...sorry ;_; minglong ```
The glue string in `implode` defaults to an empty string, so the lines are joined with nothing between them. You should call `implode` something like this: ``` $string = implode("<br>", $contents); ```
You have to add HTML line break elements to the physical line breaks. You could use the [`nl2br` function](http://docs.php.net/nl2br) to do that: ``` foreach (glob("*.txt") as $filename) { echo nl2br(file_get_contents($filename)); echo "<br></br>"; } ``` Additionally I would use the [`file_get_contents` function](http://docs.php.net/file_get_contents) rather than the combination of `file` and `implode`.
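What `nl2br` does is mechanical enough to sketch in a few lines: insert an HTML break before each physical newline, keeping the newline itself. This mirrors the PHP function's documented behavior, not its exact implementation:

```python
import re

def nl2br(text):
    # Insert an HTML line break before each newline, keeping the newline.
    # \r\n is listed first in the alternation so it is treated as one break.
    return re.sub(r"(\r\n|\r|\n)", r"<br />\1", text)

print(nl2br("Tracklist:\n1. Track one\n2. Track two"))
```

The browser collapses raw newlines to spaces, which is why the `<br />` elements are needed at all.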
display contents of .txt file using php
[ "php", "text-files", "implode" ]
I have a regular HTML page with some images (just regular `<img />` HTML tags). I'd like to get their content, base64 encoded preferably, without the need to redownload the image (ie. it's already loaded by the browser, so now I want the content). I'd love to achieve that with Greasemonkey and Firefox.
**Note:** This only works if the image is from the same domain as the page, or has the `crossOrigin="anonymous"` attribute and the server supports CORS. It's also not going to give you the original file, but a re-encoded version. If you need the result to be identical to the original, see [Kaiido's answer](https://stackoverflow.com/a/42916772/2214). --- You will need to create a canvas element with the correct dimensions and copy the image data with the `drawImage` function. Then you can use the `toDataURL` function to get a data: url that has the base-64 encoded image. Note that the image must be fully loaded, or you'll just get back an empty (black, transparent) image. It would be something like this. I've never written a Greasemonkey script, so you might need to adjust the code to run in that environment. ``` function getBase64Image(img) { // Create an empty canvas element var canvas = document.createElement("canvas"); canvas.width = img.width; canvas.height = img.height; // Copy the image contents to the canvas var ctx = canvas.getContext("2d"); ctx.drawImage(img, 0, 0); // Get the data-URL formatted image // Firefox supports PNG and JPEG. You could check img.src to // guess the original format, but be aware the using "image/jpg" // will re-encode the image. var dataURL = canvas.toDataURL("image/png"); return dataURL.replace(/^data:image\/(png|jpg);base64,/, ""); } ``` Getting a JPEG-formatted image doesn't work on older versions (around 3.5) of Firefox, so if you want to support that, you'll need to check the compatibility. If the encoding is not supported, it will default to "image/png".
Coming long after, but none of the answers here are entirely correct. When drawn on a canvas, the passed image is uncompressed + all pre-multiplied. When exported, its uncompressed or recompressed with a different algorithm, and un-multiplied. All browsers and devices will have different rounding errors happening in this process (see [Canvas fingerprinting](https://en.wikipedia.org/wiki/Canvas_fingerprinting)). So if one wants a base64 version of an image file, they have to **request** it again (most of the time it will come from cache) but this time as a Blob. Then you can use a [FileReader](https://developer.mozilla.org/en-US/docs/Web/API/FileReader) to read it either as an ArrayBuffer, or as a dataURL. ``` function toDataURL(url, callback){ var xhr = new XMLHttpRequest(); xhr.open('get', url); xhr.responseType = 'blob'; xhr.onload = function(){ var fr = new FileReader(); fr.onload = function(){ callback(this.result); }; fr.readAsDataURL(xhr.response); // async call }; xhr.send(); } toDataURL(myImage.src, function(dataURL){ result.src = dataURL; // now just to show that passing to a canvas doesn't hold the same results var canvas = document.createElement('canvas'); canvas.width = myImage.naturalWidth; canvas.height = myImage.naturalHeight; canvas.getContext('2d').drawImage(myImage, 0,0); console.log(canvas.toDataURL() === dataURL); // false - not same data }); ``` ``` <img id="myImage" src="https://dl.dropboxusercontent.com/s/4e90e48s5vtmfbd/aaa.png" crossOrigin="anonymous"> <img id="result"> ```
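The XHR-as-Blob approach above boils down to: take the original bytes and base64-encode them into a `data:` URL, instead of re-encoding pixels through a canvas. That final encoding step looks the same in any language; here is an illustrative Python sketch (the 8-byte PNG signature stands in for a real image file):

```python
import base64

def to_data_url(mime_type, raw_bytes):
    # data:<mime>;base64,<payload> -- the same format toDataURL()/FileReader produce
    payload = base64.b64encode(raw_bytes).decode("ascii")
    return "data:%s;base64,%s" % (mime_type, payload)

url = to_data_url("image/png", b"\x89PNG\r\n\x1a\n")  # first 8 bytes of any PNG file
print(url)
```

Because the bytes are passed through untouched, decoding the payload gives back exactly the original file, which is the property the canvas route cannot guarantee.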
Get image data URL in JavaScript?
[ "javascript", "image", "firefox", "greasemonkey", "base64" ]
Bjarne Stroustrup (C++ creator) once said that he avoids "do/while" loops, and prefers to write the code in terms of a "while" loop instead. [See quote below.] Since hearing this, I have found this to be true. What are your thoughts? Is there an example where a "do/while" is much cleaner and easier to understand than if you used a "while" instead? In response to some of the answers: yes, I understand the technical difference between "do/while" and "while". This is a deeper question about readability and structuring code involving loops. Let me ask another way: suppose you were forbidden from using "do/while" - is there a realistic example where this would give you no choice but to write unclean code using "while"? From "The C++ Programming Language", 6.3.3: > In my experience, the do-statement is a source of errors and confusion. The reason is that its body is always executed once before the condition is evaluated. However, for the body to work correctly, something very much like the condition must hold even the first time through. More often than I would have guessed, I have found that condition not to hold as expected either when the program was first written and tested, or later after the code preceding it has been modified. **I also prefer the condition "up front where I can see it." Consequently, I tend to avoid do-statements.** -Bjarne Avoiding the do/while loop is a recommendation included in the [C++ Core Guidelines](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md) as [ES.75, avoid do-statements](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#es75-avoid-do-statements).
Yes, I agree that do-while loops can be rewritten as while loops; however, I disagree that always using a while loop is better. A do-while body always runs at least once, and that is a very useful property (the most typical example being input checking from the keyboard): ``` #include <stdio.h> int main() { char c; do { printf("enter a number"); scanf(" %c", &c); /* the leading space skips the leftover newline */ } while (c < '0' || c > '9'); } ``` This can of course be rewritten as a while loop, but the do-while version is usually viewed as much more elegant.
do-while is a loop with a post-condition. You need it in cases when the loop body is to be executed at least once. This is necessary for code which needs some action before the loop condition can be sensibly evaluated. With while loop you would have to call the initialization code from two sites, with do-while you can only call it from one site. Another example is when you already have a valid object when the first iteration is to be started, so you don't want to execute anything (loop condition evaluation included) before the first iteration starts. An example is with FindFirstFile/FindNextFile Win32 functions: you call FindFirstFile which either returns an error or a search handle to the first file, then you call FindNextFile until it returns an error. Pseudocode: ``` Handle handle; Params params; if( ( handle = FindFirstFile( params ) ) != Error ) { do { process( params ); //process found file } while( ( handle = FindNextFile( params ) ) != Error ) ); } ```
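Languages without a do-while (Python, for one) make the trade-off in the answers above explicit: the run-at-least-once loop becomes a `while True` with the condition at the bottom. A small sketch of the input-validation example, using a scripted input sequence instead of a keyboard:

```python
def read_digit(inputs):
    # Emulate do { read } while (not a digit) over an input sequence.
    it = iter(inputs)
    while True:                 # the body always runs at least once...
        c = next(it)
        if "0" <= c <= "9":     # ...and the "post-condition" sits at the bottom
            return c

print(read_digit(["x", "?", "7"]))
```

This is exactly the structure Stroustrup prefers to avoid: the condition has moved out of the loop header, so a reader has to scan to the bottom to learn when the loop stops.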
Is there ever a need for a "do {...} while ( )" loop?
[ "c++", "c", "loops" ]
Rightly or wrongly, I am using unique identifier as a Primary Key for tables in my sqlserver database. I have generated a model using linq to sql (c#), however where in the case of an `identity` column linq to sql generates a unique key on inserting a new record for `guid` /`uniqueidentifier` the default value of `00000000-0000-0000-0000-000000000000`. I know that I can set the guid in my code: in the linq to sql model or elsewhere, or there is the default value in creating the sql server table (though this is overridden by the value generated in the code). But where is best to put generate this key, noting that my tables are always going to change as my solution develops and therefore I shall regenerate my Linq to Sql model when it does. Does the same solution apply for a column to hold current `datetime` (of the insert), which would be updated with each update?
As you noted in you own post you can use the extensibility methods. Adding to your post you can look at the partial methods created in the datacontext for inserting and updating of each table. Example with a table called "test" and a "changeDate"-column: ``` partial void InsertTest(Test instance) { instance.idCol = System.Guid.NewGuid(); this.ExecuteDynamicInsert(instance); } partial void UpdateTest(Test instance) { instance.changeDate = DateTime.Now; this.ExecuteDynamicUpdate(instance); } ```
Thanks, I've tried this out and it seems to work OK. I have another approach, which I think I shall use for guids: set the SQL Server column's default value to newid(), then in the Linq to Sql designer set the column's Auto Generated Value property to true. This has to be redone on each regeneration of the model, but it is fairly simple.
Using SqlServer uniqueidentifier/updated date columns with Linq to Sql - Best Approach
[ "c#", "sql-server", "linq-to-sql", "datetime", "uniqueidentifier" ]
Are there any open source libraries for representing cooking units such as Teaspoon and tablespoon in Java? I have only found JSR-275 (<https://jcp.org/en/jsr/detail?id=275>) which is great but doesn't know about cooking units.
JScience is extensible, so you should be able to create a subclass of javax.measure.unit.SystemOfUnits. You'll create a number of public static final declarations like this: ``` public final class Cooking extends SystemOfUnits { private static HashSet<Unit<?>> UNITS = new HashSet<Unit<?>>(); private Cooking() { } public static Cooking getInstance() { return INSTANCE; } private static final Cooking INSTANCE = new Cooking(); public static final BaseUnit<CookingVolume> TABLESPOON = new BaseUnit<CookingVolume>("Tbsp"); ... public static final Unit<CookingVolume> TEASPOON = TABLESPOON.divide(3); } public interface CookingVolume extends Quantity { public final static Unit<CookingVolume> UNIT = Cooking.TABLESPOON; } ``` It's pretty straightforward to define the other units and conversions, just as long as you know what the conversion factors are.
This might be of some use: [JUnitConv](http://www.tecnick.com/public/code/cp_dpage.php?aiocp_dp=junitconv). It's a Java applet for converting units (including cooking units), but it's GPL-licensed so you could download the source and adapt the relevant parts for your own use. On the other hand, it looks like it shouldn't be hard to create a `CookingUnits` class compliant with JSR 275. (That's what I'd do)
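If all you need is the conversions themselves rather than a full JSR-275-style unit system, a table of factors to a common base unit covers teaspoons and tablespoons in a few lines. An illustrative sketch (US customary factors; not taken from either library above):

```python
# Factors to millilitres (US customary cooking volumes).
TO_ML = {
    "teaspoon": 4.92892159375,
    "tablespoon": 14.78676478125,  # 3 teaspoons
    "cup": 236.5882365,            # 16 tablespoons
}

def convert(amount, src, dst):
    """Convert between cooking volume units by going through millilitres."""
    return amount * TO_ML[src] / TO_ML[dst]

print(convert(3, "teaspoon", "tablespoon"))
```

The common-base-unit trick is the same idea a SystemOfUnits subclass encodes more formally: n units need only n factors, not n² pairwise conversions.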
Cooking Units in Java
[ "java", "units-of-measurement" ]
I'm looking to see if it's possible for me to code something maybe in php to auto submit a link into a scuttle site. Maybe a bookmarklet, or maybe just a script. Any ideas? Update: I am not trying to write a virus. I just have a bunch of accounts on social bookmarking sites, and I'm tired of clicking through a lot of forms to submit something.
It is possible, but to do that you will have to study the site and how it works. I would suggest going to <http://forums.digitalpoint.com/> and searching there, as that forum has a lot of posts about the kind of script you are looking for.
You can use this for submitting to 10 scuttle sites: <http://forums.digitalpoint.com/showthread.php?t=1178715>
How to write script to auto submit to scuttle sites?
[ "php" ]
I am trying to create a regular expression pattern in C#. The pattern can only allow for: * letters * numbers * underscores So far I am having little luck (i'm not good at RegEx). Here is what I have tried thus far: ``` // Create the regular expression string pattern = @"\w+_"; Regex regex = new Regex(pattern); // Compare a string against the regular expression return regex.IsMatch(stringToTest); ```
**EDIT :** ``` @"^[a-zA-Z0-9\_]+$" ``` or ``` @"^\w+$" ```
@"^\w+$" \w matches any "word character", defined as digits, letters, and underscores. It's Unicode-aware so it'll match letters with umlauts and such (better than trying to roll your own character class like [A-Za-z0-9\_] which would only match English letters). The ^ at the beginning means "match the beginning of the string here", and the $ at the end means "match the end of the string here". Without those, e.g. if you just had @"\w+", then "@@Foo@@" would match, because it *contains* one or more word characters. With the ^ and $, then "@@Foo@@" would not match (which sounds like what you're looking for), because you don't have beginning-of-string followed by one-or-more-word-characters followed by end-of-string.
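The same anchoring logic carries over to other regex engines. As a quick illustrative sketch in Python (`re.fullmatch` plays the role of the `^...$` anchors):

```python
import re

def is_valid(s):
    # \w is Unicode-aware here too, so accented letters pass,
    # just as the answer above describes for .NET.
    return re.fullmatch(r"\w+", s) is not None

print(is_valid("Foo_123"), is_valid("über"), is_valid("@@Foo@@"))
```

Without the anchoring, `"@@Foo@@"` would be accepted because it merely *contains* word characters, which is exactly the bug the answer warns about.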
C# Regular Expression to match letters, numbers and underscore
[ "c#", "regex" ]
OK, so I've got this totally rare an unique scenario of a load balanced PHP website. The bummer is - it didn't used to be load balanced. Now we're starting to get issues... Currently the only issue is with PHP sessions. Naturally nobody thought of this issue at first so the PHP session configuration was left at its defaults. Thus both servers have their own little stash of session files, and woe is the user who gets the next request thrown to the other server, because that doesn't have the session he created on the first one. Now, I've been reading PHP manual on how to solve this situation. There I found the nice function of `session_set_save_handler()`. (And, coincidentally, [this topic](https://stackoverflow.com/questions/76712/what-is-the-best-way-to-handle-sessions-for-a-php-site-on-multiple-hosts) on SO) Neat. Except I'll have to call this function in all the pages of the website. And developers of future pages would have to remember to call it all the time as well. Feels kinda clumsy, not to mention probably violating a dozen best coding practices. It would be much nicer if I could just flip some global configuration option and *Voilà* - the sessions all get magically stored in a DB or a memory cache or something. Any ideas on how to do this? --- **Added:** To clarify - I expect this to be a standard situation with a standard solution. FYI - I have a MySQL DB available. Surely there must be some ready-to-use code out there that solves this? I can, of course, write my own session saving stuff and `auto_prepend` option pointed out by [Greg](https://stackoverflow.com/questions/994935/php-sessions-in-a-load-balancing-cluster-how/994988#994988) seems promising - but that would feel like reinventing the wheel. :P --- **Added 2:** The load balancing is DNS based. I'm not sure how this works, but I guess it should be something like [this](http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/rzajw/rzajwdnsrr.htm). 
--- **Added 3:** OK, I see that one solution is to use `auto_prepend` option to insert a call to `session_set_save_handler()` in every script and write my own DB persister, perhaps throwing in calls to `memcached` for better performance. Fair enough. Is there also some way that I could avoid coding all this myself? Like some famous and well-tested PHP plugin? **Added much, much later:** This is the way I went in the end: [How to properly implement a custom session persister in PHP + MySQL?](https://stackoverflow.com/questions/1022416/how-to-properly-implement-a-custom-session-persister-in-php-mysql) Also, I simply included the session handler manually in all pages.
You could set PHP to handle the sessions in the database, so all your servers share the same session information, as all servers use the same database for that. Here's a [good tutorial](http://www.raditha.com/php/session.php) for that.
The way we handle this is through memcached. All it takes is changing the php.ini similar to the following: ``` session.save_handler = memcache session.save_path = "tcp://path.to.memcached.server:11211" ``` We use AWS ElastiCache, so the server path is a domain, but I'm sure it'd be similar for local memcached as well. This method doesn't require any application code changes.
PHP sessions in a load balancing cluster - how?
[ "", "php", "session", "load-balancing", "cluster-computing", "" ]
Is there any fast way to get all subarrays where a key value pair was found in a multidimensional array? I can't say how deep the array will be. Simple example array: ``` $arr = array(0 => array(id=>1,name=>"cat 1"), 1 => array(id=>2,name=>"cat 2"), 2 => array(id=>3,name=>"cat 1") ); ``` When I search for key=name and value="cat 1" the function should return: ``` array(0 => array(id=>1,name=>"cat 1"), 1 => array(id=>3,name=>"cat 1") ); ``` I guess the function has to be recursive to get down to the deepest level.
Code: ``` function search($array, $key, $value) { $results = array(); if (is_array($array)) { if (isset($array[$key]) && $array[$key] == $value) { $results[] = $array; } foreach ($array as $subarray) { $results = array_merge($results, search($subarray, $key, $value)); } } return $results; } $arr = array(0 => array(id=>1,name=>"cat 1"), 1 => array(id=>2,name=>"cat 2"), 2 => array(id=>3,name=>"cat 1")); print_r(search($arr, 'name', 'cat 1')); ``` Output: ``` Array ( [0] => Array ( [id] => 1 [name] => cat 1 ) [1] => Array ( [id] => 3 [name] => cat 1 ) ) ``` If efficiency is important you could write it so all the recursive calls store their results in the same temporary `$results` array rather than merging arrays together, like so: ``` function search($array, $key, $value) { $results = array(); search_r($array, $key, $value, $results); return $results; } function search_r($array, $key, $value, &$results) { if (!is_array($array)) { return; } if (isset($array[$key]) && $array[$key] == $value) { $results[] = $array; } foreach ($array as $subarray) { search_r($subarray, $key, $value, $results); } } ``` The key there is that `search_r` takes its fourth parameter by reference rather than by value; the ampersand `&` is crucial. FYI: If you have an older version of PHP then you have to specify the pass-by-reference part in the *call* to `search_r` rather than in its declaration. That is, the last line becomes `search_r($subarray, $key, $value, &$results)`.
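For readers coming from other languages, the shared-results recursion above translates almost line for line. Here is a hedged Python sketch of the same idea (the dict/list shapes and names are illustrative, not part of the original question):

```python
def search(node, key, value, results=None):
    """Recursively collect every dict that maps `key` to `value`."""
    if results is None:
        results = []  # shared accumulator, like the by-reference $results
    if isinstance(node, dict):
        if node.get(key) == value:
            results.append(node)
        children = node.values()
    elif isinstance(node, list):
        children = node
    else:
        return results  # scalar leaf: nothing to descend into
    for child in children:
        search(child, key, value, results)
    return results

arr = [
    {"id": 1, "name": "cat 1"},
    {"id": 2, "name": "cat 2"},
    {"id": 3, "name": "cat 1"},
]
print(search(arr, "name", "cat 1"))
```

As in the PHP version, passing the accumulator down avoids merging intermediate arrays on every recursive call.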
How about the [SPL](http://php.net/spl) version instead? It'll save you some typing: ``` // I changed your input example to make it harder and // to show it works at lower depths: $arr = array(0 => array('id'=>1,'name'=>"cat 1"), 1 => array(array('id'=>3,'name'=>"cat 1")), 2 => array('id'=>2,'name'=>"cat 2") ); //here's the code: $arrIt = new RecursiveIteratorIterator(new RecursiveArrayIterator($arr)); foreach ($arrIt as $sub) { $subArray = $arrIt->getSubIterator(); if ($subArray['name'] === 'cat 1') { $outputArray[] = iterator_to_array($subArray); } } ``` What's great is that basically the same code will iterate through a directory for you, by using a RecursiveDirectoryIterator instead of a RecursiveArrayIterator. SPL is the roxor. The only bummer about SPL is that it's badly documented on the web. But several PHP books go into some useful detail, particularly Pro PHP; and you can probably google for more info, too.
How to search by key=>value in a multidimensional array in PHP
[ "", "php", "arrays", "search", "recursion", "" ]
My goal is to change the `onclick` attribute of a link. I can do it successfully, but the resulting link doesn't work in ie8. It does work in ff3. For example, this works in Firefox 3, but not IE8. Why? ``` <p><a id="bar" href="#" onclick="temp()">click me</a></p> <script> doIt = function() { alert('hello world!'); } foo = document.getElementById("bar"); foo.setAttribute("onclick","javascript:doIt();"); </script> ```
You don't need to use setAttribute for that - This code works (IE8 also) ``` <div id="something" >Hello</div> <script type="text/javascript" > (function() { document.getElementById("something").onclick = function() { alert('hello'); }; })(); </script> ```
Your best bet is to use a JavaScript framework like jQuery or Prototype, but, failing that, you should use: ``` if (foo.addEventListener) foo.addEventListener('click',doit,false); //everything else else if (foo.attachEvent) foo.attachEvent('onclick',doit); //IE only ``` Edit: also, your function is a little off. It should be ``` var doit = function(){ alert('hello world!'); } ```
How to Set OnClick attribute with value containing function in ie8?
[ "", "javascript", "onclick", "setattribute", "" ]
According to [this article](http://www.gotw.ca/publications/mill17.htm) from Herb Sutter, one should always pick **Class Specializing** over **Function Overload** and definitely over **Specialized Function Templates**. The reason is that * Specializations don't overload. Overload resolution only selects a base template (or a nontemplate function, if one is available). Only after it's been decided which base template is going to be selected, and that choice is locked in, will the compiler look around to see if there happens to be a suitable specialization of that template available, and if so that specialization will get used. * we can’t partially specialize function templates. I must admit that before I read the article I had banged my head against the wall a few times. *Why isn’t he picking my specialized function …* After reading the article I’ve never used Specialized Function Templates again. Example: ``` template <class T> void foo( T t); ``` We should write foo like this so we can specialize it with class templates instead of specializing the function. ``` template<class T> struct FooImpl; template <class T> void foo( T t) { FooImpl<T>::foo(t); } ``` Now we can specialize the template, and we don’t have to worry about the overload rules; we can even partially specialize the template like this: ``` template<class U, class V> struct FooImpl< QMap< U, V > >; ``` ## Here is the question. It seems that StackOverflow members prefer Specialized Function Templates. Why? Because the Specialized Function Templates get a lot more upvotes than the Overload Solutions and the Class Specializing. With the information that I have at the moment I find it perverse, because I know that I can get it right, but I know that the one who comes after me will hit the wall. There are already some links to the GotW article, so you must have read it. This means that upvoters must have some extra information; please stand up and enlighten me.
The problem with explicitly specialising a function template only applies if the function is also overloaded: ``` template <typename T> void foo (T*); // #1 template <typename T> void foo (T); // #2 template <> void foo<int*> (int*); int main () { int * i; foo (i); // Calls #1 not specialization of #2 } ``` Without the #1 overload, the code will work as expected. However, a function that starts out not being overloaded may have overloads added as the code is maintained into the future. This is one of those examples where, although I hate to say it, C++ has too many ways to do the same thing. Personally, if you find you need to specialise a function template, then I like the pattern suggested by [TimW](https://stackoverflow.com/users/106064/timw) in his comment against Neil Butterworth's [answer](https://stackoverflow.com/questions/992471/how-to-query-iftint-with-template-class/992488#992488), i.e. it's best to do so by having the current function dispatch its call to a specialized class template instead: ``` template <typename T> struct DoFoo { static void apply (T) { /* default behaviour */ } }; template <> struct DoFoo<int*> { static void apply (int*) { /* int * behaviour */ } }; template <typename T> void foo (T t) { DoFoo<T>::apply (t); } ``` If 'foo' is overloaded, then at least it's clearer to the developer that this function won't be called, i.e. the developer doesn't need to be a standards guru to know how the specialization rules interact with overload resolution. Ultimately, however, the code generated by the compiler is going to be the same; this is purely a code comprehension issue on the part of the developer.
You are assuming here that SO voters are all gurus - they are not. Many technically incorrect answers get upvoted. As the author of [the answer that I think you are referring to](https://stackoverflow.com/questions/992471/how-to-query-iftint-with-template-class/992488#992488), well, as I said in my comment, it's not an issue I care (or know) all that much about, because I very rarely use template specialisation. I suppose I could have deleted the answer, but it is technically correct and does seem to have sparked some useful debate on the topic. And note that it is currently third in points (it will probably get downvoted after this) and that another answer that uses a completely different approach to the problem has been accepted.
Specialize Function Templates vs Function Overload vs Class Specializing
[ "", "c++", "templates", "" ]
I was trying to separate my DAL from my Business Layer, and in doing so, I decided to eschew any ActiveRecord approach and go for a DataMapper approach. In other words, my domain objects would not take care of persisting themselves. In doing so, I seem to be encroaching on the "anemic domain model" anti-pattern. For instance, one of the entities in my program is an Organization. An organization is represented as something like this: ``` class Organization { private $orgId; private $orgName; // getters and setters } ``` So basically this organization does nothing other than act as "bag" (as Martin Fowler says) for some data. In the PHP world it is nothing more than a glorified array. There is zero behaviour associated with it. And behaviour in the program, I've been sticking in "service level" class like an OrganizationService which mostly serves as an intermediary between these objects and the DAL. Other than potential scaling issues with PHP (I do have other reasons why I insist on "bagging" my data in these objects), is this approach totally off? How do you handle your domain models in these situations? Perhaps an organization isn't part of my domain in the first place?
Well, it seems like this at the beginning, but as you refactor your code more, you'll get to some behavior for your organization class... One example that I can think of right now: if you have people (employees), you may want to associate them with the organization. So you might have a method `AssociateEmployee(User employee)` that might find its place in your organization class. Or you might change the location of the company: instead of setting parameters like address, city and state in three steps, you might add a `ChangeLocation(Street, City, State)` method. Just go step by step; when you encounter some code in your BL/service layer that seems like it should belong in the domain, move it down to the domain. If you read Fowler, you will get it very soon when you see it in your code.
It might just be anemic now? For instance, one time I was developing a meeting/conference registration site. It started with only one meeting. There was still a meeting class and only one instance, but the next year, when we held the conference again, it was expanded and new properties were added (to hold two back-to-back meetings), so clearly it was just not fully developed yet, as we then added meeting groups which could contain multiple meetings. So I think it's important to keep in mind that domains change over time and your model may end up being refactored, so even if you might think it's anemic, it might just be a little too forward-looking (like your organization class will start to get some settings, rules or preferences or something).
Dealing with an anemic domain model
[ "", "php", "domain-driven-design", "domain-model", "" ]
Is there a way to use PHP+PDO to query a MySQL database and find out the column widths of some VARCHAR fields? I don't want to hardcode them into my PHP file if possible. (Purpose: determine maximum lengths of strings to be either inserted into these columns, or compared against existing data in the columns.)
Read it from [INFORMATION\_SCHEMA COLUMNS](http://dev.mysql.com/doc/refman/5.1/en/columns-table.html)
Just shoot a query to the `information_schema`.`columns` table and filter out the row you need. The value you're looking for is stored under 'CHARACTER\_MAXIMUM\_LENGTH'.
PHP + PDO/MySQL: how to automatically query to find out VARCHAR field lengths?
[ "", "php", "mysql", "pdo", "" ]
I need a bit of help constructing a query that will let me filter the following data. ``` Table: MyTree Id ParentId Visible ===================== 1 null 0 2 1 1 3 2 1 4 3 1 5 null 1 6 5 1 ``` I expect the following result from the query: ``` Id ParentId Visible ===================== 5 null 1 6 5 1 ``` That is, all the children of the hidden node should not be returned. What's more, the depth of the hierarchy is not limited. Now don't answer "just set 2, 3 & 4 to visible=0" - for non-obvious reasons that is not possible... Like I'm fixing a horrible "legacy system". I was thinking of something like: ``` SELECT * FROM MyTree m1 JOIN MyTree m2 ON m1.ParentId = m2.Id WHERE m1.Visible = 1 AND (m1.ParentId IS NULL OR m2.Id IS NOT NULL) ``` *Sorry for any syntactical mistakes* But that will only filter the first level, right? Hope you can help. Edit: Finished up the title, whoops. The server is a brand spanking new MSSQL 2008 server but the database is running in 2000 compatibility mode.
I agree with @Quassnoi's focus on recursive CTEs (in SQL Server 2005 or later) but I think the logic is different to answer the original question: ``` WITH visall(id, parentid, visible) AS (SELECT id, parentid, visible FROM mytree WHERE parentid IS NULL UNION ALL SELECT m.id, m.parentid, m.visible & visall.visible AS visible FROM visall JOIN mytree m ON m.parentid = visall.id ) SELECT * FROM visall WHERE visall.visible = 1 ``` A probably more optimized way to express the same logic should be to have the visible checks in the WHERE as much as possible -- stop recursion along invisible "subtrees" ASAP. I.e.: ``` WITH visall(id, parentid, visible) AS (SELECT id, parentid, visible FROM mytree WHERE parentid IS NULL AND visible = 1 UNION ALL SELECT m.id, m.parentid, m.visible FROM visall JOIN mytree m ON m.parentid = visall.id WHERE m.visible = 1 ) SELECT * FROM visall ``` As usual with performance issues, benchmarking both versions on realistic data is necessary to decide with confidence (it also helps to check that they do indeed produce identical results;-) -- as DB engines' optimizers sometimes do strange things for strange reasons;-).
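To see what these CTEs compute, the top-down pruning (a hidden node hides its whole subtree) can also be sketched procedurally. This Python sketch only illustrates the logic; the set-based query above is what should actually run on the server:

```python
rows = [
    # (id, parent_id, visible) -- the sample data from the question
    (1, None, 0),
    (2, 1, 1),
    (3, 2, 1),
    (4, 3, 1),
    (5, None, 1),
    (6, 5, 1),
]

# Index children by parent id for the walk
children = {}
for node_id, parent_id, visible in rows:
    children.setdefault(parent_id, []).append((node_id, visible))

def visible_nodes(parent_id=None):
    """Return ids of nodes whose entire ancestor chain is visible."""
    result = []
    for node_id, visible in children.get(parent_id, []):
        if visible:  # a hidden node prunes its whole subtree
            result.append(node_id)
            result.extend(visible_nodes(node_id))
    return result

print(visible_nodes())  # [5, 6]
```

This mirrors the second, optimized CTE: recursion simply never descends past an invisible row.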
In `SQL Server 2005+`: ``` WITH q (id, parentid, visible) AS ( SELECT id, parentid, visible FROM mytree WHERE id = 5 UNION ALL SELECT m.id, m.parentid, m.visible FROM q JOIN mytree m ON m.parentid = q.id WHERE q.visible = 1 ) SELECT * FROM q ```
Filtering out children in a table with parentid
[ "", "sql", "sql-server", "t-sql", "" ]
I am reading an article ([JavaScript Closures for Dummies](http://blog.morrisjohns.com/javascript_closures_for_dummies)) and one of the examples is as follows. ``` function buildList(list) { var result = []; for (var i = 0; i < list.length; i++) { var item = 'item' + list[i]; result.push( function() {alert(item + ' ' + list[i])} ); } return result; } function testList() { var fnlist = buildList([1,2,3]); // using j only to help prevent confusion - could use i for (var j = 0; j < fnlist.length; j++) { fnlist[j](); } } testList(); ``` When testList is called, an alert box that says "item3 undefined". The article has this explanation: > When the anonymous functions are called on the line `fnlist[j]();` they all use the same single closure, and they use the current value for i and item within that one closure (where i has a value of 3 because the loop had completed, and item has a value of 'item3'). Why does item have a value of 'item3'? Doesn't the for loop end when i becomes 3? If it ends shouldn't item still be 'item2'? Or is the variable item created again when testList calls the functions?
You're close... > Why does item have a value of 'item3'? Doesn't the for loop end when i becomes 3? Yes. > If it ends shouldn't item still be > 'item2'? Nope. This example is a little tricky. During the last iteration of the loop, `i` is 2, but it references the 3rd element of the `list` array, which is 3. In other words, `item == 'item' + list[2] == 'item3'` > Or is the variable item created again when testList calls the functions? No, you were almost right the first time. I think you just missed that `list[2]` has the value of 3.
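This late-binding behavior is not unique to JavaScript; Python closures capture variables rather than values too, which makes it easy to reproduce. (One difference to note: Python's loop variable stops at its last value instead of stepping past the end, so there is no undefined lookup.)

```python
def build_list(items):
    result = []
    for i in range(len(items)):
        item = 'item' + str(items[i])
        # Each lambda closes over the *variables* item and i,
        # not their values at append time.
        result.append(lambda: (item, i))
    return result

fns = build_list([1, 2, 3])
print([f() for f in fns])  # [('item3', 2), ('item3', 2), ('item3', 2)]
```

All three closures share the single `item`/`i` pair left over from the completed loop, which is the same mechanism the quoted explanation describes.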
The for loop within buildList completes before you do the following: ``` for (var j = 0; j < fnlist.length; j++) { fnlist[j](); } ``` ... therefore, by that time (when you call each function), the variable `item` will be whatever was last assigned to it (i.e. "item3"), and `i` will be `3` (as a result of the last `i++` operation), and `list[3]` is `undefined`. It's all to do with the fact that the loop completes before you call the *closure'd* function. To prevent this, you could create a new closure, like so: ``` function buildList(list) { var result = []; for (var i = 0; i < list.length; i++) { var item = 'item' + list[i]; result.push( (function(item, i){ // Now we have our own "local" copies of `item` and `i` return function() { console.log(item + ' ' + list[i]) }; })(item, i) ); } return result; } ```
How are local variables referenced in closures?
[ "", "javascript", "loops", "closures", "" ]
I just learned about the AJAX Push Engine but it runs on Linux/Apache which is not an option for me. <http://www.ape-project.org/> Currently with AJAX to keep a page current I have to poll the server frequently which is not great for a high traffic site. The option to push data to the client only when necessary is a great option, but natively JavaScript does not support sockets, AFAIK. One trick I read about, but the site is now gone, is to use a Flash module to handle socket communications and relay message to JavaScript. The trouble with researching this approach is that "JavaScript push" as keywords come up with the push function for arrays instead of the context I want. How could establish a persistent connection with the server to do push communications in the browser? Do I need Flash/ActionScript or is there another option that would work with all of the currently active browsers? (IE6/7/8, FF3, Safari, Chrome) When it comes to the server I also need to work out the complications due to Origin policy as well as port security. I appreciate anything you can point out that will explain the available options.
What you want is [COMET](http://en.wikipedia.org/wiki/Comet_(programming)), or I would also look up [long polling](http://en.wikipedia.org/wiki/Push%5Ftechnology#Long%5Fpolling). [I asked a similar question.](https://stackoverflow.com/questions/944644/how-to-display-html-to-the-browser-incrementally-over-a-long-period-of-time)
This is interesting stuff, but I did not read anything about scalability issues on these Wiki pages. What does a web server do if you have 10,000 open long-polling connections? Also, for those not familiar with the underlying concepts, it is important to understand that pushing data from the server to the client in an ad-hoc fashion is impossible and will always be. Even if the HTTP protocol supported this, the network would not, particularly if there is a NAT firewall involved. So any solutions that claim to offer server push communication must rely on connections that are initiated by the client, kept open, and will eventually time out. I have concerns about this because it must have negative consequences for server scalability and performance.
How can you push data to a web page client?
[ "", "javascript", "ajax", "" ]
Pseudo code: ``` text = "I go to school"; word = "to"; if ( word.exist(text) ) { return true; } else { return false; } ``` I am looking for a PHP function which returns true if the word exists in the text.
You have a few options depending on your needs. For this simple example, `strpos()` is probably the simplest and most direct function to use. If you need to do something with the result, you may prefer `strstr()` or `preg_match()`. If you need to use a complex pattern instead of a string as your needle, you'll want `preg_match()`. ``` $needle = "to"; $haystack = "I go to school"; ``` [strpos() and stripos()](http://us.php.net/strpos) method (stripos() is case insensitive): ``` if (strpos($haystack, $needle) !== false) echo "Found!"; ``` [strstr() and stristr() method](http://us.php.net/strstr) (stristr is case insensitive): ``` if (strstr($haystack, $needle)) echo "Found!"; ``` [preg\_match method](http://us.php.net/preg_match) (regular expressions, much more flexible but runs slower): ``` if (preg_match("/to/", $haystack)) echo "Found!"; ``` Because you asked for a complete function, this is how you'd put that together (with default values for needle and haystack): ``` function match_my_string($needle = 'to', $haystack = 'I go to school') { if (strpos($haystack, $needle) !== false) return true; else return false; } ``` PHP 8.0.0 now contains a str\_contains function that works like so: ``` if (str_contains($haystack, $needle)) { echo "Found"; } ```
``` function hasWord($word, $txt) { $patt = "/(?:^|[^a-zA-Z])" . preg_quote($word, '/') . "(?:$|[^a-zA-Z])/i"; return preg_match($patt, $txt); } ``` If $word is "to", this will match: * "Listen to Me" * "To the moon" * "up-to-the-minute" but not: * "Together" * "Into space"
How can I check if a word is contained in another string using PHP?
[ "", "php", "string", "" ]
Is there a way to exclude code from inclusion into Cobertura coverage reports? We have some methods that should not be included in the coverage report and therefore not drive down the coverage numbers. I know that Clover has such a functionality, but I have not found anything similar for Cobertura.
You can exclude classes from instrumentation. Then they should not appear on reports. See *exclude* statements below. You can also ignore calls to some methods. See *ignore* statement below. If you are using maven, see [maven plugin manual](http://www.mojohaus.org/cobertura-maven-plugin/usage.html). ``` <configuration> <instrumentation> <ignores> <ignore>com.example.boringcode.*</ignore> </ignores> <excludes> <exclude>com/example/dullcode/**/*.class</exclude> <exclude>com/example/**/*Test.class</exclude> </excludes> </instrumentation> </configuration> ``` And for ant see [this](http://cobertura.sourceforge.net/anttaskreference.html). ``` <cobertura-instrument todir="${instrumented.dir}"> <ignore regex="org.apache.log4j.*" /> <fileset dir="${classes.dir}"> <include name="**/*.class" /> <exclude name="**/*Test.class" /> </fileset> <fileset dir="${jars.dir}"> <include name="my-simple-plugin.jar" /> </fileset> </cobertura-instrument> ```
This has been breaking my head for some time now. My problem was that I had the cobertura maven plugin set up in the reporting section instead of the build section. The instrumentation settings, and hence the excluding of classes or packages, won't be applied if you don't set it up in the build section, so watch out for this.
Exclude methods from code coverage with Cobertura
[ "", "java", "code-coverage", "cobertura", "" ]
I need to do a find and replace (specific to one column of URLs) in a huge Excel .csv file. Since I'm in the beginning stages of trying to teach myself a scripting language, I figured I'd try to implement the solution in python. I'm having trouble with the "replace" part of the solution. I've read the [official csv module documentation](http://docs.python.org/library/csv.html) about how to use the writer, but there isn't really a clear enough example for me (yes, I'm slow). So, now for the question: how does one iterate through the rows of a csv file with a writer object? p.s. apologies in advance for the clumsy code, I'm still learning :) ``` import csv csvfile = open("PALTemplateData.csv") csvout = open("PALTemplateDataOUT.csv") dialect = csv.Sniffer().sniff(csvfile.read(1024)) csvfile.seek(0) reader = csv.reader(csvfile, dialect) writer = csv.writer(csvout, dialect) total=0; needchange=0; changed = 0; temp = '' changeList = [] for row in reader: total=total+1 temp = row[len(row)-1] if '/?' in temp: needchange=needchange+1; changeList.append(row.index) for row in writer: #this doesn't compile, hence the question if row.index in changeList: changed=changed+1 temp = row[len(row)-1] temp.replace('/?', '?') row[len(row)-1] = temp writer.writerow(row) print('Total URLs:', total) print('Total URLs to change:', needchange) print('Total URLs changed:', changed) ```
The reason you're getting an error is that the writer doesn't have data to iterate over. You're supposed to give it the data - presumably, you'd have some sort of list or generator that produces the rows to write out. I'd suggest just combining the two loops, like so: ``` for row in reader: row[-1] = row[-1].replace('/?', '?') writer.writerow(row) ``` And with that, you don't even need `total`, `needchange`, and `changeList`. (There are a bunch of optimizations in there that I unfortunately don't have time to explain, but I'll see if I can edit that info in later)
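The combined loop, fleshed out into a self-contained sketch. In-memory streams stand in for the question's files so it can be run as-is; in the real script you would pass the open `csvfile` and `csvout` handles instead:

```python
import csv
import io

def fix_urls(src, dst):
    """Copy CSV rows from src to dst, rewriting '/?' to '?' in the last column."""
    reader = csv.reader(src)
    writer = csv.writer(dst)
    changed = 0
    for row in reader:
        if '/?' in row[-1]:
            row[-1] = row[-1].replace('/?', '?')
            changed += 1
        writer.writerow(row)
    return changed

# In-memory stand-ins for the question's PALTemplateData files
src = io.StringIO('id,url\r\n1,http://example.com/?a=1\r\n2,http://example.com/b\r\n')
dst = io.StringIO()
n = fix_urls(src, dst)
print(n)  # 1
```

Note that `str.replace` returns a new string, so the result has to be assigned back into the row before writing it out.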
You should only have one loop and read and write at the same time - if your replacements only affect one line at a time, you don't need to loop over the data twice. ``` for row in reader: total = total + 1 temp = row[len(row)-1] if '/?' in temp: temp = temp.replace('/?', '?') row[len(row)-1] = temp writer.writerow(row) ``` This is just to illustrate the loop; note that `str.replace` returns a new string, so its result has to be assigned back.
How to replace a column using Python's built-in .csv writer module?
[ "", "python", "file-io", "csv", "" ]
I noticed the other day that I can call boolean.class, but not integer.class (or on other primitives). What makes boolean so special? Note: I'm talking about boolean.class, not Boolean.class (which would make sense). Duh: I tried integer.class, not int.class. Don't I feel dumb :\
Not `integer.class` but `int.class`. Yes you can. JRE 6 : ``` public class TestTypeDotClass{ public static void main(String[] args) { System.out.println(boolean.class.getCanonicalName()); System.out.println(int.class.getCanonicalName()); System.out.println(float.class.getCanonicalName()); System.out.println(Boolean.class.getCanonicalName()); } } ``` outputs ``` boolean int float java.lang.Boolean ```
You can do `int.class`. It gives the same as `Integer.TYPE`. `int.class.isPrimitive()`, `boolean.class.isPrimitive()`, `void.class.isPrimitive()`, etc., will give a value of `true`. `Integer.class.isPrimitive()`, `Boolean.class.isPrimitive()`, etc., will give a value of `false`.
boolean.class?
[ "", "java", "" ]
I have two lists A and B (List). How can I determine whether they are equal in the cheapest way? I could write something like '(A minus B) union (B minus A) = empty set' or join them together and count the number of elements, but that is rather expensive. Is there a workaround?
Well, that depends on how you interpret your lists. If you consider them as tuples (so the order of elements in lists matters), then you can go with this code: ``` public bool AreEqual<T>(IList<T> A, IList<T> B) { if (A.Count != B.Count) return false; for (int i = 0; i < A.Count; i++) if (!A[i].Equals(B[i])) return false; return true; } ``` If you consider your lists as sets (so the order of elements doesn't matter), then... you are using the wrong data structures I guess: ``` public bool AreEqual<T>(IList<T> A, IList<T> B) { HashSet<T> setA = new HashSet<T>(A); return setA.SetEquals(B); } ```
If the ordering of the list items is relevant: ``` bool areEqual = a.SequenceEqual(b); ``` If the lists are to be treated as unordered sets: ``` // assumes that the list items are ints bool areEqual = new HashSet<int>(a).SetEquals(b); ``` (The [`SequenceEqual`](http://msdn.microsoft.com/en-us/library/vstudio/system.linq.enumerable.sequenceequal.aspx) method and the [`HashSet<T>`](http://msdn.microsoft.com/en-us/library/bb359438.aspx) constructor both have overloads that take an [`IEqualityComparer<T>`](http://msdn.microsoft.com/en-us/library/ms132151.aspx) parameter, if you need that functionality.)
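The ordered-versus-unordered distinction is language-independent; a tiny Python analogue makes it concrete (illustrative only, since the question itself is about C#):

```python
a = [1, 2, 3]
b = [3, 2, 1]

# Sequence-style comparison: order matters (like SequenceEqual)
ordered_equal = a == b              # False

# Set-style comparison: order and duplicates ignored (like SetEquals)
unordered_equal = set(a) == set(b)  # True

print(ordered_equal, unordered_equal)
```

As with `HashSet<T>.SetEquals`, the set comparison also collapses duplicates, which may or may not be what you want.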
set equality in linq
[ "", "c#", "linq", "set", "equality", "" ]
I'm trying to iterate over an unknown number of query values in C#... and can't find anything unrelated to LINQ, which I can't use. Anyone have any ideas?
If this question is about getting a querystring in ASP.NET, I think the link you are searching for is: <http://msdn.microsoft.com/en-us/library/system.web.httprequest.querystring.aspx> Essentially, `Request.QueryString` gives you a collection that you can then iterate over.
Using the Request.QueryString gives you a collection that you can iterate over. Using Request.QueryString.AllKeys allows you to iterate over a collection of strings that represent all of the keys in the query string. Using this we can come up with something like the below code in order to iterate over all keys and get their values. ``` foreach (string key in Request.QueryString.AllKeys) { Response.Write("Key: " + key + " Value: " + Request.QueryString[key]); } ``` Hope this helped.
Iterate over all Query Values in C#
[ "", "c#", ".net", "asp.net", "" ]
I have a page with a repeater in it. I'm writing an event handler so that when the user clicks my WebControl button, the event handler for said button iterates through the items in the repeater using FindControl, then uses some of the controls' values. It seems, though, that after the page is loaded, the repeater items populate, but when the button is clicked to post this back, as I iterate through the repeater items, I'm seeing that they're all empty. I don't completely understand the sequencing, but I'm assuming it's because my iteration code is trying to access RepeaterItems that haven't been set yet. The repeater code is in my OnLoad method. Outside of that, I have my event handler trying to iterate through those items after being clicked. This is essentially what I was trying to do: ``` protected void MyButton_Click(object sender, EventArgs e) { foreach(RepeaterItem item in MyRepeater.Items) { MyLabel = (Label)item.FindControl("MyLabel"); } } ``` The button is located in the FooterTemplate of the repeater. ``` <asp:Button runat="server" OnClick="SubmitChecklist_Click" cssclass="BlueSubmit" id="SubmitChecklist" text="Submit" /> ``` Thanks in advance. Edit: To clarify, the exact error I'm getting is NullReferenceException, when I try to do something, for instance, Response.Write(MyLabel.Text) Edit: After looking into it more today, this is what I understand to be happening: The repeater is databound on postback. When I then make selections from the generated dropdownlists and hit my button, it posts back again. At this point, the repeater is databound again to its initial values. So, if I must postback in order to get the users' selections, how can I go about this in the button's event handler so that I can get the selected values before that repeater gets databound again?
Instead of relying on the IsPostBack check in my OnLoad, I separated the different states by moving the repeater's databinding into an event handler that fires after the user selects the first option. It was a bit convoluted, but I think I'm doing it the right way this time.
The problem, it sounds like, is that you may be binding the data to your repeater on load, but not first checking to make sure it isn't a postback. Example: 1. You request the page. OnLoad fires. You bind the data to the repeater. 2. You manipulate the data in the repeater, then click your button. 3. The page refreshes with the postback, firing the OnLoad event. The data is rebound to your repeater and all previously entered data has been nullified. 4. The OnClick event is triggered and your code tries to retrieve values that no longer exist. Make sure the databinding code in your OnLoad event is nested within a postback check ``` if (!Page.IsPostBack) { Repeater.DataSource = Datatable; Repeater.DataBind(); } ```
Accessing RepeaterItem Controls in an EventHandler
[ "", "c#", "asp.net", "" ]
I have the following menu in my masterpage: ``` <ul id="menu" class="lavaLampBottomStyle"> <li> <%= Html.ActionLink("Employees", "Index", "Employees")%></li> <li> <%= Html.ActionLink("Customer", "Details", "Account")%></li> </ul> ``` **I need a way to set the css class of the current active li to "current".** My first guess is to do this with the assistance of javascript. I would include something like this in the masterpage: ``` $("#menu li a").each(function(){ if($(this).attr("href") == '<%= *GET CURRENT PAGE* %>'){ $(this).parent("li").addClass("current"); } }); ``` **Is this a good approach?** If it is, how can I get the current URL part like in the href? If it isn't, what's your suggestion? :-) FYI, the generated html I'm after: ``` <ul id="menu" class="lavaLampBottomStyle"> <li> <a href="/KszEmployees/Index">Employees</a></li> <li> <a class="current" href="/">Customer</a></li> </ul> ```
That's probably the least intensive way of doing it. If you can count on the users to have javascript enabled, I see nothing wrong with this, and have done it myself on occasion. Request.Url is the object you are interested in to get the current page on the server side. The suggestion to use window.location.href by tvanfosson isn't bad either if you want to keep it entirely clientside. The advantage of using serverside, is that Request.Url has easily accessible parts of the url, such as Request.Url.Host, etc to help with your link-munging needs.
If you want to do it all server-side, I've done this before. Create an action filter attribute: ``` public class PageOptionsAttribute : ActionFilterAttribute { public string Title { get; set; } public string Section { get; set; } public override void OnActionExecuting(ActionExecutingContext filterContext) { var controller = filterContext.Controller as ControllerBase; if (controller != null) { controller.SetPageSection(this.Section); controller.SetPageTitle(this.Title); } base.OnActionExecuting(filterContext); } } ``` This calls two methods in my ControllerBase class that all my controllers inherit from: ``` public class ControllerBase : Controller { public void SetPageSection(string section) { // use the section defined or the controller name if none ViewData["PageSection"] = section != null ? section : this.RouteData.Values["controller"].ToString(); } public void SetPageTitle(string title) { ViewData["PageTitle"] = title; } } ``` Set the title and page section on your controller methods: ``` public class HomeController : ControllerBase { [PageOptions(Title="Home Page", Section="Home")] public ActionResult Index() { } } ``` Then I call the ViewData value from my master page (this won't interfere with ViewData.Model): ``` <body class="<%=ViewData["PageSection"] %>"> ``` Then to reference via CSS, instead of calling .current, give each nav item an ID and then use the body class in combination with that ID to determine the current page. ``` body.home #HomeNav { /* selected */ } body.about #AboutNav { /* selected */ } ```
ASP.NET MVC: Masterpage: How to set css class on active menu item
[ "", "javascript", "asp.net-mvc", "" ]
My application allocates a large amount of memory (millions of small objects totaling several gigabytes) and holds onto it for a long time. 1. Is .NET wasting time checking through all of this data to do GC on it? 2. How often does the Gen 2 GC occur (the one that checks all objects)? 3. Is there any way to reduce its frequency or temporarily suppress it from occurring? 4. I know exactly when I am ready for a large amount of memory to be collected, is there any way to optimize for that? I am currently calling GC.Collect(); GC.WaitForPendingFinalizers(); at that time. Update: Perf counter "% Time in GC" is showing an average of 10.6%.
Unless you can confirm that the garbage collector is actively slowing the performance of your application, you should not take steps to cripple the functionality of your runtime environment. Judging from your question, you have not confirmed that the GC is a problem. I severely doubt that it is. Optimize only what needs to be optimized.
You can stop the garbage collector from finalizing any of your objects using the static method: ``` GC.SuppressFinalize(*your object*) ``` More information in the [GC.SuppressFinalize documentation](http://msdn.microsoft.com/en-us/library/system.gc.suppressfinalize.aspx)
Suppressing C# garbage collection
[ "", "c#", "garbage-collection", "" ]
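The "collect exactly when I'm ready" idea from question 4 of the record above exists in other managed runtimes as well. As a cross-runtime illustration only (this is CPython's gc module, not the .NET API, and it controls only the cyclic collector; plain refcounted objects are still freed immediately):

```python
import gc

gc.disable()                      # suppress automatic collection for a critical section
cycles = []
for _ in range(1000):
    a, b = [], []
    a.append(b)
    b.append(a)                   # a <-> b form a cycle only the collector can reclaim
    cycles.append(a)

del cycles                        # the cycles are now garbage, but not yet freed
collected = gc.collect()          # explicit collection at a point we choose
gc.enable()
```

The shape is the same as the .NET discussion: defer collection during the allocation-heavy phase, then trigger one explicit pass at a moment you control.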
The code below is part of a function for grabbing 5 image files from a given directory. At the moment readdir returns the images 'in the order in which they are stored by the filesystem' as per the [spec](https://www.php.net/manual/en/function.readdir.php). My question is, how can I modify it to get the latest 5 images? Either based on the `last_modified` date or the filename (which look like 0000009-16-5-2009.png, 0000012-17-5-2009.png, etc.). ``` if ( $handle = opendir($absolute_dir) ) { $i = 0; $image_array = array(); while ( count($image_array) < 5 && ( ($file = readdir($handle)) !== false) ) { if ( $file != "." && $file != ".." && $file != ".svn" && $file != 'img' ) { $image_array[$i]['url'] = $relative_dir . $file; $image_array[$i]['last_modified'] = date ("F d Y H:i:s", filemtime($absolute_dir . '/' . $file)); } $i++; } closedir($handle); } ```
If you want to do this entirely in PHP, you must find all the files and their last modification times: ``` $images = array(); foreach (scandir($folder) as $node) { $nodePath = $folder . DIRECTORY_SEPARATOR . $node; if (is_dir($nodePath)) continue; $images[$nodePath] = filemtime($nodePath); } arsort($images); $newest = array_slice($images, 0, 5); ```
If you are really only interested in pictures you could use [glob()](http://www.php.net/manual/en/function.glob.php) instead of soulmerge's scandir: ``` $images = array(); foreach (glob("*.{png,jpg,jpeg}", GLOB_BRACE) as $filename) { $images[$filename] = filemtime($filename); } arsort($images); $newest = array_slice($images, 0, 5); ```
How to get X newest files from a directory in PHP?
[ "", "php", "file", "" ]
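The accepted idea in the record above (stat everything, sort by mtime, slice the first five) can be sketched end-to-end. Python shown for a runnable illustration; `newest_files` is a hypothetical helper, and the timestamps are forced with os.utime so the ordering is reproducible:

```python
import os
import tempfile

def newest_files(folder, count=5):
    # collect regular files, then sort by modification time, newest first
    entries = [e for e in os.scandir(folder) if e.is_file()]
    entries.sort(key=lambda e: e.stat().st_mtime, reverse=True)
    return [e.name for e in entries[:count]]

# demo directory with seven files whose mtimes increase with their index
folder = tempfile.mkdtemp()
for i in range(7):
    path = os.path.join(folder, "img%d.png" % i)
    open(path, "wb").close()
    os.utime(path, (1000 + i, 1000 + i))   # (atime, mtime)

latest = newest_files(folder, 5)
```

This is the same `arsort` + `array_slice` logic as the PHP answers, with the sort key read once per file instead of re-statting during comparison.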
I have this code ``` $(".insert").click(function(){ $(".insert").ajaxStop(function(){ $(".load").hide(); }); $(".insert").ajaxStart(function(){ $(".load").show(); }); $.ajax({ type: "GET", url: "edit.php", data: "action=add", success: function(msg){ $(".control").append(msg); } }); }); ``` As you can see, this code appends the HTML response of edit.php to .control. The problem is that after appending the HTML, none of the jQuery behaviors apply to it, because $(document).ready() was already called before this HTML code existed. Can I call $(document).ready() again whenever I make changes?
Take a look at [jQuery live](http://docs.jquery.com/Events/live). It is meant to bind events automatically for new elements. It works for click, dblclick, mousedown, mouseup, mousemove, mouseover, mouseout, keydown, keypress, and keyup.
If you could elaborate on what you are doing in your document.ready function, I could perhaps give more specific help. You might find what you need in the [`live()`](http://docs.jquery.com/Events/live) function, which simulates applying events to objects even if they were added to the DOM after calling `live()`. To answer your question though, yes you can invoke the event handler just by doing this: ``` $(document).ready(); ```
Can I call the function Ready() again in jQuery
[ "", "javascript", "jquery", "" ]
I have a class reflecting my dbml file which extends DataContext, but for some strange reason it's telling me > System.Data.Linq.DataContext' does not contain a constructor that takes '0' arguments" I've followed various tutorials on this and haven't encountered this problem, and VS doesn't seem to able to fix it. Here's my implementation ``` using System; using System.Collections.Generic; using System.Data; using System.Data.Linq; using System.Data.Linq.Mapping; using System.Reflection; using System.Text; using IntranetMvcAreas.Areas.Accounts.Models; namespace IntranetMvcAreas { partial class ContractsControlDataContext : DataContext { [FunctionAttribute(Name="dbo.procCC_Contract_Select")] [ResultType(typeof(Contract))] [ResultType(typeof(ContractCostCentre))] [ResultType(typeof(tblCC_Contract_Data_Terminal))] [ResultType(typeof(tblCC_CDT_Data_Service))] [ResultType(typeof(tblCC_Data_Service))] public IMultipleResults procCC_Contract_Select( [Parameter(Name = "ContractID", DbType = "Int")] System.Nullable<int> ContractID, [Parameter(Name = "ResponsibilityKey", DbType = "Int")] System.Nullable<int> ResponsibilityKey, [Parameter(Name = "ExpenseType", DbType = "Char")] System.Nullable<char> ExpenseType, [Parameter(Name = "SupplierID", DbType = "Int")] System.Nullable<int> SupplierID) { IExecuteResult result = this.ExecuteMethodCall(this, (MethodInfo)(MethodInfo.GetCurrentMethod()), ContractID, ResponsibilityKey, ExpenseType, SupplierID); return (IMultipleResults)result.ReturnValue; } } } ``` And it's `ContractsControlDataContext` that's pointed at as the problem (btw, this has no relation to a very recent post I made, it's just I'm working on the same thing) **EDIT** It's probably worth clarifying this, so please read very carefully. If you *do not* extend DataContext in the partial class, then `ExecuteMethodCall` isn't accessible. 
> 'Intranet.ContractsControlDataContext' does not contain a definition for 'ExecuteMethodCall' and no extension method 'ExecuteMethodCall' accepting a first argument of type 'Intranet.ContractsControlDataContext' could be found (are you missing a using directive or an assembly reference?) Maybe I'm missing something incredibly stupid? **SOLVED** I think perhaps Visual Studio struggled here, but I've relied entirely on auto-generated code. When right-clicking on the database modeling language design view and hitting "View Code" it automagically creates a partial class for you within a specific namespace, *however*, this namespace was wrong. If someone could clarify this for me I would be most appreciative. The .designer.cs file sits in `namespace Intranet.Areas.Accounts.Models`, however the .cs file (partial class generated **for** the .designer.cs file **by** Visual Studio) was in `namespace Intranet`. Easy to spot for someone more experienced in this area than me. The real problem now is, whose answer do I mark as correct? Because many of you contributed to finding this issue.
The object DataContext for LINQ does not have an empty constructor. Since it does not have an empty constructor, you must pass one of the items it is expecting to the base. From the metadata for the DataContext. ``` // Summary: // Initializes a new instance of the System.Data.Linq.DataContext class by referencing // the connection used by the .NET Framework. // // Parameters: // connection: // The connection used by the .NET Framework. public DataContext(IDbConnection connection); // // Summary: // Initializes a new instance of the System.Data.Linq.DataContext class by referencing // a file source. // // Parameters: // fileOrServerOrConnection: // This argument can be any one of the following: The name of a file where a // SQL Server Express database resides. The name of a server where a database // is present. In this case the provider uses the default database for a user. // A complete connection string. LINQ to SQL just passes the string to the // provider without modification. public DataContext(string fileOrServerOrConnection); // // Summary: // Initializes a new instance of the System.Data.Linq.DataContext class by referencing // a connection and a mapping source. // // Parameters: // connection: // The connection used by the .NET Framework. // // mapping: // The System.Data.Linq.Mapping.MappingSource. public DataContext(IDbConnection connection, MappingSource mapping); // // Summary: // Initializes a new instance of the System.Data.Linq.DataContext class by referencing // a file source and a mapping source. // // Parameters: // fileOrServerOrConnection: // This argument can be any one of the following: The name of a file where a // SQL Server Express database resides. The name of a server where a database // is present. In this case the provider uses the default database for a user. // A complete connection string. LINQ to SQL just passes the string to the // provider without modification. // // mapping: // The System.Data.Linq.Mapping.MappingSource. 
public DataContext(string fileOrServerOrConnection, MappingSource mapping); ``` Something as simple as this would work. Any class that inherits from the DataContext must pass to the base constructor at least one of the types it is expecting. ``` public class SomeClass : System.Data.Linq.DataContext { public SomeClass(string connectionString) :base(connectionString) { } } ```
I'm *assuming* that the namespace and (data-context) type name are correct... double check that first. It sounds to me like the codegen has failed, and so you only have **your** half of the data-context (not the half that the IDE is meant to provide). There is a known bug in LINQ-to-SQL where this can fail if (as in your case) the `using` declarations are above the namespace. No, I am not joking. Try changing the code: ``` namespace IntranetMvcAreas { using System; using System.Collections.Generic; using System.Data; using System.Data.Linq; using System.Data.Linq.Mapping; using System.Reflection; using System.Text; using IntranetMvcAreas.Areas.Accounts.Models; // the rest of your code ``` Now go into the designer, tweak something (for example, change the name of a property and change it back again) and hit save (this forces the codegen). Now see if it works.
Extending System.Data.Linq.DataContext
[ "", "c#", "asp.net-mvc", "linq", "linq-to-sql", "" ]
I am getting the username from the URL so blah.com/kevinuk. I want some content on the page to say KevinUK, which is what's stored in the membership table, but when I do the following, it returns the same casing as the input. ``` MembershipUser member = Membership.GetUser(user); string userName = member.UserName; ``` How do I use a lowercase username as the parameter and return the value from the database with the correct casing?
It's not clear which Membership provider you are using, but you can easily descend from that and override the GetUser method. Create a class that inherits from the MembershipProvider class. ``` public class MyMembershipProvider : MembershipProvider { public MyMembershipProvider() { // // TODO: Add constructor logic here // } } ``` Override the GetUser method. ``` public override MembershipUser GetUser(string username, bool userIsOnline) { ... Logic here to do a case insensitive lookup... } ``` Finally, update the web config to use your new provider: ``` <system.web> <membership defaultProvider="MyMembershipProvider" userIsOnlineTimeWindow="10"> <providers> <add name="MyMembershipProvider" type="Providers.MyMembershipProvider" connectionStringName="ADConnectionString" ... /> </providers> </membership> </system.web> ``` Some examples: <http://www.eggheadcafe.com/tutorials/aspnet/30c3a27d-89ff-4f87-9762-37431805ef81/aspnet-custom-membership.aspx> <http://msdn.microsoft.com/en-us/library/ms366730(VS.80).aspx>
Unfortunately, the `Membership.GetUser(string username)` method simply sets the `MembershipUser`'s username to the value of the passed parameter. To get the correct casing, you'll either need to use `Membership.GetUser(object providerUserKey)`, which takes the user's GUID, or override the former method and its respective stored procedure to return the properly-cased username. Or, you could simply make back-to-back calls to the two different `GetUser()` methods, but that's quite wasteful.
Membership.GetUser(username) - how to return correct casing?
[ "", "c#", "asp.net-membership", "" ]
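Stripped of the Membership API, the underlying task in the record above is "look a name up case-insensitively, but echo back the stored casing". A minimal sketch of that idea (illustration only; this is not the ASP.NET provider, and the class name is invented):

```python
class CasePreservingLookup:
    """Find a stored name regardless of casing, but return its canonical form."""

    def __init__(self, names):
        # index the canonical names by their lower-cased form
        self._by_lower = {name.lower(): name for name in names}

    def canonical(self, name):
        return self._by_lower.get(name.lower())

users = CasePreservingLookup(["KevinUK", "JaneDoe"])
found = users.canonical("kevinuk")   # looked up in lowercase
```

A custom provider's GetUser override would do the equivalent with a case-insensitive database lookup that selects the stored UserName column.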
I have the following tables ``` nid timestamp title 82 1245157883 Home 61 1245100302 Minutes 132 1245097268 Sample Form 95 1245096985 Goals & Objectives 99 1245096952 Members ``` AND ``` pid src dst language 70 node/82 department/34-section-2 45 node/61/feed department/22-section-2/feed 26 node/15 department/department1/15-department1 303 node/101 department/101-section-4 ``` These are fragments of the tables, and are missing the rest of the data (they are both quite large), but I am trying to join the dst column from the second table into the first one. They should match up on their "nid", but the second table has node/[nid] which makes this more complicated. I also want to ignore the ones that end in "feed" since they are not needed for what I am doing. Much thanks EDIT: I feel bad for not mentioning this, but the first table is an SQL result from ``` select nid, MAX(timestamp) as timestamp, title from node_revisions group by nid ORDER BY timestamp DESC LIMIT 0,5 ``` The second table has the name "url_alias"
try ``` select * from table1 inner join table2 on src=concat('node/',nid) ``` **Edit** edited to reflect change in OP ``` select `nid`, MAX(`timestamp`) as `timestamp`, `title` from `node_revisions` inner join `url_alias` on `src`=concat('node/',`nid`) group by `nid` ORDER BY `timestamp` DESC LIMIT 0,5 ```
I don't know what database you are using. However, I suggest you write a parsing function that returns the nid from that column. Then, you can have this kind of query (assuming GET_NID is the function you defined): ``` SELECT * from T1, T2 WHERE T1.nid = GET_NID( T2.src) ```
sql join question
[ "", "sql", "" ]
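The accepted `src = concat('node/', nid)` join above is easy to verify against a toy copy of the two tables. The sketch below uses SQLite's `||` concatenation operator in place of MySQL's `concat()`; the semantics are the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node_revisions (nid INTEGER, title TEXT);
    CREATE TABLE url_alias (src TEXT, dst TEXT);
    INSERT INTO node_revisions VALUES (82, 'Home'), (61, 'Minutes');
    INSERT INTO url_alias VALUES
        ('node/82', 'department/34-section-2'),
        ('node/61/feed', 'department/22-section-2/feed');
""")

# join on the reconstructed key; the '/feed' aliases fail the
# equality test, so they drop out without an extra filter
rows = conn.execute("""
    SELECT nid, title, dst
    FROM node_revisions
    JOIN url_alias ON src = 'node/' || nid
""").fetchall()
```

Note the side effect called out in the question: because `node/61/feed` is not equal to `node/61`, the feed rows are excluded by the join condition itself.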
Are java primitive integers (int) atomic at all, for that matter? Some experimentation with two threads sharing an int seems to indicate that they *are*, but of course absence of evidence that they are *not* does not imply that they are. Specifically, the test I ran was this: ``` public class IntSafeChecker { static int thing; static boolean keepWatching = true; // Watcher just looks for monotonically increasing values static class Watcher extends Thread { public void run() { boolean hasBefore = false; int thingBefore = 0; while( keepWatching ) { // observe the shared int int thingNow = thing; // fake the 1st value to keep test happy if( hasBefore == false ) { thingBefore = thingNow; hasBefore = true; } // check for decreases (due to partially written values) if( thingNow < thingBefore ) { System.err.println("MAJOR TROUBLE!"); } thingBefore = thingNow; } } } // Modifier just counts the shared int up to 1 billion static class Modifier extends Thread { public void run() { int what = 0; for(int i = 0; i < 1000000000; ++i) { what += 1; thing = what; } // kill the watcher when done keepWatching = false; } } public static void main(String[] args) { Modifier m = new Modifier(); Watcher w = new Watcher(); m.start(); w.start(); } } ``` (and that was only tried with Java JRE 1.6.0_07 on a 32-bit Windows PC) Essentially, the Modifier writes a count sequence to the shared integer, while the Watcher checks that the observed values never decrease. On a machine where a 32 bit value had to be accessed as four separate bytes (or even two 16bit words), there would be a probability that Watcher would catch the shared integer in an inconsistent, half-updated state, and detect the value decreasing rather than increasing. This should work whether the (hypothetical) data bytes are collected/written LSB 1st or MSB 1st, but is only probabilistic at best.
It would seem very probable given today's wide data paths that a 32 bit value could be effectively atomic, even if the java spec doesn't require it. In fact, with a 32 bit data bus it would seem that you might have to work harder to get atomic access to *bytes* than to 32 bit ints. Googling on "java primitive thread safety" turns up loads of stuff on thread-safe classes and objects, but looking for the info on the primitives seems to be looking for the proverbial needle in a haystack.
All memory accesses in Java are atomic by default, with the exception of `long` and `double` (which *may* be atomic, but don't have to be). It's not put *very* clearly to be honest, but I believe that's the implication. From [section 17.4.3](http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.4.3) of the JLS: > Within a sequentially consistent > execution, there is a total order over > all individual actions (such as reads > and writes) which is consistent with > the order of the program, and each > individual action is atomic and is > immediately visible to every thread. and then in [17.7](http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.7): > Some implementations may find it > convenient to divide a single write > action on a 64-bit long or double > value into two write actions on > adjacent 32 bit values. For > efficiency's sake, this behavior is > implementation specific; Java virtual > machines are free to perform writes to > long and double values atomically or > in two parts. Note that atomicity is very different to volatility though. When one thread updates an integer to 5, it's guaranteed that another thread won't see 1 or 4 or any other in-between state, but without any explicit volatility or locking, the other thread could see 0 forever. With regard to working hard to get atomic access to bytes, you're right: the VM may well have to try hard... but it does have to. From [section 17.6](http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.6) of the spec: > Some processors do not provide the > ability to write to a single byte. It > would be illegal to implement byte > array updates on such a processor by > simply reading an entire word, > updating the appropriate byte, and > then writing the entire word back to > memory. This problem is sometimes > known as word tearing, and on > processors that cannot easily update a > single byte in isolation some other > approach will be required. 
In other words, it's up to the JVM to get it right.
* No amount of testing can prove thread safety - it can only *disprove* it; * I found an indirect reference in [JLS 17.7](http://docs.oracle.com/javase/specs/jls/se5.0/html/memory.html#17.7) which states > Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32 bit values. and further down > For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This seems to imply that writes to ints are atomic.
Are java primitive ints atomic by design or by accident?
[ "", "java", "multithreading", "" ]
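JLS 17.7, quoted in the answers above, allows a 64-bit write to be split into two 32-bit halves; the resulting "torn read" can be modeled deterministically. This is a Python sketch of the failure mode, not Java and not real hardware behavior, but it shows exactly why the Watcher in the question could see a value that was never written:

```python
MASK32 = 0xFFFFFFFF

class TornWord:
    """A 64-bit variable whose writes happen as two separate 32-bit stores."""

    def __init__(self, value=0):
        self.low = value & MASK32
        self.high = (value >> 32) & MASK32

    def store_low(self, value):
        self.low = value & MASK32

    def store_high(self, value):
        self.high = (value >> 32) & MASK32

    def read(self):
        return (self.high << 32) | self.low

old, new = (1 << 32) - 1, (1 << 32)   # 0x00000000FFFFFFFF and 0x0000000100000000
word = TornWord(old)

word.store_low(new)                   # writer is interrupted between the two halves
torn = word.read()                    # reader sees 0: neither old nor new
word.store_high(new)                  # writer finishes
final = word.read()
```

The crossing from 2^32 - 1 to 2^32 flips every bit in the low half and one bit in the high half, which is why it is the classic worst case for word tearing.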
I would like to detect whether the user has pressed `Enter` using jQuery. How is this possible? Does it require a plugin? It looks like I need to use the [`keypress()`](http://docs.jquery.com/Events/keypress) method. Are there browser issues with that command - like are there any browser compatibility issues I should know about?
The whole point of jQuery is that you don't have to worry about browser differences. I am pretty sure you can safely go with `enter` being 13 in all browsers. So with that in mind, you can do this: ``` $(document).on('keypress',function(e) { if(e.which == 13) { alert('You pressed enter!'); } }); ```
I wrote a small plugin to make it easier to bind the "on enter key pressed" event: ``` $.fn.enterKey = function (fnc) { return this.each(function () { $(this).keypress(function (ev) { var keycode = (ev.keyCode ? ev.keyCode : ev.which); if (keycode == '13') { fnc.call(this, ev); } }) }) } ``` Usage: ``` $("#input").enterKey(function () { alert('Enter!'); }) ```
How can I detect pressing Enter on the keyboard using jQuery?
[ "", "javascript", "jquery", "keyboard-events", "enter", "jquery-events", "" ]
I have multiple (between 40 and 50) MP3 files that I'd like to concatenate into one file. What's the best way to do this in Python? Use [`fileinput`](http://docs.python.org/library/fileinput.html) module to loop through each line of each file and write it to an output file? Outsource to windows [`copy`](http://www.computerhope.com/copyhlp.htm) command?
Putting the bytes in those files together is easy... however, I am not sure if that will result in continuous playback; I think it will if the files use the same bitrate, but I'm not sure. ``` from glob import iglob import shutil import os PATH = r'C:\music' destination = open('everything.mp3', 'wb') for filename in iglob(os.path.join(PATH, '*.mp3')): shutil.copyfileobj(open(filename, 'rb'), destination) destination.close() ``` That will create a single "everything.mp3" file with all bytes of all mp3 files in C:\music concatenated together. If you want to pass the names of the files on the command line, you can use `sys.argv[1:]` instead of `iglob(...)`, etc.
Just to summarize (and steal from [nosklo's answer](https://stackoverflow.com/questions/1001538/how-do-i-concatenate-files-in-python/1001587#1001587)), in order to concatenate two files you do: ``` destination = open(outfile,'wb') shutil.copyfileobj(open(file1,'rb'), destination) shutil.copyfileobj(open(file2,'rb'), destination) destination.close() ``` This is the same as: ``` cat file1 file2 > destination ```
How do I concatenate files in Python?
[ "", "python", "file", "mp3", "" ]
I have some current code and the problem is it's creating a 1252 codepage file; I want to force it to create a UTF-8 file. Can anyone help me with this code? As I say, it currently works... but I need to force the save to UTF-8. Can I pass a parameter or something? This is what I have; any help really appreciated ``` var out = new java.io.FileWriter( new java.io.File( path )), text = new java.lang.String( src || "" ); out.write( text, 0, text.length() ); out.flush(); out.close(); ```
Instead of using `FileWriter`, create a `FileOutputStream`. You can then wrap this in an `OutputStreamWriter`, which allows you to pass an encoding in the constructor. Then you can write your data to that inside a [try-with-resources statement](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html): ``` try (OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream(PROPERTIES_FILE), StandardCharsets.UTF_8)) { // do stuff } ```
Try this ``` Writer out = new BufferedWriter(new OutputStreamWriter( new FileOutputStream("outfilename"), "UTF-8")); try { out.write(aString); } finally { out.close(); } ```
How to write a UTF-8 file with Java?
[ "", "java", "file-io", "utf-8", "" ]
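The same rule as in the Java answers above, shown in Python for comparison (a cross-language sketch, not the Java solution itself): name the encoding explicitly at open time instead of inheriting the platform default, and the bytes on disk are guaranteed UTF-8.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out.txt")
text = "h\u00e9llo \u00fcn\u00efcode"   # é, ü, ï: bytes differ between cp1252 and UTF-8

# explicit encoding: the equivalent of OutputStreamWriter(..., "UTF-8")
with open(path, "w", encoding="utf-8") as out:
    out.write(text)

with open(path, "rb") as f:
    raw = f.read()
round_tripped = raw.decode("utf-8")
```

Reading the file back as raw bytes and decoding confirms the round trip, which is also a useful trick for verifying the Java version's output.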
I have two tables: * posts - holds post information * listen - holds information on which other users you are listening to (whose posts you want to view). The structure of listen is: * id (unique id) * userid (the user's unique id) * listenid (the id of a user they are listening to) How would I gather all the entries from listen that match the active user's userid, and then use those to find all the posts that match any of the found listenid values, so as to create a query of the combined users' posts I want to view?
``` SELECT posts.* FROM listen JOIN posts ON posts.userid = listen.listenid WHERE listen.userid = @current_user ```
You can do this with a simple natural join, or a direct join as given in other answers. ``` select * from posts, listen where listen.userid = $active_user and posts.userid = listen.listenid ``` **You probably want to be more selective about the columns you are bringing in.**
using results from one mysql for a second query
[ "", "php", "mysql", "" ]
I'm wanting to select rows in a table where the primary key is in another table. I'm not sure if I should use a JOIN or the IN operator in SQL Server 2005. Is there any significant performance difference between these two SQL queries with a large dataset (i.e. millions of rows)? ``` SELECT * FROM a WHERE a.c IN (SELECT d FROM b) SELECT a.* FROM a JOIN b ON a.c = b.d ```
**Update:** This article in my blog summarizes both my answer and my comments to another answers, and shows actual execution plans: * [**IN vs. JOIN vs. EXISTS**](http://explainextended.com/2009/06/16/in-vs-join-vs-exists/) --- ``` SELECT * FROM a WHERE a.c IN (SELECT d FROM b) SELECT a.* FROM a JOIN b ON a.c = b.d ``` These queries are not equivalent. They can yield different results if your table `b` is not key preserved (i. e. the values of `b.d` are not unique). The equivalent of the first query is the following: ``` SELECT a.* FROM a JOIN ( SELECT DISTINCT d FROM b ) bo ON a.c = bo.d ``` If `b.d` is `UNIQUE` and marked as such (with a `UNIQUE INDEX` or `UNIQUE CONSTRAINT`), then these queries are identical and most probably will use identical plans, since `SQL Server` is smart enough to take this into account. `SQL Server` can employ one of the following methods to run this query: * If there is an index on `a.c`, `d` is `UNIQUE` and `b` is relatively small compared to `a`, then the condition is propagated into the subquery and the plain `INNER JOIN` is used (with `b` leading) * If there is an index on `b.d` and `d` is not `UNIQUE`, then the condition is also propagated and `LEFT SEMI JOIN` is used. It can also be used for the condition above. * If there is an index on both `b.d` and `a.c` and they are large, then `MERGE SEMI JOIN` is used * If there is no index on any table, then a hash table is built on `b` and `HASH SEMI JOIN` is used. **Neither** of these methods reevaluates the whole subquery each time. See this entry in my blog for more detail on how this works: * [**Counting missing rows: SQL Server**](http://explainextended.com/2009/04/20/counting-missing-rows-sql-server/) There are links for all `RDBMS`'s of the big four.
Speaking from experience on a table with 49,000,000 rows, I would recommend a LEFT OUTER JOIN. Using IN or EXISTS took 5 minutes to complete, whereas the LEFT OUTER JOIN finishes in 1 second. ``` SELECT a.* FROM a LEFT OUTER JOIN b ON a.c = b.d WHERE b.d is not null -- Given b.d is a primary key with an index ``` Actually in my query I do this across 9 tables.
IN vs. JOIN with large rowsets
[ "", "sql", "sql-server-2005", "performance", "join", "" ]
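The caveat in the accepted answer above, that the two queries diverge when `b.d` is not unique, can be reproduced in a few lines. SQLite sketch; the execution-plan details in the answer are SQL Server's, but these semantics hold in every engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (c INTEGER);
    CREATE TABLE b (d INTEGER);        -- d is deliberately NOT unique
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1), (1), (3);
""")

in_rows = conn.execute(
    "SELECT * FROM a WHERE a.c IN (SELECT d FROM b)").fetchall()
join_rows = conn.execute(
    "SELECT a.* FROM a JOIN b ON a.c = b.d").fetchall()
# IN is a semi-join: at most one output row per row of a.
# JOIN repeats the row once per matching duplicate in b.
```

This is exactly why the answer rewrites the JOIN form with `SELECT DISTINCT d FROM b` before calling the two equivalent.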
I'm working on an RTS game in C++ targeted at handheld hardware (Pandora). For reference, the Pandora has a single ARM processor at ~600Mhz and runs Linux. We're trying to settle on a good message passing system (both internal and external), and this is new territory for me. It may help to give an example of a message we'd like to pass. A unit may make this call to load its models into memory: `sendMessage("model-loader", "load-model", my_model.path, model_id );` In return, the unit could expect some kind of message containing a model object for the particular model\_id, which can then be passed to the graphics system. Please note that this sendMessage function is in no way final. It just reflects my current understanding of message passing systems, which is probably not correct :) From what I can tell there are two pretty distinct choices. One is to pass messages in memory, and only pass through the network when you need to talk to an external machine. I like this idea because the overhead seems low, but the big problem here is it seems like you need to make extensive use of mutex locking on your message queues. I'd really like to avoid excess locking if possible. I've read a few ways to implement simple queues without locking (by relying on atomic `int` operations) but these assume there is only one reader and one writer for a queue. This doesn't seem useful to our particular case, as an object's queue will have many writers and one reader. The other choice is to go completely over the network layer. This has some fun advantages like getting asynchronous message passing pretty much for free. Also, we gain the ability to pass messages to other machines using the exact same calls as passing locally. However, this solution rubs me the wrong way, probably because I don't fully understand it :) Would we need a socket for every object that is going to be sending/receiving messages? If so, this seems excessive. A given game will have thousands of objects. 
For a somewhat underpowered device like the Pandora, I fear that abusing the network like that may end up being our bottleneck. But, I haven't run any tests yet, so this is just speculation. MPI seems to be popular for message passing but it sure feels like overkill for what we want. This code is never going to touch a cluster or need to do heavy calculation. Any insight into what options we have for accomplishing this is much appreciated.
The network will be using locking as well. It will just be where you cannot see it, in the OS kernel. What I would do is create your own message queue object that you can rewrite as you need to. Start simple and make it better as needed. That way you can make it use any implementation you like behind the scenes without changing the rest of your code. Look at several possible implementations that you might like to do in the future and design your API so that you can handle them all efficiently if you decide to implement in those terms. If you want really efficient message passing look at some of the open source L4 microkernels. Those guys put a *lot* of time into fast message passing.
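For the "start simple and rewrite behind your own API" suggestion, here is a minimal sketch of what that first in-memory implementation could look like. The `Message`/`MessageQueue` names are made up for illustration; a real version would add topic routing, bounded capacity, and so on:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// Illustrative message type; a real system would carry typed payloads.
struct Message {
    std::string topic;
    std::string payload;
};

// Blocking queue for the many-writers / one-reader case from the question.
// All locking lives behind this API, so it can later be swapped for a
// lock-free or networked implementation without touching callers.
class MessageQueue {
public:
    void send(Message m) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(m));
        }  // unlock before waking the reader to avoid pointless contention
        ready_.notify_one();
    }

    Message receive() {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        Message m = std::move(queue_.front());
        queue_.pop();
        return m;
    }

private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<Message> queue_;
};
```

Callers could then write something close to the `sendMessage("model-loader", ...)` call in the question, with the dispatcher owning one queue per subsystem.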
Since this is a small platform, it might be worth timing both approaches. However, barring some kind of big speed issue, I'd always go for the approach that is simpler to code. That is probably going to be using the network stack, as it will be the same code no matter where the recipient is, and you won't have to manually code and debug your mutual exclusions, message buffering, allocations, etc. If you find out it is too slow, you can always recode the local stuff using memory later. But why waste the time doing that up front if you might not have to?
Options for a message passing system for a game
[ "c++", "asynchronous", "networking", "shared-memory", "message-passing" ]
I have a listview. In my listview I have a dropdownbox which I want to fill in my code-behind page. The only thing is, I don't know how to access this webcontrol. The following doesn't work: ``` DropDownList ddl = (DropDownList)lvUserOverview.Controls[0]; ``` I know the index is 0 because the dropdownlist is the only control on the listview (also, when I try index 1 I get an index out of range exception). Can someone tell me how I can access the dropdownlist? In my code-behind I want to add listitems. ASPX code: ``` <asp:DropDownList ID="ddlRole" onload="ddlRole_Load" runat="server"> </asp:DropDownList> ``` Code-behind: ``` protected void ddlRole_Load(object sender, EventArgs e) { DropDownList ddl = (DropDownList)lvUserOverview.FindControl("ddlRole"); if (ddl != null) { foreach (Role role in roles) ddl.Items.Add(new ListItem(role.Description, role.Id.ToString())); } } ```
To get a handle to the drop down list inside of its own Load event handler, all you need to do is cast sender as a DropDownList. ``` DropDownList ddlRole = sender as DropDownList; ```
If this is being rendered in a ListView then there's a chance that multiple DropDownLists are going to be instantiated; each will get a unique ID and you wouldn't be able to use Matthew's approach. You might want to use the ItemDataBound event to access e.Item.FindControl("NameOfDropDownList"), which will allow you to iterate on each dropdown created. If you are only creating one... why is it in a ListView?
C# How to access a dropdownbox in a listview?
[ "c#", "asp.net", "listview", "dropdownbox" ]
I'm currently in the process of building a repository for a project that will be DB intensive (performance tests have been carried out and caching is needed, hence why I'm asking). The way I've got it set up now is that each object is individually cached; if I want to do a query for those objects I pass the query to the database and return the ids required. (For some simple queries I've cached and managed the ids.) I then hit the cache with these ids and pull them out; any missing objects are bundled into a "where in" statement and fired at the database, at which point I repopulate the cache with the missing ids. The queries themselves are most likely to be about paging / ordering the data. Is this a suitable strategy? Or perhaps are there better techniques available?
This is a reasonable approach and I have gone this route before; it's best to use this for simple caching. However, when you are updating or writing to the database you will run into some interesting problems and you should handle these scenarios carefully. For example, your cached data will become obsolete if the user updates the record in the database. In that scenario you will either need to simultaneously update the in-memory cache or purge the cache so that it can be refreshed on the next fetch query. Things can also get tricky if, for example, the user updates a customer's email address which is in a separate table but associated via a foreign key. Besides database caching you should also be considering output caching. This works quite well if, for example, you have a table that shows sales data for the previous month. The table could be stored in another file that gets included in a bunch of other pages that want to show the table. Now if you cache the file with the sales data table, when those other pages request this file the caching engine can fetch it straight from disk and the business logic layer doesn't even get hit. This is not applicable all the time but quite useful for custom controls. **Unit of Work Pattern** It also helps to know about the [Unit of Work](http://martinfowler.com/eaaCatalog/unitOfWork.html) pattern. > When you're pulling data in and out of a database, it's important to keep track of what you've changed; otherwise, that data won't be written back into the database. Similarly you have to insert new objects you create and remove any objects you delete. > You can change the database with each change to your object model, but this can lead to lots of very small database calls, which ends up being very slow. Furthermore it requires you to have a transaction open for the whole interaction, which is impractical if you have a business transaction that spans multiple requests. The situation is even worse if you need to keep track of the objects you've read so you can avoid inconsistent reads. > A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work.
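The update/invalidate discipline described above is easier to see in a few lines of code. Here is a minimal read-through sketch, in Python purely for brevity, with `fetch_from_db` standing in for the real data-access call:

```python
class ReadThroughCache:
    """Minimal read-through cache with explicit invalidation on writes."""

    def __init__(self, fetch_from_db):
        self._fetch = fetch_from_db  # stand-in for the real DAL call
        self._store = {}

    def get(self, key):
        # On a miss, fall through to the database and remember the result.
        if key not in self._store:
            self._store[key] = self._fetch(key)
        return self._store[key]

    def invalidate(self, key):
        # Call this from the update path so stale data is refetched next time.
        self._store.pop(key, None)
```

Every write path must either update the cached entry in place or call `invalidate`; forgetting to do so is exactly the stale-data failure mode warned about above.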
If you are using SQL Server, you can use [SqlCacheDependency](http://msdn.microsoft.com/en-us/library/system.web.caching.sqlcachedependency.aspx), where your cache will be automatically repopulated when the data table changes in the database; the link above covers SqlCacheDependency. This link contains a similar [cache dependency solution](https://stackoverflow.com/questions/874563/dynamically-reading-resources-from-a-file/874577#874577). (It's for a file rather than a DB. You will need to make some changes as per the MSDN link above to have a cache dependency on the DB.) Hope this helps :)
Caching Strategy for queried data
[ "sql", "database", "caching" ]
So I have a collection of Razzies created from a Collection of Bloops. I retrieve this collection using a Linq query. Reference:[Linq Select Certain Properties Into Another Object?](https://stackoverflow.com/questions/923238/linq-select-certain-properties-into-another-object) for the query. I would like to know if it is possible to run a method on all of the newly created Razzies before returning the collection, or even right after, just without using a for-loop. I tried this: ``` Dim results = From item In bloops _ Select New Razzie() With _ { _ .FirstName = item.FirstName, _ .LastName = item.LastName _ }.UpdateAddress(item.Address) ``` But it returns nothing.
Russ, this might do what you want. It's a pretty simple approach. If this is not what you want, please expand your question. This will run the method on each element as you enumerate over them. It will **not** run the method until you enumerate, but you can safely know that the method **will** run before you use the data. **EDIT** Since you are using a sealed 3rd party class, use extension methods. That's what they're for. ;) Modified code to use extension methods. ``` class MyArgs { } class Razzie //pretend this is a 3rd party class that we can't edit { public string FirstName { get; set; } public string LastName { get; set; } } static class RazzieExtensions { public static Razzie MyMethod(this Razzie razzie, MyArgs args) { razzie.FirstName = razzie.FirstName.ToUpper(); return razzie; } } class Program { static void Main(string[] args) { var bloops = new List<Razzie> { new Razzie{FirstName = "name"}, new Razzie{FirstName = "nAmE"} }; var myArgs = new MyArgs(); var results = from item in bloops select new Razzie { FirstName = item.FirstName, LastName = item.LastName }.MyMethod(myArgs); foreach (var r in results) Console.WriteLine(r.FirstName); Console.ReadKey(); } } ```
Using a foreach loop after your initial processing is the normal way to do this. If you don't want to use a foreach loop, you'll need to define your own extension method to handle this situation.
Run a method on all objects within a collection
[ "c#", "vb.net", "linq", ".net-3.5", "ienumerable" ]
I need to determine whether the current invocation of PHP is from the command line (CLI) or from the web server (in my case, Apache with mod\_php). Any recommended methods?
`php_sapi_name` is the function you will want to use as it returns a lowercase string of the interface type. In addition, there is the PHP constant `PHP_SAPI`. Documentation can be found here: <http://php.net/php_sapi_name> For example, to determine if PHP is being run from the CLI, you could use this function: ``` function isCommandLineInterface() { return (php_sapi_name() === 'cli'); } ```
I have been using this function for a few years ``` function is_cli() { if ( defined('STDIN') ) { return true; } if ( php_sapi_name() === 'cli' ) { return true; } if ( array_key_exists('SHELL', $_ENV) ) { return true; } if ( empty($_SERVER['REMOTE_ADDR']) and !isset($_SERVER['HTTP_USER_AGENT']) and count($_SERVER['argv']) > 0) { return true; } if ( !array_key_exists('REQUEST_METHOD', $_SERVER) ) { return true; } return false; } ```
PHP - how to best determine if the current invocation is from CLI or web server?
[ "php" ]
I would like to write automated tests that run in medium trust and fail if they require full trust. I am writing a library where some functionality is only available in full trust scenarios and I want to verify that the code I wish to run in medium trust will work fine. If also want to know that if I change a class that requires full trust, that my tests will fail. I have tried creating another AppDomain and loading the medium trust PolicyLevel, but I always get an error with assembly or its dependency could not be loaded while trying to run the cross AppDomain callback. Is there a way to pull this off? **UPDATE**: Based replies, here is what I have. Note that your class being tested must extend MarshalByRefObject. This is very limiting, but I don't see a way around it. ``` using System; using System.Reflection; using System.Security; using System.Security.Permissions; using Xunit; namespace PartialTrustTest { [Serializable] public class ClassUnderTest : MarshalByRefObject { public void PartialTrustSuccess() { Console.WriteLine( "partial trust success #1" ); } public void PartialTrustFailure() { FieldInfo fi = typeof (Int32).GetField( "m_value", BindingFlags.Instance | BindingFlags.NonPublic ); object value = fi.GetValue( 1 ); Console.WriteLine( "value: {0}", value ); } } public class Test { [Fact] public void MediumTrustWithExternalClass() { // ClassUnderTest must extend MarshalByRefObject var classUnderTest = MediumTrustContext.Create<ClassUnderTest>(); classUnderTest.PartialTrustSuccess(); Assert.Throws<FieldAccessException>( classUnderTest.PartialTrustFailure ); } } internal static class MediumTrustContext { public static T Create<T>() { AppDomain appDomain = CreatePartialTrustDomain(); var t = (T) appDomain.CreateInstanceAndUnwrap( typeof (T).Assembly.FullName, typeof (T).FullName ); return t; } public static AppDomain CreatePartialTrustDomain() { var setup = new AppDomainSetup {ApplicationBase = AppDomain.CurrentDomain.BaseDirectory}; var permissions = new 
PermissionSet( null ); permissions.AddPermission( new SecurityPermission( SecurityPermissionFlag.Execution ) ); permissions.AddPermission( new ReflectionPermission( ReflectionPermissionFlag.RestrictedMemberAccess ) ); return AppDomain.CreateDomain( "Partial Trust AppDomain: " + DateTime.Now.Ticks, null, setup, permissions ); } } } ```
I just posted an article titled [Partial Trust Testing with xUnit.net](http://brice-lambson.blogspot.com/2012/08/partial-trust-testing-with-xunitnet.html). It details the xUnit.net-based framework that we use on the Entity Framework team to exercise code under partial trust. Here is an example of its usage. ``` public class SomeTests : MarshalByRefObject { [PartialTrustFact] public void Partial_trust_test1() { // Runs in medium trust } } // Or... [PartialTrustFixture] public class MoreTests : MarshalByRefObject { [Fact] public void Another_partial_trust_test() { // Runs in medium trust } } ```
Shamelessly stolen from [How to Host a Partial Trust Sandbox – #7](http://blogs.rev-net.com/ddewinter/2009/05/22/how-to-host-a-partial-trust-sandbox/), but reimplemented (along with a simple test case) in F# just for kicks :-) ``` open System open System.Reflection open System.Security open System.Security.Permissions open System.Security.Policy type Program() = inherit System.MarshalByRefObject() member x.PartialTrustSuccess() = Console.WriteLine("foo") member x.PartialTrustFailure() = let field = typeof<Int32>.GetField("m_value", BindingFlags.Instance ||| BindingFlags.NonPublic) let value = field.GetValue(1) Console.WriteLine("value: {0}", value) [<EntryPoint>] let main _ = let appDomain = let setup = AppDomainSetup(ApplicationBase = AppDomain.CurrentDomain.BaseDirectory) let permissions = PermissionSet(null) permissions.AddPermission(SecurityPermission(SecurityPermissionFlag.Execution)) |> ignore permissions.AddPermission(ReflectionPermission(ReflectionPermissionFlag.RestrictedMemberAccess)) |> ignore AppDomain.CreateDomain("Partial Trust AppDomain", null, setup, permissions) let program = appDomain.CreateInstanceAndUnwrap( typeof<Program>.Assembly.FullName, typeof<Program>.FullName) :?> Program program.PartialTrustSuccess() try program.PartialTrustFailure() Console.Error.WriteLine("partial trust test failed") with | :? 
FieldAccessException -> () 0 ``` And a C# version: ``` using System; using System.Reflection; using System.Security; using System.Security.Permissions; using System.Security.Policy; namespace PartialTrustTest { internal class Program : MarshalByRefObject { public void PartialTrustSuccess() { Console.WriteLine("partial trust success #1"); } public void PartialTrustFailure() { FieldInfo fi = typeof(Int32).GetField("m_value", BindingFlags.Instance | BindingFlags.NonPublic); object value = fi.GetValue(1); Console.WriteLine("value: {0}", value); } private static AppDomain CreatePartialTrustDomain() { AppDomainSetup setup = new AppDomainSetup() { ApplicationBase = AppDomain.CurrentDomain.BaseDirectory }; PermissionSet permissions = new PermissionSet(null); permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution)); permissions.AddPermission(new ReflectionPermission(ReflectionPermissionFlag.RestrictedMemberAccess)); return AppDomain.CreateDomain("Partial Trust AppDomain", null, setup, permissions); } static void Main(string[] args) { AppDomain appDomain = CreatePartialTrustDomain(); Program program = (Program)appDomain.CreateInstanceAndUnwrap( typeof(Program).Assembly.FullName, typeof(Program).FullName); program.PartialTrustSuccess(); try { program.PartialTrustFailure(); Console.Error.WriteLine("!!! partial trust test failed"); } catch (FieldAccessException) { Console.WriteLine("partial trust success #2"); } } } } ``` ``` C:\temp\PartialTrustTest\bin\Debug>PartialTrustTest.exe partial trust success #1 partial trust success #2 ```
How to Automate Testing of Medium Trust Code
[ "c#", "automated-tests", "functional-testing", "medium-trust", "full-trust" ]
`size_t` is defined as an `unsigned` integer, but the size of it depends on whether you're on a 32- or 64-bit machine. What's a correct and portable way to print out a `size_t`?
Try using the `%zu` format string ``` size_t val = get_the_value(); printf("%zu",val); ``` The z portion is a length specifier which says the argument will be size\_t in length. Source - <http://en.wikipedia.org/wiki/Printf#printf_format_placeholders>
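Since the whole point of `%zu` is that it matches `size_t` on any target, a tiny self-contained check can be written like this (the helper names are made up for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The 'z' length modifier (C99) tells the printf family that the argument
 * has exactly the width of size_t, on both 32- and 64-bit targets. */
void print_size(size_t n) {
    printf("%zu\n", n);
}

/* Same idea, but into a buffer so the result can be inspected. */
void format_size(size_t n, char *buf, size_t cap) {
    snprintf(buf, cap, "%zu", n);
}
```

On pre-C99 compilers `%zu` is not available, and casting to `unsigned long` with `%lu` is the usual fallback.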
There's a C++ tag on this, so `cout <<` is another possible answer. This is surprisingly hard to get right in all versions of C. In C90, casting to `unsigned long` should work, but that may well not work in C99, and the C99 solutions won't necessarily work in C90. The ability to reliably distinguish between C90 and C99 was introduced in the 1995 changes (specifying the allowable values for `__STDC__`). I don't think there is a completely portable way that works for C90, C99, and C++, although there are solutions for any individual one of those.
What's the correct way to use printf to print a size_t?
[ "c++", "c", "printf", "size-t" ]
I have a web page. From that I want to find all the IMG tags and get the SRC of those IMG tags. What would be the regular expression to do this? Some explanation: I am scraping a web page. All the data is displayed correctly except the images. To solve this, now I have an idea: find the SRC and replace it, e.g. ``` /images/header.jpg ``` and replace this with ``` www.stackoverflow/images/header.jpg ```
You don't want a regular expression, you want a parser. [From this question](https://stackoverflow.com/questions/773340/can-you-provide-an-example-of-parsing-html-with-your-favorite-parser) (this uses the HTML Agility Pack): ``` using HtmlAgilityPack; class Program { static void Main(string[] args) { var web = new HtmlWeb(); var doc = web.Load("http://www.stackoverflow.com"); var nodes = doc.DocumentNode.SelectNodes("//img[@src]"); foreach (var node in nodes) { // HtmlNode has no `src` property; read the attribute instead Console.WriteLine(node.Attributes["src"].Value); } } } ```
As pointed out, regular expression are not the perfect solution, but you can usually build one that is good enough for the job. This is what I would use: ``` string newHtml = Regex.Replace(html, @"(?<=<img\s+[^>]*?src=(?<q>['""]))(?<url>.+?)(?=\k<q>)", m => "http://www.stackoverflow.com" + m.Value); ``` It will match src attributes delimited by single or double quotes. Of course, you would have to change the lambda/delegate to do your own replacing logic, but you get the idea :)
Regular Expression to find src from IMG tag
[ "c#", "asp.net", "regex" ]
I have the following problem: I want to send a type (java.lang.Class) over the wire and 'define' the class on the other side. I tried it like this: ``` ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(bos); oos.writeObject(MyClass.class); ``` and on the receiving end: ``` ByteArrayInputStream bis = new ByteArrayInputStream(request.getBytes()); ObjectInputStream ois = new ObjectInputStream(bis); Class c = (Class) ois.readObject(); // ClassNotFoundException ``` so obviously I need to send the raw bytecode of the class and do a ``` ClassLoader.defineClass(bytes, .. ``` but unfortunately I fail to see how I can retrieve the bytecode of a loaded class. I'm searching for something like: ``` byte[] byteCode = MyClass.class.toByteArray(); ``` Is this even possible with the standard JDK or is there any small lib out there that can do that?
I don't think what you want is possible in full generality. The act of defining a class from its bytecode is not reversible. What you should be able to do, however, is to directly read the bytecode file (assuming that it's a `URLClassLoader`): ``` MyClass.class.getResourceAsStream("MyClass.class") ``` Alternatively, you could just make the class files accessible via HTTP and directly use a `URLClassLoader` on the receiving side.
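Building on that resource trick, here is a hedged sketch of a helper that pulls the raw bytes of an already-loaded class. The `ClassBytes` name is made up; it assumes a top-level class whose `.class` file is reachable through its class loader, which holds for ordinary jar/directory classpaths (and `.class` resources stay readable even for JDK classes):

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class ClassBytes {

    // Read the raw bytecode of an already-loaded, top-level class via its
    // class loader. (Nested classes would need name mangling like
    // Outer$Inner.class, and dynamically generated classes have no
    // backing resource at all.)
    public static byte[] of(Class<?> cls) throws Exception {
        String resource = cls.getSimpleName() + ".class";
        try (InputStream in = cls.getResourceAsStream(resource);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = of(String.class);
        // Every class file starts with the magic number 0xCAFEBABE
        System.out.printf("%d bytes, magic %02X%02X%02X%02X%n", bytes.length,
                bytes[0] & 0xFF, bytes[1] & 0xFF, bytes[2] & 0xFF, bytes[3] & 0xFF);
    }
}
```

The resulting `byte[]` could then be shipped and fed to `defineClass` on the other side, as the question suggests.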
You cannot do this from memory. You must have the byte codes defining the class, which for most classes can be found by asking the JVM. This code from <http://www.exampledepot.com/egs/java.lang/ClassOrigin.html> should get you started: ``` // Get the location of this class Class cls = this.getClass(); ProtectionDomain pDomain = cls.getProtectionDomain(); CodeSource cSource = pDomain.getCodeSource(); URL loc = cSource.getLocation(); // file:/c:/almanac14/examples/ ```
How to send a Class over the wire
[ "java", "serialization", "classloader" ]
What kind of naming convention is appropriate for ViewModel classes? Example: for HomeController, Index view? HomeIndexViewModel doesn't seem right.
EmployeesViewData. That's what I use and what I've seen in sample applications as well. **About your added example:** Imho the name of the class should specify what kind of data it contains. "....IndexViewData" is rather meaningless. What exactly is displayed on that page? Try to summarize it in 1 or 2 word(s) and add 'ViewData' behind it. Or else just take the name of the controller and drop the "index". HomeViewData sounds fine to me.
I use the following pattern because it's clear and unambiguous : * Model : Foo * View : FooView * ViewModel : FooViewModel
How do you name your ViewModel classes?
[ "c#", "asp.net-mvc", "model-view-controller" ]
I need to be able to rotate content inside a placeholder div based on percentage like this. The rotation will occur on page load. So each time a user reloads a page he has these chances of seeing content 1, 2 or 3 in the content placeholder: Content 1 = show 50% of the time Content 2 = show 25% of the time Content 3 = show 25% of the time I prefer Javascript but if there is a easier way to do it in ASP.NET on the front end template not the codebehind, that is also acceptable. If you have a solution or can point me to an existing script I would appreciate it. Thanks!
There's really no need to multiply/floor/ceil anything: the `Math.random()` function gives a value which is greater than or equal to 0 and less than 1. The following code would be a bit easier to maintain if you change the number of options or the chance percentage. ``` var contentId, random = Math.random(); if (random < 0.5) { // option 1: chance 0.0–0.499... contentId = 0; } else if (random < 0.75) { // option 2: chance 0.50–0.7499... contentId = 1; } else { // option 3: chance 0.75–0.99... contentId = 2; } loadContent(contentId); ```
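If the number of options or their weights might change later, the same cascade generalizes to a small helper. `pickIndex` is a made-up name, and the optional `random` parameter exists only so the behaviour can be checked deterministically:

```javascript
// Return the index of a weighted option. `weights` should sum to 1.0,
// e.g. [0.5, 0.25, 0.25] for the 50/25/25 split in the question.
function pickIndex(weights, random) {
    var r = (random === undefined) ? Math.random() : random;
    var cumulative = 0;
    for (var i = 0; i < weights.length; i++) {
        cumulative += weights[i];
        if (r < cumulative) {
            return i;
        }
    }
    return weights.length - 1; // guard against floating-point drift
}
```

On page load you would call `pickIndex([0.5, 0.25, 0.25])` and show the matching content block.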
Here is a quick hack; there is probably a better way. ``` var num = Math.floor(Math.random()*4); if((num==0) || (num==1)) { //show hidden div or document.write (50% probability) } else if (num==2) { //show hidden div or document.write (25% probability) } else { //show hidden div or document.write (25% probability) } ```
Javascript content rotation based on percentage
[ "javascript", "rotation", "frontend" ]
I'm writing a Django app that performs various functions, including inserting, or updating new records into the database via the URL. So some internal application sends off a request to `/import/?a=1&b=2&c=3`, for example. In the view, I want to create a new object, `foo = Foo()` and have the members of `foo` set to the data in the request.GET dictionary. Here is what I'm doing now: 1. Request sent to `/import/?a=1&b=2&c=3` 2. View creates new object: `foo = Foo()` 3. Object is updated with data. Here is what I got thus far: ``` foo.a = request['a'] foo.b = request['b'] foo.c = request['c'] ``` Obviously this is tedious and error prone. The data in the URL has the exact same name as the object's members so it is a simple 1-to-1 mapping. Ideally, I would like to do able to do something like this: ``` foo = Foo() foo.update(request.GET) ``` or something to that effect. Thanks!
You can use the setattr function to dynamically set attributes: ``` for key,value in request.GET.items(): setattr(foo, key, value) ```
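The same loop can be exercised outside Django with a plain dict standing in for `request.GET`; `Foo` and `apply_params` below are illustrative stand-ins, not Django APIs:

```python
class Foo:
    """Stand-in for the model object in the question."""
    a = None
    b = None
    c = None

def apply_params(obj, params):
    # Mirror of the view code: copy each query parameter onto the object.
    for key, value in params.items():
        setattr(obj, key, value)
    return obj

foo = apply_params(Foo(), {"a": "1", "b": "2", "c": "3"})
print(foo.a, foo.b, foo.c)  # query-string values arrive as strings
```

In a real view you would also whitelist the expected keys, since this copies whatever the caller puts in the query string.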
If `request.GET` is a dictionary and `class Foo` does not use `__slots__`, then this should also work: ``` # foo is a Foo instance foo.__dict__.update(request.GET) ```
How do I update an object's members using a dict?
[ "python", "django" ]
I'm working on a C++ project that I don't intend to develop or deploy using .NET libraries or tools, which means it would make sense for me to create it using a Visual Studio Win32 Console application. However, I've heard that the debugging abilities when using a CLR application under Visual Studio are much more powerful. So I have a few questions: 1. Is it true that having a CLR app vs. a Win32 app adds capabilities to your development process even if you don't utilize any .NET libraries or other resources? 2. If so, would I still be able to develop/compile the project as a CLR project to take advantage of these even though I'd be developing a pure C++ project using STL, etc. and not taking advantage of any .NET functionality? Or would such a project require fundamental differences that would make it non-trivial to revert back, meaning I should stick with a Win32 console app?
Bottom line answer: if you are never intending to use the CLR or any .Net objects in your application, just use a normal Win32 C++ library. Doing anything else will cause you pain down the road. Now, to answer the original question about debugging, yes, debugging with the CLR has certain advantages over debugging a normal C++ app. Starting with Visual Studio 2005, both C# and VB.Net began to focus on making the variable display in the locals / autos / watch window much more valuable. It was mainly done through the introduction of .Net attributes such as DebuggerDisplay, DebuggerTypeProxy and the visualizer framework. If you don't use any .Net types though, you will get none of these benefits. The C++ expression evaluator does not take advantage of any of these. It has its own methods of customizing type display. But it's not as featureful (or potentially dangerous) as the attribute style because it doesn't allow for code to run in the debuggee process. That's not to say debugging C++ provides a poor experience. It is merely different and there are better displays for many STL container types. Debugging a CLR app also has certain disadvantages. For instance, debugging optimized code is near impossible at times because the JITer will hide local variables, parameters and often "this". Debugging a similarly constructed C++ app can also be frustrating but you can always grab the registers and disassembly to see what's going on. Doing the same for a CLR app is difficult at best.
I think compiling native C++ code into the CLR opens a whole can of worms. Unless you have a large investment in existing C++ code and some necessity to run the code with managed types, this is something you want to avoid. For example, C++/CLI is one way to bundle native C++ code right into a CLR assembly, but C++/CLI adds non-standard syntax to the C++ language, and using native C++ types mixed with managed types seems like a very tricky issue, to say the least. So, in conclusion, I would just keep it as a native app. If you have any plan of porting it to the CLR and you've just started working on this project, I would seriously think of starting to write in a CLR-native language like C#.
Win32 Console app vs. CLR Console app
[ ".net", "c++", "visual-studio", "debugging", "console-application" ]
I have an application that has a certain page -- let's call it Page A. Page A is sometimes a top-level page, but also sometimes is embedded as an iframe within page B. All pages come from the same server and there are no cross-domain issues. I have a greasemonkey script that runs on page A. How can the greasemonkey script detect whether page A is within the iframe context or not?
Looking at frame length breaks down generally if page A itself has frames (I know this might not be the case for this specific instance). The more reliable and meaningful test would be: ``` if (window!=window.top) { /* I'm in a frame! */ } ```
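The check can be wrapped in a function that takes the window object as a parameter, which also makes it testable outside a browser (`isEmbedded` is a made-up name; in the Greasemonkey script you would pass the real `window`):

```javascript
// True when the given window is not the top-level browsing context.
// In a browser, call isEmbedded(window); for testing, any object with
// a `top` property works.
function isEmbedded(win) {
    return win !== win.top;
}
```

Unlike counting `frames.length`, this also behaves sensibly when page A itself contains frames.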
The predicate ``` (window.parent.frames.length > 0) ``` will tell you just what you want.
Detect iFrame embedding in Javascript
[ "javascript", "iframe", "greasemonkey" ]
This following code gives me the error below . I think I need "InvokeRequired" . But I don't understand how can I use? > Cross-thread operation not valid: Control 'listBox1' accessed from a thread other than the thread it was created on. The code: ``` using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; using System.Threading; namespace WindowsApplication1 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } protected static DataSet dataset = null; private void Form1_Load(object sender, EventArgs e) { } private void timer1_Tick(object sender, EventArgs e) { SimulationFrameWork.MCSDirector run = new SimulationFrameWork.MCSDirector(); DataSet ds = run.Get(); if (ds.Tables[0].Rows.Count > 0) { for (int i = 0; i < ds.Tables[0].Rows.Count; i++) { if (ds.Tables[0].Rows[i]["result"].ToString() == "0") { dataset = run.Get(int.Parse(ds.Tables[0].Rows[i]["ID"].ToString())); WorkerObject worker = new WorkerObject( int.Parse(dataset.Tables[0].Rows[i]["ID"].ToString()), int.Parse(dataset.Tables[0].Rows[i]["Iteration"].ToString()), listBox1, timer1); Thread thread1 = new Thread(new ThreadStart(worker.start)); thread1.Start(); } } } } } public class WorkerObject { private int id; private int nmax; private ListBox list1; private System.Windows.Forms.Timer timer1; public WorkerObject(int _id, int _nmax, ListBox _list1, System.Windows.Forms.Timer _timer1) { id = _id; nmax = _nmax; list1 = _list1; timer1 = _timer1; } public void start() { timer1.Stop(); int i, idaire, j; double pi = 0.0, x, y; Random rnd = new Random(); for (i = 0; i < 100; i++) { idaire = 0; for (j = 0; j < nmax; j++) { x = rnd.Next(1, 10000) / (double)10000; y = rnd.Next(1, 10000) / (double)10000; if (Math.Pow(x, 2) + Math.Pow(y, 2) <= 1.0) idaire += 1; } pi = 4 * (double)idaire / (double)nmax; nmax *= 10; list1.Items.Add( "Iterasyon:" + nmax.ToString() + " ----->" + pi.ToString() + "\n"); 
System.Threading.Thread.Sleep(100); } SimulationFrameWork.MCSDirector run = new SimulationFrameWork.MCSDirector(); run.Update(id, pi); list1.Items.Add("\n\n islem bitti..."); } } } ```
Just encapsulate adding the text to the listbox in another method: ``` private void timer1_Tick(object sender, EventArgs e) { // ... AddTextToListBox("\n\n işlem bitti..."); } private void AddTextToListBox(string text) { if(list1.InvokeRequired) { // MethodInvoker takes no parameters, so use Action<string> here list1.Invoke(new Action<string>(AddTextToListBox), new object[] { text }); return; } list1.Items.Add(text); } ```
This should get you around it ``` private delegate void stringDelegate(string s); private void AddItem(string s) { if (list1.InvokeRequired) { stringDelegate sd = new stringDelegate(AddItem); this.Invoke(sd, new object[] { s }); } else { list1.Items.Add(s); } } ``` Just call AddItem and this will invoke the add using a delegate if it is required otherwise it will just add the item directly to the box. OneSHOT
How can I correct the error "accessed from a thread other than the thread it was created on"?
[ "c#", ".net", "multithreading" ]
how would you parse a Microsoft [OLE compound document](http://www.forensicswiki.org/wiki/OLE_Compound_File) using Python? **Edit:** Sorry, I forgot to say that I need write support too.. In short, I have an OLE compound file that I have to read, modify a bit and write back to disk (it's a file made with a CAD application)
Just found [OleFileIO\_PL](http://www.decalage.info/python/olefileio), ~~but it doesn't have write support.. :/~~ and as of version 0.40 (2014) it **has** write support. **Edit:** Looks like there's a way (though Windows-only) that supports writing too.. The [pywin32](http://python.net/crew/mhammond/win32/) extensions ([StgOpenStorage](http://docs.activestate.com/activepython/2.6/pywin32/pythoncom__StgOpenStorage_meth.html) function and related)
An alternative: The xlrd package has a reader. The xlwt package (a fork of pyExcelerator) has a writer. They handle file sizes of 100s of MB cheerfully; the packages have been widely used for about 4 years. The compound document modules are targeted at getting "Workbook" streams into and out of Excel .xls files as efficiently as possible, but are reasonably general-purpose. Unlike OleFileIO\_PL, they don't provide access to the internals of Property streams.

```
http://pypi.python.org/pypi/xlrd
http://pypi.python.org/pypi/xlwt
```

If you decide to use them and need help, ask in this forum: `http://groups.google.com/group/python-excel`
OLE Compound Documents in Python
[ "", "python", "ole", "" ]
I have a function:

```
function open($file){
    return fopen($file, 'w');
}
```

This is then called by:

```
function write($file,$text){
    $h = $this->open($file);
    fwrite($h,$text);
}
```

This doesn't work. It returns that fwrite was given an invalid resource stream. This:

```
function open($file){
    $h = fopen($file, 'w');
    return $h;
}
```

Works fine, but I can't figure out why assigning a variable first works and directly returning fopen() doesn't.
It's probably just because you are working in the scope of an object, so the resource stream gets cleaned up too early. Since a resource stream is passed by reference, if you have a variable set, the reference is to the variable instead of directly to the resource stream — so it'll work.
Does it have something to do with the fact that you're within an object? The following script works for me:

```
<?php
function open($file) {
    return fopen($file, 'w');
}

function write($file, $text) {
    $h = open($file);
    fwrite($h, $text);
}

write("test.txt", "hello\n");
?>
```

I'm running PHP 5.2.8 on Mac OS X 10.5.7.
Why does "return fopen" fail?
[ "", "php", "" ]
I haven't found many ways to increase the performance of a Java application that does intensive XML processing other than to leverage hardware such as Tarari or Datapower. Does anyone know of any open source ways to accelerate XML parsing?
Take a look at Stax (streaming) parsers. See [the sun reference manual](http://java.sun.com/javaee/5/docs/tutorial/doc/bnbdw.html). One of the implementations is the [woodstox project](http://woodstox.codehaus.org/).
Since it hasn't been directly mentioned, I'll throw in [Aalto](http://wiki.fasterxml.com/AaltoHome), which is fastest java xml parser according to some measurements, like: * [JVM-serializers](https://github.com/eishay/jvm-serializers/wiki) (which compares, XML, JSON, protobuf, Thrift etc etc) * [Alternative serialization methods for WSTest](http://technotes.blogs.sapo.pt/1708.html) (Java web services) which are not written by Aalto developers.
Are there faster XML parsers in Java than Xalan/Xerces
[ "", "java", "xml", "performance", "parsing", "" ]
I can see how Swing uses the Decorator and Observer patterns. Observer: every component (e.g. `JButton`) is a subject which can add observers (`ActionListener`s). When someone pushes a button it notifies all its `ActionListener`s by calling their `actionPerformed(ActionEvent e)`. But how about the Command pattern? When I am making classes that implement `ActionListener` (e.g. `MyActionListener`), is the `actionPerformed(ActionEvent e)` now the `execute()` command? It confuses me that `actionPerformed(ActionEvent e)` is used both as an `execute()` and an `update()` method. Am I right here?
[Here is an article that will help.](http://code.tutsplus.com/tutorials/design-patterns-the-command-pattern--cms-22942) Basically, it is saying you can create concrete command classes that interact with a target object by deriving the ActionListener. Then you can expand what an action event invoker will do by registering these decoupled commands to it.
Yes, so basically making an object that encapsulates the behavior and other information that is needed when an action takes place can be seen as using the [command pattern](http://en.wikipedia.org/wiki/Command_pattern). The Wikipedia article linked above uses the [`Action`](http://java.sun.com/javase/6/docs/api/javax/swing/Action.html) interface as an example of the command pattern in Swing.

The `Action` interface is a subinterface of [`ActionListener`](http://java.sun.com/javase/6/docs/api/java/awt/event/ActionListener.html), so a class that implements `Action` will have to implement the `actionPerformed` method. Therefore, a class implementing `Action` will be encapsulating some operations which will be performed when an action occurs. And that class itself can be seen to follow the command pattern.

When it comes to the implementation, in general, an [`AbstractAction`](http://java.sun.com/javase/6/docs/api/javax/swing/AbstractAction.html) can be easier to use than implementing `Action` directly, as the latter has several methods that need to be implemented. An example using `AbstractAction` can be:

```
class MySpecialAction extends AbstractAction {
    // Constructor added so the (name, icon) instantiation below compiles.
    public MySpecialAction(String name, Icon icon) {
        super(name, icon);
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        // Perform operations.
    }
}
```

The `MySpecialAction` is a command pattern object -- it has the behavior it must exhibit when an action takes place. When instantiating the above class, one could try the following:

```
MySpecialAction action = new MySpecialAction("Special Action", mySpecialIcon);
```

Then, the action can be registered to multiple components, such as `JButton`s, `JMenuItem`s and such. In each case, the same `MySpecialAction` object will be called:

```
JMenuItem specialMenuItem = new JMenuItem(action);
/* ... */
JButton b = new JButton(action);
```

In both cases, the action that is associated with each component, the button and the menu item, refers to the same `MySpecialAction` action object, or command.

As we can see, the `MySpecialAction` object is functioning as an object following the command pattern, as it encapsulates some action to be performed at the time when an action takes place.
Understand Command Pattern in Swing
[ "", "java", "swing", "design-patterns", "" ]
I have a table called "wp-posts" with a field "post-content". This field contains the text for blog posts. I'd like to change all records to replace one URL with another. Imagine that I can have things like:

> This is a test and somewhere there's something like <img src="http://**oldurl.com**/wp-content/somimg.jpg"> and something like <a href="http://**oldurl.com**/something">a link</a>."

I want it to be

> This is a test and somewhere there's something like <img src="http://**newurl.com**/wp-content/somimg.jpg"> and something like <a href="http://**newurl.com**/something">a link</a>."

I need to be able to change this for every record in my table without having to open each post in Wordpress and change it by hand. There has to be a way to do this.
This can be easily achieved with a simple SQL statement using MySQL's `replace()` function.

Before we do that, you should definitely do a database dump or whatever you use for backups. It's not only that it's The Right Thing To Do™, but if you make a mistake on your substitution, it might prove difficult to undo it (yes, you could rollback, but you might only figure out your mistake later on.)

To create a database dump from MySQL, you can run something like this --

```
mysqldump -h hostname -u username -p databasename > my_sql_dump.sql
```

Where (and you probably know this, but for the sake of completeness for future generations...) --

* **hostname** is a placeholder for the database hostname. If the database server is running on your own machine, then you can either use "localhost" or simply leave the "-h hostname" off entirely
* **username** is a placeholder for the user with permission to run a dump on the database. This is often an admin, but if it's a shared db, it might simply be you.
* **databasename** is the name of the whole database containing your tables. (Note that the "-p" has nothing to do with this database name. "-p" indicates that MySQL should ask you for a password in order to log in.)

Now that we got that out of the way, you can log in to the MySQL database using:

```
mysql -h hostname -u username -p databasename
```

And simply run this statement:

```
UPDATE `wp-posts`
SET `post-content` = REPLACE(`post-content`, "http://oldurl.com", "http://newurl.com");
```

And that should do it!

If you make a mistake, you can often rerun the statement with the original and new texts inverted (if the new text -- in your case the new URL -- didn't already exist in the text before you did the replace.) Sometimes this is not possible depending on what the new text was (again, not likely in your case.) Anyway, you can always try recovering the sql dump --

```
cat my_sql_dump.sql | mysql -h hostname -u username -p databasename
```

And *voilà*.
Use the [replace string function](http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_replace) in MySQL: `UPDATE MyTable SET textfield = REPLACE(textfield, "http://oldurl.com/", "http://newurl.com")`
How can I change an URL inside a field in MySQL?
[ "", "sql", "mysql", "wordpress", "" ]
I noticed the eclipse indenter has support for the latest version of java, and it would be nice if I could use that class to indent generated java source code. Is there a way of integrating it ? EDIT: I need to be able to include the code formatter in my code. No external calls. EDIT2: I've managed to get it working. You can read the story [here](http://ssscripting.wordpress.com/2009/06/10/how-to-use-the-eclipse-code-formatter-from-your-code/). Thanks VonC !
You can try running the formatter as a [standalone application](http://help.eclipse.org/ganymede/index.jsp?topic=/org.eclipse.jdt.doc.user/tasks/tasks-231.htm) (also [detailed here](http://blogs.operationaldynamics.com/andrew/software/java-gnome/eclipse-code-format-from-command-line.html)).

```
eclipse -vm <path to virtual machine> -application org.eclipse.jdt.core.JavaCodeFormatter [ OPTIONS ] <files>
```

Try first to define formatting settings with the eclipse IDE in order to achieve the right result, then export those settings, and use that configuration file in the eclipse.exe parameters. Or see also ["Generating a Config File for the Formatter Application"](http://kickjava.com/src/org/eclipse/jdt/core/formatter/CodeFormatter.java.htm)

```
eclipse [...] -config <myExportedSettings>
```

---

In a java program, you can try to directly format by:

* Creating an instance of [`CodeFormatter`](http://kickjava.com/src/org/eclipse/jdt/core/formatter/CodeFormatter.java.htm)
* Using the method `void format(aString)` on this instance to format aString. It will return the formatted string.

---

Thanks to [Geo](https://stackoverflow.com/users/31610/geo) himself and his report in [his blog entry](http://ssscripting.wordpress.com/2009/06/10/how-to-use-the-eclipse-code-formatter-from-your-code/), I now know you need to use [`DefaultCodeFormatter`](http://mobius.inria.fr/eclipse-doc/org/eclipse/jdt/internal/formatter/DefaultCodeFormatter.html):

```
String code = "public class geo{public static void main(String[] args){System.out.println(\"geo\");}}";
CodeFormatter cf = new DefaultCodeFormatter();
TextEdit te = cf.format(CodeFormatter.K_UNKNOWN, code, 0, code.length(), 0, null);
IDocument dc = new Document(code);
try {
    te.apply(dc);
    System.out.println(dc.get());
} catch (MalformedTreeException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
} catch (BadLocationException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
```

Again, full details in the blog entry.

Thank you Geo for that feedback!

---

[Thorbjørn Ravn Andersen](https://stackoverflow.com/users/53897/thorbj%C3%B8rn-ravn-andersen) mentions [in the comments](https://stackoverflow.com/questions/975980/how-can-i-use-the-eclipse-indenter-from-my-code/976094#comment37758772_976094):

> [Maven2 Java Formatter Plugin v0.4](http://maven-java-formatter-plugin.googlecode.com/svn/site/0.4/usage.html) describes a maven plugin that allows Maven to invoke the Eclipse formatter.
> As of 0.4 it invokes Eclipse 3.5 which does not support Java 8.
Actually, there is one problem with VonC's answer: DefaultCodeFormatter is in an 'internal' package, and therefore should not be used by clients!

I recently asked the same question here on stackoverflow, and came up with [the answer a little while later](https://stackoverflow.com/questions/2873678/eclipse-jdt-call-correct-indentation-programmatically). In short, you need to use ToolFactory, as in

```
ToolFactory.createCodeFormatter(null);
```
How can I use the eclipse indenter from my code?
[ "", "java", "eclipse", "" ]
How do I access a JavaScript global variable from a CGI program?

Thanks, Chells
Three ways: POST, GET, or cookies. Which you use depends on your situation.

POST: Include a form on your page with two hidden fields. When an event occurs, fill the hidden fields with your JS variables and submit the form to your cgi program.

GET: Have JS tack the variables onto the URL. When the user clicks a link, it activates a JS function. The JS function sends the browser to "cgi-prog.cgi?JSvar1=foo&JSvar2=bar"

cookies: JS sets a cookie on the user's machine once it has determined the variables. Perl reads that cookie to get the variables.
The cgi prog runs on the server and your javascript is in the browser, right? Maybe you should pass the variable to the server via URL (GET) or HTTP POST?
How to Access Java Script Variable in CGI
[ "", "javascript", "cgi", "" ]
I'm writing a geocoding component for an app I'm building, and I decided to use Yahoo Maps. I wrote the geocode API wrapper and some unit tests and called it a day. Came back the next day, ran the tests, and found that the latitude and longitude had changed. Mind you, the change was small enough not to matter to me, but it was significant enough to affect the rounding to 4 decimal places that I was using in my unit test to compare the result. I've never heard of changing latitude and longitude before. Is this something I should expect / account for? Can anyone explain why?
Geocoding, especially when done from addresses, is rarely 100% accurate. There are many companies who do nothing but compile the street data used for geocoding purposes. The data is not accurate, but is frequently updated to improve address matching. When this happens, you'll get a different result. My guess is that one of two things happened: 1) Yahoo updated their source data. 2) You got a result from a different server with a different set of source data.
Continental Drift?
Latitude and longitude changing?
[ "", "c#", "geocoding", "" ]
Most of the compilers [already support C++0x](https://stackoverflow.com/questions/657511/c-compiler-that-supports-c0x-features). Have you started using C++0x or are you waiting for the definition of x? I have to do some refactoring of my code; should I start using the new features?
C++0x is not a completed standard yet. It's likely that there will be many revisions before an internationally accepted standard is released.

So it all depends on what you are writing code for. If it's for a work assignment I would stick with regular C++, wait for the standard to be set, and give the programming community the time it takes to adjust. Don't refactor code you really need to implement; it might give you a lot of trouble.

I however think C++0x is great to play around with, and also it can't hurt to be familiar with the syntax when 0x is globally accepted.
I've started using `nullptr`, using `#define nullptr 0` for those compilers (i.e. most) that don't support it yet.
Have You Started Using C++0x?
[ "", "c++", "c++11", "" ]
Is there a possibility to check if two python functions are interchangeable? For instance, if I have ``` def foo(a, b): pass def bar(x, y): pass def baz(x,y,z): pass ``` I would like a function `is_compatible(a,b)` that returns True when passed foo and bar, but False when passed bar and baz, so I can check if they're interchangeable before actually calling either of them.
Take a look at [`inspect.getargspec()`](http://docs.python.org/library/inspect.html#classes-and-functions):

> ### `inspect.getargspec(func)`
>
> Get the names and default values of a function's arguments. A tuple of four things is returned: *(args, varargs, varkw, defaults)*. args is a list of the argument names (it may contain nested lists). varargs and varkw are the names of the \* and \*\* arguments or None. defaults is a tuple of default argument values or None if there are no default arguments; if this tuple has n elements, they correspond to the last n elements listed in args.
>
> *Changed in version 2.6: Returns a named tuple ArgSpec(args, varargs, keywords, defaults).*
What would you be basing the compatibility on? The number of arguments? Python has variable length argument lists, so you never know if two functions might be compatible in that sense. Data types? Python uses duck typing, so until you use an isinstance test or similar inside the function, there is no constraint on data types that a compatibility test could be based on. So in short: No. You should rather write good docstrings, such that any user of your API knows what the function he is giving you has to do, and then you should trust that the function you get behaves correctly. Any "compatibility" check would either rule out possibly valid functions or give you a false sense of "everything is exactly as it should be." The pythonic way of exposing an API is: Write good documentation, such that people know what they need to know, and trust that they do the right thing. In critical positions you can still use `try: except:`, but anybody who is misusing your API because they just didn't care to read the doc shouldn't be given a false sense of security. And someone who did read your doc and wants to use it in a totally acceptable way should not be denied the possibility to use it on the grounds of the way they declared a function.
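The two answers above can be combined into a rough sketch. This is hypothetical code (only the `is_compatible` name comes from the question), assuming "interchangeable" merely means "same argument-spec shape" — which, as the answer above argues, is about all a check can cheaply establish in Python. It uses `inspect.getfullargspec`, the modern spelling of the `getargspec` function from the accepted answer:

```python
import inspect

def is_compatible(f, g):
    # Hypothetical sketch: treat two callables as interchangeable when
    # they take the same number of positional arguments and agree on
    # whether they accept *args / **kwargs. Argument NAMES are ignored,
    # since the caller passes by position here.
    fa = inspect.getfullargspec(f)
    ga = inspect.getfullargspec(g)
    return (len(fa.args) == len(ga.args)
            and (fa.varargs is None) == (ga.varargs is None)
            and (fa.varkw is None) == (ga.varkw is None))

def foo(a, b): pass
def bar(x, y): pass
def baz(x, y, z): pass

print(is_compatible(foo, bar))  # True: both take two positional args
print(is_compatible(bar, baz))  # False: two args vs three
```

Note that, per the rejected answer's caveat, this says nothing about what the functions do with duck-typed arguments — it only compares signatures.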
Checking compatibility of two python functions (or methods)
[ "", "python", "reflection", "" ]
I'm not particularly experienced with python, so may be doing something silly below. I have the following program:

```
import os
import re
import linecache

LINENUMBER = 2

angles_file = open("d:/UserData/Robin Wilson/AlteredData/ncaveo/16-June/scan1_high/000/angles.txt")
lines = angles_file.readlines()

for line in lines:
    splitted_line = line.split(";")
    DN = float(linecache.getline(splitted_line[0], LINENUMBER))
    Zenith = splitted_line[2]

    output_file = open("d:/UserData/Robin Wilson/AlteredData/ncaveo/16-June/scan1_high/000/DNandZenith.txt", "a")
    output_file.write("0\t" + str(DN) + "\t" + Zenith + "\n")
    #print >> output_file, str(DN) + "\t" + Zenith
    #print DN, Zenith
    output_file.close()
```

When I look at the output to the file I get the following:

```
0   105.5   0.0

0   104.125 18.0

0   104.0   36.0

0   104.625 54.0

0   104.25  72.0

0   104.0   90.0

0   104.75  108.0

0   104.125 126.0

0   104.875 144.0

0   104.375 162.0

0   104.125 180.0
```

Which is the right numbers, it just has blank lines between each line. I've tried and tried to remove them, but I can't seem to.

What am I doing wrong?

Robin
For a GENERAL solution, remove the trailing newline from your INPUT:

```
splitted_line = line.rstrip("\n").split(";")
```

Removing the extraneous newline from your output "works" in this case but it's a kludge.

ALSO: (1) it's not a good idea to open your output file in the middle of a loop; do it once, otherwise you are just wasting resources. With a long enough loop, you will run out of file handles and crash. (2) It's not a good idea to hard-wire file names like that, especially hidden in the middle of your script; try to make your scripts reusable.
Change this:

```
output_file.write("0\t" + str(DN) + "\t" + Zenith + "\n")
```

to this:

```
output_file.write("0\t" + str(DN) + "\t" + Zenith)
```

The `Zenith` string already contains the trailing `\n` from the original file when you read it in.
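A tiny sketch of what both answers are pointing at (the sample line here is made up): `readlines()` keeps each line's trailing newline, so after `split(";")` the last field still carries it into the output unless you strip first:

```python
line = "somefile.txt;105.5;0.0\n"      # as returned by readlines()

parts = line.split(";")
print(repr(parts[2]))                  # '0.0\n' -- the newline rides along

clean = line.rstrip("\n").split(";")   # strip before splitting
print(repr(clean[2]))                  # '0.0'
```

Writing `parts[2]` followed by your own `"\n"` is what produces the doubled line breaks.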
Why are there extra blank lines in my python program output?
[ "", "python", "" ]
Currently, I have a String object that contains XML elements:

```
String carsInGarage = garage.getCars();
```

I now want to pass this String as an input/stream source (or some kind of source), but am unsure which one to choose and how to implement it.

Most of the solutions I have looked at import the package javax.xml.transform and accept a XML file (stylerXML.xml) and output to a HTML file (outputFile.html) (see code below).

```
try {
    TransformerFactory tFactory = TransformerFactory.newInstance();
    Transformer transformer = tFactory.newTransformer(new StreamSource("styler.xsl"));
    transformer.transform(new StreamSource("stylerXML.xml"),
                          new StreamResult(new FileOutputStream("outputFile.html")));
} catch (Exception e) {
    e.printStackTrace();
}
```

I want to accept a String object and output (using XSL) to an element within an existing JSP page. I just don't know how to implement this, even having looked at the code above.

Can someone please advise/assist? I have searched high and low for a solution, but I just can't pull anything out.
Use a [`StringReader`](http://java.sun.com/javase/6/docs/api/java/io/StringReader.html) and a [`StringWriter`](http://java.sun.com/javase/6/docs/api/java/io/StringWriter.html):

```
try {
    StringReader reader = new StringReader("<xml>blabla</xml>");
    StringWriter writer = new StringWriter();
    TransformerFactory tFactory = TransformerFactory.newInstance();
    Transformer transformer = tFactory.newTransformer(
            new javax.xml.transform.stream.StreamSource("styler.xsl"));
    transformer.transform(
            new javax.xml.transform.stream.StreamSource(reader),
            new javax.xml.transform.stream.StreamResult(writer));
    String result = writer.toString();
} catch (Exception e) {
    e.printStackTrace();
}
```
If at some point you want the source to contain more than just a single string, or you don't want to generate the XML wrapper element manually, create a DOM document that contains your source and pass it to the transformer using a [DOMSource](http://java.sun.com/javase/6/docs/api/javax/xml/transform/dom/DOMSource.html).
How to 'transform' a String object (containing XML) to an element on an existing JSP page
[ "", "java", "xml", "xslt", "" ]
I receive as input a list of strings and need to return a list with these same strings but in randomized order. I must allow for duplicates - the same string may appear once or more in the input and must appear the same number of times in the output.

I see several "brute force" ways of doing that (using loops, god forbid), one of which I'm currently using. However, knowing Python there's probably a cool one-liner to get the job done, right?
```
>>> import random
>>> x = [1, 2, 3, 4, 3, 4]
>>> random.shuffle(x)
>>> x
[4, 4, 3, 1, 2, 3]
>>> random.shuffle(x)
>>> x
[3, 4, 2, 1, 3, 4]
```
Looks like this is the simplest way, if not the most truly random ([this question](https://stackoverflow.com/questions/3062741/maximal-length-of-list-to-shuffle-with-python-random-shuffle) more fully explains the limitations): <http://docs.python.org/library/random.html#random.shuffle>
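As a quick illustration of the duplicate requirement from the question (the sample data is made up): `random.shuffle` permutes the list in place, so every string — duplicates included — keeps its original count:

```python
import random

strings = ["apple", "pear", "apple", "fig"]
shuffled = list(strings)     # copy first if the input order must survive
random.shuffle(shuffled)     # in-place random permutation

# Same multiset: each string still appears the same number of times.
print(sorted(shuffled) == sorted(strings))  # True
print(shuffled.count("apple"))              # 2
```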
Best way to randomize a list of strings in Python
[ "", "python", "string", "random", "" ]
I have an application written in Java that uses a jar file (it uses more than one, but that's not the point). The fact is, the Jar file I'm using contains files that I absolutely MUST extract to the filesystem. So far I'm using Class.getResourceAsStream and FileOutputStream, but this method is somewhat slow. So apart from trying to reduce the need to extract files from JARs, are there any optimizations (such as more appropriate functions) for these tasks?

Note that my application is Java 6-based and I would like to reduce external dependencies to a minimum.

EDIT: For future reference, my **OLD** (inefficient) code was:

```
int c;
while ((c = is.read()) != -1) {
    fos.write(c);
}
```

For the new, much faster code, see the accepted reply.
Do you have control over the jar file? If you create it uncompressed, that *may* make it faster. Obviously it'll make the jar file bigger though...

Another thing to check - how are you extracting the file? For instance, if you're doing it byte by byte it will be painfully slow. Use something like this:

```
public static void copyStream(InputStream input, OutputStream output)
        throws IOException {
    // Reads up to 8K at a time. Try varying this.
    byte[] buffer = new byte[8192];
    int read;
    while ((read = input.read(buffer)) != -1) {
        output.write(buffer, 0, read);
    }
}
```

If you're already doing this, could you give us more information? How slow is "somewhat slow"? How does it compare with, say, using the jar utility to extract the jar file?
Err. I'm not sure what you really want to do - but have you thought about using winzip? Obviously if you need to extract the files dynamically at run time this won't work - but I'm not sure why you'd need to do this - how often does this jar file change? Surely you can extract them once and then distribute them with the application?
Efficient extraction of file from jar
[ "", "java", "jar", "copy", "extract", "" ]
I would like to connect to a SOAP web service using C#. Before I jump right in, trying anything Google throws at me I would like to ask what is the cleanest and best way to do it where most of the work is done for me. I would like a high level method, where I give it a WSDL and it will basically handle a lot of things for me. I of course don't want to have to do any of the XML, except to see what it is doing to debug. What is the nicest fanciest library/method that Microsoft has for doing this? Notice that I'm more concerned with optimizing developer time over performance, though there probably isn't much difference in this situation. Here is a better explanation of what I'm trying to accomplish. [a better OOP approach to multiple SOAP web services](https://stackoverflow.com/questions/56996588/using-a-more-oop-method-instead-of-switch-statements-for-soap-web-service-calls)
What Assaf said. You have the choice between using a Web reference (wrapper around WSDL.exe) and using a Service reference (wrapper around svcutil.exe). Service references are .NET 3.0+ and part of the WCF way of doing things. Personally I still use Web references most of the time, but YMMV. Edit: [screenshot of the two menu options](http://andrewtokeley.net/archive/2008/07/10/the-difference-between-ldquoadd-web-referencerdquo-and-ldquoadd-service-referencerdquo.aspx) :)
That's what Web References in C# projects do. When you add a web reference it parses the WSDL and creates strongly typed classes for the API.
Which C# SOAP Client Library is easiest to use?
[ "", "c#", ".net", "web-services", "client-side", "" ]
Is there anyway I can zip dynamically generated content, such as a freshly rendered html template, into a zip file using zipfile? There seem to be some examples around for zipping static content, but none for zipping dynamic ones. Or, is it not possible at all? One more question: Is it possible to create a zip file with a bunch of sub-folders inside it? Thanks.
You can add whatever you want to a zip file using [`ZipFile.writestr()`](http://docs.python.org/library/zipfile#zipfile.ZipFile.writestr):

```
my_data = "<html><body><p>Hello, world!</p></body></html>"
z.writestr("hello.html", my_data)
```

You can also use sub-folders using `/` (or `os.sep`) as a separator:

```
z.writestr("site/foo/hello/index.html", my_data)
```
The working code (for app engine):

```
output = StringIO.StringIO()
z = zipfile.ZipFile(output, 'w')
my_data = "<html><body><p>Hello, world!</p></body></html>"
z.writestr("hello.html", my_data)
z.close()

self.response.headers["Content-Type"] = "multipart/x-zip"
self.response.headers['Content-Disposition'] = "attachment; filename=test.zip"
self.response.out.write(output.getvalue())
```

Thanks again to Schnouki and Ryan.
Zipping dynamic files in App Engine (Python)
[ "", "python", "google-app-engine", "zip", "" ]
I need a file system walker that I could instruct to ignore traversing directories that I want to leave untouched, including all subdirectories below that branch. The os.walk and os.path.walk just don't do it.
So I made this home-rolled walker function:

```
import os
from os.path import join, isdir, islink, isfile

def mywalk(top, topdown=True, onerror=None, ignore_list=('.ignore',)):
    try:
        # Note that listdir and error are globals in this module due
        # to earlier import-*.
        names = os.listdir(top)
    except Exception, err:
        if onerror is not None:
            onerror(err)
        return
    if len([1 for x in names if x in ignore_list]):
        return
    dirs, nondirs = [], []
    for name in names:
        if isdir(join(top, name)):
            dirs.append(name)
        else:
            nondirs.append(name)
    if topdown:
        yield top, dirs, nondirs
    for name in dirs:
        path = join(top, name)
        if not islink(path):
            for x in mywalk(path, topdown, onerror, ignore_list):
                yield x
    if not topdown:
        yield top, dirs, nondirs
```
Actually, `os.walk` may do exactly what you want. Say I have a list (perhaps a set) of directories to ignore in `ignore`. Then this should work:

```
def my_walk(top_dir, ignore):
    for dirpath, dirnames, filenames in os.walk(top_dir):
        dirnames[:] = [
            dn for dn in dirnames
            if os.path.join(dirpath, dn) not in ignore
        ]
        yield dirpath, dirnames, filenames
```
A Python walker that can ignore directories
[ "", "python", "directory-walk", "" ]
I am saving user-submitted HTML (in a database). I must prevent JavaScript injection attacks. The most pernicious I have seen is JavaScript in a `style="expression(...)"`.

In addition to this, a fair amount of valid user content will include special characters and XML constructs, so I'd like to avoid a white-list approach if possible. (Listing every allowable HTML element and attribute.)

Examples of JavaScript attack strings:

1.
   ```
   "Hello, I have a &lt;script&gt;alert("bad!")&lt;/script&gt; problem with the &lt;dog&gt; element..."
   ```
2.
   ```
   "Hi, this &lt;b style="width:expression(alert('bad!'))"&gt;dog&lt;/b&gt; is black."
   ```

Is there a way to prevent such JavaScript, and leave the rest intact? The only solution I have so far is to use a regular expression to remove certain patterns. It solves case 1, but not case 2.

The environment is essentially the Microsoft stack:

* SQL Server 2005
* C# 3.5 (ASP.NET)
* JavaScript and jQuery.

I would like the chokepoint to be the ASP.NET layer - anyone can craft a bad HTTP request.

### Edit

Thanks for the links, everyone. Assuming that I can define my list (the content will include many mathematical and programming constructs, so a whitelist is going to be very annoying), I still have a question:

What kind of parser will allow me to just remove the "bad" parts? The bad part could be an entire element, but then what about those scripts that reside in the attributes? I can't remove `< a hrefs >` willy-nilly.
You think that's it? [Check this out](https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet). Whatever approach you take, you definitely need to use a whitelist. It's the only way to even come close to being safe about what you're allowing on your site. **EDIT**: I'm not familiar with .NET, unfortunately, but you can check out stackoverflow's own battle with XSS (<https://blog.stackoverflow.com/2008/06/safe-html-and-xss/>) and the code that was written to parse HTML posted on this site: [Archive.org link](https://web.archive.org/web/20120606051450/http://refactormycode.com/codes/333-sanitize-html) - obviously you might need to change this because your whitelist is bigger, but that should get you started.
Whitelist for elements and attributes is the *only* acceptable choice in my opinion. Anything not on your whitelist should be stripped out or encoded (change <>&" to entities). Also be sure to check the values within the attributes you allow. Anything less and you are opening yourself up to problems - known exploits or those that will be discovered in the future.
How to prevent Javascript injection attacks within user-generated HTML
[ "", "javascript", "html", "regex", "parsing", "code-injection", "" ]
I want to write a generic class that should be casted to itself with a different generic argument.

```
class Base {}
class Inherited : Base {}
class MyGeneric<T> {}

// WCF Service interface
void Foo(MyGeneric<Base> b);

// somewhere else
MyGeneric<Inherited> inherited;
Foo(inherited);
```

I know that this could be done in C# 4.0, but this doesn't help for now.

* I could write a specialized class for each `MyGeneric<T>` constellation, and there write an implicit type converter or implement a certain interface. But I want to avoid this.
* I could have an interface with no generic, but the whole sense of the generic in this case is to get compile time type safety on the method Foo. So this is not an option.
* Because Foo is an Operation Contract, it could not be generic itself.

Any ideas how this problem could be solved in C# 3.0?
I wrote this casting method ``` public MyGeneric<TTarget> Cast<TTarget, TSource>() where TTarget : class where TSource : TTarget, T { return new MyGeneric<TTarget>(); } ``` which could be called like this ``` MyGeneric<Inherited> inherited; Foo(inherited.Cast<Base, Inherited>()); ``` The ugly thing is that one has to provide the class that it already is. There is probably some improvement possible. By the way, I couldn't manage to make it an extension method, to avoid the second generic argument.
You want: ``` void Foo<T>(MyGeneric<T> b) where T : Base {} ```
C# Generic classes and casting with WCF
[ "", "c#", "wcf", "generics", "casting", "" ]
I've noticed that the Delegate class has a Target property, that (presumably) returns the instance the delegate method will execute on. I want to do something like this: ``` void PossiblyExecuteDelegate(Action<int> method) { if (method.Target == null) { // delegate instance target is null // do something } else { method(10); // do something else } } ``` When calling it, I want to do something like: ``` class A { void Method(int a) {} static void Main(string[] args) { A a = null; Action<int> action = a.Method; PossiblyExecuteDelegate(action); } } ``` But I get an ArgumentException (Delegate to an instance method cannot have a null 'this') when I try to construct the delegate. Is what I want to do possible, and how can I do it?
Ahah! [found it!](http://msdn.microsoft.com/en-us/library/s3860fy3.aspx) You can create an open instance delegate using a [CreateDelegate](http://msdn.microsoft.com/en-us/library/system.delegate.createdelegate.aspx) overload, using a delegate with the implicit 'this' first argument explicitly specified: ``` delegate void OpenInstanceDelegate(A instance, int a); class A { public void Method(int a) {} static void Main(string[] args) { A a = null; MethodInfo method = typeof(A).GetMethod("Method"); OpenInstanceDelegate action = (OpenInstanceDelegate)Delegate.CreateDelegate(typeof(OpenInstanceDelegate), a, method); PossiblyExecuteDelegate(action); } } ```
In order to do this you would have to pass a `static` method to `PossiblyExecuteDelegate()`. This will give you a `null` `Target`. ``` class A { void Method(int a) {} static void Method2(int a) {} static void Main(string[] args) { PossiblyExecuteDelegate(A.Method2); A a = new A(); PossiblyExecuteDelegate(a.Method); } } ``` **Edit:** It *is* possible to pass a delegate to an instance method with no target via reflection, but not using standard compiled code.
How to create a delegate to an instance method with a null target?
[ "", "c#", ".net", "delegates", "" ]
How do you use SOAP using C#? Is there a simple, yet effective tutorial for this?
SOAP is encapsulated with the Web Service model in ASP.Net. Here's a good intro article on MSDN: <http://msdn.microsoft.com/en-us/library/ms972326.aspx> You can find some good tutorials on MSDN and at the ASP.Net web site. Here's a great introductory video: <http://www.asp.net/learn/videos/video-7025.aspx>
I like the tutorial at <http://users.skynet.be/pascalbotte/rcx-ws-doc/dotnet.htm>
SOAP using C#
[ "", "c#", ".net", "web-services", "soap", "" ]
Here is an [example](http://tldp.org/LDP/lpg/node11.html) to illustrate what I mean: ``` #include <stdio.h> #include <stdlib.h> /* exit() */ #include <string.h> /* strlen() */ #include <unistd.h> #include <sys/types.h> int main(void) { int fd[2], nbytes; pid_t childpid; char string[] = "Hello, world!\n"; char readbuffer[80]; pipe(fd); if((childpid = fork()) == -1) { perror("fork"); exit(1); } if(childpid == 0) { /* Child process closes up input side of pipe */ close(fd[0]); /* Send "string" through the output side of pipe */ write(fd[1], string, (strlen(string)+1)); exit(0); } else { /* Parent process closes up output side of pipe */ close(fd[1]); /* Read in a string from the pipe */ nbytes = read(fd[0], readbuffer, sizeof(readbuffer)); printf("Received string: %s", readbuffer); } return(0); } ``` However, what if one of my processes needs to continuously write to the pipe while the other process needs to read? The example above seems to work only for one write and one read.
Your pipe is a unidirectional stream - with a file descriptor for each end. It is not necessary to close() either end of the pipe to allow data to pass along it. **if your pipe spans processes** (i.e. is created before a fork() and then the parent and child use it to communicate) you can have one write end and one read end. Then it is good practice to close the unwanted ends of the pipe. This will * make sure that when the writing end closes the pipe it is seen by the read end. As an example, say the child is the write side, and it dies. If the parent's write side has not been closed, then the parent will not get "eof" (zero length read()) from the pipe - because the pipe still has an open write end. * make it clear which process is doing the writing and which process is doing the reading on the pipe. **if your pipe spans threads** (within the same process), then do not close the unwanted ends of the pipe. This is because the file descriptor is held by the process, and closing it for one thread will close it for all threads, and therefore the pipe will become unusable. There is nothing stopping you having one process writing continuously to the pipe and the other process reading. If this is a problem you are having then give us more details to help you out.
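To see the "eof" (zero-length read) behaviour concretely, here is a tiny single-process sketch using Python's os module, which wraps the same pipe()/read()/write() calls; the message contents are made up for the demo:

```python
import os

r, w = os.pipe()          # r = read end, w = write end (unidirectional)

# Several writes through the same pipe -- nothing special needed.
for msg in (b"first\n", b"second\n", b"third\n"):
    os.write(w, msg)

os.close(w)               # THIS is what lets the reader see end-of-file

chunks = []
while True:
    data = os.read(r, 4096)
    if not data:          # zero-length read == EOF, only possible once w is closed
        break
    chunks.append(data)
os.close(r)

print(b"".join(chunks).decode(), end="")   # the three messages come through intact
```

Leave `w` open instead and the final os.read would block forever waiting for more data, which is exactly the stuck-reader symptom described above.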
After performing the fork, all fds are duplicated. Each process has both ends of the pipe open. If you only want to use one end you should close the other (if your process writes, close the read end). Besides the obvious fact that if you do not close the descriptors the OS will keep extra entries in the open file table, if you do not close the write end of the pipe, the reader will never receive EOF since there is still a way of entering data into the pipe. AFAIK (and IIRC) there is no problem in not closing the read fd in the other process --that is, besides the file being open for no reason. It is also recommended (as good practice, not that it affects too much) that you close all descriptors before exiting the application (that is, closing the other end of the pipe after the read/write operation is completed in each process)
Why do I need to close fds when reading and writing to the pipe?
[ "", "c++", "c", "multithreading", "pipe", "unix", "" ]
Is it possible to request some data in a Flash movie from PHP at run-time? Maybe my real-world implementation can clarify some things: I use a Flash movie to store a Local Shared Object (because for some reason I need LSO's instead of regular PHP cookies). Now, when I load up a PHP file I want to somehow retrieve the data from the LSO at runtime, assign it to some variables, and use the variables through the rest of the script. Doing some research makes me believe it's not possible in the way I intend. So any other suggestions, methods or solutions are highly welcome.
The best way for Flash and PHP to intercommunicate is XML (don't forget to use UTF-8!). In PHP: ``` $xml = new DOMDocument('1.0', 'UTF-8'); $doc = $xml->appendChild($xml->createElement('my-root-element')); ... header('Content-Type: text/xml; charset=utf-8'); echo $xml->saveXML(); ``` In AS3: ``` var myLoader:URLLoader = new URLLoader(); var req:URLRequest = new URLRequest('http://host.com/my_xml.php'); myLoader.addEventListener(Event.COMPLETE, onMyXMLLoad); myLoader.load(req); function onMyXMLLoad(evt:Event) { trace(evt.target.data); var xml:XML = new XML(evt.target.data); ... } ``` You could also read about [ExternalInterface](http://help.adobe.com/en_US/AS3LCR/Flash_10.0/flash/external/ExternalInterface.html)... Yes, sometimes it helps... You may want to generate your JS dynamically to communicate with the Flash movie.
There are two ways that we do it here (and we do this a fair bit). One is to write your PHP so that it outputs well-formed XML which can be consumed by your Flash application. The other is using [AMFPHP](http://www.amfphp.org/) which is a little more complicated to configure, but it can do quite a lot in terms of translating PHP objects/arrays into native Flash objects. Not sure if this fits your particular situation exactly, but in terms of a general solution for retrieving data from PHP inside a Flash application, it gets the job done.
Request data in a Flash movie from PHP, at run-time
[ "", "php", "flash", "actionscript", "" ]
I know this must be a trivial question, but I've tried many different ways, and searched quite a bit for a solution, but how do I create and reference subfunctions in the current module? For example, I am writing a program to parse through a text file, and for each of the 300 different names in it, I want to assign each to a category. There are 300 of these, and I have a list of these structured to create a dict, so of the form lookup[key]=value (bonus question: any more efficient or sensible way to do this than a massive dict?). I would like to keep all of this in the same module, but with the functions (dict initialisation, etc.) at the end of the file, so I don't have to scroll down 300 lines to see the code, i.e. laid out as in the example below. When I run it as below, I get the error 'initLookups is not defined'. When I structure it so that it is initialisation, then function definition, then function use, no problem. I'm sure there must be an obvious way to initialise the functions and associated dict without keeping the code inline, but I have tried quite a few so far without success. I can put it in an external module and import this, but would prefer not to for simplicity. What should I be doing in terms of module structure? Is there any better way than using a dict to store this lookup table (it is 300 unique text keys mapping onto approx 10 categories)? Thanks, Brendan --- ``` import ..... (initialisation code, etc.) initLookups() # **Should create the dict - How should this be referenced?** print getlookup(KEY) # **How should this be referenced?** def initLookups(): global lookup lookup={} lookup["A"]="AA" lookup["B"]="BB" (etc etc etc....) def getlookup(value): if value in lookup: return lookup[value] else: return "" ```
A function needs to be defined before it can be called. If you want to have the code that needs to be executed at the top of the file, just define a `main` function and call it from the bottom: ``` import sys def main(args): pass # All your other function definitions here if __name__ == '__main__': exit(main(sys.argv[1:])) ``` This way, whatever you reference in `main` will have been parsed and is hence known already. The reason for testing [`__name__`](http://docs.python.org/3.0/library/__main__.html) is that in this way the `main` method will only be run when the script is executed directly, not when it is imported by another file. --- Side note: a `dict` with 300 keys is by no means massive, but you may want to either move the code that fills the `dict` to a separate module, or (perhaps more fancy) store the key/value pairs in a format like [JSON](http://docs.python.org/3.0/library/json.html) and load it when the program starts.
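To flesh out the side note about JSON, here is a sketch in modern Python 3 syntax; `io.StringIO` stands in for a real `open("lookups.json")`, and the three pairs are placeholders for the 300 real ones:

```python
import io
import json

# Stand-in for a real data file on disk ("lookups.json" is hypothetical).
fake_file = io.StringIO('{"A": "AA", "B": "BB", "C": "AA"}')

lookup = json.load(fake_file)     # one line replaces 300 assignments

def getlookup(name):
    # dict.get already does the "default to empty string" dance for you.
    return lookup.get(name, "")

print(getlookup("A"))   # AA
print(getlookup("Z"))   # (empty string: key missing)
```

This also sidesteps the ordering problem entirely: the data lives in a file, not in a function you have to remember to call first.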
Here's a more pythonic way to do this. There aren't a lot of choices, BTW. A function *must* be defined before it can be *used*. Period. However, you don't have to strictly order all functions for the compiler's benefit. You merely have to put your execution of the functions last. ``` import ... # (initialisation code, etc.) def initLookups(): # Definitions must come before actual use lookup={} lookup["A"]="AA" lookup["B"]="BB" (etc etc etc....) return lookup # Any functions initLookups uses can be defined here, # as long as they're findable in the same module. if __name__ == "__main__": # Use comes last lookup= initLookups() print lookup.get("Key","") ``` Note that you don't need the `getlookup` function; it's a built-in feature of a dict, named [get](http://docs.python.org/library/stdtypes.html#dict.get). Also, "initialisation code" is suspicious. An import should not "do" anything. It should define functions and classes, but not actually provide any executable code. In the long run, executable code that is processed by an import can become a maintenance nightmare. The most notable exception is a module-level Singleton object that gets created by default. Even then, be sure that the mystery object which makes a module work is clearly identified in the documentation.
Scope, using functions in current module
[ "", "python", "function", "scope", "module", "structure", "" ]
If I create a Customer and Controller, then associate my Controller with a customer it saves fine. If I then remove my controller it doesn't remove the relationship between them. This causes an EntityNotFoundException when I load the Customer. ``` javax.persistence.EntityNotFoundException: Unable to find Controller with id 22 ``` I'd like to know how to map this so that when a Controller is deleted the relationship is also deleted. Database Tables * customer * controller * customer\_controllers - mapping table. The Controller's id is not getting removed from the customer\_controllers mapping table. ``` @Entity public class Customer implements Serializable{ private Integer id; private Set<Controller> controllers; @Id @GeneratedValue public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } @ManyToMany(cascade={CascadeType.ALL}) public Set<Controller> getControllers() { return controllers; } public void setControllers(Set<Controller> controllers) { this.controllers = controllers; } } @Entity public class Controller implements Serializable{ private Integer id; private String name; private String abbreviation; @Id @GeneratedValue public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getAbbreviation() { return abbreviation; } public void setAbbreviation(String abbreviation) { this.abbreviation = abbreviation; } } ```
If you have a ManyToMany then you should map Controller to Customer with a ``` @ManyToMany(mappedBy="controllers") ``` or the other way around, depending on which side is the owning side. As you have it now the relation is not fully defined and it will fail on events like "Cascade".
Have you checked the javadoc for [@ManyToMany](http://java.sun.com/javaee/5/docs/api/javax/persistence/ManyToMany.html)? It includes example mappings like the one above.
JPA Many to Many cascade problem
[ "", "java", "hibernate", "jpa", "" ]
I'm using LINQ to SQL for MySQL (using DbLinq) in an ASP.NET MVC website. I have a weird caching problem. Consider the following methods in my Repository class: ``` public IEnumerable<Message> GetInbox(int userId) { using(MyDataContext repo = new MyDataContext(new MySqlConnection("[Connectionstring]"))) { return repo.Messages.Where(m => m.MessageTo == userId); } } public IEnumerable<Message> GetOutbox(int userId) { using (MyDataContext repo = new MyDataContext(new MySqlConnection("[Connectionstring]"))) { return repo.Messages.Where(m => m.MessageFrom == userId); } } ``` 'MyDataContext' is the DbLinq-generated mapping to my database, which inherits from DataContext. I'm not reusing the datacontext here (the above code looks a bit silly but I wanted to make absolutely sure that it was not some datacontext / mysqlconnection re-using issue). What happens is, whichever of the two methods I call, with whatever userId, the results stay the same. Period. Even though I can see that `repo.Messages` has more than 10 results, with varying `MessageFrom` and `MessageTo` values, I only get the first-queried results back. So if I call `GetInbox(4374)` it gives me message A and message B. Calling `GetInbox(526)` afterwards still gives me messages A and B, even though there *are* messages C and D that *do* have a userId of 526. I have to restart the application to see any changes. What's going on here? I'm sure I'm doing something so stupid that I'm going to be ashamed when someone points it out to me. If I'm not doing something very stupid, then I find this issue very strange. I read about not reusing a DataContext, but I am not reusing one. Why this caching issue? Below is my controller code, but I doubt it matters: ``` [Authorize] public ActionResult Inbox(int userId) { Mailbox inbox = new Mailbox(userId, this.messageRepository.GetInbox(userId)); return PartialView("Inbox", inbox); } ``` Though there are similar questions on SO, I haven't found an answer to this exact question. Many thanks!
**UPDATE**: changing the code to: `return repo.Messages.ToList().Where(m => m.MessageFrom == userId);` fixes it, it works fine then. Seems like some cache problem. However, I of course don't want to fix it that way. Changing the code so that the datacontext is not disposed after the query does *not* fix the problem.
Well, it seemed that it was a problem with DbLinq. I used source code from 3 weeks ago and there was an apparent bug in QueryCache (though it has *always* been in there). There's a complete thread that covers this [here](http://groups.google.com/group/dblinq/browse_thread/thread/b6a5bd2020c9d48e/8a4d3bc60a6839d0?lnk=gst&q=cache+problem#8a4d3bc60a6839d0). I updated the DbLinq source. QueryCache is now disabled (which does imply a performance hit) and at least now it works. I'll have to see if the performance is acceptable. Must confess that I'm a bit baffled though, as what I'm trying to do is a common linq2sql pattern. Thanks all.
Caching in LINQ-to-SQL is associated with the `DataContext`, and is mainly limited to identity caching - in most cases it will re-run a query even if you've done it before. There are a few examples, like `.Single(x=>x.Id == id)` (which has special handling). Since you are clearly getting a new data-context each time, I don't think that is the culprit. However, I'm also slightly surprised that the code works... are you sure that is representative? LINQ's `Where` method is deferred - meaning it isn't executed until you iterate the data (for example with `foreach`). But by that time you have already disposed the data-context! Have you snipped something from the example? Also - by giving it a `SqlConnection` (that you don't then `Dispose()`), you may be impacting the cleanup - it may be preferable to just give it (the data-context) the connection string.
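For what it's worth, the deferred-execution trap is easy to reproduce outside LINQ too; here is the same shape of bug sketched in Python, where a lazy generator escapes the `with` block that owns its file (the file name and contents are made up for the demo):

```python
# Throwaway data file for the demo.
with open("messages.txt", "w") as f:
    f.write("A\nB\n")

def get_inbox():
    with open("messages.txt") as f:            # plays the role of the using(...) block
        return (line.strip() for line in f)    # deferred, like .Where(...) -- runs nothing yet

rows = get_inbox()      # the file is already closed by the time this returns
try:
    print(list(rows))   # iteration happens only now...
except ValueError as e:
    print("boom:", e)   # ...and fails: I/O operation on closed file
```

Materialising eagerly inside the block -- `return [line.strip() for line in f]`, the moral equivalent of calling `.ToList()` before the context is disposed -- makes it work.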
DbLinq - Cache problem
[ "", "c#", "caching", "datacontext", "dblinq", "" ]
I'm looking for POD low-dimension vectors (2, 3 and 4D, let's say) with all the necessary arithmetic niceties (operator +, - and so on). POD low-dimension matrices would be great as well. boost::ublas vectors are not POD; there's a pointer indirection somewhere (vectors are resizeable). Can I find that anywhere in boost? Using boost::array along with the Boost.Operators lib is an option, but maybe I'm missing something easier elsewhere? Apart from boost, does anybody know any good library around? PS: POD <=> plain old data **EDIT:** Otherwise, here are some other links I gathered from another thread: * <http://www.cgal.org/> * <http://geometrylibrary.geodan.nl> * <http://www.cmldev.net> * <http://www.openexr.com/index.html> * <http://project-mathlibs.web.cern.ch/project-mathlibs/sw/html/SMatrix.html>
There is a nice vector library for 3D graphics in the Prophecy SDK: check out <http://www.twilight3d.com/downloads.html>
The [blitz++](http://www.oonumerics.org/blitz/examples/Blitz++/transform.html) TinyMatrix / TinyVector is what you want, I think. You also have [libeigen](http://eigen.tuxfamily.org).
POD low dimensional vector in boost
[ "", "c++", "boost", "vector", "math", "" ]