It's not open source, but is free (as in beer): [Sql Effects Accord (aka Clarity) Community Edition][1] [1]: http://www.sqleffects.com/Articles/Product/sqlAccordInfo/aboutSqlAccordCommunityEd.html
Where to put your code - Database vs. Application?
|database|
I have been developing web/desktop applications for about 6 years now. During the course of my career, I have come across applications that were heavily written in the database using stored procedures, whereas a lot of applications had only a few basic stored procedures (to read, insert, edit and delete entity records) for each entity. I have seen people argue that if you have paid for an enterprise database you should use its features extensively, whereas a lot of "object oriented architects" have told me it's an absolute crime to put anything more than necessary in the database, and that you should be able to drive the application using the methods on those classes. Where do you think is the balance? Thanks, Krunal
Where to put your code - Database vs. Application?
|database|
In most firewall setups, the TCP connection will be torn down by the firewall if it is idle, to conserve resources. The idle timeout is probably not something you can control. Some will tear connections down when they are idle and a resource limit is being hit. Most corporate environments won't allow any machines to make an outbound TCP connection anyway. Also, using this mechanism means you are going to have scaling problems. I think a more reliable solution is to queue up information and have your clients poll for it regularly. Use caching if possible, so that a subsequent client poll will get the cached data from the customer's proxy cache, if they are using one. If you have to push data in a timely manner, in sub-second land (i.e. financial services), then consider some messaging infrastructure such as an NServiceBus distributor on the client side, but that will require a customer install... So have you tried using Teredo? Having read about it, it would appear it is probably too complicated for a user to set up.
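The queue-and-poll approach can be sketched roughly like this (Java, all names hypothetical - this is not any particular messaging library): the server appends updates as they happen, and each client poll drains whatever has accumulated, so no inbound connection to the client is ever required.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical server-side buffer for one client: updates pile up here
// until the client's next poll collects them.
class UpdateQueue<T> {
    private final Queue<T> pending = new ArrayDeque<>();

    // Called by the server whenever new information arrives.
    public synchronized void publish(T update) {
        pending.add(update);
    }

    // Called on each client poll: drain and return everything queued
    // since the previous poll.
    public synchronized List<T> poll() {
        List<T> batch = new ArrayList<>(pending);
        pending.clear();
        return batch;
    }
}
```

A real deployment would keep one such queue per client (or track per-client read positions) and serve the poll over plain HTTP, so intermediate proxy caches can absorb some of the load.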
[PHP Benchmarks][1] [1]: http://www.php.lt/benchmark/phpbench.php
No, due to page splits. You're essentially defining rows that can be anywhere from 1 KB to n MB, so your database will have a lot of empty space in its pages, which is bad for performance.
You should use a value type whenever:

 - The use of a class isn't necessary (no need for inheritance)
 - You want to make sure there's no need to initialize the type
 - You have a reason to want the type to be allocated in stack space
 - You want the type to be a complete, independent entity on assignment, instead of a "link" to the instance as it is with reference types
Plain text files in a filesystem

 - Very simple to create and edit
 - Easy for users to manipulate with simple tools (e.g. text editors, grep etc)
 - Efficient storage of binary documents

----------

XML or JSON files on disk

 - As above, but with a bit more ability to validate the structure

----------

Subversion (or a similar disk-based version control system)

 - Very good support for versioning of data

----------

[Berkeley DB][1] (basically, a disk-based hashtable)

 - Very simple conceptually (just un-typed key/value)
 - Quite fast
 - No administration overhead
 - Supports transactions, I believe

----------

[CouchDB][2]

 - Document focus
 - Simple storage of semi-structured / document-based data

----------

Native language collections (stored in memory or serialised on disk)

 - Very tight language integration

----------

Custom (hand-written) storage engine

 - Potentially very high performance in the required use cases

----------

I can't claim to know much about them, but you might also like to look into [object database systems][3].

[1]: http://www.oracle.com/technology/products/berkeley-db/index.html
[2]: http://incubator.apache.org/couchdb/
[3]: http://en.wikipedia.org/wiki/Object_database
Plain text files in a filesystem

 - Very simple to create and edit
 - Easy for users to manipulate with simple tools (e.g. text editors, grep etc)
 - Efficient storage of binary documents

----------

Spreadsheet / CSV file

 - Very easy model for business users to understand

----------

XML or JSON files on disk

 - As above, but with a bit more ability to validate the structure

----------

Subversion (or a similar disk-based version control system)

 - Very good support for versioning of data

----------

[Berkeley DB][1] (basically, a disk-based hashtable)

 - Very simple conceptually (just un-typed key/value)
 - Quite fast
 - No administration overhead
 - Supports transactions, I believe

----------

[Amazon's Simple DB][2]

 - Much like Berkeley DB, I believe, but hosted

----------

[Google's App Engine Datastore][3]

 - Hosted and highly scalable
 - Per-document key-value storage (i.e. a flexible data model)

----------

[CouchDB][4]

 - Document focus
 - Simple storage of semi-structured / document-based data

----------

Native language collections (stored in memory or serialised on disk)

 - Very tight language integration

----------

Custom (hand-written) storage engine

 - Potentially very high performance in the required use cases

----------

I can't claim to know much about them, but you might also like to look into [object database systems][5].

[1]: http://www.oracle.com/technology/products/berkeley-db/index.html
[2]: http://www.amazon.com/SimpleDB-AWS-Service-Pricing/b/ref=sc_fe_l_2?ie=UTF8&node=342335011&no=3435361&me=A36L942TSJ2AJA
[3]: http://code.google.com/appengine/docs/datastore/
[4]: http://incubator.apache.org/couchdb/
[5]: http://en.wikipedia.org/wiki/Object_database
Unless your site is expecting an above-normal amount of Linux-based traffic, you're probably going to adversely affect more people if you "sacrifice the user’s ability to adjust his or her reading environment" as opposed to just not caring about the Linux experience. Having said that, if you **do** want a nice Linux experience, you should address the reasons behind why your design breaks under small variations in font spacing, given that these issues are difficult to control under current CSS implementations.
I'd second @Zizzencs' suggestion that you might want something that's not totally (R)DB-centric. Somehow, I believe that using plain nvarchar fields to store the tags, with some proper caching/indexing, might yield faster results. But that's just me. I've implemented tagging systems using 3 tables to represent a many-to-many relationship before (Item, Tags, ItemTags), but since it sounds like you will be dealing with tags in a lot of places, I can tell you that having 3 tables to be manipulated/queried simultaneously all the time will definitely make your code more complex. You might want to consider whether the added complexity is worth it.
Impersonate will change the thread context. If you want to change the identity and launch a separate process, you will have to use the runas command. *The .NET Developer's Guide to Windows Security* by Keith Brown is an excellent read which describes all the security scenarios. An [online version][1] is also available.

[1]: http://alt.pluralsight.com/wiki/default.aspx/Keith.GuideBook.HomePage
@dagorym: Aw, man. I had been hanging onto this as a good "I'm bored, what can I ponder" puzzle. I came up with my in-place transposition code, but got here to find yours pretty much identical to mine... ah, well. Here it is in Ruby.

    require 'pp'

    n = 10
    a = []
    n.times { a << (1..n).to_a }
    pp a

    0.upto(n/2-1) do |i|
      i.upto(n-i-2) do |j|
        tmp              = a[i][j]
        a[i][j]          = a[n-j-1][i]
        a[n-j-1][i]      = a[n-i-1][n-j-1]
        a[n-i-1][n-j-1]  = a[j][n-i-1]
        a[j][n-i-1]      = tmp
      end
    end
    pp a
Programming to an interface means respecting the "contract" created by that interface. So if your `IPoweredByMotor` interface has a `start()` method, future classes that implement the interface, be they `MotorizedWheelChair`, `Automobile`, or `SmoothieMaker`, add flexibility to your system: one piece of code can start the motor of many different types of things, because all it needs to know is that they respond to `start()`. It doesn't matter *how* they start, just that they *must start*.
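A quick Java sketch of that idea (the interface name comes from the answer above; the method bodies are invented for illustration): the `startAll` helper depends only on the contract, so new motorized types can be added without touching it.

```java
// Sketch only: IPoweredByMotor is the interface named above; the
// concrete classes and return strings are invented for illustration.
interface IPoweredByMotor {
    String start();
}

class MotorizedWheelChair implements IPoweredByMotor {
    public String start() { return "wheelchair rolling"; }
}

class SmoothieMaker implements IPoweredByMotor {
    public String start() { return "blades spinning"; }
}

class MotorStarter {
    // Knows nothing about the concrete types, only the contract.
    static String startAll(IPoweredByMotor... machines) {
        StringBuilder sb = new StringBuilder();
        for (IPoweredByMotor m : machines) {
            if (sb.length() > 0) sb.append(", ");
            sb.append(m.start());
        }
        return sb.toString();
    }
}
```

Adding an `Automobile` later means writing one new class; `MotorStarter` stays unchanged.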
A couple of things we've seen - more about differences between interactive sessions and services, but they might help...

One thing we've seen that may not seem relevant at first is the difference between what is on the user path vs the system path. Another thing relates to temporary files - the service we had was creating lots in the windows\temp directory - we tracked this down when it had created something like 65,000 of these files and thus hit the limit of what a directory can hold...

Regards, Chris
`var` is absolutely needed for Linq:

    var anonEnumeration =
        from post in AllPosts()
        where post.Date > oldDate
        let author = GetAuthor( post.AuthorId )
        select new { PostName = post.Name, post.Date, AuthorName = author.Name };

Now look at *anonEnumeration* in intellisense and it will appear as something like `IEnumerable<'a>`.

The C# compiler is pretty clever - anon types generated separately will have the same generated type if their properties match.

Outside of that, as long as you have intellisense it makes good sense to use `var` anywhere the context is clear.

In your comparison between `IEnumerable<int>` and `IEnumerable<double>` you don't need to worry - if you pass the wrong type your code won't compile anyway. There's no concern about type-safety, as `var` is not dynamic. It's just compiler magic, and any type-unsafe calls you make will get caught.
I think you are referring to the [Assembly Manifest][1]:

> Every assembly, whether static or dynamic, contains a collection of data that describes how the elements in the assembly relate to each other. The assembly manifest contains this assembly metadata. An assembly manifest contains all the metadata needed to specify the assembly's version requirements and security identity, and all metadata needed to define the scope of the assembly and resolve references to resources and classes.

One of the most important features of .Net assemblies is that they are self-describing components, and this is provided by the manifest. So removing them will somehow defeat its purpose.

[1]: http://msdn.microsoft.com/en-us/library/1w45z383.aspx
Don't worry about VSS corrupting you, worry about VSS corrupting your data. It does not have a good track record in that department. Back up frequently if you do not switch to a different version control system. Backups should be happening daily even with other SCMs, but it's doubly important with VSS.
Looks like SubVersion is the winner here. I'd do yourself a favor and use [VisualSVN Server][1]. It's free and will save you a bunch of installation headaches. [1]: http://www.visualsvn.com/server/
You can use some hacks to **authenticate** only. Try:

    Try
        Dim directoryEntry As New DirectoryEntry("LDAP://DomainController:389/dc=domain,dc=suffix", "username", "password")
        Dim temp As Object = directoryEntry.NativeObject
        Return True
    Catch
        Return False
    End Try

If the user is not valid, the directory entry's NativeObject cannot be accessed and an exception is thrown. While this isn't the most efficient way (exceptions are evil, blah blah blah), it's quick and painless. This also has the super-cool advantage of working with all LDAP servers, not just AD.
After spending some more time on this I agree with @grapefrukt. Setting wmode to transparent leads to all sorts of strange issues and in my opinion it should be avoided. Instead I've resorted to passing the background color as a parameter. I use the following ActionScript to draw the background.

    var parameters:Object = LoaderInfo(this.root.loaderInfo).parameters;
    opaqueBackground = parameters["background-color"];
After spending some more time on this I agree with @grapefrukt. Setting wmode to transparent leads to all sorts of strange issues and in my opinion it should be avoided. Instead I've resorted to passing the background color as a parameter. I use the following ActionScript to draw the background.

    var parameters:Object = LoaderInfo(this.root.loaderInfo).parameters;
    opaqueBackground = parameters["background-color"];

EDIT: Thanks to @grapefrukt for reminding me of the bgcolor param (which makes the ActionScript above totally unnecessary)
If you only have 2 people and you mostly work independently, git is going to give you a lot more flexibility and power, and will be far and away the fastest to work with. It is, however, a pain in the backside to use. Using VSS you're obviously programming for Windows - if you're doing Win32 API stuff in C then git will be a learning curve but will be quite interesting. If the depths of your knowledge only extend to ASP and Visual Basic, just use Subversion. Walk before you can run.

** I'm not trying to say that if you only know VB you're dumb or anything like that, but git can be very finicky and picky to use (if you've used the WinAPI in C you know all about picky and finicky), and you may want a more gradual introduction to SCM than git provides
|deployment|web-application|administration|maintenance|
I am aware of [Capistrano](http://www.capify.org/), but it is a bit too heavyweight for me. Personally I set up two [Mercurial](http://www.selenic.com/mercurial/wiki/) repositories: one on the production server and another on my local dev machine. Regularly, when a new feature is ready, I push changes from the repository on my local machine to the repository on the server, then update on the server. This is a pretty simple and quick way to keep files in sync on several computers, but it does not help to update the database. What is your solution to the problem?
|deployment|web-applications|administration|maintenance|
C#: What Else Do You Use Besides DataSet
|c#|.net|sql|dataset|
I've found myself increasingly unsatisfied with the DataSet/DataTable/DataRow paradigm in .Net, mostly because it's often a couple of steps more complicated than what I really want to do. In cases where I'm binding to controls, DataSets are fine. But in other cases, there seems to be a fair amount of mental overhead. I've played a bit with SqlDataReader, and that seems to be good for simple jaunts through a select, but I feel like there may be some other models lurking in .Net that are useful to learn more about. I feel like all of the help I find on this just uses DataSet by default. Maybe that and DataReader really are the best options. I'm not looking for a best/worst breakdown, just curious what my options are and what experiences you've had with them. Thanks! -Eric Sipple
|c#|.net|sql|dataset|
Handling XSD Dataset ConstraintExceptions
|xsd|dataset|constraintexception|
Does anyone have any tips for dealing with ConstraintExceptions thrown by XSD datasets? This is the exception with the cryptic message:

    System.Data.ConstraintException : Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
|xsd|dataset|constraintexception|
Arbitrary Naming Convention Question About Business Objects
|c#|oop|naming|object|
Ok, do you do

    Business.Name    or  Business.BusinessName
    SubCategory.ID   or  SubCategory.SubCategoryID

What about in your database? Why? I'm torn with both. Would love there to be a "right answer".
|c#|oop|naming|object|
Creating a custom button in Java
|java|button|
Is there a way to make a JButton with your own button graphic, not just with an image inside the button? If not, is there another way to create a custom button in Java?
|java|button|
    protected override void LoadViewState(object savedState)
    {
        // Put your code here before base is called
        base.LoadViewState(savedState);
    }

Is that what you meant? Or did you mean in what order the controls are processed? I think the answer to that is that it's quasi-random.
    protected override void LoadViewState(object savedState)
    {
        // Put your code here before base is called
        base.LoadViewState(savedState);
    }

Is that what you meant? Or did you mean in what order the controls are processed? I think the answer to that is that it's quasi-random.

Also, why can't you load the objects you bind to before Page_Load? It's ok to call your business layer at any time during the page lifecycle if you have to, with the exception of pre-render and anything after.
I've just been shown this:

    # dig +trace ns stackoverflow.com

    ; <<>> DiG 9.2.4 <<>> +trace ns stackoverflow.com
    ;; global options:  printcmd
    .                   269431  IN  NS  B.ROOT-SERVERS.NET.
    .                   269431  IN  NS  C.ROOT-SERVERS.NET.
    .                   269431  IN  NS  D.ROOT-SERVERS.NET.
    .                   269431  IN  NS  E.ROOT-SERVERS.NET.
    .                   269431  IN  NS  F.ROOT-SERVERS.NET.
    .                   269431  IN  NS  G.ROOT-SERVERS.NET.
    .                   269431  IN  NS  H.ROOT-SERVERS.NET.
    .                   269431  IN  NS  I.ROOT-SERVERS.NET.
    .                   269431  IN  NS  J.ROOT-SERVERS.NET.
    .                   269431  IN  NS  K.ROOT-SERVERS.NET.
    .                   269431  IN  NS  L.ROOT-SERVERS.NET.
    .                   269431  IN  NS  M.ROOT-SERVERS.NET.
    .                   269431  IN  NS  A.ROOT-SERVERS.NET.
    ;; Received 504 bytes from 83.138.151.80#53(83.138.151.80) in 3 ms

    com.                172800  IN  NS  A.GTLD-SERVERS.NET.
    com.                172800  IN  NS  B.GTLD-SERVERS.NET.
    com.                172800  IN  NS  C.GTLD-SERVERS.NET.
    com.                172800  IN  NS  D.GTLD-SERVERS.NET.
    com.                172800  IN  NS  E.GTLD-SERVERS.NET.
    com.                172800  IN  NS  F.GTLD-SERVERS.NET.
    com.                172800  IN  NS  G.GTLD-SERVERS.NET.
    com.                172800  IN  NS  H.GTLD-SERVERS.NET.
    com.                172800  IN  NS  I.GTLD-SERVERS.NET.
    com.                172800  IN  NS  J.GTLD-SERVERS.NET.
    com.                172800  IN  NS  K.GTLD-SERVERS.NET.
    com.                172800  IN  NS  L.GTLD-SERVERS.NET.
    com.                172800  IN  NS  M.GTLD-SERVERS.NET.
    ;; Received 495 bytes from 192.228.79.201#53(B.ROOT-SERVERS.NET) in 145 ms

    stackoverflow.com.  172800  IN  NS  ns51.domaincontrol.com.
    stackoverflow.com.  172800  IN  NS  ns52.domaincontrol.com.
    ;; Received 119 bytes from 192.5.6.30#53(A.GTLD-SERVERS.NET) in 156 ms

Does this tell me that the stackoverflow.com nameservers have been stored in the .com name servers? Or is it just that they happen to be there now?
Possibly a bit heavy for your immediate needs, but have you seen the [Polar Bear][1]\*? Well worth a browse in the library to see if it's what you require.

\*[Information Architecture for the World Wide Web, Second Edition][1]

[1]: http://oreilly.com/catalog/9780596000356/
Plain text files in a filesystem

 - Very simple to create and edit
 - Easy for users to manipulate with simple tools (e.g. text editors, grep etc)
 - Efficient storage of binary documents

----------

XML or JSON files on disk

 - As above, but with a bit more ability to validate the structure

----------

Spreadsheet / CSV file

 - Very easy model for business users to understand

----------

Subversion (or a similar disk-based version control system)

 - Very good support for versioning of data

----------

[Berkeley DB][1] (basically, a disk-based hashtable)

 - Very simple conceptually (just un-typed key/value)
 - Quite fast
 - No administration overhead
 - Supports transactions, I believe

----------

[Amazon's Simple DB][2]

 - Much like Berkeley DB, I believe, but hosted

----------

[Google's App Engine Datastore][3]

 - Hosted and highly scalable
 - Per-document key-value storage (i.e. a flexible data model)

----------

[CouchDB][4]

 - Document focus
 - Simple storage of semi-structured / document-based data

----------

Native language collections (stored in memory or serialised on disk)

 - Very tight language integration

----------

Custom (hand-written) storage engine

 - Potentially very high performance in the required use cases

----------

I can't claim to know much about them, but you might also like to look into [object database systems][5].

[1]: http://www.oracle.com/technology/products/berkeley-db/index.html
[2]: http://www.amazon.com/SimpleDB-AWS-Service-Pricing/b/ref=sc_fe_l_2?ie=UTF8&node=342335011&no=3435361&me=A36L942TSJ2AJA
[3]: http://code.google.com/appengine/docs/datastore/
[4]: http://incubator.apache.org/couchdb/
[5]: http://en.wikipedia.org/wiki/Object_database
Exactly what most other people use them for: fast and light data/value access, as well as being ideal for grouping related properties (where it makes sense, of course) into an object. For example:

* Display/data value differences, such as string pairs of image names and a path for a control (or whatever). You want the path for the work under the hood, but the name to be visible to the user.
* Obvious grouping of values for the metrics of objects. We all know Size etc, but there may be plenty of situations where the base "metric" types are not enough for you.
* "Typing" of enum values - being more than a fixed enum, but less than a full-blown class (this has already been mentioned, I just want to advocate it).

It's important to remember the [differences between value and reference types](http://www.gafvert.info/CSharpDataTypes.htm). Used properly, they can really improve the efficiency of your code as well as make the object model more robust.
Value types, specifically structs and enums, have proper uses in object-oriented programming. Enums are, as aku said, first-class citizens in .NET, which can be used for all sorts of things, from colors to DialogBox options to various types of flags. Structs, as far as my experience goes, are great as data transfer objects: logicless containers of data, especially when they comprise mostly primitive types. And of course, primitive types are all value types, which resolve to System.Object (unlike in Java, where primitive types aren't related to structs and need some sort of wrapper).
WPF is part of the .NET 3.0 stack. It's Microsoft's next-generation graphical user interface system. All the information you need can be found on [Wikipedia][1] and [MSDN's WPF site][2]. To get started programming, check out the [essential downloads on Windows Client][3].

[1]: http://en.wikipedia.org/wiki/Windows_Presentation_Foundation
[2]: http://msdn.microsoft.com/en-us/netframework/aa663326.aspx
[3]: http://windowsclient.net/edownloads.aspx
In your comparison between `IEnumerable<int>` and `IEnumerable<double>` you don't need to worry - if you pass the wrong type your code won't compile anyway. There's no concern about type-safety, as `var` is **not** dynamic. It's just compiler magic and any type-unsafe calls you make will get caught.

`var` is absolutely needed for Linq:

    var anonEnumeration =
        from post in AllPosts()
        where post.Date > oldDate
        let author = GetAuthor( post.AuthorId )
        select new { PostName = post.Name, post.Date, AuthorName = author.Name };

Now look at *anonEnumeration* in intellisense and it will appear something like `IEnumerable<'a>`

    foreach( var item in anonEnumeration )
    {
        //VS knows the type
        item.PostName; //you'll get intellisense here

        //you still have type safety
        item.ItemId;   //will throw a compiler exception
    }

The C# compiler is pretty clever - anon types generated separately will have the same generated type if their properties match.

Outside of that, as long as you have intellisense it makes good sense to use `var` anywhere the context is clear.

    //less typing, this is good
    var myList = new List<UnreasonablyLongClassName>();

    //also good - I can't be mistaken on type
    var anotherList = GetAllOfSomeItem();

    //but not here - probably best to leave single value types declared
    var decimalNum = 123.456m;
In your comparison between `IEnumerable<int>` and `IEnumerable<double>` you don't need to worry - if you pass the wrong type your code won't compile anyway. There's no concern about type-safety, as `var` is **not** dynamic. It's just compiler magic and any type-unsafe calls you make will get caught.

`var` is absolutely needed for Linq:

    var anonEnumeration =
        from post in AllPosts()
        where post.Date > oldDate
        let author = GetAuthor( post.AuthorId )
        select new { PostName = post.Name, post.Date, AuthorName = author.Name };

Now look at *anonEnumeration* in intellisense and it will appear something like `IEnumerable<'a>`

    foreach( var item in anonEnumeration )
    {
        //VS knows the type
        item.PostName; //you'll get intellisense here

        //you still have type safety
        item.ItemId;   //will throw a compiler exception
    }

The C# compiler is pretty clever - anon types generated separately will have the same generated type if their properties match.

Outside of that, as long as you have intellisense it makes good sense to use `var` anywhere the context is clear.

    //less typing, this is good
    var myList = new List<UnreasonablyLongClassName>();

    //also good - I can't be mistaken on type
    var anotherList = GetAllOfSomeItem();

    //but not here - probably best to leave single value types declared
    var decimalNum = 123.456m;
Pros & cons between LINQ and traditional collection based approaches
|c#|linq|
Being relatively new to the .NET game, I was wondering: has anyone had any experience of the pros/cons between the use of LINQ and what could be considered more traditional methods working with lists/collections?

For a specific example of a project I'm working on: a list of unique id/name pairs is being retrieved from a remote web-service.

- the list will change infrequently (once per day),
- will be read-only from the point of view of the application where it is being used,
- will be stored at the application level for all requests to access

Given those points, I plan to store the returned values at the application level in a singleton class. My initial approach was to iterate through the list returned from the remote service and store it in a NameValueCollection in a singleton class, with methods to retrieve from the collection based on an id:

    sugarsoap soapService = new sugarsoap();
    branch_summary[] branchList = soapService.getBranches();

    foreach (branch_summary aBranch in branchList)
    {
        branchNameList.Add(aBranch.id, aBranch.name);
    }

The alternative using LINQ is to simply add a method that works on the list directly once it has been retrieved:

    public string branchName(string branchId)
    {
        //branchList populated in the constructor
        branch_summary bs = (from b in branchList
                             where b.id == branchId
                             select b).FirstOrDefault();
        return bs.name;
    }

Is either better than the other - is there a third way? I'm open to all answers, both in terms of solutions that offer elegance and those which benefit performance.
A service may have multiple endpoints within a single host, but every endpoint must have a unique combination of address/binding/contract. For an IIS-hosted service (i.e., an .SVC file), just set the address of the endpoint to a **relative** URI and make sure that your Visual Studio or wsdl.exe generated client specifies the endpoint's name in its constructor. See also [this MSDN article](http://msdn.microsoft.com/en-us/library/ms751515.aspx).
A service may have multiple endpoints within a single host, but every endpoint must have a unique combination of address/binding/contract. For an IIS-hosted service (i.e., an .SVC file), just set the address of the endpoint to a **relative** URI and make sure that your Visual Studio or wsdl.exe generated client specifies the endpoint's name in its constructor. See also [this MSDN article](http://msdn.microsoft.com/en-us/library/ms751515.aspx).
Possible to perform cross-database queries with postgres?
|sql|postgresql|
I'm going to guess that the answer is no, based on the below error message (and [this Google result][1]), but is there any way to perform a cross-database query using postgres?

    databaseA=# select * from databaseB.public.someTableName;
    ERROR:  cross-database references are not implemented: "databaseB.public.someTableName"

I'm working with some data that is partitioned across two databases, although data is really shared between the two (userid columns in one database come from the `users` table in the other database). I have no idea why these are two separate databases instead of schemas, but c'est la vie...

[1]: http://archives.postgresql.org/pgsql-sql/2004-08/msg00076.php
Can a service have multiple endpoints?
|wcf|web-services|
We have a service that has some settings that are supported only over net.tcp. What's the best way to add another endpoint? Do I need to create an entirely new host?
|wcf|web-services|.net|.net3.0|
|.net|web-services|wcf|
You linked to the [check digits][1] project, and using the "encode" function seems like a good solution. It says:

> encode may throw an exception if 'bad' data (e.g. non-numeric) is passed to it, while verify only returns true or false. The idea here is that encode normally gets its data from 'trusted' internal sources (a database key for instance), so it should be pretty unusual, in fact exceptional, that bad data is being passed in.

So it sounds like you could pass the encode function a database key (5 digits, for instance) and get a number out that would meet your requirements.

[1]: http://code.google.com/p/checkdigits/
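For illustration, here is roughly what such an encode function can look like using the well-known Luhn scheme (a generic sketch, not the checkdigits project's actual implementation):

```java
// Generic check-digit sketch using the Luhn algorithm - not the
// checkdigits project's actual code.
class CheckDigit {
    // Compute the Luhn check digit for a string of decimal digits.
    static int luhnDigit(String digits) {
        int sum = 0;
        boolean doubleIt = true; // double every other digit, starting from the right
        for (int i = digits.length() - 1; i >= 0; i--) {
            int d = digits.charAt(i) - '0';
            if (doubleIt) {
                d *= 2;
                if (d > 9) d -= 9; // same as summing the two digits of the product
            }
            sum += d;
            doubleIt = !doubleIt;
        }
        return (10 - (sum % 10)) % 10;
    }

    // Append the check digit to a trusted key, e.g. a database id.
    static String encode(String key) {
        return key + luhnDigit(key);
    }
}
```

So encoding a 5-digit database key yields a 6-digit number whose last digit lets you detect later whether the value was mistyped or corrupted.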
I guess it depends on the technology you select. For web projects in general I've always employed (Web-)MVC for the past two years or so, the advantage being a clear separation of frontend and backend in order to create a manageable code base. But that's as vague as a recommendation could be. :) Aside from using a framework to build your site from scratch, you might also want to look into using what's already out there (in terms of open source). I'd recommend any kind of "community software" that's semi-established, well documented, not too often in the news because of security issues, and offers an API to extend its base. That could indeed jump-start you on your facebook-esque site. ;)
Actually, prior to .NET 3.5 SP1 there was a performance issue with the intensive use of value types, as mentioned in [Vance Morrison's blog](http://blogs.msdn.com/vancem/archive/2008/05/12/what-s-coming-in-net-runtime-performance-in-version-v3-5-sp1.aspx). As far as I can see, the vast majority of the time you should be using classes, and the JITter should guarantee a good level of performance.

Structs have 'value type semantics', so they are passed by value rather than by reference. We can see this difference in behaviour in the following example:

    using System;

    namespace StructClassTest
    {
        struct A
        {
            public string Foobar { get; set; }
        }

        class B
        {
            public string Foobar { get; set; }
        }

        class Program
        {
            static void Main()
            {
                A a = new A();
                a.Foobar = "hi";
                B b = new B();
                b.Foobar = "hi";
                StructTest(a);
                ClassTest(b);
                Console.WriteLine("a.Foobar={0}, b.Foobar={1}", a.Foobar, b.Foobar);
                Console.ReadKey(true);
            }

            static void StructTest(A a)
            {
                a.Foobar = "hello";
            }

            static void ClassTest(B b)
            {
                b.Foobar = "hello";
            }
        }
    }

The struct is passed by value, so StructTest() gets its own copy of the A struct, and setting a.Foobar there changes only that copy. ClassTest() receives a reference to b, and thus the .Foobar property of b itself is changed. We therefore obtain the following output:

    a.Foobar=hi, b.Foobar=hello

So if you desire value-type semantics, that would be another reason to declare something as a struct. Interestingly, the DateTime type in .NET is a value type, so the .NET architects decided it was appropriate to assign it as such - it'd be interesting to know why they did that. :-)
ActiveRecord
|php|database|
Does anyone have any recommendations for implementations of ActiveRecord in PHP? I've been using [CBL ActiveRecord][1], but I was wondering if there were any viable alternatives. [1]: http://31tools.com/cbl_activerecord/
I have used [OpenCV][1] to load video files and process them. It's also handy for many types of video processing including those useful for computer vision. [1]: http://sourceforge.net/projects/opencvlibrary/
For the most part, it's good to emulate the behaviour of the framework. Many elementary data types such as `int`s are value types. If you have types that have similar properties, use value types. For example, when writing a `Complex` data type or a `BigInteger`, value types are the logical solution. The same goes for the other cases where the framework used value types: `DateTime`, `Point`, etc. When in doubt, use a reference type instead.
Try using the [OpenCV](http://opencvlibrary.sourceforge.net/) library. It definitely has the capabilities you require. [This guide](http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/index.html) has a section about accessing frames from a video file.
This isn't supposed to be reliable, but try ::GetDesktopWindow() as the parent (it returns an HWND). Be warned - if your app crashes, it will bring down the desktop with it. But I'd be interested to see if it works.
|c#|.net|linq|
[ASP.NET] How to access controls in a ListView's LayoutTemplate?
|asp.net|listview|
How do I set a property of a user control in a ListView's LayoutTemplate from the code-behind?

    <asp:ListView ...>
        <LayoutTemplate>
            <myprefix:MyControl id="myControl" ... />
        </LayoutTemplate>
        ...
    </asp:ListView>

I want to do this:

    myControl.SomeProperty = somevalue;
I love emacs and use it every day. That said, I don't think the cost of learning it will be recouped by productivity gains down the road. If you're programming Java, you need a good IDE. Emacs goes a fair way towards being one, but let's face it, IDEA et al beat it hands down. (emacs probably inspired a lot of those IDEs, but that's another story).
Some ways to debug:

* Is there any additional information in the Windows event log?
* I believe you should be able to listen for some kind of global exception event, like Application_Exception, in Windows services. I can't remember the exact name, but you can at least dump a stack trace from there.
* You should be able to start debugging the project in service mode.

Some code snippets/stack traces/information will definitely help.
It's one way to promote loose [coupling][1].

> With low coupling, a change in one module will not require a change in the implementation of another module.

A good use of this concept is the [Abstract Factory pattern][2]. In the Wikipedia example, the GUIFactory interface produces the Button interface. The concrete factory may be WinFactory (producing WinButton) or OSXFactory (producing OSXButton). Imagine if you were writing a GUI application and had to go around changing every instance of the `OldButton` class to `WinButton`. Then next year, you need to add an `OSXButton` version.

[1]: http://en.wikipedia.org/wiki/Coupling_%28computer_science%29
[2]: http://en.wikipedia.org/wiki/Abstract_factory_pattern
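A minimal sketch of that factory example, in Java rather than a real GUI toolkit (the names GUIFactory, WinFactory, OSXFactory, and Button follow the Wikipedia example; `paint()` just returns a string here for illustration):

```java
// Abstract products and factories: client code depends only on these.
interface Button {
    String paint();
}

interface GUIFactory {
    Button createButton();
}

// Concrete Windows family.
class WinButton implements Button {
    public String paint() { return "WinButton"; }
}

class WinFactory implements GUIFactory {
    public Button createButton() { return new WinButton(); }
}

// Concrete OS X family; added later without touching client code.
class OSXButton implements Button {
    public String paint() { return "OSXButton"; }
}

class OSXFactory implements GUIFactory {
    public Button createButton() { return new OSXButton(); }
}

public class FactoryDemo {
    public static void main(String[] args) {
        // Switching platforms means swapping this single constructor call,
        // not hunting down every place a button is created.
        GUIFactory factory = new WinFactory();
        Button button = factory.createButton();
        System.out.println(button.paint()); // prints "WinButton"
    }
}
```

Because the client only ever sees `GUIFactory` and `Button`, adding the `OSXButton` variant next year touches the factory selection, not every button instantiation.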
getchar(), or cgetc(), depending on the platform
Rsync can exclude files matching certain patterns. Even if you can't modify it to make it download files to a temporary directory, maybe it has a convention of naming files differently during download (for example: `foo.downloading` while downloading a file named `foo`), and you can use this property to exclude files which are still being downloaded from being copied.
@Matt Sheppard: Say you have a table of customers. Surely you don't want a customer to exist in the table more than once, or lots of confusion will happen throughout your sales and logistics departments (especially if the multiple rows about the customer contain different information).

So you have a customer identifier which uniquely identifies the customer, and you make sure that the identifier is known by the customer (in invoices), so that the customer and the customer service people have a common reference in case they need to communicate. To guard against duplicated customer records, you add a uniqueness constraint to the table, either through a primary key on the customer identifier or via a NOT NULL + UNIQUE constraint on the customer identifier column.

Next, for some reason (which I can't think of), you are asked to add a GUID column to the customer table and make that the primary key. If the customer identifier column is now left without a uniqueness guarantee, you are asking for future trouble throughout the organization, because the GUIDs will always be unique.

Some "architect" might tell you that "oh, but we handle the _real_ customer uniqueness constraint in our app tier!". Right. Fashion regarding general-purpose programming languages and (especially) middle-tier frameworks changes all the time, and will generally never outlive your database. And there is a very good chance that you will at some point need to access the database without going through the present application. == Trouble. (But fortunately, you and the "architect" are long gone, so you will not be there to clean up the mess.)

In other words: do maintain obvious constraints in the database (and in other tiers as well, if you have the time).

In other words: there may be good reasons to add GUID columns to tables, but please don't fall for the temptation to let that lower your ambitions for consistency within the _real_ (== non-GUID) information.
Ticket Tracking Software w/ Good Email Integration and Decent Navigation?
|bug-tracking|email|ticket-system|
I am looking for a simple system to manage inbound emails from a support mailbox for a group with about 3 support people. I've looked at [OTRS][1] which seems to have the features that we need. Unfortunately, so far the UI still looks like a confusing mess. Are there any good FOSS tools that would meet this need? I've heard murmurings that something called fooogzeeebugzo might have similar features, but it seems quite expensive for such simple needs. [1]: http://otrs.org/
@[Mark Biek](#40659) Your keyExists closely matches my standard Exists() function. To make the class more useful for COM-exposed collections and checking for numeric indexes, I'd recommend changing sKey and myCollection to not be typed. If the function is going to be used with a collection of objects, 'set' is required (on the line where val is set). (Another point is just one for a quick edit: "public function" should not be paired with "exit sub", but that's just one of those off-the-cuff code mistakes.)
@[Mark Biek](#40659) Your keyExists closely matches my standard Exists() function. To make the class more useful for COM-exposed collections and checking for numeric indexes, I'd recommend changing sKey and myCollection to not be typed. If the function is going to be used with a collection of objects, 'set' is required (on the line where val is set).

**EDIT**: It was bugging me that I've never noticed different requirements for an object-based and a value-based Exists() function. I very rarely use collections for non-objects, but this seemed such a perfect bottleneck for a bug that would be so hard to track down when I needed to check for existence.

Because error handling will fail if an error handler is already active, two functions are required to get a new error scope. Only the Exists() function need ever be called:

    Public Function Exists(col, index) As Boolean
        On Error GoTo ExistsTryNonObject
        Dim o As Object
        Set o = col(index)
        Exists = True
        Exit Function
    ExistsTryNonObject:
        Exists = ExistsNonObject(col, index)
    End Function

    Private Function ExistsNonObject(col, index) As Boolean
        On Error GoTo ExistsNonObjectErrorHandler
        Dim v As Variant
        v = col(index)
        ExistsNonObject = True
        Exit Function
    ExistsNonObjectErrorHandler:
        ExistsNonObject = False
    End Function

    Public Sub TestExists()
        Dim c As New Collection
        Dim b As New Class1
        c.Add "a string", "a"
        c.Add b, "b"
        Debug.Print "a", Exists(c, "a") ' True
        Debug.Print "b", Exists(c, "b") ' True
        Debug.Print "c", Exists(c, "c") ' False
        Debug.Print 1, Exists(c, 1)     ' True
        Debug.Print 2, Exists(c, 2)     ' True
        Debug.Print 3, Exists(c, 3)     ' False
    End Sub
Use or extend System.ComponentModel.BackgroundWorker: <http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx>

Personally, I find this to be the easiest way to perform asynchronous operations in client apps. (I've used this in WinForms, but not WPF. I'm assuming this will work in WPF as well.)

I usually extend BackgroundWorker, but you don't have to.

    public class ResizeFolderBackgroundWorker : BackgroundWorker
    {
        private readonly string sourceFolder;
        private readonly int resizeTo;

        public ResizeFolderBackgroundWorker(string sourceFolder, int resizeTo)
        {
            this.sourceFolder = sourceFolder;
            this.resizeTo = resizeTo;
            this.WorkerReportsProgress = true;
            this.DoWork += new DoWorkEventHandler(ResizeFolderBackgroundWorker_DoWork);
        }

        void ResizeFolderBackgroundWorker_DoWork(object sender, DoWorkEventArgs e)
        {
            DirectoryInfo dirInfo = new DirectoryInfo(sourceFolder);
            FileInfo[] files = dirInfo.GetFiles("*.jpg");

            foreach (FileInfo fileInfo in files)
            {
                /* iterate over each file and resize it */
            }
        }
    }

This is how you would use it in your form:

    //handle a button click to start the lengthy operation
    private void resizeImageButtonClick(object sender, EventArgs e)
    {
        string sourceFolder = getSourceFolderSomehow();
        resizer = new ResizeFolderBackgroundWorker(sourceFolder, 290);
        resizer.ProgressChanged += new ProgressChangedEventHandler(genericProgressChanged);
        resizer.RunWorkerCompleted += new RunWorkerCompletedEventHandler(genericRunWorkerCompleted);

        progressBar1.Value = 0;
        progressBar1.Visible = true;
        resizer.RunWorkerAsync();
    }

    void genericRunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
    {
        progressBar1.Visible = false;
        //signal to the user that the operation has completed
    }

    void genericProgressChanged(object sender, ProgressChangedEventArgs e)
    {
        progressBar1.Value = e.ProgressPercentage;
        //I just update a progress bar
    }
Given the XPath functions you're calling, which I don't remember having the luxury of in my MSXSL work, it looks like you're using an XPath 2-compatible processor. If that's the case, doesn't XPath 2 have a replace(string, pattern, replacement) function that takes a regex as its second parameter?

    <xsl:value-of select="replace(string(.), '&#10;(\s|&#10;)*', '&lt;/p&gt;&lt;p&gt;')" />

It might help to have some sample XML input and to know which processor you plan to use.
Given the XPath functions you're calling, which I don't remember having the luxury of in my MSXSL work, it looks like you're using an XPath 2-compatible processor. If that's the case, doesn't XPath 2 have a replace(string, pattern, replacement) function that takes a regex as its second parameter?

    <xsl:value-of select="replace(string(.), '&#10;(\s|&#10;)*', '&lt;/p&gt;&lt;p&gt;')" />

It might help to have some sample XML input and to know which processor you plan to use.

From your original example, it seems that the duplicate paragraphs all have a whitespace-only prefix. So something like this slight modification might trim the dupes:

    <xsl:when test="contains($text, $replace)">
        <xsl:variable name="prefix" select="substring-before($text, $replace)" />
        <xsl:choose>
            <xsl:when test="normalize-space($prefix) != ''">
                <xsl:value-of select="$prefix"/>
                <xsl:value-of select="$by" disable-output-escaping="yes"/>
            </xsl:when>
        </xsl:choose>
        <xsl:call-template name="replace-text">
            <xsl:with-param name="text" select="substring-after($text, $replace)"/>
            <xsl:with-param name="replace" select="$replace" />
            <xsl:with-param name="by" select="$by" />
        </xsl:call-template>
    </xsl:when>
If you remove the metadata, the framework won't be able to load your code, or figure out which other assemblies it references, or anything like that, so no, that's not a good idea. Obfuscators will make it a lot harder for an 'attacker' to decompile your code, but at the end of the day, if someone is motivated and smart, there's not a lot you can do to stop them. .NET will always compile down to MSIL, and MSIL is inherently easier to read than raw x86. That's just one of the tradeoffs you make for using .NET. Don't worry about it. The source code to Apache, Linux, and everything else is freely available on the net, but it's not providing much competitive advantage to Microsoft, is it? :-)
Learning FORTRAN in the Modern Era
|fortran|
I've recently come to maintain a large amount of scientific calculation-intensive FORTRAN code. I'm having difficulty getting a handle on all of the, say, nuances of a forty-year-old language, despite Google and two introductory-level books. The code is rife with "performance-enhancing improvements". Does anyone have any guides or practical advice for **de**-optimizing FORTRAN to CS 101 levels? Does anyone have knowledge of how FORTRAN code optimization operated? Are there any typical FORTRAN 'gotchas' that might not occur to a Java/C++/.NET-raised developer taking over a FORTRAN 77/90 codebase?