1. [Microsoft ODBC](http://msdn.microsoft.com/en-us/library/ms710252.aspx).
2. The MFC ODBC classes such as [CDatabase](http://msdn.microsoft.com/en-us/library/2dhc1abk.aspx).
3. [OleDB](http://msdn.microsoft.com/en-us/library/ms722784.aspx) (via COM).
4. And you can always go through the per-RDBMS native libraries (for example, the [SQL Server native library](http://msdn.microsoft.com/en-us/library/ms722784.aspx)).
5. [DAO](http://msdn.microsoft.com/en-us/library/aa163992.aspx) (don't).
6. Third-party ORM providers.

I would recommend going through ODBC or OleDB by default. Native libraries really restrict you, DAO is no fun, and there aren't a lot of great third-party ORMs for C++/Windows.
As I understand it, Hashtable is similar to the HashMap and has a similar interface. It is recommended that you use HashMap unless you require support for legacy applications or you need synchronisation, as Hashtable's methods are synchronised. So in your case, as you are not multi-threading, HashMap is your best bet.
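A quick sketch of the practical difference (the class name and helper method here are my own illustration, not from the question):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class MapExample {
    // HashMap permits a null key; Hashtable (the synchronized legacy class)
    // would throw a NullPointerException on the same put() call.
    static boolean nullKeyAllowed() {
        Map<String, Integer> m = new HashMap<>();
        m.put(null, 0);
        return m.containsKey(null);
    }

    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();     // unsynchronized, faster
        Map<String, Integer> legacy = new Hashtable<>(); // every method synchronized
        ages.put("Alice", 30);
        legacy.put("Alice", 30);
        System.out.println(ages.get("Alice") + " " + legacy.get("Alice"));
    }
}
```

For single-threaded code the two behave the same for ordinary lookups; HashMap just skips the locking overhead.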
Java is always pass-by-value. The difficult thing to understand is that Java passes object *references* by value. It goes like this:

```java
public void foo(Dog d) {
    // d.name is "Max" here
    d = new Dog("Fifi");
    // d.name is "Fifi" here
}

Dog aDog = new Dog("Max");
foo(aDog);
// aDog.name is still "Max"
```

In this example `aDog.name` will still be `"Max"`: `d` is not overwritten for the caller, because the object reference itself is passed by value. Likewise:

```java
public void foo(Dog d) {
    // d.name is "Max" here
    d.setName("Fifi"); // mutates the object both references point to
}

Dog aDog = new Dog("Max");
foo(aDog);
// aDog.name is now "Fifi"
```
Call Functions within Function - Flash Actionscript 2.0
|flash|actionscript|actionscript-2|
I'm trying to call a function after I load some XML into ActionScript, and I'm just wondering how I call that function. Example:

```actionscript
//Function Declarations
function parentFunction() {
    function callMe() {
        textField.text = "lawl";
    }
}
```

Now, **how do I call the "callMe()" function** in a different part of the code, like an onRelease function?

```actionscript
on(release) {
    callMe();                //doesn't work
    parentFunction().callMe(); //doesn't work
    parentFunction.callMe();   //doesn't work
}
```

ActionScript 2.0 is just so wonky! Any ideas?
There are some super-fast implementations that (from memory) represent cells of 8 or more adjacent squares as bit patterns and use that as an index into a large array of precalculated values to determine in a single machine instruction if a cell is live or dead. Check out here: http://dotat.at/prog/life/life.html Also XLife: http://linux.maruhn.com/sec/xlife.html
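The lookup-table idea can be sketched as follows. This is my own illustrative version, not code from the linked pages: encode each cell's 3x3 neighborhood as a 9-bit number and use it to index a 512-entry table of precomputed next states, so the life/death rule costs one array lookup per cell.

```python
def build_table():
    """Precompute the next state for every possible 3x3 neighborhood."""
    table = [0] * 512
    for bits in range(512):
        alive = (bits >> 4) & 1                         # center cell is bit 4
        neighbors = bin(bits & 0b111101111).count("1")  # the other 8 bits
        table[bits] = 1 if neighbors == 3 or (alive and neighbors == 2) else 0
    return table

TABLE = build_table()

def step(grid):
    """Advance a 2D list of 0/1 cells one generation (cells off-grid are dead)."""
    h, w = len(grid), len(grid[0])
    get = lambda y, x: grid[y][x] if 0 <= y < h and 0 <= x < w else 0
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            idx = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    idx = (idx << 1) | get(y + dy, x + dx)  # pack 9 cells
            new[y][x] = TABLE[idx]                          # one table lookup
    return new
```

The really fast implementations go further (updating whole words of cells per lookup and skipping quiet regions), but the precomputed table is the core trick.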
I am going to quote my answer from the other question, because the chapters I mention have some very interesting and fine-tuned solutions. Some of the implementation details are in C and/or assembly, yes, but for the most part the algorithms can work in any language: > Chapters [17][1] and [18][2] of > Michael Abrash's [Graphics > Programmer's Black Book][3] are one of > the most interesting reads I have ever > had. It is a lesson in thinking > outside the box. The whole book is > great really, but the final optimized > solutions to the Game of Life are > incredible bits of programming. > > [1]: http://www.byte.com/abrash/chapters/gpbb17.pdf [2]: http://www.byte.com/abrash/chapters/gpbb18.pdf [3]: http://www.byte.com/abrash/
Asp.Net MVC: How to determine if you're currently on a specific view
|c#|asp.net-mvc|
I need to determine if I'm on a particular view. My use case is that I'd like to decorate navigation elements with an "on" class for the current view. Is there a built-in way of doing this?
I don't think calling a generic method on a web service is possible. If you look at the service descriptions of two otherwise identical methods, one generic, one not:

```csharp
[WebMethod]
public string[] GetSearchList(string prefixText, int count) { }

[WebMethod]
public string[] GetSearchList2<T>(string prefixText, int count) { }
```

they are identical. It appears that neither SOAP 1.x nor HTTP POST allows this type of operation.
So I found the answer in my specific case: there was a div much further up in the DOM that had specific left/right margins set, which compressed it and everything in it. But the heart of the question is really: how can you easily debug this sort of issue? What would be perfect here, for example, would be something in Firebug that, when hovering over an element's size in the layout panel, displays a tooltip saying something like "width constrained by outer element X; height constrained by style Z on element Q" or "width contributed to by inner elements A, B and C". I wish I had the time to write something like this, although I suspect it would be difficult (if not impossible) to get that information out of Firefox's rendering engine.
This is not a reason why, but the C# specification section 8.7.2 states the following: > The governing type of a switch statement is established by the switch expression. If the type of the switch expression is sbyte, byte, short, ushort, int, uint, long, ulong, char, string, or an enum-type, then that is the governing type of the switch statement. Otherwise, exactly one user-defined implicit conversion (§6.4) must exist from the type of the switch expression to one of the following possible governing types: sbyte, byte, short, ushort, int, uint, long, ulong, char, string. If no such implicit conversion exists, or if more than one such implicit conversion exists, a compile-time error occurs. > The C# 3.0 specification is located at: http://download.microsoft.com/download/3/8/8/388e7205-bc10-4226-b2a8-75351c669b09/CSharp%20Language%20Specification.doc
Not really, no. You may be able to achieve the same effect by using reflection to get `Method` objects you can then invoke. The other way is to create an interface with a single `invoke` or `execute` method and then instantiate implementations of it to call the method you're interested in (i.e. using an anonymous inner class). You might also find this article interesting/useful: [A Java Programmer Looks at C# Delegates][1] [1]: http://www.onjava.com/pub/a/onjava/2003/05/21/delegates.html
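Both workarounds can be sketched like this (my own example, using `String.toUpperCase` as the target method):

```java
import java.lang.reflect.Method;

public class DelegateDemo {
    // Workaround 1: a reflective "delegate" -- look the method up and invoke it
    static String viaReflection(String s) {
        try {
            Method m = String.class.getMethod("toUpperCase");
            return (String) m.invoke(s);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Workaround 2: a single-method interface plus an anonymous inner class
    interface Action { String invoke(String s); }

    static String viaInterface(String s) {
        Action upper = new Action() {
            public String invoke(String in) { return in.toUpperCase(); }
        };
        return upper.invoke(s);
    }

    public static void main(String[] args) {
        System.out.println(viaReflection("hello"));
        System.out.println(viaInterface("hello"));
    }
}
```

The interface version is type-safe and fast; the reflective version is more flexible (the method can be chosen at runtime) but slower and unchecked at compile time.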
I suppose there is no fundamental reason why the compiler couldn't automatically translate your switch statement into:

```csharp
if (t == typeof(int)) { ... }
else if (t == typeof(string)) { ... }
...
```

But there isn't much gained by that. A case statement on integral types allows the compiler to make a number of optimizations:

1. There is no duplication (unless you duplicate case labels, which the compiler detects). In your example `t` could match multiple types due to inheritance. Should the first match be executed? All of them?

2. The compiler can choose to implement a switch statement over an integral type as a jump table, avoiding all the comparisons. If you are switching on an enumeration with integer values 0 to 100, it creates an array of 101 pointers, one for each case. At runtime it simply looks up the target address in the array based on the value being switched on. This makes for much better runtime performance than performing up to 100 comparisons.
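To illustrate the jump-table point outside the compiler (a sketch of the idea, not what the C# compiler literally emits): with dense integral cases, one bounds check plus one indexed lookup replaces the whole chain of comparisons.

```python
def handle_zero(): return "zero"
def handle_one():  return "one"
def handle_two():  return "two"

# The "jump table": handlers indexed directly by the switched-on value.
JUMP_TABLE = [handle_zero, handle_one, handle_two]

def switch(value):
    if 0 <= value < len(JUMP_TABLE):   # a single bounds check...
        return JUMP_TABLE[value]()     # ...then a direct indexed jump
    return "default"
```

However many cases there are, the cost per dispatch stays constant, which is exactly what an if/else chain over arbitrary types cannot guarantee.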
I'm used to doing all my coding in one C file. However, I'm working on a project large enough that it becomes impractical to do so. I've been #including them together but I've run into cases where I'm #including some files multiple times, etc. I've heard of .h files, but I'm not sure what their function is (or why having 2 files is better than 1). What strategies should I use for organizing my code? Is it possible to separate "public" functions from "private" ones for a particular file? [This][1] question precipitated my inquiry. The tea.h file makes no reference to the tea.c file. Does the compiler "know" that every .h file has a corresponding .c file? [1]: http://stackoverflow.com/questions/47878/codereview-tiny-encryption-algorithm-for-arbitrary-sized-data
It's called [Script Combining][1]. There is a video example from asp.net explaining it [here][2]. [1]: http://msdn.microsoft.com/en-us/library/cc837190.aspx [2]: http://www.asp.net/Learn/3.5-SP1/video-296.aspx
Hey Chris, I just figured it out. Silly mistake: it was just a permission issue, and running everything from `/tmp/` worked fine, after enabling `GODI_BASEPKG_PCRE` in `godi.conf`. I had been running it from my home directory; you forget simple things like that at 3:00am.

--

Actually, I'm having another problem installing `conf-opengl-6`: GODI can't seem to find the `GL/gl.h` file, though I can; you can see it under `Checking the suggestion`.

> ===> Configuring for conf-opengl-6
> Checking the suggestion
> Include=/usr/include/GL/gl.h Library=/&lt;GLU+GL&gt;
> Checking /usr:
> Include=/usr/include/GL/gl.h Library=/usr/lib/&lt;GLU+GL&gt;
> Checking /usr:
> Include=/usr/local/include/GL/gl.h Library=/usr/local/lib/&lt;GLU+GL&gt;
> Checking /usr/local:
> Include=/usr/local/include/GL/gl.h Library=/usr/local/lib/&lt;GLU+GL&gt;
> Exception: Failure "Cannot find library".
> Error: Exec error: File /home/nlucaroni/godi/build/conf/conf-opengl/./../../mk/bsd.pkg.mk, line 1022: Command returned with non-zero exit code
> Error: Exec error: File /home/nlucaroni/godi/build/conf/conf-opengl/./../../mk/bsd.pkg.mk, line 1375: Command returned with non-zero exit code
> ### Error: Command fails with code 1: godi_console

_edit_: Ok, this is fixed too... it just needed GLU. Weird, since the test configuration option said everything was fine.
[erickson][1] is almost right: since you want to match on counts of duplicates, you want a [Bag][2]. In Java, this looks something like:

```java
(new HashBag(collection1)).equals(new HashBag(collection2))
```

I'm sure C# has a built-in Set implementation. I would use that first; if performance is a problem, you could always switch to a different Set implementation while keeping the same Set interface.

[1]: http://stackoverflow.com/users/3474/erickson [2]: http://commons.apache.org/collections/api-3.1/org/apache/commons/collections/HashBag.html
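If pulling in Commons Collections is not an option, a bag is easy to sketch by hand as an element-to-count map (my own illustration, class and method names assumed):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BagCompare {
    // A "bag" (multiset) as element -> occurrence count, so duplicates matter
    static <T> Map<T, Integer> toBag(List<T> items) {
        Map<T, Integer> bag = new HashMap<>();
        for (T item : items) {
            bag.merge(item, 1, Integer::sum); // increment this element's count
        }
        return bag;
    }

    // Two collections are bag-equal when their count maps are equal
    static <T> boolean sameBag(List<T> a, List<T> b) {
        return toBag(a).equals(toBag(b));
    }

    public static void main(String[] args) {
        System.out.println(sameBag(List.of(1, 2, 2), List.of(2, 1, 2)));
        System.out.println(sameBag(List.of(1, 2, 2), List.of(1, 2)));
    }
}
```

Unlike a Set comparison, this distinguishes `[1, 2, 2]` from `[1, 2]`, which is the point of matching on duplicate counts.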
Here is how the confirm window works in SimpleModal:

```javascript
$(document).ready(function () {
    $('#confirmDialog input:eq(0)').click(function (e) {
        e.preventDefault();
        // example of calling the confirm function
        // you must use a callback function to perform the "yes" action
        confirm("Continue to the SimpleModal Project page?", function () {
            window.location.href = 'http://www.ericmmartin.com/projects/simplemodal/';
        });
    });
});

function confirm(message, callback) {
    $('#confirm').modal({
        close: false,
        overlayId: 'confirmModalOverlay',
        containerId: 'confirmModalContainer',
        onShow: function (dialog) {
            dialog.data.find('.message').append(message);

            // if the user clicks "yes"
            dialog.data.find('.yes').click(function () {
                // call the callback
                if ($.isFunction(callback)) {
                    callback.apply();
                }
                // close the dialog
                $.modal.close();
            });
        }
    });
}
```
It is common to use a "file exists" function to check a path before writing to it. In this use case the type of file is irrelevant: if there is a directory called `/home/foo`, you won't be able to create a file called `/home/foo`.

Also, PHP, one of the languages you mentioned, provides several functions depending on what kind(s) of file you care about:

* [<code>file_exists()</code>](http://uk3.php.net/manual/en/function.file-exists.php) will return TRUE for files, directories and symbolic links
* [<code>is_file()</code>](http://uk3.php.net/manual/en/function.is-file.php) will return TRUE for files, but FALSE for directories and symbolic links
* [<code>is_dir()</code>](http://uk3.php.net/manual/en/function.is-dir.php) will return TRUE for directories, but FALSE for files and symbolic links
* [<code>is_link()</code>](http://uk3.php.net/manual/en/function.is-link.php) will return TRUE for symbolic links, but FALSE for files and directories
There is an inter-op angle as well. If you upgrade your ASMX services to WCF services you can still honor your ASMX clients and then start moving forward with newer WCF clients. WCF is starting to get some REST attention, RSS is there, and Silverlight has a place with WCF. Performance is better, depending on the bindings you choose. One of the big drawbacks is a steeper learning curve compared to ASMX services, the great power/great responsibility problem, and then the 101 ways to do the same thing. None of this is CxO talk, but refactor the language into magazine buzzwords so that they can see the future of this technology.
It seems that something like this could be negotiable. We have never thought of "fees" as a hard, non-negotiable item. If they value your business, I would bet they could discount the transfer fee. It certainly seems that some kind of fee is reasonable for the administrative changes that are required. To me that should be a flat fee per license: the work required to change their database is the same no matter how much the license costs.
Are you making good use of SuspendLayout() and ResumeLayout()? [http://msdn.microsoft.com/en-us/library/system.windows.forms.control.suspendlayout(VS.80).aspx][1] [1]: http://msdn.microsoft.com/en-us/library/system.windows.forms.control.suspendlayout(VS.80).aspx
According to [the switch statement documentation][1], if there is an unambiguous way to implicitly convert the object to an integral type, then it will be allowed. I think you are expecting a behavior where each case statement would be replaced with `if (t == typeof(int))`, but that would open a whole can of worms when you get to overloading that operator. If you wrote your `==` override incorrectly, the behavior of your switch would change whenever the implementation details of the switch statement changed. By reducing the comparisons to integral types, string, and those things that can be reduced to integral types (and are intended to be), they avoid potential issues. [1]: http://msdn.microsoft.com/en-us/library/aa664749(VS.71).aspx
Apache uses the standard format of inode-filesize-mtime, each value in hex. The only caveat is that the mtime must be epoch time, right-padded with zeros so it is 16 digits (for a 10-digit epoch-seconds value this amounts to multiplying by 10^6, approximating Apache's microsecond timestamps). Here is how to do it in PHP:

```php
$fs = stat($file);
header("Etag: " . sprintf('"%x-%x-%s"',
    $fs['ino'],
    $fs['size'],
    base_convert(str_pad($fs['mtime'], 16, "0"), 10, 16)));
```
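For comparison, the same format can be assembled in Python; this is my own sketch mirroring the PHP snippet above (the function name is assumed):

```python
import os

def apache_etag(path):
    """Build an Apache-style "inode-size-mtime" ETag, mirroring the PHP above."""
    st = os.stat(path)
    # right-pad epoch seconds to 16 digits, like str_pad() in the PHP version
    mtime16 = str(int(st.st_mtime)).ljust(16, "0")
    return '"%x-%x-%x"' % (st.st_ino, st.st_size, int(mtime16))
```

Matching the server's ETag format byte-for-byte matters if an application sits behind Apache and must not break conditional (If-None-Match) caching.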
How to find the name of the test currently being executed?
|visual-studio|unit-testing|
I know I can figure out the name of the method as it's being executed; I'm just wondering if there is a way to do it from the setup method. I guess an attribute on the method would work, but getting the name from the setup method would be best.

**EDIT:** NUnit
Explicit vs implicit SQL joins
|sql|join|
Is there any efficiency difference in an explicit vs. implicit inner join? For example:

```sql
select * from table a
inner join table b on a.id = b.id;
```

vs.

```sql
select a.*, b.*
from table a, table b
where a.id = b.id;
```
@Darren: Thanks for the answer. I had tried to create `pageLoad` with the event argument [ApplicationLoadEventArgs][1] as a parameter (see below). However, according to [this][2]: *The load event is raised for all postbacks to the server, which includes asynchronous postbacks.* As you have indicated, the `isPartialLoad` property does not cover all postback scenarios. It would be nice if the event argument also contained an `isPostback` property.

```javascript
function pageLoad(sender, arg) {
    if (!arg.get_isPartialLoad()) {
        // code to be executed only on the first load
    }
}
```

@mmattax: I'm looking for a property that can be called from the client side (JavaScript).

[1]: http://asp.net/AJAX/Documentation/Live/ClientReference/Sys/ApplicationLoadEventArgsClass/ApplicationLoadEventArgsIsPartialLoadProperty.aspx [2]: http://msdn.microsoft.com/en-us/library/bb386417.aspx
A simple and fairly efficient solution is to sort both collections and then compare them for equality:

```csharp
bool equal = collection1.OrderBy(i => i)
    .SequenceEqual(collection2.OrderBy(i => i));
```

This algorithm is O(N*logN), while your solution above is O(N^2). If the collections have certain properties, you may be able to implement a faster solution. For example, if both of your collections are hash sets, they cannot contain duplicates, and checking whether a hash set contains an element is very fast. In that case an algorithm similar to yours would likely be fastest.
There is also a formal reason why a directory is a file: > Files (or links to files) can be > located in directories. However, more > generally, a directory can contain > either a list of files or a list of > links to files. Within this > definition, it is of paramount > importance that the term "file" > includes directories. This permits the > existence of directory hierarchies, > i.e., directories containing > subdirectories. From Wikipedia, [‘Computer file’][1] [1]: http://en.wikipedia.org/wiki/Computer_file#Identifying_and_organizing_files
No idea what I did, but now it's working. This is all I have done:

1. Reinstalled VMware Server several times (more than 4)
2. Fixed the network adapter
3. Prayed (more than 1000 times)

UPDATE: One of the three VMs does not work; the others work perfectly.
UNION does not support BLOB (or CLOB) column types, because it must compare rows to eliminate duplicates and LOB columns cannot be compared; UNION ALL skips the duplicate check, so it supports them.
I designed a database similarly (only INSERTs &mdash; no UPDATEs, no DELETEs). Almost all of my SELECT queries were against views of only the current rows for each table (highest revision number). The views looked like this&hellip;

```sql
SELECT dbo.tblBook.BookId,
       dbo.tblBook.RevisionId,
       dbo.tblBook.Title,
       dbo.tblBook.AuthorId,
       dbo.tblBook.Price,
       dbo.tblBook.Deleted
FROM dbo.tblBook
INNER JOIN (
    SELECT BookId, MAX(RevisionId) AS RevisionId
    FROM dbo.tblBook
    GROUP BY BookId
) AS CurrentBookRevision
    ON dbo.tblBook.BookId = CurrentBookRevision.BookId
    AND dbo.tblBook.RevisionId = CurrentBookRevision.RevisionId
WHERE dbo.tblBook.Deleted = 0
```

And my inserts (and updates and deletes) were all handled by stored procedures (one per table). The stored procedures looked like this&hellip;

```sql
ALTER procedure [dbo].[sp_Book_CreateUpdateDelete]
    @BookId uniqueidentifier,
    @RevisionId bigint,
    @Title varchar(256),
    @AuthorId uniqueidentifier,
    @Price smallmoney,
    @Deleted bit
as
    insert into tblBook
        (BookId, RevisionId, Title, AuthorId, Price, Deleted)
    values
        (@BookId, @RevisionId, @Title, @AuthorId, @Price, @Deleted)
```

Revision numbers were handled per-transaction in the Visual Basic code&hellip;

```vb
Shared Sub Save(ByVal UserId As Guid, ByVal Explanation As String, ByVal Commands As Collections.Generic.Queue(Of SqlCommand))
    Dim Connection As SqlConnection = New SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings("Connection").ConnectionString)
    Connection.Open()
    Dim Transaction As SqlTransaction = Connection.BeginTransaction
    Try
        Dim RevisionId As Integer = Nothing
        Dim RevisionCommand As SqlCommand = New SqlCommand("sp_Revision_Create", Connection)
        RevisionCommand.CommandType = CommandType.StoredProcedure
        RevisionCommand.Parameters.AddWithValue("@RevisionId", 0)
        RevisionCommand.Parameters(0).SqlDbType = SqlDbType.BigInt
        RevisionCommand.Parameters(0).Direction = ParameterDirection.Output
        RevisionCommand.Parameters.AddWithValue("@UserId", UserId)
        RevisionCommand.Parameters.AddWithValue("@Explanation", Explanation)
        RevisionCommand.Transaction = Transaction
        LogDatabaseActivity(RevisionCommand)
        If RevisionCommand.ExecuteNonQuery() = 1 Then 'rows inserted
            RevisionId = CInt(RevisionCommand.Parameters(0).Value) 'generated key
        Else
            Throw New Exception("Zero rows affected.")
        End If
        For Each Command As SqlCommand In Commands
            Command.Connection = Connection
            Command.Transaction = Transaction
            Command.CommandType = CommandType.StoredProcedure
            Command.Parameters.AddWithValue("@RevisionId", RevisionId)
            LogDatabaseActivity(Command)
            If Command.ExecuteNonQuery() < 1 Then 'rows inserted
                Throw New Exception("Zero rows affected.")
            End If
        Next
        Transaction.Commit()
    Catch ex As Exception
        Transaction.Rollback()
        Throw New Exception("Rolled back transaction", ex)
    Finally
        Connection.Close()
    End Try
End Sub
```

I created an object for each table, each with constructors, instance properties and methods, create-update-delete commands, a bunch of finder functions, and IComparable sorting functions. It was a huge amount of code. One-to-one DB table to VB object&hellip;

```vb
Public Class Book
    Implements IComparable

#Region " Constructors "

    Private _BookId As Guid
    Private _RevisionId As Integer
    Private _Title As String
    Private _AuthorId As Guid
    Private _Price As Decimal
    Private _Deleted As Boolean

    ...

    Sub New(ByVal BookRow As DataRow)
        Try
            _BookId = New Guid(BookRow("BookId").ToString)
            _RevisionId = CInt(BookRow("RevisionId"))
            _Title = CStr(BookRow("Title"))
            _AuthorId = New Guid(BookRow("AuthorId").ToString)
            _Price = CDec(BookRow("Price"))
        Catch ex As Exception
            'TO DO: log exception
            Throw New Exception("DataRow does not contain valid Book data.", ex)
        End Try
    End Sub

#End Region

    ...

#Region " Create, Update & Delete "

    Function Save() As SqlCommand
        If _BookId = Guid.Empty Then
            _BookId = Guid.NewGuid()
        End If
        Dim Command As SqlCommand = New SqlCommand("sp_Book_CreateUpdateDelete")
        Command.Parameters.AddWithValue("@BookId", _BookId)
        Command.Parameters.AddWithValue("@Title", _Title)
        Command.Parameters.AddWithValue("@AuthorId", _AuthorId)
        Command.Parameters.AddWithValue("@Price", _Price)
        Command.Parameters.AddWithValue("@Deleted", _Deleted)
        Return Command
    End Function

    Shared Function Delete(ByVal BookId As Guid) As SqlCommand
        Dim Doomed As Book = FindByBookId(BookId)
        Doomed.Deleted = True
        Return Doomed.Save()
    End Function

    ...

#End Region

    ...

#Region " Finders "

    Shared Function FindByBookId(ByVal BookId As Guid, Optional ByVal TryDeleted As Boolean = False) As Book
        Dim Command As SqlCommand
        If TryDeleted Then
            Command = New SqlCommand("sp_Book_FindByBookIdTryDeleted")
        Else
            Command = New SqlCommand("sp_Book_FindByBookId")
        End If
        Command.Parameters.AddWithValue("@BookId", BookId)
        If Database.Find(Command).Rows.Count > 0 Then
            Return New Book(Database.Find(Command).Rows(0))
        Else
            Return Nothing
        End If
    End Function
```

Such a system preserves all past versions of each row, but can be a real pain to manage.

**PROS:**

- Total history preserved
- Fewer stored procedures

**CONS:**

- relies on non-database application for data integrity
- huge amount of code to be written
- No foreign keys managed within database (goodbye automatic Linq-to-SQL-style object generation)
- I still haven't come up with a good user interface to retrieve all that preserved past versioning.

**CONCLUSION:**

- I wouldn't go to such trouble on a new project without some easy-to-use out-of-the-box ORM solution. I'm curious if the Microsoft Entity Framework can handle such database designs well.

Jeff and the rest of the Stack Overflow team must have had to deal with similar issues while developing Stack Overflow: past revisions of edited questions and answers are saved and retrievable. I believe Jeff has stated that his team used Linq to SQL and MS SQL Server. I wonder how they handled these issues.
In terms of what it offers, I think the answer is compatibility. The ASMX services were pretty Microsofty. Not to say that they didn't try to be compatible with other consumers, but the model wasn't made to fit much besides ASP.NET web pages and some other custom Microsoft consumers. Whereas WCF, because of its architecture, allows your service to have very open-standards-based endpoints, e.g. REST, JSON, etc., in addition to the usual SOAP. Other people will probably have a much easier time consuming your WCF service than your ASMX one. (This is all basically inferred from comparative MSDN reading, so someone who knows more should feel free to correct me.)
The only disadvantage is complexity, but really, how hard is it to add some domain objects and bind to a list of them as opposed to using a dataset? You don't even have to create three separate projects; you can just create 3 separate folders within the web app and give each one a namespace like YourCompany.YourApp.Domain, YourCompany.YourApp.Data, etc.

The big advantage is having a more flexible solution. If you start writing your app as a data-centric application, strongly coupling your web forms pages to datasets, you are going to end up doing a lot more work later migrating to a more domain-centric model as your business logic grows in complexity. Maybe in the short term you focus on a simple solution by creating very simple domain objects and populating them from datasets; then you can add business logic to them as needed and build out a more sophisticated ORM as needed, or use NHibernate.
I'm using GCC. You can turn on nested functions by using the flag `-fnested-functions` when you compile.
My current solution is with extension methods:

```csharp
public static class UrlHelperExtensions
{
    /// <summary>
    /// Determines if the current view equals the specified action
    /// </summary>
    /// <typeparam name="TController">The type of the controller.</typeparam>
    /// <param name="helper">Url Helper</param>
    /// <param name="action">The action to check.</param>
    /// <returns>
    /// <c>true</c> if the specified action is the current view; otherwise, <c>false</c>.
    /// </returns>
    public static bool IsAction<TController>(this UrlHelper helper, LambdaExpression action)
        where TController : Controller
    {
        MethodCallExpression call = action.Body as MethodCallExpression;
        if (call == null)
        {
            throw new ArgumentException("Expression must be a method call", "action");
        }
        return (call.Method.Name.Equals(helper.ViewContext.ViewName, StringComparison.OrdinalIgnoreCase)
            && typeof(TController) == helper.ViewContext.Controller.GetType());
    }

    /// <summary>
    /// Determines if the current view equals the specified action
    /// </summary>
    /// <param name="helper">Url Helper</param>
    /// <param name="actionName">Name of the action.</param>
    /// <returns>
    /// <c>true</c> if the specified action is the current view; otherwise, <c>false</c>.
    /// </returns>
    public static bool IsAction(this UrlHelper helper, string actionName)
    {
        if (String.IsNullOrEmpty(actionName))
        {
            throw new ArgumentException("Please specify the name of the action", "actionName");
        }
        string controllerName = helper.ViewContext.RouteData.GetRequiredString("controller");
        return IsAction(helper, actionName, controllerName);
    }

    /// <summary>
    /// Determines if the current view equals the specified action
    /// </summary>
    /// <param name="helper">Url Helper</param>
    /// <param name="actionName">Name of the action.</param>
    /// <param name="controllerName">Name of the controller.</param>
    /// <returns>
    /// <c>true</c> if the specified action is the current view; otherwise, <c>false</c>.
    /// </returns>
    public static bool IsAction(this UrlHelper helper, string actionName, string controllerName)
    {
        if (String.IsNullOrEmpty(actionName))
        {
            throw new ArgumentException("Please specify the name of the action", "actionName");
        }
        if (String.IsNullOrEmpty(controllerName))
        {
            throw new ArgumentException("Please specify the name of the controller", "controllerName");
        }
        if (!controllerName.EndsWith("Controller", StringComparison.OrdinalIgnoreCase))
        {
            controllerName = controllerName + "Controller";
        }
        bool isOnView = helper.ViewContext.ViewName.SafeEquals(actionName, StringComparison.OrdinalIgnoreCase);
        return isOnView && helper.ViewContext.Controller.GetType().Name.Equals(controllerName, StringComparison.OrdinalIgnoreCase);
    }
}
```
Replace a database connection for report and all subreports
Personally, I wouldn't worry about it until you see a problem. Messing with the default Python install on a *nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what Python has done for the *nix world until you have a problem with it.

You can also add a second Python installation, but that also causes more problems than it's worth, IMO. So I suppose the best question to start out with would be: why exactly do you want to use the 64-bit version of Python?
Why not use FormsAuth, but against ActiveDirectory instead as per the info in [this thread][1]. It's just as (in)secure as Basic Auth, but logging out is simply a matter of blanking a cookie (or rather, calling [FormsAuthentication.SignOut][2]) [1]: http://stackoverflow.com/questions/30861/authenticationg-domain-users-with-systemdirectoryservices [2]: http://msdn.microsoft.com/en-us/library/system.web.security.formsauthentication.signout.aspx
<pre><code>import threading

class Foo(threading.Thread):
    def __init__(self, x):
        self.__x = x
        threading.Thread.__init__(self)

    def run(self):
        print str(self.__x)

for x in xrange(20):
    Foo(x).start()
</code></pre>

Here is a basic threading sample. It will spawn 20 threads, and each thread will output its thread number. Run it and observe the order in which they print. As you hinted, Python threads are implemented through time-slicing; that is how they get the "parallel" effect.

In my example, my Foo class extends Thread, and I then implement the run method, which is where the code that you would like to run in a thread goes. To start the thread you call start() on the thread object, which will automatically invoke your run method. Of course, this is just the very basics... you will eventually want to learn about semaphores, mutexes, and locks for thread synchronization and message passing.
UDP packets can also be delivered out of order, so if you are devising a protocol on top of UDP you have to take that into account as well.
Compiler
------------

You can see an example of a C 'module' at [this topic][1] - note that there are two files - the header tea.h, and the code tea.c. You declare all the public defines, variables, and function prototypes that you want other programs to access in the header. In your main project you'll `#include "tea.h"`, and that code can now access the functions and variables of the tea module that are mentioned in the header.

It gets a little more complex after that. If you're using Visual Studio or one of the many other IDEs that manage your build for you, then ignore this part - they take care of compiling and linking objects.

Linker
------

When you compile two separate C files, the compiler produces individual object files - so main.c becomes main.o, and tea.c becomes tea.o. The linker's job is to look at all the object files (your main.o and tea.o) and match up the references - so when you call a tea function in main, the linker modifies that call so it actually does call the right function in tea. The linker produces the executable file.

There is a [great tutorial][2] that goes into more depth on this subject, including scope and other issues you'll run into.

Good luck!

-Adam

[1]: http://stackoverflow.com/questions/47878/codereview-tiny-encryption-algorithm-for-arbitrary-sized-data
[2]: http://www.cs.cf.ac.uk/Dave/C/node35.html
Your question makes it clear that you haven't really done much serious development. The usual case is that your code will generally be far too large to fit into one file. A good rule is that you should split the functionality into logical units (.c files) and each file should contain no more than what you can easily hold in your head at one time. A given software product then generally includes the output from many different .c files. How this is normally done is that the compiler produces a number of object files (in unix systems ".o" files, VC generates .obj files). It is the purpose of the "linker" to compose these object files into the output (either a shared library or executable). Generally your implementation (.c) files contain actual executable code, while the header files (.h) have the declarations of the public functions in those implementation files. You can quite easily have more header files than there are implementation files, and sometimes header files can contain inline code as well. It is generally quite unusual for implementation files to include each other. A good practice is to ensure that each implementation file separates its concerns from the other files. I would recommend you download and look at the source for the linux kernel. It is quite massive for a C program, but well organised into separate areas of functionality.
You should regard .h files as **interface files** of your .c files. Every .c file represents a module with a certain amount of functionality. If functions in a .c file are used by other modules (i.e. other .c files), put the function prototype in the .h interface file. By including the interface file in your original module's .c file and in every other .c file you need the function in, you make this function available to other modules.

If you only need a function in a certain .c file (not in any other module), declare its scope static. This means it can only be called from within the .c file it is defined in.

The same goes for variables that are used across multiple modules. They should go in the header file, and there they have to be marked with the keyword 'extern'. Note: for functions the keyword 'extern' is optional. Functions are always considered 'extern'.

The include guards in header files help to avoid including the same header file multiple times.

For example:

Module1.c:
<pre>
#include "Module1.h"

static void MyLocalFunction(void);
static unsigned int MyLocalVariable;
unsigned int MyExternVariable;

void MyExternFunction(void)
{
    MyLocalVariable = 1u;
    /* Do something */
    MyLocalFunction();
}

static void MyLocalFunction(void)
{
    /* Do something */
    MyExternVariable = 2u;
}
</pre>

Module1.h:
<pre>
#ifndef MODULE1_H
#define MODULE1_H

extern unsigned int MyExternVariable;

void MyExternFunction(void);

#endif
</pre>

Module2.c:
<pre>
#include "Module1.h"

static void MyLocalFunction(void);

static void MyLocalFunction(void)
{
    MyExternVariable = 1u;
    MyExternFunction();
}
</pre>
|crystal-reports|
Is there any way to change the datasource location for a report and all of its subreports without having to open each of them manually?
In addition to what izb said, `HashMap` allows null values, whereas the `Hashtable` does not. Also note that `Hashtable` extends the `Dictionary` class, which as the [Javadocs][1] state, is obsolete and has been replaced by the `Map` interface. [1]: http://java.sun.com/j2se/1.4.2/docs/api/java/util/Dictionary.html
It's not good. More platforms leads to more testing, leads to more time fixing bugs, leads to less time having fun implementing new features, leads to anger, hate, suffering, etc.
How to tell which disk Windows used to boot
|windows|mbr|
I need to find a method to programmatically determine which disk drive Windows is using to boot. In other words, I need a way from Windows to determine which drive the BIOS is using to boot the whole system.

Does Windows expose an interface to discover this? With how big the Windows API is, I'm hoping there is something buried in there that might do the trick.

Terry

p.s. Just reading the first sectors of the hard disk isn't revealing anything. On my dev box I have two hard disks, and when I look at the contents of the first couple of sectors on either of the hard disks I have a standard boilerplate MBR.

Edit to clarify a few things. The way I want to identify the device is with a string which will identify a physical disk drive (as opposed to a logical disk drive). Physical disk drives are of the form "\\\\.\PHYSICALDRIVEx" where x is a number. On the other hand, a logical drive is identified by a string of the form, "\\\\.\x" where x is a drive letter.

Edit to discuss a few of the ideas that were thrown out. Knowing which logical volume Windows used to boot doesn't help me here. Here is the reason: assume that C: is using a mirrored RAID setup. That means we have at least two physical drives. Now, I get the mapping from logical drive to physical drive and I discover that there are two physical drives used by that volume. Which one did Windows use to boot? Of course, this is assuming that the physical drive Windows used to boot is the same physical drive that contains the MBR.
As others have stated, the command pattern is a very powerful method of implementing undo/redo. But there is an important advantage of the command pattern I would like to mention: when implementing undo/redo with it, you can avoid large amounts of duplicated code by abstracting (to a degree) the operations performed on the data and utilizing those operations in the undo/redo system.

For example, in a text editor cut and paste are complementary commands (aside from the management of the clipboard). In other words, the undo operation for a cut is paste, and the undo operation for a paste is cut. This applies to much simpler operations such as typing and deleting text.

The key here is that you can use your undo/redo system as the primary command system for your editor. Instead of writing the system as "create undo object, modify the document", you can "create undo object, execute redo operation on undo object to modify the document".

Now, admittedly, many people are thinking to themselves, "Well duh, isn't that part of the point of the command pattern?" Yes, but I've seen too many command systems that have two sets of commands: one for immediate operations and another set for undo/redo. I'm not saying that there won't be commands that are specific to immediate operations and undo/redo, but reducing the duplication will make the code more maintainable.
You might want to look into using compiled queries. Have a look at [http://www.3devs.com/?p=3][1]

[1]: http://www.3devs.com/?p=3
Your code looks functional. As for a better way, that is going to get subjective. You might want to check out this [page][1] as it deals with time spans in SQL. [1]: http://weblogs.sqlteam.com/jeffs/archive/2007/01/02/56079.aspx
Wait... did you ever use .NET Remoting? Because that's the real thing it's replacing. .NET Remoting is pretty complicated itself. I find WCF easier and better laid out.
Apparently marker was introduced as a value in CSS 2 but did not make it to CSS 2.1 because of lacking browser support. I suppose that didn’t help its popularity … Source: http://de.selfhtml.org/css/eigenschaften/positionierung.htm#display (German)
Have you tried Adobe's version cue? This is not open source / free but it may be easier to use for the end-user. http://www.adobe.com/products/creativesuite/versioncue/
Have you tried [Tortoise SVN][1]? I can't imagine source control getting much easier to use. [1]: http://tortoisesvn.tigris.org/
Dave Miller (the person in charge of networking in the Linux kernel) and his fellow henchmen all inhabit the lkml, or [Linux Kernel Mailing List][1]. If you can provide a reasonably decent bug report, they'll give you a reasonable answer. On the other hand, if you tell them it's a very old kernel, they'll tell you to try the newest. At the very least you can try searching its archives.

[1]: http://lkml.org
@Darren: Thanks for the answer. I had tried to create pageLoad with the event argument [ApplicationLoadEventArgs][1] as a parameter (see below). However, according to [this][2]: *The load event is raised for all postbacks to the server, which includes asynchronous postbacks.* As you have indicated, the isPartialLoad property does not cover all postback scenarios. It'd be nice if the event argument also contained an isPostback property.

    function pageLoad(sender, arg) {
        if (!arg.get_isPartialLoad()) {
            //code to be executed only on the first load
        }
    }

@mmattax: I'm looking for a property that can be called from client-side (JavaScript).

[1]: http://asp.net/AJAX/Documentation/Live/ClientReference/Sys/ApplicationLoadEventArgsClass/ApplicationLoadEventArgsIsPartialLoadProperty.aspx
[2]: http://msdn.microsoft.com/en-us/library/bb386417.aspx
Cast List<int> to List<string>
|c#|generics|casting|
Does anyone know if you can cast a `List<int>` to a `List<string>` somehow? I know I could loop through and .ToString() the thing, but a cast would be awesome. I'm in C# 2.0 (so no LINQ).
Learning CIL
|.net|cil|
Does anybody know any good resources for learning how to program CIL with in-depth descriptions of commands, etc.? I have looked around but not found anything particularly good.
> Although, I'm not sure how these approaches could apply when resizing a window. Handle the ResizeBegin and ResizeEnd events to call SuspendLayout() and ResumeLayout(). These events are only on the System.Windows.Form class (although I wish they were also on Control).
You have to build a new list. The underlying bit representations of `List<int>` and `List<string>` are completely incompatible -- on a 64-bit platform, for instance, the individual members aren't even the same size. It is theoretically possible to treat a `List<string>` as a `List<object>` -- this gets you into the exciting worlds of covariance and contravariance, and is not currently supported by C# or VB.NET.
While on the topic, according to Jeff Atwood, [the switch statement is a programming atrocity][1]. Use them sparingly. You can often accomplish the same task using a table. For example:

    var table = new Dictionary<Type, string>()
    {
        { typeof(int), "it's an int!" },
        { typeof(string), "it's a string!" }
    };

    Type someType = typeof(int);
    Console.WriteLine(table[someType]);

[1]: http://twitter.com/codinghorror/statuses/909900250
Designs change so much during development and afterwards that most of my carefully crafted documents rot away in source control and become almost more of a hindrance than a help, once code is in production. I see design documents as necessary to good communication and to clarify your thinking while you develop something, but after that it takes a herculean effort to keep them properly maintained. I do take pictures of whiteboards and save the JPEGs to source control. Those are some of my best design docs!
MSBuild is incredibly simple to use, you can use VS to manage the projects and solution files and just pass the SLN to MSBuild.
A couple of simple rules to start:

 1. Put those declarations that you want to make "public" into the header file for the C implementation file you are creating.
 2. Only #include header files in the C file that are needed to implement the C file.
 3. #include header files in a header file only if required for the declarations within that header file.
 4. Use the include guard method described by Andrew OR use **#pragma once** if the compiler supports it (which does the same thing -- sometimes more efficiently)
It sounds like you are a single developer working on your own site. If this is the case, it's not necessary at all, but it is still a good idea for you to learn as a part of your professional experience.

Automated building of projects becomes more necessary as the number of developers working on a project increases. It is very easy for two developers to write incompatible code which will break when it is combined (imagine I'm calling a function foo(int x), and you change the signature to be foo(int x, int y): when we combine our code bases, the code will break). These types of errors increase in complexity and hassle with the amount of time between integration builds. By setting up nightly builds, or even builds that occur on every check-in, these problems are greatly reduced. This practice is pretty much industry standard across projects with multiple developers.

So now, to answer your question: this is a skill that will span across projects and companies. You should learn it to broaden your knowledge and skills as a developer, and to add an important line to your resume.
From the [Python Enterprise Application Kit community][1]: > *"Eggs are to Pythons as Jars are to Java..."* > > Python eggs are a way of bundling > additional information with a Python > project, that allows the project's > dependencies to be checked and > satisfied at runtime, as well as > allowing projects to provide plugins > for other projects. There are several > binary formats that embody eggs, but > the most common is '.egg' zipfile > format, because it's a convenient one > for distributing projects. All of the > formats support including > package-specific data, project-wide > metadata, C extensions, and Python > code. > > The primary benefits of Python Eggs > are: > > - They enable tools like the "Easy Install" Python package manager > > - .egg files are a "zero installation" format for a Python > package; no build or install step is > required, just put them on PYTHONPATH > or sys.path and use them (may require > the runtime installed if C extensions > or data files are used) > > - They can include package metadata, such as the other eggs they depend on > > - They allow "namespace packages" (packages that just contain other > packages) to be split into separate > distributions (e.g. zope.*, twisted.*, > peak.* packages can be distributed as > separate eggs, unlike normal packages > which must always be placed under the > same parent directory. This allows > what are now huge monolithic packages > to be distributed as separate > components.) > > - They allow applications or libraries to specify the needed > version of a library, so that you can > e.g. require("Twisted-Internet>=2.0") > before doing an import > twisted.internet. > > - They're a great format for distributing extensions or plugins to > extensible applications and frameworks > (such as Trac, which uses eggs for > plugins as of 0.9b1), because the egg > runtime provides simple APIs to locate > eggs and find their advertised entry > points (similar to Eclipse's > "extension point" concept). 
> - There are also other benefits that may come from having a standardized > format, similar to the benefits of > Java's "jar" format. -Adam [1]: http://peak.telecommunity.com/DevCenter/PythonEggs
Since you are limited to .NET 2.0, WCF is perhaps not an option. You could use .NET Remoting with shared memory as the underlying communication mechanism between app domains on the same machine. Using this approach you can easily put your processes on different machines later and replace the shared-memory protocol with a network protocol.
I always figured you'd use a db trigger on update and delete to push those rows out into a TableName_Audit table. That'd work with ORMs, give you your history and wouldn't decimate select performance on that table. Is that a good idea or am I missing something?
Tell them it'll let you do your job easier which translates into less time and less money.