| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I need to convert a bunch of files to utf-8 in Python, and I have trouble with the "converting the file" part.
I'd like to do the equivalent of:
```
iconv -t utf-8 $file > converted/$file # this is shell code
```
Thanks!
|
You can use the [codecs module](https://docs.python.org/2/library/codecs.html), like this:
```
import codecs
BLOCKSIZE = 1048576 # or some other, desired size in bytes
with codecs.open(sourceFileName, "r", "your-source-encoding") as sourceFile:
    with codecs.open(targetFileName, "w", "utf-8") as targetFile:
        while True:
            contents = sourceFile.read(BLOCKSIZE)
            if not contents:
                break
            targetFile.write(contents)
```
**EDIT**: added `BLOCKSIZE` parameter to control file chunk size.
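On Python 3, `codecs.open` is no longer needed: the built-in `open` accepts an `encoding` argument. A minimal sketch of the same chunked conversion (the function name and the `iso-8859-1` source encoding are just placeholders for your own):

```python
BLOCKSIZE = 1048576  # characters read per chunk; tune to taste

def convert_to_utf8(source_path, target_path, source_encoding="iso-8859-1"):
    # Text-mode open decodes/encodes for us; we just stream chunks through.
    with open(source_path, "r", encoding=source_encoding) as source, \
         open(target_path, "w", encoding="utf-8") as target:
        while True:
            chunk = source.read(BLOCKSIZE)
            if not chunk:
                break
            target.write(chunk)
```

The chunked loop keeps memory bounded for large files; for small files a single `target.write(source.read())` works just as well.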
|
This worked for me in a small test:
```
sourceEncoding = "iso-8859-1"
targetEncoding = "utf-8"
source = open("source")
target = open("target", "w")
target.write(unicode(source.read(), sourceEncoding).encode(targetEncoding))
```
|
How to convert a file to utf-8 in Python?
|
[
"python",
"encoding",
"file",
"utf-8"
] |
I'm messing around with 2D game development using C++ and DirectX in my spare time. I'm finding that the enterprisey problem domain modeling approach doesn't help as much as I'd like ;)
I'm more or less looking for a "best practices" equivalent to basic game engine design. How entities should interact with each other, how animations and sounds should be represented in an ideal world, and so on.
Anyone have good resources they can recommend?
|
[Gamedev.net](http://www.gamedev.net) is usually where I turn to get an idea of what other people in the game development community are doing.
That said, I'm afraid that you'll find that the idea of "best practices" in game development is more volatile than most. Games tend to be such specialized applications that it's near impossible to give any "one size fits all" answers. What works great for Tetris is going to be useless with Asteroids, and a model that works perfectly for Halo is likely to fail miserably for Mario.
You'll also find quickly that there's no such thing as an "industry standard" for texture, mesh, level, sound, or animation formats. Everyone just rolls their own or uses whatever is convenient to the platform. You do occasionally see things like [COLLADA](http://www.collada.org), which is nice, but it's still just an intermediate format designed to make writing exporters easier.
If you're new to game development, my advice would be this: Don't kill yourself over your code structure on your first go. Try a simple game, like Asteroids, and just hack away until it works, no matter how "ugly" the code is. Use simple formats that you are familiar with, without worrying about how well they'll hold up in larger projects. Don't worry about plugins, skins, editors, or any of that other fluff. Just make it WORK! Then, when you're done with that first, all-important game, pick another, and this time around clean up one or two aspects of your code (but don't go overboard!). From there, iterate!
I promise you that this will get you farther faster than any amount of poking around online for the "right way" ever could (this coming from someone who's done a LOT of poking).
And one last thought for you: If you feel more comfortable working in a better-defined space, take a look at [XNA](http://creators.xna.com/) or a similar library. They'll pre-define some of the "best" formats to use and give you tools to work with them, which takes some of the initial guesswork out.
Good luck, and above all else remember: Games (and their development) are supposed to be FUN! Don't get too caught up on the small stuff!
|
Make a game. After you're done, make another. Look at what you liked and what you didn't like and then make another.
Seriously though, you can read all of the "best practices" guides to game design you'd like, but ultimately it comes down to experience. The only way to get experience is to sit down and write a game. After you do this a few times you'll get a *much* better idea of how to write a game.
|
What are some good resources on 2D game engine design?
|
[
"c++",
"directx"
] |
I am analyzing a .dmp file that was created and I have a call stack which gives me a lot of info. But I'd like to double click on the call stack and have it bring me to the source code.
I can right click on the call stack and select symbol settings.. where I can put the location to the PDB. But there is no option for the source code directory.
|
The source code directory is, unfortunately, hard-coded into the PDBs. However, if you know which folders are required, you can use Windows' concept of symbolic links: junctions.
I use the tool [Junction Link Magic](http://www.rekenwonder.com/linkmagic.htm)
|
Read [this article](http://msdn.microsoft.com/en-us/magazine/cc163563.aspx) about how to set up a Source Server (aka SrcSrv) integration at your site.
I took the time to follow these steps for our codebase, and now we are able to take a .dmp file from any build of our software in the past 6 months... get a stack trace with symbols... and view the exact source code lines in the debugger. Since the steps are integrated into our automated builds, there's very little overhead now.
I did need to write a custom indexer for ClearCase, but they have pre-existing ones for Perforce, TFS, and maybe others.
It is worth noting that the .dmp support in VS2005 is a little shaky; it's quite a bit more stable in VS2008.
You'll also need to configure Visual Studio to grab the symbols for the MS products from here in addition to your own symbol server:
```
http://msdl.microsoft.com/download/symbols
```
That is described in a few places such as on the [Debugging Tools for Windows](http://www.microsoft.com/whdc/DevTools/Debugging/debugstart.mspx) site.
|
How do I specify the source code directory in VS when looking at the call stack of a memory dump?
|
[
"c++",
"visual-studio",
"crash-reports",
"memory-dump"
] |
What is SafeHandle? How does it differ from IntPtr? When should I use one? What are its advantages?
|
I think [MSDN](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.safehandle.aspx) is pretty clear in definition:
> The SafeHandle class provides critical
> finalization of handle resources,
> preventing handles from being
> reclaimed prematurely by garbage
> collection and from being recycled by
> Windows to reference unintended
> unmanaged objects. Before the .NET
> Framework version 2.0, all operating
> system handles could only be
> encapsulated in the IntPtr managed
> wrapper object.
>
> The SafeHandle class contains a
> finalizer that ensures that the handle
> is closed and is guaranteed to run,
> even during unexpected AppDomain
> unloads when a host may not trust the
> consistency of the state of the
> AppDomain.
>
> For more information about the
> benefits of using a SafeHandle, see
> Safe Handles and Critical
> Finalization.
>
> This class is abstract because you
> cannot create a generic handle. To
> implement SafeHandle, you must create
> a derived class. To create SafeHandle
> derived classes, you must know how to
> create and free an operating system
> handle. This process is different for
> different handle types because some
> use CloseHandle, while others use more
> specific methods such as
> UnmapViewOfFile or FindClose. For this
> reason, you must create a derived
> class of SafeHandle for each operating
> system handle type; such as
> MySafeRegistryHandle,
> MySafeFileHandle, and
> MySpecialSafeFileHandle. Some of these
> derived classes are prewritten and
> provided for you in the
> Microsoft.Win32.SafeHandles namespace.
|
You should use a derivative of SafeHandle whenever possible where managed code is receiving an IntPtr from unmanaged code. While the name, general use, and even documentation of the SafeHandle class implies that it is only supposed to be used to contain Windows operating system handles, a few internal .NET framework classes such as Microsoft.Win32.SafeHandles.SafeLocalAllocHandle and those that derive from the publicly available abstract class [System.Runtime.InteropServices.SafeBuffer](http://msdn.microsoft.com/en-us/library/vstudio/system.runtime.interopservices.safebuffer(v=vs.100).aspx) also use it to guarantee that other unmanaged resources such as dynamically allocated structs and arrays are freed. In general, I believe that it is good practice to create a derivative of this class whenever an IntPtr is returned to managed code from unmanaged code even if it doesn't require cleanup.
The established purpose of a SafeHandle is to guarantee that even if the world is ending (e.g. an AppDomain is being unloaded or a StackOverflowException occurs) the .NET framework should make absolutely sure that the finalizer for the SafeHandle is called to close or deallocate the unmanaged entity being referred to by the wrapped IntPtr. The SafeHandle class achieves this by inheriting from the [CriticalFinalizerObject](http://msdn.microsoft.com/en-us/library/vstudio/system.runtime.constrainedexecution.criticalfinalizerobject(v=vs.100).aspx) class. Inheriting from this class does, however, place upon the inheritor the obligation of not totally screwing up the state of the process when the finalizer is called, which is likely why it is not often used for entities other than Windows operating system handles. The .NET framework also provides some weak finalization ordering so that it is safe to interact with a SafeHandle object in the finalizer of any class that does not inherit from CriticalFinalizerObject, but circumstances in which that is necessary should be few and far between.
Ideally, a SafeHandle-derived class should also be used to interact more safely with the unmanaged entity it references, by encapsulating the expected functionality within the derived class. A well-written class that inherits from SafeHandle should have a specific purpose in mind and should provide methods that are sufficient to prevent any developer using it for that purpose from ever needing to interact directly with the IntPtr it contains. Adding such methods also gives other developers a clear idea of what the result of an unmanaged method call is to be used for in a managed context. A class that inherits from SafeHandle can be used for this even if no cleanup is required on the pointer that the unmanaged method returns, by calling base(false) in the constructor for the class.
Below are two examples that use classes derived from SafeHandle to safely clean up a reference to an unmanaged entity and encapsulate functionality related to it. The first example is a more traditional scenario in which a user token returned by [LogonUser](http://msdn.microsoft.com/en-us/library/windows/desktop/aa378184(v=vs.85).aspx) is wrapped by an instance of the SafeTokenHandle class. This class will call CloseHandle on the token when the object is disposed or finalized. It also includes a method called GetWindowsIdentity that returns a WindowsIdentity object for the user represented by the user token. The second example uses the Windows built-in function [CommandLineToArgvW](http://www.bing.com/search?q=CommandLineToArgvW&form=DLCDF8&pc=MDDC&src=IE-SearchBox) to parse a command line. This function returns a pointer to an array contained in a contiguous block of memory that can be freed by a single call to LocalFree. The SafeLocalAllocWStrArray class (which inherits from the SafeLocalAllocArray class, also defined in this example) will call LocalFree on the array when the object is disposed or finalized. It also includes a method that copies the contents of the unmanaged array to a managed array.
```
static class Examples
{
    static void Example1_SafeUserToken()
    {
        const string user = "SomeLocalUser";
        const string domain = null;
        const string password = "ExamplePassword";

        NativeMethods.SafeTokenHandle userToken;
        WindowsIdentity identity;

        NativeMethods.LogonUser(user, domain, password, NativeMethods.LogonType.LOGON32_LOGON_INTERACTIVE, NativeMethods.LogonProvider.LOGON32_PROVIDER_DEFAULT, out userToken);

        using (userToken)
        {
            // get a WindowsIdentity object for the user
            // WindowsIdentity will duplicate the token, so it is safe to free the original token after this is called
            identity = userToken.GetWindowsIdentity();
        }

        // impersonate the user
        using (identity)
        using (WindowsImpersonationContext impersonationContext = identity.Impersonate())
        {
            Console.WriteLine("I'm running as {0}!", Thread.CurrentPrincipal.Identity.Name);
        }
    }

    static void Example2_SafeLocalAllocWStrArray()
    {
        const string commandLine = "/example /command";

        int argc;
        string[] args;

        using (NativeMethods.SafeLocalAllocWStrArray argv = NativeMethods.CommandLineToArgvW(commandLine, out argc))
        {
            // CommandLineToArgvW returns NULL on failure; since SafeLocalAllocWStrArray inherits from
            // SafeHandleZeroOrMinusOneIsInvalid, it will see this value as invalid
            // if that happens, throw an exception containing the last Win32 error that occurred
            if (argv.IsInvalid)
            {
                int lastError = Marshal.GetLastWin32Error();
                throw new Win32Exception(lastError, "An error occurred when calling CommandLineToArgvW.");
            }

            // the one unsafe aspect of this is that the developer calling this function must be trusted to
            // pass in an array of length argc or specify the length of the copy as the value of argc
            // if the developer does not do this, the array may end up containing some garbage or an
            // AccessViolationException could be thrown
            args = new string[argc];
            argv.CopyTo(args);
        }

        for (int i = 0; i < args.Length; ++i)
        {
            Console.WriteLine("Argument {0}: {1}", i, args[i]);
        }
    }
}

/// <summary>
/// P/Invoke methods and helper classes used by this example.
/// </summary>
internal static class NativeMethods
{
    // documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/aa378184(v=vs.85).aspx
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    public static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword, LogonType dwLogonType, LogonProvider dwLogonProvider, out SafeTokenHandle phToken);

    // documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/ms724211(v=vs.85).aspx
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern bool CloseHandle(IntPtr handle);

    // documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/bb776391(v=vs.85).aspx
    [DllImport("shell32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern SafeLocalAllocWStrArray CommandLineToArgvW(string lpCmdLine, out int pNumArgs);

    // documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366730(v=vs.85).aspx
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern IntPtr LocalFree(IntPtr hLocal);

    /// <summary>
    /// Wraps a handle to a user token.
    /// </summary>
    public class SafeTokenHandle : SafeHandleZeroOrMinusOneIsInvalid
    {
        /// <summary>
        /// Creates a new SafeTokenHandle. This constructor should only be called by P/Invoke.
        /// </summary>
        private SafeTokenHandle()
            : base(true)
        {
        }

        /// <summary>
        /// Creates a new SafeTokenHandle to wrap the specified user token.
        /// </summary>
        /// <param name="handle">The user token to wrap.</param>
        /// <param name="ownHandle"><c>true</c> to close the token when this object is disposed or finalized,
        /// <c>false</c> otherwise.</param>
        public SafeTokenHandle(IntPtr handle, bool ownHandle)
            : base(ownHandle)
        {
            this.SetHandle(handle);
        }

        /// <summary>
        /// Provides a <see cref="WindowsIdentity" /> object created from this user token. Depending
        /// on the type of token, this can be used to impersonate the user. The WindowsIdentity
        /// class will duplicate the token, so it is safe to use the WindowsIdentity object created by
        /// this method after disposing this object.
        /// </summary>
        /// <returns>a <see cref="WindowsIdentity" /> for the user that this token represents.</returns>
        /// <exception cref="InvalidOperationException">This object does not contain a valid handle.</exception>
        /// <exception cref="ObjectDisposedException">This object has been disposed and its token has
        /// been released.</exception>
        public WindowsIdentity GetWindowsIdentity()
        {
            if (this.IsClosed)
            {
                throw new ObjectDisposedException("The user token has been released.");
            }
            if (this.IsInvalid)
            {
                throw new InvalidOperationException("The user token is invalid.");
            }

            return new WindowsIdentity(this.handle);
        }

        /// <summary>
        /// Calls <see cref="NativeMethods.CloseHandle" /> to release this user token.
        /// </summary>
        /// <returns><c>true</c> if the function succeeds, <c>false</c> otherwise. To get extended
        /// error information, call <see cref="Marshal.GetLastWin32Error"/>.</returns>
        protected override bool ReleaseHandle()
        {
            return NativeMethods.CloseHandle(this.handle);
        }
    }

    /// <summary>
    /// A wrapper around a pointer to an array of Unicode strings (LPWSTR*) using a contiguous block of
    /// memory that can be freed by a single call to LocalFree.
    /// </summary>
    public sealed class SafeLocalAllocWStrArray : SafeLocalAllocArray<string>
    {
        /// <summary>
        /// Creates a new SafeLocalAllocWStrArray. This constructor should only be called by P/Invoke.
        /// </summary>
        private SafeLocalAllocWStrArray()
            : base(true)
        {
        }

        /// <summary>
        /// Creates a new SafeLocalAllocWStrArray to wrap the specified array.
        /// </summary>
        /// <param name="handle">The pointer to the unmanaged array to wrap.</param>
        /// <param name="ownHandle"><c>true</c> to release the array when this object
        /// is disposed or finalized, <c>false</c> otherwise.</param>
        public SafeLocalAllocWStrArray(IntPtr handle, bool ownHandle)
            : base(ownHandle)
        {
            this.SetHandle(handle);
        }

        /// <summary>
        /// Returns the Unicode string referred to by an unmanaged pointer in the wrapped array.
        /// </summary>
        /// <param name="index">The index of the value to retrieve.</param>
        /// <returns>the value at the position specified by <paramref name="index" /> as a string.</returns>
        protected override string GetArrayValue(int index)
        {
            return Marshal.PtrToStringUni(Marshal.ReadIntPtr(this.handle + IntPtr.Size * index));
        }
    }

    // This class is similar to the built-in SafeBuffer class. Major differences are:
    // 1. This class is less safe because it does not implicitly know the length of the array it wraps.
    // 2. The array is read-only.
    // 3. The type parameter is not limited to value types.

    /// <summary>
    /// Wraps a pointer to an unmanaged array of objects that can be freed by calling LocalFree.
    /// </summary>
    /// <typeparam name="T">The type of the objects in the array.</typeparam>
    public abstract class SafeLocalAllocArray<T> : SafeHandleZeroOrMinusOneIsInvalid
    {
        /// <summary>
        /// Creates a new SafeLocalAllocArray which specifies whether the array should be freed when this
        /// object is disposed or finalized.
        /// </summary>
        /// <param name="ownsHandle"><c>true</c> to reliably release the handle during the finalization phase;
        /// <c>false</c> to prevent reliable release (not recommended).</param>
        protected SafeLocalAllocArray(bool ownsHandle)
            : base(ownsHandle)
        {
        }

        /// <summary>
        /// Converts the unmanaged object at the position specified by <paramref name="index" /> to a managed
        /// object of type T.
        /// </summary>
        /// <param name="index">The index of the value to retrieve.</param>
        /// <returns>the value at the position specified by <paramref name="index" /> as a managed object of
        /// type T.</returns>
        protected abstract T GetArrayValue(int index);

        /// <summary>
        /// Frees the wrapped array by calling LocalFree.
        /// </summary>
        /// <returns><c>true</c> if the call to LocalFree succeeds, <c>false</c> if the call fails.</returns>
        protected override bool ReleaseHandle()
        {
            return (NativeMethods.LocalFree(this.handle) == IntPtr.Zero);
        }

        /// <summary>
        /// Copies the unmanaged array to the specified managed array.
        ///
        /// It is important that the length of <paramref name="array"/> be less than or equal to the length of
        /// the unmanaged array wrapped by this object. If it is not, at best garbage will be read and at worst
        /// an exception of type <see cref="AccessViolationException" /> will be thrown.
        /// </summary>
        /// <param name="array">The managed array to copy the unmanaged values to.</param>
        /// <exception cref="ObjectDisposedException">The unmanaged array wrapped by this object has been
        /// freed.</exception>
        /// <exception cref="InvalidOperationException">The pointer to the unmanaged array wrapped by this object
        /// is invalid.</exception>
        /// <exception cref="ArgumentNullException"><paramref name="array"/> is null.</exception>
        public void CopyTo(T[] array)
        {
            if (array == null)
            {
                throw new ArgumentNullException("array");
            }

            this.CopyTo(array, 0, array.Length);
        }

        /// <summary>
        /// Copies the unmanaged array to the specified managed array.
        ///
        /// It is important that <paramref name="length" /> be less than or equal to the length of
        /// the array wrapped by this object. If it is not, at best garbage will be read and at worst
        /// an exception of type <see cref="AccessViolationException" /> will be thrown.
        /// </summary>
        /// <param name="array">The managed array to copy the unmanaged values to.</param>
        /// <param name="index">The index to start at when copying to <paramref name="array" />.</param>
        /// <param name="length">The number of items to copy to <paramref name="array" />.</param>
        /// <exception cref="ObjectDisposedException">The unmanaged array wrapped by this object has been
        /// freed.</exception>
        /// <exception cref="InvalidOperationException">The pointer to the unmanaged array wrapped by this object
        /// is invalid.</exception>
        /// <exception cref="ArgumentNullException"><paramref name="array"/> is null.</exception>
        /// <exception cref="ArgumentOutOfRangeException"><paramref name="index"/> is less than zero.-or-
        /// <paramref name="index" /> is greater than the length of <paramref name="array"/>.-or-
        /// <paramref name="length"/> is less than zero.</exception>
        /// <exception cref="ArgumentException">The sum of <paramref name="index" /> and <paramref name="length" />
        /// is greater than the length of <paramref name="array" />.</exception>
        public void CopyTo(T[] array, int index, int length)
        {
            if (this.IsClosed)
            {
                throw new ObjectDisposedException(this.ToString());
            }
            if (this.IsInvalid)
            {
                throw new InvalidOperationException("This object's buffer is invalid.");
            }
            if (array == null)
            {
                throw new ArgumentNullException("array");
            }
            if (index < 0 || array.Length < index)
            {
                throw new ArgumentOutOfRangeException("index", "index must be a nonnegative integer that is no greater than array's length.");
            }
            if (length < 0)
            {
                throw new ArgumentOutOfRangeException("length", "length must be a nonnegative integer.");
            }
            if (array.Length < index + length)
            {
                // ArgumentException takes (message, paramName), in that order
                throw new ArgumentException("length is greater than the number of elements from index to the end of array.", "length");
            }

            for (int i = 0; i < length; ++i)
            {
                array[index + i] = this.GetArrayValue(i);
            }
        }
    }

    /// <summary>
    /// The type of logon operation to perform.
    /// </summary>
    internal enum LogonType : uint
    {
        LOGON32_LOGON_BATCH = 1,
        LOGON32_LOGON_INTERACTIVE = 2,
        LOGON32_LOGON_NETWORK = 3,
        LOGON32_LOGON_NETWORK_CLEARTEXT = 4,
        LOGON32_LOGON_NEW_CREDENTIALS = 5,
        LOGON32_LOGON_SERVICE = 6,
        LOGON32_LOGON_UNLOCK = 7
    }

    /// <summary>
    /// The logon provider to use.
    /// </summary>
    internal enum LogonProvider : uint
    {
        LOGON32_PROVIDER_DEFAULT = 0,
        LOGON32_PROVIDER_WINNT50 = 1,
        LOGON32_PROVIDER_WINNT40 = 2
    }
}
```
|
SafeHandle in C#
|
[
"c#",
".net"
] |
I'm working on a method that accepts an expression tree as a parameter, along with a type (or instance) of a class.
The basic idea is that this method will add certain things to a collection that will be used for validation.
```
public interface ITestInterface
{
//Specify stuff here.
}
private static void DoSomething<T>(Expression<Func<T, object>> expression, params IMyInterface[] rule)
{
// Stuff is done here.
}
```
The method is called as follows:
```
class TestClass
{
public int MyProperty { get; set; }
}
class OtherTestClass : ITestInterface
{
// Blah Blah Blah.
}
static void Main(string[] args)
{
DoSomething<TestClass>(t => t.MyProperty,
new OtherTestClass());
}
```
I'm doing it this way because I'd like for the property names that are passed in to be strongly typed.
A couple of things I'm struggling with..
1. Within DoSomething, I'd like to get a `PropertyInfo` type (from the body passed in) of T and add it to a collection along with rule[]. Currently, I'm thinking about using expression.Body and removing [propertyname] from "Convert.([propertyname])" and using reflection to get what I need. This seems cumbersome and wrong. Is there a better way?
2. Is this a specific pattern I'm using?
3. Lastly, any suggestions or clarifications as to my misunderstanding of what I'm doing are appreciated and / or resources or good info on C# expression trees are appreciated as well.
Thanks!
Ian
# Edit:
An example of what `expression.Body.ToString()` returns within the DoSomething method is a string that contains "Convert(t.MyProperty)" if called from the example above.
I do need it to be strongly typed, so it will not compile if I change a property name.
Thanks for the suggestions!
|
Collecting PropertyInfo objects from Expression.Body seems similar to [my solution](https://stackoverflow.com/questions/123088/possible-pitfalls-of-using-this-extension-method-based-shorthand#144982) to another question.
|
I rely heavily on expression trees to push a lot of what I want to do with my current application to compile-time, i.e. static type checking.
I traverse expression trees to translate them into something else which "makes sense".
One thing I've ended up doing a lot: instead of URLs, I rely on an MVC-like approach where I declare lambda functions, and I translate (interpret) the compiler-generated expression tree into a URL. When this URL is invoked, I do the opposite. This way, I have what I call compile-time checks for broken links, and this works great with refactoring and overloads as well. I think it's cool to think about using expression trees in this way.
You might wanna check out the visitor pattern, it's a pain to get started with because it doesn't make much sense in the beginning but it ties everything together and it's a very formal way to solve type checking in compiler construction. You could do the same, but instead of type checking emit what ever you need.
Something which I'm currently pounding my head against is building a simple framework for translating (or, actually, I should say interpreting) expression trees and emitting JavaScript. The idea is that the compiler-generated expression trees will translate into valid JavaScript which interfaces with some object model.
What's exciting about this is the way the compiler is always able to tell me when I go wrong and sure the end result is just a bunch of strings but the important part is how these strings got created. They went through some verification and that means something.
Once you get that going there is little you can't do with expression trees.
While working with the System.Reflection.Emit stuff, I found myself using expression trees to create a light-weight framework for dynamic compilation, which at compile time could basically tell me whether my dynamically created assemblies would compile as well, and this worked seamlessly with reflection and static type checking. I took this further and further and ended up with something which in the end saved a lot of time and proved to be very agile and robust.
So I love this kind of stuff, and this is what meta programming is all about, writing programs in your programs that do programs. I say keep it coming!
|
Advice on C# Expression Trees
|
[
"c#",
"reflection",
"expression-trees"
] |
I am asking this out of a curiosity. Basically my question is when you have a database which needs a row entry to have things which act like flags, what is the best practice? A good example of this would be the badges on stack overflow, or the operating system field in bugzilla. Any subset of the flags may be set for a given entry.
Usually, I do C and C++ work, so my gut reaction is to use an unsigned integer field as a set of bits which can be flipped... But I know that isn't a good solution for several reasons. The most obvious is scalability: there will be a hard upper limit on how many flags I can have.
I can also think of a couple of other solutions which scale better but would have performance issues because they would require multiple selects to get all the information.
So, what is the "right" way to do this?
|
If you really need an unbounded selection from a closed set of flags (e.g. stackoverflow badges), then the "relational way" would be to create a table of flags and a separate table which relates those flags to your target entities. Thus, users, flags and usersToFlags.
However, if space efficiency is a serious concern and query-ability is not, an unsigned mask would work almost as well.
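A minimal sketch of that three-table layout, using SQLite through Python's `sqlite3` module purely for illustration (the `users`/`flags`/`usersToFlags` names follow the answer above; the columns are made up):

```python
import sqlite3

# In-memory database demonstrating the users / flags / usersToFlags layout
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (userId INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE flags (flagId INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE usersToFlags (
        userId INTEGER REFERENCES users(userId),
        flagId INTEGER REFERENCES flags(flagId),
        PRIMARY KEY (userId, flagId)  -- each flag set at most once per user
    );
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.executemany("INSERT INTO flags VALUES (?, ?)", [(1, 'teacher'), (2, 'editor')])
conn.executemany("INSERT INTO usersToFlags VALUES (1, ?)", [(1,), (2,)])

# One join fetches every flag set for a user (no bit twiddling, no hard upper limit)
flags_for_alice = [row[0] for row in conn.execute(
    "SELECT f.name FROM flags f "
    "JOIN usersToFlags uf ON uf.flagId = f.flagId "
    "WHERE uf.userId = 1 ORDER BY f.name")]
print(flags_for_alice)  # -> ['editor', 'teacher']
```

Note that adding a new flag is just an INSERT into `flags`; nothing about the schema caps how many flags can exist, which is exactly the scalability concern the question raises about bitmasks.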
|
Generally speaking, I avoid bitmask fields. They're difficult to read in the future, and they require much more in-depth knowledge of the data to understand.
The relational solution has been proposed previously. Given the example you outlined, I would create something like this (in SQL Server):
```
CREATE TABLE Users (
    UserId INT IDENTITY(1, 1) PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    EmailAddress VARCHAR(255)
);

CREATE TABLE Badges (
    BadgeId INT IDENTITY(1, 1) PRIMARY KEY,
    [Name] VARCHAR(50),
    [Description] VARCHAR(255)
);

CREATE TABLE UserBadges (
    UserId INT REFERENCES Users(UserId),
    BadgeId INT REFERENCES Badges(BadgeId)
);
```
|
Flags in a database rows, best practices
|
[
"sql",
"database",
"flags"
] |
I'm relatively new to web application programming so I hope this question isn't too basic for everyone.
I created an HTML page with a FORM containing a dojox datagrid (v1.2) filled with rows of descriptions of different grocery items. After the user selects the item he's interested in, he will click on the "Submit" button.
At this point, I can get the javascript function to store the item ID number as a javascript variable BUT I don't know how to pass this ID onto the subsequent HTML page.
Should I just pass the ID as an URL query string parameter? Are there any other better ways?
EDIT: The overall process is like a shopping cart. The user will select the item from the grid and then on the next page the user will fill out some details and then checkout.
I should also mention that I'm using grails so this is happening in a GSP page but currently it only contains HTML.
|
You could just use a hidden input field; that gets transmitted as part of the form.
```
<html>
<head>
</head>
<body>
    <script type="text/javascript">
        function updateSelectedItemId() {
            document.myForm.selectedItemId.value = 2;
            alert(document.myForm.selectedItemId.value);
            // For you this would place the selected item id in the hidden
            // field instead of 2, and submit the form instead of alert
        }
    </script>
    Your grid comes here; it need not be in the form.
    <form name="myForm">
        <input type="hidden" name="selectedItemId" value="XXX">
        The submit button must be in the form.
        <input type="button" value="changeSelectedItem" onClick="updateSelectedItemId()">
    </form>
</body>
</html>
```
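Server-side, the hidden field arrives as an ordinary form parameter, whatever the framework (Grails included). As a rough illustration, this is how an `application/x-www-form-urlencoded` POST body carrying `selectedItemId` decodes; the body string here is a hand-written stand-in for what the browser would actually send:

```python
from urllib.parse import parse_qs

# The encoded form body the browser submits for the form above
body = "selectedItemId=2"

# parse_qs maps each field name to a list of its submitted values
params = parse_qs(body)
selected_item_id = params["selectedItemId"][0]
print(selected_item_id)  # -> '2'
```

The same decoding applies to a URL query string, which is why passing the ID as a query-string parameter (the question's other option) works equally well for non-sensitive values.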
|
That's a good approach, but it's better to use a server-side scripting language such as JSP, PHP, or ASP, where you can use simple POST and GET methods.
|
What is the best practice for passing variables from one HTML page to another?
|
[
"javascript",
"html",
"grails",
"dojo"
] |
I wrote a C# application for a client a couple of years ago, but I no longer have the source code. All I have is the EXE that I deployed on the client's PC. Is there a way I can generate C# source code from the EXE?
|
[Reflector](http://www.red-gate.com/products/reflector/index.htm) and its add-in [FileDisassembler](http://www.denisbauer.com/Home/ReflectorFileDisassembler).
Reflector will allow you to see the source code. FileDisassembler will allow you to convert it into a VS solution.
|
When Red Gate said there would no longer be a free version of .Net Reflector, I started using [ILSpy](https://github.com/icsharpcode/ILSpy/) and Telerik's [JustDecompile](http://www.telerik.com/products/decompiler.aspx). I have found ILSpy to decompile more accurately than JustDecompile (which is still in Beta). Red Gate has changed their decision and still have a free version of .Net Reflector, but now I like ILSpy.
From the ILSpy website (<https://github.com/icsharpcode/ILSpy/>):
[ILSpy](https://github.com/icsharpcode/ILSpy/) is the open-source .NET assembly browser and decompiler.
## ILSpy Features
* Assembly browsing
* IL Disassembly
* Decompilation to C#
* Supports lambdas and 'yield return'
* Shows XML documentation
* Saving of resources
* Search for types/methods/properties (substring)
* Hyperlink-based type/method/property navigation
* Base/Derived types navigation
* Navigation history
* BAML to XAML decompiler
* Save Assembly as C# Project
* Find usage of field/method
* Extensible via plugins (MEF)
**Update:**
April 15, 2012, ILSpy 2.0 was released. New features compared with version 1.0:
* Assembly Lists
* Support for decompiling Expression trees
* Support for lifted operators on nullables
* Decompile to Visual Basic
* Search for multiple strings separated by space (searching for "Assembly manager" in ILSpy.exe would find AssemblyListManager)
* Clicking on a local variable will highlight all other occurrences of that variable
* Ctrl+F can be used to search within the decompiled code view
**Update:**
* ILSpy 2.1 supports async/await decompilation
|
How do I decompile a .NET EXE into readable C# source code?
|
[
"",
"c#",
".net",
"reverse-engineering",
"decompiling",
""
] |
In my understanding of servlets, the servlet will be instantiated by the container, its `init()` method will be called once, and the servlet will live like a singleton until the JVM shuts down.
I do not expect my servlet to be serialized, since it will be constructed anew when the app server recovers or starts up normally. The servlet should hold no session-specific members, so it does not make sense for it to be written to disk and re-instantiated.
Is there a practical use for this?
My concern is that I might put some non-serializable fields in there, and then my app will mysteriously fail in a production environment where a different sort of session replication takes place.
|
Technically, I believe the servlet container is allowed to "passivate" the servlet object to disk, in a similar way that EJB session beans can be. So you're correct to ask the question if your app will fail due to non-serializable fields.
In practise, I've never heard of a container doing this, so it's really just legacy baggage from the bad old days of early J2EE. I wouldn't worry about it.
|
An HttpServlet should be serializable to disk so it can survive a restart of the servlet container. For example, Tomcat lets you set a flag that enables this kind of survival. The next option is transfer using JNDI. This is not garbage; it is just used only in extreme use cases.
|
Why does HttpServlet implement Serializable?
|
[
"",
"java",
"session",
"servlets",
"serializable",
""
] |
We'd like to override DataGridView's default behavior when using a mouse wheel with this control. By default, the DataGridView scrolls a number of rows equal to the SystemInformation.MouseWheelScrollLines setting. What we'd like to do is scroll just one item at a time.
(We display images in the DataGridView, which are somewhat large. Because of this, scrolling three rows (a typical system setting) is too much, often causing the user to scroll to items they can't even see.)
I've tried a couple things already and haven't had much success so far. Here are some issues I've run into:
1. You can subscribe to MouseWheel events but there's no way to mark the event as handled and do my own thing.
2. You can override OnMouseWheel but this never appears to be called.
3. You might be able to correct this in the base scrolling code but it sounds like a messy job since other types of scrolling (e.g. using the keyboard) come through the same pipeline.
Anyone have a good suggestion?
Here's the final code, using the wonderful answer given:
```
/// <summary>
/// Handle the mouse wheel manually due to the fact that we display
/// images, which don't work well when you scroll by more than one
/// item at a time.
/// </summary>
///
/// <param name="sender">
/// sender
/// </param>
/// <param name="e">
/// the mouse event
/// </param>
private void mImageDataGrid_MouseWheel(object sender, MouseEventArgs e)
{
// Hack alert! Through reflection, we know that the passed
// in event argument is actually a handled mouse event argument,
// allowing us to handle this event ourselves.
// See http://tinyurl.com/54o7lc for more info.
HandledMouseEventArgs handledE = (HandledMouseEventArgs) e;
handledE.Handled = true;
// Do the scrolling manually. Move just one row at a time.
int rowIndex = mImageDataGrid.FirstDisplayedScrollingRowIndex;
mImageDataGrid.FirstDisplayedScrollingRowIndex =
e.Delta < 0 ?
Math.Min(rowIndex + 1, mImageDataGrid.RowCount - 1):
Math.Max(rowIndex - 1, 0);
}
```
|
I just did a little scrounging and testing of my own. I used [Reflector](http://www.red-gate.com/products/reflector/) to investigate and discovered a couple things. The `MouseWheel` event provides a `MouseEventArgs` parameter, but the `OnMouseWheel()` override in `DataGridView` casts it to **`Handled`**`MouseEventArgs`. This also works when handling the `MouseWheel` event. `OnMouseWheel()` does indeed get called, and it is in `DataGridView`'s override that it uses `SystemInformation.MouseWheelScrollLines`.
So:
1. You could indeed handle the `MouseWheel` event, casting `MouseEventArgs` to `HandledMouseEventArgs` and set `Handled = true`, then do what you want.
2. Subclass `DataGridView`, override `OnMouseWheel()` yourself, and try to recreate all the code I read here in [Reflector](http://www.red-gate.com/products/reflector/) except for replacing `SystemInformation.MouseWheelScrollLines` with `1`.
The latter would be a huge pain because it uses a number of private variables (including references to the `ScrollBar`s) and you'd have to replace some with your own and get/set others using Reflection.
|
**UPDATE:** Since I've now learned that the `DataGridView` has a `MouseWheel` event, I've added a second, simpler override.
One way to accomplish this is to subclass the `DataGridView` and override the `WndProc` to add special handling of the `WM_MOUSEWHEEL` message.
This example catches the mouse wheel movement and replaces it with a call to `SendKeys.Send`.
*(This is a little different than just scrolling, since it also selects the next/previous row of the `DataGridView`. But it works.)*
```
public class MyDataGridView : DataGridView
{
private const uint WM_MOUSEWHEEL = 0x20a;
protected override void WndProc(ref Message m)
{
if (m.Msg == WM_MOUSEWHEEL)
{
var wheelDelta = ((int)m.WParam) >> 16;
if (wheelDelta < 0)
{
SendKeys.Send("{DOWN}");
}
if (wheelDelta > 0)
{
SendKeys.Send("{UP}");
}
return;
}
base.WndProc(ref m);
}
}
```
2nd take (with the same caveats as mentioned above):
```
public class MyDataGridView : DataGridView
{
protected override void OnMouseWheel(MouseEventArgs e)
{
if (e.Delta < 0)
SendKeys.Send("{DOWN}");
else
SendKeys.Send("{UP}");
}
}
```
|
How can you make a DataGridView scroll one item at a time using the mouse wheel?
|
[
"",
"c#",
".net",
"datagridview",
""
] |
I need to create an access (mdb) database without using the ADOX interop assembly.
How can this be done?
|
Before I throw away this code, it might as well live on stackoverflow
Something along these lines seems to do the trick:
```
if (!File.Exists(DB_FILENAME))
{
var cnnStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + DB_FILENAME;
// Use a late bound COM object to create a new catalog. This is so we avoid an interop assembly.
var catType = Type.GetTypeFromProgID("ADOX.Catalog");
object o = Activator.CreateInstance(catType);
catType.InvokeMember("Create", BindingFlags.InvokeMethod, null, o, new object[] {cnnStr});
OleDbConnection cnn = new OleDbConnection(cnnStr);
cnn.Open();
var cmd = cnn.CreateCommand();
cmd.CommandText = "CREATE TABLE VideoPosition (filename TEXT , pos LONG)";
cmd.ExecuteNonQuery();
}
```
This code illustrates that you can access the database using OleDbConnection once it's created with the ADOX.Catalog COM component.
|
I've done the same as Autsin: created an Access db and then included it in my project as a managed resource. Once there, it is included in the compiled code and you can copy it to hard disk as many times as you want. Empty databases are relatively small too, so there isn't much overhead.
The added bonus is the ability to set up the database in advance. If you know how it will be used or what tables will be added every time, you can reduce the amount of coding and the number of slow database queries.
|
How do I create a Microsoft Jet (Access) database without an interop assembly?
|
[
"",
"c#",
"interop",
"jet",
""
] |
I have created a non-visual component in C# which is designed as a placeholder for meta-data on a form.
The component has a property which is a collection of custom objects; this object is marked as Serializable and implements GetObjectData for serializing and a public constructor for deserializing.
In the resx file for the form it will generate binary data for storing the collection, however any time I make a change to the serialized class I get designer errors and need to delete the data manually out of the resx file and then recreate this data.
I have tried changing the constructor to have a try / catch block around each property in the class:
```
try
{
_Name = info.GetString("Name");
}
catch (SerializationException)
{
this._Name = string.Empty;
}
```
but it still crashes. The last error I got was that I had to implement IConvertible.
I would prefer to use XML serialization because I can at least see it. Is this possible for use by the designer?
Is there a way to make the serialization more stable and more resilient to changes?
Edit:
More information...better description maybe
I have a class which inherits from Component; it has one property which is a collection of Rules. The RulesCollection seems to need to be marked as Serializable, otherwise it does not retain its members.
The Rules class is also a Component with the attribute DesignTimeVisible(false) to stop it showing in the component tray; this class is not marked Serializable.
Having the collection marked as Serializable generates binary data in the resx file (not ideal) and the IDE reports that the Rules class is not Serializable.
I think this issue is getting beyond a simple question. So I will probably close it shortly.
If anyone has any links to something similar that would help a lot.
|
I have since discovered where I was going wrong.
The component I was working on implemented a custom collection (inherited from CollectionBase). I changed this to a List and added the DesignerSerializationVisibility(DesignerSerializationVisibility.Content) attribute to the List property; this list is also read-only. This would then produce code to generate all the component's properties and all the entries in the List.
The class stored in the list did not need any particular attributes, nor did it need to be serializable.
```
private List<Rule> _Rules;
[DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
public List<Rule> Rules
{
get { return _Rules; }
}
```
|
You might want to try the alternate approach of getting everything to serialize as generated code. To do that is very easy. Just implement your non-visual class from **Component**. Then expose your collection as you already are but ensure each object placed into the collection is itself derived from **Component**. By doing that everything is code generated.
|
Design time serialization in C#
|
[
"",
"c#",
".net",
"serialization",
""
] |
What is the easiest way to check if events have been logged in the eventlog during a period of time?
I want to perform a series of automated test steps and then check if any errors were logged to the Application Event Log, ignoring a few sources that I'm not interested in. I can use System.Diagnostics.EventLog and then look at the Entries collection, but it doesn't seem very usable for this scenario. For instance, Entries.Count can get smaller over time if the event log is removing old entries. I'd prefer some way to either query the log or monitor it for changes during a period of time, e.g.:
```
DateTime start = DateTime.Now;
// do some stuff...
foreach(EventLogEntry entry in CleverSolution.EventLogEntriesSince(start, "Application"))
{
// Now I can do stuff with entry, or ignore if its Source is one
// that I don't care about.
// ...
}
```
|
Just to be a good Wiki citizen and strive for completion, there are other ways. I didn't suggest it earlier because it is complete overkill for something that is only going to be run in-house as part of a test suite, and you said right in the title you wanted something easy.
But if you need to see events as they occur in shipping code, read on. Believe it or not there are *three* different Windows APIs for this thing at this point.
## NotifyChangeEventLog()
The original API for this sort of thing is called [NotifyChangeEventLog()](http://msdn.microsoft.com/en-us/library/aa363670(VS.85).aspx) and it was supported starting in Windows 2000. Essentially you use the [WIN32 event log APIs](http://msdn.microsoft.com/en-us/library/aa363654(VS.85).aspx) to open the event log, then you call this API with the handle you were given by the other API and an event handle. Windows will signal your event when there are new event log entries to look at.
I never used this API myself, because most of my interest was in remote event log access and this API explicitly does *not* support remote logs. However, the rest of the API set this belongs to *does* let you sequentially read remote logs if you have the right permissions.
## Windows Management Instrumentation
A second way is to use the [Windows Management Instrumentation API](http://msdn.microsoft.com/en-us/library/aa394582(VS.85).aspx), and this does support both local and remote logs. This is a COM/DCOM based API that has existed for several years in Windows, and the .NET Framework has a nice implementation of it in the [System.Management](http://msdn.microsoft.com/en-us/library/system.management.aspx) namespace. Essentially what you do is create an [EventQuery](http://msdn.microsoft.com/en-us/library/system.management.eventquery.aspx) that looks for the appearance of new WMI objects of the WMI type [Win32\_NTLogEvent](http://msdn.microsoft.com/en-us/library/aa394226(VS.85).aspx). The appearance of these will indicate new event log entries, and they will present pretty much in real time. The attributes on these objects contain all the details of the log entry. There's an [article from MSDN magazine](http://msdn.microsoft.com/en-us/magazine/cc302051.aspx) that talks about playing around with this stuff in Visual Studio.
Again, this would be total overkill for a test application, it would require far more code than your existing solution. But years ago I wrote a subsystem for a network management application that used the DCOM flavor of this API to gather the event logs off of all the servers on a network so we could alert on particular ones. It was pretty slick and darn near real time. If you implement this in C++ with DCOM, be prepared to deal with Multithreaded Apartments and a lot of hairy logic to detect if/when your connection to the remote server goes up or down.
## Windows Vista Event Log
Windows Vista (and Server 2008) have a whole new API suite relating to event logging and tracing. The [new event log is documented here](http://msdn.microsoft.com/en-us/library/aa385780(VS.85).aspx). It looks like there is an API called EvtSubscribe that allows you to [subscribe to events](http://msdn.microsoft.com/en-us/library/aa385771(VS.85).aspx). I have not used this API so I can't comment on its pros and cons.
|
That having been said, here's an answer that actually should be pretty straightforward even for your test application and is .NET Framework specific.
You need to open the EventLog before you start your test, and subscribe an event handler to the [EventLog.EntryWritten](http://msdn.microsoft.com/en-us/library/system.diagnostics.eventlog.entrywritten.aspx) event. This is the way that .NET exposes the NotifyChangeEventLog() Win32 API.
Move your current logic from `GetEventLogEntriesSince()` into the event handler, but instead of adding the events to a list for return, store them in a list you can retrieve from somewhere at the end of the run. You can retrieve the contents of the log entry from the EntryWrittenEventArgs argument which is passed, via its [Entry](http://msdn.microsoft.com/en-us/library/system.diagnostics.entrywritteneventargs.entry.aspx) property.
|
What is the easiest way using .net to check if events have been logged in the eventlog?
|
[
"",
"c#",
".net",
"testing",
"event-log",
""
] |
I am currently creating a custom control that needs to handle animation in a C# project. It is basically a listbox that contains a fixed number of elements that are subject to move. An element (another user control with a background image and a couple of generated labels) can move upwards, downwards or be taken out of the list.
I would like to create animated movement as the elements get moved around within the container custom control but it seems to me that moving controls around using lines such as
```
myCustomControl.Left -= m_iSpeed;
```
triggered within a timer event is flickery and has a terrible rendering, even with double buffering turned on.
So here's the question : **What is the best way to achieve a flicker-free animated C# control?** Should I just not create custom controls and handle all of the drawing within a panel's background image that I generate? Is there a super animation method that I have not discovered? :)
Thanks!
|
Your best bet for flicker-free animation is to do the painting yourself (use the Graphics object in the Paint event handler) and use double-buffering. In your custom control you will need code like this in the constructor:
```
this.SetStyle(ControlStyles.UserPaint | ControlStyles.OptimizedDoubleBuffer |
ControlStyles.AllPaintingInWmPaint | ControlStyles.SupportsTransparentBackColor,
true);
```
|
A similar discussion took place this morning on this question. [visual c# form update results in flickering.](https://stackoverflow.com/questions/181374/visual-c-form-update-results-in-flickering) so I will be lazy and give the same answer I gave there:
You could try to call **this.SuspendLayout();** before you start your move and **this.ResumeLayout(false);** when you have finished moving all of the controls. In this way all controls should draw at once and you should have less of a flicker.
*On a side note I have tried to reproduce this here at work, but seem to be failing. Can you give some more sample code that I can fix maybe?*
|
What is the best way to do a flicker-free animated C# custom control?
|
[
"",
"c#",
"animation",
"controls",
"doublebuffered",
""
] |
I've got a column in a database table (SQL Server 2005) that contains data like this:
```
TQ7394
SZ910284
T r1534
su8472
```
I would like to update this column so that the first two characters are uppercase. I would also like to remove any spaces between the first two characters. So `T q1234` would become `TQ1234`.
**The solution should be able to cope with multiple spaces between the first two characters.**
Is this possible in T-SQL? How about in ANSI-92? I'm always interested in seeing how this is done in other db's too, so feel free to post answers for PostgreSQL, MySQL, et al.
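To pin the rule down before writing any SQL, here is the intended transformation as a small Python sketch (illustration only; `normalize_code` is a made-up name): uppercase the first two non-space characters and drop any spaces between them, leaving everything after the second one unchanged.

```python
def normalize_code(s: str) -> str:
    # Index of the first non-space character, then of the next one after it.
    first = next(i for i, c in enumerate(s) if c != " ")
    second = next(i for i in range(first + 1, len(s)) if s[i] != " ")
    # Uppercase those two characters and drop any spaces between them.
    return (s[first] + s[second]).upper() + s[second + 1:]

for raw in ["TQ7394", "SZ910284", "T r1534", "su8472", "T  q1234"]:
    print(normalize_code(raw))
# TQ7394 / SZ910284 / TR1534 / SU8472 / TQ1234
```

Note that a space after the second non-space character (as in `ab 1234`) is deliberately left alone, matching the spec above.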
|
Here is a solution:
**EDIT:** **Updated to support replacement of multiple spaces between the first and the second non-space characters**
```
/* TEST TABLE */
DECLARE @T AS TABLE(code Varchar(20))
INSERT INTO @T SELECT 'ab1234x1' UNION SELECT ' ab1234x2'
UNION SELECT ' ab1234x3' UNION SELECT 'a b1234x4'
UNION SELECT 'a b1234x5' UNION SELECT 'a b1234x6'
UNION SELECT 'ab 1234x7' UNION SELECT 'ab 1234x8'
SELECT * FROM @T
/* INPUT
code
--------------------
ab1234x3
ab1234x2
a b1234x6
a b1234x5
a b1234x4
ab 1234x8
ab 1234x7
ab1234x1
*/
/* START PROCESSING SECTION */
DECLARE @s Varchar(20)
DECLARE @firstChar INT
DECLARE @secondChar INT
UPDATE @T SET
@firstChar = PATINDEX('%[^ ]%',code)
,@secondChar = @firstChar + PATINDEX('%[^ ]%', STUFF(code,1, @firstChar,'' ) )
,@s = STUFF(
code,
1,
@secondChar,
REPLACE(LEFT(code,
@secondChar
),' ','')
)
,@s = STUFF(
@s,
1,
2,
UPPER(LEFT(@s,2))
)
,code = @s
/* END PROCESSING SECTION */
SELECT * FROM @T
/* OUTPUT
code
--------------------
AB1234x3
AB1234x2
AB1234x6
AB1234x5
AB1234x4
AB 1234x8
AB 1234x7
AB1234x1
*/
```
|
```
UPDATE YourTable
SET YourColumn = UPPER(
SUBSTRING(
REPLACE(YourColumn, ' ', ''), 1, 2
)
)
+
SUBSTRING(YourColumn, 3, LEN(YourColumn))
```
|
Uppercase first two characters in a column in a db table
|
[
"",
"sql",
"sql-server",
"t-sql",
"language-agnostic",
""
] |
Often I want to add a value to a table or update the value if its key already exists. This can be accomplished in several ways, assuming a primary or unique key is set on the 'user\_id' and 'pref\_key' columns in the example:
**1. Blind insert, update if receiving a duplicate key error:**
```
// Try to insert as a new value
INSERT INTO my_prefs
(user_id, pref_key, pref_value)
VALUES (1234, 'show_help', 'true');
// If a duplicate-key error occurs run an update query
UPDATE my_prefs
SET pref_value = 'true'
WHERE user_id=1234 AND pref_key='show_help';
```
**2. Check for existence, then insert or update:**
```
// Check for existence
SELECT COUNT(*)
FROM my_prefs
WHERE user_id=1234 AND pref_key='show_help';
// If count is zero, insert
INSERT INTO my_prefs
(user_id, pref_key, pref_value)
VALUES (1234, 'show_help', 'true');
// If count is one, update
UPDATE my_prefs
SET pref_value = 'true'
WHERE user_id=1234 AND pref_key='show_help';
```
The first way seems preferable, as it will require only one query for new inserts and two for an update, whereas the second way will always require two queries. Is there anything I'm missing, though, that would make it a bad idea to blindly insert?
|
Will there be concurrent INSERTs to these rows? DELETEs?
"ON DUPLICATE" sounds great (the behavior is just what you want) provided that you're not concerned about portability to non-MySQL databases.
The "blind insert" seems reasonable and robust provided that rows are never deleted. (If the INSERT case fails because the row exists, the UPDATE afterward should succeed because the row still exists. But this assumption is false if rows are deleted - you'd need retry logic then.) On other databases without "ON DUPLICATE", you might consider an optimization if you find latency to be bad: you could avoid a database round trip in the already-exists case by putting this logic in a stored procedure.
The "check for existence" is tricky to get right if there are concurrent INSERTs. Rows could be added between your SELECT and your UPDATE. Transactions won't even really help - I think even at isolation level "serializable", you'll see "could not serialize access due to concurrent update" errors occasionally (or whatever the MySQL equivalent error message is). You'll need retry logic, so I'd say the person above who suggests using this method to avoid "exception-based programming" is wrong, as is the person who suggests doing the UPDATE first for the same reason.
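To make the single-statement upsert concrete, here is a runnable sketch using SQLite's `ON CONFLICT ... DO UPDATE` (SQLite 3.24+), which plays the same role as MySQL's `ON DUPLICATE KEY UPDATE`. The table and columns follow the question; everything else is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE my_prefs (
        user_id    INTEGER,
        pref_key   TEXT,
        pref_value TEXT,
        PRIMARY KEY (user_id, pref_key)
    )
""")

def set_pref(conn, user_id, key, value):
    # One statement, no read-then-write race: insert the row,
    # or update pref_value if the (user_id, pref_key) pair exists.
    conn.execute(
        "INSERT INTO my_prefs (user_id, pref_key, pref_value) VALUES (?, ?, ?) "
        "ON CONFLICT (user_id, pref_key) DO UPDATE SET pref_value = excluded.pref_value",
        (user_id, key, value),
    )

set_pref(conn, 1234, "show_help", "true")
set_pref(conn, 1234, "show_help", "false")  # same key: updates instead of failing
row = conn.execute(
    "SELECT pref_value FROM my_prefs WHERE user_id = 1234 AND pref_key = 'show_help'"
).fetchone()
print(row[0])  # false
```

Because the database resolves the conflict atomically, this sidesteps the retry logic the check-then-insert approach needs under concurrency.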
|
Have a look at the ON DUPLICATE KEY syntax in <http://dev.mysql.com/doc/refman/5.0/en/insert-select.html>
```
INSERT [LOW_PRIORITY | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name [(col_name,...)]
SELECT ...
[ ON DUPLICATE KEY UPDATE col_name=expr, ... ]
```
|
Is there a disadvantage to blindly using INSERT in MySQL?
|
[
"",
"sql",
"mysql",
""
] |
For PHP
I have a date I want line wrapped.
I have $date = '2008-09-28 9:19 pm';
I need the first space replaced with a br
to become
```
2008-09-28<br>9:19 pm
```
If it wasn't for that second space before PM, I would just str\_replace() it.
|
```
$date = preg_replace('/ /', '<br>', $date, 1);
```
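The fourth argument (`1`) is the replacement limit, which is what protects the space before "pm". For comparison, the same limited replacement in Python, where `count=1` plays the role of `preg_replace`'s limit:

```python
import re

date = "2008-09-28 9:19 pm"
wrapped = re.sub(" ", "<br>", date, count=1)  # replace only the first space
print(wrapped)  # 2008-09-28<br>9:19 pm
```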
|
`s/ /<br\/>/` should do it. Though PHP regex might be greedy and replace all spaces.
## edit
I support Ben Hoffstein's PHP solution. Where possible, avoid regex as it nearly *always* has unintended side effects.
|
How to do a regex replacement, adding characters in a date string?
|
[
"",
"php",
"regex",
""
] |
In a few large projects I have been working on lately, it seems to become increasingly important to choose one or the other (XML or annotations). As projects grow, consistency is very important for maintainability.
My questions are: what are the advantages of XML-based configuration over Annotation-based configuration and what are the advantages of Annotation-based configuration over XML-based configuration?
|
Annotations have their use, but they are not the one silver bullet to kill XML configuration. I recommend mixing the two!
For instance, if using Spring, it is entirely intuitive to use XML for the dependency injection portion of your application. This gets the code's dependencies away from the code which will be using it. By contrast, using some sort of annotation in the code that needs the dependencies makes the code aware of this automatic configuration.
However, instead of using XML for transaction management, marking a method as transactional with an annotation makes perfect sense, since this is information a programmer would probably wish to know. But the fact that an interface is going to be injected as a SubtypeY instead of a SubtypeX should not be included in the class, because if you now wish to inject SubtypeX, you have to change your code, whereas you had an interface contract before anyway. With XML, you would just need to change the XML mappings, and doing so is fairly quick and painless.
I haven't used JPA annotations, so I don't know how good they are, but I would argue that leaving the mapping of beans to the database in XML is also good, as the object shouldn't care where its information came from; it should just care what it can do with its information. But if you like JPA (I don't have any experience with it), by all means, go for it.
In general:
If an annotation provides functionality and acts as a comment in and of itself, and doesn't tie the code down to some specific process in order to function normally without this annotation, then go for annotations. For example, a transactional method marked as being transactional does not kill its operating logic, and serves as a good code-level comment as well. Otherwise, this information is probably best expressed as XML, because although it will eventually affect how the code operates, it won't change the main functionality of the code, and hence doesn't belong in the source files.
|
There is a wider issue here, that of externalised vs inlined meta-data. If your object model is only ever going to persisted in one way, then inlined meta-data (i.e. annotations) are more compact and readable.
If, however, your object model was reused in different applications in such a way that each application wanted to persist the model in different ways, then externalising the meta-data (i.e. XML descriptors) becomes more appropriate.
Neither one is better, and so both are supported, although annotations are more fashionable. As a result, new hair-on-fire frameworks like JPA tend to put more emphasis on them. More mature APIs like native Hibernate offer both, because it's known that neither one is enough.
|
Xml configuration versus Annotation based configuration
|
[
"",
"java",
"xml",
"spring",
"annotations",
""
] |
Our situation is as follows, but I'm curious about this problem in any situation.
We have a framework consisting of 4 projects:
* beans
* util
* framework
* web
We also have modules that need a version and depend on a version of beans and util.
Finally we have a customer project that consists of a specific version of the core projects and one or more modules.
Is there a standard way to version these projects?
What seems simple to me is becoming really complicated as we try to deliver releases to QA and then manage our ongoing development with the maintenance of the release (release = tag and possible branch).
I kind of prefer the following:
1.2.0 - major and minor versions + release.
1.2.1 - next release
1.2.0\_01 - bug fix in 1.2.0 release (branch)
etc.
Any ideas?
|
We use major.minor.bugfix. A major release only happens for huge changes. A minor release is called for when there is an API change. All other releases are bugfix releases. There's definitely utility in having a build or revision number there too for troubleshooting, although if you've got really rigorous CM you might not need to include it.
Coordinating among the versions of all these projects can be done really well with help from tools like Apache Ivy or Maven. The build of one project, with its own version number, can involve the aggregation of specific versions of (the products of) other projects, and so your build files provide a strict mapping of versions from the bottom up. Save it all in [insert favorite version control tool here] and you have a nice history recorded.
|
I use {major}.{minor}.{buildday}.{sequential}. For Windows, we use the utilities [stampver.exe](http://blog.dixo.net/downloads/stampver-win32-version-resource-stamping/) and [UpdateVersion.exe](http://code.mattgriffith.net/UpdateVersion/) for .NET projects that handle that mostly automatically.
|
How do you version your projects and manage releases?
|
[
"",
"java",
"language-agnostic",
"maven-2",
"versioning",
""
] |
I'm about to inherit a rather large Java enterprise project that has a large amount of third party dependencies. There are at least seventy JARs included, and some of them would seem to be unused, e.g. spring.jar, which I know isn't used.
It seems that over the years as various developers have touched upon the code base they have all tried out new project-of-the-month type libraries.
**How does one go about getting rid of these?** Within reason of course, as clearly some dependencies are helpful to not have to re-invent the wheel.
I'm obviously interested in java based projects but I'm welcome to answers across languages that people think will be helpful.
|
Personally, I think you have to start by assessing the scale of the problem. It's going to be fairly painful, but I'd make a list of the dependencies and work out exactly which parts of the project use which ones.
Then I'd work out exactly what features of each you're actually making use of (in many cases, you'll end up having a massive third party library which you're using a tiny part of).
Once you have this information, you'll at least know what you're dealing with.
My next step would be to look at all of the dependencies that you only use to a small extent. Checking around might uncover things that you could use from other libraries that would eliminate the lesser used libraries.
I'd also have a look around to see if there's anything small that you could just re-write and include in your own code-base.
Finally, I'd have a look around at the vendors of your dependencies and their competitors to see if the latest versions contain more functionality that will allow you to eliminate a few others.
Then you're just left wondering whether it's better to be highly dependent on a few vendors, or less dependent on a lot of vendors!! ;o)
|
structure101 <http://www.headwaysoftware.com/products/structure101/index.php>
It's a great tool for showing dependencies. I've been using it for a couple of years.
|
How does one weed out dependencies in a large project?
|
[
"",
"java",
"dependencies",
"project",
""
] |
I have an `<img>` in an HTML document that I would like to highlight as though the user had highlighted it using the mouse. Is there a way to do that using JavaScript?
I only need it to work in Mozilla, but any and all information is welcome.
*EDIT: The reason I want to select the image is actually not so that it appears highlighted, but so that I can then copy the selected image to the clipboard using XPCOM. So the img actually has to be selected for this to work.*
|
Here's an example which selects the first image on the page (which will be the Stack Overflow logo if you test it out on this page in Firebug):
```
var s = window.getSelection()
var r = document.createRange();
r.selectNode(document.images[0]);
s.addRange(r)
```
Relevant documentation:
* <http://developer.mozilla.org/en/DOM/window.getSelection>
* <http://developer.mozilla.org/en/DOM/range.selectNode>
* <http://developer.mozilla.org/en/DOM/Selection/addRange>
|
You might also want to call s.removeAllRanges() before s.addRange(r).
|
How can I select an <img> element programmatically using JavaScript?
|
[
"",
"javascript",
"html",
"firefox",
"dom",
""
] |
Currently we have a table that we use to track invitations. We have an email field that is indexed, but we also have three optional keys that the user can specify when adding new records. We don't allow duplicates, so we have to query whether the email plus the optional keys already exists. Currently the keys are only added to the select statement if they are specified. The normal case is only email is specified, and using the index it works fairly quickly. When the keys are added, performance drops.
Would adding three indexes affect performance for other operations? Keys are probably used infrequently enough that we wouldn't want to impact performance in the common case.
* email, key1
* email, key1, key2
* email, key1, key2, key3
The other idea is we add one composite index:
* email, key1, key2, key3
Then always use all 3 keys in the lookup (eg. key1 = mykey AND key2 is NULL AND key3 is NULL)
**See Also**
[Exact duplicate post](https://stackoverflow.com/questions/179085/multiple-indexes-vs-multi-column-indexes)
|
Personally I would recommend this approach.
Try the method with the single index that covers everything, if I recall correctly it will still perform well if you only query on the first of the included columns. Once you have the index in place, run the Index Advisor.
Then try the other route and repeat.
It really depends on your data.
I typically have been able to get by with 1 covering index, starting with the most frequently used key first.
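To see what the optimizer actually does with a single composite index, it helps to inspect the query plan. A minimal sketch using SQLite (table and index names invented for illustration; plan syntax differs per RDBMS) showing that a lookup on `email` alone still uses the composite index, because `email` is the leading column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE invitations (
    email TEXT NOT NULL, key1 TEXT, key2 TEXT, key3 TEXT)""")
# one composite index intended to cover all lookup variants
conn.execute("CREATE INDEX ix_inv ON invitations (email, key1, key2, key3)")

# the common case: equality on the leading column alone
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM invitations WHERE email = ?",
    ("a@example.com",),
).fetchall()
print(plan)  # the plan should mention ix_inv
```

The same technique (an index advisor, `EXPLAIN` output) is what tells you whether three narrower indexes would ever be chosen over the single wide one for your actual queries.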
|
It depends on how often the table is updated, and how complex the indexes are. If you go nuts creating indexes, then every time a record is inserted/updated/deleted, every index will have to be modified to reflect that information.
If you only put three indexes on, and they are relatively simple, then you shouldn't have a problem.
|
Individual indexes vs multiple field indexes
|
[
"",
"sql",
"indexing",
""
] |
Is there a more efficient way to convert an HTMLCollection to an Array, other than iterating through the contents of said collection and manually pushing each item into an array?
|
```
var arr = Array.prototype.slice.call( htmlCollection )
```
will have the same effect using "native" code.
**Edit**
Since this gets a lot of views, note (per @oriol's comment) that the following more concise expression is *effectively* equivalent:
```
var arr = [].slice.call(htmlCollection);
```
But note per @JussiR's comment, that unlike the "verbose" form, it does create an empty, unused, and indeed unusable array instance in the process. What compilers do about this is outside the programmer's ken.
**Edit**
Since ECMAScript 2015 (ES 6) there is also [*Array.from*](http://ecma-international.org/ecma-262/8.0/#sec-array.from):
```
var arr = Array.from(htmlCollection);
```
**Edit**
ECMAScript 2015 also provides the [spread operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax), which is functionally equivalent to `Array.from` (although note that `Array.from` supports a mapping function as the second argument).
```
var arr = [...htmlCollection];
```
I've confirmed that both of the above work on `NodeList`.
A performance comparison for the mentioned methods: <http://jsben.ch/h2IFA>
|
Not sure if this is the most efficient, but a concise ES6 syntax might be:
```
let arry = [...htmlCollection]
```
---
Edit: Another one, from Chris\_F comment:
```
let arry = Array.from(htmlCollection)
```
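Both forms work on any array-like object, not just DOM collections, so the mechanics can be sketched outside the browser (here with a hand-built array-like standing in for the `HTMLCollection`):

```javascript
// an array-like: indexed properties plus a length, but no Array methods
const fakeCollection = { 0: "div", 1: "span", 2: "img", length: 3 };

const viaSlice = Array.prototype.slice.call(fakeCollection);
const viaFrom = Array.from(fakeCollection);

// Note: the spread form [...fakeCollection] would throw here, because a plain
// array-like is not iterable; real HTMLCollections are, so spread works on them.
console.log(viaSlice); // [ 'div', 'span', 'img' ]
console.log(Array.isArray(viaFrom)); // true
```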
|
Most efficient way to convert an HTMLCollection to an Array
|
[
"",
"javascript",
"arrays",
"object",
""
] |
I am displaying a scrolled data table in a web page. This table has several thousands of dynamic rows, so it is loaded from the server (via AJAX).
The user can scroll up and down, so what I need is to **detect when the user reaches the end of the scrollbar** (that is, the last row at the bottom of the table) in order to request and show more data.
You can find this effect in Google Reader: when you scroll down to the last post in a given feed, Google requests and shows new posts in a transparent way, but I can't figure out how they achieve it.
By the way, right now I am using a [YUI Datatable](http://developer.yahoo.com/yui/datatable/)
|
Thank you for your answers. That's my final working code (inspired by Greg and [ajaxian.com](http://ajaxian.com/archives/implementing-infinite-scrolling-with-jquery)), that uses some jQuery functions and works with the [YUI DataTable](http://developer.yahoo.com/yui/datatable/).
```
$(".yui-dt-bd").scroll(load_more);
function load_more() {
if ($(this).scrollend()) {
alert("SCROLL END REACHED !");
// TODO load more data
}
}
$.fn.scrollend = function() {
return this[0].scrollHeight - this[0].scrollTop - this.height() <= 0;
}
```
My next step is to implement my own [YUI Paginator](http://developer.yahoo.com/yui/paginator/) to achieve a complete integration with YUI components :)
|
I'm not familiar with the specific element you are using, but in order
to implement this on a full size window, you can do the following:
```
$wnd.onscroll = function() {
  if (($wnd.height - $wnd.scrollTop) < SOME_MARGIN) doSomething();
};
```
Where scrollTop is essentially "how many pixels have been scrolled".
I assume applying this to the table you are working with will do the job.
|
"Endless scrolling" effect in a HTML table
|
[
"",
"javascript",
"datatable",
"yui",
"scroll",
"dhtml",
""
] |
Could you please explain what the practical usage is for the `internal` keyword in C#?
I know that the `internal` modifier limits access to the current assembly, but when and in which circumstance should I use it?
|
Utility or helper classes/methods that you would like to access from many other classes within the same assembly, but that you want to ensure code in other assemblies can't access.
From [MSDN](https://web.archive.org/web/20130305112130/http://msdn.microsoft.com/en-us/library/7c5ka91b(v=vs.80).aspx) (via archive.org):
> A common use of internal access is in component-based development because it enables a group of components to cooperate in a private manner without being exposed to the rest of the application code. For example, a framework for building graphical user interfaces could provide Control and Form classes that cooperate using members with internal access. Since these members are internal, they are not exposed to code that is using the framework.
You can also use the internal modifier along with the [`InternalsVisibleTo`](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx) assembly level attribute to create "friend" assemblies that are granted special access to the target assembly internal classes.
This can be useful for creation of unit testing assemblies that are then allowed to call internal members of the assembly to be tested. Of course no other assemblies are granted this level of access, so when you release your system, encapsulation is maintained.
|
Suppose Bob, working on another project, needs BigImportantClass from Project A. If the class is public, then Bob needs to get the people who own Project A to sign up to guarantee that BigImportantClass will be written to meet his needs, tested to ensure that it meets his needs, documented as meeting his needs, and that a process will be put in place to ensure that it will never be changed so as to no longer meet his needs.
If a class is internal then it doesn't have to go through that process, which saves budget for Project A that they can spend on other things.
The point of internal is not that it makes life difficult for Bob. It's that it allows you to control what expensive promises Project A is making about features, lifetime, compatibility, and so on.
|
Practical uses for the "internal" keyword in C#
|
[
"",
"c#",
".net-assembly",
"access-modifiers",
"internals",
""
] |
I've noticed RAII has been getting lots of attention on Stack Overflow, but in my circles (mostly C++) RAII is so obvious it's like asking what's a class or a destructor.
So I'm really curious whether that's because I'm surrounded daily by hard-core C++ programmers, and RAII just isn't that well known in general (including C++), or whether all this questioning on Stack Overflow is due to the fact that I'm now in contact with programmers that didn't grow up with C++, and in other languages people just don't use/know about RAII?
|
For people who are commenting in this thread about RAII (resource acquisition is initialisation), here's a motivational example.
```
class StdioFile {
FILE* file_;
std::string mode_;
static FILE* fcheck(FILE* stream) {
if (!stream)
throw std::runtime_error("Cannot open file");
return stream;
}
FILE* fdup() const {
int dupfd(dup(fileno(file_)));
if (dupfd == -1)
throw std::runtime_error("Cannot dup file descriptor");
return fdopen(dupfd, mode_.c_str());
}
public:
StdioFile(char const* name, char const* mode)
: file_(fcheck(fopen(name, mode))), mode_(mode)
{
}
StdioFile(StdioFile const& rhs)
: file_(fcheck(rhs.fdup())), mode_(rhs.mode_)
{
}
~StdioFile()
{
fclose(file_);
}
StdioFile& operator=(StdioFile const& rhs) {
FILE* dupstr = fcheck(rhs.fdup());
if (fclose(file_) == EOF) {
fclose(dupstr); // XXX ignore failed close
throw std::runtime_error("Cannot close stream");
}
file_ = dupstr;
return *this;
}
int
read(std::vector<char>& buffer)
{
int result(fread(&buffer[0], 1, buffer.size(), file_));
if (ferror(file_))
throw std::runtime_error(strerror(errno));
return result;
}
int
write(std::vector<char> const& buffer)
{
int result(fwrite(&buffer[0], 1, buffer.size(), file_));
if (ferror(file_))
throw std::runtime_error(strerror(errno));
return result;
}
};
int
main(int argc, char** argv)
{
StdioFile file(argv[1], "r");
std::vector<char> buffer(1024);
while (int hasRead = file.read(buffer)) {
// process hasRead bytes, then shift them off the buffer
}
}
```
Here, when a `StdioFile` instance is created, the resource (a file stream, in this case) is acquired; when it's destroyed, the resource is released. There is no `try` or `finally` block required; if the reading causes an exception, `fclose` is called automatically, because it's in the destructor.
The destructor is guaranteed to be called when the function leaves `main`, whether normally or by exception. In this case, the file stream is cleaned up. The world is safe once again. :-D
|
There are plenty of reasons why RAII isn't better known. First, the name isn't particularly obvious. If I didn't already know what RAII was, I'd certainly never guess it from the name. (Resource acquisition is initialization? What does that have to do with the destructor or cleanup, which is what *really* characterizes RAII?)
Another is that it doesn't work as well in languages without deterministic cleanup.
In C++, we know exactly when the destructor is called, we know the order in which destructors are called, and we can define them to do anything we like.
In most modern languages, everything is garbage-collected, which makes RAII trickier to implement. There's no reason why it wouldn't be possible to add RAII-extensions to, say, C#, but it's not as obvious as it is in C++. But as others have mentioned, Perl and other languages support RAII despite being garbage collected.
That said, it is still possible to create your own RAII-styled wrapper in C# or other languages. I did it in C# a while ago.
I had to write something to ensure that a database connection was closed immediately after use, a task which any C++ programmer would see as an obvious candidate for RAII.
Of course we could wrap everything in `using`-statements whenever we used a db connection, but that's just messy and error-prone.
My solution was to write a helper function which took a delegate as argument, and then when called, opened a database connection, and inside a using-statement, passed it to the delegate function, pseudocode:
```
T RAIIWrapper<T>(Func<DbConnection, T> f){
using (var db = new DbConnection()){
return f(db);
}
}
```
Still not as nice or obvious as C++-RAII, but it achieved roughly the same thing. Whenever we need a DbConnection, we have to call this helper function which guarantees that it'll be closed afterwards.
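Python reaches the same goal with context managers rather than destructors: the `with` statement guarantees that cleanup runs on scope exit, exception or not. A minimal sketch:

```python
class ManagedResource:
    """RAII-style wrapper: acquire on entry, release on exit, even on error."""
    def __init__(self, log):
        self.log = log

    def __enter__(self):
        self.log.append("acquired")
        return self

    def __exit__(self, exc_type, exc, tb):
        self.log.append("released")
        return False  # don't swallow exceptions

events = []
try:
    with ManagedResource(events):
        raise RuntimeError("boom")
except RuntimeError:
    pass

print(events)  # ['acquired', 'released'] -- cleanup ran despite the exception
```

This is the same guarantee the `StdioFile` destructor gives above: the release step cannot be skipped by an early return or an exception.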
|
Do programmers of other languages, besides C++, use, know or understand RAII?
|
[
"",
"c++",
"language-agnostic",
"raii",
""
] |
What's the most efficient way to read a stream into another stream? In this case, I'm trying to read data in a FileStream into a generic stream. I know I could do the following:
1. read line by line and write the data to the stream
2. read chunks of bytes and write to the stream
3. etc
I'm just trying to find the most efficient way.
Thanks
|
Stephen Toub discusses a stream pipeline in his MSDN .NET matters column [here](http://msdn.microsoft.com/en-us/magazine/cc163290.aspx). In the article he describes a CopyStream() method that copies from one input stream to another stream. This sounds quite similar to what you're trying to do.
|
I rolled together a quick extension method (so VS 2008 w/ 3.5 only):
```
public static class StreamCopier
{
private const long DefaultStreamChunkSize = 0x1000;
public static void CopyTo(this Stream from, Stream to)
{
if (!from.CanRead || !to.CanWrite)
{
return;
}
var buffer = from.CanSeek
? new byte[from.Length]
: new byte[DefaultStreamChunkSize];
int read;
while ((read = from.Read(buffer, 0, buffer.Length)) > 0)
{
to.Write(buffer, 0, read);
}
}
}
```
It can be used thus:
```
using (var input = File.OpenRead(@"C:\wrnpc12.txt"))
using (var output = File.OpenWrite(@"C:\wrnpc12.bak"))
{
input.CopyTo(output);
}
```
You can also swap the logic around slightly and write a `CopyFrom()` method as well.
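The chunked-copy loop above is language-agnostic; for illustration, here is the same pattern sketched in Python using in-memory streams (the chunk size is arbitrary, and the standard library's `shutil.copyfileobj` does essentially this for you):

```python
import io

CHUNK = 0x1000  # 4 KiB, mirroring DefaultStreamChunkSize above

def copy_stream(src, dst, chunk=CHUNK):
    """Copy src to dst in fixed-size chunks until EOF."""
    while True:
        block = src.read(chunk)
        if not block:
            break
        dst.write(block)

src = io.BytesIO(b"hello stream copying" * 1000)
dst = io.BytesIO()
copy_stream(src, dst)
print(dst.getvalue() == src.getvalue())  # True
```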
|
Read data in FileStream into a generic Stream
|
[
"",
"c#",
"stream",
""
] |
I need help understanding some C++ operator overload statements. The class is declared like this:
```
template <class key_t, class ipdc_t>
class ipdc_map_template_t : public ipdc_lockable_t
{
...
typedef map<key_t,
ipdc_t*,
less<key_t>> map_t;
...
```
The creator of the class has created an iterator for the internal map structure:
```
struct iterator : public map_t::iterator
{
iterator() {}
iterator(const map_t::iterator & it)
: map_t::iterator(it) {}
iterator(const iterator & it)
: map_t::iterator(
*static_cast<const map_t::iterator *>(&it)) {}
operator key_t() {return ((this->operator*()).first);} // I don't understand this.
operator ipdc_t*() const {return ((this->operator*()).second);} // or this.
};
```
And begin() and end() return the begin() and end() of the map:
```
iterator begin() {IT_ASSERT(is_owner()); return map.begin();}
iterator end() {return map.end();}
```
My question is, if I have an iterator, how do I use those overloads to get the key and the value?
```
ipdc_map_template_t::iterator iter;
for( iter = my_instance.begin();
iter != my_instance.end();
++iter )
{
key_t my_key = ??????;
ipdc_t *my_value = ??????;
}
```
|
These are typecast operators, so you can do this:
```
{
key_t key = iter;
ipdc_t *val = iter;
}
```
Or, since `ipdc_map_template::iterator` is a subclass of `std::map::iterator`, you can still use the original accessors (which I find more readable):
```
{
key_t key = (*iter).first;
ipdc_t *val = (*iter).second;
// or, equivalently
key_t key = iter->first;
ipdc_t *val = iter->second;
}
```
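What makes `key_t my_key = iter;` work is the user-defined conversion operator: initializing a variable of the target type from the object invokes the operator implicitly. A stripped-down, compilable sketch with invented names:

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for the iterator: one object, two implicit
// conversions, mirroring operator key_t() and operator ipdc_t*().
struct pair_view {
    int key;
    std::string value;
    operator int() const { return key; }            // like operator key_t()
    operator std::string() const { return value; }  // like operator ipdc_t*()
};

int extract_key(const pair_view& pv) {
    int k = pv;  // implicit conversion via operator int()
    return k;
}

std::string extract_value(const pair_view& pv) {
    std::string v = pv;  // implicit conversion via operator std::string()
    return v;
}
```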
|
The creator of the class has defined conversion (cast) operators.
So just assigning iter to a variable of the appropriate type invokes the matching operator and yields the correct value automatically.
**N.B.**: The value type is being stored as a pointer. So when you extract the value you need to specify the pointer to the value type that you specified in the map interface.
```
typedef ipdc_map_template_t<int,std::string> MyMap;
MyMap mymap;
for(MyMap::iterator iter = mymap.begin();
iter != mymap.end();
++iter )
{
int my_key = iter;
std::string* my_value = iter;
}
```
Not sure I agree with the direction taken here.
Does this make the code more readable? I would stick with the old-fashioned map iterator, which seems more usable. Is this container being defined in some specialized library that you need, or would it be beneficial to look at the boost pointer containers?
|
Operator overloading for C++ maps
|
[
"",
"c++",
"templates",
""
] |
What is the best way to manage the JavaScript files and the functions/objects context in an ASP.NET MVC app?
|
Google says <http://javascriptmvc.com/>
|
If you're looking for something like that, you should definitely check out [MooTools](http://mootools.net/) as they implement classes almost exactly the way you describe. Overall, I've found their approach to be very clean, extensible, and maintainable. For example, here is a class template I use to write all of my classes (note that it uses some MooTools-specific syntax):
```
var className = new Class ({
Implements: [Events, Options],
options: {
option1: 'option1',
option2: 'option2'
},
initialize: function(options){
this.setOptions(options);
},
function1: function(){
},
function2: function(){
}
});
```
|
What is a good and known JavaScript MVC pattern?
|
[
"",
"javascript",
"asp.net-mvc",
"design-patterns",
""
] |
So I am reorganizing a WinForms C# solution to help decouple it and make it cleaner and more organized. The solution tracks a small business's orders, etc.
I have broken out the projects so far into
**App.View** - all GUI Related Code
**App.Data** - just data structures and interfaces. No other implementation code
**App.BusinessLogic** - all business logic code that has no GUI references
I have some classes that I can't figure out where they belong. Please let me know your thoughts on which project each class should go in, or whether there is another project that should be created for them.
1. A Class that retrieves user preferences from a database
2. A Class that retrieves static data from our static data server and returns sets of data results.
3. A Class that brings down user entitlements
4. A model class that stores a hashtable of orders
5. A class that emails out messages on a user action
|
Actually, I think you have things a little off from a traditional layered architecture. Normally, the models of your data that your application works on would be kept in a business layer, along with the code to operate on them. Your data layer would have both the data models of your persistence framework and the code to interact with that framework. I think this might be the source of the confusion between the suggested locations of your classes and your reaction to it based on your comments.
From that perspective anything that retrieves or brings would necessarily be located in your data layer -- it's accessing data in persistent storage. What it retrieves is eventually converted into business layer objects that your business logic operates on. Things that are conceptual models -- like a table of orders -- or business actions belong in the business layer. I would agree with @Adron with, perhaps, the same confusion about where (3) goes depending on what it actually is.
More specifically:
1. User Preferences are business objects; the thing that retrieves them is a data layer object.
2. The static data maps on to a business object (table or view or something); the thing that accesses the external server is a data layer object.
3. The user entitlement is a business object; the thing that retrieves it is a data layer object.
4. A table of Orders is a business object.
5. Emailing is a business activity, so the thing that mails people is a business object.
[EDIT] My generalized 3-Tier Architecture for (simple) web apps
DataAccessLayer
This would include my TableAdapters and strongly typed DataTables and Factories that turn rows of my DataTables into business objects in pre-LINQ projects. Using LINQ this would include my DataContext and designer generated LINQ entities.
BusinessLayer
This would include any business logic, including validation and security. In pre-LINQ these would be my business objects and any other classes that implement the logic of the application. Using LINQ these are the partial class implementations of my LINQ entities to implement security and validation along with any other classes to implement business logic.
Presentation
These are my web forms -- basically the UI of the app. I do include some of the validation logic in the forms as an optimization, although these are also validated in the BL. This would also include any user controls.
**Note:** This is the logical structure. The project structure generally mirrors this, but there are some cases, like connections to web services, that may be directly included in the web project even though logically the components are really in the BL/DAL.
**Note:** I'll probably be moving to MVC over 3-Tier once ASP.NET MVC is in production. I've done some personal projects in Ruby/Rails and I really like the MVC paradigm for web apps.
|
You have specified that App.Data should contain only data structures and interfaces, no implementation code, which is fine if you want to do that, but that leaves you with nowhere to put your database access code except in your App.BusinessLogic assembly.
Perhaps you really need to rename App.Data to App.Model (or something similar), and have a new App.DataAccess assembly that talks to the database (perhaps implementing a Repository pattern). Having done that, I would split things up like this:
1. App.DataAccess
2. App.DataAccess
3. App.DataAccess
4. App.Model
5. App.BusinessLogic
|
Structuring a winforms C# solution
|
[
"",
"c#",
"winforms",
"model-view-controller",
"mvp",
"projects",
""
] |
A feature of C++ is the ability to create unnamed (anonymous) namespaces, like so:
```
namespace {
int cannotAccessOutsideThisFile() { ... }
} // namespace
```
You would think that such a feature would be useless -- since you can't specify the name of the namespace, it's impossible to access anything within it from outside. But these unnamed namespaces *are* accessible within the file they're created in, as if you had an implicit using-clause to them.
My question is, why or when would this be preferable to using static functions? Or are they essentially two ways of doing the exact same thing?
|
~~The C++ Standard reads in section 7.3.1.1 Unnamed namespaces, paragraph 2:~~
> The use of the static keyword is
> deprecated when declaring objects in a
> namespace scope, the unnamed-namespace
> provides a superior alternative.
Static only applies to names of objects, functions, and anonymous unions, not to type declarations.
## Edit:
The decision to deprecate this use of the `static` keyword (affecting visibility of a variable declaration in a translation unit) has been reversed ([ref](http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#1012)). In this case using a `static` or an unnamed `namespace` are back to being essentially two ways of doing the exact same thing. For more discussion please see [this](https://stackoverflow.com/questions/4726570/deprecation-of-the-static-keyword-no-more) SO question.
Unnamed `namespace`'s still have the advantage of allowing you to define translation-unit-local types. Please see [this](https://stackoverflow.com/questions/4422507/superiority-of-unnamed-namespace-over-static) SO question for more details.
Credit goes to [Mike Percy](https://stackoverflow.com/users/1220179/mike-percy) for bringing this to my attention.
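A small compilable sketch of that last point: both the helper type and the function below are local to their translation unit, so another file defining the same names would not collide.

```cpp
#include <cassert>  // for the demo assertions

namespace {
    // both the type and the function are local to this translation unit
    struct Counter {
        int total = 0;
    };

    int bump(Counter& c, int by) { return c.total += by; }
}

// visible entry point; uses the file-local helpers above
int run_demo() {
    Counter c;
    bump(c, 2);
    return bump(c, 3);
}
```

The `static` keyword could mark `bump` as translation-unit-local, but not the `Counter` type itself; that is the part only the unnamed namespace can do.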
|
Putting methods in an anonymous namespace prevents you from accidentally violating the [One Definition Rule](http://en.wikipedia.org/wiki/One_Definition_Rule), allowing you to never worry about naming your helper methods the same as some other method you may link in.
And, as pointed out by [luke](https://stackoverflow.com/a/154482/913098), anonymous namespaces are preferred by the standard over static members.
|
Why should you prefer unnamed namespaces over static functions?
|
[
"",
"c++",
"static",
"namespaces",
"unnamed-namespace",
""
] |
I have a controller with an action method as follows:
```
public class InventoryController : Controller
{
public ActionResult ViewStockNext(int firstItem)
{
// Do some stuff
}
}
```
And when I run it I get an error stating:
> The parameters dictionary does not contain a valid value of type 'System.Int32' for parameter 'firstItem'. To make a parameter optional its type should either be a reference type or a Nullable type.
I had it working at one point and I decided to try the function without parameters. Finding out that the controller was not persistent, I put the parameter back in; now it refuses to recognise the parameter when I call the method.
I'm using this url syntax to call the action:
```
http://localhost:2316/Inventory/ViewStockNext/11
```
Any ideas why I would get this error and what I need to do to fix it?
I've tried adding another method that takes an integer to the class it it also fails with the same reason. I've tried adding one that takes a string, and the string is set to null. I've tried adding one without parameters and that works fine, but of course it won't suit my needs.
|
Your routing needs to be set up along the lines of `{controller}/{action}/{firstItem}`. If you left the routing as the default `{controller}/{action}/{id}` in your `global.asax.cs` file, then you will need to pass in `id`.
```
routes.MapRoute(
"Inventory",
"Inventory/{action}/{firstItem}",
new { controller = "Inventory", action = "ListAll", firstItem = "" }
);
```
... or something close to that.
|
You can change firstItem to id and it will work.
You can change the routing in global.asax (I do not recommend that).
And, can't believe no one mentioned this, you can call:
```
http://localhost:2316/Inventory/ViewStockNext?firstItem=11
```
In a @Url.Action would be :
```
@Url.Action("ViewStockNext", "Inventory", new {firstItem=11});
```
depending on the type of what you are doing, the last will be more suitable.
Also, you should consider not having a ViewStockNext action, and instead a ViewStock action that takes an index. (my 2 cents)
|
ASP.NET MVC - passing parameters to the controller
|
[
"",
"c#",
"asp.net-mvc",
""
] |
I haven't been around Java development for 8 years, but am starting to build a NetBeans Web Application. When I walk through the Web Application wizard, it asks for the server I'm going to be using.
What would be the best and simplest server for me to start using with NetBeans?
|
Since the NetBeans IDE is a Sun product, I would assume that the Glassfish application server would be a natural fit.
That said, one of the pluses of developing a web application in Java is that the interface for working with the http is standardized (i.e. the Servlet specification), so that you can pick any servlet container you want: be it Glassfish, Tomcat, Jetty or Weblogic. Since it sounds to me that you're experimenting and you want to use something easy to administer, I might go with Glassfish. However, be open to revisit that decision when you need to actually deploy your web application in a production environment. Be sure to check out other options like Tomcat or Jetty.
|
Unless you are deploying to a full J2EE application server, I would recommend using Tomcat. Tomcat can run as a standalone web/servlet/jsp server and avoids some of the complexities of a full J2EE app server.
The web development bundle for Netbeans will include installers for and automated integration with Glassfish and Tomcat. You will get the "best" experience using Netbeans with those servers.
That said, the workflow in Netbeans can be easily integrated with other application servers. As of 6.1, this includes Sun Java System Application Server 8 and 9, GlassFish v1 and v2, Apache Tomcat 4, 5 and 6, JBoss 4, BEA WebLogic 10, IBM WebSphere 6.0 and 6.1, Sailfin V1. See the [Netbeans J2EE Features](http://www.netbeans.org/features/web/java-ee.html) site for more info.
|
What web server should I use with NetBeans?
|
[
"",
"java",
"netbeans",
"webserver",
""
] |
I have been asking myself this question for a long time now, and thought of posting it. C# doesn't support multiple inheritance (this is a fact). All classes created in C# derive from the 'Object' class (again a fact).
So if C# does not support multiple inheritance, then how are we able to extend a class even though it already extends the Object class?
Illustating with an example:
1. class A : object - Class A created.
2. class B : object - Class B created.
3. class A : B - this again is supported. What happens to the earlier association to object.
We are able to use object class methods in A after step 3. So has this turned into multi-level inheritance? If that is the case, then
1. class A : B
2. class C : B
3. class A : C - I must be able to access class B's methods in A. Which is not the case?
Can anyone please explain?
|
Joel's answer is correct. There is a difference between multiple inheritance and an inheritance tree (or derivation chain). In your example, you actually show an inheritance tree: one object inherits (derives) from another object higher in the tree. Multiple inheritance allows one object to inherit from multiple base classes.
Take, for example, the following tree:
```
public class BaseClass { }
public class SpecialBaseClass : BaseClass {}
public class SpecialtyDerivedClass : SpecialBaseClass {}
```
This is perfectly valid and says that SpecialtyDerivedClass inherits from SpecialBaseClass (SpecialtyDerivedClass' parent) which, in turn, derives from BaseClass (SpecialtyDerivedClass' grandparent).
Under the idea of multiple inheritance, the example would look like this:
```
public class BaseClass { }
public class SpecialBaseClass {}
public class SpecialtyDerivedClass : BaseClass, SpecialBaseClass {}
```
This is not allowed in .NET, but it says that SpecialityDerivedClass inherits from both BaseClass and SpecialBaseClass (which are both parents).
.NET does allow a form of multiple inheritance by allowing you to inherit from more than one interface. Changing the example above slightly:
```
public class BaseClass { }
public interface ISpecialBase {}
public interface ISpecialDerived {}
public class SpecialtyDerivedClass : BaseClass, ISpecialBase, ISpecialDerived {}
```
This says that SpecialtyDerivedClass inherits from BaseClass (its parent) and also ISpecialBase and ISpecialDerived (also parents, but more like step-parents, as interfaces can't specify functionality).
|
You're confusing multiple inheritance with an inheritance tree. You can inherit from something other than Object. It's just that Object is sitting way up there at the top of your tree. And someone can inherit your class, but because Object is still up there at the top, that class will also inherit from Object. Your "multi-level" inheritance is not multiple inheritance.
Multiple inheritance is when you inherit from two different trees, and .Net actually does support this after a fashion via interfaces.
|
Single Inheritance in C# - object class?
|
[
"",
"c#",
""
] |
So in my documentation it says:
> public event TreeViewPlusNodeCheckedEventHandler NodeChecked()
>
> You can use this event to cause a method to run whenever the
> check-box for a node is checked on the tree.
So how do I add a method to my code behind file that will run when a node is checked? The method I want to run is:
```
protected void TOCNodeCheckedServer(object sender, TreeViewPlusNodeEventArgs args)
{
TreeViewPlusNode aNode = args.Node;
if (!aNode.Checked)
return;
List<string> BaseLayers = new List<string>();
_arcTOCConfig.BaseDataLayers.CopyTo(BaseLayers);
List<MapResourceItem> mapResources = new List<MapResourceItem>();
if (BaseLayers.Contains(aNode.Text))
{
foreach (BaseDataLayerElement anEl in _arcTOCConfig.BaseDataLayers)
{
if (!aNode.Text.Equals(anEl.Name))
{
if (aNode.TreeViewPlus.Nodes.FindByValue(anEl.Name).Checked)
{
aNode.TreeViewPlus.Nodes.FindByValue(anEl.Name).Checked = false;
aNode.TreeViewPlus.Nodes.FindByValue(anEl.Name).Refresh();
MapResourceItem aMapResource = this.Map1.MapResourceManagerInstance.ResourceItems.Find(anEl.Name);
aMapResource.DisplaySettings.Visible = false;
this.Map1.RefreshResource(anEl.Name);
mapResources.Add(aMapResource);
this.Map1.MapResourceManagerInstance.ResourceItems.Remove(aMapResource);
}
else
{
MapResourceItem aMapResource = this.Map1.MapResourceManagerInstance.ResourceItems.Find(anEl.Name);
mapResources.Add(aMapResource);
this.Map1.MapResourceManagerInstance.ResourceItems.Remove(aMapResource);
}
}
}
foreach (MapResourceItem aMapResource in mapResources)
{
int count = this.Map1.MapResourceManagerInstance.ResourceItems.Count - 1;
this.Map1.MapResourceManagerInstance.ResourceItems.Insert(count, aMapResource);
this.Map1.MapResourceManagerInstance.CreateResource(aMapResource);
}
this.Map1.InitializeFunctionalities();
this.Map1.Refresh();
}
}
```
vs 2008
c#
.net 3.5
|
You need to assign a delegate to the event and have it run the method you want. Something like:
```
TreeViewControl.NodeChecked += new TreeViewPlusNodeCheckedEventHandler(TOCNodeCheckedServer);
```
|
Just add a handler to the event.
```
myTreeView.NodeChecked += new TreeViewPlusNodeCheckedEventHandler(TOCNodeCheckedServer);
```
or (because instantiating the TreeViewPlusNodeCheckedEventHandler isn't actually necessary)
```
myTreeView.NodeChecked += TOCNodeCheckedServer;
```
|
How can I use an event to cause a method to run?
|
[
"",
"c#",
".net",
"events",
"methods",
""
] |
What's the difference between:
```
class Child(SomeBaseClass):
def __init__(self):
super(Child, self).__init__()
```
and:
```
class Child(SomeBaseClass):
def __init__(self):
SomeBaseClass.__init__(self)
```
I've seen `super` being used quite a lot in classes with only single inheritance. I can see why you'd use it in multiple inheritance but am unclear as to what the advantages are of using it in this kind of situation.
---
This question is about technical implementation details and the distinction between different ways of accessing the base class `__init__` method. To close duplicate questions where OP is simply missing a `super` call and is asking why base class attributes aren't available, please use [Why don't my subclass instances contain the attributes from the base class (causing an AttributeError when I try to use them)?](https://stackoverflow.com/questions/10268603) instead.
|
The benefits of `super()` in single-inheritance are minimal -- mostly, you don't have to hard-code the name of the base class into every method that uses its parent methods.
However, it's almost impossible to use multiple-inheritance without `super()`. This includes common idioms like mixins, interfaces, abstract classes, etc. This extends to code that later extends yours. If somebody later wanted to write a class that extended `Child` and a mixin, their code would not work properly.
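A minimal sketch of that failure mode (class names here are illustrative, not from the question):

```python
class Base:
    def __init__(self):
        self.parts = ["base"]

class ChildWithSuper(Base):
    def __init__(self):
        super().__init__()          # delegates to whatever is next in the MRO
        self.parts.append("child")

class ChildWithout(Base):
    def __init__(self):
        Base.__init__(self)         # hard-codes the parent
        self.parts.append("child")

class Mixin(Base):
    def __init__(self):
        super().__init__()
        self.parts.append("mixin")

class Works(ChildWithSuper, Mixin):
    pass

class Broken(ChildWithout, Mixin):
    pass

print(Works().parts)   # ['base', 'mixin', 'child'] -- the mixin ran
print(Broken().parts)  # ['base', 'child']          -- the mixin was silently skipped
```

The hard-coded call skips `Mixin.__init__` entirely because it never consults the MRO.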
|
## What's the difference?
```
SomeBaseClass.__init__(self)
```
means to call `SomeBaseClass`'s `__init__`, while
```
super().__init__()
```
means to call a bound `__init__` from the parent class that follows `SomeBaseClass`'s child class (the one that defines this method) in the instance's Method Resolution Order (MRO).
If the instance is a subclass of *this* child class, there may be a different parent that comes next in the MRO.
### Explained simply
When you write a class, you want other classes to be able to use it. `super()` makes it easier for other classes to use the class you're writing.
As Bob Martin says, a good architecture allows you to postpone decision making as long as possible.
`super()` can enable that sort of architecture.
When another class subclasses the class you wrote, it could also be inheriting from other classes. And those classes could have an `__init__` that comes after this `__init__` based on the ordering of the classes for method resolution.
Without `super` you would likely hard-code the parent of the class you're writing (like the example does). This would mean that you would not call the next `__init__` in the MRO, and you would thus not get to reuse the code in it.
If you're writing your own code for personal use, you may not care about this distinction. But if you want others to use your code, using `super` is one thing that allows greater flexibility for users of the code.
### Python 2 versus 3
This works in Python 2 and 3:
```
super(Child, self).__init__()
```
This only works in Python 3:
```
super().__init__()
```
It works with no arguments by moving up in the stack frame and getting the first argument to the method (usually `self` for an instance method or `cls` for a class method - but could be other names) and finding the class (e.g. `Child`) in the free variables (it is looked up with the name `__class__` as a free closure variable in the method).
I used to prefer to demonstrate the cross-compatible way of using `super`, but now that Python 2 is largely deprecated, I will demonstrate the Python 3 way of doing things, that is, calling `super` with no arguments.
### Indirection with Forward Compatibility
What does it give you? For single inheritance, the examples from the question are practically identical from a static analysis point of view. However, using `super` gives you a layer of indirection with forward compatibility.
Forward compatibility is very important to seasoned developers. You want your code to keep working with minimal changes as you change it. When you look at your revision history, you want to see precisely what changed when.
You may start off with single inheritance, but if you decide to add another base class, you only have to change the line with the bases - if the bases change in a class you inherit from (say a mixin is added) you'd change nothing in this class.
In Python 2, getting the arguments to `super` and the correct method arguments right can be a little confusing, so I suggest using the Python 3 only method of calling it.
If you know you're using `super` correctly with single inheritance, that makes debugging less difficult going forward.
### Dependency Injection
Other people can use your code and inject parents into the method resolution:
```
class SomeBaseClass(object):
def __init__(self):
print('SomeBaseClass.__init__(self) called')
class UnsuperChild(SomeBaseClass):
def __init__(self):
print('UnsuperChild.__init__(self) called')
SomeBaseClass.__init__(self)
class SuperChild(SomeBaseClass):
def __init__(self):
print('SuperChild.__init__(self) called')
super().__init__()
```
Say you want to inject a class into the hierarchy, between the child and its base (for testing or some other reason):
```
class InjectMe(SomeBaseClass):
def __init__(self):
print('InjectMe.__init__(self) called')
super().__init__()
class UnsuperInjector(UnsuperChild, InjectMe): pass
class SuperInjector(SuperChild, InjectMe): pass
```
Using the un-super child fails to inject the dependency because the child you're using has hard-coded the method to be called after its own:
```
>>> o = UnsuperInjector()
UnsuperChild.__init__(self) called
SomeBaseClass.__init__(self) called
```
However, the class with the child that uses `super` can correctly inject the dependency:
```
>>> o2 = SuperInjector()
SuperChild.__init__(self) called
InjectMe.__init__(self) called
SomeBaseClass.__init__(self) called
```
### Addressing a comment
> Why in the world would this be useful?
Python linearizes a complicated inheritance tree via the [C3 linearization algorithm](https://en.wikipedia.org/wiki/C3_linearization) to create a Method Resolution Order (MRO).
We want methods to be looked up *in that order*.
For a method defined in a parent to find the next one in that order without `super`, it would have to
1. get the mro from the instance's type
2. look for the type that defines the method
3. find the next type with the method
4. bind that method and call it with the expected arguments
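As a rough Python sketch (illustrative only, not CPython's actual implementation), those four steps without `super` look something like this:

```python
def call_next_in_mro(instance, defining_class, method_name, *args, **kwargs):
    mro = type(instance).__mro__            # 1. get the MRO from the instance's type
    start = mro.index(defining_class) + 1   # 2. find the class that defines the method
    for klass in mro[start:]:               # 3. find the next class with the method
        if method_name in vars(klass):
            # 4. bind that method and call it with the expected arguments
            return vars(klass)[method_name](instance, *args, **kwargs)
    raise AttributeError(method_name)

class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        # equivalent to "child then " + super().greet()
        return "child then " + call_next_in_mro(self, Child, "greet")

print(Child().greet())  # child then base
```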
> The `UnsuperChild` should not have access to `InjectMe`. Why isn't the conclusion "Always avoid using `super`"? What am I missing here?
The `UnsuperChild` does *not* have access to `InjectMe`. It is the `UnsuperInjector` that has access to `InjectMe` - and yet cannot call that class's method from the method it inherits from `UnsuperChild`.
Both Child classes intend to call a method by the same name that comes next in the MRO, which might be *another* class it was not aware of when it was created.
The one without `super` hard-codes its parent's method - thus it has restricted the behavior of its method, and subclasses cannot inject functionality in the call chain.
The one *with* `super` has greater flexibility. The call chain for the methods can be intercepted and functionality injected.
You may not need that functionality, but subclassers of your code may.
## Conclusion
Always use `super` to reference the parent class instead of hard-coding it.
What you intend is to reference the parent class that is next-in-line, not specifically the one you see the child inheriting from.
Not using `super` can put unnecessary constraints on users of your code.
|
What does 'super' do in Python? - difference between super().__init__() and explicit superclass __init__()
|
[
"",
"python",
"oop",
"inheritance",
"multiple-inheritance",
"super",
""
] |
I'm a long-time Windows developer, having cut my teeth on win32 and early COM. I've been working with .NET since 2001, so I'm pretty fluent in C# and the CLR. I'd never heard of Castle Windsor until I started participating in Stack Overflow. I've read the Castle Windsor "Getting Started" guide, but it's not clicking.
Teach this old dog new tricks, and tell me why I should be integrating Castle Windsor into my enterprise apps.
|
Castle Windsor is an inversion of control tool. There are others like it.
It can give you objects with pre-built and pre-wired dependencies right in there. **An entire object graph created via reflection and configuration rather than the "new" operator.**
Start here: <http://tech.groups.yahoo.com/group/altdotnet/message/10434>
---
Imagine you have an email sending class. EmailSender. Imagine you have another class WorkflowStepper. Inside WorkflowStepper you need to use EmailSender.
You could always say `new EmailSender().Send(emailMessage);`
but that - the use of `new` - creates a TIGHT COUPLING that is hard to change. (this is a tiny contrived example after all)
So what if, instead of newing this bad boy up inside WorkflowStepper, you just passed it into the constructor?
So then whoever called it had to new up the EmailSender.
`new WorkflowStepper(emailSender).Step()`
Imagine you have hundreds of these little classes that only have one responsibility (google SRP).. and you use a few of them in WorkflowStepper:
`new WorkflowStepper(emailSender, alertRegistry, databaseConnection).Step()`
Imagine not worrying about the details of `EmailSender` when you are writing `WorkflowStepper` or `AlertRegistry`
You just worry about the concern you are working with.
Imagine this whole graph (tree) of objects and dependencies gets wired up at RUN TIME, so that when you do this:
`WorkflowStepper stepper = Container.Get<WorkflowStepper>();`
you get a real deal `WorkflowStepper` with all the dependencies automatically filled in where you need them.
## There is no `new`
It just *happens* - because it knows what needs what.
And you can write fewer defects with better designed, DRY code in a testable and repeatable way.
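To make the idea concrete, here is a toy auto-wiring container sketched in Python (purely illustrative -- this is not Castle Windsor's API, and the snake_case-to-CamelCase name-matching convention is invented for the example):

```python
import inspect

class Container:
    """Builds an object graph via reflection instead of hard-coded `new` calls."""
    def __init__(self):
        self._types = {}

    def register(self, cls):
        self._types[cls.__name__] = cls

    def get(self, cls):
        # Resolve each constructor parameter by matching its snake_case
        # name to a registered CamelCase class name, recursively.
        args = []
        for name, param in inspect.signature(cls.__init__).parameters.items():
            if name == "self" or param.kind in (param.VAR_POSITIONAL, param.VAR_KEYWORD):
                continue
            type_name = "".join(part.capitalize() for part in name.split("_"))
            args.append(self.get(self._types[type_name]))
        return cls(*args)

class EmailSender:
    def send(self, message):
        return f"sent: {message}"

class WorkflowStepper:
    def __init__(self, email_sender):   # dependency is injected; no `new` inside
        self.email_sender = email_sender

    def step(self):
        return self.email_sender.send("step done")

container = Container()
container.register(EmailSender)
stepper = container.get(WorkflowStepper)
print(stepper.step())  # sent: step done
```

`WorkflowStepper` never constructs its own `EmailSender`; the container discovers and supplies it at run time.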
|
Mark Seemann wrote an excellent book on DI (Dependency Injection), which is a subset of IOC. He also compares a number of containers. I cannot recommend this book enough. The book's name is: "Dependency Injection in .Net" <https://www.manning.com/books/dependency-injection-in-dot-net>
|
What is Castle Windsor, and why should I care?
|
[
"",
"c#",
"dependency-injection",
"inversion-of-control",
"castle-windsor",
""
] |
Is it possible to prevent stack allocation of an object and only allow it to be instiated with 'new' on the heap?
|
One way you could do this would be to make the constructors private and only allow construction through a static method that returns a pointer. For example:
```
class Foo
{
public:
~Foo();
static Foo* createFoo()
{
return new Foo();
}
private:
Foo();
Foo(const Foo&);
Foo& operator=(const Foo&);
};
```
|
In the case of C++11
```
class Foo
{
public:
~Foo();
static Foo* createFoo()
{
return new Foo();
}
Foo(const Foo &) = delete; // if needed, put as private
Foo & operator=(const Foo &) = delete; // if needed, put as private
Foo(Foo &&) = delete; // if needed, put as private
Foo & operator=(Foo &&) = delete; // if needed, put as private
private:
Foo();
};
```
|
Is it possible to prevent stack allocation of an object and only allow it to be instantiated with 'new'?
|
[
"",
"c++",
"c++-faq",
""
] |
I'm just starting to wean myself from ASP.NET UpdatePanels. I'm using jQuery and jTemplates to bind the results of a web service to a grid, and everything works fine.
Here's the thing: I'm trying to show a spinner GIF while the table is being refreshed (à la UpdateProgress in ASP.NET) I've got it all working, except that the spinner is frozen. To see what's going on, I've tried moving the spinner out from the update progress div and out on the page where I can see it the whole time. It spins and spins until the refresh starts, and stays frozen until the refresh is done, and then starts spinning again. Not really what you want from a 'please wait' spinner!
This is in IE7 - haven't had a chance to test in other browsers yet. Any thoughts? Is the ajax call or the client-side databinding so resource-intensive that the browser is unable to tend to its animated GIFs?
### Update
Here's the code that refreshes the grid. Not sure if this is synchronous or asynchronous.
```
updateConcessions = function(e) {
$.ajax({
type: "POST",
url: "Concessions.aspx/GetConcessions",
data: "{'Countries':'ga'}",
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function(msg) {
applyTemplate(msg);
},
error: function(XMLHttpRequest, textStatus, errorThrown) {
}
});
}
applyTemplate = function(msg) {
$('div#TemplateTarget').setTemplate($('div#TemplateSource').html());
$('div#TemplateTarget').processTemplate(msg);
}
```
### Update 2
I just checked the [jQuery documentation](http://docs.jquery.com/Ajax/jQuery.ajax#options) and the `$.ajax()` method is asynchronous by default. Just for kicks I added this
```
$.ajax({
async: true,
...
```
and it didn't make any difference.
|
It's not the Ajax call that's freezing the browser. It's the success handler (applyTemplate). Inserting HTML into a document like that can freeze IE, depending on how much HTML there is. It's because the IE UI is single threaded; if you notice, the actual IE menus are frozen too while this is happening.
As a test, try:
```
applyTemplate = function(msg) {
return;
}
```
|
I don't remember precisely what caused it, but we had a similar issue with IE6 in a busy box and we fixed it with this incredible hack in the Javascript:
```
setTimeout("document.images['BusyImage'].src=document.images['BusyImage'].src",10);
```
That just sets the image source to what it was before, but it is apparently enough to jostle IE out of its stupor.
edit: I think I remember what was causing this: We were loading the animation into a div with display: none. IE loads it and doesn't start the animation, because it's hidden. Unfortunately it doesn't start the animation when you set the containing block to display: block, so we used the above line of code to trick IE into reloading the image.
|
Why does my spinner GIF stop while jQuery ajax call is running?
|
[
"",
"javascript",
"jquery",
""
] |
I'm using a column of checkboxes in a YUI DataTable, and it works fine. But I haven't found a way to set a name and value attribute that I can use when the form is submitted.
Thanks in advance.
|
Does the API reference at <http://developer.yahoo.com/yui/docs/YAHOO.widget.CheckboxCellEditor.html> help?
|
In order to get a name and value attribute, you use a checkbox like this:
```
<input type="checkbox" name="the_name" value="the_value" />
```
In your server-side code, you would look into the POST or GET data for the name of the checkbox. If it is there, the checkbox was checked. If it isn't there, the checkbox was not checked.
<http://www.w3schools.com/HTMLDOM/dom_obj_checkbox.asp>
|
checkbox in YUI DataTable
|
[
"",
"javascript",
"yui",
""
] |
I have recently been thinking about the difference between the two ways of defining an array:
1. `int[] array`
2. `int array[]`
Is there a difference?
|
They are semantically identical. The `int array[]` syntax was only added to help C programmers get used to Java.
`int[] array` is much preferable, and less confusing.
|
There is one slight difference, if you happen to declare more than one variable in the same declaration:
```
int[] a, b; // Both a and b are arrays of type int
int c[], d; // WARNING: c is an array, but d is just a regular int
```
Note that this is bad coding style, although the compiler will almost certainly catch your error the moment you try to use `d`.
|
Difference between int[] array and int array[]
|
[
"",
"java",
"arrays",
""
] |
Is there a way to format a UTC time into any arbitrary string format I want in Java? Basically I was thinking of having some class take the timestamp; I pass it a string telling it how I want it formatted, and it returns the formatted string for me. Is there a way to do this?
|
The java.text.SimpleDateFormat class provides formatting and parsing for dates in a locale-sensitive manner.
The javadoc header for [SimpleDateFormat](http://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html) is a good source of detailed information. There is also a [Java Tutorial](https://docs.oracle.com/javase/tutorial/i18n/format/simpleDateFormat.html) with example usages.
|
The [DateFormat](http://java.sun.com/j2se/1.4.2/docs/api/java/text/DateFormat.html) class or [SimpleDateFormat](http://java.sun.com/j2se/1.4.2/docs/api/java/text/SimpleDateFormat.html) should get you there. For example, <http://www.epochconverter.com/> lists the following example to convert a epoch time to human readable timestamp with Java:
```
String date = new java.text.SimpleDateFormat("dd/MM/yyyy HH:mm:ss").format(new java.util.Date (epoch*1000));
```
|
Formatting Timestamps in Java
|
[
"",
"java",
"date",
""
] |
What’s the difference between the `System.Array.CopyTo()` and `System.Array.Clone()`?
|
The **[Clone()](http://msdn.microsoft.com/en-us/library/system.array.clone.aspx)** method returns a new array (a shallow copy) object containing all the elements in the original array. The **[CopyTo()](http://msdn.microsoft.com/en-us/library/system.array.copyto.aspx)** method copies the elements into another existing array. Both perform a shallow copy. A shallow copy means the contents (each array element) contains references to the same object as the elements in the original array. A deep copy (which neither of these methods performs) would create a new instance of each element's object, resulting in a different, yet identical object.
So the differences are:
1. `CopyTo` requires a destination array to already exist, while `Clone` returns a new array.
2. `CopyTo` lets you specify a starting index (if required) in the destination array.
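The shallow/deep distinction is easier to see in runnable form; here is a rough Python analogue (lists standing in for .NET arrays; names invented for the example):

```python
import copy

class Box:
    def __init__(self, value):
        self.value = value

source = [Box(1), Box(2)]

# Like Clone(): a brand-new array holding the same element references (shallow).
clone = list(source)
assert clone is not source and clone[0] is source[0]

# Like CopyTo(dest, 1): elements copied into an EXISTING array at a given index.
dest = [None] * 3
dest[1:1 + len(source)] = source
assert dest[1] is source[0] and dest[2] is source[1]

# A deep copy (which neither method performs) duplicates the elements too.
deep = copy.deepcopy(source)
assert deep[0] is not source[0] and deep[0].value == 1
print("all shallow/deep checks passed")
```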
|
One other difference not mentioned so far is that
* with `Clone()` the destination array need not exist yet since a new one is created from scratch.
* with `CopyTo()` not only does the destination array need to already exist, it needs to be large enough to hold all the elements in the source array from the index you specify as the destination.
|
Difference between the System.Array.CopyTo() and System.Array.Clone()
|
[
"",
"c#",
".net",
"arrays",
".net-2.0",
""
] |
I am using SQL Server 2005. I want to constrain the values in a column to be unique, while allowing NULLS.
My current solution involves a unique index on a view like so:
```
CREATE VIEW vw_unq WITH SCHEMABINDING AS
SELECT Column1
FROM MyTable
WHERE Column1 IS NOT NULL
CREATE UNIQUE CLUSTERED INDEX unq_idx ON vw_unq (Column1)
```
Any better ideas?
|
Pretty sure you can't do that, as it violates the purpose of unique constraints.
However, this person seems to have a decent work around:
<http://sqlservercodebook.blogspot.com/2008/04/multiple-null-values-in-unique-index-in.html>
|
Using SQL Server 2008, you can [create a filtered index](https://learn.microsoft.com/en-us/sql/relational-databases/indexes/create-filtered-indexes).
```
CREATE UNIQUE INDEX AK_MyTable_Column1 ON MyTable (Column1) WHERE Column1 IS NOT NULL
```
Another option is a trigger to check uniqueness, but this could affect performance.
|
How to create a unique index on a NULL column?
|
[
"",
"sql",
"sql-server",
"indexing",
"constraints",
"unique",
""
] |
I have code similar to this filtering entries in an Array of Objects:
```
var filterRegex = new RegExp(".*blah.*","ig");
if (filterRegex.test(events[i].thing) && events[i].show) {
console.log("SUCCESS: filtering thing " + i + " " + events[i].thing);
events[i].show = false;
numevents--;
}
```
I get inconsistent results with this if condition (checking with Firebug, both conditions are true individually, but *sometimes* the whole expression evaluates to false). HOWEVER, if I actually put an `alert()` call inside this if statement (like line 4), it becomes consistent and I get the result I want.
Can you see anything wrong with this logic and tell me why it's not always producing what is expected?
|
OK, I see it now. The key to your problem is the use of the `g` (global match) flag: when this is specified for a regex, it will be set up such that it can be executed multiple times, beginning each time at the place where it left off last time. It keeps a "bookmark" of sorts in its `lastIndex` property:
```
var testRegex = /blah/ig;
// logs: true 4
console.log(testRegex.test("blah blah"), testRegex.lastIndex);
// logs: true 9
console.log(testRegex.test("blah blah"), testRegex.lastIndex);
// logs: false 0
console.log(testRegex.test("blah blah"), testRegex.lastIndex);
```
The above example creates an instance of a very simple regex: it matches "blah", upper or lower case, anywhere in the string, and it can be matched multiple times (the `g` flag). On the first run, it matches the first "blah", and leaves `lastIndex` set to 4 (the index of the space after the first "blah"). The second run starts matching at the `lastIndex`, matches the second blah, and leaves `lastIndex` set to 9 - one past the end of the string. The third run doesn't match - `lastIndex` is bogus - and leaves `lastIndex` set to 0. A fourth run would therefore have the same results as the first.
Now, your expression is quite a bit more greedy than mine: it will match any number of any characters before or after "blah". Therefore, no matter what string you test on, if it contains "blah" it will always match the entire string and leave `lastIndex` set to the length of the string just tested. Meaning, if you were to call `test()` twice, the second test would always fail:
```
var filterRegex = /.*blah.*/ig;
// logs: true, 9
console.log(filterRegex.test("blah blah"), filterRegex.lastIndex);
// logs: false, 0
console.log(filterRegex.test("blah blah"), filterRegex.lastIndex);
```
Fortunately, since you create your regex immediately prior to calling `test()`, and never call `test()` more than once, you'll never run into unexpected behavior... **Unless** you're using a debugger that lets you add in another call to `test()` on the side. Yup. With Firebug running, a watch expression containing your call to `test()` will result in intermittent `false` results showing up, either in your code or in the watch results, depending on which one gets to it first. Driving you slowly insane...
Of course, without the g flag, livin' is easy:
```
var filterRegex = /.*blah.*/i;
// logs: true, 0
console.log(filterRegex.test("blah blah"), filterRegex.lastIndex);
// logs: true, 0
console.log(filterRegex.test("blah blah"), filterRegex.lastIndex);
```
### Suggestions
* Avoid the global flag when you don't need it.
* Be careful what you evaluate in the debugger: if there are side effects, it can affect the behavior of your program.
|
I just can't imagine there is any situation where two JavaScript expressions evaluate to true individually, but not when combined.
Are you sure both expressions actually produce a boolean value every time? (Okay, making `regex.test()` not produce a boolean value is difficult, but how about `event.show`? Might that be undefined at times?)
Do you refer to the correct index when saying `event[0].show`? Wouldn't you mean `event[i].show`?
|
Why am I seeing inconsistent JavaScript logic behavior looping with an alert() vs. without it?
|
[
"",
"javascript",
"regex",
""
] |
What are the pros and cons of using nested public C++ classes and enumerations? For example, suppose you have a class called `printer`, and this class also stores information on output trays, you could have:
```
class printer
{
public:
std::string name_;
enum TYPE
{
TYPE_LOCAL,
TYPE_NETWORK,
};
class output_tray
{
...
};
...
};
printer prn;
printer::TYPE type;
printer::output_tray tray;
```
Alternatively:
```
class printer
{
public:
std::string name_;
...
};
enum PRINTER_TYPE
{
PRINTER_TYPE_LOCAL,
PRINTER_TYPE_NETWORK,
};
class output_tray
{
...
};
printer prn;
PRINTER_TYPE type;
output_tray tray;
```
I can see the benefits of nesting private enums/classes, but when it comes to public ones, the office is split - it seems to be more of a style choice.
So, which do you prefer and why?
|
## Nested classes
There are several side effects to classes nested inside classes that I usually consider flaws (if not pure antipatterns).
Let's imagine the following code :
```
class A
{
public :
class B { /* etc. */ } ;
// etc.
} ;
```
Or even:
```
class A
{
public :
class B ;
// etc.
} ;
class A::B
{
public :
// etc.
} ;
```
So:
* **Privileged Access:** A::B has privileged access to all members of A (methods, variables, symbols, etc.), which weakens encapsulation
* **A's scope is candidate for symbol lookup:** code from inside B will see **all** symbols from A as possible candidates for a symbol lookup, which can confuse the code
* **forward-declaration:** There is no way to forward-declare A::B without giving a full declaration of A
* **Extensibility:** It is impossible to add another class A::C unless you are owner of A
* **Code verbosity:** putting classes into classes only makes headers larger. You can still separate this into multiple declarations, but there's no way to use namespace-like aliases, imports or usings.
In conclusion, barring exceptions (e.g. when the nested class is an intimate part of the nesting class... and even then...), I see no point in nested classes in normal code, as the flaws outweigh the perceived advantages by magnitudes.
Furthermore, it smells like a clumsy attempt to simulate namespacing without using C++ namespaces.
On the pro-side, you isolate this code, and if private, make it unusable but from the "outside" class...
## Nested enums
Pros: Everything.
Con: Nothing.
The fact is enum items will pollute the global scope:
```
// collision
enum Value { empty = 7, undefined, defined } ;
enum Glass { empty = 42, half, full } ;
// empty is from Value or Glass?
```
Only by putting each enum in a different namespace/class will you be able to avoid this collision:
```
namespace Value { enum type { empty = 7, undefined, defined } ; }
namespace Glass { enum type { empty = 42, half, full } ; }
// Value::type e = Value::empty ;
// Glass::type f = Glass::empty ;
```
Note that C++11 (formerly C++0x) defined the `enum class`:
```
enum class Value { empty, undefined, defined } ;
enum class Glass { empty, half, full } ;
// Value e = Value::empty ;
// Glass f = Glass::empty ;
```
exactly for this kind of problem.
|
One con that can become a big deal for large projects is that it is impossible to make a forward declaration for nested classes or enums.
|
Pros and cons of using nested C++ classes and enumerations?
|
[
"",
"c++",
"class",
"enums",
"nested",
""
] |
I know that this is a simple question for PHP guys but I don't know the language and just need to do a simple "get" from another web page when my page is hit. i.e. signal the other page that this page has been hit.
EDIT: curl is not available to me.
|
If URL `fopen` wrappers are on (they are by default), you can use:
```
file_get_contents('http://www.example.org');
```
Note that this happens synchronously, so your page won't finish until the request has completed. It would be better to log access to a logfile (or database) and export the data occasionally. Alternatively, you could do the request after your page has completed and output has been sent to the client.
|
Beware `file_get_contents()` and `fopen()`:
> If PHP has decided that filename specifies a registered protocol, and that protocol is registered as a network URL, PHP will check to make sure that allow\_url\_fopen is enabled. If it is switched off, PHP will emit a warning and the fopen call will fail.
|
How do I get using php?
|
[
"",
"php",
""
] |
I have a form showing progress messages as a fairly long process runs. It's a call to a web service so I can't really show a percentage complete figure on a progress bar meaningfully. (I don't particularly like the Marquee property of the progress bar)
I would like to show an animated GIF to give the process the feel of some activity (e.g. files flying from one computer to another, as in the Windows copy dialog).
How do you do this?
|
It's not too hard.
1. Drop a picturebox onto your form.
2. Add the .gif file as the image in the picturebox
3. Show the picturebox when you are loading.
**Things to take into consideration:**
* Disabling the picturebox will prevent the gif from being animated.
**Another way of doing it:**
Another way that I have found that works quite well is the async dialog control that I found on the [code project](http://www.codeproject.com/KB/dialog/asyncdialog.aspx)
|
I had the same problem: the whole form (including the GIF) stopped redrawing itself because of a long operation working in the background. Here is how I solved it.
```
private void MyThreadRoutine()
{
this.Invoke(this.ShowProgressGifDelegate);
//your long running process
System.Threading.Thread.Sleep(5000);
this.Invoke(this.HideProgressGifDelegate);
}
private void button1_Click(object sender, EventArgs e)
{
ThreadStart myThreadStart = new ThreadStart(MyThreadRoutine);
Thread myThread = new Thread(myThreadStart);
myThread.Start();
}
```
I simply created another thread to be responsible for this operation. Thanks to this, the initial form continues redrawing without problems (including my GIF animating). ShowProgressGifDelegate and HideProgressGifDelegate are delegates in the form that set the Visible property of the pictureBox containing the GIF to true/false.
|
How do you show animated GIFs on a Windows Form (c#)
|
[
"",
"c#",
"winforms",
"animated-gif",
""
] |
How can I change the class of an HTML element in response to an `onclick` or any other events using JavaScript?
|
## Modern HTML5 Techniques for changing classes
Modern browsers have added [**classList**](https://developer.mozilla.org/en-US/docs/DOM/element.classList) which provides methods to make it easier to manipulate classes without needing a library:
```
document.getElementById("MyElement").classList.add('MyClass');
document.getElementById("MyElement").classList.remove('MyClass');
if ( document.getElementById("MyElement").classList.contains('MyClass') )
document.getElementById("MyElement").classList.toggle('MyClass');
```
Unfortunately, these do not work in Internet Explorer prior to v10, though there is a [shim](http://en.wikipedia.org/wiki/Shim_(computing)) to add support for it to IE8 and IE9, available from [this page](https://developer.mozilla.org/en-US/docs/DOM/element.classList). It is, though, getting more and more [supported](http://caniuse.com/#feat=classlist).
## Simple cross-browser solution
The standard JavaScript way to select an element is using [`document.getElementById("Id")`](https://developer.mozilla.org/en-US/docs/DOM/document.getElementById), which is what the following examples use - you can of course obtain elements in other ways, and in the right situation may simply use `this` instead - however, going into detail on this is beyond the scope of the answer.
### To change all classes for an element:
To replace all existing classes with one or more new classes, set the className attribute:
```
document.getElementById("MyElement").className = "MyClass";
```
(You can use a space-delimited list to apply multiple classes.)
### To add an additional class to an element:
To add a class to an element, without removing/affecting existing values, append a space and the new classname, like so:
```
document.getElementById("MyElement").className += " MyClass";
```
### To remove a class from an element:
To remove a single class from an element, without affecting other potential classes, a simple regex replace is required:
```
document.getElementById("MyElement").className =
document.getElementById("MyElement").className.replace
( /(?:^|\s)MyClass(?!\S)/g , '' )
/* Code wrapped for readability - above is all one statement */
```
An explanation of this regex is as follows:
```
(?:^|\s) # Match the start of the string or any single whitespace character
MyClass # The literal text for the classname to remove
(?!\S) # Negative lookahead to verify the above is the whole classname
# Ensures there is no non-space character following
# (i.e. must be the end of the string or space)
```
The `g` flag tells the replace to repeat as required, in case the class name has been added multiple times.
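As a sanity check, the same regex can be exercised on a plain string without any DOM — `removeClass` here is a hypothetical helper written for this example, not part of any library:

```javascript
// Hypothetical helper: apply the class-removal regex to a className string.
function removeClass(className, cls) {
    return className.replace(new RegExp('(?:^|\\s)' + cls + '(?!\\S)', 'g'), '');
}

console.log(removeClass('foo MyClass bar MyClass', 'MyClass')); // "foo bar"
console.log(removeClass('MyClassX foo', 'MyClass'));            // unchanged: "MyClassX foo"
```

Note how the negative lookahead keeps `MyClassX` intact, since it is a different class name that merely starts with the same text.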
### To check if a class is already applied to an element:
The same regex used above for removing a class can also be used as a check as to whether a particular class exists:
```
if ( document.getElementById("MyElement").className.match(/(?:^|\s)MyClass(?!\S)/) )
```
### Assigning these actions to onClick events:
Whilst it is possible to write JavaScript directly inside the HTML event attributes (such as `onClick="this.className+=' MyClass'"`) this is not recommended behavior. Especially on larger applications, more maintainable code is achieved by separating HTML markup from JavaScript interaction logic.
The first step to achieving this is by creating a function, and calling the function in the onClick attribute, for example:
```
<script type="text/javascript">
function changeClass(){
// Code examples from above
}
</script>
...
<button onClick="changeClass()">My Button</button>
```
*(It is not required to have this code in script tags, this is simply for the brevity of example, and including the JavaScript in a distinct file may be more appropriate.)*
The second step is to move the onClick event out of the HTML and into JavaScript, for example using [addEventListener](https://developer.mozilla.org/en-US/docs/DOM/element.addEventListener)
```
<script type="text/javascript">
function changeClass(){
// Code examples from above
}
window.onload = function(){
document.getElementById("MyElement").addEventListener( 'click', changeClass);
}
</script>
...
<button id="MyElement">My Button</button>
```
(Note that the window.onload part is required so that the contents of that function are executed *after* the HTML has finished loading - without this, the MyElement might not exist when the JavaScript code is called, so that line would fail.)
## JavaScript Frameworks and Libraries
The above code is all in standard JavaScript, however, it is common practice to use either a framework or a library to simplify common tasks, as well as benefit from fixed bugs and edge cases that you might not think of when writing your code.
Whilst some people consider it overkill to add a ~50 KB framework for simply changing a class, if you are doing any substantial amount of JavaScript work or anything that might have unusual cross-browser behavior, it is well worth considering.
*(Very roughly, a library is a set of tools designed for a specific task, whilst a framework generally contains multiple libraries and performs a complete set of duties.)*
The examples above have been reproduced below using [jQuery](http://jquery.com), probably the most commonly used JavaScript library (though there are others worth investigating too).
(Note that `$` here is the jQuery object.)
### Changing Classes with jQuery:
```
$('#MyElement').addClass('MyClass');
$('#MyElement').removeClass('MyClass');
if ( $('#MyElement').hasClass('MyClass') )
```
In addition, jQuery provides a shortcut for adding a class if it doesn't apply, or removing a class that does:
```
$('#MyElement').toggleClass('MyClass');
```
### Assigning a function to a click event with jQuery:
```
$('#MyElement').click(changeClass);
```
or, without needing an id:
```
$(':button:contains(My Button)').click(changeClass);
```
|
You could also just do:
```
document.getElementById('id').classList.add('class');
document.getElementById('id').classList.remove('class');
```
And to toggle a class (remove if exists else add it):
```
document.getElementById('id').classList.toggle('class');
```
|
How can I change an element's class with JavaScript?
|
[
"",
"javascript",
"html",
"dom",
""
] |
I have a java webapp that has to be deployed on either Win or Linux machines. I now want to add log4j for logging and I'd like to use a relative path for the log file as I don't want to change the file path on every deployment. The container will most likely be Tomcat but not necessarily.
What's the best way of doing this?
|
I've finally done it in this way.
Added a ServletContextListener that does the following:
```
public void contextInitialized(ServletContextEvent event) {
ServletContext context = event.getServletContext();
System.setProperty("rootPath", context.getRealPath("/"));
}
```
Then in the log4j.properties file:
```
log4j.appender.file.File=${rootPath}WEB-INF/logs/MyLog.log
```
By doing it in this way Log4j will write into the right folder as long as you don't use it before the "rootPath" system property has been set. This means that you cannot use it from the ServletContextListener itself but you should be able to use it from anywhere else in the app.
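The substitution mechanism can be sketched with plain system properties — `RootPathDemo` and the path below are hypothetical, standing in for the listener and the real deployment path:

```java
// Plain-Java sketch of the same mechanism: log4j expands ${rootPath} from
// system properties, so setting the property early makes the relative path
// in log4j.properties resolve correctly. The path here is made up.
public class RootPathDemo {
    public static void main(String[] args) {
        // In the real app this value comes from context.getRealPath("/")
        System.setProperty("rootPath", "/srv/webapps/myapp/");
        String logFile = System.getProperty("rootPath") + "WEB-INF/logs/MyLog.log";
        System.out.println(logFile); // /srv/webapps/myapp/WEB-INF/logs/MyLog.log
    }
}
```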
It should work on every web container and OS, as it's not dependent on a container-specific system property and it's not affected by OS-specific path issues.
Tested with Tomcat and Orion web containers and on Windows and Linux and it works fine so far.
What do you think?
|
Tomcat sets a catalina.home system property. You can use this in your log4j properties file. Something like this:
```
log4j.rootCategory=DEBUG,errorfile
log4j.appender.errorfile.File=${catalina.home}/logs/LogFilename.log
```
On Debian (including Ubuntu), `${catalina.home}` will not work because that points at /usr/share/tomcat6 which has no link to /var/log/tomcat6. Here just use `${catalina.base}`.
If you're using another container, try to find a similar system property, or define your own. Setting the system property will vary by platform and container. But for Tomcat on Linux/Unix I would create a setenv.sh in the CATALINA\_HOME/bin directory. It would contain:
```
export JAVA_OPTS="-Dcustom.logging.root=/var/log/webapps"
```
Then your log4j.properties would be:
```
log4j.rootCategory=DEBUG,errorfile
log4j.appender.errorfile.File=${custom.logging.root}/LogFilename.log
```
|
Log4j, configuring a Web App to use a relative path
|
[
"",
"java",
"web-applications",
"log4j",
""
] |
The title pretty much says it all. I want to create a SqlConnection and then check that connection without opening a database, cause at that point I don't know yet where will I connect to. Is it possible to do that?
The SqlConnection class has an 'Open' member which tries to open the database you'd set in the Database property, and if you didn't set one, SQL Server tries with the master db. The thing is, the user I'm trying to connect with (MACHINE\ASPNET) has access to some databases (which I don't know yet) and not the master db.
Regards,
Seba
|
Connect to tempdb. Everybody has access to tempdb, so you will be able to authenticate yourself. Later, when you know the actual database, you can change this property to connect to the db you want.
|
I am not sure if this is what you need.
[Check if a user has access to a database in Sql Server 2005](http://www.sqlservercurry.com/2008/03/check-if-user-has-access-to-database.html?showComment=1221349680000)
```
SELECT HAS_DBACCESS('Northwind');
```
HAS\_DBACCESS returns information about whether the user has access to the specified database (BOL).
Find all databases that the current user has access to
```
SELECT [Name] as DatabaseName from master.dbo.sysdatabases
WHERE ISNULL(HAS_DBACCESS ([Name]),0)=1
ORDER BY [Name]
```
|
How to test SqlServer connection without opening a database
|
[
"",
".net",
"sql",
"sql-server",
"sqlconnection",
""
] |
I've been trying this a few different ways, but I'm reaching the conclusion that it can't be done. It's a language feature I've enjoyed from other languages in the past. Is it just something I should just write off?
|
No, static indexers aren't supported in C#. Unlike other answers, however, I see how there could easily be a point in having them. Consider:
```
Encoding x = Encoding[28591]; // Equivalent to Encoding.GetEncoding(28591)
Encoding y = Encoding["Foo"]; // Equivalent to Encoding.GetEncoding("Foo")
```
It would be relatively rarely used, I suspect, but I think it's odd that it's prohibited - it gives asymmetry for no particular reason as far as I can see.
|
You can simulate static indexers using static indexed properties:
```
public class MyEncoding
{
public sealed class EncodingIndexer
{
public Encoding this[string name]
{
get { return Encoding.GetEncoding(name); }
}
public Encoding this[int codepage]
{
get { return Encoding.GetEncoding(codepage); }
}
}
private static EncodingIndexer StaticIndexer;
public static EncodingIndexer Items
{
get { return StaticIndexer ?? (StaticIndexer = new EncodingIndexer()); }
}
}
```
Usage:
```
Encoding x = MyEncoding.Items[28591]; // Equivalent to Encoding.GetEncoding(28591)
Encoding y = MyEncoding.Items["Foo"]; // Equivalent to Encoding.GetEncoding("Foo")
```
|
Are static indexers not supported in C#?
|
[
"",
"c#",
".net",
"clr",
""
] |
When iterating over elements of a vector it is preferred to use iterators instead of an index (see [Why use iterators instead of array indices?](https://stackoverflow.com/questions/131241/why-use-iterators-instead-of-array-indices)).
```
std::vector<T> vec;
std::vector<T>::iterator it;
for ( it = vec.begin(); it != vec.end(); ++it )
{
// do work
}
```
However, it can be necessary to use the index in the body of the loop. Which of the following would be preferable in that case, considering performance and flexibility/extensibility?
1. Revert to the indexed loop
```
std::vector<T> vec;
size_t i;
for ( i = 0; i < vec.size(); ++i )
{
// use i
}
```
2. Calculate offset
```
std::vector<T> vec;
std::vector<T>::iterator it;
for ( it = vec.begin(); it != vec.end(); ++it )
{
size_t i = it - vec.begin();
// use i
}
```
3. Use std::distance
```
std::vector<T> vec;
std::vector<T>::iterator it;
for ( it = vec.begin(); it != vec.end(); ++it )
{
size_t i = std::distance( vec.begin(), it );
// use i
}
```
|
If you're planning on using exclusively a vector, you may want to switch back to the indexed loop, since it conveys your intent more clearly than iterator-loop. However, if evolution of your program in the future may lead to a change of container, you should stick to the iterators and use std::distance, which is guaranteed to work with all standard iterators.
|
Using std::distance is a bit more generic since it works for all iterators, not just random access iterators. And it should be just as fast as it - vec.begin() in the case of random access iterators.
it - vec.begin() is basically pointer arithmetic.
|
Obtain an index into a vector using Iterators
|
[
"",
"c++",
"indexing",
"vector",
"iterator",
""
] |
The issue is there is a database with around 20k customer records and I want to make a best effort to avoid duplicate entries. The database is Microsoft SQL Server 2005, the application that maintains that database is Microsoft Dynamics/SL. I am creating an ASP.NET webservice that interacts with that database. My service can insert customer records into the database, read records from it, or modify those records. Either in my webservice, or through MS Dynamics, or in Sql Server, I would like to give a list of possible matches before a user confirms a new record add.
So the user would submit a record, if it seems to be unique, the record will save and return a new ID. If there are possible duplications, the user can then resubmit with a confirmation saying, "yes, I see the possible duplicates, this is a new record, and I want to submit it".
This is easy if it is just a punctuation or space thing (such as if you are entering "Company, Inc." and there is a "Company Inc" in the database). But what if there are slight changes such as "Company Corp." instead of "Company Inc", or a fat-fingered misspelling such as "Cmpany, Inc."? Is it even possible to return records like that in the list? If it's absolutely not possible, I'll deal with what I have. It just causes more work later on if records need to be merged due to duplications.
|
The specifics of which [algorithm](http://www.cs.sunysb.edu/~algorith/files/approximate-pattern-matching.shtml) will work best for you depends greatly on your domain, so I'd suggest experimenting with a few different ones - you may even need to combine a few to get optimal results. Abbreviations, especially domain specific ones, may need to be preprocessed or standardized as well.
For the names, you'd probably be best off with a [phonetic algorithm](http://en.wikipedia.org/wiki/Phonetic_algorithm) - which takes into account pronunciation. These will score Smith and Schmidt close together, as they are easy to confuse when *saying* the words. [Double Metaphone](http://en.wikipedia.org/wiki/Double_Metaphone) is a good first choice.
For fat fingering, you'd probably be better off with an [edit distance](http://en.wikipedia.org/wiki/Edit_distance) algorithm - which gives a "difference" between 2 words. These would score Smith and Smoth close together - even though the 2 may slip through the phonetic search.
T-SQL has SOUNDEX and [DIFFERENCE](http://msdn.microsoft.com/en-us/library/ms188753.aspx) - but they are pretty poor. A [Levenshtein](http://en.wikipedia.org/wiki/Levenshtein_distance) variant is the canonical choice, but there's other good choices - most of which are fairly easy to *implement* in C#, if you can't find a suitably licensed implementation.
All of these are going to be much easier to code/use from C# than T-SQL (though I did find [double metaphone in a horrendous abuse of T-SQL](http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=519&lngWId=5) that may work in SQL).
Though this [example is in Access](http://www.kdkeys.net/forums/thread/6450.aspx) (and I've never actually looked at the code, or used the implementation) the included presentation gives a fairly good idea of what you'll probably end up needing to do. The code is probably worth a look, and perhaps a port from VBA.
|
Look into [SOUNDEXing](http://msdn.microsoft.com/en-us/magazine/cc163731.aspx) within SQL Server. I believe it will give you the fuzziness of probable matches that you're looking for.
[SOUNDEX @ MSDN](http://msdn.microsoft.com/en-us/library/aa259235(SQL.80).aspx)
[SOUNDEX @ Wikipedia](http://en.wikipedia.org/wiki/Soundex)
|
intelligent database search
|
[
"",
"c#",
"asp.net",
"sql-server",
"web-services",
"dynamics-sl",
""
] |
I have a multi-threaded C++ app which does 3D rendering with the OpenSceneGraph library. I'm planning to kick off OSG's render loop as a separate thread using boost::threads, passing a data structure containing shared state into the thread. I'm trying to avoid anything too heavyweight (like mutexes) for synchronization, as the render loop needs to be pretty tight, and OSG itself tries to avoid having to ever lock. Most of the shared state is set before the thread is started, and never changed. I do have some data that does need to be changed, which I am planning to double-buffer. However, I have a simple boolean for signaling the thread to suspend rendering, and later resume rendering, and another to kill it. In both cases the app thread sets the bool, and the render thread only reads it. Do I need to synchronize access to these bools? As far as I can tell, the worst thing that could happen is that the render loop continues on for an extra frame before suspending or quitting.
|
In C++11 and later, which has standards-defined concurrency, use `std::atomic<bool>` for this purpose. From <http://en.cppreference.com/w/cpp/atomic/atomic>:
> If one thread writes to an atomic object while another thread reads from it, the behavior is well-defined (see memory model for details on data races).
---
The following old answer may have been true at some time in the past with some compilers and some operating environments, but it should not be relied upon today:
You're right, in this case you won't need to synchronise the bools. You should declare them `volatile` though, to ensure that the compiler actually reads them from memory each time, instead of caching the previous read in a thread (that's a simplified explanation, but it should do for this purpose).
The following question has more information about this: [C++ Thread, shared data](https://stackoverflow.com/questions/118199/c-thread-shared-data)
|
Why not simply use an [interlocked variable](http://msdn.microsoft.com/en-us/library/ms684122(VS.85).aspx)?
|
In a multi-threaded C++ app, do I need a mutex to protect a simple boolean?
|
[
"",
"c++",
"multithreading",
"boolean",
"mutex",
"openscenegraph",
""
] |
For homework, I was given the following 8 code fragments to analyze and give a Big-Oh notation for the running time. Can anybody please tell me if I'm on the right track?
```
//Fragment 1
for(int i = 0; i < n; i++)
sum++;
```
I'm thinking O(N) for fragment 1
```
//Fragment 2
for(int i = 0; i < n; i+=2)
sum++;
```
O(N) for fragment 2 as well
```
//Fragment 3
for(int i = 0; i < n; i++)
for( int j = 0; j < n; j++)
sum++;
```
O(N^2) for fragment 3
```
//Fragment 4
for(int i = 0; i < n; i+=2)
sum++;
for(int j = 0; j < n; j++)
sum++;
```
O(N) for fragment 4
```
//Fragment 5
for(int i = 0; i < n; i++)
for( int j = 0; j < n * n; j++)
sum++;
```
O(N^2) for fragment 5 but the n \* n is throwing me off a bit so I'm not quite sure
```
//Fragment 6
for(int i = 0; i < n; i++)
for( int j = 0; j < i; j++)
sum++;
```
O(N^2) for fragment 6 as well
```
//Fragment 7
for(int i = 0; i < n; i++)
for( int j = 0; j < n * n; j++)
for(int k = 0; k < j; k++)
sum++;
```
O(N^3) for fragment 7 but once again the n \* n is throwing me off
```
//Fragment 8
for(int i = 1; i < n; i = i * 2)
sum++;
```
O(N) for fragment 8
|
I think fragment 5 is O(n^3), and similarly fragment 7 is O(n^5)\*. It also looks like O(log(n)) for fragment 8.
For the n \* n problems, you have to execute the body of the loop n \* n times, so it would be O(n^2), then you compound that with the order of the other code. Fragment 8 actually doubles the counter instead of incrementing it, so the larger the problem, the less additional work you have to do, so it's O(log(n))
**\*edit:** Fragment 7 is O(n^5), not O(n^4) as I previously thought. This is because both j *and k* go from 1 to n \* n. Sorry I didn't catch this earlier.
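The bound for fragment 5 can also be checked empirically by counting iterations for a small n — the class name and the value n = 7 are arbitrary choices for this example:

```java
// Count the iterations of fragment 5 for a small n and compare with n^3.
public class Fragment5Count {
    static long count(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n * n; j++)
                sum++;
        return sum;
    }
    public static void main(String[] args) {
        int n = 7;
        long c = count(n);
        if (c != (long) n * n * n) throw new AssertionError();
        System.out.println(c); // 343: the body runs n * n^2 = n^3 times
    }
}
```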
|
Fragment 7 is O(n^5), not O(n^4) as the currently accepted comment claims. Otherwise, it's correct.
|
Big O Notation Homework--Code Fragment Algorithm Analysis?
|
[
"",
"java",
"big-o",
""
] |
From C#, is it possible to detect the number of context switches that occurred while executing a block of code on a particular thread? Ideally, I'd like to know how many times and what CPU my thread code was scheduled on.
I know I can use tools like [Event Tracing for Windows](http://msdn.microsoft.com/en-us/magazine/cc163437.aspx) and the associated viewers, but this seemed a bit complicated to get the data I wanted.
Also, tools like [Process Explorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) make it too hard to tell how many switches occurred as a result of a specific block of code.
**Background**: I'm trying to test the actual performance of a low-level lock primitive in .NET (as a result of some comments on a recent [blog post](http://www.moserware.com/2008/09/how-do-locks-lock.html) I made).
|
It looks like procexp might be using the kernel thread (KTHREAD) or executive thread (ETHREAD) data structures that have a ContextSwitches field on them. It might be possible to get this from managed code.
|
It sounds like you may be looking for a programmatic solution, but if not, Microsoft's [Process Explorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) tool will tell you very easily the number of context switches for a particular thread.
Once in the tool, double-click your process, select the Threads tab, and select your thread.
The .NET tab has more specific .NET-related perf data.
|
How to detect the number of context switches that occurred while running C# code?
|
[
"",
"c#",
"multithreading",
""
] |
In PHP, I want to read a file into a variable and process the PHP in the file at the same time without using output buffering. Is this possible?
Essentially I want to be able to accomplish this without using `ob_start()`:
```
<?php
ob_start();
include 'myfile.php';
$xhtml = ob_get_clean();
?>
```
Is this possible in PHP?
Update: I want to do some more complex things within an output callback (where output buffering is not allowed).
|
A little-known feature of PHP is being able to treat an included/required file like a function call, with a return value.
For example:
```
// myinclude.php
$value = 'foo';
$otherValue = 'bar';
return $value . $otherValue;
// index.php
$output = include './myinclude.php';
echo $output;
// Will echo foobar
```
|
From what I can tell in the PHP documentation, no. Why do you want to avoid output buffering?
The only way to get around this would be hacky methods involving either invoking the command line php client or doing a curl request based on what's available and what the particular requirements are.
|
In PHP, is there a way to capture the output of a PHP file into a variable without using output buffering?
|
[
"",
"php",
"eval",
"output-buffering",
""
] |
Is there a way to statically/globally request a copy of the ApplicationContext in a Spring application?
Assuming the main class starts up and initializes the application context, does it need to pass that down through the call stack to any classes that need it, or is there a way for a class to ask for the previously created context? (Which I assume has to be a singleton?)
|
If the object that needs access to the container is a bean in the container, just implement the [BeanFactoryAware](http://static.springframework.org/spring/docs/2.5.x/reference/beans.html#beans-factory-aware-beanfactoryaware) or [ApplicationContextAware](http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/context/ApplicationContextAware.html) interfaces.
If an object outside the container needs access to the container, I've used a [standard GoF singleton pattern](http://en.wikipedia.org/wiki/Singleton_pattern#Java) for the spring container. That way, you only have one singleton in your application, the rest are all singleton beans in the container.
|
You can implement `ApplicationContextAware` or just use `@Autowired`:
```
public class SpringBean {
@Autowired
private ApplicationContext appContext;
}
```
`SpringBean` will have `ApplicationContext` injected, within which this bean is instantiated. For example if you have web application with a pretty standard contexts hierarchy:
```
main application context <- (child) MVC context
```
and `SpringBean` is declared within main context, it will have main context injected;
otherwise, if it's declared within MVC context, it will have MVC context injected.
|
Getting Spring Application Context
|
[
"",
"java",
"spring",
"configuration",
"applicationcontext",
""
] |
I have used C# in Visual Studio with .NET, and I have played around a little with Mono on openSUSE Linux, but I don't really understand how it works.
If I write an app in Windows on .NET, how does this relate to Mono? I can't just execute a Windows .exe file on Linux without Wine, so it doesn't help me execute apps developed in Windows.
Is the purpose purely to have a .NET library equivalent on Linux (and others) to make cross platform development easier? For example, if I was a business and wanted to reach Linux customers, but really wanted to use .NET, then Mono should be my choice? Or is there something more that I'm missing?
|
This is an old question (with an already selected answer) but I do not believe the question has really been answered well.
First, a little background...
**How does .NET work?**
A traditional Windows .EXE file is a binary file that represents a series of machine language instructions that your computer understands and that makes calls into the Win32 API which are parts of Windows that deliver services that applications can take advantage of. The machine language used is very specific to your kind of computer and the Win32 calls make the executable very dependent on Windows. A .NET executable is not like that.
It is important to realize that a .NET executable (.EXE file) is not actually a native Windows application. Windows itself does not understand how to run the code in a .NET executable. Your computer does not understand it either.
Much like Java, a .NET application is made up of instructions in a language called CIL (Common Intermediate Language) that you can think of as the machine language for an idealized computer that does not really exist. In .NET, the software implementation of this idealized machine is called the Common Language Runtime (CLR). The equivalent in the Java world is called the Java Virtual Machine (JVM). In Java, the equivalent to CIL is called Java bytecode. CIL is sometimes called MSIL (Microsoft Intermediate Language).
CIL is designed to run on the CLR (an idealized machine) but is otherwise platform independent, which means that the CIL does not care what kind of computer you have or what operating system you are running.
Just as you need a native version of the Java JVM on each platform on which you want to run Java, you need a native version of the CLR to run .NET CIL executables. The CLR is a native Windows application just like the traditional Win32 EXE files described above. The CLR itself is specific to the Windows implementation and computer architecture on which it was designed to run.
It does not matter what .NET language you start with (C#, VisualBasic, F#, IronPython, IronRuby, Boo, etc.), they all get compiled down to CIL bytecode. You can easily "disassemble" a CIL program into a form of object-oriented assembly language that is easily readable by humans. You can write a program in CIL directly yourself but few people do.
On Windows, the CLR compiles this CIL code Just-in-Time (JIT) right when you run the executable--just before the code is actually run. This means that the CIL bytecode is converted (compiled) to actual machine code that runs natively on your computer. This part of the CLR is called the JIT compiler or often just the JIT.
To date, Microsoft has released four versions of the CLR: 1.0, 1.1, 2.0, and 4.0. You need to have the right version of the CLR installed on your machine if you want to run .NET executables targeting that runtime. The CLR 2.0 supports .NET 2.0, 3.0, and 3.5 applications. For other versions of .NET, the .NET version maps cleanly to the CLR version.
In addition to the JIT/CLR, .NET provides a host of libraries (assemblies) that make up the rest of the .NET framework and that provide a host of capabilities and services that .NET applications can call upon. The great majority of these assemblies are pure CIL code which runs on the CLR. On Windows, a few make calls into the Win32 API as well. When you install .NET, you are installing the CLR, the class libraries (framework), and a bunch of development tools. Each version of the CLR generally requires a complete set of these "framework" assemblies. Some versions of .NET (eg. 3.0 and 3.5) added additional framework assemblies without updating the CLR or the existing assemblies associated with that CLR.
The Portable Executable (PE) file format that a Windows .EXE file is delivered in contains a header that describes the executable and identifies the file as a .NET file or a native Win32 file. When Windows tries to run a .NET file, it sees this header and automatically invokes the CLR on your behalf. This is why .NET EXE files appear to run natively on Windows.
**Ok, so how does Mono work?**
Mono implements the CLR on Linux, Mac, and other platforms. The Mono runtime (the CLR) is a native application written mostly in the C language and compiled down to machine language code for the computer system on which is designed to run. Like on Windows, the Mono runtime is specific to the Operating System and kind of machine you are using.
Just like on Windows, the Mono runtime (the CLR) compiles the CIL bytecode in your .NET executable Just-in-time to native code that your computer can understand and execute. In this way, a .NET file is just as "native" to Linux as it is to Windows.
To port Mono to a new architecture you need to port the JIT/CLR. This is just like porting any native application to a new platform.
How well .NET code runs on Linux or Mac is really just a question of how well the CLR is implemented on these systems. In theory, the Mono CLR could execute .NET code on these systems much better than the MS version of .NET does on Windows. In practice, the MS implementation is generally superior (though not in all cases).
In addition to the CLR, Mono provides most of the rest of the libraries (assemblies) that make up the .NET framework. Just as with the Microsoft version of .NET (in fact more so) the Mono assemblies are provided as CIL bytecode. This makes it possible to take a \*.dll or \*.exe file from Mono and run it unmodified on Windows, Mac, or Linux as CIL is the "native" language of the CLR implementations on these systems.
Just like on Windows, Mono supports multiple versions of the CLR and the associated assemblies:
Very early versions of Mono (before 1.2?) only supported CLR 1.0 or 1.1.
Mono did not support big chunks of the 2.0 framework until its own 2.0 version.
Mono versions up to version 2.4 supported both CLR 1.1 and CLR 2.0 applications.
Starting with Mono 2.6, CLR 4.0 was added but CLR 2.0 was still the default.
Starting with Mono 2.8 the CLR 4.0 became the default and the CLR 1.1 is no longer supported.
Mono 2.10 continues to use the CLR 4.0 as default and also to support the CLR 2.0.
Just like the real .NET (but in far fewer cases) there are some Mono assemblies that call into native libraries. In order to make the System.Drawing assembly work on Mono, the Mono team wrote a Linux program to simulate the GDI+ portion of the Win32 API on Linux. This library is called 'libgdiplus'. If you compile Mono from source, you will notice that you need to build this 'libgdiplus' file before you can build 'mono'. You do not need 'libgdiplus' on Windows because the GDI+ portion of the Win32 API is already part of Windows. A full port of Mono to new platforms requires this 'libgdiplus' library to be ported as well.
In areas where the design of the .NET library is overly influenced by the design of Windows, and a poor fit for systems like Mac or Linux, the Mono team has written extensions to the .NET framework. The Mono extensions are also just CIL bytecode and generally work just fine on .NET.
Unlike on Windows, Linux generally does not detect .NET executables and launch the CLR by default. The user must usually run the CLR directly by typing 'mono appname.exe' or something similar. Here 'mono' is the application that implements the CLR and 'appname.exe' is the EXE file that contains the .NET code to be executed.
To make things easier for users, Mono applications are often wrapped in a shell script that launches the CLR. This hides the fact that the CLR is being used just as in Windows. It is also possible to tell Linux to launch the CLR when a file using the PE file format is encountered. This is usually not done as the PE file format is also used for native Win32 Windows executables which of course the CLR (Mono) does not support.
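Such a wrapper is typically just a couple of lines of shell script (the paths and names here are hypothetical):

```shell
#!/bin/sh
# Installed as e.g. /usr/bin/myapp; hands the CIL executable to the CLR.
exec mono /usr/lib/myapp/myapp.exe "$@"
```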
There is no technical reason why a PE launcher could not be used by Linux which then launches either a system that understands native Windows code (like Wine) or the CLR (Mono) as appropriate. This has simply not been done to my knowledge.
**Back and forth**
Any .NET code that sticks to "fully managed" code, which means it does not call into non-.NET code, should work fine on Mono on all platforms. I routinely use compiled .NET assemblies from Windows (for which I do not have the code) on Linux and Mac.
I can also take any code that I compile on Mono and run that on .NET on Windows. I can provide a client some code I compiled with Mono and not worry whether he is on 32-bit or 64-bit Windows, for example. The client does need to have the right version of .NET (the right CLR) installed, of course. CLR 2.0 has been around for a very long time and you can bet almost all Windows users have it installed. The Mono compilers and other tools are also just CIL executables, so they run fine on Windows if you like.
Mono compatibility is good enough that large chunks of actual Microsoft code, like ASP.NET MVC, can be taken (where legal to do so) from the actual MS version of .NET and run on Mac or Linux. In general, the Mono team has done a great job of implementing both the CLR and the rest of the framework (class libraries/assemblies).
**ASP.NET**
On Windows, the Internet Information Server (IIS) knows how to call into the CLR to execute .NET as part of a web application. On Linux/Mac there is an Apache module (mod\_mono) that provides similar capabilities to the Apache webserver. This application is written in C and must also be ported to new architectures.
**Porting Mono**
This discussion has identified parts of Mono that are built as "native" executables and must exist on a system on which you want to run .NET applications.
* The CLR (including JIT compiler) - generally known as Mono
* libgdiplus (for systems which do not natively support the GDI+ API [only Windows does])
* mod\_mono (to allow Apache to invoke the CLR for .NET web applications)
These three components, with the addition of the class libraries, provide a .NET environment that looks "native" to the .NET executable files you need to run.
That is how Mono works.
|
A Windows EXE contains multiple "parts". *Simplified*, the .NET code (=MSIL) is only a part of the EXE, and there is also a "real" native Windows part inside the EXE that serves as some sort of launcher for the .NET Framework, which then executes the MSIL.
Mono will just take the MSIL and execute it, ignoring the native Windows launcher stuff.
Again, this is a simplified overview.
*Edit:* I fear my understanding of the deep details is not good enough to go much further (I know roughly what a PE header is, but not the details), but I found these links helpful:
[NET Assembly Structure – Part II](http://xyberangel.blogspot.com/2004/11/net-assembly-structure-part-ii.html)
[.NET Foundations - .NET assembly structure](http://blog.vuscode.com/malovicn/archive/2007/12/24/net-foundations-net-assembly-structure.aspx)
|
How does Mono work
|
[
"",
"c#",
"mono",
""
] |
I've run into this issue quite a few times and never liked the solution chosen. Let's say you have a list of States (just as a simple example) in the database. In your code-behind, you want to be able to reference a State by ID and have the list of them available via Intellisense.
For example:
```
States.Arizona.Id //returns a GUID
```
But the problem is that I don't want to hard-code the GUIDS. Now in the past I've done all of the following:
* Create class constants (hard-coding of the worst kind.. ugh!)
* Create Lookup classes that have an ID property (among others) (still hard-coded and would require a rebuild of the project if ever updated)
Put all the GUIDs into the .config file, create an enumeration, and within a static constructor load the GUIDs from the .config into a Hashtable with the enumeration item as the key. So then I can do: `StateHash[StateEnum.Arizona]`. Nice, because if a GUID changes, no rebuild is required. However, it doesn't help if a new record is added or an old one removed, because the enumeration will need to be updated.
So what I'm asking is if someone has a better solution? Ideally, I'd want to be able to look up via Intellisense and not have to rebuild code when there's an update. Not even sure that's possible.
EDIT: Using states was just an example (probably a bad one). It could be a list of widgets, car types, etc. if that helps.
|
I believe that if it shows up in Intellisense, then, by definition, it is hard-coded into your program.
That said, if your goal is to make the hard-coding as painless as possible, one thing you might try is auto-generating your enumeration based on what's in the database. That is, you can write a program that reads the database and creates a FOO.cs file containing your enumeration. Then just run that program every time the data changes.
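One hedged sketch of such a generator (the connection string, the `State` table, and its `Name`/`Id` columns are assumptions here - adjust them to your schema; the `Id` column is assumed to be a `uniqueidentifier`):

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Text;

class LookupCodeGen
{
    static void Main()
    {
        StringBuilder sb = new StringBuilder();
        sb.AppendLine("// <auto-generated/> - regenerate whenever lookup data changes");
        sb.AppendLine("public static class States");
        sb.AppendLine("{");

        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT Name, Id FROM State", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Strip spaces so "New York" becomes a legal identifier
                    string name = reader.GetString(0).Replace(" ", "");
                    Guid id = reader.GetGuid(1);
                    sb.AppendLine(string.Format(
                        "    public static readonly Guid {0} = new Guid(\"{1}\");",
                        name, id));
                }
            }
        }

        sb.AppendLine("}");
        File.WriteAllText("States.generated.cs", sb.ToString());
    }
}
```

Run it as a pre-build step and you keep Intellisense (`States.Arizona`) without hand-maintaining GUIDs, at the cost of a rebuild when records are added or removed - which, as noted in the question, an enumeration needs anyway.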
|
Personally, I would store lookup data in a database, and simply try to avoid the type of hard coding that binds rules to things like individual states. Perhaps some key property *of* those states (like .ApplyDoubleTax or something). And non-logic code doesn't need to use intellisense - it typically just needs to list them or find by name, which can be done easily enough however you have stored it.
Equally, I'd load the data once and cache it.
Arguably, coding the logic against states *is* hard coding - especially if you want to go international anytime soon - I *hate* it when a site asks me what state I live in...
Re the data changing... is the USA looking to annex anytime soon?
|
How would you reference lookup/meta data?
|
[
"",
"c#",
".net",
"asp.net",
""
] |
We are designing a Payroll Generation System for a client.
The organization we are targeting has a hierarchy as follows:
Company -> Cluster -> Business Unit (BU) -> Department -> Employee
The salary for an employee is made up of various salary components.
Each salary component has 3 rules associated with it: a Calculation Rule (calculate the component as a % of another component, a % of a fixed number, or a fixed number), an Eligibility Rule (whether an Employee/Dept is eligible for a component) and a Constraint Rule which limits the max and min of a component.
**These rules are editable by an end user. Also, these rules are inherited top-down, but if defined at a lower level, the lower-level rule takes precedence.**
**We have a database which has Attendance, Leaves, Bonuses tables and these rules are also supposed to interact with these tables.**
The client will be generating payroll for multiple clients, each hosting a separate database instance. They may each have a different interpretation of each component and may have different components.
We are only looking to support SQL Server, and payroll generation will be an offline activity.
**We are divided upon where to put the logic which uses these rules to generate the individual tax components (which will include Tax Deductions, Tax Write-offs, Allowances, etc).**
Some people are advocating magic SPs which will take an employee Id and generate a payroll for that month.
Others want the logic to be split into separate components which will get the dependent data for an employee in the application layer and calculate these components there.
The order of our priorities are:
1. The ability to adapt changes to new clients quickly
2. Long term maintainability
3. Performance
1 and 2 outweigh 3 here by a large factor as this will be an offline activity.
Maintainability and Quick Customizability are very important, we will be deploying the application for different clients.
Client A may have a Salary Component Rule as ((0.3 \* Basic) + 800)
and Client B as (0.2 \* Basic) + (0.1 \* Attendance Bonus)
**Will SPs cause a mess here, as the rules suggested above will be specified by the end user and will need to be customizable through a web UI.
We will have to parse the formulas from SQL. How difficult or easy will it be?
What advantage will doing this in the Application Layer (C# .Net) have over using SPs?**
The original post is here:
[Design Hints, Payroll System...repost](https://stackoverflow.com/questions/208707/design-hints-payroll-systemrepost)
...but none of the questions were properly answered.
Suggestions and pointers to existing systems' architecture will be very helpful.
...and yes, we are using LINQ to SQL elsewhere in the system.
Kind regards,
Ashish Sharma
|
I always try to shy away from putting business logic in the DB layer. It is more difficult to write, debug and maintain. In addition, the DB is generally the most expensive layer to scale. If you end up needing to beef up your system to support more users it is relatively cheap and easy to add new webservers to a system, but adding DB instances becomes expensive as each DB requires a license and additional support.
|
If you store the formulas as strings, as in for example JEP (for Java), it's not much of a problem. Just keep the full formula as a string, e.g. `pay = ((0.3 * Basic) + 800)`, and then parse it into a tree.
You can look at the JEP documentation for info and take the ideas from there. It shouldn't be too problematic to implement simple solvers for the formulas you've posted here.
My suggestion:
* Keep it as a string, in the database
* Make a small library for the parser and evaluator.
* Parse it into a binary tree (e.g. '+' points to 800 and to '*'; the '*' in turn points to 'Basic' and '0.3').
* After that you just need a simple recursive function to solve it.
If these formulas aren't very complex, you can do this on either side, as it won't take that much time to process.
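To give an idea of how little application-layer code the tree approach needs, here is a minimal recursive-descent evaluator sketch in C#. It is an illustration, not production code: it only handles `+ - * /`, parentheses, numeric literals and single-word component names (so `Attendance Bonus` would need to be `AttendanceBonus`), and it does no error handling:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

// Evaluates formulas like "(0.3 * Basic) + 800" against a dictionary
// of component values loaded from the database.
class FormulaEvaluator
{
    private readonly string _text;
    private readonly IDictionary<string, decimal> _vars;
    private int _pos;

    public FormulaEvaluator(string text, IDictionary<string, decimal> vars)
    {
        _text = text;
        _vars = vars;
    }

    public decimal Evaluate() { _pos = 0; return ParseExpr(); }

    // expr := term (('+' | '-') term)*
    private decimal ParseExpr()
    {
        decimal value = ParseTerm();
        while (true)
        {
            SkipSpace();
            if (Peek() == '+') { _pos++; value += ParseTerm(); }
            else if (Peek() == '-') { _pos++; value -= ParseTerm(); }
            else return value;
        }
    }

    // term := factor (('*' | '/') factor)*
    private decimal ParseTerm()
    {
        decimal value = ParseFactor();
        while (true)
        {
            SkipSpace();
            if (Peek() == '*') { _pos++; value *= ParseFactor(); }
            else if (Peek() == '/') { _pos++; value /= ParseFactor(); }
            else return value;
        }
    }

    // factor := number | component-name | '(' expr ')'
    private decimal ParseFactor()
    {
        SkipSpace();
        if (Peek() == '(')
        {
            _pos++;
            decimal value = ParseExpr();
            SkipSpace();
            _pos++; // consume ')'
            return value;
        }
        int start = _pos;
        if (char.IsDigit(Peek()))
        {
            while (_pos < _text.Length &&
                   (char.IsDigit(_text[_pos]) || _text[_pos] == '.')) _pos++;
            return decimal.Parse(_text.Substring(start, _pos - start),
                                 CultureInfo.InvariantCulture);
        }
        while (_pos < _text.Length && char.IsLetter(_text[_pos])) _pos++;
        return _vars[_text.Substring(start, _pos - start)]; // e.g. "Basic"
    }

    private void SkipSpace() { while (_pos < _text.Length && _text[_pos] == ' ') _pos++; }
    private char Peek() { return _pos < _text.Length ? _text[_pos] : '\0'; }
}
```

With `vars["Basic"] = 10000m`, `new FormulaEvaluator("(0.3 * Basic) + 800", vars).Evaluate()` returns 3800. The rule strings stay editable in the database and the web UI, and none of them need to be translated into SQL.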
|
Payroll System Design, Business Logic in SPs or Application Layer (C#.Net), Maintainability - Repost
|
[
"",
"c#",
"database",
"oop",
"web-applications",
"n-tier-architecture",
""
] |
I have a many to many index table, and I want to do an include/exclude type query on it.
fid is really an integer index, but is shown here as letters for easier understanding. Here's a sample table:
table t
```
eid | fid
----+----
1 | A
1 | B
1 | C
2 | B
2 | C
3 | A
3 | C
4 | A
4 | B
5 | B
```
Here are some sample queries I want.
1. What eids have fid B, and NOT A? (Answer eid 2 and 5)
2. What eids have fid C, and NOT A? (Answer eid 2)
I can't seem to figure out a query that will do this.
I've tried a self join like this:
```
select *
from t as t1
join t as t2
where t1.eid = t2.eid
and t1.fid != t2.fid
and t1.fid = 'B' and t2.fid != 'A'
```
That won't work, because it will still return rows where eid=1 and fid=C.
Am I clear on what I want?
|
Here's an example of a query for 1 (2 works much the same)
```
select t1.eid
from t t1
where t1.fid = 'B'
and not exists
(select 1
from t t2
where t2.eid = t1.eid
and t2.fid = 'A')
```
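The same exclusion can also be written as an anti-join, which avoids the correlated subquery; on the sample data it likewise returns eids 2 and 5:

```sql
select t1.eid
from t t1
left join t t2
  on t2.eid = t1.eid
 and t2.fid = 'A'
where t1.fid = 'B'
  and t2.eid is null
```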
|
Use [set subtraction](http://msdn.microsoft.com/en-us/library/ms188055.aspx)
> ```
> Select eid from t where fid = 'B'
> EXCEPT
> select eid from t where fid = 'A'
> ```
|
Many to many table queries
|
[
"",
"sql",
"join",
""
] |
I found this [PECL package called threads](http://pecl.php.net/package/threads), but there is not a release yet. And nothing is coming up on the PHP website.
|
There is nothing available that I'm aware of. The next best thing would be to simply have one script execute another via CLI, but that's a bit rudimentary. Depending on what you are trying to do and how complex it is, this may or may not be an option.
|
From the PHP manual for the [**pthreads**](http://php.net/pthreads) extension:
> pthreads is an Object Orientated API that allows user-land multi-threading in PHP. It includes all the tools you need to create multi-threaded applications targeted at the Web or the Console. PHP applications can create, read, write, execute and synchronize with Threads, Workers and Stackables.
As unbelievable as this sounds, it's entirely true. Today, PHP can multi-thread for those wishing to try it.
Since the first release of PHP 4, on 22 May 2000, PHP has shipped with a thread-safe architecture - a way for it to execute multiple instances of its interpreter in separate threads in multi-threaded SAPI (Server API) environments. Over the last 13 years, the design of this architecture has been maintained and advanced: it has been in production use on the world's largest websites ever since.
Threading in user land was never a concern for the PHP team, and it remains as such today. You should understand that in the world where PHP does its business, there's already a defined method of scaling - add hardware. Over the many years PHP has existed, hardware has got cheaper and cheaper, and so this became less and less of a concern for the PHP team. While it was getting cheaper, it also got much more powerful; today, our mobile phones and tablets have dual and quad core architectures and plenty of RAM to go with it, and our desktops and servers commonly have 8 or 16 cores and 16 or 32 gigabytes of RAM - though we may not always be able to afford two within budget, and having two desktops is rarely useful for most of us anyway.
Additionally, PHP was written for the non-programmer; it is many hobbyists' native tongue. The reason PHP is so easily adopted is because it is an easy language to learn and write. The reason PHP is so reliable today is because of the vast amount of work that goes into its design, and every single decision made by the PHP group. Its reliability and sheer greatness keep it in the spotlight after all these years, where its rivals have fallen to time or pressure.
Multi-threaded programming is not easy for most, even with the most coherent and reliable API, there are different things to think about, and many misconceptions. The PHP group do not wish for user land multi-threading to be a core feature, it has never been given serious attention - and rightly so. PHP should not be complex, for everyone.
All things considered, there are still benefits to be had from allowing PHP to utilize its production-ready and tested features to make the most out of what we have, when adding more isn't always an option, and for a lot of tasks is never really needed.
pthreads achieves, for those wishing to explore it, an API that does allow a user to multi-thread PHP applications. Its API is very much a work in progress, designated a beta level of stability and completeness.
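As a minimal illustration of that API (a hedged sketch: it assumes a thread-safe (ZTS) PHP build with the pthreads extension loaded, and will not run on a stock non-ZTS PHP):

```php
<?php
// A real Posix thread with its own interpreter instance; scalar
// properties round-trip through the thread-safe object store.
class SummingThread extends Thread
{
    private $limit;
    public $result;

    public function __construct($limit)
    {
        $this->limit = $limit;
    }

    public function run()
    {
        $total = 0;
        for ($i = 1; $i <= $this->limit; $i++) {
            $total += $i; // local method scope: no locking, no copying
        }
        $this->result = $total; // one locked write to the object store
    }
}

$thread = new SummingThread(100);
$thread->start(); // spawn the thread
$thread->join();  // wait for run() to finish
var_dump($thread->result); // int(5050)
```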
It is common knowledge that some of the libraries PHP uses are not thread safe, it should be clear to the programmer that pthreads cannot change this, and does not attempt to try. However, any library that is thread safe is useable, as in any other thread safe setup of the interpreter.
pthreads utilizes Posix Threads (even in Windows); what the programmer creates are real threads of execution, but for those threads to be useful, they must be aware of PHP - able to execute user code, share variables and allow a useful means of communication (synchronization). So every thread is created with an instance of the interpreter, but by design, its interpreter is isolated from all other instances of the interpreter - just like multi-threaded Server API environments. pthreads attempts to bridge the gap in a sane and safe way. Many of the concerns of the programmer of threads in C just aren't there for the programmer of pthreads. By design, pthreads is copy on read and copy on write (RAM is cheap), so no two instances ever manipulate the same physical data, but they can both affect data in another thread. The fact that PHP may use thread-unsafe features in its core programming is entirely irrelevant; user threads, and their operations, are completely safe.
**Why copy on read and copy on write:**
```
public function run() {
...
(1) $this->data = $data;
...
(2) $this->other = someOperation($this->data);
...
}
(3) echo preg_match($pattern, $replace, $thread->data);
```
(1) While a read, and write lock are held on the pthreads object data store, data is copied from its original location in memory to the object store. pthreads does not adjust the refcount of the variable, Zend is able to free the original data if there are no further references to it.
(2) The argument to someOperation references the object store, the original data stored, which it itself a copy of the result of (1), is copied again for the engine into a zval container, while this occurs a read lock is held on the object store, the lock is released and the engine can execute the function. When the zval is created, it has a refcount of 0, enabling the engine to free the copy on completion of the operation, because no other references to it exist.
(3) The last argument to preg\_match references the data store, a read lock is obtained, the data set in (1) is copied to a zval, again with a refcount of 0. The lock is released, The call to preg\_match operates on a copy of data, that is itself a copy of the original data.
**Things to know:**
* The object store's hash table where data is stored is thread safe, based on the TsHashTable shipped with PHP, by Zend.
* The object store has a read and write lock; an additional access lock is provided for the TsHashTable such that, if required (and it is required - var\_dump/print\_r, direct access to properties as the PHP engine wants to reference them), pthreads can manipulate the TsHashTable outside of the defined API.
* The locks are only held while the copying operations occur, when the copies have been made the locks are released, in a sensible order.
**This means:**
* When a write occurs, not only are a read and write lock held, but an
additional access lock. The table itself is locked down, there is no
possible way another context can lock, read, write or affect it.
* When a read occurs, not only is the read lock held, but the
additional access lock too, again the table is locked down.
No two contexts can physically nor concurrently access the same data from the object store, but writes made in any context with a reference will affect the data read in any context with a reference.
This is shared-nothing architecture, and the only way to exist is to co-exist. Those a bit savvy will see that there's a lot of copying going on here, and they will wonder if that is a good thing. Quite a lot of copying goes on within a dynamic runtime; those are the dynamics of a dynamic language. pthreads is implemented at the level of the object, because good control can be gained over one object, but methods - the code the programmer executes - have another context, free of locking and copies: the local method scope. The object scope in the case of a pthreads object should be treated as a way to share data among contexts; that is its purpose. With this in mind you can adopt techniques to avoid locking the object store unless necessary, such as passing local scope variables to other methods in a threaded object rather than having them copied from the object store upon execution.
Most of the libraries and extensions available for PHP are thin wrappers around 3rd parties, and PHP core functionality is, to a degree, the same thing. pthreads is not a thin wrapper around Posix Threads; it is a threading API based on Posix Threads. There is no point in implementing threads in PHP that its users do not understand or cannot use. There's no reason that a person with no knowledge of what a mutex is or does should not be able to take advantage of all that they have, both in terms of skill and resources. An object functions like an object, but wherever two contexts would otherwise collide, pthreads provides stability and safety.
Anyone who has worked in Java will see the similarities between a pthreads object and threading in Java; those same people will no doubt have seen an error called ConcurrentModificationException - as it sounds, an error raised by the Java runtime if two threads write the same physical data concurrently. I understand why it exists, but it baffles me that - with resources as cheap as they are, and with the runtime able to detect the concurrency at the exact and only time that safety could be achieved for the user - it chooses to throw a possibly fatal error at runtime rather than manage the execution and access to the data.
No such stupid errors will be emitted by pthreads, the API is written to make threading as stable, and compatible as is possible, I believe.
Multi-threading isn't like using a new database, close attention should be paid to every word in the manual and examples shipped with pthreads.
Lastly, from the PHP manual:
> pthreads was, and is, an experiment with pretty good results. Any of its limitations or features may change at any time; that is the nature of experimentation. It's limitations - often imposed by the implementation - exist for good reason; the aim of pthreads is to provide a useable solution to multi-tasking in PHP at any level. In the environment which pthreads executes, some restrictions and limitations are necessary in order to provide a stable environment.
|
Does PHP have threading?
|
[
"",
"php",
"multithreading",
"apache",
"command-line-interface",
""
] |
I am writing code for a search results page that needs to highlight search terms. The terms happen to occur within table cells (the app is iterating through GridView Row Cells), and these table cells may have HTML.
Currently, my code looks like this (relevant hunks shown below):
```
const string highlightPattern = @"<span class=""Highlight"">$0</span>";
DataBoundLiteralControl litCustomerComments = (DataBoundLiteralControl)e.Row.Cells[CUSTOMERCOMMENTS_COLUMN].Controls[0];
// Turn "term1 term2" into "(term1|term2)"
string spaceDelimited = txtTextFilter.Text.Trim();
string pipeDelimited = string.Join("|", spaceDelimited.Split(new[] {" "}, StringSplitOptions.RemoveEmptyEntries));
string searchPattern = "(" + pipeDelimited + ")";
// Highlight search terms in Customer - Comments column
e.Row.Cells[CUSTOMERCOMMENTS_COLUMN].Text = Regex.Replace(litCustomerComments.Text, searchPattern, highlightPattern, RegexOptions.IgnoreCase);
```
Amazingly it works. BUT, sometimes the text I am matching on is HTML that looks like this:
```
<span class="CustomerName">Fred</span> was a classy individual.
```
And if you search for "class" I want the highlight code to wrap the "class" in "classy" but of course not the HTML attribute "class" that happens to be in there! If you search for "Fred", that should be highlighted.
So what's a good regex that will make sure matches happen only OUTSIDE the html tags? It doesn't have to be super hardcore. Simply making sure the match is not between < and > would work fine, I think.
|
This regex should do the job: `(?<!<[^>]*)(Fred|span)` - that is, a negative lookbehind placed in front of whatever terms you want to match. It checks that it is impossible to match the regex `<[^>]*` going backward from the start of a matching string, i.e. that the match is not preceded by an unclosed `<`.
Modified code below:
```
const string notInsideBracketsRegex = @"(?<!<[^>]*)";
const string highlightPattern = @"<span class=""Highlight"">$0</span>";
DataBoundLiteralControl litCustomerComments = (DataBoundLiteralControl)e.Row.Cells[CUSTOMERCOMMENTS_COLUMN].Controls[0];
// Turn "term1 term2" into "(term1|term2)"
string spaceDelimited = txtTextFilter.Text.Trim();
string pipeDelimited = string.Join("|", spaceDelimited.Split(new[] {" "}, StringSplitOptions.RemoveEmptyEntries));
string searchPattern = "(" + pipeDelimited + ")";
searchPattern = notInsideBracketsRegex + searchPattern;
// Highlight search terms in Customer - Comments column
e.Row.Cells[CUSTOMERCOMMENTS_COLUMN].Text = Regex.Replace(litCustomerComments.Text, searchPattern, highlightPattern, RegexOptions.IgnoreCase);
```
|
You can use a regex with balancing groups and backreferences, but I strongly recommend that you use a [parser](http://www.antlr.org/) here.
|
What regex will match text excluding what lies within HTML tags?
|
[
"",
"c#",
"regex",
""
] |
Say if I had a table of books in a MySQL database and I wanted to search the 'title' field for keywords (input by the user in a search field); what's the best way of doing this in PHP? Is the MySQL `LIKE` command the most efficient way to search?
|
Yes, the most efficient way usually is searching in the database. To do that you have three alternatives:
* [LIKE](http://dev.mysql.com/doc/refman/5.0/en/string-comparison-functions.html) to match exact substrings (in MySQL, unlike PostgreSQL's ILIKE, LIKE is already case-insensitive with the default collations)
* [RLIKE](http://dev.mysql.com/doc/refman/5.0/en/pattern-matching.html) to match POSIX regexes
* [FULLTEXT](http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html) indexes to match another three different kinds of search aimed at natural language processing
So it depends on what you will actually be searching for to decide which would be best. For book titles I'd offer a LIKE search for exact substring matches, useful when people know the book they're looking for, and also a FULLTEXT search to help find titles similar to a word or phrase. I'd give them different names on the interface of course, probably something like exact for the substring search and similar for the fulltext search.
An example about fulltext: <http://www.onlamp.com/pub/a/onlamp/2003/06/26/fulltext.html>
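A short sketch of both kinds of query (assuming a `books` table with a `title` column; note that in MySQL 5.0 a FULLTEXT index requires a MyISAM table):

```sql
-- one-time setup for the fulltext search
ALTER TABLE books ADD FULLTEXT (title);

-- "exact" substring match
SELECT * FROM books WHERE title LIKE '%moby%';

-- "similar" natural-language match, ranked by relevance
SELECT * FROM books WHERE MATCH (title) AGAINST ('moby dick');
```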
|
Here's a simple way you can break apart some keywords to build some clauses for filtering a column on those keywords, either ANDed or ORed together.
```
$terms=explode(',', $_GET['keywords']);
$clauses=array();
foreach($terms as $term)
{
//remove any chars you don't want to be searching - adjust to suit
//your requirements
$clean=trim(preg_replace('/[^a-z0-9]/i', '', $term));
if (!empty($clean))
{
//note use of mysql_escape_string - while not strictly required
//in this example due to the preg_replace earlier, it's good
//practice to sanitize your DB inputs in case you modify that
//filter...
$clauses[]="title like '%".mysql_escape_string($clean)."%'";
}
}
if (!empty($clauses))
{
//concatenate the clauses together with AND or OR, depending on
//your requirements
$filter='('.implode(' AND ', $clauses).')';
//build and execute the required SQL
$sql="select * from foo where $filter";
}
else
{
//no search term, do something else, find everything?
}
```
|
What's the best way to search a MySQL database with PHP?
|
[
"",
"php",
"mysql",
"search",
""
] |
I have a Delphi DLL with a function defined as:
function SubmitJobStringList(joblist: tStringList; var jobno: Integer): Integer;
I am calling this from C#. How do I declare the first parameter as a tStringList does not exist in C#. I currently have the declaration as:
```
[DllImport("opt7bja.dll", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.StdCall)]
public static extern int SubmitJobStringList(string[] tStringList, ref int jobno);
```
But when I call it I get a memory access violation exception.
Anyone know how to pass to a tStringList correctly from C#?
|
You'll most likely not have any luck with this. The TStringList is more than just an array, it's a full-blown class, and the exact implementation details may differ from what is possible with .NET. Take a look at the Delphi VCL source code (that is, if you have it) and try to find out if you can rebuild the class in C#, and pass it with the help of your best friend, the Interop Marshaller. Note that even the Delphi string type is different from the .NET string type, and passing it without telling the marshaller what he should do, he will pass it as a char-array, most likely.
Other than that, I would suggest changing the Delphi DLL. It's never a good thing to expose anything Delphi-specific in a DLL that is to be used by non-Delphi clients. Make the parameter an array of PChar and you should be fine.
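A hedged sketch of what that flattened interface could look like on the C# side (the function name, the `count` parameter and the Delphi signature in the comment are made up for illustration; the Delphi export must be `stdcall` for this declaration to match):

```csharp
// Assumes the DLL is changed to export, in Delphi:
//   function SubmitJobStrings(jobs: PPChar; count: Integer;
//     var jobno: Integer): Integer; stdcall;
[DllImport("opt7bja.dll", CharSet = CharSet.Ansi,
    CallingConvention = CallingConvention.StdCall)]
public static extern int SubmitJobStrings(
    [MarshalAs(UnmanagedType.LPArray, ArraySubType = UnmanagedType.LPStr)]
    string[] jobs,     // marshalled as an array of ANSI char pointers
    int count,         // explicit length, since the array carries none
    ref int jobno);
```

With a plain `char**`-style parameter, no Delphi class layout crosses the DLL boundary and any language can call it.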
|
If this is your DLL, I'd rewrite the function to accept an array of strings instead. Avoid passing classes as DLL parameters.
Or, if you *really* want to use a TStringList for some reason, Delphi's VCL.Net can be used from any .Net language.
An old example using TIniFile: <http://cc.codegear.com/Item/22691>
The example uses .Net 1.1 in Delphi 2005. Delphi 2006 and 2007 support .Net 2.0.
|
tStringList passing in C# to Delphi DLL
|
[
"",
"c#",
"delphi",
"dll",
"tstringlist",
""
] |
I am developing a wizard for a machine that is to be used as a backup of other machines. When it replaces an existing machine, it needs to set its IP address, DNS, WINS, and host name to match the machine being replaced.
Is there a library in .net (C#) which allows me to do this programatically?
There are multiple NICs, each which need to be set individually.
**EDIT**
Thank you [TimothyP](https://stackoverflow.com/questions/209779/how-can-you-change-network-settings-ip-address-dns-wins-host-name-with-code-in-c#209822) for your example. It got me moving on the right track and the quick reply was awesome.
Thanks [balexandre](https://stackoverflow.com/questions/209779/how-can-you-change-network-settings-ip-address-dns-wins-host-name-with-code-in-c#209983). Your code is perfect. I was in a rush and had already adapted the example TimothyP linked to, but I would have loved to have had your code sooner.
I've also developed a routine using similar techniques for changing the computer name. I'll post it in the future, so subscribe to this question's [RSS feed](https://stackoverflow.com/feeds/question/209779 "RSS Feed") if you want to be informed of the update. I may get it up later today or on Monday after a bit of cleanup.
|
Just made this in a few minutes:
```
using System;
using System.Management;
namespace WindowsFormsApplication_CS
{
class NetworkManagement
{
public void setIP(string ip_address, string subnet_mask)
{
ManagementClass objMC =
new ManagementClass("Win32_NetworkAdapterConfiguration");
ManagementObjectCollection objMOC = objMC.GetInstances();
foreach (ManagementObject objMO in objMOC)
{
if ((bool)objMO["IPEnabled"])
{
ManagementBaseObject setIP;
ManagementBaseObject newIP =
objMO.GetMethodParameters("EnableStatic");
newIP["IPAddress"] = new string[] { ip_address };
newIP["SubnetMask"] = new string[] { subnet_mask };
setIP = objMO.InvokeMethod("EnableStatic", newIP, null);
}
}
}
public void setGateway(string gateway)
{
ManagementClass objMC = new ManagementClass("Win32_NetworkAdapterConfiguration");
ManagementObjectCollection objMOC = objMC.GetInstances();
foreach (ManagementObject objMO in objMOC)
{
if ((bool)objMO["IPEnabled"])
{
ManagementBaseObject setGateway;
ManagementBaseObject newGateway =
objMO.GetMethodParameters("SetGateways");
newGateway["DefaultIPGateway"] = new string[] { gateway };
newGateway["GatewayCostMetric"] = new int[] { 1 };
setGateway = objMO.InvokeMethod("SetGateways", newGateway, null);
}
}
}
public void setDNS(string NIC, string DNS)
{
ManagementClass objMC = new ManagementClass("Win32_NetworkAdapterConfiguration");
ManagementObjectCollection objMOC = objMC.GetInstances();
foreach (ManagementObject objMO in objMOC)
{
if ((bool)objMO["IPEnabled"])
{
// if you are using the System.Net.NetworkInformation.NetworkInterface
// you'll need to change this line to
// if (objMO["Caption"].ToString().Contains(NIC))
// and pass in the Description property instead of the name
if (objMO["Caption"].Equals(NIC))
{
ManagementBaseObject newDNS =
objMO.GetMethodParameters("SetDNSServerSearchOrder");
newDNS["DNSServerSearchOrder"] = DNS.Split(',');
ManagementBaseObject setDNS =
objMO.InvokeMethod("SetDNSServerSearchOrder", newDNS, null);
}
}
}
}
public void setWINS(string NIC, string priWINS, string secWINS)
{
ManagementClass objMC = new ManagementClass("Win32_NetworkAdapterConfiguration");
ManagementObjectCollection objMOC = objMC.GetInstances();
foreach (ManagementObject objMO in objMOC)
{
if ((bool)objMO["IPEnabled"])
{
if (objMO["Caption"].Equals(NIC))
{
ManagementBaseObject setWINS;
ManagementBaseObject wins =
objMO.GetMethodParameters("SetWINSServer");
wins.SetPropertyValue("WINSPrimaryServer", priWINS);
wins.SetPropertyValue("WINSSecondaryServer", secWINS);
setWINS = objMO.InvokeMethod("SetWINSServer", wins, null);
}
}
}
}
}
}
```
|
Refactored the code from balexandre a little so objects get disposed and the newer language features of C# 3.0 / .NET 3.5+ are used (LINQ, var, etc.). Also renamed the variables to more meaningful names. I merged some of the functions to be able to do more configuration with less WMI interaction. I removed the WINS code as I don't need to configure WINS anymore; feel free to add the WINS code back if you need it.
In case anybody would like to use the refactored/modernized code, I'm putting it back into the community here.
```
/// <summary>
/// Helper class to set networking configuration like IP address, DNS servers, etc.
/// </summary>
public class NetworkConfigurator
{
    /// <summary>
    /// Sets a new IP address and subnet mask on the local machine
    /// </summary>
    /// <param name="ipAddress">The IP address</param>
    /// <param name="subnetMask">The subnet mask</param>
    /// <param name="gateway">The gateway.</param>
    /// <remarks>Requires a reference to the System.Management namespace</remarks>
public void SetIP(string ipAddress, string subnetMask, string gateway)
{
using (var networkConfigMng = new ManagementClass("Win32_NetworkAdapterConfiguration"))
{
using (var networkConfigs = networkConfigMng.GetInstances())
{
foreach (var managementObject in networkConfigs.Cast<ManagementObject>().Where(managementObject => (bool)managementObject["IPEnabled"]))
{
using (var newIP = managementObject.GetMethodParameters("EnableStatic"))
{
// Set new IP address and subnet if needed
if ((!String.IsNullOrEmpty(ipAddress)) || (!String.IsNullOrEmpty(subnetMask)))
{
if (!String.IsNullOrEmpty(ipAddress))
{
newIP["IPAddress"] = new[] { ipAddress };
}
if (!String.IsNullOrEmpty(subnetMask))
{
newIP["SubnetMask"] = new[] { subnetMask };
}
managementObject.InvokeMethod("EnableStatic", newIP, null);
}
                        // Set new gateway if needed
if (!String.IsNullOrEmpty(gateway))
{
using (var newGateway = managementObject.GetMethodParameters("SetGateways"))
{
newGateway["DefaultIPGateway"] = new[] { gateway };
newGateway["GatewayCostMetric"] = new[] { 1 };
managementObject.InvokeMethod("SetGateways", newGateway, null);
}
}
}
}
}
}
}
    /// <summary>
    /// Sets the DNS servers of the local machine
    /// </summary>
    /// <param name="nic">NIC address</param>
    /// <param name="dnsServers">Comma-separated list of DNS server addresses</param>
    /// <remarks>Requires a reference to the System.Management namespace</remarks>
public void SetNameservers(string nic, string dnsServers)
{
using (var networkConfigMng = new ManagementClass("Win32_NetworkAdapterConfiguration"))
{
using (var networkConfigs = networkConfigMng.GetInstances())
{
foreach (var managementObject in networkConfigs.Cast<ManagementObject>().Where(objMO => (bool)objMO["IPEnabled"] && objMO["Caption"].Equals(nic)))
{
using (var newDNS = managementObject.GetMethodParameters("SetDNSServerSearchOrder"))
{
newDNS["DNSServerSearchOrder"] = dnsServers.Split(',');
managementObject.InvokeMethod("SetDNSServerSearchOrder", newDNS, null);
}
}
}
}
}
}
```
|
How can you change Network settings (IP Address, DNS, WINS, Host Name) with code in C#
|
[
"",
"c#",
".net",
"configuration",
"networking",
""
] |
> Timsort is an adaptive, stable,
> natural mergesort. It has supernatural
> performance on many kinds of partially
> ordered arrays (less than lg(N!)
> comparisons needed, and as few as
> N-1), yet as fast as Python's previous
> highly tuned samplesort hybrid on
> random arrays.
Have you seen [timsort](http://svn.python.org/projects/python/trunk/Objects/listsort.txt) used outside of CPython? Does it make sense?
|
Yes, it makes quite a bit of sense to use timsort outside of CPython specifically, or Python in general.
There is currently an [effort underway](https://bugs.java.com/bugdatabase/view_bug?bug_id=6804124) to replace Java's "modified merge sort" with timsort, and the initial results are quite positive.
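To see the "as few as N-1 comparisons" behavior concretely, here is a small sketch — assuming a JDK whose `Arrays.sort(Object[])` is TimSort-based (the default since JDK 7) — that counts comparisons while sorting an already-sorted array:

```java
import java.util.Arrays;
import java.util.Comparator;

public class TimsortDemo {

    /**
     * Sorts the array with Arrays.sort (TimSort for object arrays since
     * JDK 7) and returns how many comparisons the sort performed.
     */
    public static long comparisonsToSort(Integer[] data) {
        final long[] count = {0};
        Arrays.sort(data, new Comparator<Integer>() {
            @Override
            public int compare(Integer a, Integer b) {
                count[0]++;
                return a.compareTo(b);
            }
        });
        return count[0];
    }

    public static void main(String[] args) {
        int n = 1000;
        Integer[] ascending = new Integer[n];
        for (int i = 0; i < n; i++) {
            ascending[i] = i;
        }
        // An already-sorted input is one long "run": TimSort detects it
        // with the minimum possible n - 1 comparisons.
        System.out.println(comparisonsToSort(ascending));
    }
}
```

On an ascending input the whole array is a single run, so detecting it costs exactly n - 1 comparisons, where a classic merge sort would spend on the order of n log n.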
|
The algorithm is pretty generic, but the benefits are rather Python-specific. Unlike most sorting routines, what Python's list.sort (which is what uses timsort) cares about is avoiding unnecessary comparisons, because generally comparisons are a *lot* more expensive than swapping items (which is always just a set of pointer copies) or even allocating some extra memory (because it's always just an array of pointers, and the overhead is small compared to the average overhead in any Python operation.)
If you're under similar constraints, then it may be suitable. I've yet to see any other case where comparisons are really that expensive, though :-)
|
Is timsort general-purpose or Python-specific?
|
[
"",
"python",
"algorithm",
"sorting",
""
] |
When I dynamically create a Datagrid and add in a new buttoncolumn how do I access the buttoncolumn\_click event?
Thanks.
|
```
protected void Page_Load(object sender, EventArgs e)
{
DataGrid dg = new DataGrid();
dg.GridLines = GridLines.Both;
dg.Columns.Add(new ButtonColumn {
CommandName = "add",
HeaderText = "Event Details",
Text = "Details",
ButtonType = ButtonColumnType.PushButton
});
dg.DataSource = getDataTable();
dg.DataBind();
dg.ItemCommand += new DataGridCommandEventHandler(dg_ItemCommand);
pnlMain.Controls.Add(dg);
}
protected void dg_ItemCommand(object source, DataGridCommandEventArgs e)
{
if (e.CommandName == "add")
{
throw new Exception("add it!");
}
}
protected DataTable getDataTable()
{
// returns your data table
}
```
|
This article on the MSDN site clearly explains how to go about [adding a button into a datagrid](http://msdn.microsoft.com/en-us/library/aa984252(VS.71).aspx). Instead of using the click event of the button you'll use the command event of the DataGrid. Each button will be passing specific commandarguments that you will set.
This article shows [how to use the command event with buttons](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.button.command.aspx). In it you use CommandArguments and CommandNames.
|
Accessing a dynamically added ButtonColumn's (in a DataGrid) click event. C#/ASP.NET
|
[
"",
"c#",
"asp.net",
""
] |
One of our providers sometimes sends XML feeds that are tagged as UTF-8 encoded documents but include byte sequences that are not valid UTF-8. This causes the parser to throw an exception and stop building the DOM object when these characters are encountered:
```
DocumentBuilder.parse(ByteArrayInputStream bais)
```
throws the following exception:
```
org.xml.sax.SAXParseException: Invalid byte 2 of 2-byte UTF-8 sequence.
```
Is there a way to "capture" these problems early and avoid the exception (i.e. finding and removing those characters from the stream)? What I'm looking for is a "best effort" type of fallback for wrongly encoded documents. The correct solution would obviously be to attack the problem at the source and make sure that only correct documents are delivered, but what is a good approach when that is not possible?
|
If the problem truly is the wrong encoding (as opposed to a mixed encoding), you don't need to re-encode the document to parse it. Just parse it as a Reader instead of an InputStream and the DOM parser will ignore the header:
```
DocumentBuilder.parse(new InputSource(new InputStreamReader(inputStream, "<real encoding>")));
```
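As a concrete, hedged sketch (the `parseWithRealEncoding` helper below is my own wrapper, not a standard API): a document whose declaration claims UTF-8 but whose bytes are really windows-1252 fails when parsed as a byte stream, yet parses cleanly once the real encoding is supplied through a Reader:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class EncodingFix {

    /**
     * Parses XML bytes whose declaration lies about the encoding by
     * supplying the real encoding via a Reader; given a character stream,
     * the parser disregards the encoding pseudo-attribute in the
     * XML declaration.
     */
    public static Document parseWithRealEncoding(byte[] xml, String realEncoding)
            throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        InputSource source = new InputSource(
                new InputStreamReader(new ByteArrayInputStream(xml), realEncoding));
        return builder.parse(source);
    }
}
```

Per the `InputSource` contract, when a character stream is supplied the parser reads it directly and ignores any encoding declaration found in the stream — which is exactly the behavior needed here.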
|
You should manually take a look at the invalid documents and see what is the common problem to them. It's quite probable they are in fact in another encoding (most probably windows-1252), and the best solution then would be to take every document from the broken system and recode it to UTF-8 before parsing.
Another possible cause is mixed encodings (the content of some elements is in one encoding and the content of other elements is in another encoding). That would be harder to fix.
You would also need a way to know when the broken system gets fixed so you can stop using your workaround.
|
Repairing wrong encoding in XML files
|
[
"",
"java",
"xml",
"parsing",
"encoding",
"xerces",
""
] |
There have been a few questions asked here about why you can't define static methods within interfaces, but none of them address a basic inconsistency: why can you define static fields and static inner types within an interface, but not static methods?
Static inner types perhaps aren't a fair comparison, since that's just syntactic sugar that generates a new class, but why fields but not methods?
An argument against static methods within interfaces is that it breaks the virtual table resolution strategy used by the JVM, but shouldn't that apply equally to static fields, i.e. the compiler can just inline it?
Consistency is what I desire, and Java should have either supported no statics of any form within an interface, or it should be consistent and allow them.
|
An [official proposal](http://docs.google.com/Doc?docid=dfkwr6vq%5F30dtg2z9d8&hl=en) has been made to allow static methods in interfaces in Java 7. This proposal is being made under [Project Coin](http://blogs.oracle.com/darcy/entry/project_coin).
My personal opinion is that it's a great idea. There is no technical difficulty in implementation, and it's a very logical, reasonable thing to do. There are several proposals in Project Coin that I hope will *never* become part of the Java language, but this is one that could clean up a lot of APIs. For example, the [`Collections` class has static methods](http://java.sun.com/javase/6/docs/api/java/util/Collections.html#sort(java.util.List)) for manipulating any `List` implementation; those could be included in the `List` interface.
---
*Update:* In the [Java Posse Podcast #234,](http://javaposse.com/index.php?post_id=442910) Joe D'arcy mentioned the proposal briefly, saying that it was "complex" and probably would not make it in under Project Coin.
---
*Update:* While they didn't make it into Project Coin for Java 7, Java 8 does support [static functions in interfaces.](http://hg.openjdk.java.net/lambda/lambda/langtools/rev/67030038d40b)
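A minimal sketch of the Java 8 syntax (the `Shape`/`totalArea` names here are invented for illustration, not from any real API):

```java
import java.util.List;

public class StaticInterfaceDemo {

    public interface Shape {
        double area();

        // Since Java 8 an interface can carry its own static helpers,
        // instead of parking them in a separate utility class
        // (the Collections/List situation described above).
        static double totalArea(List<? extends Shape> shapes) {
            double sum = 0;
            for (Shape s : shapes) {
                sum += s.area();
            }
            return sum;
        }
    }

    public static class Square implements Shape {
        private final double side;

        public Square(double side) {
            this.side = side;
        }

        @Override
        public double area() {
            return side * side;
        }
    }
}
```

Calling it reads naturally: `Shape.totalArea(Arrays.asList(new Square(2), new Square(3)))` yields 13.0, with the helper living right on the interface it operates on.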
|
I'm going to go with my pet theory with this one, which is that the lack of consistency in this case is a matter of convenience rather than design or necessity, since I've heard no convincing argument that it was either of those two.
Static fields are there (a) because they were there in JDK 1.0, and many dodgy decisions were made in JDK 1.0, and (b) static final fields in interfaces are the closest thing java had to constants at the time.
Static inner classes in interfaces were allowed because that's pure syntactic sugar - the inner class isn't actually anything to do with the parent class.
So static methods aren't allowed simply because there's no compelling reason to do so; consistency isn't sufficiently compelling to change the status quo.
Of course, this could be permitted in future JLS versions without breaking anything.
|
Why no static methods in Interfaces, but static fields and inner classes OK? [pre-Java8]
|
[
"",
"java",
"interface",
"jls",
""
] |
What's the best way to pipe the output from an java.io.OutputStream to a String in Java?
Say I have the method:
```
writeToStream(Object o, OutputStream out)
```
Which writes certain data from the object to the given stream. However, I want to get this output into a String as easily as possible.
I'm considering writing a class like this (untested):
```
class StringOutputStream extends OutputStream {
    StringBuilder mBuf = new StringBuilder();
    public void write(int b) throws IOException {
        mBuf.append((char) b);
    }
    public String getString() {
        return mBuf.toString();
    }
}
```
But is there a better way? I only want to run a test!
|
I would use a `ByteArrayOutputStream`. And on finish you can call:
```
new String( baos.toByteArray(), codepage );
```
or better:
```
baos.toString( codepage );
```
For the `String` constructor, the `codepage` can be a `String` or an instance of [java.nio.charset.Charset](http://docs.oracle.com/javase/7/docs/api/java/nio/charset/Charset.html). A possible value is [java.nio.charset.StandardCharsets.UTF\_8](http://docs.oracle.com/javase/7/docs/api/java/nio/charset/StandardCharsets.html#UTF_8).
The method `toString()` accepts only a `String` as a `codepage` parameter (as of Java 8).
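Put together as a runnable sketch (the `writeToStream` stand-in below simply writes the object's `toString()` bytes — an assumption, since the question doesn't show its body):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class CaptureDemo {

    // Stand-in for the question's writeToStream(Object, OutputStream).
    static void writeToStream(Object o, OutputStream out) throws IOException {
        out.write(String.valueOf(o).getBytes(StandardCharsets.UTF_8));
    }

    /** Captures whatever writeToStream produces and decodes it as UTF-8. */
    public static String capture(Object o) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        writeToStream(o, baos);
        // Always name the charset; the platform default may not match
        // whatever the producer wrote.
        return baos.toString("UTF-8");
    }
}
```

Unlike the `StringBuilder` approach in the question, this round-trips multi-byte characters correctly because the decoding happens once, over the complete byte array.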
|
I like the Apache Commons IO library. Take a look at its version of [ByteArrayOutputStream](https://commons.apache.org/proper/commons-io/javadocs/api-2.5/org/apache/commons/io/output/ByteArrayOutputStream.html), which has a `toString(String enc)` method as well as `toByteArray()`. Using existing and trusted components like the Commons project lets your code be smaller and easier to extend and repurpose.
|
Get an OutputStream into a String
|
[
"",
"java",
"string",
"io",
"stream",
""
] |
I have one of those "I swear I didn't touch the server" situations. I honestly didn't touch any of the PHP scripts. The problem I am having is that PHP session data is not being saved across different pages or page refreshes. I know a new session is being created correctly because I can set a session variable (e.g. $\_SESSION['foo'] = "foo") and print it back out on the same page just fine. But when I try to use that same variable on another page, it is not set! Are there any PHP functions or information I can use on my host's server to see what is going on?
Here is an example script that does not work on my hosts' server as of right now:
```
<?php
session_start();
if(isset($_SESSION['views']))
$_SESSION['views'] = $_SESSION['views']+ 1;
else
$_SESSION['views'] = 1;
echo "views = ". $_SESSION['views'];
echo '<p><a href="page1.php">Refresh</a></p>';
?>
```
The 'views' variable never gets incremented after doing a page refresh. I'm thinking this is a problem on their side, but I wanted to make sure I'm not a complete idiot first.
Here is the phpinfo() output from my host's server (PHP Version 4.4.7):
|
Thanks for all the helpful info. It turns out that my host changed servers and started using a different session save path other than /var/php\_sessions, which didn't exist anymore. A solution would have been to declare `ini_set('session.save_path', 'SOME WRITABLE PATH');` in all my script files, but that would have been a pain. I talked with the host and they explicitly set the session path to a real path that did exist. Hope this helps anyone having session path troubles.
|
Check to make sure you are not mixing https:// with http://. Session variables do not flow between secure and insecure sessions.
|
PHP Session data not being saved
|
[
"",
"php",
"session",
""
] |
Kind of a random question...
What I'm looking for is a way to express a cast operation which uses a defined operator of the class instance I'm casting from, and generates a compile-time error if there is not a defined cast operator for the type. So, for example, what I'm looking for is something like:
```
template< typename RESULT_TYPE, typename INPUT_TYPE >
RESULT_TYPE operator_cast( const INPUT_TYPE& tValue )
{
return tValue.operator RESULT_TYPE();
}
// Should work...
CString sString;
LPCTSTR pcszString = operator_cast< LPCTSTR >( sString );
// Should fail...
int iValue = 42;
DWORD dwValue = operator_cast< DWORD >( iValue );
```
Interesting side-note: The above code crashes the VS2005 C++ compiler, and doesn't compile correctly in the VS2008 C++ compiler due to what I'm guessing is a compiler bug, but hopefully demonstrates the idea.
Anybody know of any way to achieve this effect?
Edit: More rationale, to explain why you might use this. Say you have a wrapper class which is supposed to encapsulate or abstract a type, and you're casting it to the encapsulated type. You could use static\_cast<>, but that might work when you wanted it to fail (ie: the compiler chooses an operator which is allowed to convert to the type you asked for, when you wanted a failure because that operator is not present).
Admittedly it's an uncommon case, but it's annoying that I can't express exactly what I want the compiler to do in an encapsulated function... hence the question here.
|
The code you posted works with the [Comeau compiler](http://www.comeaucomputing.com/tryitout/) (which is usually a good indication that it's valid C++).
As you know, a valid cast consists of no more than one user-defined cast, so a possible solution I was thinking of was adding another user-defined cast by defining a new type in the cast template and having a [static assert](http://www.boost.org/doc/libs/1_36_0/boost/static_assert.hpp) that no cast is available from the new type to the result type (using [boost is\_convertible](http://www.boost.org/doc/libs/1_36_0/boost/type_traits/is_convertible.hpp)). However, this doesn't distinguish between cast operators and cast constructors (a ctor with one argument) and allows additional casts to take place (e.g. `void*` to `bool`). I'm not sure if making a distinction between cast operators and cast constructors is the *correct* thing to do, but that's what the question states.
After a couple of days mulling this over, it hit me: you can simply take the address of the cast operator. This is slightly easier said than done due to C++'s hairy pointer-to-member syntax (it took me way longer than expected to get it right). I don't know if this works on VS2008; I only checked it on Comeau.
```
template< typename Res, typename T>
Res operator_cast( const T& t )
{
typedef Res (T::*cast_op_t)() const;
cast_op_t cast_op = &T::operator Res;
return (t.*cast_op)();
}
```
**Edit:** I got a chance to test it on VS2005 and VS2008. My findings differ from the original poster's.
* On VS2008 the original version seems to work fine (as does mine).
* On VS2005 the original version only crashes the compiler when casting from a built-in type (e.g. casting int to int) after providing a compilation error, which doesn't seem so bad to me, and my version seems to work in all cases.
|
Using a converting constructor marked [explicit](https://stackoverflow.com/questions/121162/what-does-the-explicit-keyword-in-c-mean) is how you would prevent the compiler from allowing implicitly converted types from initializing your wrapper class.
|
Any way to cast with class operator only?
|
[
"",
"c++",
"compiler-construction",
"casting",
""
] |
I'm running my site through the W3C's validator trying to get it to validate as XHTML 1.0 Strict and I've gotten down to a particularly sticky (at least in my experience) validation error. I'm including certain badges from various services in the site that provide their own API and code for inclusion on an external site. These badges use javascript (for the most part) to fill an element that you insert in the markup which requires a child. This means that in the end, perfectly valid markup is generated, but to the validator, all it sees is an incomplete parent-child tag which it then throws an error on.
As a caveat, I understand that I could complain to the services that their badges don't validate. Sans this, I assume that someone has validated their code while including badges like this, and that's what I'm interested in. Answers such as, 'Complain to Flickr about their badge' aren't going to help me much.
An additional caveat: I would prefer that as much as possible the markup remains semantic. I.E. Adding an empty li tag or tr-td pair to make it validate would be an *undesirable* solution, even though it may be necessary. If that's the only way it can be made to validate, oh well, but please lean answers towards semantic markup.
As an example:
```
<div id="twitter_div">
<h2><a href="http://twitter.com/stopsineman">@Twitter</a></h2>
<ul id="twitter_update_list">
<script type="text/javascript" src="http://twitter.com/javascripts/blogger.js"></script>
<script type="text/javascript" src="http://twitter.com/statuses/user_timeline/stopsineman.json?callback=twitterCallback2&count=1"></script>
</ul>
</div>
```
Notice the ul tags wrapping the javascript. This eventually gets filled in with lis via the script, but to the validator it only sees the unpopulated ul.
Thanks in advance!
|
The following fragment is valid XHTML and does the job:
```
<div id="twitter_div">
<h2 class="twitter-title"><a href="http://twitter.com/stopsineman" title="Tim's Twitter Page.">Twitter Updates</a></h2>
<div id="myDiv" />
</div>
<script type="text/javascript">
var placeHolderNode = document.getElementById("myDiv");
var parentNode = placeHolderNode.parentNode;
var insertedNode = document.createElement("ul");
    insertedNode.setAttribute("id", "twitter_update_list");
parentNode.insertBefore( insertedNode, placeHolderNode);
    parentNode.removeChild(placeHolderNode);
</script>
<script type="text/javascript" src="http://twitter.com/javascripts/blogger.js"></script>
<script type="text/javascript" src="http://twitter.com/statuses/user_timeline/stopsineman.json?callback=twitterCallback2&count=5"></script>
```
|
Perhaps you could use javascript to write the initial badge HTML? You'd probably only want the badge code to be inserted in your document if javascript were available to populate it, right?
You'd just need to make sure your document writing happens before the javascript for your various badges.
Could you give a specific example of the HTML / link to a page with the invalid code?
|
How do you write Valid XHTML 1.0 Strict code when you are using javascript to fill an element that requires a child?
|
[
"",
"javascript",
"validation",
"twitter",
"w3c-validation",
"xhtml-1.0-strict",
""
] |
We are hosting a site for a client and they want us to include the header they have on their server into the pages we are hosting. So whenever they change it, it will automatically change on our site.
We are attempting to use the "include" tag in our JSP code. The code we are using is as follows:
`<%@ include file="www.CLIENT.com/CLIENT2/MiddlePageFiles/Vendor_header.html" %>`
We also tried
`<%@ include file="http://www.CLIENT.com/CLIENT2/MiddlePageFiles/Vendor_header.html" %>` (this time with the `http://` prefix)
Unfortunately these aren't working for us. What seems to be happening is that the code is ONLY looking locally for this file and never seems to go "outside" to look for it.
We are able to pull the header into our page when we use an iframe but because of the way the header is constructed/coded the mouse over drop-down menus aren't working
as they should when we use the iframe. The drop-down menus are "cascading" underneath the rest of the content on the page and we weren't able to bring
them to the "top".
As a temporary work around, were are hosting the HTML on our own servers.
Any ideas?
|
If you choose to do this in Java, it's nice and easy using the HttpClient from Apache Commons.
```
public static String fetchSourceHtml( String urlString ) {
try {
HttpClient httpClient = new HttpClient();
GetMethod getMethod = new GetMethod( urlString );
getMethod.setFollowRedirects( true );
int httpStatus = httpClient.executeMethod( getMethod );
if (httpStatus >= 400) {
return "";
}
String sourceHtml = getMethod.getResponseBodyAsString();
return sourceHtml;
}
catch (IOException e) {
return "";
}
}
```
For a quick and dirty solution, your JSP you can call this method directly. You could, of course, create a taglib tag to call the method if you prefer.
You may want to change the time-out and retry mechanism for HttpClient. By default it will automatically try up to a maximum of 3 times with each attempt timing out after 30s.
However, you probably want to look into caching the strings for a suitable period of time. You really don't want to make 2 blocking external http requests for each page access to your site.
|
JSP includes don't support including remote files, which is why a relative URL is required: <http://java.sun.com/products/jsp/syntax/1.2/syntaxref1214.html>
I suggest writing a function which opens a connection to that page and downloads the contents and then prints them to your own `out` stream. Then you can put that function in a local file and just `include` that.
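A minimal sketch of such a function using only the JDK (no external libraries — the `fetch` name, the timeouts, and the empty-string fallback are my own choices, not anything the container provides) could look like this:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RemoteInclude {

    /** Downloads the body of the given URL and decodes it as UTF-8. */
    public static String fetch(String urlString) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(urlString).openConnection();
            // Never let a slow remote header block your own page for long.
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            try (InputStream in = conn.getInputStream()) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                return buf.toString("UTF-8");
            }
        } catch (IOException e) {
            // Degrade gracefully: render the page without the header.
            return "";
        }
    }
}
```

As with the HttpClient version, you would cache the returned string for a suitable period rather than fetching the header on every request.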
|
How do I include a header into my site that is hosted externally?
|
[
"",
"java",
"jsp",
"include",
"jspinclude",
""
] |
This is likely going to be an easy answer and I'm just missing something, but here goes... If I have a Type (that is, an actual System.Type, not an instance), how do I tell if it inherits from another specific base type?
|
Use the [IsSubclassOf](http://msdn.microsoft.com/en-us/library/system.type.issubclassof.aspx) method of the [System.Type](http://msdn.microsoft.com/en-us/library/system.type_members.aspx) class.
|
One thing to clarify between `Type.IsSubclassOf()` and `Type.IsAssignableFrom()`:
* `IsSubclassOf()` will return `true` only if the given type is derived from
  the specified type. It will return `false` if the given type **IS**
  the specified type.
* `baseType.IsAssignableFrom(givenType)` will return `true` if the given type is either
  the base type itself or derived from it — note that it is called on the *base* type,
  not on the type being tested.
So if you are using these to compare `BaseClass` and `DerivedClass` (which inherits from `BaseClass`) then:
```
BaseClassInstance.GetType.IsSubclassOf(GetType(BaseClass)) = FALSE
DerivedClassInstance.GetType.IsSubclassOf(GetType(BaseClass)) = TRUE
GetType(BaseClass).IsAssignableFrom(BaseClassInstance.GetType) = TRUE
GetType(BaseClass).IsAssignableFrom(DerivedClassInstance.GetType) = TRUE
```
|
How do you determine whether or not a given Type (System.Type) inherits from a specific base class (in .Net)?
|
[
"",
"c#",
".net",
"reflection",
"inheritance",
""
] |
Is it possible to add attributes at runtime or to change the value of an attribute at runtime?
|
Attributes are static metadata. Assemblies, modules, types, members, parameters, and return values aren't first-class objects in C# (e.g., the `System.Type` class is merely a reflected representation of a type). You can get an instance of an attribute for a type and change the properties if they're writable but that won't affect the attribute as it is applied to the type.
|
This really depends on what exactly you're trying to accomplish.
The [System.ComponentModel.TypeDescriptor](http://msdn.microsoft.com/en-us/library/system.componentmodel.typedescriptor.aspx) stuff can be used to add attributes to types, properties and object instances, and it has the limitation that you have to use it to retrieve those properties as well. If you're writing the code that consumes those attributes, and you can live within those limitations, then I'd definitely suggest it.
As far as I know, the PropertyGrid control and the visual studio design surface are the only things in the BCL that consume the TypeDescriptor stuff. In fact, that's how they do about half the things they really need to do.
|
Can attributes be added dynamically in C#?
|
[
"",
"c#",
".net",
"attributes",
""
] |
I'm trying to call the OpenThemeData (see msdn [OpenThemeData](http://msdn.microsoft.com/en-us/library/bb759821%28v=VS.85%29.aspx)) function but I couldn't determine what are the acceptable Class names to be passed in by the `pszClassList` parameter.
```
HTHEME OpenThemeData(
HWND hwnd,
LPCWSTR pszClassList
);
```
Could anybody tell me what are the acceptable class names that I can pass into that parameter?
Thanks!
|
You can look in "AeroStyle.xml" as a previous poster noted, which gives an exact list for Vista/Aero. However, if you want to play safe (and you probably do) the class names should, in general, be Windows class names of Windows common controls. For example, push buttons and check boxes use the class name "Button", the edit control "Edit", etc. I generally pick the class name of the control that's closest to whatever custom element I'm working on is, and use the theme data for that. That way you'll get code that works with XP, Vista and (hopefully) Windows 7, regardless of what the user's selected theme actually is.
However, unless you use raw Win32 a lot, you probably don't do much control creation directly using the class name. The class names are rather liberally sprinkled throughout MSDN. A good place to start is usually the "CommCtrl.h" file from the Platform SDK, which has a lot of them, and they're always described in the MSDN help on the individual common controls. You can also often learn them by looking at how dialogs are defined in .rc files by opening them in a text editor: these contain the class name for the controls.
|
The article [**Parts and States**](http://msdn.microsoft.com/en-us/library/bb773210%28v=VS.85%29.aspx) on MSDN contains a table which shows the *control classes*, *parts*, and *states*. The values in the table are defined in `Vsstyle.h` and `Vssym32.h`.
Here is a quick reference:
```
BUTTON, CLOCK, COMBOBOX, COMMUNICATIONS, CONTROLPANEL, DATEPICKER, DRAGDROP,
EDIT, EXPLORERBAR, FLYOUT, GLOBALS, HEADER, LISTBOX, LISTVIEW, MENU, MENUBAND,
NAVIGATION, PAGE, PROGRESS, REBAR, SCROLLBAR, SEARCHEDITBOX, SPIN, STARTPANEL,
STATUS, TAB, TASKBAND, TASKBAR, TASKDIALOG, TEXTSTYLE, TOOLBAR, TOOLTIP,
TRACKBAR, TRAYNOTIFY, TREEVIEW, WINDOW
```
The answer to the question [Windows Visual Themes: Gallery of Parts and States?](https://stackoverflow.com/questions/4009701/windows-visual-themes-gallery-of-parts-and-states/4009712#4009712) provides a *"Parts and States Explorer"* application where you can browse and test most of the styles.
|
What are the possible classes for the OpenThemeData function?
|
[
"",
"c++",
"winapi",
"mfc",
"themes",
"msdn",
""
] |
Say for example you just queried a database and you recieved this 2D array.
```
$results = array(
array('id' => 1, 'name' => 'red' , 'spin' => 1),
array('id' => 2, 'name' => 'green', 'spin' => -1),
array('id' => 3, 'name' => 'blue' , 'spin' => .5)
);
```
I often find myself writing loops like this.
```
foreach($results as $result)
$names[] = $result['name'];
```
My question is: does there exist a way to get this array $names without using a loop? Using callback functions counts as using a loop.
Here is a more generic example of getting every field.
```
foreach($results as $result)
foreach($result as $key => $value)
$fields[$key][] = $value;
```
|
As of June 20th, 2013, in PHP 5.5 there is a new function [array\_column](https://www.php.net/array_column)
For example:
```
$records = array(
array(
'id' => 2135,
'first_name' => 'John',
'last_name' => 'Doe'
),
array(
'id' => 3245,
'first_name' => 'Sally',
'last_name' => 'Smith'
),
array(
'id' => 5342,
'first_name' => 'Jane',
'last_name' => 'Jones'
),
array(
'id' => 5623,
'first_name' => 'Peter',
'last_name' => 'Doe'
)
);
$firstNames = array_column($records, 'first_name');
print_r($firstNames);
```
Will return
```
Array
(
[0] => John
[1] => Sally
[2] => Jane
[3] => Peter
)
```
There are even more examples in the above mentioned link.
|
I voted @Devon's response up because there really isn't a way to do what you're asking with a built-in function. The best you can do is write your own:
```
function array_column($array, $column)
{
$ret = array();
foreach ($array as $row) $ret[] = $row[$column];
return $ret;
}
```
|
Isolate a single column in a multi-dimensional array
|
[
"",
"php",
"arrays",
"associative-array",
""
] |
I seem to be seeing more 'for' loops over iterators in questions & answers here than I do for\_each(), transform(), and the like. Scott Meyers suggests that [stl algorithms are preferred](http://www.ddj.com/cpp/184401446), or at least he did in 2001. Of course, using them often means moving the loop body into a function or function object. Some may feel this is an unacceptable complication, while others may feel it better breaks down the problem.
So... should STL algorithms be preferred over hand-rolled loops?
|
It depends on:
* Whether high-performance is required
* The readability of the loop
* Whether the algorithm is complex
If the loop isn't the bottleneck, and the algorithm is simple (like for\_each), then for the current C++ standard, I'd prefer a hand-rolled loop for readability. (Locality of logic is key.)
However, now that C++0x/C++11 is supported by some major compilers, I'd say use STL algorithms because they now allow lambda expressions — and thus the locality of the logic.
|
I’m going to go against the grain here and advocate that using STL algorithms with functors makes code much easier to understand and maintain, but you have to do it right. You have to pay more attention to readability and clarity. Particularly, you have to get the naming right. But when you do, you can end up with cleaner, clearer code, and shift your paradigm toward more powerful coding techniques.
Let’s take an example. Here we have a group of children, and we want to set their “Foo Count” to some value. The standard for-loop, iterator approach is:
```
for (vector<Child>::iterator iter = children.begin();
iter != children.end();
++iter)
{
iter->setFooCount(n);
}
```
Which, yeah, it’s pretty clear, and definitely not *bad* code. You can figure it out with just a little bit of looking at it. But look at what we can do with an appropriate functor:
```
for_each(children.begin(), children.end(), SetFooCount(n));
```
Wow, that says exactly what we need. You don’t have to figure it out; you immediately know that it’s setting the “Foo Count” of every child. (It would be even clearer if we didn’t need the .begin() / .end() nonsense, but you can’t have everything, and they didn’t consult me when making the STL.)
Granted, you do need to define this magical functor, `SetFooCount`, but its definition is pretty boilerplate:
```
class SetFooCount
{
public:
SetFooCount(int n) : fooCount(n) {}
void operator () (Child& child)
{
child.setFooCount(fooCount);
}
private:
int fooCount;
};
```
In total it’s more code, and you have to look at another place to find out exactly what `SetFooCount` is doing. But because we named it well, 99% of the time we don’t have to look at the code for `SetFooCount`. We assume it does what it says, and we only have to look at the `for_each` line.
What I really like is that using the algorithms leads to a paradigm shift. Instead of thinking of a list as a collection of objects, and doing things to every element of the list, you think of the list as a first class entity, and you operate directly on the list itself. The for-loop iterates through the list, calling a member function on each element to set the Foo Count. Instead, I am doing one command, which sets the Foo Count of every element in the list. It’s subtle, but when you look at the forest instead of the trees, you gain more power.
So with a little thought and careful naming, we can use the STL algorithms to make cleaner, clearer code, and start thinking on a less granular level.
|
Should one prefer STL algorithms over hand-rolled loops?
|
[
"",
"c++",
"algorithm",
"stl",
""
] |
Say you have a large PHP project and suddenly, when attempting to run it, you just end up with a blank page. The script terminates and you want to find exactly where that is with as little effort as possible.
Is there a tool/program/command/IDE that can, on PHP script termination, tell you the location of a script exit?
Note: I can't mark my own post as "accepted answer" so look at the bottom to see my solution. If you come up with a better solution I will mark your post as the answer.
|
With some inspiration from the nonworking but still right-direction answer from RoBorg, I used the following code in the beginning:
```
function shutdown() {
global $dbg_stack_a;
print_r($dbg_stack_a);
}
register_shutdown_function('shutdown');
```
And then I made a global conditional breakpoint (global = the breakpoint is evaluated on each row), exploiting the fact that it can run code through eval(), with the following "condition":
```
eval('
global $dbg_stack_a, $dbg_stack_b, $dbg_stack_c;
$dbg_stack_a = $dbg_stack_b;
$dbg_stack_b = $dbg_stack_c;
$dbg_stack_c = debug_backtrace();
return false;
')
```
Probably not fast but does the trick! Using this I was able to determine the exact file and line location that raised die(). (This example works in NuSphere.)
|
I use the following code and need no special debugging environment. Note that this might take a really long time; you can set the tick count higher, which makes it faster but less precise.
```
function shutdown_find_exit()
{
var_dump($GLOBALS['dbg_stack']);
}
register_shutdown_function('shutdown_find_exit');
function write_dbg_stack()
{
$GLOBALS['dbg_stack'] = debug_backtrace();
}
register_tick_function('write_dbg_stack');
declare(ticks=1);
```
|
Fastest way to determine where PHP script exits
|
[
"",
"php",
""
] |
Which types of objects fall into the domain of "subscriptable"?
|
It basically means that the object implements the `__getitem__()` method. In other words, it describes objects that are "containers", meaning they contain other objects. This includes strings, lists, tuples, and dictionaries.
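As a minimal sketch (the class name is invented for illustration), any class that defines `__getitem__` becomes subscriptable:

```python
class Squares:
    """A toy container: the subscription s[n] is routed to __getitem__."""
    def __getitem__(self, index):
        return index * index

s = Squares()
print(s[4])  # s[4] calls s.__getitem__(4) -> 16
```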
|
Off the top of my head, the following are the only built-ins that are subscriptable:
```
string: "foobar"[3] == "b"
tuple: (1,2,3,4)[3] == 4
list: [1,2,3,4][3] == 4
dict: {"a":1, "b":2, "c":3}["c"] == 3
```
But [mipadi's answer](https://stackoverflow.com/a/216980/8554766) is correct - any class that implements `__getitem__` is subscriptable
|
What does it mean if a Python object is "subscriptable" or not?
|
[
"",
"python",
"terminology",
""
] |
I'm implementing a secure WCF service. Authentication is done using username / password or Windows credentials. The service is hosted in a Windows Service process. Now, I'm trying to find out the best way to implement *authorization* for each service operation.
For example, consider the following method:
```
public EntityInfo GetEntityInfo(string entityId);
```
As you may know, in WCF, there is an OperationContext object from which you can retrieve the security credentials passed in by the caller/client. Now, *authentication* would have already finished by the time the first line in the method is called. However, how do we implement authorization if the decision depends on the input data itself? For example, in the above case, say 'admin' users (whose permissions etc. are stored in a database) are allowed to get entity info, and other users should not be allowed... where do we put the authorization checks?
Say we put it in the first line of the method like so:
```
CheckAccessPermission(PermissionType.GetEntity, user, entityId) //user is pulled from the current OperationContext
```
Now, there are a couple of questions:
1. Do we validate the entityId (for example check null / empty value etc) BEFORE the authorization check or INSIDE the authorization check? In other words, if authorization checks should be included in every method, is that a good pattern? Which should happen first - argument validation or authorization?
2. How do we unit test a WCF service when authorization checks are all over the place like this, and we don't have an OperationContext in the unit test!? (Assuming I'm trying to test this service class implementation directly without any of the WCF setup).
Any ideas guys?
|
For question 1, absolutely do authorization first. No code (within your control) should execute before authorization to maintain the tightest security. Paul's example above is excellent.
For question 2, you could handle this by subclassing your concrete service implementation. Make the true business logic implementation an abstract class with an abstract "CheckPermissions" method as you mention above. Then create 2 subclasses, one for WCF use, and one (very isolated in a non-deployed DLL) which returns true (or whatever you'd like it to do in your unit testing).
Example (note, these shouldn't be in the same file or even DLL though!):
```
public abstract class MyServiceImpl
{
public void MyMethod(string entityId)
{
CheckPermissions(entityId);
//move along...
}
protected abstract bool CheckPermissions(string entityId);
}
public class MyServiceUnitTest : MyServiceImpl
{
    protected override bool CheckPermissions(string entityId)
    {
        return true;
    }
}
public class MyServiceMyAuth : MyServiceImpl
{
    protected override bool CheckPermissions(string entityId)
    {
        //do some custom authentication
        return true;
    }
}
```
Then your WCF deployment uses the class "MyServiceMyAuth", and you do your unit testing against the other.
|
For question 1, it's best to perform authorization first. That way, you don't leak validation error messages back to unauthorized users.
BTW, instead of using a home-grown authentication method (which I assume is what your CheckAccessPermission is), you might be able to hook up to WCF's out-of-the-box support for ASP.NET role providers. Once this is done, you perform authorization via OperationContext.Current.ServiceSecurityContext.PrimaryIdentity.IsInRole(). The PrimaryIdentity is an IPrincipal.
|
WCF Service authorization patterns
|
[
"",
"c#",
".net",
"wcf",
"authorization",
""
] |
I'm using NetBeans, trying to change the familiar Java coffee cup icon to a png file that I have saved in a resources directory in the jar file. I've found many different web pages that claim they have a solution, but so far none of them work.
Here's what I have at the moment (leaving out the try-catch block):
```
URL url = new URL("com/xyz/resources/camera.png");
Toolkit kit = Toolkit.getDefaultToolkit();
Image img = kit.createImage(url);
getFrame().setIconImage(img);
```
The class that contains this code is in the **com.xyz** package, if that makes any difference. That class also extends JFrame. This code is throwing a MalformedURLException on the first line.
Anyone have a solution that works?
|
```
java.net.URL url = ClassLoader.getSystemResource("com/xyz/resources/camera.png");
```
May or may not require a '/' at the front of the path.
|
You can do this directly in NetBeans: in the design view, go to the `JFrame` properties, choose the icon image property, choose to set the form's `iconImage` property using "Custom code", and then in the `Form.SetIconImage()` function put the following code:
```
Toolkit.getDefaultToolkit().getImage(name_of_your_JFrame.class.getResource("image.png"))
```
Do not forget to import:
```
import java.awt.Toolkit;
```
in the source code!
|
How do I change the default application icon in Java?
|
[
"",
"java",
"icons",
""
] |
I'm trying to implement an IErrorHandler in my WCF service in order to log every exception that hits the service boundary before it's passed to the client. I already use IErrorHandlers for translating Exceptions to typed FaultExceptions, which has been very useful. According to the MSDN for IErrorHandler.HandleError(), it's also intended to be used for logging at the boundary.
The problem is, the HandleError function isn't guaranteed to be called on the operation thread, so I can't figure out how to get information about what operation triggered the exception. I can get the TargetSite out of the exception itself, but that gives me the interior method instead of the operation. I could also parse through the StackTrace string to figure out where it was thrown, but this seems a little fragile and hokey. Is there any consistent, supported way to get any state information (messages, operationdescription, anything) while in the HandleError function? Or any other ways to automatically log exceptions for service calls?
I'm looking for a solution to implement on production, using my existing logging framework, so SvcTraceViewer won't do it for me.
Thanks.
|
I ended up putting the logging in IErrorHandler.ProvideFault() instead of IErrorHandler.HandleError(). The ProvideFault call is made on the operation thread, so I can use OperationContext.Current to get some information to log.
|
I use the IErrorHandler in the same way that you describe, but not for logging. Instead, on service classes (WCF or not) I use an interceptor as described [here](http://ayende.com/Blog/archive/2008/07/31/Logging--the-AOP-way.aspx). I believe that this technique will capture the information you are interested in.
|
WCF Error Logging at Service Boundary
|
[
"",
"c#",
"wcf",
"exception",
"error-handling",
""
] |
My code is in c# asp.net 3.5
In the following code, "Msg" contains many words with spaces and special characters (e.g. Failed to prepare Sync Favorites : Directory does not exist: \STL-FNP-02\ryounes$\Sync\Favorites). This "Msg" is pulled from the database into a GridView. I am not able to create a hyperlink for this "Msg" in the GridView; since it has spaces, the hyperlink is not created.
I need to create a hyperlink for this "Msg" and later use it in a LINQ query.
I think one should either use Eval or a URL encoder. I am not sure how to do it. Can anyone say how to go about it?
```
<asp:HyperLinkField DataTextField="Msg" HeaderText="Msg" DataNavigateUrlFields="Msg"
DataNavigateUrlFormatString="Sync.aspx?Msg={0}" />
```
|
The easiest way to get around it is to use something like
```
<asp:TemplateField HeaderText="Msg">
    <ItemTemplate>
        <asp:HyperLink runat="server" Text='<%# Eval("Msg") %>'
            NavigateUrl='<%# "Sync.aspx?Msg=" + HttpUtility.UrlEncode(Eval("Msg").ToString()) %>' />
    </ItemTemplate>
</asp:TemplateField>
```
Not 100% certain that the syntax is correct but you get the general idea
|
It doesn't create the link because the value is not a valid URL. Rather than using a hyperlink column, you will most likely need to migrate to a template and manage it yourself, or at minimum do some formatting on it.
I would be cautious regardless about making that a hyperlink: where is it going to go anyway?
If there is a specific place based on message that you should be going, calculate that BEFORE you bind to the grid...
|
create hyperlink in gridview asp.net c#
|
[
"",
"c#",
"asp.net",
"linq",
""
] |
I have the following class which uses BinaryReader internally and implements IDisposable.
```
class DisposableClass : IDisposable
{
private BinaryReader reader;
public DisposableClass(Stream stream)
{
reader = new BinaryReader(stream);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
((IDisposable)reader).Dispose();
// reader.Dispose();// this won't compile
}
}
public void Dispose()
{
this.Dispose(true);
}
}
```
I have already figured out that I need to cast BinaryReader to IDisposable to be able to call Dispose on it, but I don't understand why I can't just call the Dispose() method directly without casting to IDisposable?
|
It won't work because the `Dispose` method on `BinaryReader` has been explicitly implemented.
Instead of being implicitly implemented, as in:
```
public void Dispose()
{
}
```
...it has been explicitly implemented, as in:
```
void IDisposable.Dispose()
{
}
```
...which means it can only be accessed via the `IDisposable` interface. Therefore, you have to cast the instance to `IDisposable` first.
|
Expanding on my comments [here](https://stackoverflow.com/questions/222996/why-calling-dispose-on-binaryreader-results-in-compile-error#223020), the `BinaryReader` class does not properly implement the Dispose pattern.
Looking at this class in Reflector, it looks like this (for .NET 3.5):
```
public class BinaryReader : IDisposable
{
public virtual void Close()
{
this.Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
Stream stream = this.m_stream;
this.m_stream = null;
if (stream != null)
{
stream.Close();
}
}
this.m_stream = null;
this.m_buffer = null;
this.m_decoder = null;
this.m_charBytes = null;
this.m_singleChar = null;
this.m_charBuffer = null;
}
void IDisposable.Dispose()
{
this.Dispose(true);
}
}
```
The problem here is that by making `IDisposable.Dispose()` an explicit interface implementation, it forces a developer to call `Close()` instead of `Dispose()`.
In this context, we have a case of imbalanced semantics. There was never a call to "Open" the reader so it is not intuitive to "Close" the reader.
Going one step further, in order to call Dispose() you must then explicitly cast to `IDisposable`, which is not something you ordinarily need to do. You do have the option of calling `Dispose(bool)` directly, but how do you know what the boolean parameter should be?
To properly follow the pattern, it should have been implemented as:
```
public class BinaryReader : IDisposable
{
public virtual void Close()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
Stream stream = this.m_stream;
this.m_stream = null;
if (stream != null)
{
stream.Close();
}
}
this.m_stream = null;
this.m_buffer = null;
this.m_decoder = null;
this.m_charBytes = null;
this.m_singleChar = null;
this.m_charBuffer = null;
}
public void Dispose()
{
this.Close();
}
}
```
This would allow you to call either `Close()` or `Dispose()`, in which case either call continues to result in calling `Dispose(true)`. (This is the same flow as the actual implementation by calling `Close()` or `((IDisposable)reader).Dispose()`).
Fortunately (or unfortunately, depending on which way you choose to look at it) because `BinaryReader` does implement the `IDisposable` interface it is allowed in a using statement:
```
using (BinaryReader reader = new BinaryReader(...))
{
}
```
|
Why calling Dispose() on BinaryReader results in compile error?
|
[
"",
"c#",
".net",
""
] |
If I have an assembly (A) which references another assembly (B).
I want to reference A in a project, I add the reference and it copies A into my BIN directory. It does not copy B as well, even though A depends on it, so the code doesn't compile.
How can I set things up so that whenever I reference A, both A and B get copied to my bin directory?
|
I don't think there is any way around what you ask other than to explicitly add both. I don't think, however, that adding projects for the sake of getting references copied is a viable solution to the issue. Not all projects that a solution depends on should necessarily be added to the solution. This would completely depend on your overall project structure, processes, source control, division of labour, etc.
|
In Visual Studio, add each project to the same solution. Ensure you use Project References instead of direct file references (ie browsing for the assembly).
|
Assembly dependencies with .Net projects
|
[
"",
"c#",
".net",
"vb.net",
""
] |
It seems pretty common to want to let your javascript know a particular dom node corresponds to a record in the database. So, how do you do it?
One way I've seen that's pretty common is to use a class for the type and an id for the id:
```
<div class="thing" id="5">
<script> myThing = select(".thing#5") </script>
```
There's a slight html standards issue with this though -- if you have more than one type of record on the page, you may end up duplicating IDs. But that doesn't do anything bad, does it?
An alternative is to use data attributes:
```
<div data-thing-id="5">
<script> myThing = select("[data-thing-id=5]") </script>
```
This gets around the duplicate IDs problem, but it does mean you have to deal with attributes instead of IDs, which is sometimes more difficult. What do you guys think?
|
Note that an ID cannot start with a digit, so:
```
<div class="thing" id="5">
```
is invalid HTML. See [What are valid values for the id attribute in HTML?](https://stackoverflow.com/questions/70579/what-is-a-valid-value-for-id-attributes-in-html#70586)
In your case, I would use ID's like `thing5` or `thing.5`.
|
```
<div class="thing" id="myapp-thing-5"/>
// Get thing on the page for a particular ID
var myThing = select("#myapp-thing-5");
// Get ID for the first thing on the page
var thing_id = /myapp-thing-(\d+)/.exec ($('.thing')[0].id)[1];
```
|
Opinion: in HTML, Possible Duplicate IDs or Non-Standard Attributes?
|
[
"",
"javascript",
"html",
"database",
""
] |
What is the best way to bind WPF properties to ApplicationSettings in C#? Is there an automatic way like in a Windows Forms Application? Similar to [this question](https://stackoverflow.com/questions/18421/best-way-to-bind-windows-forms-properties-to-applicationsettings-in-c), how (and is it possible to) do you do the same thing in WPF?
|
You can directly bind to the static object created by Visual Studio.
In your windows declaration add:
```
xmlns:p="clr-namespace:UserSettings.Properties"
```
where `UserSettings` is the application namespace.
Then you can add a binding to the correct setting:
```
<TextBlock Height="{Binding Source={x:Static p:Settings.Default},
Path=Height, Mode=TwoWay}" ....... />
```
Now you can save the settings, per example when you close your application:
```
protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
{
Properties.Settings.Default.Save();
base.OnClosing(e);
}
```
|
In case you are a **VB.Net** developer attempting this, the answer is a smidge different.
```
xmlns:p="clr-namespace:ThisApplication"
```
Notice the .Properties isn't there.
---
In your binding it's MySettings.Default, instead of Settings.Default - since the app.config stores it differently.
```
<TextBlock Height="{Binding Source={x:Static p:MySettings.Default}, Path=Height, ...
```
After a bit of pulling out my hair, I discovered this. Hope it helps
|
Best way to bind WPF properties to ApplicationSettings in C#?
|
[
"",
"c#",
".net",
"wpf",
"visual-studio",
""
] |
If you create a panel on a form and set it to Dock=Top and drop another panel and set its Dock=Fill, it may fill the entire form, ignoring the first panel. Changing the tab order does nothing.
|
Docking layout depends on the **order** of sibling controls. Controls are docked "bottom up", so **the last control in the collection is docked first**. A docked control only take the layout of *previously* docked siblings into account. Hence the control with Dock=Fill should be first (top) in the sibling order, if you want it to take the other docked controls into account. If it is not the first control, earlier controls will overlap it.
This can be confusing because the sibling-order is not necessarily the same as the visual order, and the sibling order is not always apparent from the design view.
The **Document outline** window (View -> Other Windows -> Document outline) gives a useful tree-view over the control hierarchy and order, and allows you to change the sibling order of controls.
You can also change sibling order directly in the designer by context menu -> Bring to front / Send to back, which moves the control to be first or last of the siblings. These menu labels may be somewhat confusing since the actual effect depends on the layout model.
With fixed positioned controls, the 2D position is independent of the sibling order, but when controls are overlapping, the control earliest in the order will be "on top", hiding part of siblings later in the order. In this context Bring to front / Send to back makes sense.
Inside flow- or table-layout panels, the creation order determines the visual order of the controls. There is no overlapping controls. So bring to front/send to back really means make first or last in the order of controls.
With docked layout, the bring to front / send to back may be even more confusing since it determines in which order the docking is calculated, so "bring to front" on a fill-docked control will place the control in the *middle* of the parent, taking all edge-docked controls into account.
|
Right click on the panel with Dock=Fill and click 'Bring to Front'.
This makes this control be created last, which takes into account the Dock settings on other controls in the same container.
|
Panel.Dock Fill ignoring other Panel.Dock setting
|
[
"",
"c#",
".net",
"winforms",
"visual-studio-2005",
""
] |
Let's say that a class has a `public int counter` field that is accessed by multiple threads. This `int` is only incremented or decremented.
To increment this field, which approach should be used, and why?
* `lock(this.locker) this.counter++;`,
* `Interlocked.Increment(ref this.counter);`,
* Change the access modifier of `counter` to `public volatile`.
Now that I've discovered `volatile`, I've been removing many `lock` statements and the use of `Interlocked`. But is there a reason not to do this?
|
### Worst (won't actually work)
> Change the access modifier of `counter` to `public volatile`
As other people have mentioned, this on its own isn't actually safe at all. The point of `volatile` is that multiple threads running on multiple CPUs can and will cache data and re-order instructions.
If it is **not** `volatile`, and CPU A increments a value, then CPU B may not actually see that incremented value until some time later, which may cause problems.
If it is `volatile`, this just ensures the two CPUs see the same data at the same time. It doesn't stop them at all from interleaving their reads and write operations which is the problem you are trying to avoid.
### Second Best:
> `lock(this.locker) this.counter++`;
This is safe to do (provided you remember to `lock` everywhere else that you access `this.counter`). It prevents any other threads from executing any other code which is guarded by `locker`.
Using locks also prevents the multi-CPU reordering problems described above, which is great.
The problem is, locking is slow, and if you re-use the `locker` in some other place which is not really related then you can end up blocking your other threads for no reason.
### Best
> `Interlocked.Increment(ref this.counter);`
This is safe, as it effectively does the read, increment, and write in 'one hit' which can't be interrupted. Because of this, it won't affect any other code, and you don't need to remember to lock elsewhere either. It's also very fast (as MSDN says, on modern CPUs, this is often literally a single CPU instruction).
~~I'm not entirely sure however if it gets around other CPUs reordering things, or if you also need to combine volatile with the increment.~~
Interlocked notes:
1. Interlocked methods are concurrently safe on any number of cores or CPUs.
2. Interlocked methods apply a full fence around the instructions they execute, so reordering does not happen.
3. Interlocked methods **do not need or even support access to a volatile field**, since volatile places only a half fence around operations on a given field, while interlocked uses a full fence.
### Footnote: What volatile is actually good for.
As `volatile` doesn't prevent these kinds of multithreading issues, what's it for? A good example is saying you have two threads, one which always writes to a variable (say `queueLength`), and one which always reads from that same variable.
If `queueLength` is not volatile, thread A may write five times, but thread B may see those writes as being delayed (or even potentially in the wrong order).
A solution would be to lock, but you could also use volatile in this situation. This would ensure that thread B will always see the most up-to-date thing that thread A has written. Note however that this logic *only* works if you have writers who never read, and readers who never write, *and* if the thing you're writing is an atomic value. As soon as you do a single read-modify-write, you need to go to Interlocked operations or use a Lock.
|
**EDIT:** As noted in comments, these days I'm happy to use `Interlocked` for the cases of a *single variable* where it's *obviously* okay. When it gets more complicated, I'll still revert to locking...
Using `volatile` won't help when you need to increment - because the read and the write are separate instructions. Another thread could change the value after you've read but before you write back.
Personally I almost always just lock - it's easier to get right in a way which is *obviously* right than either volatility or Interlocked.Increment. As far as I'm concerned, lock-free multi-threading is for real threading experts, of which I'm not one. If Joe Duffy and his team build nice libraries which will parallelise things without as much locking as something I'd build, that's fabulous, and I'll use it in a heartbeat - but when I'm doing the threading myself, I try to keep it simple.
|
Volatile vs. Interlocked vs. lock
|
[
"",
"c#",
"multithreading",
"locking",
"volatile",
"interlocked",
""
] |
I want to check the login status of a user through an Ajax request. Depending on whether the user is logged in, I want to display either the username/password input or the username. Currently the request is sent on body.onload and a progress indicator is shown until the response arrives. Is there a better way?
---
Let's assume that the requirements state that there should be no direct server side processing.
|
If you don't want to depend on a toolkit, you can create your own DOMReady function that looks kinda like this:
```
/* Usage: DOMReady(ajaxFunc); */
function DOMReady(f) {
if (!document.all) {
document.addEventListener("DOMContentLoaded", f, false);
} else {
    if (document.readyState == 'complete') {
      window.setTimeout(f, 0);
    }
    else {
      //Add event to onload just if all else fails (old IE)
      window.attachEvent("onload", f);
}
}
}
```
Or for a more complex solution: <http://snipplr.com/view/6029/domreadyjs/>
|
This sounds like an operation that should be done on the server first, before the page is rendered. If someone has JavaScript disabled, what would happen?
|
How do I submit an ajax request before the page is loaded
|
[
"",
"javascript",
"ajax",
""
] |
In terms of Java, when someone asks:
> what is polymorphism?
Would **overloading** or **overriding** be an acceptable answer?
I think there is a bit more to it than that.
*If you had an abstract base class that defined a method with no implementation, and you defined that method in the subclass, is that still overriding?*
I think **overloading** is not the right answer for sure.
|
The clearest way to express polymorphism is via an abstract base class (or interface)
```
public abstract class Human{
...
public abstract void goPee();
}
```
This class is abstract because the `goPee()` method is not definable for Humans. It is only definable for the subclasses Male and Female. Also, Human is an abstract concept — You cannot create a human that is neither Male nor Female. It’s got to be one or the other.
So we defer the implementation by using the abstract class.
```
public class Male extends Human{
...
@Override
public void goPee(){
System.out.println("Stand Up");
}
}
```
and
```
public class Female extends Human{
...
@Override
public void goPee(){
System.out.println("Sit Down");
}
}
```
Now we can tell an entire room full of Humans to go pee.
```
public static void main(String[] args){
ArrayList<Human> group = new ArrayList<Human>();
group.add(new Male());
group.add(new Female());
// ... add more...
// tell the class to take a pee break
for (Human person : group) person.goPee();
}
```
Running this would yield:
```
Stand Up
Sit Down
...
```
|
**Polymorphism** is the ability of a class instance to behave as if it were an instance of another class in its inheritance tree, most often one of its ancestor classes. For example, in Java all classes inherit from Object. Therefore, you can create a variable of type Object and assign to it an instance of any class.
An **override** is a type of function which occurs in a class which inherits from another class. An override function "replaces" a function inherited from the base class, but does so in such a way that it is called even when an instance of its class is pretending to be a different type through polymorphism. Referring to the previous example, you could define your own class and override the toString() function. Because this function is inherited from Object, it will still be available if you copy an instance of this class into an Object-type variable. Normally, if you call toString() on your class while it is pretending to be an Object, the version of toString which will actually fire is the one defined on Object itself. However, because the function is an override, the definition of toString() from your class is used even when the class instance's true type is hidden behind polymorphism.
**Overloading** is the action of defining multiple methods with the same name, but with different parameters. It is unrelated to either overriding or polymorphism.
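To make the contrast concrete, here is a minimal sketch (the class and method names are invented for illustration):

```java
// Overloading vs. overriding, side by side.
class Animal {
    public String speak() { return "..."; }
}

class Dog extends Animal {
    @Override
    public String speak() { return "Woof"; }   // overriding: replaces the inherited method
}

public class Demo {
    // Overloading: same name, different parameter lists, chosen at compile time.
    static String describe(int x)    { return "int: " + x; }
    static String describe(String s) { return "string: " + s; }

    public static void main(String[] args) {
        System.out.println(describe(42));      // "int: 42" (overload resolution)
        Animal a = new Dog();                  // polymorphism: a Dog behind an Animal reference
        System.out.println(a.speak());         // "Woof" (the override runs at run time)
    }
}
```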
|
Polymorphism vs Overriding vs Overloading
|
[
"",
"java",
"oop",
"polymorphism",
"overloading",
"overriding",
""
] |
I want to combine these:
```
keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
```
into a single dictionary:
```
{'name': 'Monty', 'age': 42, 'food': 'spam'}
```
How can I do this?
|
Like this:
```
keys = ['a', 'b', 'c']
values = [1, 2, 3]
dictionary = dict(zip(keys, values))
print(dictionary) # {'a': 1, 'b': 2, 'c': 3}
```
Voila :-) The pairwise [`dict`](https://docs.python.org/3/library/functions.html#func-dict) constructor and [`zip`](https://docs.python.org/3/library/functions.html#zip) function are awesomely useful.
|
> Imagine that you have:
>
> ```
> keys = ('name', 'age', 'food')
> values = ('Monty', 42, 'spam')
> ```
>
> **What is the simplest way to produce the following dictionary ?**
>
> ```
> dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
> ```
## Most performant, `dict` constructor with `zip`
```
new_dict = dict(zip(keys, values))
```
In Python 3, zip now returns a lazy iterator, and this is now the most performant approach.
`dict(zip(keys, values))` does require the one-time global lookup each for `dict` and `zip`, but it doesn't form any unnecessary intermediate data-structures or have to deal with local lookups in function application.
## Runner-up, dict comprehension:
A close runner-up to using the dict constructor is to use the native syntax of a dict comprehension (not a *list* comprehension, as others have mistakenly put it):
```
new_dict = {k: v for k, v in zip(keys, values)}
```
Choose this when you need to map or filter based on the keys or value.
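For instance, a small sketch (my own illustration, not from the original benchmarks) that filters out falsy values while pairing:

```python
keys = ['name', 'age', 'food']
values = ['Monty', 0, 'spam']

# drop pairs whose value is falsy (0, '', None, ...)
new_dict = {k: v for k, v in zip(keys, values) if v}
print(new_dict)  # {'name': 'Monty', 'food': 'spam'}
```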
In Python 2, `zip` returns a list. To avoid creating an unnecessary list, use `izip` instead (aliasing it to `zip` can reduce code changes when you move to Python 3).
```
from itertools import izip as zip
```
So that is still (2.7):
```
new_dict = {k: v for k, v in zip(keys, values)}
```
## Python 2, ideal for <= 2.6
`izip` from `itertools` becomes `zip` in Python 3. `izip` is better than zip for Python 2 (because it avoids the unnecessary list creation), and ideal for 2.6 or below:
```
from itertools import izip
new_dict = dict(izip(keys, values))
```
## Result for all cases:
In all cases:
```
>>> new_dict
{'age': 42, 'name': 'Monty', 'food': 'spam'}
```
## Explanation:
If we look at the help on `dict` we see that it takes a variety of forms of arguments:
```
>>> help(dict)
class dict(object)
| dict() -> new empty dictionary
| dict(mapping) -> new dictionary initialized from a mapping object's
| (key, value) pairs
| dict(iterable) -> new dictionary initialized as if via:
| d = {}
| for k, v in iterable:
| d[k] = v
| dict(**kwargs) -> new dictionary initialized with the name=value pairs
| in the keyword argument list. For example: dict(one=1, two=2)
```
The optimal approach is to use an iterable while avoiding creating unnecessary data structures. In Python 2, zip creates an unnecessary list:
```
>>> zip(keys, values)
[('name', 'Monty'), ('age', 42), ('food', 'spam')]
```
In Python 3, the equivalent would be:
```
>>> list(zip(keys, values))
[('name', 'Monty'), ('age', 42), ('food', 'spam')]
```
and Python 3's `zip` merely creates an iterable object:
```
>>> zip(keys, values)
<zip object at 0x7f0e2ad029c8>
```
Since we want to avoid creating unnecessary data structures, we usually want to avoid Python 2's `zip` (since it creates an unnecessary list).
## Less performant alternatives:
This is a generator expression being passed to the dict constructor:
```
generator_expression = ((k, v) for k, v in zip(keys, values))
dict(generator_expression)
```
or equivalently:
```
dict((k, v) for k, v in zip(keys, values))
```
And this is a list comprehension being passed to the dict constructor:
```
dict([(k, v) for k, v in zip(keys, values)])
```
In the first two cases, an extra layer of non-operative (thus unnecessary) computation is placed over the zip iterable, and in the case of the list comprehension, an extra list is unnecessarily created. I would expect all of them to be less performant, and certainly not more so.
## Performance review:
In 64 bit Python 3.8.2 provided by Nix, on Ubuntu 16.04, ordered from fastest to slowest:
```
>>> min(timeit.repeat(lambda: dict(zip(keys, values))))
0.6695233230129816
>>> min(timeit.repeat(lambda: {k: v for k, v in zip(keys, values)}))
0.6941362579818815
>>> min(timeit.repeat(lambda: {keys[i]: values[i] for i in range(len(keys))}))
0.8782548159942962
>>>
>>> min(timeit.repeat(lambda: dict([(k, v) for k, v in zip(keys, values)])))
1.077607496001292
>>> min(timeit.repeat(lambda: dict((k, v) for k, v in zip(keys, values))))
1.1840861019445583
```
`dict(zip(keys, values))` wins even with small sets of keys and values, but for larger sets, the differences in performance will become greater.
A commenter said:
> `min` seems like a bad way to compare performance. Surely `mean` and/or `max` would be much more useful indicators for real usage.
We use `min` because these algorithms are deterministic. We want to know the performance of the algorithms under the best conditions possible.
If the operating system hangs for any reason, it has nothing to do with what we're trying to compare, so we need to exclude those kinds of results from our analysis.
If we used `mean`, those kinds of events would skew our results greatly, and if we used `max` we will only get the most extreme result - the one most likely affected by such an event.
A commenter also says:
> In python 3.6.8, using mean values, the dict comprehension is indeed still faster, by about 30% for these small lists. For larger lists (10k random numbers), the `dict` call is about 10% faster.
I presume we mean `dict(zip(...` with 10k random numbers. That does sound like a fairly unusual use case. It does make sense that the most direct calls would dominate in large datasets, and I wouldn't be surprised if OS hangs are dominating given how long it would take to run that test, further skewing your numbers. And if you use `mean` or `max` I would consider your results meaningless.
Let's use a more realistic size on our top examples:
```
import numpy
import timeit
l1 = list(numpy.random.random(100))
l2 = list(numpy.random.random(100))
```
And we see here that `dict(zip(...` does indeed run faster for larger datasets by about 20%.
```
>>> min(timeit.repeat(lambda: {k: v for k, v in zip(l1, l2)}))
9.698965263989521
>>> min(timeit.repeat(lambda: dict(zip(l1, l2))))
7.9965161079890095
```
|
Make a dictionary (dict) from separate lists of keys and values
|
[
"",
"python",
"list",
"dictionary",
""
] |
Can anyone give me a complete list of string manipulation function in Microsoft SQL Server (2000 or 2005)?
(I don't need a lecture about doing all my string processing in the presentation layer. And, I don't need a list of MySQL string functions.)
Thanks!
|
[String Functions (Transact-SQL)](http://msdn.microsoft.com/en-us/library/ms181984.aspx)
|
MSDN and Google are your friends
[here is the 2008 version](http://msdn.microsoft.com/en-us/library/ms181984.aspx), drill down to your specific flavor.
|
String Manipulation in MS SQL Server
|
[
"",
"sql",
"sql-server",
"string",
""
] |
I often need to run reduce (also called foldl / foldr, depending on your context) in Java to aggregate elements of an Iterable.
Reduce takes a collection/iterable/etc., a function of two parameters, and an optional start value (depending on the implementation details). The function is applied successively to each element of the collection and the output of the previous invocation until all elements have been processed, and the final value is returned.
Is there a type-safe implementation of reduce in any common java api? [Google Collections](http://google-collections.googlecode.com) *seems* like it should have one, but I haven't been able to find it. (possibly because I don't know what other names it would use.)
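For comparison, the behaviour I'm describing is what Python's `functools.reduce` provides (a Python sketch just to pin down the semantics):

```python
from functools import reduce

# sum as a left fold with an explicit start value of 0
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)
print(total)  # 10
```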
|
you could probably roll your own generic pretty easily, based on your description:
```
public interface Reducer<A, T>
{
public A foldIn(A accum, T next);
}
```
Then using the strategy pattern:
```
public class Reductor<A, T>
{
private Reducer<A, T> worker;
public Reductor(Reducer<A, T> worker)
{
this.worker = worker;
}
public A fold(A rval, Iterator<T> itr)
{
while(itr.hasNext())
{
rval = worker.foldIn(rval, itr.next());
}
return rval;
}
}
```
I'm sure there are a ton of syntax errors, but that's the main point (there are a few choices you could make about how to get the empty accumulator value). Then, to use it on a particular iterator, just define your Reducer on the fly:
```
Reductor<A, T> r = new Reductor<A, T>(new Reducer<A, T>()
{
public A foldIn(A prev, T next)
{
A rval;
//do stuff...
return rval;
}
});
A fold = r.fold(new A(), collection.getIterator());
```
depending on which direction your iterator traverses, this can act as a fold left or a fold right.
hope this helps.
|
Based on Luke's suggestion, here is a legit Java implementation:
```
public interface Reducer<A,T>
{
A foldIn(A accum, T next);
}
public static <T> T reduce(final Reducer<T,T> reducer,
final Iterable<? extends T> i)
{
T result = null;
final Iterator<? extends T> iter = i.iterator();
if (iter.hasNext())
{
result = iter.next();
while (iter.hasNext())
{
result = reducer.foldIn(result, iter.next());
}
}
return result;
}
public static <A,T> A reduce(final Reducer<A,T> reducer,
final Iterable<? extends T> i,
final A initializer)
{
A result = initializer;
final Iterator<? extends T> iter = i.iterator();
while (iter.hasNext())
{
result = reducer.foldIn(result, iter.next());
}
return result;
}
```
|
Is there a type-safe Java implementation of 'reduce'?
|
[
"",
"java",
"collections",
"reduce",
""
] |
I am looking out for some good practices on naming assemblies and versioning them. How often do you increment the major or minor versions?
In some cases, I have seen releases going straight from version 1.0 to 3.0. In other cases, it seems to be stuck at version 1.0.2.xxxx.
This will be for a shared assembly used in multiple projects across the company. Looking forward to some good inputs.
|
Some good information from [this article](http://blogs.msdn.com/suzcook/archive/2003/05/29/57148.aspx) on Suzanne Cook's blog on MSDN (posted 2003-05-30):
> # When to Change File/Assembly Versions
>
> First of all, file versions and assembly versions need not coincide
> with each other. I recommend that file versions change with each
> build. But, don’t change assembly versions with each build just so
> that you can tell the difference between two versions of the same
> file; use the file version for that. Deciding when to change assembly
> versions takes some discussion of the types of builds to consider:
> shipping and non-shipping.
>
> **Non-Shipping Builds**
> In general, I recommend keeping non-shipping assembly versions the same between shipping builds. This
> avoids strongly-named assembly loading problems due to version
> mismatches. Some people prefer using publisher policy to redirect new
> assembly versions for each build. I recommend against that for
> non-shipping builds, however: it doesn’t avoid all of the loading
> problems. For example, if a partner x-copies your app, they may not
> know to install publisher policy. Then, your app will be broken for
> them, even though it works just fine on your machine.
>
> But, if there are cases where different applications on the same
> machine need to bind to different versions of your assembly, I
> recommend giving those builds different assembly versions so that the
> correct one for each app can be used without having to use
> LoadFrom/etc.
>
> **Shipping Builds**
> As for whether it’s a good idea to change that version for shipping builds, it depends on how you want the binding to
> work for end-users. Do you want these builds to be side-by-side or
> in-place? Are there many changes between the two builds? Are they
> going to break some customers? Do you care that it breaks them (or do
> you want to force users to use your important updates)? If yes, you
> should consider incrementing the assembly version. But, then again,
> consider that doing that too many times can litter the user’s disk
> with outdated assemblies.
>
> **When You Change Your Assembly Versions**
> To change hardcoded versions to the new one, I recommend setting a variable to the version
> in a header file and replacing the hardcoding in sources with the
> variable. Then, run a pre-processor during the build to put in the
> correct version. I recommend changing versions right after shipping,
> not right before, so that there's more time to catch bugs due to the
> change.
|
One way to define your versioning is to give semantic meaning to each portion:
* Go from N.x to N+1.0 when compatibility breaks with the new release
* Go from N.M to N.M+1 when new features are added which do not break compatibility
* Go from N.M.X to N.M.X+1 when bug fixes are added
The above is just an example -- you'd want to define the rules that make sense for you. But it is very nice for users to quickly tell if incompatibilities are expected just by looking at the version.
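To illustrate why dotted numeric versions compare cleanly under rules like these, here is a minimal Python sketch (the helper name is hypothetical) that compares versions part by part:

```python
def parse_version(version):
    """Split 'N.M.X' into a tuple of ints so comparisons are numeric, not lexical."""
    return tuple(int(part) for part in version.split('.'))

# tuple comparison orders versions correctly: 1.10.0 is newer than 1.9.2,
# even though '1.10.0' < '1.9.2' as plain strings
print(parse_version('1.10.0') > parse_version('1.9.2'))  # True
```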
Oh, and don't forget to publish the rules you come up with so people know what to expect.
|
Best practices for assembly naming and versioning?
|
[
"",
"c#",
".net",
"assemblies",
"versioning",
""
] |