Q: Is it possible to use Soundex (or other SQL functions) in LINQ to SQL? I'm refactoring some code currently implemented in stored procedures to use LINQ to SQL (for use in training). Is it possible to use SQL functions in a LINQ to SQL query?
A: Here is some helpful information from an MSDN forum post: Soundex and LINQ.
A: I haven't tried it myself yet, but the method below, originally posted here, seems to be the best available solution.
[Function(Name = "SoundEx", IsComposable = true)]
public string SoundsLike(string input)
{
    // Declared on your DataContext subclass. The body never runs for a
    // composable function used inside a query; the call is translated to SQL.
    throw new NotImplementedException();
}
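Once mapped this way, the function is meant to be used inside a query so that LINQ to SQL can translate it to T-SQL. A hedged usage sketch (the Persons table and LastName column are hypothetical, not from the question):

```csharp
// Hypothetical usage; assumes SoundsLike is declared on the DataContext
// subclass "db" and a Persons table exists in the model.
var matches =
    from p in db.Persons
    where db.SoundsLike(p.LastName) == db.SoundsLike("Smith")
    select p;
// Both calls are translated into SOUNDEX(...) expressions in the generated SQL.
```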
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How does NUnit (and MSTest) handle tests that change static/shared variables? I have some code that uses the shared gateway pattern to implement an inversion of control container. I have several hundred NUnit unit tests that exercise the code that uses this IoC. They all work (on my machine!) but I am concerned that these tests might fail under load. I seem to remember that NUnit (and MSTest) attempts to run tests in parallel on multiple threads (which would definitely trigger race conditions on the static/shared gateway), but I cannot find any documentation that says what actually happens. My experience is that NUnit seems to be running the tests sequentially. My question is: does NUnit (or MSTest) ever run unit tests in parallel? If so, under what conditions? And can I turn this off via some sort of configuration option?
A: Update:
Visual Studio 2010 introduced the ability to run tests in parallel.
Here is a step by step article about how to enable this.
MsTest:
So, according to David Williamson from the Microsoft Visual Studio Team System, in this post on the MSDN forums:
"Tests absolutely do NOT run in parallel when run in VS or via mstest.exe. If they are run in a Load Test through VS then that is a different story. Basic execution, however, is always serial."
Also, tests run using MsTest are each run using a different thread in order to ensure that you have a clean slate for each test. There is no way to disable this behavior.
NUnit:
NUnit runs all tests on the same thread.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Setting WPF dependency property without triggering events I need to set a dependency property on a control (Slider.Value) in my code without it triggering a ValueChanged event (since I set the value, not the user). What is the best way to handle this situation in WPF?
For clarification, what I am trying to do is hook up WPF sliders to a WinForms user control. Currently in my app I have a ValueChanged event handler that passes the slider value to the WinForms control via a method call. The WinForms control (which is actually a wrapper around a native OpenGL window) needs to be able to dynamically change the slider based on its internal calculations. To do this I use an abstraction of a slider (ISlider), I instantiate a WPF flavor of that slider in my app, and pass a handle to its base to the WinForms control via a .NET property on the WinForms user control. All of this is currently working; it's just that when the internal logic decides the slider needs to change, it calls ISlider::SetPos(), which then changes the WPF slider, which then triggers a ValueChanged event on the slider, and the handler for that event extracts the slider's position and passes it back to the WinForms control which originated the change in the first place. The suggestions by ligaz and Alan Le both seem like they should work, but I'm just not sure I'm going about this in the best way.
A: Here's a simple workaround/hack. Add a boolean to keep track of whether you changed the setting, let's say "IsChangedByMe". When you change the dependency property in code, set the bool to true. In the ValueChanged event, if IsChangedByMe is true, don't do anything.
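A hedged sketch of that flag pattern (the field, method, and handler names are illustrative, not from the question's code):

```csharp
private bool isChangedByMe;

private void SetSliderFromCode(double value)
{
    isChangedByMe = true;
    try { slider.Value = value; }   // still raises ValueChanged
    finally { isChangedByMe = false; }
}

private void Slider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    if (isChangedByMe) return;      // programmatic change: ignore
    // ... handle a genuine user-initiated change here ...
}
```

The try/finally ensures the flag is reset even if a handler throws.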
A: Are you sure you really want to do that? If there's a piece of UI databound to that property and you allow for changing the value without triggering the ValueChanged event you'd quickly end up with your UI no longer synchronized with the data.
In the case of your slider, imagine the user places it at 75%. Now your code changes it to 10% in the background but suppresses the change notification. It still looks on the screen like it's at 75% (since it wasn't told it changed), but it's being used as 10% by the code that's running. Sounds like a recipe for confusion.
A: One possible solution is to derive from Slider and override OnValueChanged(...). When you do not want to raise the event, do nothing; otherwise, call the base implementation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What is object marshalling? I have heard this concept used frequently, but I don't have a really good grasp of what it is.
A: Marshalling is the process of transforming the memory representation of an object into a data format that can be stored or transmitted. It's also called serialization (although the two terms can differ in certain contexts). The object can be stored as binary, XML, or any format suitable for storage and/or transmission, in a way that allows you to unmarshal it and get the original object back.
For an example of usage: if you have an online game with client and server components and you want to send a player object containing player stats and world coordinates from the client to the server (or the other way around), you can simply marshal it at the client, send it over the network, and unmarshal it at the other end; to the server it appears as if the object was created on the server itself. Here's a Ruby example:
Player = Struct.new(:name, :x, :y)
srcplayer = Player.new("avatar", 10, 20)
# marshal (store it as a string)
str = Marshal.dump(srcplayer)
# unmarshal (get it back)
destplayer = Marshal.load(str)
A: Converting an object in memory into a format that can be written to disk, or sent over the wire, etc.
Wikipedia's description.
A: I narrowed a Google search to "data marshalling" and the first hit was a site called Webopedia, which is pretty good. The gist is that you transform data back and forth to a form suitable for things like transmission over a network. The problem it solves is that you can't transmit data over a network in a form that is directly usable by the receiving program: you have to address a number of issues, including the endianness of the data, how complex data types like strings are stored, and so on.
Marshalling is not just for solving network transmission problems but also others, such as moving data from one architecture to another, between different languages (especially those that use virtual machines), and other "translation" problems.
A: Marshalling is the process of transferring data across application boundaries or between different data formats. Marshalling is very common, for example writing data to disk or to a database is technically marshalling, however the term tends to be used to describe data conversion for "foreign" APIs or for interprocess communication.
For example, in .NET, communicating between managed and unmanaged code (such as accessing certain win32 APIs) will likely require marshalling in order to convert back and forth between managed C# objects and C/C++ style objects (structs, handles, output buffers, etc.) The help for the static Marshal class might be helpful.
A: I beg to differ; Wikipedia is pretty clear on this:
"In computer science, marshalling (similar to serialization) is the process of transforming the memory representation of an object to a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program or from one program to another."
http://en.wikipedia.org/wiki/Marshalling_(computer_science)
A: It means turning any data into another data type to transfer to another system.
E.g., marshalling a struct into an XML document to send to the webservice, or marshalling a pointer to send to a different thread apartment.
A: Basically it's a generic term for transforming an object (or similar in-memory structure) into another representation that can, for example, be sent over the wire or stored to disk (typically a string or a binary stream). The opposite, unmarshalling, describes the reverse direction: reading the marshalled representation and re-creating the object or in-memory structure that existed earlier.
Another everyday example is JSON.
A: People have defined marshalling quite clearly already, so I'll skip the definition and jump to an example.
Remote procedure calls use marshalling. When invoking remote functions you have to marshal the arguments into some kind of standard format so they can be transported across the network.
A: In a very generic sense in programming it simply means taking data in one format and transforming it into a format that is acceptable by some other sub-system.
A: Marshalling is the conversion of call parameters that needs to occur when calling across an ABI boundary. The boundary may be between a COM client and a COM server, where the COM library marshals the types of the client's ABI to the ABI of the COM binary. (In COM, marshalling can also refer to the conversion of parameters required when crossing an apartment boundary within the same process: parameters are converted into a message sent to the owning thread's message queue, then handled and unmarshalled by the COM window procedure; when crossing a process boundary, a COM proxy performs the additional step of marshalling to an RPC/LPC, i.e. an LPC message to an LPC port.) The boundary may also be between high-level code executing in a virtual environment and the native code that the environment is implemented in, where a conversion takes place between the ABI of the high-level language (itself implemented on top of an ABI of the native language) and the typical native ABI for those types.
One example of the second case is Mono .NET. You can call managed code (high-level code managed and run by the virtual machine library, represented by internal objects and structures) from unmanaged (native) C++ code that is linked against that library, and you can also perform native calls from C# into unmanaged C++ code through internal bindings registered with the virtual machine library's API when setting up the VM. For instance, System.String in C# is internally represented by a MonoString. MonoString is a C++ object that uses the C++ ABI, but not in the way native code expects string parameters to be represented: the VM has logically implemented its own ABI on top of the C++ ABI, boxing the string in a C++ object of type MonoString* instead of passing a const wchar_t*. Passing a System.String to a native call in C# using P/Invoke works because automatic marshalling converts it to a const wchar_t*. When you use internal calls, however, it is passed as a MonoString*, which the C++ function has to marshal itself, and it must likewise marshal whatever it returns back to a type of the VM's logical ABI. Only blittable types need no marshalling with internal calls; for instance, int (System.Int32) is passed as a gint32, which is just an int.
Another example is Spidermonkey JS engine, which marshals between a C++ native type of HTMLElement and an internal runtime representation, JSObject, which represents the HTMLElement type in javascript.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "68"
} |
Q: Modifying default tab size in RichTextBox Is there any way to change the default tab size in a .NET RichTextBox?
It currently seems to be set to the equivalent of 8 spaces which is kinda large for my taste.
Edit: To clarify, I want to set the global default so that "\t" displays as 4 spaces for the control. From what I can understand, the SelectionTabs property requires you to select all the text first and then set the tab widths via the array. I will do this if I have to, but I would rather just change the global default once, if possible, so that I don't have to do that every time.
A: WinForms doesn't have a property to set the default tab size of a RichTextBox with a single number, but if you're prepared to dig into the Rtf of the rich text box and modify that, there's a setting you can use called "\deftab". The number afterwards indicates the number of twips (1 point = 1/72 inch = 20 twips). The resulting Rtf with the standard tab size of 720 twips could look something like:
{\rtf1\ansi\ansicpg1252\deff0\deflang2057\deftab720{\fonttbl{\f0\fnil\fcharset0 Microsoft Sans Serif;}}
\viewkind4\uc1\pard\f0\fs41
1\tab 2\tab 3\tab 4\tab 5\par
}
If you need to convert twips into pixels, use this code inspired from Convert Pixels to Points:
int tabSize = 720; // twips
Graphics g = this.CreateGraphics();
int pixels = (int)Math.Round(tabSize / 1440.0 * g.DpiX); // 1440 twips per inch
g.Dispose();
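Putting the two together, a hedged sketch of injecting \deftab programmatically (this mirrors the placement shown in the sample header above, right before the font table; whether the control accepts it may depend on the RTF it emits):

```csharp
// Hedged sketch: inject \deftab into the control's RTF header.
// 360 twips = 1/4 inch; adjust to taste. Assumes the Rtf contains a
// {\fonttbl group and no existing \deftab setting.
string rtf = richTextBox1.Rtf;
int fontTable = rtf.IndexOf(@"{\fonttbl");
if (fontTable >= 0 && !rtf.Contains(@"\deftab"))
    richTextBox1.Rtf = rtf.Insert(fontTable, @"\deftab360");
```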
A: It's strange that no one has proposed this method in all this time.
We can inherit from RichTextBox and override the command-key handler (ProcessCmdKey). It will look like this:
public class TabRichTextBox : RichTextBox
{
    [Browsable(true), Category("Settings")]
    public int TabSize { get; set; } = 4;

    protected override bool ProcessCmdKey(ref Message Msg, Keys KeyData)
    {
        const int WM_KEYDOWN = 0x100;    // https://learn.microsoft.com/en-us/windows/desktop/inputdev/wm-keydown
        const int WM_SYSKEYDOWN = 0x104; // https://learn.microsoft.com/en-us/windows/desktop/inputdev/wm-syskeydown

        // Tab has been pressed
        if ((Msg.Msg == WM_KEYDOWN || Msg.Msg == WM_SYSKEYDOWN) && KeyData.HasFlag(Keys.Tab))
        {
            // Build a string of TabSize spaces and insert it at the current position
            SelectedText += new string(' ', TabSize);
            // Tab processed
            return true;
        }
        return base.ProcessCmdKey(ref Msg, KeyData);
    }
}
Now, when you press Tab, the specified number of spaces will be inserted into the control instead of \t.
A: You can set it by setting the SelectionTabs property.
private void Form1_Load(object sender, EventArgs e)
{
richTextBox1.SelectionTabs = new int[] { 100, 200, 300, 400 };
}
UPDATE:
The sequence matters....
If you set the tabs prior to the control's text being initialized, then you don't have to select the text prior to setting the tabs.
For example, in the above code, this will keep the text with the original 8 spaces tab stops:
richTextBox1.Text = "\t1\t2\t3\t4";
richTextBox1.SelectionTabs = new int[] { 100, 200, 300, 400 };
But this will use the new ones:
richTextBox1.SelectionTabs = new int[] { 100, 200, 300, 400 };
richTextBox1.Text = "\t1\t2\t3\t4";
A: If you have an RTF box that is only used to display (read-only) fixed-pitch text, the easiest thing would be not to mess around with tab stops at all: simply replace the tabs with spaces.
If you want the user to be able to enter something and use the Tab key to advance, you could also capture the Tab key by overriding OnKeyDown() and insert spaces instead.
A: I'm using this class with monospaced fonts; it replaces all TABs with spaces.
All you have to do is to set the following designer properties according to your requirements:
*
*AcceptsTab = True
*ConvertTabToSpaces = True
*TabSize = 4
PS: As @ToolmakerSteve pointed out, obviously the tab size logic here is very simple: it just replaces tabs with 4 spaces, which only works well for tabs at the beginning of each line. Just extend the logic if you need improved tab treatment.
Code
using System.ComponentModel;
using System.Windows.Forms;

namespace MyNamespace
{
    public partial class MyRichTextBox : RichTextBox
    {
        public MyRichTextBox() : base() =>
            KeyDown += new KeyEventHandler(RichTextBox_KeyDown);

        [Browsable(true), Category("Settings"), Description("Convert all tabs into spaces."), EditorBrowsable(EditorBrowsableState.Always), DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
        public bool ConvertTabToSpaces { get; set; } = false;

        [Browsable(true), Category("Settings"), Description("The number of spaces used for replacing a tab character."), EditorBrowsable(EditorBrowsableState.Always), DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
        public int TabSize { get; set; } = 4;

        [Browsable(true), Category("Settings"), Description("The text associated with the control."), Bindable(true), EditorBrowsable(EditorBrowsableState.Always), DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
        public new string Text
        {
            get => base.Text;
            set => base.Text = ConvertTabToSpaces ? value.Replace("\t", new string(' ', TabSize)) : value;
        }

        protected override bool ProcessCmdKey(ref Message Msg, Keys KeyData)
        {
            const int WM_KEYDOWN = 0x100;    // https://learn.microsoft.com/en-us/windows/desktop/inputdev/wm-keydown
            const int WM_SYSKEYDOWN = 0x104; // https://learn.microsoft.com/en-us/windows/desktop/inputdev/wm-syskeydown

            if (ConvertTabToSpaces && KeyData == Keys.Tab && (Msg.Msg == WM_KEYDOWN || Msg.Msg == WM_SYSKEYDOWN))
            {
                SelectedText += new string(' ', TabSize);
                return true;
            }
            return base.ProcessCmdKey(ref Msg, KeyData);
        }

        public new void AppendText(string text)
        {
            if (ConvertTabToSpaces)
                text = text.Replace("\t", new string(' ', TabSize));
            base.AppendText(text);
        }

        // Intercept paste so that pasted text goes through the tab-converting
        // Text setter above.
        private void RichTextBox_KeyDown(object sender, KeyEventArgs e)
        {
            if ((e.Shift && e.KeyCode == Keys.Insert) || (e.Control && e.KeyCode == Keys.V))
            {
                SuspendLayout();
                int start = SelectionStart;
                string end = Text.Substring(start);
                Text = Text.Substring(0, start);
                Text += (string)Clipboard.GetData("Text") + end;
                SelectionStart = TextLength - end.Length;
                ResumeLayout();
                e.Handled = true;
            }
        }
    } // class
} // namespace
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Can You Use A DynamicResource in a Storyboard Contained Within a Style Or ControlTemplate? I am trying to use a DynamicResource in a Storyboard contained within a ControlTemplate.
But, when I try to do this, I get a 'Cannot freeze this Storyboard timeline tree for use across threads' error.
What is going on here?
A: No, you can't use a DynamicResource in a Storyboard that is contained within a Style or ControlTemplate. In fact, you can't use a data binding expression either.
The story here is that everything within a Style or ControlTemplate must be safe for use across threads and the timing system actually tries to freeze the Style or ControlTemplate to make them thread-safe. However, if a DynamicResource or data binding expression is present, it is unable to freeze them.
For more info see: MSDN Link. Check out the 'Animate in a Style' and the 'Animate in a ControlTemplate' sections (this documentation page is rather long).
And for a workaround (at least for my scenario) see: WPF Forum Post.
Hope this helps someone. I've lost more than enough hair on it.
Cory
A: In some scenarios, there is a workaround:
*
*introduce an attached property,
*specify a style trigger for that introduced property with desired setter(s)
A: While you can have a DynamicResource in a ControlTemplate, you just can't have one in a Storyboard.
I worked around this with an Opacity (or Visibility) hack.
You can add two elements to your ControlTemplate. Each of them uses one of the DynamicResources, but only one of them is visible. You can set the Visibility or Opacity of each element via the Storyboard.
A: A simple solution is to build the Storyboard in code:
public static void ColorAnimation(FrameworkElement Obj, string From, string To, int Milliseconds)
{
    Color from = (Color)ColorConverter.ConvertFromString(From);
    Color to = (Color)ColorConverter.ConvertFromString(To);

    ColorAnimation animation = new ColorAnimation();
    animation.From = from;
    animation.To = to;
    animation.Duration = new Duration(TimeSpan.FromMilliseconds(Milliseconds));
    Storyboard.SetTargetProperty(animation, new PropertyPath("(Grid.Background).(SolidColorBrush.Color)", null));

    Storyboard storyboard = new Storyboard();
    storyboard.Children.Add(animation);
    storyboard.Begin(Obj);
}
Use it in the MouseLeave event (or also in MouseEnter):
if( your theme determinant 1 ) ColorAnimation( MinimizeButton, "#FF333333", "#00202020", 150 );
if( your theme determinant 2 ) ColorAnimation( MinimizeButton, "#FF32506E", "#0032506E", 200 );
etc...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Hitting Webservice on Different Subnet I know this is somewhat of a server question, but I wanted to ask anyways in case someone has done this before.
I have a web service that is on our internal 172.x.x.x subnet and a web server that is on our internal 10.x.x.x subnet. The webserver needs to hit the 172 web service, but is unable to route there. The real solution would be to convince our network admins to put the server on the 172 network behind the DMZ, but this solution seems far off.
My quick and dirty solution is to create a proxy server on a box that connects to both networks, so I can then program my web service calls to hit the proxy server. However, I am a developer and have little knowledge on how to set this up.
I have friends that have had good luck with Squid Proxy Server in Nix, but the only box that is available for me is a Windows Server 2003 box. Ideally, I would like some sort of proxy that I could set up on top of IIS. Do you guys know of anything? I've seen some reviews for ISA Server 2006, but I'd hate to charge up the corporate budget since we only need it for this one web service.
A: As you mentioned, the best option is to cram the web server into the DMZ. That being impossible, see if the wiremonkeys can open up the appropriate port in the firewall just between the server and the web service (and just for http/https traffic). If both are impossible, I guess a proxy is possible (if the proxy is allowed to relay between the two networks).
The thing I keep asking myself, however, is under what circumstances could you have a web service for which you have a business need, yet you're not allowed to expose it on the 'Z? Are your wiremonkeys so resistant to change that you can't get your job done? If so, jump ship, man! Life's too short.
A: It is really quick and dirty, but you could use the tcpmon tool on a windows machine that has access to both networks.
A: I have to agree with Danimal that the right way to handle this would be to have the appropriate holes poked in the firewall. Especially if, as you have said, the interface is important to a customer-facing application.
It seems to me that "customers affected > 1000" is a great business case to convince the network admins, or perhaps their boss(es) to expend the effort on safely allowing your traffic.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I create a (Type, ID) (aka 'polymorphic')- foreign key column in MS Access? In Ruby-on-Rails, this is called a "polymorphic association."
I have several Commentable things in my application, the tables for each are below:
Post
id | title | text | author (FK:Person.id) | ...
Person
id | name | ...
Photo
id | title | owner (FK:Person.id) | path | ...
I'd like to add a Comments table as follows:
Comments
id | commentable_type | commentable_id | text | author (FK:Person.id)
I understand that I lose the database's referential integrity this way, but the only other option is to have multiple Comments tables: PostComments, PersonComments, PhotoComments, ...
And now for the question:
How can I build a form that will grok how to do the lookup, first by getting the table name from Comments.commentable_type and then the id from Comments.commentable_id?
A: This technique is known colloquially in the SQL world as 'subclassing'. For a worked example (SQL Server syntax, but easily adapted for MS Access), see David Porta's blog.
In your scenario, the data items common to all comments would be in your Comments table; anything specific to each type would be in specialized tables such as PhotoComments, etc. Note the FK should be the two-column compound of the ID plus the type, something which is often overlooked but is essential to referential integrity here e.g. you don’t want something typed as a photo comment appearing in the PersonComments table.
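A hedged sketch of that compound-key arrangement in generic SQL (not Access DDL; table and column names follow the question, and you would adapt the syntax for your database):

```sql
-- Illustrative, generic SQL. Comments is the supertype, keyed by (id, type).
CREATE TABLE Comments (
    id               INT         NOT NULL,
    commentable_type VARCHAR(16) NOT NULL,   -- 'Post' | 'Person' | 'Photo'
    text             TEXT,
    author           INT         NOT NULL REFERENCES Person(id),
    PRIMARY KEY (id, commentable_type)
);

-- One specialized table per commentable type; the compound FK plus the CHECK
-- guarantees a row typed 'Photo' can never end up in another type's table.
CREATE TABLE PhotoComments (
    comment_id       INT         NOT NULL,
    commentable_type VARCHAR(16) NOT NULL CHECK (commentable_type = 'Photo'),
    photo_id         INT         NOT NULL REFERENCES Photo(id),
    PRIMARY KEY (comment_id, commentable_type),
    FOREIGN KEY (comment_id, commentable_type)
        REFERENCES Comments (id, commentable_type)
);
```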
A: I believe many people make meta-tables for that sort of thing. Pretty much exactly as you described it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Evaluating OPF3 (ORM framework for .NET) Is anyone using or has anyone evaluated OPF3 as an ORM (.NET)? How does it stack up against EntitySpaces or Subsonic?
One thing about OPF3 I like in my evaluation so far is that it is very easy to customize. Since it binds database fields to object members using attributes, you do not need to use any code generation tool. This also means you can basically create your own classes, and then add OPF3 data binding on top of that.
<Persistent("users")> _
Public Class User
    <Field("userid", AutoNumber:=True, Identifier:=True, allowDbnull:=False)> _
    Public Property ID() As Long
    <Field("name", allowDbnull:=False)> _
    Public Property Name() As String
End Class
They do have a generation tool, and one thing I don't like is that the demo will not output classes, so I can't actually see what it is really going to do. On the plus side again, it appears that when you buy the tool, you get the source for it as well.
A: We are using OPF3 at my company and so far it works great, except that it has more functionality than we need. But watch out how you construct your classes: creating a new item to add as a child of the ObjectSetHolder can be tricky. New items don't have any information about the ObjectContext unless you attach them to it using _context.Attach().
Anyway, I personally like OPF3 and what it can do, but we don't use the wizard since it doesn't really work against our database, Pervasive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: ASP.NET 2.0 app runs on Win 2003 in IIS 5 isolation mode but not in (default) IIS 6 mode The app uses DLLImport to call a legacy unmanaged dll. Let's call this dll Unmanaged.dll for the sake of this question. Unmanaged.dll has dependencies on 5 other legacy dll's. All of the legacy dll's are placed in the WebApp/bin/ directory of my ASP.NET application.
When IIS is running in 5.0 isolation mode, the app works fine - calls to the legacy dll are processed without error.
When IIS is running in the default 6.0 mode, the app is able to initiate the Unmanaged.dll (InitMe()), but dies during a later call to it (ProcessString()).
I'm pulling my hair out here. I've moved the unmanaged dll's to various locations, tried all kinds of security settings and searched long and hard for a solution. Help!
Sample code:
[DllImport("Unmanaged.dll", EntryPoint="initME", CharSet=System.Runtime.InteropServices.CharSet.Ansi, CallingConvention=CallingConvention.Cdecl)]
internal static extern int InitME();
//Calls to InitMe work fine - Unmanaged.dll initiates and writes some entries in a dedicated log file
[DllImport("Unmanaged.dll", EntryPoint="processString", CharSet=System.Runtime.InteropServices.CharSet.Ansi, CallingConvention=CallingConvention.Cdecl)]
internal static extern int ProcessString(string inStream, int inLen, StringBuilder outStream, ref int outLen, int maxLen);
//Calls to ProcessString cause the app to crash, without leaving much of a trace that I can find so far
Update:
Back with a stack, taken from a mini dump. Seems to be a stack overflow, but I'd like to know more if someone can help me out. Results of "kb" in WinDbg with sos.dll loaded:
1beb51fc 7c947cfb 7c82202c 00000002 1beb524c ntdll!KiFastSystemCallRet
1beb5200 7c82202c 00000002 1beb524c 00000001 ntdll!NtWaitForMultipleObjects+0xc
WARNING: Stack unwind information not available. Following frames may be wrong.
1beb52a8 7c822fbe 00000002 1beb52ec 00000000 kernel32!WaitForMultipleObjectsEx+0xd2
1beb52c4 7a2e1468 00000002 1beb52ec 00000000 kernel32!WaitForMultipleObjects+0x18
1beb5308 7a2d00c4 7a0c3077 1bc4ffd8 1bc4ffd8 mscorwks!CreateHistoryReader+0x19e9d
1beb531c 7a0c312f 7a0c3077 1bc4ffd8 888d9fd9 mscorwks!CreateHistoryReader+0x8af9
1beb5350 7a106b2d 1b2733a0 00000001 1b2733a0 mscorwks!GetCompileInfo+0x345ed
1beb5378 7a105b91 1b272ff8 1b2733a0 00000001 mscorwks!GetAddrOfContractShutoffFlag+0x93a8
1beb53e0 7a105d46 1beb5388 1b272ff8 1beb5520 mscorwks!GetAddrOfContractShutoffFlag+0x840c
1beb5404 79fe29c5 00000001 00000000 00000000 mscorwks!GetAddrOfContractShutoffFlag+0x85c1
1beb5420 7c948752 1beb5504 1beef9b8 1beb5520 mscorwks!NGenCreateNGenWorker+0x4d52
1beb5444 7c948723 1beb5504 1beef9b8 1beb5520 ntdll!ExecuteHandler2+0x26
1beb54ec 7c94855e 1beb1000 1beb5520 1beb5504 ntdll!ExecuteHandler+0x24
1beb54ec 1c9f2264 1beb1000 1beb5520 1beb5504 ntdll!KiUserExceptionDispatcher+0xe
1beb57f4 1c92992d 1beb6e28 1db84d70 1db90e28 Unmanaged1!UMgetMaxSmth+0x1200ad
1beb5860 1c929cfe 00000000 1db84d70 1beb6e28 Unmanaged1!UMgetMaxSmth+0x57776
1beb58c0 1c930b04 00000000 1db84d70 1beb6e28 Unmanaged1!UMgetMaxSmth+0x57b47
1beb5924 1c99d088 00000000 1db84d70 1beb6e28 Unmanaged1!UMgetMaxSmth+0x5e94d
1beb5990 1c99c955 00000000 1beb6e28 1beb6590 Unmanaged1!UMgetMaxSmth+0xcaed1
1beb5a44 1c99e9ae 00000000 40977000 1db90e28 Unmanaged1!UMgetMaxSmth+0xca79e
A: Solution:
Create a new thread in which to run the imported DLL, and assign more memory to its stack.
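A hedged sketch of that fix in C# (the 16 MB figure is an arbitrary illustrative value; the Thread constructor overload taking maxStackSize exists since .NET 2.0):

```csharp
// Run the interop-heavy work on a dedicated thread with a larger stack,
// instead of the (smaller) default stack of an IIS worker thread.
Thread worker = new Thread(delegate()
{
    InitME();
    // ... calls to ProcessString(...) go here ...
}, 16 * 1024 * 1024); // maxStackSize in bytes (illustrative value)
worker.Start();
worker.Join();
```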
A: What is the error given? If the application truly crashed, you might have to go into the Windows Event Log to get the stack trace of the error.
A: I've run procmon, debugdiag, tried to work with microsoft debugging tools. Each time the app crashes, Dr. Watson creates a pair of files - .dmp and .tmp (which I have tried to debug without success).
Here's the error from the Event Log:
Event Type: Error
Event Source: .NET Runtime 2.0 Error Reporting
Event Category: None
Event Code: 1000
Date: 30.09.2008
Time: 16:13:38
User: Not Applicable
Computer: APPLICATIONTEST010
Description:
Faulting application w3wp.exe, version 6.0.3790.3959, stamp 45d6968e, faulting module Unmanaged1.dll, version 0.0.0.0, stamp 48b6bfb8, debug? 0, fault address 0x00122264.
A: I think a potential problem to look for here is that your DLL could be unloaded by the runtime between calls to InitME and ProcessString - so if ProcessString depends on InitME having been called first, it might go "boom".
The solution to that would be using good old LoadLibrary and FreeLibrary to force the runtime to keep the library loaded between calls to those two functions. GetProcAddress is not needed (as far as I can tell).
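A hedged sketch of pinning the DLL this way (the class and method names are made up for illustration; LoadLibrary is the real kernel32 export):

```csharp
static class UnmanagedPin
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern IntPtr LoadLibrary(string lpFileName);

    private static IntPtr handle = IntPtr.Zero;

    // Call once (e.g. in Application_Start). Taking a reference and never
    // calling FreeLibrary keeps Unmanaged.dll resident for the lifetime of
    // the process, so it cannot be unloaded between InitME and ProcessString.
    public static void Hold()
    {
        if (handle == IntPtr.Zero)
            handle = LoadLibrary("Unmanaged.dll");
    }
}
```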
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Digital Right Management (DRM) forward lock in Windows Mobile Is it possible to apply DRM forward lock to windows mobile applications?
A: Yes it does.
You need to use the Windows Mobile File-Based Digital Rights Management API. This is a WM5 API, so it's not supported under WM 2003.
Your device needs to have an FDRM provider installed. I believe most WM5/6 devices should have some sort of provider installed by default in the ROM.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Editing a text buffer Ok, this is a bit of a cheeky question. I want to build a simple text editor (using my own text mode screen handling). I just want a good example of data structures that can be used to represent the text buffer, and some simple examples of char/text insertion/deletion. I can handle all the rest of the code myself (file i/o, console i/o etc). A link to a nice simple editor source would be great (C or C++).
A: The "Gang of Four" book (Design Patterns) has a GUI-based text editor as its main source of examples and is a worthwhile book to own.
The general "pure text" editor probably uses ropes, which SGI's STL has an implementation of. Basically, they are a linked list of character buffers. That way, inserting/deleting characters involves changing smaller buffers and a few pointers, instead of storing the entire document in a single buffer and having to shift everything.
A: This is 2008. Don't write a text editor; you're reinventing fire.
Still here? I'm not sure if this applies or what platforms you plan to support, but the Neatpad series of tutorials is a great place to start thinking about writing a text editor. They focus on Win32 as the basic platform, but many of the lessons learned will apply anywhere.
A: My favorite solution is the gap buffer, because it's pretty easy to implement and has good amortized efficiency. Just use a single array of characters, with a region designated as the gap. Once you understand the concept, the code follows almost naturally.
You also need an auxiliary array [vector<int>] to track the index of the beginning of each line--so that you can easily extract a particular line of text. The auxiliary array only needs to be updated when the gap moves, or when a newline is inserted/removed.
A: These two online documents present a small, but useful cornucopia of "well-known" data structures/techniques for text editors.
*
*Data Structures for Text Sequences describes, and experimentally analyses, a few data structures, leaning towards piece tables as the data structure of choice. Net wisdom, however, seems to lean towards gap buffers as being more than adequate for text editing, and simpler to implement/debug.
*"The craft of text editing" (www.finseth.com/craft/) is an older work, and addresses more than just data structures, and is oriented towards Emacs-style editors; but the concepts are generally useful.
A: A simple approach would be line oriented -- represent the file as an array/vector of char/wchar_t arrays/vectors, one per line. Insertions and deletions work the way you'd expect, although end of line is a special case.
I'd start with that and possibly replace the line data structure with something more efficiently supporting inserts/deletes on long lines after you have everything else working.
A: You can use almost any data structure to write a text editor. Two million characters is a fairly thick novel's worth of typing, and you can easily move them up/down (for an insert/delete in a simple array) in less than one-tenth of a second. Don't listen to anyone who tells you not to build one; you'll get something which works exactly right in all the small details.
I wrote mine after I'd done too much web browsing and had got used to page up/down being the same as clicking above/below the scrollbar thumb. The jump back to where you were before you started scrollbar navigation, whenever you typed a character in a normal editor, just got too annoying for me, so I wrote my own.
If I was going to do a rewrite (I just used delphi ansistrings for each text buffer in the current version, with newline characters embedded), I'd use integers or int64s for each character and encode block start/stop, cursor position and line markers in the high bits, that way you don't have to adjust pointers when you insert or delete things.
A: Your primary data structure is one to contain the text. Rather than using a long buffer to contain the text, you'll probably want an array of lines, because it's faster to insert a character into the middle of a line than it is to insert a character into the middle of a large buffer.
You'll need to decide if your text editor should support embedded formatting. If, for example, you need to use fonts, bolding, underlining, etc, then your data structure will need to include ways of embedding formatting codes within your text. In the good old days of 8-bit characters we could use the upper 8-bits of an integer to store any formatting flags and the lower 8-bits to store the character itself.
The actual code will depend on the language you're using. In C# or C++ you'll probably use an array of strings for the lines. In C you'll have an array of heap-based character arrays.
Separate out the display code from the text handling code as much as possible. The center of your code will be a tight loop something like:
while (editing) {
GetCharacter();
ProcessCharacter();
UpdateDisplay();
}
A more sophisticated editor will use separate threads for the character getting/processing and the display updating.
A: I used to work for a company whose main product was a text editor. While I mainly worked on the scripting language for it, the internal design of the editor itself was naturally a major topic of discussion.
It seemed like it broke down into two general trains of thought. One was that you stored each line by itself, and then link them together in a linked list or other overall data structure that you were happy with. The advantage was that any line-oriented editing actions (such as deleting an entire line, or moving a line block within a file) were beyond trivial to implement and therefore lightning fast. The down side was that loading and saving the file took a bit more work, because you'd have to traverse the entire file and build these data structures.
The other train of thought at that time was to try to keep hunks of text together regardless of line breaks when they hadn't been changed, breaking them up only as required by editing. The advantage was that an unedited hunk of the file could be blasted out to a file very easily. So simple edits where you load a file, change one line, and save the file were super fast. The disadvantage was that line-oriented or column-block operations were very time consuming to execute, because you would have to parse through these hunks of text and move a lot of data around.
We always stuck with the line-oriented design, for whatever that is worth, and our product was considered one of the fastest editors at the time.
A: This really depends on your design. A couple of years back, I wrote a small editor using curses. I used a doubly linked list where each node was a character (quite a wasteful design... but it makes formatting and screen refresh routines really easy).
Other data structures used by my friends were (this was a homework project):
1) a linked list of arrays, with each array representing a line.
2) a 2D linked list (just made up that name)... it was a linked list of characters, but each character was also linked to the character above and below it.
3) an array of linked lists
However, I would suggest you go through the source code of some simple editors like pico to see what data structures they are using.
A: Have you checked out Scintilla's source code?
A: Check out vim, it's open-source. Poke around in it to see how it handles what you want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: How do I configure IE7 to download filetype instead of opening in browser I have written a watir script that downloads files. One of the files it downloads has a .dcf extension. Months ago, on my machine, I changed a setting somewhere so that .dcf files prompt for download ("Do you want to open or save this file?") instead of opening in the browser. This is the behavior that I desire. I am using XP Pro/IE7.
I'm now setting up a dedicated test machine, but cannot seem to find the configuration option that I did on my machine - which was so easy to find that I didn't make note of it. All of the solutions that I am finding now are either about changing the download itself or modifying the registry. I am looking for something from the client perspective at the browser/IE level.
A: instructions here:
http://www.mydigitallife.info/2007/06/15/disable-automatic-opening-or-saving-of-downloads-re-enable-always-ask-before-check-box/
A: For PHP, try using these headers:
// Note: successive header("Content-Type: ...") calls replace one another
// by default, so only the last Content-Type below actually takes effect.
header("Content-Type: application/force-download");
header("Content-Type: application/octet-stream");
header("Content-Type: application/download");
header("Content-Disposition: attachment; filename=".basename($filename).";");
Naturally you can make use of the same headers in whatever language you are using.
A: tloach's link provided direction towards the answer I needed:
*
*I needed to open a windows explorer window and select Folder Options... from the Tools menu.
*I needed to go to the File Types tab and locate the extension for which I wanted to change behavior.
*I needed to have the Advanced button for the file type - it isn't always there.
*The Confirm open after download checkbox needs to be checked (not cleared)
*There need to be Actions - there aren't always. I found that one called edit and another called open were enough. I tied them to notepad (C:\WINDOWS\system32\NOTEPAD.EXE %1). I also needed to check the Use DDE checkbox and fill in NOTEPAD for Application: and System for Topic:
Hopefully this will help others. It will likely help me the next time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to add an attribute to an XML node in Java 1.4 I tried:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(f);
Node mapNode = getMapNode(doc);
System.out.print("\r\n elementName "+ mapNode.getNodeName());//This works fine.
Element e = (Element) mapNode; //This is where the error occurs
//it seems to work on my machine, but not on the server.
e.setAttribute("objectId", "OBJ123");
But this throws a java.lang.ClassCastException error on the line that casts it to Element. mapNode is a valid node - I already have it printing out its name.
I think maybe this code does not work in Java 1.4. What I really need is an alternative to using Element. I tried doing
NamedNodeMap atts = mapNode.getAttributes();
Attr att = doc.createAttribute("objId");
att.setValue(docId);
atts.setNamedItem(att);
But getAttributes() returns null on the server, even though it doesn't locally, and I am using the same document on the server as locally. It can print out getNodeName(); it's just that getAttributes() does not work.
A: I was using a different dtd file on the server. That was causing the issue.
A: Might the first child be a whitespace-only text node or suchlike?
Try:
System.out.println(doc.getFirstChild().getClass().getName());
EDIT:
Just looked it up in my own code, you need:
doc.getDocumentElement().getChildNodes();
Or:
NodeList nodes = doc.getElementsByTagName("MyTag");
A: I think your cast of the output of doc.getFirstChild() is where you're getting your exception -- you're getting some non-Element Node object. Does the line number on the stack trace point to that line? You might need to do a doc.getChildNodes() and iterate to find the first Element child (doc root), skipping non-Element Nodes.
Your e.setAttribute() call looks sensible. Assuming e is an Element and you actually get to that line...
A: As already noted, the ClassCastException is probably not being thrown in setAttribute. Check the line number in the stack. My guess is that getFirstChild() is returning a DocumentType, not an Element.
Try this:
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(f);
Element e = (Element) doc.getDocumentElement().getFirstChild();
e.setAttribute("objectId", "OBJ123");
Update:
It seems like you are confusing Node and Element. Element is an implementation of Node, but certainly not the only one. So not all Nodes are castable to Element. If the cast is working on one machine and not on another, it's because you're getting something else back from getMapNode(), because the parsers are behaving differently. The XML parser is pluggable in Java 1.4, so you could be getting an entirely different implementation, from a different vendor, with different bugs even.
Since you're not posting getMapNode() we cannot see what it's doing, but you should be explicit about what node you want it to return (using getElementsByTagName or otherwise).
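One defensive approach, instead of blindly casting getFirstChild(), is to walk the child list and skip everything that isn't an Element (whitespace Text nodes, comments, the DocumentType). Here is a small self-contained sketch; the XML content and the element names are invented for the example, and getMapNode() from the question is not reproduced here:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;

public class FirstElementChild {
    // Returns the first child of 'parent' that is an Element,
    // skipping whitespace text nodes, comments, etc.
    static Element firstElementChild(Node parent) {
        NodeList children = parent.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() == Node.ELEMENT_NODE) {
                return (Element) child;
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical document: the whitespace between <root> and <map>
        // becomes a Text node, so getFirstChild() on the root element
        // would NOT return the <map> element here.
        String xml = "<root>\n  <map name=\"demo\"/>\n</root>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        Element map = firstElementChild(doc.getDocumentElement());
        map.setAttribute("objectId", "OBJ123");
        System.out.println(map.getAttribute("objectId"));
    }
}
```

Because this never assumes a particular parser's whitespace handling, it behaves the same whichever Java 1.4 parser implementation the server happens to plug in.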
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Running many virtual machines on a single host I have a need to run a relatively large number of virtual machines on a relatively small number of physical hosts. Each virtual machine isn't doing too much - each only needs to run essentially one basic network service - think SMTP or the like. Furthermore, the load on each is going to be extremely light.
Unfortunately, the numbers are something like 100 virtual machines on 5 physical hosts. Each host is decent enough - a Core 2 with 2 gigs of RAM and a 1TB disk. However, I know just taking a VMware image of Ubuntu and throwing it on that machine won't get me anywhere near 100 instances; it would be something closer to 20.
So, is there any hope for this ratio of images to hosts? Also, which implementation of virtual machine would be best suited for this purpose - ie has efficient overall usage of resources? We mostly use vmware here, but if there is a significant performance advantage that could be gained by switching to Xen or the like, I am sure we would consider it.
Thank you in advance for your insights :)
Note: We ended up using OpenVZ and it worked rather well. The default parameters for an ubuntu template let us run about 40 instances per machine.
A: There are three main fronts to make those fit:
*
*Lower overhead. OpenVZ, Vserver, or chroot would be ideal if applicable. If you really need each instance to be a real VM with its own kernel, try KVM/Xen instead of VMware. They may be less mature, but you'll have a lot more flexibility.
*Smaller guests. Try Ubuntu JeOS, or roll your own with BusyBox.
*Share as much as possible between guests. Try sharing a single R/O image with all of the OS, and mount a small R/W image for each guest on /var, /home, /etc, etc.
A: A couple of problems with that...
*
*For VMware Server you really need server hardware, unless it's only for testing.
*Go with a virtualization solution that is bare-metal, like XenServer, or VMware ESX or ESXi (free), or Hyper-V, which isn't bare-metal but is closer in performance.
*For 20-to-1 you will need more RAM. The math doesn't add up. Minimal functional machines need 512MB, unless it's a perfectly stripped Linux, which should have at least 256MB. 20x256MB = 5GB + 5-10% overhead. Not really going to happen on those specs.
*For 20-to-1 you will need more processor. Each machine will have a vCPU, shared on a Core 2; that means 10-to-1 per core. Not good. We run almost 20 on a dual quad-core Dell 1950 with 16GB RAM. Works great.
*Whatever you choose, you are going to be oversubscribing memory. Not exactly sure which ones let you. Vmware will, but shows warnings.
*I've heard but have no proof that XenServer will offer performance benefits, but nobody claims more than 10-20%.
Good luck
A: Do you really need 100 full-functional operating systems?
Why not take the approach web servers already use? I mean virtual web servers/hosts.
For example, take Apache HTTPD installed on a single physical server hosting many virtual servers using a single config file. Plus you'll need DNS configured and/or many virtual network interfaces (eth0:0, eth0:1, ..., eth0:n) with different IP addresses.
This should work if you really need only several services exposed to the world and the load is not high.
A: Another possibility is to use a lightweight Linux distribution that can run in very small amounts of memory. Something like DamnSmallLinux or a variation on DD-WRT. They can run in as little as 16MB of memory, allowing you to run 20 or more on a single machine.
A: You'd be best off running VMware ESX/ESXi as they both have a fancy memory pooling feature. It basically takes pages of memory that are identical and uses them amongst multiple guests, so if you're running a lot of identical guests, you'll be able to get a lot more on your host than with other VMs.
Check the bit about "Transparent Page Sharing" in this blog entry, and a comment about it here too.
Obviously you're still pushing it with 20 guests per host and only 2Gb RAM on each, but if you remove all extraneous services and apps, and build 1 guest image and clone it before installing the dedicated app on each, you might just get away with it, especially as the VMware link shows a 4Gb host running 40 guests!
A: i've got one quadcore machine running a full desktop and 9 virtual machines. since this is a testing machine i use all sorts of guests. the best on ram usage seem to be debian-kfreebsd, and tiny core linux. tiny core linux uses 10M of ram doing nothing. add a couple of services and it might be 32M, so i could run 32 vm's within 1GB of ram! you have 2GB so lets say you could run 48 machines including a hypervisor and overhead(i'm using kvm.) so with 5 machines we'd be up to 240 machines :D
i think i'm going to try that in a moment :D
btw. you said the vm's whould have a light load, so i didn't count on cpuload or diskload. and those figures have exactly 0 redundancy.
A: If you can slim down the guest enough you could probably do it, no X, minimal services started etc. Look at slackware or ubuntu server.
Xen seems popular among web hosting companies, so might be worth looking at.
CPU usage will depend on the apps but you might want to buy some more ram!
A: Is there a reason why each network service instance needs to be compartmentalized into their own virtual machines? If you don't need to isolate users from each other, but do need to isolate the processes and traffic, then you'd probably be better off just using the five servers as-is and launching separate processes for each instance. Each instance would be bound to a separate virtual interface.
For example, configure up a virtual interface and assign it an IP address. Create an httpd.conf file and/or sendmail.cf file for the instance you want to create. In the config file, specify that the daemon should be bound to the virtual interface (and only that one). Launch the daemon.
Repeat for each of the instances. You'll have a lot of processes running (hundreds, if not thousands), the sum total of them will use less memory than dozens of VMs. Plus, your OS will be able to swap unused ones out to disk.
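A hypothetical sketch of that setup (the interface names, addresses, and config paths here are invented for illustration - they are not from the original post):

```shell
# Give the physical NIC two IP aliases (addresses are examples).
ifconfig eth0:0 10.0.0.10 netmask 255.255.255.0 up
ifconfig eth0:1 10.0.0.11 netmask 255.255.255.0 up

# Each instance gets its own config file whose Listen directive binds
# it to exactly one of the aliased addresses, e.g.:
#   instance0.conf: Listen 10.0.0.10:80
#   instance1.conf: Listen 10.0.0.11:80
httpd -f /etc/httpd/instance0.conf
httpd -f /etc/httpd/instance1.conf
```

On a more modern Linux you would use `ip addr add` rather than interface aliases, but the idea is the same: one address per service instance, and hundreds of ordinary processes sharing a single kernel instead of hundreds of VMs.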
A: If you do the math, you get on average 100 MB of RAM for each machine. This is not much. The overhead for a VM is pretty big, having to run a complete OS in each instance.
Either you use some really small footprint os (http://www.damnsmalllinux.org/?) and spend time to strip it down even more or you get bigger machines.
Machines being that cheap, I'd tend to upgrade to a 64bit OS with plenty of ram.
A: VMWare has a cool option where you can "pool" a group of physical machines, and it will automatically move the virtual machines to whichever hardware is least utilized, without interrupting the operation of the VM.
Rather advertisey link.
A: Are you restricted to vmware? Have you considered Operating system-level virtualization? You'll get more VMs with less overhead, given that each VM can run the same kernel.
A: Several thoughts ...
1- As pointed out by others, the memory arithmetic doesn't work, you will need more RAM.
2- Depending on the service, you may be able to find pre-configured virtual machines. For instance, Astaro has a VM setup for its free firewall software. You may also be able to find a very small Linux distro that you can adapt.
3- Maybe I am missing something, but it sounds like Ubuntu is pretty close already ... 20 instances per machine on 5 machines get the 100 instances that you require. There is not much headroom for future growth, however ...
Take care, good luck.
A: I don't know if this is possible, but how about running each service in a chroot environment? You could probably save disk space by hard linking the necessary library files to create each chroot filesystem.
A: Another issue with running each service in its own VM is that they will all need their own IP address. 100 IPs may not be an issue on an internal network (like a 172/8 or 10/8 setup), but if they're part of your Class A (presuming you have that many public), you're going to run out fast.
And, as others have asked, why does each service need to be its own VM? Many of them should be easily capable of running on the same host.
A: If it's something that can be done at the application level - I'd go without any virtualization. You can easily run multiple instances of your app on different port numbers, or even different IPs with IP aliasing. That way you can run more than 20 copies on each of your boxes. Heck, you might be able to do everything with half of your hardware.
Virtualization is not the solution for everything. :)
My 2c.
A: Having that many VMs, you may run into performance interference problems, according to this blog.
A: Cloud Foundry. I know nothing about VMs compared to anyone else who may have submitted an answer, but from what I understand, if you have a host, a VM on that host, and then Cloud Foundry on that VM, you can easily create a base secondary VM and then replicate and configure all of your services within that secondary VM set, while keeping hardware usage low. I don't know if it will work for sure, but from what I understand that would be one of the more minimal approaches, and it is a two-hull approach which would reduce the possible risk of damaging the host machine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is there a way to iterate through all enum values?
Possible Duplicate:
C#: How to enumerate an enum?
The subject says it all. I want to use this to add the values of an enum to a combobox.
Thanks
vIceBerg
A: You could iterate through the array returned by the Enum.GetNames method instead.
public class GetNamesTest {
enum Colors { Red, Green, Blue, Yellow };
enum Styles { Plaid, Striped, Tartan, Corduroy };
public static void Main() {
Console.WriteLine("The values of the Colors Enum are:");
foreach(string s in Enum.GetNames(typeof(Colors)))
Console.WriteLine(s);
Console.WriteLine();
Console.WriteLine("The values of the Styles Enum are:");
foreach(string s in Enum.GetNames(typeof(Styles)))
Console.WriteLine(s);
}
}
A: If you need the values of the combo to correspond to the values of the enum you can also use something like this:
foreach (TheEnum value in Enum.GetValues(typeof(TheEnum)))
    dropDown.Items.Add(new ListItem(
        value.ToString(), ((int)value).ToString()));
In this way you can show the texts in the dropdown and obtain back the value (in SelectedValue property)
A: string[] names = Enum.GetNames (typeof(MyEnum));
Then just populate the dropdown with the array.
A: I know others have already answered with a correct answer, however, if you're wanting to use the enumerations in a combo box, you may want to go the extra yard and associate strings to the enum so that you can provide more detail in the displayed string (such as spaces between words or display strings using casing that doesn't match your coding standards)
This blog entry may be useful - Associating Strings with enums in c#
public enum States
{
California,
[Description("New Mexico")]
NewMexico,
[Description("New York")]
NewYork,
[Description("South Carolina")]
SouthCarolina,
Tennessee,
Washington
}
As a bonus, he also supplied a utility method for enumerating the enumeration that I've now updated with Jon Skeet's comments
public static IEnumerable<T> EnumToList<T>()
where T : struct
{
Type enumType = typeof(T);
// Can't use generic type constraints on value types,
// so have to do check like this
if (enumType.BaseType != typeof(Enum))
throw new ArgumentException("T must be of type System.Enum");
Array enumValArray = Enum.GetValues(enumType);
List<T> enumValList = new List<T>();
foreach (T val in enumValArray)
{
enumValList.Add(val); // note: adding val.ToString() here would not compile for a List<T>
}
return enumValList;
}
Jon also pointed out that in C# 3.0 it can be simplified to something like this (which is now getting so light-weight that I'd imagine you could just do it in-line):
public static IEnumerable<T> EnumToList<T>()
where T : struct
{
return Enum.GetValues(typeof(T)).Cast<T>();
}
// Using above method
statesComboBox.Items = EnumToList<States>();
// Inline
statesComboBox.Items = Enum.GetValues(typeof(States)).Cast<States>();
A: Use the Enum.GetValues method:
foreach (TestEnum en in Enum.GetValues(typeof(TestEnum)))
{
...
}
You don't need to cast them to a string, and that way you can just retrieve them back by casting the SelectedItem property to a TestEnum value directly as well.
A: The problem with using enums to populate pull-downs is that you can't have weird characters or spaces in enum names. I have some code that extends enums so that you can add any characters you want.
Use it like this:
public enum eCarType
{
[StringValue("Saloon / Sedan")] Saloon = 5,
[StringValue("Coupe")] Coupe = 4,
[StringValue("Estate / Wagon")] Estate = 6,
[StringValue("Hatchback")] Hatchback = 8,
[StringValue("Utility")] Ute = 1,
}
Bind data like so:
StringEnum CarTypes = new StringEnum(typeof(eCarTypes));
cmbCarTypes.DataSource = CarTypes.GetGenericListValues();
Here is the class that extends the enum.
// Author: Donny V.
// blog: http://donnyvblog.blogspot.com
using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;
namespace xEnums
{
#region Class StringEnum
/// <summary>
/// Helper class for working with 'extended' enums using <see cref="StringValueAttribute"/> attributes.
/// </summary>
public class StringEnum
{
#region Instance implementation
private Type _enumType;
private static Hashtable _stringValues = new Hashtable();
/// <summary>
/// Creates a new <see cref="StringEnum"/> instance.
/// </summary>
/// <param name="enumType">Enum type.</param>
public StringEnum(Type enumType)
{
if (!enumType.IsEnum)
throw new ArgumentException(String.Format("Supplied type must be an Enum. Type was {0}", enumType.ToString()));
_enumType = enumType;
}
/// <summary>
/// Gets the string value associated with the given enum value.
/// </summary>
/// <param name="valueName">Name of the enum value.</param>
/// <returns>String Value</returns>
public string GetStringValue(string valueName)
{
Enum enumType;
string stringValue = null;
try
{
enumType = (Enum) Enum.Parse(_enumType, valueName);
stringValue = GetStringValue(enumType);
}
catch (Exception) { }//Swallow!
return stringValue;
}
/// <summary>
/// Gets the string values associated with the enum.
/// </summary>
/// <returns>String value array</returns>
public Array GetStringValues()
{
ArrayList values = new ArrayList();
//Look for our string value associated with fields in this enum
foreach (FieldInfo fi in _enumType.GetFields())
{
//Check for our custom attribute
StringValueAttribute[] attrs = fi.GetCustomAttributes(typeof (StringValueAttribute), false) as StringValueAttribute[];
if (attrs.Length > 0)
values.Add(attrs[0].Value);
}
return values.ToArray();
}
/// <summary>
/// Gets the values as a 'bindable' list datasource.
/// </summary>
/// <returns>IList for data binding</returns>
public IList GetListValues()
{
Type underlyingType = Enum.GetUnderlyingType(_enumType);
ArrayList values = new ArrayList();
//List<string> values = new List<string>();
//Look for our string value associated with fields in this enum
foreach (FieldInfo fi in _enumType.GetFields())
{
//Check for our custom attribute
StringValueAttribute[] attrs = fi.GetCustomAttributes(typeof (StringValueAttribute), false) as StringValueAttribute[];
if (attrs.Length > 0)
values.Add(new DictionaryEntry(Convert.ChangeType(Enum.Parse(_enumType, fi.Name), underlyingType), attrs[0].Value));
}
return values;
}
/// <summary>
/// Gets the values as a 'bindable' list<string> datasource.
///This is a newer version of 'GetListValues()'
/// </summary>
/// <returns>IList<string> for data binding</returns>
public IList<string> GetGenericListValues()
{
Type underlyingType = Enum.GetUnderlyingType(_enumType);
List<string> values = new List<string>();
//Look for our string value associated with fields in this enum
foreach (FieldInfo fi in _enumType.GetFields())
{
//Check for our custom attribute
StringValueAttribute[] attrs = fi.GetCustomAttributes(typeof(StringValueAttribute), false) as StringValueAttribute[];
if (attrs.Length > 0)
values.Add(attrs[0].Value);
}
return values;
}
/// <summary>
/// Return the existence of the given string value within the enum.
/// </summary>
/// <param name="stringValue">String value.</param>
/// <returns>Existence of the string value</returns>
public bool IsStringDefined(string stringValue)
{
return Parse(_enumType, stringValue) != null;
}
/// <summary>
/// Return the existence of the given string value within the enum.
/// </summary>
/// <param name="stringValue">String value.</param>
/// <param name="ignoreCase">Denotes whether to conduct a case-insensitive match on the supplied string value</param>
/// <returns>Existence of the string value</returns>
public bool IsStringDefined(string stringValue, bool ignoreCase)
{
return Parse(_enumType, stringValue, ignoreCase) != null;
}
/// <summary>
/// Gets the underlying enum type for this instance.
/// </summary>
/// <value></value>
public Type EnumType
{
get { return _enumType; }
}
#endregion
#region Static implementation
/// <summary>
/// Gets a string value for a particular enum value.
/// </summary>
/// <param name="value">Value.</param>
/// <returns>String Value associated via a <see cref="StringValueAttribute"/> attribute, or null if not found.</returns>
public static string GetStringValue(Enum value)
{
string output = null;
Type type = value.GetType();
if (_stringValues.ContainsKey(value))
output = (_stringValues[value] as StringValueAttribute).Value;
else
{
//Look for our 'StringValueAttribute' in the field's custom attributes
FieldInfo fi = type.GetField(value.ToString());
StringValueAttribute[] attrs = fi.GetCustomAttributes(typeof (StringValueAttribute), false) as StringValueAttribute[];
if (attrs.Length > 0)
{
_stringValues.Add(value, attrs[0]);
output = attrs[0].Value;
}
}
return output;
}
/// <summary>
/// Parses the supplied enum and string value to find an associated enum value (case sensitive).
/// </summary>
/// <param name="type">Type.</param>
/// <param name="stringValue">String value.</param>
/// <returns>Enum value associated with the string value, or null if not found.</returns>
public static object Parse(Type type, string stringValue)
{
return Parse(type, stringValue, false);
}
/// <summary>
/// Parses the supplied enum and string value to find an associated enum value.
/// </summary>
/// <param name="type">Type.</param>
/// <param name="stringValue">String value.</param>
/// <param name="ignoreCase">Denotes whether to conduct a case-insensitive match on the supplied string value</param>
/// <returns>Enum value associated with the string value, or null if not found.</returns>
public static object Parse(Type type, string stringValue, bool ignoreCase)
{
object output = null;
string enumStringValue = null;
if (!type.IsEnum)
throw new ArgumentException(String.Format("Supplied type must be an Enum. Type was {0}", type.ToString()));
//Look for our string value associated with fields in this enum
foreach (FieldInfo fi in type.GetFields())
{
//Check for our custom attribute
StringValueAttribute[] attrs = fi.GetCustomAttributes(typeof (StringValueAttribute), false) as StringValueAttribute[];
if (attrs.Length > 0)
enumStringValue = attrs[0].Value;
//Check for equality then select actual enum value.
if (string.Compare(enumStringValue, stringValue, ignoreCase) == 0)
{
output = Enum.Parse(type, fi.Name);
break;
}
}
return output;
}
/// <summary>
/// Return the existence of the given string value within the enum.
/// </summary>
/// <param name="stringValue">String value.</param>
/// <param name="enumType">Type of enum</param>
/// <returns>Existence of the string value</returns>
public static bool IsStringDefined(Type enumType, string stringValue)
{
return Parse(enumType, stringValue) != null;
}
/// <summary>
/// Return the existence of the given string value within the enum.
/// </summary>
/// <param name="stringValue">String value.</param>
/// <param name="enumType">Type of enum</param>
/// <param name="ignoreCase">Denotes whether to conduct a case-insensitive match on the supplied string value</param>
/// <returns>Existence of the string value</returns>
public static bool IsStringDefined(Type enumType, string stringValue, bool ignoreCase)
{
return Parse(enumType, stringValue, ignoreCase) != null;
}
#endregion
}
#endregion
#region Class StringValueAttribute
/// <summary>
/// Simple attribute class for storing String Values
/// </summary>
public class StringValueAttribute : Attribute
{
private string _value;
/// <summary>
/// Creates a new <see cref="StringValueAttribute"/> instance.
/// </summary>
/// <param name="value">Value.</param>
public StringValueAttribute(string value)
{
_value = value;
}
/// <summary>
/// Gets the value.
/// </summary>
/// <value></value>
public string Value
{
get { return _value; }
}
}
#endregion
}
A: .NET 3.5 makes it simple by using extension methods:
enum Color {Red, Green, Blue}
Can be iterated with
Enum.GetValues(typeof(Color)).Cast<Color>()
or define a new static generic method:
static IEnumerable<T> GetValues<T>() {
return Enum.GetValues(typeof(T)).Cast<T>();
}
Keep in mind that iterating with the Enum.GetValues() method uses reflection and thus has performance penalties.
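Putting the helper above into a runnable sketch (the Color enum and the program scaffolding are just for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

enum Color { Red, Green, Blue }

static class EnumHelper
{
    // Generic wrapper around Enum.GetValues, as shown above.
    public static IEnumerable<T> GetValues<T>()
    {
        return Enum.GetValues(typeof(T)).Cast<T>();
    }
}

class Program
{
    static void Main()
    {
        // Prints Red, Green, Blue in declaration order.
        foreach (Color c in EnumHelper.GetValues<Color>())
        {
            Console.WriteLine(c);
        }
    }
}
```

Since Enum.GetValues reflects over the type on each call, caching the result in a static field is worthwhile in hot paths.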
A: It is often useful to define a Min and Max inside your enum, which will always be the first and last items. Here is a very simple example using Delphi syntax:
procedure TForm1.Button1Click(Sender: TObject);
type
TEmployeeTypes = (etMin, etHourly, etSalary, etContractor, etMax);
var
i : TEmployeeTypes;
begin
for i := etMin to etMax do begin
//do something
end;
end;
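The same sentinel idea translated into C# for comparison (the EmployeeType name and members below are illustrative, not from the original post):

```csharp
using System;

// Min and Max act as sentinels bracketing the real members.
enum EmployeeType { Min, Hourly, Salary, Contractor, Max }

class Program
{
    static void Main()
    {
        // Skip the sentinels and visit only the real members.
        for (EmployeeType t = EmployeeType.Min + 1; t < EmployeeType.Max; t++)
        {
            Console.WriteLine(t); // Hourly, Salary, Contractor
        }
    }
}
```

The point of the pattern is that inserting a new member between Contractor and Max leaves the loop correct without any other change.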
A: A little more "complicated" (maybe overkill), but I use these two methods to return dictionaries to use as data sources. The first one returns the name as key, the second the value as key.
public static IDictionary<string, int> ConvertEnumToDictionaryNameFirst<K>()
{
if (typeof(K).BaseType != typeof(Enum))
{
throw new InvalidCastException();
}
return Enum.GetValues(typeof(K)).Cast<int>().ToDictionary(currentItem
=> Enum.GetName(typeof(K), currentItem));
}
Or you could do
public static IDictionary<int, string> ConvertEnumToDictionaryValueFirst<K>()
{
if (typeof(K).BaseType != typeof(Enum))
{
throw new InvalidCastException();
}
return Enum.GetNames(typeof(K)).Cast<string>().ToDictionary(currentItem
=> (int)Enum.Parse(typeof(K), currentItem));
}
This assumes you are using 3.5 though. You'd have to replace the lambda expressions if not.
Use:
IDictionary<int, string> list = ConvertEnumToDictionaryValueFirst<SomeEnum>();
using System;
using System.Collections.Generic;
using System.Linq;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How to join the newest rows from a table? I frequently run into problems of this form and haven't found a good solution yet:
Assume we have two database tables representing an e-commerce system.
userData (userId, name, ...)
orderData (orderId, userId, orderType, createDate, ...)
For all users in the system, select their user information, their most recent order information with type = '1', and their most recent order information with type = '2'. I want to do this in one query. Here is an example result:
(userId, name, ..., orderId1, orderType1, createDate1, ..., orderId2, orderType2, createDate2, ...)
(101, 'Bob', ..., 472, '1', '4/25/2008', ..., 382, '2', '3/2/2008', ...)
A: This should work, you'll have to adjust the table / column names:
select ud.name,
order1.order_id,
order1.order_type,
order1.create_date,
order2.order_id,
order2.order_type,
order2.create_date
from user_data ud,
order_data order1,
order_data order2
where ud.user_id = order1.user_id
and ud.user_id = order2.user_id
and order1.order_id = (select max(order_id)
from order_data od1
where od1.user_id = ud.user_id
and od1.order_type = 'Type1')
and order2.order_id = (select max(order_id)
from order_data od2
where od2.user_id = ud.user_id
and od2.order_type = 'Type2')
Denormalizing your data might also be a good idea. This type of thing will be fairly expensive to do. So you might add a last_order_date to your userData.
A: I have provided three different approaches for solving this problem:
*
*Using Pivots
*Using Case Statements
*Using inline queries in the where clause
All of the solutions assume we are determining the "most recent" order based on the orderId column. Using the createDate column would add complexity due to timestamp collisions and seriously hinder performance since createDate is probably not part of the indexed key. I have only tested these queries using MS SQL Server 2005, so I have no idea if they will work on your server.
Solutions (1) and (2) perform almost identically. In fact, they both result in the same number of reads from the database.
Solution (3) is not the preferred approach when working with large data sets. It consistently makes hundreds of logical reads more than (1) and (2). When filtering for one specific user, approach (3) is comparable to the other methods. In the single user case, a drop in the cpu time helps to counter the significantly higher number of reads; however, as the disk drive becomes busier and cache misses occur, this slight advantage will disappear.
Conclusion
For the presented scenario, use the pivot approach if it is supported by your DBMS. It requires less code than the case statement and simplifies adding order types in the future.
Please note, in some cases, PIVOT is not flexible enough and characteristic value functions using case statements are the way to go.
Code
Approach (1) using PIVOT:
select
ud.userId, ud.fullname,
od1.orderId as orderId1, od1.createDate as createDate1, od1.orderType as orderType1,
od2.orderId as orderId2, od2.createDate as createDate2, od2.orderType as orderType2
from userData ud
inner join (
select userId, [1] as typeOne, [2] as typeTwo
from (select
userId, orderType, orderId
from orderData) as orders
PIVOT
(
max(orderId)
FOR orderType in ([1], [2])
) as LatestOrders) as LatestOrders on
LatestOrders.userId = ud.userId
inner join orderData od1 on
od1.orderId = LatestOrders.typeOne
inner join orderData od2 on
od2.orderId = LatestOrders.typeTwo
Approach (2) using Case Statements:
select
ud.userId, ud.fullname,
od1.orderId as orderId1, od1.createDate as createDate1, od1.orderType as orderType1,
od2.orderId as orderId2, od2.createDate as createDate2, od2.orderType as orderType2
from userData ud
-- assuming not all users will have orders use outer join
inner join (
select
od.userId,
-- can be null if no orders for type
max (case when orderType = 1
then ORDERID
else null
end) as maxTypeOneOrderId,
-- can be null if no orders for type
max (case when orderType = 2
then ORDERID
else null
end) as maxTypeTwoOrderId
from orderData od
group by userId) as maxOrderKeys on
maxOrderKeys.userId = ud.userId
inner join orderData od1 on
od1.ORDERID = maxTypeOneOrderId
inner join orderData od2 on
OD2.ORDERID = maxTypeTwoOrderId
Approach (3) using inline queries in the where clause (based on Steve K.'s response):
select ud.userId,ud.fullname,
order1.orderId, order1.orderType, order1.createDate,
order2.orderId, order2.orderType, order2.createDate
from userData ud,
orderData order1,
orderData order2
where ud.userId = order1.userId
and ud.userId = order2.userId
and order1.orderId = (select max(orderId)
from orderData od1
where od1.userId = ud.userId
and od1.orderType = 1)
and order2.orderId = (select max(orderId)
from orderData od2
where od2.userId = ud.userId
and od2.orderType = 2)
Script to generate tables and 1000 users with 100 orders each:
CREATE TABLE [dbo].[orderData](
[orderId] [int] IDENTITY(1,1) NOT NULL,
[createDate] [datetime] NOT NULL,
[orderType] [tinyint] NOT NULL,
[userId] [int] NOT NULL
)
CREATE TABLE [dbo].[userData](
[userId] [int] IDENTITY(1,1) NOT NULL,
[fullname] [nvarchar](50) NOT NULL
)
-- Create 1000 users with 100 order each
declare @userId int
declare @usersAdded int
set @usersAdded = 0
while @usersAdded < 1000
begin
insert into userData (fullname) values ('Mario' + ltrim(str(@usersAdded)))
set @userId = @@identity
declare @orderSetsAdded int
set @orderSetsAdded = 0
while @orderSetsAdded < 10
begin
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-06-08', 1)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-02-08', 1)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-08-08', 1)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-09-08', 1)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-01-08', 1)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-06-06', 2)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-02-02', 2)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-08-09', 2)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-09-01', 2)
insert into orderData (userId, createDate, orderType)
values ( @userId, '01-01-04', 2)
set @orderSetsAdded = @orderSetsAdded + 1
end
set @usersAdded = @usersAdded + 1
end
Small snippet for testing query performance on MS SQL Server in addition to SQL Profiler:
-- Uncomment these to clear some caches
--DBCC DROPCLEANBUFFERS
--DBCC FREEPROCCACHE
set statistics io on
set statistics time on
-- INSERT TEST QUERY HERE
set statistics time off
set statistics io off
A: Sorry I don't have oracle in front of me, but this is the basic structure of what I would do in oracle:
SELECT b.user_id, b.orderid, b.orderType, b.createDate, <etc>,
a.name
FROM orderData b, userData a
WHERE a.userid = b.userid
AND (b.userid, b.orderType, b.createDate) IN (
SELECT userid, orderType, max(createDate)
FROM orderData
WHERE orderType IN (1,2)
GROUP BY userid, orderType)
A: T-SQL sample solution (MS SQL):
SELECT
u.*
, o1.*
, o2.*
FROM
(
SELECT
userData.*
, (SELECT TOP 1 orderId FROM orderData WHERE orderData.userId=userData.userId AND orderType=1 ORDER BY createDate DESC)
AS order1Id
, (SELECT TOP 1 orderId FROM orderData WHERE orderData.userId=userData.userId AND orderType=2 ORDER BY createDate DESC)
AS order2Id
FROM userData
) AS u
LEFT JOIN orderData o1 ON (u.order1Id=o1.orderId)
LEFT JOIN orderData o2 ON (u.order2Id=o2.orderId)
In SQL 2005 you could also use RANK ( ) OVER function. (But AFAIK its completely MSSQL-specific feature)
A: By "newest", do you mean all orders created in the current day? You can always check against createDate and fetch all user and order data where createDate >= the current day.
SELECT * FROM
"orderData", "userData"
WHERE
"userData"."userId" ="orderData"."userId"
AND "orderData".createDate >= current_date;
UPDATED
Here is what you want after your comment here:
SELECT * FROM
"orderData", "userData"
WHERE
"userData"."userId" ="orderData"."userId"
AND "orderData"."orderType" = '1'
AND "orderData"."orderId" = (
SELECT "orderId" FROM "orderData"
WHERE
"orderType" = '1'
AND "userId" = "userData"."userId"
ORDER BY "orderId" DESC
LIMIT 1
)
A: You might be able to do a union query for this. The exact syntax needs some work, especially the group by section, but the union should be able to do it.
For example:
SELECT orderId, orderType, createDate
FROM orderData od
WHERE orderType = 1
  AND createDate = (SELECT MAX(createDate) FROM orderData
                    WHERE userId = od.userId AND orderType = 1)
UNION
SELECT orderId, orderType, createDate
FROM orderData od
WHERE orderType = 2
  AND createDate = (SELECT MAX(createDate) FROM orderData
                    WHERE userId = od.userId AND orderType = 2)
A: I use things like this in MySQL:
SELECT
u.*,
SUBSTRING_INDEX( MAX( CONCAT( o1.createDate, '##', o1.otherfield)), '##', -1) as o1_orderfield,
SUBSTRING_INDEX( MAX( CONCAT( o2.createDate, '##', o2.otherfield)), '##', -1) as o2_orderfield
FROM
userData as u
LEFT JOIN orderData AS o1 ON (o1.userId=u.userId AND o1.orderType=1)
LEFT JOIN orderData AS o2 ON (o2.userId=u.userId AND o2.orderType=2)
In short, use MAX() to get the newest, by prepending the criteria field (createDate) to the interesting field(s) (otherfield). SUBSTRING_INDEX() then strips off the date.
OTOH, if you need an arbitrary number of orders (if orderType can be any number, and not a limited ENUM), it's better to handle it with a separate query, something like this:
select * from orderData where userId=XXX group by orderType order by orderType, createDate desc
for each user.
A: Assuming orderId is monotonic increasing with time:
SELECT *
FROM userData u
INNER JOIN orderData o
ON o.userId = u.userId
INNER JOIN ( -- This subquery gives the last order of each type for each customer
SELECT MAX(o2.orderId)
--, o2.userId -- optional - include if joining for a particular customer
--, o2.orderType -- optional - include if joining for a particular type
FROM orderData o2
GROUP BY o2.userId
,o2.orderType
) AS LastOrders
ON LastOrders.orderId = o.orderId -- expand join to include customer or type if desired
Then pivot at the client or if using SQL Server, there is a PIVOT functionality
A: Here is one way to move the type 1 and 2 data on to the same row:
(by placing the type 1 and type 2 information into their own selects that then get used in the from clause.)
SELECT
a.name, ud1.*, ud2.*
FROM
userData a,
(SELECT user_id, orderid, orderType, createDate, <etc>
FROM orderData b
WHERE (userid, orderType, createDate) IN (
SELECT userid, orderType, max(createDate)
FROM orderData
WHERE orderType = 1
GROUP BY userid, orderType)) ud1,
(SELECT user_id, orderid, orderType, createDate, <etc>
FROM orderData
WHERE (userid, orderType, createDate) IN (
SELECT userid, orderType, max(createDate)
FROM orderData
WHERE orderType = 2
GROUP BY userid, orderType)) ud2
WHERE a.userId = ud1.user_id
AND a.userId = ud2.user_id
A: Here's how I do it. This is standard SQL and works in any brand of database.
SELECT u.userId, u.name, o1.orderId, o1.orderType, o1.createDate,
o2.orderId, o2.orderType, o2.createDate
FROM userData AS u
LEFT OUTER JOIN (
SELECT o1a.orderId, o1a.userId, o1a.orderType, o1a.createDate
FROM orderData AS o1a
LEFT OUTER JOIN orderData AS o1b ON (o1a.userId = o1b.userId
AND o1a.orderType = o1b.orderType AND o1a.createDate < o1b.createDate)
WHERE o1a.orderType = 1 AND o1b.orderId IS NULL) AS o1 ON (u.userId = o1.userId)
LEFT OUTER JOIN (
SELECT o2a.orderId, o2a.userId, o2a.orderType, o2a.createDate
FROM orderData AS o2a
LEFT OUTER JOIN orderData AS o2b ON (o2a.userId = o2b.userId
AND o2a.orderType = o2b.orderType AND o2a.createDate < o2b.createDate)
WHERE o2a.orderType = 2 AND o2b.orderId IS NULL) o2 ON (u.userId = o2.userId);
Note that if you have multiple orders of either type whose dates are equal to the latest date, you'll get multiple rows in the result set. If you have multiple orders of both types, you'll get N x M rows in the result set. So I would recommend that you fetch the rows of each type in separate queries.
A: Steve K is absolutely right, thanks! I did rewrite his answer a little to account for the fact that there might be no order for a particular type (which I failed to mention, so I can't fault Steve K.)
Here's what I wound up using:
select ud.name,
order1.orderId,
order1.orderType,
order1.createDate,
order2.orderId,
order2.orderType,
order2.createDate
from userData ud
left join orderData order1
on order1.orderId = (select max(orderId)
from orderData od1
where od1.userId = ud.userId
and od1.orderType = '1')
left join orderData order2
on order2.orderId = (select max(orderId)
from orderData od2
where od2.userId = ud.userId
and od2.orderType = '2')
where ...[some limiting factors on the selection of users]...;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Example code required for how to access embedded .Net image resources in C# It's very easy to mark an image file as an embedded resource; however, how does one access the image thereafter? Please can I have some example code?
A: 1) Adding and Editing Resources (Visual C#)
System.Drawing.Bitmap bitmap1 = myProject.Properties.Resources.Image01;
2) Accessing Embedded Resources using GetManifestResourceStream
Assembly _assembly = Assembly.GetExecutingAssembly();
Stream _imageStream =
_assembly.GetManifestResourceStream(
"ThumbnailPictureViewer.resources.Image1.bmp");
Bitmap theDefaultImage = new Bitmap(_imageStream);
A: The most direct method:
YourProjectsBaseNamespace.Properties.Resources.YourImageResourceName
A: Look at the third code snippet at http://msdn.microsoft.com/en-us/library/aa309403(VS.71).aspx
A: //Get the names of the embedded resource files
List<string> resources = new List<string>(Assembly.GetExecutingAssembly().GetManifestResourceNames());
//Get the stream
StreamReader sr = new StreamReader(
Assembly.GetExecutingAssembly().GetManifestResourceStream(
resources.Find(target => target.ToLower().Contains("insert name here"))));
You can convert to a Bitmap from the stream: the Bitmap class has a constructor that accepts a Stream (Image.FromStream also works).
A: You can try using https://www.nuget.org/packages/EmbeddedResourceBrowser/ library, there are code samples on the project page (https://andrei15193.github.io/EmbeddedResourceBrowser/). It's just to help access embedded resources in .NET applications. You can get a Stream to read the contents, it makes browsing resources a lot more easier.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Stuff in Windows Form Move When Maximized - C# It's been a while since I've programmed a GUI program, so this may end up being super simple, but I can't find the solution anywhere online.
Basically my problem is that when I maximize my program, all the things inside of the window (buttons, textboxes, etc.) stay in the same position in the window, which results in a large blank area near the bottom and right side.
Is there a way of making the the elements in the program to stretch to scale?
A: Anchor and Dock properties
A: You want to check and properly set the Anchor and Dock properties on each control in the Form. The Anchor property on a control tells which sides of the form (top, bottom, left, right) the control is 'anchored' to. When the form is resized, the distance between the control and its anchors will stay the same. This lets you make a control stay in the bottom right corner for example.
The Dock property instructs the control to fill the entire parent form or to fill one side of it (again top, bottom, left or right).
A: Look at the Dynamic Layout: Anchoring and Docking sample at http://msdn.microsoft.com/en-us/library/aa289756(VS.71).aspx
It's in VB, but the concepts and the propeties/methods you need are the same in C#.
A: There are some layout panel controls that help you keep things proportioned as the form expands/contracts:
TableLayoutPanel
FlowLayoutPanel
A:
As to layouts, I'm not quite sure what you mean, but I'm using Visual Studio 2008's default GUI editor.
There are some special 'container' type panels that you could stick on your form such as FlowLayoutPanel and TableLayoutPanel. These types of containers have additional layout behavior.
If you find that some of your controls still don't want to behave during resize, then use the right-click context menu of the control to list the controls ancestors: its parent, its parent's parent, etc. You may find that the troublesome control is a child of some special container which has its own layout rules.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Windows path searching in LoadLibrary with manifest If you call LoadLibrary without a path (e.g., LoadLibrary("whatever.dll"), Windows will generally follow its standard search algorithm, the same one it uses to find EXEs.
My question is this: suppose that an application manifest specifies specifies a particular version of a system DLL, say, comctl32.dll 6.0. In that case, will LoadLibrary("comctl32.dll") go immediately to the correct side-by-side folder, or does it still perform some kind of search?
A: From Microsoft:
Applications can control the location from which a DLL is loaded by specifying a full path, using DLL redirection, or by using a manifest. If none of these methods are used, the system searches for the DLL at load time as described in this topic.
So yes, if a manifest is present, it will directly go to the SxS folder.
A: To probe the loader when having trouble with missing libraries, you can use the "sxstrace" feature. www.codeproject.com/KB/DLL/QueryAssemblyIdentities.aspx gives some details about the dependencies between manifests and WinSxS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: php Access violation I am trying to install Php on Vista (IIS 7). The installation and configuration seems to be fine. Pretty much followed everything mentioned in http://blogs.iis.net/bills/archive/2006/09/19/How-to-install-PHP-on-IIS7-_2800_RC1_2900_.aspx
I can even bring up a test.php which is basically and can also connect to mysql db through code. But when I try to bring up some other php page like drupal's index page or phpmyadmin index page, it brings up a Php access violation message.
Any clue what's happening? Also, is there some tracing/diagnostic tool for PHP to trace what's happening on the web server?
A: In your PHP.INI file, try to comment all the extensions lines and restart IIS. Take note that MySql will no longer work. Do some tests.
If it's successfull, uncomment back the mySql line. Do some tests.
If it's unsucessfull, them I can't help you more. I suspect that's a loaded extension causing the error. You have to find which one...
Hope this helps.
vIceBerg
A: I'm assuming you're using PHP 5.2.6 (latest version). I'm also assuming you're using ISAPI from the directions in your link.
There seem to be a lot of bug reports for PHP on IIS with access violations.
See here and here for a list.
So far the only suggestion I've found is to try FastCGI instead of ISAPI.
I've never had to mess with debugging PHP itself, my own scripts of cource, but not PHP. There are instructions for generating a backtrace and some general debugging info.
Edit -- After reading viceBerg's answer I remembered another possible cause was non-thread safe PHP extensions and IIS not liking each other. It might be worth it to disable them all and enable them one at a time but it's only really useful if you can get a script to cause that error consistently.
Without a consistent, reproducible error this will be very hard to debug.
A: Thanks guys. I disabled all the extensions and started added one at a time and only the ones I needed to use (mysql,mbstring etc). I also had to copy php.ini into System32 folder (even though I had it in the system Path). Voila, it worked.
A: Try moving the libmysql.dll that came with php to windows\system32
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Any issues with ActiveMQ broker and clients running on different JDKs? We have a distributed system with components (including the ActiveMQ broker) running on jdk 1.6.
A potential customer would like to integrate a component that was built on jdk 1.4 with our system. While this customer is willing to write code to integrate with our system, they are not comfortable moving from jdk 1.4.
Would there be any problems with a system where one client is running off of an older jdk?
A: ActiveMQ 5.x works on Java 1.5 or later - any JVM 1.5 or later should work fine as ActiveMQ uses its own marshalling layer and does not rely on serialisation etc.
If you want to work with Java 1.4 you'll need to either install the Retrotranslator JIT or transform the jars to 1.4-compliant bytecode with Retrotranslator. There is a Maven retrotranslator plugin to help. See the ActiveMQ FAQ entry for more help.
Another option is to write a simple STOMP client which is a good solution for applets etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Trailing slash on an ASP.NET MVC route In the latest MVC preview, I'm using this route for a legacy URL:
routes.MapRoute(
"Legacy-Firefox", // Route name
"Firefox-Extension/", // URL with parameters
new { controller = "Home", action = "Firefox", id = "" } // Parameter defaults
);
The problem is that both of these URL's work:
http://example.com/Firefox-Extension
http://example.com/Firefox-Extension/
I only want the second to work (for SEO). Also, when I create a link to that page, the routing engine gives me back a URL without a trailing slash.
This is the code I'm using to generate the link:
<%= Html.ActionLink("Firefox Extension", "Firefox", "Home")%>
I believe I can fix the first problem by using an HTTP handler to do a 301 redirect to the URL with the trailing slash. However, I want to link to the URL with the trailing slash, and I'm hoping to not have to hard-code the version with the slash.
Anyone know how to force the route to use a trailing slash?
A: If you have a wrapper over RouteLink than there is an easy solution of the problem.
For example, I had a wrapper method RouteLinkEx:
public static string RouteLinkEx(this HtmlHelper helper,string text,string routeName,RouteValueDictionary rvd,object htmlAttributes)
{
UrlHelper uh = new UrlHelper(helper.ViewContext.RequestContext,helper.RouteCollection);
// Add trailing slash to the url of the link
string url = uh.RouteUrl(routeName,rvd) + "/";
TagBuilder builder = new TagBuilder("a")
{
InnerHtml = !string.IsNullOrEmpty(text) ? HttpUtility.HtmlEncode(text) : string.Empty
};
builder.MergeAttributes(new RouteValueDictionary(htmlAttributes));
builder.MergeAttribute("href",url);
return builder.ToString(TagRenderMode.Normal);
}
As you see I used parameters to generate URL first. Then I added "/" at the end of the URL. and then I generated complete link using those URL.
A: I happened across this blog post:
http://www.ytechie.com/2008/10/aspnet-mvc-what-about-seo.html
this morning before running into this question on StackOverflow. That blog post (from the author of this question) has a trackback to this blog post from Scott Hanselman with an answer to this question:
http://www.hanselman.com/blog/ASPNETMVCAndTheNewIIS7RewriteModule.aspx
I was surprised to find no link from here to there yet, so I just added it. :)
Scott's answer suggests using URL Rewriting.
A: When you write your links, you should always include the final slash. I don't know if this applies to the mvc framework (or URL Routing in general), but I know that for static resources, if you don't put the slash in you add a slight overhead as the request gets done twice.
The slash immediately identifies the url as pointing to a directory. No need to parse files.
Again, I don't believe this applies when you use URL routing, but I haven't looked into it.
Check HERE for an article about the trailing slash
edit:
Upon thinking about this... I think it's probably better to leave off the slash, instead of trying to include it. When you're using url routing, you're using the URL to route directly to a resource. As opposed to pointing to a directory with an index.html or default.aspx, you're pointing to a specific file.
I know the difference is subtle, but it may be better to stick to the non-slash for Routed Urls, rather than fight with the framework.
Use a trailing slash strictly when you're actually pointing to a directory. Though I guess you could just append a slash to the end every time if you really wanted to.
A: MVC 5 and 6 has the option of generating lower case URL's for your routes. My route config is shown below:
public static class RouteConfig
{
public static void RegisterRoutes(RouteCollection routes)
{
// Improve SEO by stopping duplicate URLs due to case differences or trailing slashes.
routes.AppendTrailingSlash = true;
routes.LowercaseUrls = true;
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
name: "Default",
url: "{controller}/{action}/{id}",
defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
}
}
With this code, you should no longer need the canonicalize the URL's as this is done for you. One problem that can occur if you are using HTTP and HTTPS URL's and want a canonical URL for this. In this case, it's pretty easy to use the above approaches and replace HTTP with HTTPS or vice versa.
Another problem is external websites that link to your site may omit the trailing slash or add upper-case characters and for this you should perform a 301 permanent redirect to the correct URL with the trailing slash. For full usage and source code, refer to my blog post and the RedirectToCanonicalUrlAttribute filter:
/// <summary>
/// To improve Search Engine Optimization SEO, there should only be a single URL for each resource. Case
/// differences and/or URL's with/without trailing slashes are treated as different URL's by search engines. This
/// filter redirects all non-canonical URL's based on the settings specified to their canonical equivalent.
/// Note: Non-canonical URL's are not generated by this site template, it is usually external sites which are
/// linking to your site but have changed the URL case or added/removed trailing slashes.
/// (See Google's comments at http://googlewebmastercentral.blogspot.co.uk/2010/04/to-slash-or-not-to-slash.html
/// and Bing's at http://blogs.bing.com/webmaster/2012/01/26/moving-content-think-301-not-relcanonical).
/// </summary>
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = false)]
public class RedirectToCanonicalUrlAttribute : FilterAttribute, IAuthorizationFilter
{
private readonly bool appendTrailingSlash;
private readonly bool lowercaseUrls;
#region Constructors
/// <summary>
/// Initializes a new instance of the <see cref="RedirectToCanonicalUrlAttribute" /> class.
/// </summary>
/// <param name="appendTrailingSlash">If set to <c>true</c> append trailing slashes, otherwise strip trailing
/// slashes.</param>
/// <param name="lowercaseUrls">If set to <c>true</c> lower-case all URL's.</param>
public RedirectToCanonicalUrlAttribute(
bool appendTrailingSlash,
bool lowercaseUrls)
{
this.appendTrailingSlash = appendTrailingSlash;
this.lowercaseUrls = lowercaseUrls;
}
#endregion
#region Public Methods
/// <summary>
/// Determines whether the HTTP request contains a non-canonical URL using <see cref="TryGetCanonicalUrl"/>,
/// if it doesn't calls the <see cref="HandleNonCanonicalRequest"/> method.
/// </summary>
/// <param name="filterContext">An object that encapsulates information that is required in order to use the
/// <see cref="RedirectToCanonicalUrlAttribute"/> attribute.</param>
/// <exception cref="ArgumentNullException">The <paramref name="filterContext"/> parameter is <c>null</c>.</exception>
public virtual void OnAuthorization(AuthorizationContext filterContext)
{
if (filterContext == null)
{
throw new ArgumentNullException("filterContext");
}
if (string.Equals(filterContext.HttpContext.Request.HttpMethod, "GET", StringComparison.Ordinal))
{
string canonicalUrl;
if (!this.TryGetCanonicalUrl(filterContext, out canonicalUrl))
{
this.HandleNonCanonicalRequest(filterContext, canonicalUrl);
}
}
}
#endregion
#region Protected Methods
/// <summary>
/// Determines whether the specified URl is canonical and if it is not, outputs the canonical URL.
/// </summary>
/// <param name="filterContext">An object that encapsulates information that is required in order to use the
/// <see cref="RedirectToCanonicalUrlAttribute" /> attribute.</param>
/// <param name="canonicalUrl">The canonical URL.</param>
/// <returns><c>true</c> if the URL is canonical, otherwise <c>false</c>.</returns>
protected virtual bool TryGetCanonicalUrl(AuthorizationContext filterContext, out string canonicalUrl)
{
bool isCanonical = true;
canonicalUrl = filterContext.HttpContext.Request.Url.ToString();
int queryIndex = canonicalUrl.IndexOf(QueryCharacter);
if (queryIndex == -1)
{
bool hasTrailingSlash = canonicalUrl[canonicalUrl.Length - 1] == SlashCharacter;
if (this.appendTrailingSlash)
{
// Append a trailing slash to the end of the URL.
if (!hasTrailingSlash)
{
canonicalUrl += SlashCharacter;
isCanonical = false;
}
}
else
{
// Trim a trailing slash from the end of the URL.
if (hasTrailingSlash)
{
canonicalUrl = canonicalUrl.TrimEnd(SlashCharacter);
isCanonical = false;
}
}
}
else
{
bool hasTrailingSlash = canonicalUrl[queryIndex - 1] == SlashCharacter;
if (this.appendTrailingSlash)
{
// Append a trailing slash to the end of the URL but before the query string.
if (!hasTrailingSlash)
{
canonicalUrl = canonicalUrl.Insert(queryIndex, SlashCharacter.ToString());
isCanonical = false;
}
}
else
{
// Trim a trailing slash to the end of the URL but before the query string.
if (hasTrailingSlash)
{
canonicalUrl = canonicalUrl.Remove(queryIndex - 1, 1);
isCanonical = false;
}
}
}
if (this.lowercaseUrls)
{
foreach (char character in canonicalUrl)
{
if (char.IsUpper(character))
{
canonicalUrl = canonicalUrl.ToLower();
isCanonical = false;
break;
}
}
}
return isCanonical;
}
/// <summary>
/// Handles HTTP requests for URL's that are not canonical. Performs a 301 Permanent Redirect to the canonical URL.
/// </summary>
/// <param name="filterContext">An object that encapsulates information that is required in order to use the
/// <see cref="RedirectToCanonicalUrlAttribute" /> attribute.</param>
/// <param name="canonicalUrl">The canonical URL.</param>
protected virtual void HandleNonCanonicalRequest(AuthorizationContext filterContext, string canonicalUrl)
{
filterContext.Result = new RedirectResult(canonicalUrl, true);
}
#endregion
}
Usage example to ensure all requests are 301 redirected to the correct canonical URL:
filters.Add(new RedirectToCanonicalUrlAttribute(
RouteTable.Routes.AppendTrailingSlash,
RouteTable.Routes.LowercaseUrls));
A: Here is an overload for RouteLinkEx(HtmlHelper, string, string, object):
public static string RouteLinkEx(this HtmlHelper helper, string text, string routeName, object routeValues)
{
UrlHelper uh = new UrlHelper(helper.ViewContext.RequestContext);
// Add trailing slash to the url of the link
string url = uh.RouteUrl(routeName, routeValues) + "/";
TagBuilder builder = new TagBuilder("a")
{
InnerHtml = !string.IsNullOrEmpty(text) ? HttpUtility.HtmlEncode(text) : string.Empty
};
//builder.MergeAttributes(new RouteValueDictionary(htmlAttributes));
builder.MergeAttribute("href", url);
return builder.ToString(TagRenderMode.Normal);
//---
}
A: Here is my version for ASP.NET MVC 2
public static MvcHtmlString RouteLinkEx(this HtmlHelper helper, string text, RouteValueDictionary routeValues)
{
return RouteLinkEx(helper, text, null, routeValues, null);
}
public static MvcHtmlString RouteLinkEx(this HtmlHelper htmlHelper, string text, string routeName, RouteValueDictionary routeValues, object htmlAttributes)
{
string url = UrlHelper.GenerateUrl(routeName, null, null, null, null, null, routeValues, htmlHelper.RouteCollection, htmlHelper.ViewContext.RequestContext, false);
var builder = new TagBuilder("a")
{
InnerHtml = !string.IsNullOrEmpty(text) ? HttpUtility.HtmlEncode(text) : string.Empty
};
builder.MergeAttributes(new RouteValueDictionary(htmlAttributes));
// Add trailing slash to the url of the link
builder.MergeAttribute("href", url + "/");
return MvcHtmlString.Create(builder.ToString(TagRenderMode.Normal));
}
A: I think you are solving the problem from the wrong angle. The reason given for wanting to force a single URL is SEO. I believe this refers to getting a duplicate content penalty, because search engines consider these to be two URLs with the same content.
Another solution to this problem then is to add a CANONICAL tag to your page which tells the search engines which is the "official" url for the page. Once you do that you no longer need to force the URLs and search engines will not penalize you and will route search results to your official url.
https://support.google.com/webmasters/answer/139066?hl=en
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: How to set working directory when debugging VB6 app? I am debugging a VB6 executable. The executable loads DLLs and files from its current directory when running. When run in the debugger, the current directory seems to be VB6's dir.
How do I set working directory for VB6?
A: A solution that I have found to work uses a Sub Main and checks whether the program is running in the IDE.
Dim gISIDE as Boolean
Sub Main()
If IsIDE Then
ChDrive App.Path
ChDir App.Path
End If
' The rest of the code goes here...
End Sub
Public Function IsIDE() As Boolean
IsIDE = False
'This line is only executed if running in the IDE and then returns True
Debug.Assert CheckIDE
If gISIDE Then
IsIDE = True
End If
End Function
Private Function CheckIDE() As Boolean ' this is a helper function for Public Function IsIDE()
gISIDE = True 'set global flag
CheckIDE = True
End Function
A: "The current directory seems to be VB6's dir" only when you open a project using File-Open.
Open it by double clicking the .vbp file while having the IDE closed.
A: There doesn't seem to be an "out of the box" solution for this.
Taken from The Old Joel On Software Forums
Anyways.. to put this topic to rest..
the following was my VB6 solution: I
define 2 symbols in my VB project
"MPDEBUG" and "MPRELEASE" and call the
following function as the first
operation in my apps entry point
function.
Public Sub ChangeDirToApp()
#If MPDEBUG = 0 And MPRELEASE = 1 Then
' assume that in final release builds the current dir will be the location
' of where the .exe was installed; paths are relative to the install dir
ChDrive App.path
ChDir App.path
#Else
' in all debug/IDE related builds, we need to switch to the "bin" dir
ChDrive App.path
ChDir App.path & BackSlash(App.path) & "..\bin"
#End If
End Sub
A: Will this work?
'Declaration
Private Declare Function SetCurrentDirectory Lib "kernel32" _
Alias "SetCurrentDirectoryA" (ByVal lpPathName As String) As Long
'syntax to set current dir
SetCurrentDirectory App.Path
A: Current directory for any program - including vb6 - can be changed in the properties of the shortcut. I've changed it to the root of my source tree, it makes using File-Open quicker.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Is it possible to ScaleTransform everything on a Canvas/Grid except for 1 Control? Given the following canvas:
<Canvas>
<Canvas.LayoutTransform>
<ScaleTransform ScaleX="1" ScaleY="1" CenterX=".5" CenterY=".5" />
</Canvas.LayoutTransform>
<Button x:Name="scaleButton" Content="Scale Me" Canvas.Top="10" Canvas.Left="10" />
<Button x:Name="dontScaleButton" Content="DON'T Scale Me" Canvas.Top="10" Canvas.Left="50" />
</Canvas>
Is it possible to scale 1 button, but not the other when ScaleX and ScaleY changes?
A: Not in XAML. You can do this in code by building the reverse transform and applying it to the object you don't want transformed.
If you want to go fancy, you can build a dependency property that you can attach in XAML to any object you don't want to be transformed by any parent transforms. This dependency property will take the transform of the parent, build a reverse transform and apply it to the object it's attached to.
A: Not sure if this was impossible when you asked the question but i would approach it like this:
<Button x:Name="dontScaleButton" Content="DON'T Scale Me" Canvas.Top="10" Canvas.Left="50"
LayoutTransform="{Binding LayoutTransform.Inverse,
RelativeSource={RelativeSource AncestorType=Canvas}}"/>
The original transform still seems to have a translation effect on the button, though.
A: You could also restructure the elements so that the elements you don't want to scale with the Canvas are not actually children of that Canvas.
<Canvas>
<Canvas>
<Canvas.LayoutTransform>
<ScaleTransform ScaleX="1" ScaleY="1" CenterX=".5" CenterY=".5" />
</Canvas.LayoutTransform>
<Button x:Name="scaleButton" Content="Scale Me" Canvas.Top="10" Canvas.Left="10" />
</Canvas>
<Button x:Name="dontScaleButton" Content="DON'T Scale Me" Canvas.Top="10" Canvas.Left="50" />
</Canvas>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why are entries in addition order in a .Net Dictionary? I just saw this behaviour and I'm a bit surprised by it...
If I add 3 or 4 elements to a Dictionary, and then do a "For Each" to get all the keys, they appear in the same order I added them.
The reason this surprises me is that a Dictionary is supposed to be a HashTable internally, so I expected things to come out in ANY order (ordered by the hash of the key, right?)
What am I missing here?
Is this a behaviour I can count on?
EDIT: OK, I had already thought of many of the reasons why this might happen (like a separate list of entries, whether this is a coincidence, etc.).
My question is, does anyone know how this really works?
A: A dictionary retrieves items in hashed order. The fact that they came out in insertion order was a total coincidence.
The MSDN documentation says:
The order of the keys in the KeyCollection is unspecified, but it is the same order as the associated values in the ValueCollection returned by the Values property.
A: You cannot count on this behavior, but it's not surprising.
Consider how you would implement key iteration for a simple hash table. You would need to iterate over all the hash buckets, whether or not they had anything in them. Getting a small data set from a big hashtable could be inefficient.
Therefore it might be a good optimization to keep a separate, duplicate list of keys. Using a double-linked list you still get constant-time insert/delete. (You would keep a pointer from the hashtable bucket back to this list.) That way iterating through the list of keys depends only on the number of entries, not on the number of buckets.
A: If you use .NET Reflector on the 3.5 class libraries you can see that the implementation of Dictionary actually stores the items in an array (which is resized as needed), and hashes indexes into that array. When getting the keys, it completely ignores the hashtable and iterates over the array of items. For this reason, you will see the behavior you have described since new items are added at the end of the array. It looks like if you do the following:
add 1
add 2
add 3
add 4
remove 2
add 5
you will get back 1 5 3 4 because it reuses empty slots.
It is important to note, like many others have, you cannot count on this behavior in future (or past) releases. If you want your dictionary to be sorted then there is a SortedDictionary class for this purpose.
A: I think this comes from the old .NET 1.1 days, when you had two kinds of dictionaries: "ListDictionary" and "HybridDictionary". ListDictionary was a dictionary implemented internally as an ordered list and was recommended for "small sets of entries". Then you had HybridDictionary, which was initially organized internally as a list, but would become a hash table if it grew bigger than a configurable threshold. This was done because historically proper hash-based dictionaries were considered expensive. Nowadays that doesn't make much sense, but I suppose .NET just based its new generic Dictionary class on the old HybridDictionary.
Note: Anyway, as someone else already pointed out, you should never count on the dictionary order for anything
A: A quote from MSDN :
The order of the keys in the
Dictionary<(Of <(TKey,
TValue>)>).KeyCollection is
unspecified, but it is the same order
as the associated values in the
Dictionary<(Of <(TKey,
TValue>)>).ValueCollection
returned by the Dictionary<(Of <(TKey,
TValue>)>).Values property.
A: What keys did you add with in your test, and in what order?
A: Your entries might all be in the same hash bucket in the dictionary. Each bucket is probably a list of entries in the bucket. This would explain the entries coming back in order.
A: From what I know this shouldn't be a behavior to rely on. To check it quickly use the same elements and change the order in which you add them to the dictionary. You'll see if you get them back in the order they were added, or it was just a coincidence.
A: Up to a certain list size it is cheaper to just check every entry instead of hashing. That is probably what is happening.
Add 100 or 1000 items and see if they are still in the same order.
A: I hate this kind of "by design" functionality. I think when giving your class such a generic name as "Dictionary", it should also behave "as generally expected". For example, std::map always keeps its entries sorted by key.
Edit: apparently solution is to use SortedDictionary, which behaves similarly to std::map.
A: The question and many of the answers seem to misunderstand the purpose of a hashtable or dictionary. These data structures have no specified behaviors with respect to the enumeration of the values (or in fact the keys) of the items contained in the data structure.
The purpose of a dictionary or hashtable is to be able to efficiently lookup a specific value given a known key. The internal implementation of any dictionary or hashtable should provide for this efficiency in lookups but need not provide any specific behavior with respect to enumerations or "for each" type iterations on the values or keys.
In short, the internal data structure can store and enumerate these values in any manner that it wishes, including the order that they were inserted.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: How can I figure out what is holding on to unfreed objects? One of our programs is sometimes getting an OutOfMemory error on one user's machine, but of course not when I'm testing it. I just ran it with JProfiler (on a 10 day evaluation license because I've never used it before), and filtering on our code prefix, the biggest chunk both in total size and number of instances is 8000+ instances of a particular simple class.
I clicked the "Garbage Collect" button on JProfiler, and most instances of other classes of ours went away, but not these particular ones. I ran the test again, still in the same instance, and it created 4000+ more instances of the class, but when I clicked "Garbage Collect", those went away leaving the 8000+ original ones.
These instances do get stuck into various Collections at various stages. I assume that the fact that they're not garbage collected must mean that something is holding onto a reference to one of the collections so that's holding onto a reference to the objects.
Any suggestions how I can figure out what is holding onto the reference? I'm looking for suggestions of what to look for in the code, as well as ways to find this out in JProfiler if there are.
A: Try Eclipse Memory Analyzer. It will show you for each object how it is connected to a GC root - an object that is not garbage collected because it is held by the JVM.
See http://dev.eclipse.org/blogs/memoryanalyzer/2008/05/27/automated-heap-dump-analysis-finding-memory-leaks-with-one-click/ for more information on how Eclipse MAT works.
A: I would look at Collections (especially static ones) in your classes (HashMaps are a good place to start). Take this code for example:
Map<String, Object> map = new HashMap<String, Object>(); // 1 Object
String name = "test"; // 2 Objects
Object o = new Object(); // 3 Objects
map.put(name, o); // 3 Objects, 2 of which have 2 references to them
o = null; // The objects are still being
name = null; // referenced by the HashMap and won't be GC'd
System.gc(); // Nothing is deleted.
Object test = map.get("test"); // Returns o
test = null;
map.remove("test"); // Now we're down to just the HashMap in memory
// o, name and test can all be GC'd
As long as the HashMap or some other collection has a reference to that object it won't be garbage collected.
A: No silver bullet there, you have to use the profiler to identify collections that hold those unneeded objects and find the place in code where they should have been removed. As JesperE said, static collections are the first place to look at.
A: Dump the heap and inspect it.
I'm sure there's more than one way to do this, but here is a simple one. This description is for MS Windows, but similar steps can be taken on other operating systems.
*
*Install the JDK if you don't already have it. It comes with a bunch of neat tools.
*Start the application.
*Open task manager and find the process id (PID) for java.exe (or whatever executable you are using). If the PIDs aren't shown by default, use View > Select Columns... to add them.
*Dump the heap using jmap.
*Start the jhat server on the file you generated and open your browser to http://localhost:7000 (the default port is 7000). Now you can browse the type you're interested in and information like the number of instances, what has references to them, etcetera.
Here is an example:
C:\dump>jmap -dump:format=b,file=heap.bin 3552
C:\dump>jhat heap.bin
Reading from heap.bin...
Dump file created Tue Sep 30 19:46:23 BST 2008
Snapshot read, resolving...
Resolving 35484 objects...
Chasing references, expect 7 dots.......
Eliminating duplicate references.......
Snapshot resolved.
Started HTTP server on port 7000
Server is ready.
To interpret this, it is useful to understand some of the array type nomenclature Java uses - like knowing that class [Ljava.lang.Object; really means an object of type Object[].
A: Keep an eye out for static containers. Any objects in a static container will remain as long as the class is loaded.
Edit: removed incorrect remark on WeakReference.
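A small illustrative sketch (the class and names here are hypothetical, not from the questioner's code) of why static containers matter: a static collection is reachable from a GC root for as long as its class is loaded, so everything it holds stays alive until the references are explicitly dropped.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical cache: the static field is a GC root, so objects added to it
// remain reachable even after every caller has forgotten about them.
class LeakyCache {
    static final List<byte[]> CACHE = new ArrayList<>();

    static void remember(byte[] data) {
        CACHE.add(data); // reference retained by the static list
    }

    static void release() {
        CACHE.clear();   // dropping the references makes the data collectable
    }

    public static void main(String[] args) {
        remember(new byte[1024]);
        System.out.println(CACHE.size()); // prints 1: still reachable
        release();
        System.out.println(CACHE.size()); // prints 0: eligible for collection
    }
}
```

In a profiler, objects held this way show a reference path ending at the static field, which is exactly what the "path to GC root" views are for.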
A: One obvious candidate is objects with finalisers. They can linger while their finalize method is called. They need to be collected, then finalised (usually with just a single finaliser thread) and then collected again.
Also be aware that you can get an OOME because the gc failed to collect enough memory, despite there actually being enough for the object request to be created. Otherwise performance would grind into the ground.
A: I just read an article on this, but I'm sorry I can't remember where. I think it might have been in the book "Effective Java". If I find the reference, I'll update my answer.
The two important lessons it outlined are:
1) Finalize methods tell the GC what to do when it culls the object, but they don't ask it to do so, nor is there a way to demand that it does.
2) The modern-day equivalent of the "memory leak" in unmanaged memory environments, is the forgotten references. If you don't set all references to an object to null when you're done with it, the object will never be culled. This is most important when implementing your own kind of Collection, or your own wrapper that manages a Collection. If you have a pool or a stack or a queue, and you don't set the bucket to null when you "remove" an object from the collection, the bucket that object was in will keep that object alive until that bucket is set to refer to another object.
disclaimer: I know other answers mentioned this, but I'm trying to offer more detail.
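For point 2, here is a sketch of the stack example (illustrative code, not quoted from the book): the pop method must null out the vacated slot, otherwise the backing array keeps a reference to the popped object and it can never be collected.

```java
import java.util.Arrays;

// Sketch of the "loitering reference" bug in a hand-rolled stack: without the
// "slots[size] = null" line, popped objects stay reachable through the array.
class SimpleStack {
    private Object[] slots = new Object[8];
    private int size = 0;

    void push(Object o) {
        if (size == slots.length) {
            slots = Arrays.copyOf(slots, size * 2); // grow when full
        }
        slots[size++] = o;
    }

    Object pop() {
        Object top = slots[--size];
        slots[size] = null; // drop the stale reference so the object can be GC'd
        return top;
    }
}
```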
A: I've used the YourKit Java profiler (http://www.yourkit.com) for performance optimizations on Java 1.5. It has a section on how to work on memory leaks. I find it useful.
http://www.yourkit.com/docs/75/help/performance_problems/memory_leaks/index.jsp
You can get a 15 day eval : http://www.yourkit.com/download/yjp-7.5.7.exe
BR,
~A
A: Collections was already mentioned. Another hard-to-find location is if you use multiple ClassLoaders, as the old classloader may be unable to be garbage collected until all references have gone.
Also check statics - these are nasty. Logging frameworks can keep things open which may keep references in custom appenders.
Did you resolve the problem?
A: Some suggestions:
*
*Unbounded maps used as caches, especially when static
*ThreadLocals in server apps, because the threads usually do not die, so the ThreadLocal is not freed
*Interning strings (Strings.intern()), which results in a pile of Strings in the PermSpace
A: If you're getting OOM errors in a garbage collected language, it usually means that there's some memory not being accounted by the collector. Maybe your objects hold non-java resources? if so, then they should have some kind of 'close' method to make sure that resource is released even if the Java object isn't collected soon enough.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: XML Parsing Error: SYSTEM or PUBLIC, the URI is missing I am parsing an RSS feed from the following URL:
http://rss.sciam.com/ScientificAmerican-Global?format=xml
// $xml_text is filled with the contents read from the URL
$xml_parser = xml_parser_create();
$res = xml_parse($xml_parser, $xml_text);
if (!$res) {
$error =
xml_error_string(xml_get_error_code($xml_parser)).
" at line ".
xml_get_current_line_number($xml_parser);
}
// $error contains: "SYSTEM or PUBLIC, the URI is missing at line 1"
FeedValidator.org says this is a good feed.
How can I get PHP's XML parser to work around this error?
EDIT: It looks like they are redirecting this feed to another location based on the user-agent. My PHP script is not getting the correct feed.
A: The code works for me, you must be getting the text wrong.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: When should one use final for method parameters and local variables? I've found a couple of references (for example) that suggest using final as much as possible and I'm wondering how important that is. This is mainly in the context of method parameters and local variables, not final methods or classes. For constants, it makes obvious sense.
On one hand, the compiler can make some optimizations and it makes the programmer's intent clearer. On the other hand, it adds verbosity and the optimizations may be trivial.
Is it something I should make an effort to remember?
A: I've found marking method parameters and locals as final is useful as a refactoring aid when the method in question is an incomprehensible mess several pages long. Sprinkle final liberally, see what "cannot assign to final variable" errors the compiler (or your IDE) throws up, and you just might discover why the variable called "data" ends up null even though several (out of date) comments swear that can't happen.
Then you can fix some of the errors by replacing the reused variables with new variables declared closer to the point of use. Then you find you can wrap whole parts of the method in scoping braces, and suddenly you're one IDE keypress away from "Extract Method" and your monster just got more comprehensible.
If your method is not already an unmaintainable wreck, I guess there might be value in making stuff final to discourage people from turning it into said wreck; but if it's a short method (see: not unmaintainable) then you risk adding a lot of verbosity. In particular, Java function signatures are hard enough to fit into 80 characters as it is without adding six more per argument!
A: It is useful in parameters to avoid changing the parameter value by accident and introducing a subtle bug. I used to ignore this recommendation, but after spending some 4 hours in a horrible method (with hundreds of lines of code and multiple fors, nested ifs and all sorts of bad practices) I would recommend you do it.
public int processSomethingCritical( final int x, final int y ){
// hundreds of lines here
// for loop here...
int x2 = 0;
x++; // bug aarrgg...
// hundreds of lines there
// if( x == 0 ) { ...
}
Of course in a perfect world this wouldn't happen, but... well... sometimes you have to support other people's code. :(
A: If you are writing an application whose code someone will have to read after, say, 1 year, then yes, use final on variables that should not be modified. By doing this, your code will be more "self-documenting" and you also reduce the chance of other developers doing silly things like using a local constant as a local temporary variable.
If you're writing some throwaway code, then, nah, don't bother to identify all the constant and make them final.
A:
Is it something I should make an effort to remember to do?
No, if you are using Eclipse, because you can configure a Save Action to automatically add these final modifiers for you. Then you get the benefits for less effort.
A: I will use final as much as I can. Doing so will flag it if you unintentionally change the variable. I also set method parameters to final. Doing so, I have caught several bugs in code I have taken over, where it tries to 'set' a parameter, forgetting that Java passes by value.
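A minimal sketch of that pattern (names are mine; the commented-out line is the one the compiler would reject): declare the parameter final, and work on a local copy if you need to mutate the value.

```java
// Sketch: a final parameter makes the compiler reject any reassignment in the
// body, catching the classic mistake of "setting" a parameter and expecting
// the caller to see the change (Java passes references and primitives by value).
class FinalParamDemo {
    static int addTax(final int total) {
        // total = total + total / 10; // would not compile: total is final
        int taxed = total + total / 10; // work on a local copy instead
        return taxed;
    }
}
```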
A: It's not clear from the question whether this is obvious, but making a method parameter final affects only the body of the method. It does NOT convey any interesting information about the method's intentions to the invoker. The object being passed in can still be mutated within the method (finals are not consts), and the scope of the variable is within the method.
To answer your precise question, I wouldn't bother making an instance or local variable (including method parameters) final unless the code required it (e.g. the variable is referenced from an inner class), or to clarify some really complicated logic.
For instance variables, I would make them final if they are logically constants.
A: There are many uses for the variable final. Here are just a few
Final Constants
public static class CircleToolsBetter {
public final static double PI = 3.141;
public double getCircleArea(final double radius) {
return (Math.pow(radius, 2) * PI);
}
}
This can then be used by other parts of your code, or accessed by other classes; that way, if you ever change the value, you won't have to change it in each place.
Final Variables
public static String someMethod(final String environmentKey) {
final String key = "env." + environmentKey;
System.out.println("Key is: " + key);
return (System.getProperty(key));
}
}
In this class, you build a scoped final variable that adds a prefix to the parameter environmentKey. In this case, the final variable is final only within the execution scope, which is different at each execution of the method. Each time the method is entered, the final is reconstructed. As soon as it is constructed, it cannot be changed during the scope of the method execution. This allows you to fix a variable in a method for the duration of the method. see below:
public class FinalVariables {
public final static void main(final String[] args) {
System.out.println("Note how the key variable is changed.");
someMethod("JAVA_HOME");
someMethod("ANT_HOME");
}
}
Final Constants
public double equation2Better(final double inputValue) {
final double K = 1.414;
final double X = 45.0;
double result = (((Math.pow(inputValue, 3.0d) * K) + X) * M);
double powInputValue = 0;
if (result > 360) {
powInputValue = X * Math.sin(result);
} else {
inputValue = K * Math.sin(result); // <= Compiler error
}
These are especially useful when you have really long lines of codes, and it will generate compiler error so you don't run in to logic/business error when someone accidentally changes variables that shouldn't be changed.
Final Collections
Collections are a different case: when you want them to be constant, you need to wrap them as unmodifiable.
public final static Set VALID_COLORS;
static {
Set temp = new HashSet( );
temp.add(Color.red);
temp.add(Color.orange);
temp.add(Color.yellow);
temp.add(Color.green);
temp.add(Color.blue);
temp.add(Color.decode("#4B0082")); // indigo
temp.add(Color.decode("#8A2BE2")); // violet
VALID_COLORS = Collections.unmodifiableSet(temp);
}
otherwise, if you don't set it as unmodifiable:
Set colors = Rainbow.VALID_COLORS;
colors.add(Color.black); // <= logic error but allowed by compiler
Final Classes and Final Methods cannot be extended or overwritten respectively.
EDIT: TO ADDRESS THE FINAL CLASS PROBLEM REGARDING ENCAPSULATION:
There are two ways to make a class final. The first is to use the keyword final in the class declaration:
public final class SomeClass {
// . . . Class contents
}
The second way to make a class final is to declare all of its constructors as private:
public class SomeClass {
public final static SomeClass SOME_INSTANCE = new SomeClass(5);
private SomeClass(final int value) {
}
Marking it final saves you the trouble of finding out whether it is actually final. To demonstrate, look at this Test class, which looks public at first glance.
public class Test{
private Test(Class beanClass, Class stopClass, int flags)
throws Exception{
// . . . snip . . .
}
}
Unfortunately, since the only constructor of the class is private, it is impossible to extend this class. In the case of the Test class, there is no reason that the class should be final. The Test class is a good example of how implicit final classes can cause problems.
So you should mark it final when you implicitly make a class final by making it's constructor private.
A: Obsess over:
*
*Final fields - Marking fields as final forces them to be set by end of construction, making that field reference immutable. This allows safe publication of fields and can avoid the need for synchronization on later reads. (Note that for an object reference, only the field reference is immutable - things that object reference refers to can still change and that affects the immutability.)
*Final static fields - Although I use enums now for many of the cases where I used to use static final fields.
Consider but use judiciously:
*
*Final classes - Framework/API design is the only case where I consider it.
*Final methods - Basically same as final classes. If you're using template method patterns like crazy and marking stuff final, you're probably relying too much on inheritance and not enough on delegation.
Ignore unless feeling anal:
*
*Method parameters and local variables - I RARELY do this largely because I'm lazy and I find it clutters the code. I will fully admit that marking parameters and local variables that I'm not going to modify is "righter". I wish it was the default. But it isn't and I find the code more difficult to understand with finals all over. If I'm in someone else's code, I'm not going to pull them out but if I'm writing new code I won't put them in. One exception is the case where you have to mark something final so you can access it from within an anonymous inner class.
*Edit: note that one use case where final local variables are actually very useful, as mentioned by @adam-gent, is when a value gets assigned to the variable in the if/else branches.
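A minimal sketch of that if/else use case (the class and method names here are illustrative, not from the answer): the compiler verifies that every branch assigns the final local exactly once before it is used.

```java
// Sketch: a final local assigned exactly once across if/else branches.
// Missing an assignment in any branch, or assigning twice, is a compile error.
public class FinalBranches {
    static String label(int score) {
        final String grade;
        if (score >= 90) {
            grade = "A";
        } else if (score >= 80) {
            grade = "B";
        } else {
            grade = "C";
        }
        // grade = "D"; // would not compile: grade may already be assigned
        return grade;
    }

    public static void main(String[] args) {
        System.out.println(label(95)); // A
        System.out.println(label(72)); // C
    }
}
```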
A: I use final all the time to make Java more expression based. Java's conditions (if, else, switch) are not expression based, which I have always hated, especially if you're used to functional programming (i.e. ML, Scala or Lisp).
Thus you should try to always (IMHO) use final variables when using conditions.
Let me give you an example:
final String name;
switch(pluginType) {
case CANDIDATE_EXPORT:
name = "Candidate Stuff";
break;
case JOB_POSTING_IMPORT:
name = "Blah";
break;
default:
throw new IllegalStateException();
}
Now if I add another case statement and do not set name, the compiler will fail. The compiler will also fail if you do not break on every case in which you set the variable. This allows you to make Java very similar to Lisp's let expressions and keeps your code from being massively indented (because of lexical scoping of variables).
And as @Recurse noted (but apparently -1'd me), you can do the preceding without making String name final and still get the compiler error (which I never said you couldn't). But without final you could easily make the compiler error go away by setting name after the switch statement, which throws away the expression semantics, or worse, forget to break, which the compiler cannot flag as an error (despite what @Recurse says):
String name;
switch(pluginType) {
case CANDIDATE_EXPORT:
name = "Candidate Stuff";
//break; whoops forgot break..
//this will cause a compile error for final ;P @Recurse
case JOB_POSTING_IMPORT:
name = "Blah";
break;
}
// code, code, code
// Below is not possible with final
name = "Whoops bug";
Because of the bug setting name (besides forgetting to break, which is also a bug) I can now accidentally do this:
String name;
switch(pluginType) {
case CANDIDATE_EXPORT:
name = "Candidate Stuff";
break;
//should have handled all the cases for pluginType
}
// code, code, code
// Below is not possible with final
name = "Whoops bug";
The final variable forces a single evaluation of what name should be. Similar to how a function with a return value must always return a value (ignoring exceptions), the name switch block must resolve name, binding it to that switch block and making it easier to refactor chunks of code (i.e. Eclipse refactor: extract method).
The above in OCaml:
type plugin = CandidateExport | JobPostingImport
let p = CandidateExport
let name = match p with
| CandidateExport -> "Candidate Stuff"
| JobPostingImport -> "Blah" ;;
The match ... with ... evaluates like a function ie expression. Notice how it looks like our switch statement.
Here is an example in Scheme (Racket or Chicken):
(define name
(match b
['CandidateExport "Candidate Stuff"]
['JobPostingImport "Blah"]))
A: The development-time benefits of "final" are at least as significant as the run-time benefits. It tells future editors of the code something about your intentions.
Marking a class "final" indicates that you've not made an effort during design or implementation of the class to handle extension gracefully. If the readers can make changes to the class, and want to remove the "final" modifier, they can do so at their own risk. It's up to them to make sure the class will handle extension well.
Marking a variable "final" (and assigning it in the constructor) is useful with dependency injection. It indicates the "collaborator" nature of the variable.
Marking a method "final" is useful in abstract classes. It clearly delineates where the extension points are.
A: Well, this all depends on your style... if you LIKE seeing the final when you won't be modifying the variable, then use it. If you DON'T LIKE seeing it... then leave it out.
I personally like as little verbosity as possible, so I tend to avoid using extra keywords that aren't really necessary.
I prefer dynamic languages though, so it's probably no surprise I like to avoid verbosity.
So, I would say just pick the direction you are leaning towards and just go with it (whatever the case, try to be consistent).
As a side note, I have worked on projects that both use and don't use such a pattern, and I have seen no difference in the amount of bugs or errors... I don't think it is a pattern that will hugely improve your bug count or anything, but again it is style, and if you like expressing the intent that you won't modify it, then go ahead and use it.
A: Somewhat of a trade-off as you mention, but I prefer explicit use of something over implicit use. This will help remove some ambiguity for future maintainers of code - even if it is just you.
A: If you have inner (anonymous) classes, and the method needs to access a variable of the containing method, you need to declare that variable final.
Other than that, what you've said is right.
A: Use the final keyword for a variable if you are making that variable immutable.
Declaring variables final also aids developers in ruling out possible modification issues in highly multi-threaded environments.
With the Java 8 release, we have one more concept called the "effectively final variable". A non-final variable can behave as a final variable.
local variables referenced from a lambda expression must be final or effectively final
A variable is considered effectively final if it is not modified after initialization in its local block. This means you can now use a local variable without the final keyword inside an anonymous class or lambda expression, provided it is effectively final.
Until Java 7, you could not use a non-final local variable inside an anonymous class, but from Java 8 you can.
Have a look at this article
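A small sketch of an effectively final variable captured by a lambda (the class, method, and prefix value here are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class EffectivelyFinal {
    static List<String> tagged(List<String> items) {
        String prefix = "item-"; // effectively final: assigned once, never modified
        return items.stream()
                .map(s -> prefix + s) // legal in Java 8+ without the final keyword
                .collect(Collectors.toList());
        // adding "prefix = ...;" anywhere in this method would make
        // the lambda capture above a compile error
    }

    public static void main(String[] args) {
        System.out.println(tagged(Arrays.asList("a", "b"))); // [item-a, item-b]
    }
}
```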
A: First of all, the final keyword is used to make a variable constant. Constant means it does not change. For example:
final double CM_PER_INCH = 2.54;
You would declare the variable final because a centimeter per inch does not change.
If you try to reassign a final variable, the compiler rejects it and the variable keeps its originally declared value. For example:
final String helloworld = "Hello World";
helloworld = "A String"; // compile error: cannot assign a value to final variable
A separate case: when an inner class accesses a local variable, you may get a compile error that is something like:
local variable is accessed from inner class, must be declared final
If your variable cannot be declared final, or if you don't want to declare it final, try this:
final String[] helloworld = new String[1];
helloworld[0] = "Hello World!";
System.out.println(helloworld[0]);
helloworld[0] = "A String";
System.out.println(helloworld[0]);
This will print:
Hello World!
A String
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "193"
} |
Q: How can I stream an XPS document to a browser and embed it in a webpage? I'm looking for some suggestions on how to go about this. Any input is appreciated!
Currently, I have an ASP.NET MVC application. On the client, I have a link with an ID of an XPS document. When the user clicks the link, they are taken to a page with details about the document. In addition to this information, I wish to display the document along side of this information, in the same page.
On the server side, once I have the ID, I can get the document, serialized as a byte array, from the database. The question is, what's the best way to get that serialized document into the webpage?
I would SEVERELY prefer not having to drop that document into the file system and then munge together a file path. I would like to be able to stream the document to the browser as a content type of "application/vnd.ms-xpsdocument".
I've tried using a web control to handle this (can't write binary out), thought about creating a HTTP handler to do this (no idea where to start), and have fuddled around with a couple other ways to get the document stream to the browser.
In addition, there is also the question of how to embed the document in the web page. Do I use an EMBED tag? Or an Object? Or do I use an iframe and set the source to point to whatever delivers the document?
Again, I don't expect a solution wrapped up in a bow. I'm looking for some advice on how to go about this. And, while this question is about xps documents, it applies to any application that streams a binary file that is to be hosted in a browser (PDFs, etc).
Okay, as for displaying in the browser, one word: Silverlight. That's solved. I still have the issue of figuring out the best way to send it from the server to the browser.
Strike that. It appears Silverlight isn't advanced enough to display an XPS document just quite yet. I'm thinking about an iframe pointing to a http handler now... The iframe works. Too bad it throws the entire thing down the pipe. I suppose I could always strip off the first page and send that puppy...
Wow. No need for a HTTP handler. A custom ActionResult in MVC is all you need. How friggen awesome is that?
A: I think the simplest way would be to provide the document as a link (target="_blank") from the details page. This has several advantages:
*
*You don't need to retrieve and stream the entire doc unless the user asks for it.
*On my system at least IE is already registered as the default XPS viewer, so by giving the doc its own window you avoid the whole question of how to embed it.
*It makes it simple to provide a streaming source for the document: just use an HTTP Handler with an ID for a query string parameter.
Even if you don't like that idea, definitely go with an HTTP Handler for transmitting document. It's real simple to do: when you create a new HTTP Handler in Visual Studio it should give you a nice page with all the busy-work done already.
A: There is a Silverlight control capable of displaying XPS documents. See http://firstfloorsoftware.com/blog/announcement-document-toolkit-for-silverlight/ for more
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How can I use mysqldump to replicate views between accounts? I'm using mysqldump to replicate a database between accounts on a particular machine. Everything works just great, except when we get to our defined views. Because the dump includes a line like the following ...
/*!50013 DEFINER=`user_a`@`localhost` SQL SECURITY DEFINER */
... when loading the dump into mysql on user_b we receive an error:
ERROR 1227 (42000) at line 657: Access denied; you need the SUPER privilege for this operation
Needless to say, I don't have SUPER privilege on this mysql instance. Is there a way to convince mysqldump to dump the views in a user-agnostic way? I can't find anything in the manual on this point. Do I have to actually parse the dumpfile to replace the usernames? Or am I missing something?
A: Same problem here. I solved it this way:
mysqldump -uuser1 -ppassword1 database1 > backup.sql
sed '/^\/\*\!50013 DEFINER/d' backup.sql > backup_without_50013.sql
mysql -u user2 -ppassword2 -D database2 < backup_without_50013.sql
The interesting part is the sed command, which removes all lines beginning with /*!50013.
Heidy
A: The SQL SECURITY clauses determine which MySQL account to use when checking access privileges for the view (as you have probably figured out).
When you create a view you can define a couple of options for security for that view. You can read more here, but essentially by default access is restricted to the 'definer' of the view, i.e. the user who created it.
A: You will need to process the backup file and change the DEFINER:
DEFINER=user_a@localhost
I like to change it to :
DEFINER=CURRENT_USER
It will then be automatically associated to the account that loads it.
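A sketch of that rewrite as a shell pipeline (the user names and the exact sed pattern are illustrative; the pattern assumes MySQL's standard backtick-quoted `` DEFINER=`user`@`host` `` format). You would pipe the dump through sed on its way into the target database instead of editing the file by hand:

```shell
# Rewrite any DEFINER=`user`@`host` clause to DEFINER=CURRENT_USER.
# In practice this would sit between mysqldump and mysql, e.g.:
#   mysqldump -uuser1 db1 | sed '...' | mysql -uuser2 -D db2
printf '%s\n' '/*!50013 DEFINER=`user_a`@`localhost` SQL SECURITY DEFINER */' |
  sed 's/DEFINER=`[^`]*`@`[^`]*`/DEFINER=CURRENT_USER/'
# prints: /*!50013 DEFINER=CURRENT_USER SQL SECURITY DEFINER */
```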
A: Run mysqldump with the option "--skip-triggers"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: ubuntu trouble with SDL I'm trying to learn to use SDL for a little game I'm writing, but I have a problem. Only a single application can play sound at a given time on my system. If, for example, amarok or kaffeine is running (or even firefox playing a flash video), no other application can play sound. The only solution I've found is to run just one application at a time, but that doesn't seem like a natural solution.
Any hints?
A: I see from your tag that you're using Ubuntu Hardy Heron (8.04). There are some audio issues with pulse audio under this version, which are known to affect flash/firefox and maybe your other applications as well. See 'Known issues' on https://wiki.ubuntu.com/PulseAudio.
Workarounds of sorts do exist (see the link), but they're not very satisfactory. Ubuntu has come under a certain amount of criticism for releasing 8.04 with this issue unresolved.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Hibernate.initialize() and second-level cache Does anybody know if Hibernate's static initialize() method, which populates a proxy object, will attempt to hit the second-level cache before going to the database? My code seems to be behaving that way, and I can't seem to find anything in the documentation about this. The Java doc is (as usual) sparse.
Thanks!
A: It does. As long as second level caching is activated and that your entity is declared cacheable, then cache takes precedence when there is no explicite querying. You can follow the cache query/hit/miss by configuring the org.hibernate.cache logger.
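A hypothetical log4j.properties fragment for following those cache queries/hits/misses; the logger category is the one mentioned above, while the rest of the logging setup (appenders, root logger) is assumed to be defined elsewhere in your configuration:

```properties
# Log second-level cache activity (puts, hits, misses) at DEBUG level.
log4j.logger.org.hibernate.cache=DEBUG
```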
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Convert nmake makefile into Visual Studio 2005 project We have some old C code here that's built with nmake.
Is there an automated way to pull the Makefile into Visual Studio 2005 and create a project? Some searching on MSDN indicates VS6 could do this, but it looks like VS7/8 dropped the feature.
If necessary I can build the project from scratch using the project.mak file as a reference, but for more complex projects this may not be viable.
A: I have been heavily involved in a project at my company to do the same thing. I found that many of the old nmake based projects of our had many common settings. Because of that, I went the route of creating a custom project wizard.
It took several days of fooling around with it to get right, but has saved a lot of time. It has also allowed us to bring other devs into the effort of re-compiling in a manner that makes it easy for them, yet enforces many of the compiler settings we would like.
With the wizard, the steps of recompiling look much like:
1.) Create project.
2.) Compile and stomp out all errors and warnings.
3.) Add libraries that need to be linked in.
4.) Done.
A: Here is a link to the VS 2005 docs on the subject. It also has links to VS 2003 & 2008, which are probably the same.
Edit: I would only want to do this with old code that would not change much, especially in the way of compile and link parameters, as hand editing the make file is the only way to change how the code compiles.
A: This ought to be possible using Visual Studio's COM-interface, but I can't help you with any details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: search 25 000 words within a text I need to find occurrences of ~ 25 000 words within a text. What is the most suitable algorithm/library for this purpose?
target language is C++
A: if the corpus is so large, try to optimize it in this way:
compute a hash of each word you need to check, assigning each char a prime number and then multiplying the numbers together; store each number->word pair in a multimap (you need to allow multiple values on a single key)
while scanning the word list, compute the hash the same way for each word, then get the word(s) associated with the computed key in the hashmap. Using integers as keys, you get O(1) retrieval; this way you can find very quickly whether the processed word has some anagram (you multiplied the characters) inside the map.
remember: you stored in the multimap the set of words having that same hash, so you now need to find a match in this greatly reduced set. You need this additional check because the mere existence of the integer in the map does not equate to the existence of the word in the associated set: we are using hashing here to reduce the computational space of the problem, but this introduces collisions which need to be disambiguated by checking each identified anagram.
A: Use the Aho-Corasick algorithm. It was made for this application. You'll only need to read each letter in your search text once. I've recently implemented and used it with great results.
A: As Javier says, the simplest solution is probably a hash table.
In C++, this can be implemented using an STL set. First add the 25,000 test words to the set, and then scan through each word in the text, using set.find(current_word) to evaluate whether the word is among the 25,000 test words.
set.find is logarithmically fast, so 25,000 test words shouldn't be too large. The algorithm is obviously linear in the number of words in the text.
A: If the text you're searching is huge, then it might be worth doing some preprocessing: assemble your 25,000 words into a TRIE.
Scan to the start of the first word in the text, and start walking the TRIE as you walk through the letters of the word. If there's no transition in your TRIE, skip to the start of the next word and go back to the root of the TRIE. If you reach the end of the word, and you're at a word-ending node in the TRIE, you've found a match. Repeat for each word in the text.
If your text is merely large (rather than huge), then simply looking up each word in a hash table is probably sufficient.
A:
I once used the Boyer-Moore algorithm and it was quite fast.
Boyer-Moore isn't apt for efficiently searching many words. There is actually a very efficient algorithm for doing just that, called the Wu-Manber algorithm. I'll post a reference implementation. Notice, however, that I did this some time ago for educational purpose only. Hence, the implementation isn't really apt for direct usage and can also be made more efficient.
It also uses the stdext::hash_map from the Dinkumware STL. Substitute with std::tr1::unordered_map or an appropriate implementation.
There's an explanation of the algorithm in a lecture script from a lecture at the Freie Universität Berlin, held by Knut Reinert.
The original paper is also online (just found it again) but I don't particularly like the pseudocode presented there.
#ifndef FINDER_HPP
#define FINDER_HPP
#include <string>
namespace thru { namespace matching {
class Finder {
public:
virtual bool find() = 0;
virtual std::size_t position() const = 0;
virtual ~Finder() = 0;
protected:
static size_t code_from_chr(char c) {
return static_cast<size_t>(static_cast<unsigned char>(c));
}
};
inline Finder::~Finder() { }
} } // namespace thru::matching
#endif // !defined(FINDER_HPP)
#include <vector>
#include <hash_map>
#include "finder.hpp"
#ifndef WUMANBER_HPP
#define WUMANBER_HPP
namespace thru { namespace matching {
class WuManberFinder : public Finder {
public:
WuManberFinder(std::string const& text, std::vector<std::string> const& patterns);
bool find();
std::size_t position() const;
std::size_t pattern_index() const;
private:
template <typename K, typename V>
struct HashMap {
typedef stdext::hash_map<K, V> Type;
};
typedef HashMap<std::string, std::size_t>::Type shift_type;
typedef HashMap<std::string, std::vector<std::size_t> >::Type hash_type;
std::string const& m_text;
std::vector<std::string> const& m_patterns;
shift_type m_shift;
hash_type m_hash;
std::size_t m_pos;
std::size_t m_find_pos;
std::size_t m_find_pattern_index;
std::size_t m_lmin;
std::size_t m_lmax;
std::size_t m_B;
};
} } // namespace thru::matching
#endif // !defined(WUMANBER_HPP)
#include <cmath>
#include <iostream>
#include "wumanber.hpp"
using namespace std;
namespace thru { namespace matching {
WuManberFinder::WuManberFinder(string const& text, vector<string> const& patterns)
: m_text(text)
, m_patterns(patterns)
, m_shift()
, m_hash()
, m_pos()
, m_find_pos(0)
, m_find_pattern_index(0)
, m_lmin(m_patterns[0].size())
, m_lmax(m_patterns[0].size())
, m_B()
{
for (size_t i = 0; i < m_patterns.size(); ++i) {
if (m_patterns[i].size() < m_lmin)
m_lmin = m_patterns[i].size();
else if (m_patterns[i].size() > m_lmax)
m_lmax = m_patterns[i].size();
}
m_pos = m_lmin;
m_B = static_cast<size_t>(ceil(log(2.0 * m_lmin * m_patterns.size()) / log(256.0)));
for (size_t i = 0; i < m_patterns.size(); ++i)
m_hash[m_patterns[i].substr(m_patterns[i].size() - m_B)].push_back(i);
for (size_t i = 0; i < m_patterns.size(); ++i) {
for (size_t j = 0; j < m_patterns[i].size() - m_B + 1; ++j) {
string bgram = m_patterns[i].substr(j, m_B);
size_t pos = m_patterns[i].size() - j - m_B;
shift_type::iterator old = m_shift.find(bgram);
if (old == m_shift.end())
m_shift[bgram] = pos;
else
old->second = min(old->second, pos);
}
}
}
bool WuManberFinder::find() {
while (m_pos <= m_text.size()) {
string bgram = m_text.substr(m_pos - m_B, m_B);
shift_type::iterator i = m_shift.find(bgram);
if (i == m_shift.end())
m_pos += m_lmin - m_B + 1;
else {
if (i->second == 0) {
vector<size_t>& list = m_hash[bgram];
// Verify all patterns in list against the text.
++m_pos;
for (size_t j = 0; j < list.size(); ++j) {
string const& str = m_patterns[list[j]];
m_find_pos = m_pos - str.size() - 1;
size_t k = 0;
for (; k < str.size(); ++k)
if (str[k] != m_text[m_find_pos + k])
break;
if (k == str.size()) {
m_find_pattern_index = list[j];
return true;
}
}
}
else
m_pos += i->second;
}
}
return false;
}
size_t WuManberFinder::position() const {
return m_find_pos;
}
size_t WuManberFinder::pattern_index() const {
return m_find_pattern_index;
}
} } // namespace thru::matching
Example of usage:
vector<string> patterns;
patterns.push_back("announce");
patterns.push_back("annual");
patterns.push_back("annually");
WuManberFinder wmf("CPM_annual_conference_announce", patterns);
while (wmf.find())
cout << "Pattern \"" << patterns[wmf.pattern_index()] <<
"\" found at position " << wmf.position() << endl;
A: build a hashtable with the words, and scan through the text; for each word, look it up in the word table and store the needed info (increment a count, add to a position list, whatever).
A: A Bloom Filter may be your best bet. You initialize your filter with your search terms, then while reading your corpus you can quickly check whether each word is in the filter.
It is very space efficient, much better than simply hashing each word: with a 1% false-positive rate it should require only 9.6 bits per element. The false-positive rate is reduced by a factor of 10 for each additional 4.8 bits per element. Contrast this to plain hashing, which usually requires sizeof(int) == 32 bits per element.
I have an implementation in C# here: http://www.codeplex.com/bloomfilter
Here's an example, demonstrating its use with strings:
int capacity = 2000000; // the number of items you expect to add to the filter
Filter<string> filter = new Filter<string>(capacity);
filter.Add("Lorem");
filter.Add("Ipsum");
if (filter.Contains("Lorem"))
Console.WriteLine("Match!");
A: viceBerg says:
I once used the Boyer-Moore algorithm
and it was quite fast.
With Boyer-Moore, aren't you typically searching a block of text for a single string?
For a simple to implement solution go with the hash table approach suggested by Javier. The Bloom Filter suggested by FatCat1111 should work too... depending on the goals.
A: Maybe store your initial dictionary (the 25,000 words) in a Berkeley DB hash table on disk, which you can probably use directly from C/C++ (I know you can do it from Perl), and for each word in the text, query whether it's present in the database.
A: You may also sort the text and the word list alphabetically. When you have two sorted arrays, you can easily find the matches in linear time.
A: You want a Ternary Search Tree. A good implementation can be found here.
A: Aho-Corasick algorithm is built specifically for that purpose: searching for many words at once.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: What is the Best Way to Represent Summer Time Rules? I need to store the summer time (daylight saving time) change-over rules for different world regions in a database. I already have a way of storing regions and sub-regions (so the whole "half of Australia"/Arizona/Navaho problem is taken care of), but I'm wondering what the most efficient schema would be to accomplish this. The two options as I see them:
*
*Have a table which contains unique one row for each year and region giving the start and end times for summer time as well as the specific offset
*Have a table which stores a formula and effective date range for each region (effective range required for regions like Israel)
The advantage to the first is flexibility, since literally anything is possible. Unfortunately, it also requires (a) more space, and correspondingly (b) a lot of work to get the data input. The second is nice because one row could correspond to one region for decades, but it also requires some sort of language parser and interpreter in the application layer. Since this database will be used by several different applications written in languages without powerful text processing capabilities, I would rather avoid that route.
I would love to just use zoneinfo or something like that, but unfortunately that's not an option in this case. Likewise, I cannot normalize the dates, timezone and summer time info must be in the database to satisfy certain use cases.
Does anybody have any experience doing something similar? Likewise, does anyone have any brilliant options that I may have missed?
A: If the DST rules must be in the database, I'd probably choose to automatically update them from an external authoritative source (library, website, whatever). Manually maintaining DST rules doesn't sound like much fun.
A: One of the best sources of information about time zone rules is the Olson database, which was available from elsie.nci.nih.gov. In September 2008, the current version of the data was tzdata2008f.tar.gz, the current version of the code was tzcode2008e.tar.gz (and yes, the code was not always released when the data was). This tends to be the source of information for many other systems (including, in particular, the Oracle information). There's a mailing list available, too. As you can see, there have been six versions of the data so far in 2008; I have copies of 2005r, 2006l, 2007k lurking on my machine, so things can change rather frequently.
Nowadays (March 2017), the Olson database is available from IANA — see https://iana.org/time-zones and ftp://ftp.iana.org/tz (especially ftp://ftp.iana.org/tz/releases).
There's also the Common Locale Data Repository CLDR which has information about time zones too.
A: You're pretty much doomed to the first option. You can pre-generate dates as far ahead as you wish for countries that have "rules" regarding time changes, but some areas do not have any rule and the changes are enacted either by dictatorial fiat or by legislative vote annually (Brazil did so until this year).
This is why all OS vendors roll out timezone file changes once or twice a year -- they have to, because they cannot generate a 100% accurate file programatically.
A: The Oracle DBMS automatically handles this for you. The date is stored in an internal representation (let's imagine UTC for the sake of the argument) and is formatted according to the rules of the timezone when converted to a string.
This also solves the argument about what to do during the change-over time, i.e. when you roll the clock back half an hour there are actually 2 instances of 3:25 am on the same day.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to list of all the tables defined for the database when using active record? How do I get a list of all the tables defined for the database when using active record?
A: Call ActiveRecord::ConnectionAdapters::SchemaStatements#tables. This method is undocumented in the MySQL adapter, but is documented in the PostgreSQL adapter. SQLite/SQLite3 also has the method implemented, but undocumented.
>> ActiveRecord::Base.connection.tables
=> ["accounts", "assets", ...]
See activerecord/lib/active_record/connection_adapters/abstract/schema_statements.rb:21, as well as the implementations here:
*
*activerecord/lib/active_record/connection_adapters/mysql_adapter.rb:412
*activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb:615
*activerecord/lib/active_record/connection_adapters/sqlite_adapter.rb:176
A: Based on the two previous answers, you could do:
ActiveRecord::Base.connection.tables.each do |table|
next if table.match(/\Aschema_migrations\Z/)
klass = table.singularize.camelize.constantize
puts "#{klass.name} has #{klass.count} records"
end
to list every model that abstracts a table, with the number of records.
A: It seems like there should be a better way, but here is how I solved my problem:
Dir["app/models/*.rb"].each do |file_path|
require file_path # Make sure that the model has been loaded.
basename = File.basename(file_path, File.extname(file_path))
clazz = basename.camelize.constantize
clazz.find(:all).each do |rec|
# Important code here...
end
end
This code assumes that you are following the standard model naming conventions for classes and source code files.
A:
An update for Rails 5.2
For Rails 5.2 you can also use ApplicationRecord to get an Array with your table names. Just, as imechemi mentioned, be aware that this method will also return ar_internal_metadata and schema_migrations in that array.
ApplicationRecord.connection.tables
Keep in mind that you can remove ar_internal_metadata and schema_migrations from the array by calling:
ApplicationRecord.connection.tables - %w[ar_internal_metadata schema_migrations]
A: Don't know about active record, but here's a simple query:
select table_name
from INFORMATION_SCHEMA.Tables
where TABLE_TYPE = 'BASE TABLE'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
} |
Q: ASP.NET Image Listbox I want to display a list of images (instead of text) for the user to choose from. The control is databound (the URLs come from the database) Instead of the typical vertical scroll bar in a listbox, I want this box to be horizontal. I'm looking for an ASP.NET server control similar to this: http://www.infragistics.com/dotnet/netadvantage/aspnet/webimageviewer.aspx#Overview
I considered all the answers and finally decided to use the ComboBox from obout.com which can also display the images.
thanks
Shankar
A: if you are experienced with ajax and/or jQuery you can have a look at the jQuery SliderGallery control.
http://ui.jquery.com/repository/real-world/product-slider/
A: I'd just put them in a div (asp:panel?) that's styled to have a particular height and width and overflow horizontally.
A: Unless I'm missing something, this seems like a pretty simple problem to me. Just create a scrollable DIV that will host all the images.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to draw delphi group boxes to have transparent backgrounds I'm trying to get something very subtle to work, it looks pretty awful right now. I'm trying to paint the background of a TGroupBox which I have overloaded the paint function of so that the corners are show through to their parent object. I've got a bunch of nested group boxes that look very decent without XPThemes.
Is there a way to paint part of a background transparent at runtime. I'm programming the form generator, not using Delphi design view.
A: When I had a situation like that, I worked with TGroupBox initially but then decided to use a TPaintBox (called pb in this sample) and simulate the graphical part of the TGroupBox instead.
procedure TfraNewRTMDisplay.pbPaint(Sender: TObject);
const
icMarginPixels=0;
icCornerElipsisDiameterPixels=10;
begin
pb.Canvas.Pen.Color:=clDkGray;
pb.Canvas.Pen.Width:=1;
pb.Canvas.Pen.Style:=psSolid;
pb.Canvas.Brush.Color:=m_iDisplayColor;
pb.Canvas.Brush.Style:=bsSolid;
pb.Canvas.RoundRect(icMarginPixels,
icMarginPixels,
pb.Width-icMarginPixels*2,
pb.Height-icMarginPixels*2,
icCornerElipsisDiameterPixels,
icCornerElipsisDiameterPixels);
end;
A: I'm trying to duplicate this problem with the following steps:
1 - Set theme to Windows XP default
2 - Drop a TGroupBox on an empty form (align = alNone)
3 - Drop two TGroupBoxes inside the first one, with align = alBottom and align = alClient
But visually it looks just fine for me.
Can you provide some more info on exactly how you've designed the form? Some code pasted from the .DFM would be fine.
Here's the relevant part of my DFM:
object GroupBox1: TGroupBox
Left = 64
Top = 56
Width = 481
Height = 361
Margins.Left = 10
Caption = 'GroupBox1'
ParentBackground = False
TabOrder = 0
object GroupBox2: TGroupBox
Left = 2
Top = 254
Width = 477
Height = 105
Align = alBottom
Caption = 'GroupBox2'
TabOrder = 0
end
object GroupBox3: TGroupBox
Left = 2
Top = 15
Width = 477
Height = 239
Align = alClient
Caption = 'GroupBox3'
TabOrder = 1
end
end
A: Ha, that was lame, I just needed to not set ParentBackground := false in my constructor and paint the interior of the group box when appropriate.
A:
Ha, that was lame, I just needed to not set ParentBackground := false in my constructor and paint the interior of the group box when appropriate.
Maybe there's something I don't know, but in my recent experience it's not as simple as it sounds because of themes and knowing exactly what area to paint. Even TCanvas.FloodFill doesn't work reliably for this, probably because at times the OS doesn't need to repaint everything.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: .NET Framework platform support tradition From the first days the .NET framework came out there was a minimum OS support:
*
*.NET 1.0 - Windows NT or higher (Windows 98/ME are also supported)
*.NET 2.0 - Windows 2000 or higher (Windows 98/ME are also supported)
*.NET 3.0 - Windows XP or higher
*.NET 3.5 - Windows XP or higher
This minimum OS support made possible to ignore platform-specific issues by using just the framework. Newer features were ignored on older platforms. For example, "Tile View" style in the ListView control or form transparency on the Windows 98/ME.
However, in the System.Core.dll (part of .NET 3.5) there are some classes that use a new Vista API and throw PlatformNotSupportedException if used on an XP machine. For example, take a look on the new EventLogReader class (in the System.Diagnostics.Eventing.Reader namespace).
Does Microsoft break with the tradition of platform support?
To be fair .NET 2.0 had already classes that supported NTFS security features, which are not available on the Windows 98/ME.
A: A minimum OS support means that the product was tested on particular platform and above. It does not guarantee that all the features (or classes/APIs in the case of a dev platform like .Net) will work on all the supported platforms.
There are Vista specific native APIs which do not exist in XP. .Net 3.5 adds support for managed world for these APIs, but it does not attempt to provide managed implementation for non-existing native APIs on XP.
A: I can tell you with experience that .NET has never really worked on Windows 98 or ME. I remember back in 2002 when version 1.0 was new, some colleagues and I discovered that anything more complicated than a WinForm with some buttons and dead simple functionality would flat out not run on Windows 9x, despite Microsoft's claims to the contrary. Given that we were a year into XP and we could reasonably expect Windows 2000 at least on the machines we were dealing with at the time, it wasn't a big deal.
But basically Microsoft neglecting the older versions of Windows with .NET is nothing new. At least they're throwing PlatformNotSupportedException exceptions these days.
A: Microsoft is obviously pushing towards a Vista-centric development environment. It's not just because of the obvious 'we need to make money' reason, but also because Vista is where the cool new APIs are sprouting.
To be fair, it's always been like that with the Windows API, and it will probably be like that with .NET. There's no such "tradition" as you describe, but rather a tradition of having APIs that might not work notify you that you are on an operating system that doesn't support them. They always try to be backwards compatible, not forwards limiting, if you understand :)
A: Platform support has always varied by type. Many types are not supported by the Compact Framework, for example.
A: I can't speak for all of it, but I know that for example the Event Log system in Vista was totally overhauled and bears almost no resemblance whatsoever to the event log system in Windows XP. It's probably just literally incompatible.
A: The problem here is that System.Diagnostics.Eventing is used in AppFabric (ie Windows Azure), so you're basically going to fight an uphill battle getting Windows XP to talk to the Cloud.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: User interface for generating Phing build files? I want to use Phing as a build tool and I was wondering if there is any (web) interface for generating build files.
Any recommendations on alternative methods to writing those by hand would be appreciated as well.
A: There are a couple of GUI editors for ANT build files, which is what Phing is based on.
A quick google found this.
http://antsnest.sourceforge.net/
IDEs such as Eclipse also have plugins for editing Ant build files, and these should be fairly workable with Phing.
None of these are web based though, sorry.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Visio VBA function to see if there's a shape in front of/behind a shape Is there a way in Visio VBA to see if there is a shape in front of or behind a shape in Visio?
I imagine I could write something that checks the bounding box of each shape in a page to see if it occupies the same space as my shape.
I'd rather use something built-in since checking each shape could take a long time as a drawing gets more and more shapes.
A: The Shape.SpatialRelation property will tell you if two shapes touch. The Shape.Index property will tell you which is in front or behind in the z-order.
Here is a simple example:
Public Sub DoShapesIntersect(ByRef shape1 As Visio.Shape, ByRef shape2 As Visio.Shape)
'// do they touch?
If (shape1.SpatialRelation(shape2, 0, 0) <> 0) Then
'// they touch, which one is in front?
If (shape1.Index > shape2.Index) Then
Debug.Print shape1.Name + " is in front of " + shape2.Name
Else
Debug.Print shape1.Name + " is behind " + shape2.Name
End If
Else
Debug.Print "shape1 and shape2 do not touch"
End If
End Sub
Read more here:
Shape.SpatialRelation Property on MSDN
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I get Team Build to run MbUnit tests? I am having trouble getting Team Build to execute my MbUnit unit tests. I have tried to edit TFSBuild.proj and added the following parts:
<Project ...>
<UsingTask TaskName="MbUnit.MSBuild.Tasks.MbUnit" AssemblyFile="path_to_MbUnit.MSBuild.Tasks.dll" />
...
...
<ItemGroup>
<TestAssemblies Include="$(OutDir)\Project1.dll" />
<TestAssemblies Include="$(OutDir)\Project2.dll" />
</ItemGroup>
<Target Name="Tests">
<MbUnit
Assemblies="@(TestAssemblies)"
ReportTypes="html"
ReportFileNameFormat="buildreport{0}{1}"
ReportOutputDirectory="." />
</Target>
...
</Project>
But I have yet to get the tests to run.
A: The above suggestion didn't help me a lot, but I found some documentation for Team Build and adjusted my build script to override the AfterCompile target:
(EDIT: Now that I have a better understanding of Team Build, I have added some more to the test runner. It will now update the Build Explorer/Build monitor with build steps with details about the test run)
<Project ...>
<UsingTask TaskName="MbUnit.MSBuild.Tasks.MbUnit" AssemblyFile="path_to_MbUnit.MSBuild.Tasks.dll" />
...
...
<Target Name="AfterCompile">
<ItemGroup>
<TestAssemblies Include="$(OutDir)\Project1.dll" />
<TestAssemblies Include="$(OutDir)\Project2.dll" />
</ItemGroup>
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Message="Running tests (cross your fingers)...">
<Output TaskParameter="Id" PropertyName="StepId" />
</BuildStep>
<MbUnit
Assemblies="@(TestAssemblies)"
ReportTypes="html"
ReportFileNameFormat="buildreport{0}{1}"
ReportOutputDirectory="." />
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Id="$(StepId)"
Message="Yay! All tests succeded!"
Status="Succeeded" />
<OnError ExecuteTargets="MarkBuildStepAsFailed" />
</Target>
<Target Name="MarkBuildStepAsFailed">
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Id="$(StepId)"
Message="Oh no! Some tests have failed. See test report in drop folder for details."
Status="Failed" />
</Target>
...
</Project>
A: You don't need to call MSBuild again to have your ItemGroup populated; there is an easier way. Re-calling MSBuild has its downsides, like having to pass all the Team Build parameters on to make the Team Build tasks work. We use the CreateItem task from MSBuild to dynamically generate an ItemGroup with all our test DLLs:
<Target Name="AfterCompile">
<CreateItem Include="$(OutDir)\*.Test.dll">
<Output
TaskParameter="Include"
ItemName="TestBinaries"/>
</CreateItem>
</Target><!--Test run happens in a later target in our case, we use MSTest -->
A: The way ItemGroups in MSBuild work is that they are evaluated at the very start of the MSBuild script, before any targets are run. Therefore, if the assemblies don't exist yet (which they will not, because they have not been built yet), the ItemGroups will not find any files.
The usual pattern in MSBuild to work around this is to re-call MSBuild again at this point so that when the item groups get evaluated in the inner MSBuild execution, the assemblies will exist.
For example, something like:
<PropertyGroup>
<TestDependsOn>
$(TestDependsOn);
CallMbUnitTests;
</TestDependsOn>
</PropertyGroup>
<Target Name="CallMbUnitTests">
<MSBuild Projects="$(MSBuildProjectFile)"
Properties="BuildAgentName=$(BuildAgentName);BuildAgentUri=$(BuildAgentUri);BuildDefinitionName=$(BuildDefinitionName);BuildDefinitionUri=$(BuildDefinitionUri);
BuildDirectory=$(BuildDirectory);BuildNumber=$(BuildNumber);CompilationStatus=$(CompilationStatus);CompilationSuccess=$(CompilationSuccess);
ConfigurationFolderUri=$(ConfigurationFolderUri);DropLocation=$(DropLocation);
FullLabelName=$(FullLabelName);LastChangedBy=$(LastChangedBy);LastChangedOn=$(LastChangedOn);LogLocation=$(LogLocation);
MachineName=$(MachineName);MaxProcesses=$(MaxProcesses);Port=$(Port);Quality=$(Quality);Reason=$(Reason);RequestedBy=$(RequestedBy);RequestedFor=$(RequestedFor);
SourceGetVersion=$(SourceGetVersion);StartTime=$(StartTime);Status=$(Status);TeamProject=$(TeamProject);TestStatus=$(TestStatus);
TestSuccess=$(TestSuccess);WorkspaceName=$(WorkspaceName);WorkspaceOwner=$(WorkspaceOwner);
SolutionRoot=$(SolutionRoot);BinariesRoot=$(BinariesRoot);TestResultsRoot=$(TestResultsRoot)"
Targets="RunMbUnitTests"/>
</Target>
<ItemGroup>
<TestAssemblies Include="$(OutDir)\Project1.dll" />
<TestAssemblies Include="$(OutDir)\Project2.dll" />
</ItemGroup>
<Target Name="RunMbUnitTests">
<MbUnit
Assemblies="@(TestAssemblies)"
ReportTypes="html"
ReportFileNameFormat="buildreport{0}{1}"
ReportOutputDirectory="." />
</Target>
Hope that helps, good luck.
Martin.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Individual DirectShow merit on audio renderers My computer has two external audio cards and one on the motherboard, running Windows Vista. Vista sees two entities for the same sound card: a digital and an analog output.
When I try to play a video file with digital audio, say a DVD, it chooses 'Default DirectSound', whereas I want it to use 'Digital Output Auzentech'. So I thought, easy enough: I'll just change the merit for 'Digital Output Auzentech' to a value higher than the others, so it would be picked when an application tries to build a play graph.
The problem I have is that all the audio entities have the same ID, so by changing 'Digital Output Auzentech', 'Default DirectSound' gets the same merit. I believe I have searched Google dry for information about merit, DirectShow and audio renderers, but still haven't found an answer.
Maybe some of the bright minds who hang out here could help me.
(Tools I have used are GSpot and GraphEdit on Vista Ultimate 32-bit.)
A: Why would you want to tinker with the "merit" of DirectShow filters? Won't that be a bit of overkill? Digital audio or not in your video file, you can use any output device here. So you can even stick with 'Digital Output Auzentech' for all the audio on your system, not just for DVDs.
If you just want to select 'Digital Output Auzentech' for particular apps, then I believe most decent apps let you do that. If you want an override for all your apps (not the app's setting but Windows' default setting), then head to "Sound and Audio Devices" under Control Panel (this is for XP; Vista has something similar, but I can't recall its name), and under the Audio tab change your default sound playback device to 'Digital Output Auzentech'. That's it. Then all your audio will be output from it.
Personally I use Media Player Classic; if I have an AC3 or AAC track in the movie and prefer to enjoy full digital output, I can easily switch from the sound filter settings.
A: Thanks for the reply, faulty.
It's kinda because I'm lazy. My audio card is attached by an optical cable to a hi-fi setup, and when I'm watching DVDs or movie files with DTS or DD audio tracks, I want it to auto-switch to the digital source so I can enable SPDIF. The new protected media path and other stuff in Vista have made this harder, because in the ol' days (WinXP) it would do DTS Connect and SPDIF the "right" way without my having to change the default audio output device manually. And another thing is I use Windows Media Center and Windows Media Player, because I find them the best applications for media playback.
With the solution you propose, setting the digital output as default, my PC would output PCM in games, music, etc., whereas when there is a digital audio track the SPDIF capability would work. But I don't want PCM stereo when I play games; I want 5.1 sound, which many games offer DTS-encoded, so I won't need to switch the audio channel either on my amp or in Windows settings. I know many others with the same problem when they switched to Vista, and some have more or less accepted the solution of switching manually, but I refuse :)
And I got the idea that if I could change the merit settings for something like PCM audio renderers, so that the digital output is favored rather than the default DirectSound output, my auto-switching problem would be solved, since for PCM audio my renderer of choice would be picked.
And with my solution, all DirectShow applications (say iTunes, and going out on a limb here, PowerDVD, MPC, etc.) would use my settings if they haven't implemented or overridden the merit system.
I haven't found any other likely solutions besides setting the merit for audio renderers, which unfortunately I can't get to work properly. Any other suggestions are welcome and I will be willing to try them; however, it is not likely I'm going to change my default usage of MCE or WMP. I have tried other players and I don't like most of them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: JQuery Datepicker returned Date object type What's the object type returned by Datepicker?
Supposing I have the following:
$("#txtbox").datepicker({
onClose: function(date){
//something
}
});
What is date? I'm interested in reading the date object from another Datepicker for comparison, something like:
function(date){
oDate = $("#oDP").datepicker("getDate");
if(oDate == date)
//do one
else if(oDate > date)
//do two
}
However, this kind of comparison is not working. I'm guessing there is some sort of comparison method for Date object, but I don't know. I also tried comparing the String representation of the dates like oDate.toString() > date.toString() to no avail.
A: I just downloaded the source from here and noticed (e.g. at line 600) that the author is using .getTime() to compare dates; have you tried that?
if (oDate.getTime() > date.getTime()) {
...
}
Also this is tangential but you mention you tried oDate.toString() while I noticed in the examples the author is using .asString()
A: A Date object is returned by the datePicker.
Your method for comparing dates is valid - from W3schools:
var myDate=new Date();
myDate.setFullYear(2010,0,14);
var today = new Date();
if (myDate>today)
{
alert("Today is before 14th January 2010");
}
Are you getting a value in oDate from this line?
oDate = $("#oDP").datepicker("getDate");
Your comparison method seems valid - so I'm wondering if datePicker is successfully pulling a value from #oDP?
Edit - oDate confirmed to contain a valid date. This may be a very silly question, but have you confirmed that date contains a valid date? I'm wondering if there may be some issue with naming it the same as the keyword Date (Javascript keywords and reserved words). Perhaps try renaming it to tDate or the like in your function to be doubly-clear this isn't causing your problems.
A: Use this to compare dates, it works:
$("#datepickerfrom").datepicker("getDate") < $("#datepickerto").datepicker("getDate")
A:
What is date?
it's the $("#txtbox") object
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Persistent storage of encrypted data using .Net I need to store encrypted data (a few small strings) between application runs. I do not want the user to provide a passphrase every time (s)he launches the application. In the end it comes down to securely storing the encryption key(s).
I was looking into RSACryptoServiceProvider and using PersistentKeyInCsp, but I'm not sure how it works. Is the key container persistent between application runs or machine restarts? If yes, is it user specific, or machine specific. I.e. if I store my encrypted data in user's roaming profile, can I decrypt the data if the user logs on a different machine?
If the above does not work, what are my options (I need to deal with roaming profiles).
A: The Data Protection API (DPAPI) does exactly what you want. It provides symmetric encryption of arbitrary data, using the credentials of the machine or (better) the user, as the encryption key. You don't have to worry about managing the keys; Windows takes care of that for you. If the user changes his password, Windows will re-encrypt the data using the user's new password.
DPAPI is exposed in .NET with the System.Security.Cryptography.ProtectedData class:
byte[] plaintextBytes = GetDataToProtect();
byte[] encodedBytes = ProtectedData.Protect(plaintextBytes, null, DataProtectionScope.CurrentUser);
The second parameter of the Protect method is an optional entropy byte array, which can be used as an additional application-specific "secret".
To decrypt, use the ProtectedData.Unprotect call:
byte[] encodedBytes = GetDataToUnprotect();
byte[] plaintextBytes = ProtectedData.Unprotect(encodedBytes, null, DataProtectionScope.CurrentUser);
DPAPI works correctly with roaming profiles (as described here), though you'll need to store the encrypted data in a place (network share, IsolatedStorage with IsolatedStorageScope.Roaming, etc.) that your various machines can access.
See the ProtectedData class in MSDN for more information. There's a DPAPI white paper here, with more information than you'd ever want.
A: I'd like to add to the DPAPI approach.
Although I haven't implemented the user-store approach myself, there is Microsoft documentation for a user-store approach which encrypts and decrypts data for a specific user.
I used the DPAPI using machine store. I'll describe it in case it fits with what you're looking to do. I used a Windows service to load a Windows user profile and that user's password is used to encrypt data.
As a side note, DPAPI uses Triple-DES which may be slightly weaker (than AES), but then I'm not sure what type of protection you're looking for.
Windows Data Protection
http://msdn.microsoft.com/en-us/library/ms995355.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Getting Excel to refresh data on sheet from within VBA How do you get spreadsheet data in Excel to recalculate itself from within VBA, without the kluge of just changing a cell value?
A: Sometimes Excel will hiccup and needs a kick-start to reapply an equation. This happens in some cases when you are using custom formulas.
Make sure that you have the following script
ActiveSheet.EnableCalculation = True
Reapply the equation of choice.
Cells(RowA,ColB).Formula = Cells(RowA,ColB).Formula
This can then be looped as needed.
A: The following lines will do the trick:
ActiveSheet.EnableCalculation = False
ActiveSheet.EnableCalculation = True
Edit: The .Calculate() method will not work for all functions. I tested it on a sheet with add-in array functions. The production sheet I'm using is complex enough that I don't want to test the .CalculateFull() method, but it may work.
A: You might also try
Application.CalculateFull
or
Application.CalculateFullRebuild
if you don't mind rebuilding all open workbooks, rather than just the active worksheet. (CalculateFullRebuild rebuilds dependencies as well.)
A: I had an issue with turning off a background image (a DRAFT watermark) in VBA. My change wasn't showing up (which was performed with the Sheets(1).PageSetup.CenterHeader = "" method) - so I needed a way to refresh. The ActiveSheet.EnableCalculation approach partly did the trick, but didn't cover unused cells.
In the end I found what I needed with a one liner that made the image vanish when it was no longer set :-
Application.ScreenUpdating = True
A: This should do the trick...
'recalculate all open workbooks
Application.Calculate
'recalculate a specific worksheet
Worksheets(1).Calculate
' recalculate a specific range
Worksheets(1).Columns(1).Calculate
A: After a data connection update, some UDFs were not executing. Using a subroutine, I was trying to recalculate a single column with:
Sheets("mysheet").Columns("D").Calculate
But the above statement had no effect. None of the above solutions helped, except kambeek's suggestion to replace the formulas, which worked and was fast if manual recalc was turned on during the update. The code below solved my problem; even if it is not exactly responsive to the OP's "kluge" comment, it provides a fast/reliable way to force recalculation of user-specified cells.
Application.Calculation = xlManual
DoEvents
For Each mycell In Sheets("mysheet").Range("D9:D750").Cells
mycell.Formula = mycell.Formula
Next
DoEvents
Application.Calculation = xlAutomatic
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: Set up an HTTP proxy to insert a header I need to test some HTTP interaction with a client I'd rather not modify. What I need to test is the behavior of the server when the client's requests include a certain, static header.
I'm thinking the easiest way to run this test is to set up an HTTP proxy that inserts the header on every request. What would be the simplest way to set this up?
A: I do something like this in my development environment by configuring Apache on port 80 as a proxy for my application server on port 8080, with the following Apache config:
NameVirtualHost *
<VirtualHost *>
<Proxy http://127.0.0.1:8080/*>
Allow from all
</Proxy>
<LocationMatch "/myapp">
ProxyPass http://127.0.0.1:8080/myapp
ProxyPassReverse http://127.0.0.1:8080/myapp
Header add myheader "myvalue"
RequestHeader set myheader "myvalue"
</LocationMatch>
</VirtualHost>
See LocationMatch and RequestHeader documentation.
This adds the header myheader: myvalue to requests going to the application server.
A: I'd try tinyproxy. In fact, the very best would be to embed a scripting language there... it sounds like a perfect job for Lua, especially after seeing how well it worked for mysql-proxy.
A: I have had co-workers that have used Burp ("an interactive HTTP/S proxy server for attacking and testing web applications") for this. You also may be able to use Fiddler ("a HTTP Debugging Proxy").
A: You can also install Fiddler (http://www.fiddler2.com/fiddler2/) which is very easy to install (easier than Apache for example).
After launching it, it will register itself as system proxy. Then open the "Rules" menu, and choose "Customize Rules..." to open a JScript file which allow you to customize requests.
To add a custom header, just add a line in the OnBeforeRequest function:
oSession.oRequest.headers.Add("MyHeader", "MyValue");
A: Use http://www.proxomitron.info and set up the header you want, etc.
A: Rather than using a proxy, I'm using the Firefox plugin "Modify Headers" to insert headers (in my case, to fake a login using the Single Sign On so I can test as different people).
A: If you have ruby on your system, how about a small Ruby Proxy using Sinatra (make sure to install the Sinatra Gem). This should be easier than setting up apache. The code can be found here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
} |
Q: How to avoid .pyc files? Can I run the python interpreter without generating the compiled .pyc files?
A: Solution for ipython 6.2.1 using python 3.5.2 (Tested on Ubuntu 16.04 and Windows 10):
IPython doesn't respect %env PYTHONDONTWRITEBYTECODE=1 if set in the IPython interpreter or during startup in ~/.ipython/profile_default/startup/00-startup.ipy.
Instead, use the following in your ~/.ipython/profile_default/startup/00-startup.py:
import sys
sys.dont_write_bytecode=True
A: I have several test cases in a test suite and before I was running the test suite in the Mac Terminal like this:
python LoginSuite.py
Running the command this way, my directory was being populated with .pyc files. I tried the method below and it solved the issue:
python -B LoginSuite.py
This method works if you are importing test cases into the test suite and running the suite on the command line.
A: From "What’s New in Python 2.6 - Interpreter Changes":
Python can now be prevented from
writing .pyc or .pyo files by
supplying the -B switch to the Python
interpreter, or by setting the
PYTHONDONTWRITEBYTECODE environment
variable before running the
interpreter. This setting is available
to Python programs as the
sys.dont_write_bytecode variable, and
Python code can change the value to
modify the interpreter’s behaviour.
So run your program as python -B prog.py.
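Whichever mechanism you use to set the option, you can confirm from inside the interpreter that it took effect. A minimal sketch:

```python
import sys

# True when Python was started with -B or with PYTHONDONTWRITEBYTECODE set;
# it can also be flipped at runtime (affects modules imported afterwards).
print(sys.dont_write_bytecode)

# The read-only sys.flags struct mirrors the command-line/environment setting.
print(sys.flags.dont_write_bytecode)  # 0 or 1
```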
Update 2010-11-27: Python 3.2 addresses the issue of cluttering source folders with .pyc files by introducing a special __pycache__ subfolder, see What's New in Python 3.2 - PYC Repository Directories.
NOTE: The default behavior is to generate the bytecode and is done for "performance" reasons (for more information see here for python2 and see here for python3).
*
*The generation of bytecode .pyc files is a form of caching (i.e. greatly improves average performance).
*Configuring python with PYTHONDONTWRITEBYTECODE=1 can be bad for python performance (for python2 see https://www.python.org/dev/peps/pep-0304/ and for python3 see https://www.python.org/dev/peps/pep-3147/ ).
*If you are interested in the performance impact please see here https://github.com/python/cpython .
A: There actually IS a way to do it in Python 2.3+, but it's a bit esoteric. I don't know if you realize this, but you can do the following:
$ unzip -l /tmp/example.zip
Archive: /tmp/example.zip
Length Date Time Name
-------- ---- ---- ----
8467 11-26-02 22:30 jwzthreading.py
-------- -------
8467 1 file
$ ./python
Python 2.3 (#1, Aug 1 2003, 19:54:32)
>>> import sys
>>> sys.path.insert(0, '/tmp/example.zip') # Add .zip file to front of path
>>> import jwzthreading
>>> jwzthreading.__file__
'/tmp/example.zip/jwzthreading.py'
According to the zipimport library:
Any files may be present in the ZIP archive, but only files .py and .py[co] are available for import. ZIP import of dynamic modules (.pyd, .so) is disallowed. Note that if an archive only contains .py files, Python will not attempt to modify the archive by adding the corresponding .pyc or .pyo file, meaning that if a ZIP archive doesn't contain .pyc files, importing may be rather slow.
Thus, all you have to do is zip the files up, add the zipfile to your sys.path and then import them.
If you're building this for UNIX, you might also consider packaging your script using this recipe: unix zip executable, but note that you might have to tweak this if you plan on using stdin or reading anything from sys.args (it CAN be done without too much trouble).
In my experience performance doesn't suffer too much because of this, but you should think twice before importing any very large modules this way.
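As a self-contained sketch of the zip-import trick described above (the module name hello and its contents are made up for illustration):

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny zip archive containing a one-line module, then import from it.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "example.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("hello.py", "GREETING = 'hi'\n")

sys.path.insert(0, zip_path)  # zipimport handles the rest
import hello

print(hello.GREETING)  # hi
print(hello.__file__)  # points inside example.zip; no .pyc is written back
```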
A: You could make the directories that your modules exist in read-only for the user that the Python interpreter is running as.
I don't think there's a more elegant option. PEP 304 appears to have been an attempt to introduce a simple option for this, but it appears to have been abandoned.
I imagine there's probably some other problem you're trying to solve, for which disabling .py[co] would appear to be a workaround, but it'll probably be better to attack whatever this original problem is instead.
A: Starting with Python 3.8 you can use the environment variable PYTHONPYCACHEPREFIX to define a cache directory for Python.
From the Python docs:
If this is set, Python will write .pyc files in a mirror directory tree at this path, instead of in pycache directories within the source tree. This is equivalent to specifying the -X pycache_prefix=PATH option.
Example
If you add the following line to your ./profile in Linux:
export PYTHONPYCACHEPREFIX="$HOME/.cache/cpython/"
Python won't create the annoying __pycache__ directories in your project directory, instead it will put all of them under ~/.cache/cpython/
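You can verify the setting from inside Python 3.8+ via sys.pycache_prefix, which is None when neither the environment variable nor the -X option was supplied. A minimal sketch:

```python
import sys

# None unless PYTHONPYCACHEPREFIX or -X pycache_prefix=PATH was given (3.8+).
print(sys.pycache_prefix)
```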
A: import sys
sys.dont_write_bytecode = True
A: In 2.5, there's no way to suppress it, other than measures like not giving users write access to the directory.
In python 2.6 and 3.0 however, there may be a setting in the sys module called "dont_write_bytecode" that can be set to suppress this. This can also be set by passing the "-B" option, or setting the environment variable "PYTHONDONTWRITEBYTECODE"
A: You can set sys.dont_write_bytecode = True in your source, but that would have to be in the first python file loaded. If you execute python somefile.py then you will not get somefile.pyc.
When you install a utility using setup.py and entry_points= you will have set sys.dont_write_bytecode in the startup script. So you cannot rely on the "default" startup script generated by setuptools.
If you start Python with python file as argument yourself you can specify -B:
python -B somefile.py
somefile.pyc would not be generated anyway, but no .pyc files for other files imported too.
If you have some utility myutil and you cannot change that, it will not pass -B to the python interpreter. Just start it by setting the environment variable PYTHONDONTWRITEBYTECODE:
PYTHONDONTWRITEBYTECODE=x myutil
A: As far as I know python will compile all modules you "import". However python will NOT compile a python script run using: "python script.py" (it will however compile any modules that the script imports).
The real question is why you don't want Python to compile the modules. You could probably automate a way of cleaning these up if they are getting in the way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "292"
} |
Q: Help someone new to C# variables I am trying to save data to a database on a button push, but the variables seem to be private by the nature of where they are defined. I have tried to move where they are defined, but this seems to produce other errors.
Given a fix, why was it fixed that way?
The code follows.
namespace enable
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
OleDbConnection favouriteConnection = new System.Data.OleDb.OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\\\192.168.123.5\\Share\\Matt\\BugTypes.mdb");
string strSQL = "SELECT CategoryName, Show " + "FROM [Categories] WHERE Show = 'Yes' " + "ORDER BY CategoryName";
OleDbDataAdapter adapter = new OleDbDataAdapter(strSQL, favouriteConnection);
OleDbCommandBuilder cBuilder = new OleDbCommandBuilder(adapter);
DataTable dTable = new DataTable();
adapter.Fill(dTable);
BindingSource bSource = new BindingSource();
bSource.DataSource = dTable;
dataGridView1.DataSource = bSource;
adapter.Update(dTable);
}
private void button1_Click(object sender, EventArgs e)
{
adapter.Update(dTable);//this is the button that needs to do the save, but can't see the variables.
}
}
}
A: namespace enable
{
public partial class Form1 : Form
{
OleDbDataAdapter adapter;
DataTable dTable = new DataTable();
public Form1()
{
InitializeComponent();
OleDbConnection favouriteConnection = new System.Data.OleDb.OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\\\192.168.123.5\\Share\\Matt\\BugTypes.mdb");
string strSQL = "SELECT CategoryName, Show " + "FROM [Categories] WHERE Show = 'Yes' " + "ORDER BY CategoryName";
adapter = new OleDbDataAdapter(strSQL, favouriteConnection);
OleDbCommandBuilder cBuilder = new OleDbCommandBuilder(adapter);
adapter.Fill(dTable);
BindingSource bSource = new BindingSource();
bSource.DataSource = dTable;
dataGridView1.DataSource = bSource;
adapter.Update(dTable);
}
private void button1_Click(object sender, EventArgs e)
{
adapter.Update(dTable);//this is the button that needs to do the save, but can't see the variables.
}
}
}
You need to change the DataAdapter and DataTable scope so they are accessible to the button click event method. If you declare them in the constructor they cannot be accessed from other methods; you need to declare them as object fields so they are "global" to your object instance.
You need to work out what scope each variable needs: local scope (declared inside a method) or class scope (declared outside any method).
A: You're declaring dTable and adapter in the constructor, so it goes out of scope as soon as the constructor is completed.
You want to move the variable declarations out into the main class, like:
public partial class Form1 : Form
{
private DataTable dTable;
private OleDbDataAdapter adapter;
public Form1()
{
... your setup here ...
dTable = new DataTable();
... etc ...
}
}
A: adapter is scoped to the constructor of Form1, not to the class itself.
Move adapter and dtable to be private members of the class.
A: Update: [sigh] I forgot to move dTable to the class scope as well...
namespace enable
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
OleDbConnection favouriteConnection = new System.Data.OleDb.OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\\\192.168.123.5\\Share\\Matt\\BugTypes.mdb");
string strSQL = "SELECT CategoryName, Show " + "FROM [Categories] WHERE Show = 'Yes' " + "ORDER BY CategoryName";
m_Adapter = new OleDbDataAdapter(strSQL, favouriteConnection);
OleDbCommandBuilder cBuilder = new OleDbCommandBuilder(m_Adapter);
dTable = new DataTable();
m_Adapter.Fill(dTable);
BindingSource bSource = new BindingSource();
bSource.DataSource = dTable;
dataGridView1.DataSource = bSource;
m_Adapter.Update(dTable);
}
private void button1_Click(object sender, EventArgs e)
{
m_Adapter.Update(dTable);//this is the button that needs to do the save, but can't see the variables.
}
OleDbDataAdapter m_Adapter;
DataTable dTable;
}
}
A: adapter and dTable are declared within your constructor. They should both be 'moved out' of the constructor to get class-wide scope, just as Franci did with the adapter.
There might be other errors but it is hard to guess when you haven't posted your compiler error.
/johan/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SharpZipLib - ZipException "End of extra data" - Why am I getting this exception? I'm using SharpZipLib version 0.85.5 to unzip files. My code has been working nicely for a couple of months until I found a ZIP file that it doesn't like.
ICSharpCode.SharpZipLib.Zip.ZipException: End of extra data
at ICSharpCode.SharpZipLib.Zip.ZipExtraData.ReadCheck(Int32 length) in C:\C#\SharpZLib\Zip\ZipExtraData.cs:line 933
at ICSharpCode.SharpZipLib.Zip.ZipExtraData.Skip(Int32 amount) in C:\C#\SharpZLib\Zip\ZipExtraData.cs:line 921
at ICSharpCode.SharpZipLib.Zip.ZipEntry.ProcessExtraData(Boolean localHeader) in C:\C#\SharpZLib\Zip\ZipEntry.cs:line 925
at ICSharpCode.SharpZipLib.Zip.ZipInputStream.GetNextEntry() in C:\C#\SharpZLib\Zip\ZipInputStream.cs:line 269
at Constellation.Utils.Tools.UnzipFile(String sourcePath, String targetDirectory) in C:\C#\Constellation2\Utils\Tools.cs:line 90
--- End of inner exception stack trace ---
Here is my unzip method:
public static void UnzipFile(string sourcePath, string targetDirectory)
{
try
{
using (ZipInputStream s = new ZipInputStream(File.OpenRead(sourcePath)))
{
ZipEntry theEntry;
while ((theEntry = s.GetNextEntry()) != null)
{
//string directoryName = Path.GetDirectoryName(theEntry.Name);
string fileName = Path.GetFileName(theEntry.Name);
if (targetDirectory.Length > 0)
{
Directory.CreateDirectory(targetDirectory);
}
if (fileName != String.Empty)
{
using (FileStream streamWriter = File.Create(targetDirectory + fileName))
{
int size = 2048;
byte[] data = new byte[2048];
while (true)
{
size = s.Read(data, 0, data.Length);
if (size > 0)
{
streamWriter.Write(data, 0, size);
}
else
{
break;
}
}
}
}
}
}
}
catch (Exception ex)
{
throw new Exception("Error unzipping file \"" + sourcePath + "\"", ex);
}
}
The file unzips fine using XP's built-in ZIP support, WinZIP, and 7-Zip. The exception is being thrown at s.GetNextEntry().
A: It's possible that the other zip tools are ignoring extra data which is corrupt - or it's equally possible that there's a bug in #ZipLib. (I found one a while ago - a certain file that wouldn't compress and then decompress cleanly with certain options.)
In this particular case, I suggest you post on the #ZipLib forum to get the attention of the developers. If your file doesn't contain any sensitive data and you can get them a short but complete program along with it, I suspect that will help enormously.
A: I agree with Jon. Couldn't fit following in the comment:
(Though this doesn't answer your question)
Isn't it easier to use something like this:
public static void UnzipFile(string sourcePath, string targetDirectory)
{
try
{
FastZip fastZip = new FastZip();
fastZip.CreateEmptyDirectories = false;
fastZip.ExtractZip(sourcePath, targetDirectory,"");
}
catch(Exception ex)
{
throw new Exception("Error unzipping file \"" + sourcePath + "\"", ex);
}
}
A: See the official ZIP specification.
Each file in a ZIP archive can have an 'extra' field associated with it. I think #ZipLib is telling you that the 'extra' field length given was longer than the amount of data that was available to read; in other words, the ZIP file has most likely been truncated.
A: According to 4.5.3 of official ZIP specification, fields Size & CompressedSize of extra data "MUST only appear if the corresponding Local or Central directory record field is set to 0xFFFF or 0xFFFFFFFF".
But SharpZipLib writes it at method ZipFile.WriteCentralDirectoryHeader only if "useZip64_ == UseZip64.On". I added an entry.IsZip64Forced() condition and the bug disappears:
if ( entry.CentralHeaderRequiresZip64 ) {
ed.StartNewEntry();
if ((entry.Size >= 0xffffffff) || (useZip64_ == UseZip64.On) || entry.IsZip64Forced())
{
ed.AddLeLong(entry.Size);
}
if ((entry.CompressedSize >= 0xffffffff) || (useZip64_ == UseZip64.On) || entry.IsZip64Forced())
{
ed.AddLeLong(entry.CompressedSize);
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Unnamed/anonymous namespaces vs. static functions A feature of C++ is the ability to create unnamed (anonymous) namespaces, like so:
namespace {
int cannotAccessOutsideThisFile() { ... }
} // namespace
You would think that such a feature would be useless -- since you can't specify the name of the namespace, it's impossible to access anything within it from outside. But these unnamed namespaces are accessible within the file they're created in, as if you had an implicit using-clause to them.
My question is, why or when would this be preferable to using static functions? Or are they essentially two ways of doing the exact same thing?
A: The difference is the name of the mangled identifier (_ZN12_GLOBAL__N_11bE vs _ZL1b), which doesn't really matter; both of them are assembled to local symbols in the symbol table (absence of a .global asm directive).
#include<iostream>
namespace {
int a = 3;
}
static int b = 4;
int c = 5;
int main (){
std::cout << a << b << c;
}
.data
.align 4
.type _ZN12_GLOBAL__N_11aE, @object
.size _ZN12_GLOBAL__N_11aE, 4
_ZN12_GLOBAL__N_11aE:
.long 3
.align 4
.type _ZL1b, @object
.size _ZL1b, 4
_ZL1b:
.long 4
.globl c
.align 4
.type c, @object
.size c, 4
c:
.long 5
.text
As for a nested anonymous namespace:
namespace {
namespace {
int a = 3;
}
}
.data
.align 4
.type _ZN12_GLOBAL__N_112_GLOBAL__N_11aE, @object
.size _ZN12_GLOBAL__N_112_GLOBAL__N_11aE, 4
_ZN12_GLOBAL__N_112_GLOBAL__N_11aE:
.long 3
All 1st-level anonymous namespaces in the translation unit are combined with each other, and all 2nd-level nested anonymous namespaces in the translation unit are combined with each other.
You can also have a nested namespace or nested inline namespace in an anonymous namespace
namespace {
namespace A {
int a = 3;
}
}
.data
.align 4
.type _ZN12_GLOBAL__N_11A1aE, @object
.size _ZN12_GLOBAL__N_11A1aE, 4
_ZN12_GLOBAL__N_11A1aE:
.long 3
which for the record demangles as:
.data
.align 4
.type (anonymous namespace)::A::a, @object
.size (anonymous namespace)::A::a, 4
(anonymous namespace)::A::a:
.long 3
//inline has the same output
You can also have anonymous inline namespaces, but as far as I can tell, inline on an anonymous namespace has 0 effect
inline namespace {
inline namespace {
int a = 3;
}
}
_ZL1b: _Z means this is a mangled identifier. L means it is a local symbol through static. 1 is the length of the identifier b and then the identifier b
_ZN12_GLOBAL__N_11aE _Z means this is a mangled identifier. N means this is a namespace 12 is the length of the anonymous namespace name _GLOBAL__N_1, then the anonymous namespace name _GLOBAL__N_1, then 1 is the length of the identifier a, a is the identifier a and E closes the identifier that resides in a namespace.
_ZN12_GLOBAL__N_11A1aE is the same as above except there's another namespace (1A) in it called A, prefixed with the length of A which is 1. Anonymous namespaces all have the name _GLOBAL__N_1
A: Putting methods in an anonymous namespace prevents you from accidentally violating the One Definition Rule, allowing you to never worry about naming your helper methods the same as some other method you may link in.
And, as pointed out by luke, anonymous namespaces are preferred by the standard over static members.
A: Use of the static keyword for that purpose was deprecated by the C++98 standard. The problem with static is that it doesn't apply to type definitions. It's also an overloaded keyword, used in different ways in different contexts, so unnamed namespaces simplify things a bit.
A: From experience I'll just note that while it is the C++ way to put formerly-static functions into the anonymous namespace, older compilers can sometimes have problems with this. I currently work with a few compilers for our target platforms, and the more modern Linux compiler is fine with placing functions into the anonymous namespace.
But an older compiler running on Solaris, which we are wed to until an unspecified future release, will sometimes accept it, and other times flag it as an error. The error is not what worries me, it's what it might be doing when it accepts it. So until we go modern across the board, we are still using static (usually class-scoped) functions where we'd prefer the anonymous namespace.
A: There is one edge case where static has a surprising effect (at least it was surprising to me). The C++03 Standard states in 14.6.4.2/1:
For a function call that depends on a template parameter, if the function name is an unqualified-id but not a template-id, the candidate functions are found using the usual lookup rules (3.4.1, 3.4.2) except that:
*
*For the part of the lookup using unqualified name lookup (3.4.1), only function declarations with external linkage from the template definition context are found.
*For the part of the lookup using associated namespaces (3.4.2), only function declarations with external linkage found in either the template definition context or the template instantiation context are found.
...
The below code will call foo(void*) and not foo(S const &) as you might expect.
template <typename T>
int b1 (T const & t)
{
foo(t);
}
namespace NS
{
namespace
{
struct S
{
public:
operator void * () const;
};
void foo (void*);
static void foo (S const &); // Not considered 14.6.4.2(b1)
}
}
void b2()
{
NS::S s;
b1 (s);
}
In itself this is probably not that big a deal, but it does highlight that for a fully compliant C++ compiler (i.e. one with support for export) the static keyword will still have functionality that is not available in any other way.
// bar.h
export template <typename T>
int b1 (T const & t);
// bar.cc
#include "bar.h"
template <typename T>
int b1 (T const & t)
{
foo(t);
}
// foo.cc
#include "bar.h"
namespace NS
{
namespace
{
struct S
{
};
void foo (S const & s); // Will be found by different TU 'bar.cc'
}
}
void b2()
{
NS::S s;
b1 (s);
}
The only way to ensure that the function in our unnamed namespace will not be found in templates using ADL is to make it static.
Update for Modern C++
As of C++ '11, members of an unnamed namespace have internal linkage implicitly (3.5/4):
An unnamed namespace or a namespace declared directly or indirectly within an unnamed namespace has internal linkage.
But at the same time, 14.6.4.2/1 was updated to remove mention of linkage (this taken from C++ '14):
For a function call where the postfix-expression is a dependent name, the candidate functions are found using
the usual lookup rules (3.4.1, 3.4.2) except that:
*
*For the part of the lookup using unqualified name lookup (3.4.1), only function declarations from the template definition context are found.
*For the part of the lookup using associated namespaces (3.4.2), only function declarations found in either the template definition context or the template instantiation context are found.
The result is that this particular difference between static and unnamed namespace members no longer exists.
A: The C++ Standard reads in section 7.3.1.1 Unnamed namespaces, paragraph 2:
The use of the static keyword is
deprecated when declaring objects in a
namespace scope, the unnamed-namespace
provides a superior alternative.
Static only applies to names of objects, functions, and anonymous unions, not to type declarations.
Edit:
The decision to deprecate this use of the static keyword (affecting visibility of a variable declaration in a translation unit) has been reversed (ref). In this case using a static or an unnamed namespace are back to being essentially two ways of doing the exact same thing. For more discussion please see this SO question.
Unnamed namespaces still have the advantage of allowing you to define translation-unit-local types. Please see this SO question for more details.
Credit goes to Mike Percy for bringing this to my attention.
A: In addition if one uses static keyword on a variable like this example:
namespace {
static int flag;
}
It would not be seen in the mapping file
A: A compiler specific difference between anonymous namespaces and static functions can be seen compiling the following code.
#include <iostream>
namespace
{
void unreferenced()
{
std::cout << "Unreferenced";
}
void referenced()
{
std::cout << "Referenced";
}
}
static void static_unreferenced()
{
std::cout << "Unreferenced";
}
static void static_referenced()
{
std::cout << "Referenced";
}
int main()
{
referenced();
static_referenced();
return 0;
}
Compiling this code with VS 2017 (specifying the level 4 warning flag /W4 to enable warning C4505: unreferenced local function has been removed) and gcc 4.9 with the -Wunused-function or -Wall flag shows that VS 2017 will only produce a warning for the unused static function. gcc 4.9 and higher, as well as clang 3.3 and higher, will produce warnings for the unreferenced function in the namespace and also a warning for the unused static function.
Live demo of gcc 4.9 and MSVC 2017
A: I recently began replacing static keywords with anonymous namespaces in my code but immediately ran into a problem where the variables in the namespace were no longer available for inspection in my debugger. I was using VC60, so I don't know if that is a non-issue with other debuggers. My workaround was to define a 'module' namespace, where I gave it the name of my cpp file.
For example, in my XmlUtil.cpp file, I define a namespace XmlUtil_I { ... } for all of my module variables and functions. That way I can apply the XmlUtil_I:: qualification in the debugger to access the variables. In this case, the _I distinguishes it from a public namespace such as XmlUtil that I may want to use elsewhere.
I suppose a potential disadvantage of this approach compared to a truly anonymous one is that someone could violate the desired static scope by using the namespace qualifier in other modules. I don't know if that is a major concern though.
A: Personally I prefer static functions over nameless namespaces for the following reasons:
*
*It's obvious and clear from function definition alone that it's private to the translation unit where it's compiled. With nameless namespace you might need to scroll and search to see if a function is in a namespace.
*Functions in namespaces might be treated as extern by some (older) compilers. In VS2017 they are still extern. For this reason even if a function is in nameless namespace you might still want to mark them static.
*Static functions behave very similar in C or C++, while nameless namespaces are obviously C++ only. nameless namespaces also add extra level in indentation and I don't like that :)
So, I'm happy to see that use of static for functions isn't deprecated anymore.
A: Having learned of this feature only just now while reading your question, I can only speculate. This seems to provide several advantages over a file-level static variable:
*
*Anonymous namespaces can be nested within one another, providing multiple levels of protection from which symbols can not escape.
*Several anonymous namespaces could be placed in the same source file, creating in effect different static-level scopes within the same file.
I'd be interested in learning if anyone has used anonymous namespaces in real code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "623"
} |
Q: How to Conditionally Format a String in .Net? I would like to do some condition formatting of strings. I know that you can do some conditional formatting of integers and floats as follows:
Int32 i = 0;
i.ToString("$#,##0.00;($#,##0.00);Zero");
The above code would result in one of three formats if the variable is positive, negative, or zero.
I would like to know if there is any way to use sections on string arguments. For a concrete, but contrived example, I would be looking to replace the "if" check in the following code:
string MyFormatString(List<String> items, List<String> values)
{
string itemList = String.Join(", ", items.ToArray());
string valueList = String.Join(", ", values.ToArray());
string formatString;
if (items.Count > 0)
//this could easily be:
//if (!String.IsNullOrEmpty(itemList))
{
formatString = "Items: {0}; Values: {1}";
}
else
{
formatString = "Values: {1}";
}
return String.Format(formatString, itemList, valueList);
}
A: Not within String.Format(), but you could use C#'s inline operators, such as:
return items.Count > 0
? String.Format("Items: {0}; Values: {1}", itemList, valueList)
: String.Format("Values: {0}", valueList);
This would help tidy-up the code.
A: While not addressing the OP directly, this does fall under the question title as well.
I frequently need to format strings with some custom unit, but in cases where I don't have data, I don't want to output anything at all. I use this with various nullable types:
/// <summary>
/// Like String.Format, but if any parameter is null, the nullOutput string is returned.
/// </summary>
public static string StringFormatNull(string format, string nullOutput, params object[] args)
{
return args.Any(o => o == null) ? nullOutput : String.Format(format, args);
}
For example, if I am formatting temperatures like "20°C", but encounter a null value, it will print an alternate string instead of "°C".
double? temp1 = 20.0;
double? temp2 = null;
string out1 = StringFormatNull("{0}°C", "N/A", temp1); // "20°C"
string out2 = StringFormatNull("{0}°C", "N/A", temp2); // "N/A"
A: Well, you can simplify it a bit with the conditional operator:
string formatString = items.Count > 0 ? "Items: {0}; Values: {1}" : "Values: {1}";
return string.Format(formatString, itemList, valueList);
Or even include it in the same statement:
return string.Format(items.Count > 0 ? "Items: {0}; Values: {1}" : "Values: {1}",
itemList, valueList);
Is that what you're after? I don't think you can have a single format string which sometimes includes bits and sometimes it doesn't.
A: string.Format( (items.Count > 0 ? "Items: {0}; " : "") + "Values {1}"
, itemList
, valueList);
A: This is probably not what you're looking for, but how about...
formatString = (items.Count > 0) ? "Items: {0}; Values: {1}" : "Values: {1}";
A: Just don't. I have no idea what the items and values in your code are, but I believe this pair could be treated as an entity of some kind. Define this entity as a class and override its ToString() method to return whatever you want. There's absolutely nothing wrong with having an if for deciding how to format this string depending on some context.
A: I hoped this could do it:
return String.Format(items.ToString(itemList + " ;;") + "Values: {0}", valueList);
Unfortunately, it seems that the .ToString() method doesn't like the blank negative and zero options or not having a # or 0 anywhere. I'll leave it up here in case it points someone else to a better answer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: How to test SqlServer connection without opening a database The title pretty much says it all. I want to create a SqlConnection and then check that connection without opening a database, cause at that point I don't know yet where will I connect to. Is it possible to do that?
The SqlConnection class has a 'Open' member which tries to open the database you'd set in the Database property, and if you didn't set one, SqlServer tries with the master db. The thing is the user I'm trying to connect with (MACHINE\ASPNET) has access to some databases (which I don't know yet) and not the master db.
Regards,
Seba
A: I am not sure if this is what you need.
Check if a user has access to a database in Sql Server 2005
SELECT HAS_DBACCESS('Northwind');
HAS_DBACCESS returns information about whether the user has access to the specified database (BOL).
Find all databases that the current user has access to
SELECT [Name] as DatabaseName from master.dbo.sysdatabases
WHERE ISNULL(HAS_DBACCESS ([Name]),0)=1
ORDER BY [Name]
A: Connect to tempdb. Everybody has access to tempdb, so you will be able to authenticate yourself. Later, when you know the actual database, you can change this property to connect to the db you want.
A: If you need to know only if the service is active, you could try to connect to the port via a socket, to see if it is open
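In Python, for instance, such a reachability probe is only a few lines. This is a sketch: it confirms that something is accepting TCP connections on the port, not that SQL Server will actually authenticate you.

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_is_open("dbserver", 1433) for SQL Server's default port
```

A successful connect tells you the service (or at least a listener) is up; authentication and database access still have to be tested separately.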
A: Just curious... What information will you be able to verify if you don't know the precise database you need to connect to? Many things that could go wrong with the "real" database would be untestable from this sort of test connection, such as connectivity or security.
A: I don't know whether you got your answers but as we all look here for answers I hope this is what you were looking for
Dim con As New SqlConnection
con.ConnectionString = "<<put your conn string here>>"
'The Try...Catch block catches the exception thrown if the connection cannot be opened
Try
    con.Open()
Catch ex As Exception
    MsgBox(ex.Message)
End Try
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Are static indexers not supported in C#? I've been trying this a few different ways, but I'm reaching the conclusion that it can't be done. It's a language feature I've enjoyed from other languages in the past. Is it just something I should just write off?
A: No, static indexers aren't supported in C#. Unlike other answers, however, I see how there could easily be point in having them. Consider:
Encoding x = Encoding[28591]; // Equivalent to Encoding.GetEncoding(28591)
Encoding y = Encoding["Foo"]; // Equivalent to Encoding.GetEncoding("Foo")
It would be relatively rarely used, I suspect, but I think it's odd that it's prohibited - it gives asymmetry for no particular reason as far as I can see.
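For what it's worth, other languages do expose exactly this shape. In Python, for example, __class_getitem__ gives a class-level indexer; the toy Encoding registry below is a hypothetical illustration of the lookup imagined above, not Python's real codec machinery:

```python
class Encoding:
    """Toy registry with a class-level ("static") indexer via __class_getitem__."""
    # Hypothetical lookup table -- stand-in values for illustration only.
    _by_key = {28591: "iso-8859-1", 65001: "utf-8", "Foo": "foo-codec"}

    def __class_getitem__(cls, key):
        # Called for Encoding[...] on the class itself; no instance needed.
        return cls._by_key[key]
```

With that in place, Encoding[28591] and Encoding["Foo"] both work on the class itself, which is precisely the symmetry C# disallows.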
A: You can simulate static indexers using static indexed properties:
public class MyEncoding
{
public sealed class EncodingIndexer
{
public Encoding this[string name]
{
get { return Encoding.GetEncoding(name); }
}
public Encoding this[int codepage]
{
get { return Encoding.GetEncoding(codepage); }
}
}
private static EncodingIndexer StaticIndexer;
public static EncodingIndexer Items
{
get { return StaticIndexer ?? (StaticIndexer = new EncodingIndexer()); }
}
}
Usage:
Encoding x = MyEncoding.Items[28591]; // Equivalent to Encoding.GetEncoding(28591)
Encoding y = MyEncoding.Items["Foo"]; // Equivalent to Encoding.GetEncoding("Foo")
A: No, but it is possible to create a static field that holds an instance of a class that uses an indexer...
namespace MyExample {
public class Memory {
public static readonly MemoryRegister Register = new MemoryRegister();
public class MemoryRegister {
private int[] _values = new int[100];
public int this[int index] {
get { return _values[index]; }
set { _values[index] = value; }
}
}
}
}
...Which could be accessed in the way you are intending. This can be tested in the Immediate Window...
Memory.Register[0] = 12 * 12;
?Memory.Register[0]
144
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: Software for Mac OS X Leopard to track and record network usage I'm looking for a program that I can install on a Mac that will tell me how many bytes I download each day, and store that info in such a way that I could later view the results.
Limiting by ports (80, 443, 21, 22) would be awesome as well.
Does such a thing exist?
A: Wireshark might be of interest to you.
A: You could run a local web proxy, then analyze the logs.
It's simple to download and install SquidMan, which is a point and click
way of enabling the squid web proxy on your system.
You will have to configure your machine to go through the proxy on localhost
instead of directly to the website. You can find this in advanced tab of
your network interface in System Preferences -> Network
Then, you can analyze the logs and see not only how much you've downloaded, but also
what and when.
A: Type 'man tcpdump' in a terminal.
A: Ok, this isn't a complete package or anything, but netstat will show you bytes
transmitted on an interface
netstat -ib
you can record this somewhere every day. "man netstat" for more info.
A: Another tool for this is Ethereal (the predecessor of Wireshark), but it hasn't been updated for a while.
A: I just came across these two apps, Net Monitor and Net Monitor Sidekick, the first to do what you describe (with a calculator to determine throughput over a date range), the second to track traffic by host. After trying to use ntop, which is horrible to set up, hideous to use, and very, very limited, I'm thrilled =)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Color differences between images and html I'm having issues with color matching css background colors with colors in images on the same html page. What gives?
A: I'm guessing that you use a PNG image? This is a gamma correction “feature”. Mark Ransom has posted a useful text about this.
Notice that the pngcrush solution listed somewhere hasn't worked for me.
A: What image editing program are you using? I found this article about Photoshop color profiles. There can also be issues with PNG gamma correction.
A: Could be due to the browser's colour management.
A: It might be a color profile issue.
For instance, if the image is a JPEG and has a color profile and your browser doesn't support displaying images in the color profiles that they specify, the colors of the image itself will render differently in your browser. In this situation, if you checked the color of the image in Photoshop (color profile aware) and then applied that color in your CSS and viewed the page in a browser that is not color profile aware, it would look different.
A: Three possibilities spring to mind:
*
*check that your monitor colour depth is set to 32- or 24-bit, not 16-bit
*check that the image isn't being assigned a palette (such as the web-safe palette). This might be the case for a .gif or 8-bit .png image.
*check for .png gamma correction issues in IE - see other posts for details
A workaround that I have used in the distant past is to set the background colour by repeating a small image, instead of setting it in the HTML. This kind of trick was useful in the days of web-safe palettes and so on, but less useful now.
A: It's probably the browser you're testing in; I've had a lot of trouble with IE 6.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is timsort general-purpose or Python-specific?
Timsort is an adaptive, stable,
natural mergesort. It has supernatural
performance on many kinds of partially
ordered arrays (less than lg(N!)
comparisons needed, and as few as
N-1), yet as fast as Python's previous
highly tuned samplesort hybrid on
random arrays.
Have you seen timsort used outside of CPython? Does it make sense?
A: It doesn't look particularly familiar, but "smart" mergesorts are pretty common out in the wide world of software.
As for whether it makes sense, that depends on what you're sorting, and the relative cost of comparisons vs. memory allocation. A sort that requires up to 2*N bytes of extra memory isn't going to be a good choice in a memory-constrained environment.
A: Answered now on Wikipedia: timsort will be used in Java 7, which copied it from Android.
A: Yes, it makes quite a bit of sense to use timsort outside of CPython, in specific, or Python, in general.
There is currently an effort underway to replace Java's "modified merge sort" with timsort, and the initial results are quite positive.
A: Timsort is also in Android now: http://www.kiwidoc.com/java/l/x/android/android/5/p/java.util/c/TimSort
A: The algorithm is pretty generic, but the benefits are rather Python-specific. Unlike most sorting routines, what Python's list.sort (which is where timsort is used) cares about is avoiding unnecessary comparisons, because comparisons are generally a lot more expensive than swapping items (which is always just a set of pointer copies) or even allocating some extra memory (because it's always just an array of pointers, and the overhead is small compared to the average overhead in any Python operation).
If you're under similar constraints, then it may be suitable. I've yet to see any other case where comparisons are really that expensive, though :-)
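To make the comparison-cost point concrete, you can count the comparisons a library sort actually performs on friendly input. A C++ sketch (std::stable_sort is used here as a stand-in for a typical non-adaptive mergesort — it is not timsort — and the function name is mine):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Count the comparisons std::stable_sort performs on an already-sorted input.
// An adaptive sort such as timsort needs only n-1 comparisons on this input;
// a non-adaptive mergesort needs roughly n*log2(n).
long count_stable_sort_comparisons(std::size_t n)
{
    std::vector<std::size_t> v(n);
    for (std::size_t i = 0; i < n; ++i) v[i] = i;  // presorted input

    long comparisons = 0;
    std::stable_sort(v.begin(), v.end(),
                     [&comparisons](std::size_t a, std::size_t b) {
                         ++comparisons;            // tally every comparison
                         return a < b;
                     });
    return comparisons;
}
```

Any correct comparison sort must make at least n-1 comparisons even on sorted data, so the interesting number is how far above that floor a given implementation lands.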
A: The description you linked looks completely general.
A: Timsort is not Python-specific. The benefits of using Timsort in Python to sort a list of object references exists in any programming language which has pointers or object references. For instance, Java SE 7+, Android and Swift use Timsort to sort objects.
On the other hand, some variation of quicksort (eg introsort, dual-pivot quicksort) usually sorts primitive types faster, due to cache coherence, and therefore it is usually chosen for this task.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Questions about use of snapshot in Maven2 I am writing a POM file for an in-house jar artifact. The artifact depends on several other in-house artifacts that our team writes. When declaring the dependencies of the target, should I pin those dependencies to fixed release versions or leave them at SNAPSHOT versions? With too many SNAPSHOT versions of depended-on modules, testing becomes uncertain; but if I pin to fixed release versions, I cannot pick up bug fixes from the depended-on modules. What's the practice out there?
Secondly, how do you name the snapshot version:
1.0.0-SNAPSHOT or 1.0-SNAPSHOT?
A: As a rule you should avoid snapshots and use only stable releases unless your code relies on some feature (or bugfix) that has not yet made it into a release.
As for version numbering, I prefer three components, listed here from least significant:
revision: changes when bugs are fixed
minor: changes when new features are added
major: changes when incompatible changes are made.
I believe this is the standard used by (at least some of) Apache Java libraries.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Best way to bind WPF properties to ApplicationSettings in C#? What is the best way to bind WPF properties to ApplicationSettings in C#? Is there an automatic way like in a Windows Forms Application? Similar to this question, how (and is it possible to) do you do the same thing in WPF?
A: In case you are a VB.Net developer attempting this, the answer is a smidge different.
xmlns:p="clr-namespace:ThisApplication"
Notice the .Properties isn't there.
In your binding it's MySettings.Default, instead of Settings.Default - since the app.config stores it differently.
<TextBlock Height="{Binding Source={x:Static p:MySettings.Default}, Path=Height, ...
After a bit of pulling out my hair, I discovered this. Hope it helps
A: I like the accepted answer, but I ran into a special case: I had my text box set as "read only" so that I could change its value only in code. I couldn't understand why the value wasn't propagated back to the Settings even though I had the Mode set to "TwoWay".
Then, I found this: http://msdn.microsoft.com/en-us/library/system.windows.data.binding.updatesourcetrigger.aspx
The default is Default, which returns the default UpdateSourceTrigger value of the target dependency property. However, the default value for most dependency properties is PropertyChanged, while the Text property has a default value of LostFocus.
Thus, if you have a text box with the IsReadOnly="True" property, you have to add an UpdateSourceTrigger=PropertyChanged setting to the Binding statement:
<TextBox Text="{Binding Source={x:Static p:Settings.Default}, Path=myTextSetting, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" ... />
A: The easiest way would be to bind to an object that exposes your application settings as properties or to include that object as a StaticResource and bind to that.
Another direction you could take is creating your own markup extension so you can simply use PropertyName="{ApplicationSetting SomeSettingName}". To create a custom markup extension you need to inherit from MarkupExtension and decorate the class with a MarkupExtensionReturnType attribute. John Bowen has a post on creating a custom MarkupExtension that might make the process a little clearer.
A: Kris, I'm not sure this is the best way to bind ApplicationSettings, but this is how I did it in Witty.
1) Create a dependency property for the setting that you want to bind in the window/page/usercontrol/container. In this case I have a user setting to play sounds.
public bool PlaySounds
{
get { return (bool)GetValue(PlaySoundsProperty); }
set { SetValue(PlaySoundsProperty, value); }
}
public static readonly DependencyProperty PlaySoundsProperty =
DependencyProperty.Register("PlaySounds", typeof(bool), typeof(Options),
new FrameworkPropertyMetadata(false, new PropertyChangedCallback(OnPlaySoundsChanged)));
private static void OnPlaySoundsChanged(DependencyObject obj, DependencyPropertyChangedEventArgs args)
{
Properties.Settings.Default.PlaySounds = (bool)args.NewValue;
Properties.Settings.Default.Save();
}
2) In the constructor, initialize the property value to match the application settings
PlaySounds = Properties.Settings.Default.PlaySounds;
3) Bind the property in XAML
<CheckBox Content="Play Sounds on new Tweets" x:Name="PlaySoundsCheckBox" IsChecked="{Binding Path=PlaySounds, ElementName=Window, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />
You can download the full Witty source to see it in action or browse just the code for options window.
A: I like to do it through the ViewModel and just do the binding as normal in the XAML
public Boolean SomeValue
{
get
{
return Settings.Default.SomeValue;
}
set
{
Settings.Default.SomeValue = value;
Settings.Default.Save();
Notify("SomeValue");
}
}
A: You can directly bind to the static object created by Visual Studio.
In your windows declaration add:
xmlns:p="clr-namespace:UserSettings.Properties"
where UserSettings is the application namespace.
Then you can add a binding to the correct setting:
<TextBlock Height="{Binding Source={x:Static p:Settings.Default},
Path=Height, Mode=TwoWay}" ....... />
Now you can save the settings, per example when you close your application:
protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
{
Properties.Settings.Default.Save();
base.OnClosing(e);
}
A: Also read this article on how it is done in BabySmash
You only need to back the Settings with a DependencyObject (like Alan's example) if you need change notification! Binding to the POCO Settings class will also work!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
} |
Q: How do I ensure Visual Studio 2005 displays the tables and images correctly? I have used Photoshop CS2's "Save for Web" feature to create a table of images for my site layout.
This HTML appears fine in a web browser, however when imported into Visual Studio and viewed in the site designer, the metrics are wrong and there are horizontal gaps between images (table cells).
The output from Photoshop does not refer to any stylesheets.
The table attributes set border, cellpadding and cellspacing to 0.
Here is how it looks in the Designer:
And here is how it looks in the browser:
Is Visual Studio picky about layout of tables and images? Is this a bug in Visual Studio 2005?
A: I haven't played with the "Save for Web" feature, but I'm pretty sure that the output HTML should, apart from the table markup and images, also contain some CSS styles that define the display behavior of the whole page.
So, when importing the HTML into VS, make sure the associated styles are transferred too.
Also, don't forget to ensure HTML validity - you can choose the conformance level of your web app in the web.config or project properties (or just in the HTML editor: set Target Schema to XHTML Transitional) so you can be sure that the HTML you got from Photoshop validates properly.
A: Personally I have never trusted and rarely use the design view in Visual Studio, and generally keep the code view open for all pages. I tend to keep working versions of files open in IE and Firefox to see their layout; however, this can cause issues when trying to view multi-step forms etc. In these cases I tend to put some code in place to let me select which state/step I wish to see, without going through all the rigmarole of stepping through each one to test it.
Remember that in VS you can right click on a file in the project explorer and select to view it in a web browser. you can also add various different browsers to VS preferences allowing you to select the browser you wish to see the file in.
I realise that this is not an answer but hope it is useful.
A: The designer of Visual Studio 2005 seems to struggle with rendering certain HTML content. As Toby said, the best way to work around the problem is to preview the page in a web browser as opposed to working with the designer.
The other alternative of course is to use Visual Studio 2008, it uses the same web designer component that is used in Expression Web. I haven't used Visual Studio 2008 extensively for web projects yet, but from what I've seen it is very impressive! Visual Studio 2008 also has the "Split" view option, which allows you to see the designer while you are editing the HTML (no more switching between source and design view and it taking a couple of minutes to catch up with you!)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Encode/Decode URLs in C++ Does anyone know of any good C++ code that does this?
A: I faced the encoding half of this problem the other day. Unhappy with the available options, and after taking a look at this C sample code, I decided to roll my own C++ url-encode function:
#include <cctype>
#include <iomanip>
#include <sstream>
#include <string>
using namespace std;
string url_encode(const string &value) {
ostringstream escaped;
escaped.fill('0');
escaped << hex;
for (string::const_iterator i = value.begin(), n = value.end(); i != n; ++i) {
string::value_type c = (*i);
// Keep alphanumeric and other accepted characters intact
if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~') {
escaped << c;
continue;
}
// Any other characters are percent-encoded
escaped << uppercase;
escaped << '%' << setw(2) << int((unsigned char) c);
escaped << nouppercase;
}
return escaped.str();
}
The implementation of the decode function is left as an exercise to the reader. :P
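Taking up that exercise: a possible decoder in the same iostream style. This is a sketch (the function and helper names are mine, and it assumes the usual form-encoding convention that '+' means space):

```cpp
#include <cctype>
#include <sstream>
#include <string>

// Decode one hex digit; only called when isxdigit(ch) is true.
static int hex_digit(char ch) {
    return std::isdigit((unsigned char)ch)
        ? ch - '0'
        : std::tolower((unsigned char)ch) - 'a' + 10;
}

std::string url_decode(const std::string &value) {
    std::ostringstream unescaped;
    for (std::string::size_type i = 0; i < value.size(); ++i) {
        char c = value[i];
        if (c == '%' && i + 2 < value.size() &&
            std::isxdigit((unsigned char)value[i + 1]) &&
            std::isxdigit((unsigned char)value[i + 2])) {
            // Rebuild the byte from its two hex digits and skip past them
            unescaped << (char)(hex_digit(value[i + 1]) * 16 + hex_digit(value[i + 2]));
            i += 2;
        } else if (c == '+') {
            unescaped << ' ';  // '+' means space in form encoding
        } else {
            unescaped << c;    // pass anything else through untouched
        }
    }
    return unescaped.str();
}
```

Malformed escapes (a '%' not followed by two hex digits) simply pass through unchanged, which is a deliberate, lenient choice.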
A: Answering my own question...
libcurl has curl_easy_escape for encoding.
For decoding, curl_easy_unescape
A: CGICC includes methods to do url encode and decode. form_urlencode and form_urldecode
A: Inspired by xperroni I wrote a decoder. Thank you for the pointer.
#include <cctype> // isdigit, tolower used by from_hex
#include <iostream>
#include <sstream>
#include <string>
using namespace std;
char from_hex(char ch) {
return isdigit(ch) ? ch - '0' : tolower(ch) - 'a' + 10;
}
string url_decode(string text) {
char h;
ostringstream escaped;
escaped.fill('0');
for (auto i = text.begin(), n = text.end(); i != n; ++i) {
string::value_type c = (*i);
if (c == '%') {
if (i[1] && i[2]) {
h = from_hex(i[1]) << 4 | from_hex(i[2]);
escaped << h;
i += 2;
}
} else if (c == '+') {
escaped << ' ';
} else {
escaped << c;
}
}
return escaped.str();
}
int main(int argc, char** argv) {
string msg = "J%C3%B8rn!";
cout << msg << endl;
string decodemsg = url_decode(msg);
cout << decodemsg << endl;
return 0;
}
edit: Removed the unneeded iomanip include. (Note that from_hex still relies on isdigit and tolower from <cctype>.)
A: I ended up on this question when searching for an API to decode a URL in a Win32 C++ app. Since the question doesn't quite specify a platform, assuming Windows isn't a bad thing.
InternetCanonicalizeUrl is the API for windows programs. More info here
LPTSTR lpOutputBuffer = new TCHAR[1];
DWORD dwSize = 1;
BOOL fRes = ::InternetCanonicalizeUrl(strUrl, lpOutputBuffer, &dwSize, ICU_DECODE | ICU_NO_ENCODE);
DWORD dwError = ::GetLastError();
if (!fRes && dwError == ERROR_INSUFFICIENT_BUFFER)
{
delete [] lpOutputBuffer;
lpOutputBuffer = new TCHAR[dwSize];
fRes = ::InternetCanonicalizeUrl(strUrl, lpOutputBuffer, &dwSize, ICU_DECODE | ICU_NO_ENCODE);
if (fRes)
{
//lpOutputBuffer has decoded url
}
else
{
//failed to decode
}
if (lpOutputBuffer !=NULL)
{
delete [] lpOutputBuffer;
lpOutputBuffer = NULL;
}
}
else
{
//some other error OR the input string url is just 1 char and was successfully decoded
}
InternetCrackUrl (here) also seems to have flags to specify whether to decode url
A: The Windows API has the functions UrlEscape/UrlUnescape, exported by shlwapi.dll, for this task.
A: Adding a follow-up to Bill's recommendation for using libcurl: great suggestion, and to be updated:
after 3 years, the curl_escape function is deprecated, so for future use it's better to use curl_easy_escape.
A: Another solution is available using Facebook's folly library : folly::uriEscape and folly::uriUnescape.
A: I couldn't find a URI decode/unescape here that also decodes 2- and 3-byte sequences. Contributing my own version, which converts the C string input to a wstring on the fly:
#include <string>
const char HEX2DEC[55] =
{
0, 1, 2, 3, 4, 5, 6, 7, 8, 9,-1,-1, -1,-1,-1,-1,
-1,10,11,12, 13,14,15,-1, -1,-1,-1,-1, -1,-1,-1,-1,
-1,-1,-1,-1, -1,-1,-1,-1, -1,-1,-1,-1, -1,-1,-1,-1,
-1,10,11,12, 13,14,15
};
#define __x2d__(s) HEX2DEC[*(s)-48]
#define __x2d2__(s) __x2d__(s) << 4 | __x2d__(s+1)
std::wstring decodeURI(const char * s) {
unsigned char b;
std::wstring ws;
while (*s) {
if (*s == '%')
if ((b = __x2d2__(s + 1)) >= 0x80) {
if (b >= 0xE0) { // three byte codepoint
ws += ((b & 0b00001111) << 12) | ((__x2d2__(s + 4) & 0b00111111) << 6) | (__x2d2__(s + 7) & 0b00111111);
s += 9;
}
else { // two byte codepoint
ws += (__x2d2__(s + 4) & 0b00111111) | (b & 0b00000011) << 6;
s += 6;
}
}
else { // one byte codepoints
ws += b;
s += 3;
}
else { // no %
ws += *s;
s++;
}
}
return ws;
}
A: You can simply use the function AtlEscapeUrl() from atlutil.h; just go through its documentation on how to use it.
A: string urlDecode(string &SRC) {
string ret;
char ch;
int i, ii;
for (i=0; i<SRC.length(); i++) {
if (SRC[i]=='%') {
sscanf(SRC.substr(i+1,2).c_str(), "%x", &ii);
ch=static_cast<char>(ii);
ret+=ch;
i=i+2;
} else {
ret+=SRC[i];
}
}
return (ret);
}
not the best, but working fine ;-)
A: cpp-netlib has functions
namespace boost {
namespace network {
namespace uri {
inline std::string decoded(const std::string &input);
inline std::string encoded(const std::string &input);
}
}
}
they allow to encode and decode URL strings very easy.
A: [Necromancer mode on]
Stumbled upon this question when I was looking for a fast, modern, platform-independent and elegant solution. I didn't like any of the above; cpp-netlib would be the winner, but it has a horrific memory vulnerability in the "decoded" function. So I came up with a Boost Spirit Qi/Karma solution.
namespace bsq = boost::spirit::qi;
namespace bk = boost::spirit::karma;
bsq::int_parser<unsigned char, 16, 2, 2> hex_byte;
template <typename InputIterator>
struct unescaped_string
: bsq::grammar<InputIterator, std::string(char const *)> {
unescaped_string() : unescaped_string::base_type(unesc_str) {
unesc_char.add("+", ' ');
unesc_str = *(unesc_char | "%" >> hex_byte | bsq::char_);
}
bsq::rule<InputIterator, std::string(char const *)> unesc_str;
bsq::symbols<char const, char const> unesc_char;
};
template <typename OutputIterator>
struct escaped_string : bk::grammar<OutputIterator, std::string(char const *)> {
escaped_string() : escaped_string::base_type(esc_str) {
esc_str = *(bk::char_("a-zA-Z0-9_.~-") | "%" << bk::right_align(2,0)[bk::hex]);
}
bk::rule<OutputIterator, std::string(char const *)> esc_str;
};
The usage of above as following:
std::string unescape(const std::string &input) {
std::string retVal;
retVal.reserve(input.size());
typedef std::string::const_iterator iterator_type;
char const *start = "";
iterator_type beg = input.begin();
iterator_type end = input.end();
unescaped_string<iterator_type> p;
if (!bsq::parse(beg, end, p(start), retVal))
retVal = input;
return retVal;
}
std::string escape(const std::string &input) {
typedef std::back_insert_iterator<std::string> sink_type;
std::string retVal;
retVal.reserve(input.size() * 3);
sink_type sink(retVal);
char const *start = "";
escaped_string<sink_type> g;
if (!bk::generate(sink, g(start), input))
retVal = input;
return retVal;
}
[Necromancer mode off]
EDIT01: fixed the zero padding stuff - special thanks to Hartmut Kaiser. EDIT02: Live on CoLiRu.
A: Ordinarily, adding '%' to the decimal value of a char will not work when encoding; the value is supposed to be the hex equivalent, e.g. '/' is '%2F', not '%47'.
I think this is the best and most concise solution for both URL encoding and decoding (no heavy header dependencies):
string urlEncode(string str){
string new_str = "";
char c;
int ic;
const char* chars = str.c_str();
char bufHex[10];
int len = strlen(chars);
for(int i=0;i<len;i++){
c = chars[i];
ic = (unsigned char)c; // cast so bytes >= 0x80 don't sign-extend
// uncomment this if you want to encode spaces with +
/*if (c==' ') new_str += '+';
else */if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~') new_str += c;
else {
sprintf(bufHex,"%X",(unsigned char)c);
if(ic < 16)
new_str += "%0";
else
new_str += "%";
new_str += bufHex;
}
}
return new_str;
}
string urlDecode(string str){
string ret;
char ch;
int i, ii, len = str.length();
for (i=0; i < len; i++){
if(str[i] != '%'){
if(str[i] == '+')
ret += ' ';
else
ret += str[i];
}else{
sscanf(str.substr(i + 1, 2).c_str(), "%x", &ii);
ch = static_cast<char>(ii);
ret += ch;
i = i + 2;
}
}
return ret;
}
A: This version is pure C and can optionally normalize the resource path. Using it with C++ is trivial:
#include <string>
#include <iostream>
int main(int argc, char** argv)
{
const std::string src("/some.url/foo/../bar/%2e/");
std::cout << "src=\"" << src << "\"" << std::endl;
// either do it the C++ conformant way:
char* dst_buf = new char[src.size() + 1];
urldecode(dst_buf, src.c_str(), 1);
std::string dst1(dst_buf);
delete[] dst_buf;
std::cout << "dst1=\"" << dst1 << "\"" << std::endl;
// or in-place with the &[0] trick to skip the new/delete
std::string dst2;
dst2.resize(src.size() + 1);
dst2.resize(urldecode(&dst2[0], src.c_str(), 1));
std::cout << "dst2=\"" << dst2 << "\"" << std::endl;
}
Outputs:
src="/some.url/foo/../bar/%2e/"
dst1="/some.url/bar/"
dst2="/some.url/bar/"
And the actual function:
#include <stddef.h>
#include <ctype.h>
/**
* decode a percent-encoded C string with optional path normalization
*
* The buffer pointed to by @dst must be at least strlen(@src) bytes.
* Decoding stops at the first character from @src that decodes to null.
* Path normalization will remove redundant slashes and slash+dot sequences,
* as well as removing path components when slash+dot+dot is found. It will
* keep the root slash (if one was present) and will stop normalization
* at the first questionmark found (so query parameters won't be normalized).
*
* @param dst destination buffer
* @param src source buffer
* @param normalize perform path normalization if nonzero
* @return number of valid characters in @dst
* @author Johan Lindh <johan@linkdata.se>
* @legalese BSD licensed (http://opensource.org/licenses/BSD-2-Clause)
*/
ptrdiff_t urldecode(char* dst, const char* src, int normalize)
{
char* org_dst = dst;
int slash_dot_dot = 0;
char ch, a, b;
do {
ch = *src++;
if (ch == '%' && isxdigit(a = src[0]) && isxdigit(b = src[1])) {
if (a < 'A') a -= '0';
else if(a < 'a') a -= 'A' - 10;
else a -= 'a' - 10;
if (b < 'A') b -= '0';
else if(b < 'a') b -= 'A' - 10;
else b -= 'a' - 10;
ch = 16 * a + b;
src += 2;
}
if (normalize) {
switch (ch) {
case '/':
if (slash_dot_dot < 3) {
/* compress consecutive slashes and remove slash-dot */
dst -= slash_dot_dot;
slash_dot_dot = 1;
break;
}
/* fall-through */
case '?':
/* at start of query, stop normalizing */
if (ch == '?')
normalize = 0;
/* fall-through */
case '\0':
if (slash_dot_dot > 1) {
/* remove trailing slash-dot-(dot) */
dst -= slash_dot_dot;
/* remove parent directory if it was two dots */
if (slash_dot_dot == 3)
while (dst > org_dst && *--dst != '/')
/* empty body */;
slash_dot_dot = (ch == '/') ? 1 : 0;
/* keep the root slash if any */
if (!slash_dot_dot && dst == org_dst && *dst == '/')
++dst;
}
break;
case '.':
if (slash_dot_dot == 1 || slash_dot_dot == 2) {
++slash_dot_dot;
break;
}
/* fall-through */
default:
slash_dot_dot = 0;
}
}
*dst++ = ch;
} while(ch);
return (dst - org_dst) - 1;
}
A: the juicy bits
#include <ctype.h> // isdigit, tolower
char from_hex(char ch) {
return isdigit(ch) ? ch - '0' : tolower(ch) - 'a' + 10;
}
char to_hex(char code) {
static char hex[] = "0123456789abcdef";
return hex[code & 15];
}
noting that
char d = from_hex(hex[0]) << 4 | from_hex(hex[1]);
as in
// %7B = '{'
char d = from_hex('7') << 4 | from_hex('B');
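Going the other way with the same table lookup: a quick sketch of encoding a single byte. (percent_encode_byte is my name for illustration — it is not from the snippet above — and I've inlined a copy of to_hex so the sketch compiles on its own.)

```cpp
#include <string>

// Standalone copy of the to_hex helper so this sketch is self-contained.
static char to_hex(char code) {
    static const char hex[] = "0123456789abcdef";
    return hex[code & 15];
}

// Percent-encoding a byte is just a '%' plus two nibble lookups.
std::string percent_encode_byte(unsigned char c) {
    std::string out = "%";
    out += to_hex(c >> 4);  // high nibble
    out += to_hex(c);       // low nibble (to_hex masks with & 15)
    return out;
}
// percent_encode_byte('{') yields "%7b"
```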
A: You can use "g_uri_escape_string()" function provided glib.h.
https://developer.gnome.org/glib/stable/glib-URI-Functions.html
#include <stdio.h>
#include <stdlib.h>
#include <glib.h>
int main() {
char *uri = "http://www.example.com?hello world";
char *encoded_uri = NULL;
//as per wiki (https://en.wikipedia.org/wiki/Percent-encoding)
char *escape_char_str = "!*'();:@&=+$,/?#[]";
encoded_uri = g_uri_escape_string(uri, escape_char_str, TRUE);
printf("[%s]\n", encoded_uri);
free(encoded_uri);
return 0;
}
compile it with:
gcc encoding_URI.c `pkg-config --cflags --libs glib-2.0`
A: I know the question asks for a C++ method, but for those who might need it, I came up with a very short function in plain C to encode a string. It doesn't create a new string; rather it alters the existing one, which means the buffer must be large enough to hold the encoded result. Very easy to maintain.
void urlEncode(char *string)
{
char charToEncode;
int posToEncode;
while ((posToEncode=strspn(string,"1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-_.~"))<strlen(string))
{
charToEncode=string[posToEncode];
memmove(string+posToEncode+3,string+posToEncode+1,strlen(string+posToEncode));
string[posToEncode]='%';
string[posToEncode+1]="0123456789ABCDEF"[(unsigned char)charToEncode>>4];
string[posToEncode+2]="0123456789ABCDEF"[charToEncode&0xf];
string+=posToEncode+3;
}
}
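Because the function rewrites the string in place, the caller must supply worst-case slack: each input byte can grow to three output bytes. Here's a hedged C++ usage sketch with a standalone copy of the encoder (lightly adjusted so a string whose first character needs escaping is handled and so bytes >= 0x80 don't sign-extend; encoded() is my wrapper name, not part of the answer above):

```cpp
#include <cstring>
#include <string>
#include <vector>

// Standalone, lightly adjusted copy of the in-place encoder above.
void urlEncode(char *s)
{
    size_t pos;
    while ((pos = std::strspn(s,
        "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "abcdefghijklmnopqrstuvwxyz-_.~")) < std::strlen(s))
    {
        unsigned char c = s[pos];
        // shift the tail (including the nul) right by two bytes
        std::memmove(s + pos + 3, s + pos + 1, std::strlen(s + pos));
        s[pos]     = '%';
        s[pos + 1] = "0123456789ABCDEF"[c >> 4];
        s[pos + 2] = "0123456789ABCDEF"[c & 0xf];
        s += pos + 3;  // continue after the escape we just wrote
    }
}

// Convenience wrapper that supplies the worst-case room:
// every input byte can become three ("%XX"), plus the terminating nul.
std::string encoded(const std::string &in)
{
    std::vector<char> buf(in.size() * 3 + 1);
    std::strcpy(buf.data(), in.c_str());
    urlEncode(buf.data());
    return std::string(buf.data());
}
```

For example, encoded("hello world/42") yields "hello%20world%2F42".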
A: Had to do it in a project without Boost. So, ended up writing my own. I will just put it on GitHub: https://github.com/corporateshark/LUrlParser
clParseURL URL = clParseURL::ParseURL( "https://name:pwd@github.com:80/path/res" );
if ( URL.IsValid() )
{
cout << "Scheme : " << URL.m_Scheme << endl;
cout << "Host : " << URL.m_Host << endl;
cout << "Port : " << URL.m_Port << endl;
cout << "Path : " << URL.m_Path << endl;
cout << "Query : " << URL.m_Query << endl;
cout << "Fragment : " << URL.m_Fragment << endl;
cout << "User name : " << URL.m_UserName << endl;
cout << "Password : " << URL.m_Password << endl;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "101"
} |
Q: How can I disable copy / paste in Flex Text controls? Long story short, I need to put some text in my Flex application and I don't want users to be able to copy it. I was going to use a label, but apparently labels do not support text wrapping. Can I make it so that users cannot select text in a Flex Text control?
Thanks.
A: You could use the Text control and set the selectable property to false...
<mx:Text width="175" selectable="false" text="This is an example of a multiline text string in a Text control." />
A: You can disable paste of more than 1 character by trapping the textInput event:
private function onTextInput(e:flash.events.TextEvent):void
{
if (e.text.length > 1)
e.preventDefault();
}
A: You can set the enabled property to "false", which disables user interaction. You may also want to change the disabledColor property to a color of your choice.
<mx:Text enabled="false" disabledColor="0x000000" text="Text"/>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How important is it to make your website layout fit on low-res displays? How important is it to make a website layout fit on 640 x 480 and 800 x 600 displays? For some time I have been designing with the assumption of at least 1024 x 768, but I haven't been doing it professionally (just on my site which is just a blog that only 10-15 friends read). Does anyone have any non-anecdotal statistics about the distribution of screen resolutions from real web users?
Note 1: "Non-anecdotal" means please don't give answers like "I know lots of people using 640x480" or "my mom runs in 800x600 so it must be really common." I'd rather have actual data (or links to actual data), especially if it is data about general users (rather than geeks).
Note 2: I'm not concerned with extremely-small displays like those on PDAs and cell phones (at least not at the moment). I'm just talking about desktop/laptop monitors.
A: I think the general assumption these days is that 1024x768 is acceptable; that's certainly the route I take. A good benchmark is to look at big sites like the BBC, which recently switched to a 1024x768 design. That's not to say every site has to be that; for smaller sites I often stick to 800x600 just so the content doesn't look pathetic :o)
Here is the source I often use for web statistics with its august 2008 stats showing 800x600 at 7%.
A: you're running in my web browser. How certain are you that my browser is full-screen even if I'm using 1024x768?
A: I just checked the Google Analytics data for all of the domains I monitor. The screen resolution for less than 800x600 is zero and for 800x600 it is 3-6%
Maybe you can get Google Analytics set up on the domain ahead of time and gather some information about the visitors.
A: If you can make your website support slightly lower resolutions that's probably worthwhile. That way people can run your website in a non-full screen window and it will still look ok and won't have ugly scroll bars.
Also, "netbooks" such as the Asus EEE are gaining popularity and they typically have small (9-10" screens) with very modest resolutions (e.g. 800x480 or 1024x600).
A: This in an anecdotal answer, but it something to be aware of...
We designed a web app for a client that was to be used internally (really a thin-client intranet app). We were told that designing for 1024x768 was preferred as almost all of their users had that resolution or higher. In fact, we were told to ignore any problems that users running at lower resolutions reported since their internal help desk answer was "Set your resolution to 1024x768 or better and run with the browser maximized".
Right after the app was deployed, there were help desk tickets on pages requiring too much horizontal scrolling and some things not being positioned correctly but the resolution was reported to be 1024x768. What we didn't account for was users who use browser side-bars (history, bookmarks, RSS feeds, etc.). None of our developers used them, so it never crossed their minds. The not positioned correctly parts were true errors on our part. The horizontal scrolling was because we set everything to automatically resize but to stop horizontal resize at 768 (actually the client area width at 1024x768).
Also don't forget that users may change their text size settings to bizarre settings. Actually it's not bizarre when the user is sight-impaired, but it is important to remember to test for and accommodate it. Firefox 3 does a good job of proportionally adjusting text and images, but users running other and older browsers have less control.
A: A good practice is to design the core of the site within about 750 pixels, so that the 5%-15% of your audience who still have 800px monitors can read it without scrolling. You can then put less-crucial information (advertising, non-mission-critical features) in an additional 250px column to the right that will be visible to visitors with 1024px or more.
Also as jimharries99 suggests, internet-wide statistics for screens (or browsers, or whatever) may not match your audience profile, so you should use analytics to get a handle on your specific audience. For example, some of the sites we build get most of their traffic within a certain government agency that standardizes on IE6 (other browsers forbidden) and has a lot of 800x600 displays. Wired Magazine's audience, on the other hand, probably has bigger screens and more modern browsers than average.
A: Personally, I design websites so that they'll appear as I envision them on a 1024 x 768 monitor. I'm aware that there are people who will window their browsers, and people who will also have bigger monitors and have it running full screen.
I feel that 1024 x 768 is the right level of compromise at the moment. At the same time, I try to make sure that my site degrades acceptably at lower resolutions, to keep the people who run their browser in windowed mode happy too. If push comes to shove and I don't have a choice, I'd rather exclude the lower resolutions and keep my vision intact so that its impact is not lost.
With modern browsers though, a user is able to zoom out, so depending on the content of the site the issue of degradability can be moot - if the layout is too big for the res, the user can simply zoom out.
A: W3Schools tracks a variety of browser statistics, including display resolution. You can find the display resolution here: http://www.w3schools.com/browsers/browsers_display.asp
Of course, you should use statistics as close to your target audience as possible. These statistics are drawn from the browsers of people who browse W3School's site (primarily developers). Other target audiences are likely to have a different profile.
A: Here's an interesting blog post that gives specific statistics based on 6 million hits during February of this year to this particular network's sites.
Top two: 47% 1024x768, and 30% 1280x1024
A: Valve collect data through steam on the capabilities of machines connecting to them, you can see the results here:
http://steampowered.com/status/survey.html
bear in mind these are gamers' systems and so tend to be more powerful than the average PC in the street; as you can see there aren't many systems below 1024x768
A: Whilst I agree that 1024x768 is generally a good minimum desktop resolution, I seem to have spent an increasing amount of time of late peering at websites through smaller panels - particularly Windows Mobile devices, the iPhone, and occasionally an eeepc at 800 x 480.
Whilst the eeepc might be unusual (and undoubtedly will be replaced by 1024x768 in that machine class), access by mobile devices is only likely to increase, and it would be wise - especially if you're running the sort of site which people would want to connect to away from the desktop - to be aware of how your site works on handheld devices.
A: According to Jakob Nielsen's Alertbox, as of July 2006, 60% of monitors were set at 1024x768, so optimizing your site for that resolution is his recommendation.
How important is it to style your site for lower resolutions? I guess the answer to that depends on how your site looks at 800x600 (and below) ... if it's unreadable, then it's a question of the importance of that audience to your site. If you need every viewer, no matter their resolution, then you should work on styles that will produce a tolerable experience for the occasional 640x480 person. If not, you might try styling it as you have been, and work on alternate styles only when you have a business case for doing so.
A: add Google Analytics to your site, its free, the data will only be about your audience
A: I think the real answer is to design your site such that it degrades gracefully in a lower resolution. Having bars to the right and left of the main column of content that won't resize and cause your site's main content to squish isn't really a good idea.
And like has been pointed out, just because a person runs at 1280x1024 doesn't mean that their web browser will be maximized - some people insist on running everything in a non-maximized window (I think the Mac used to enforce this). And this is to say nothing of the people who have ten toolbars and only run at 800x600 because the OS won't let them go to 640x480 anymore (and then they wonder why everything looks "fuzzy" on an LCD screen and why their eyes are going to shit)
Of course designing your site this way isn't always a luxury you have - sometimes the client wants it pixel perfect to what the Photoshop designed with an enormous monitor came up with. Just do your best with what you can and be prepared to have a "this is what you said you wanted" argument later (assuming you can't explain it to them successfully beforehand)
A: Could I suggest you use a auto-sizing layout that fits the browser window, however wide that happens to be? While you might design it to look best at 1024x768, people with lower resolutions (due, for instance, to poor eyesight or a cheap computer) simply can't use a higher resolution, and shouldn't have to scroll left and right to read every line of text.
A: I have a 1920 X 1200 screen size so I can have multiple windows open all at once and see them all. If a site tried greedily to use all of that space for itself, I would stop going to that site.
A: Here's two reasons to design sites at 700-800 pixels wide.
*
*Text is easier to read when the columns are not too wide. If you use 400-500 pixels wide text, you have room for a 250-400 pixel side column which works well for screens of all sizes.
*Your pages print much nicer. I run a site with information that many users may print, and with the sidebar disabled in the print CSS, the pages fit very nicely on a sheet of paper.
A: Look, monitors are getting bigger and people are wanting more and more. People like the bigger monitors. The only people that have small monitors would be shared computer centers or some older people.
@All, we need to join together as developers and agree to only build apps for 1024x768. If we all join together, we will make it the standard.
Don't ask what your monitor can do for you, ask what you can do to the monitor!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Panel.Dock Fill ignoring other Panel.Dock setting If you create a panel on a form and set it to Dock=Top and drop another panel and set its Dock=Fill, it may fill the entire form, ignoring the first panel. Changing the tab order does nothing.
A: Another, potentially cleaner option is to use the TableLayoutPanel control. Set up one row of the desired height for your top dock, and another row to fill 100% for your bottom. Set both panels inside to Fill, and you're done.
(TableLayoutPanel does take some getting used to, though.)
A: Docking layout depends on the order of sibling controls. Controls are docked "bottom up", so the last control in the collection is docked first. A docked control only take the layout of previously docked siblings into account. Hence the control with Dock=Fill should be first (top) in the sibling order, if you want it to take the other docked controls into account. If it is not the first control, earlier controls will overlap it.
This can be confusing because the sibling-order is not necessarily the same as the visual order, and the sibling order is not always apparent from the design view.
The Document outline window (View -> Other Windows -> Document outline) gives a useful tree-view over the control hierarchy and order, and allows you to change the sibling order of controls.
You can also change sibling order directly in the designer by context menu -> Bring to front / Send to back, which moves the control to be first or last of the siblings. These menu labels may be somewhat confusing since the actual effect depends on the layout model.
With fixed positioned controls, the 2D position is independent of the sibling order, but when controls are overlapping, the control earliest in the order will be "on top", hiding part of siblings later in the order. In this context Bring to front / Send to back makes sense.
Inside flow- or table-layout panels, the creation order determines the visual order of the controls. There is no overlapping controls. So bring to front/send to back really means make first or last in the order of controls.
With docked layout, the bring to front / send to back may be even more confusing since it determines in which order the docking is calculated, so "bring to front" on a fill-docked control will place the control in the middle of the parent, taking all edge-docked controls into account.
A: If you don't want to change the order of the elements inside the code, you can use the method Container.Controls.SetChildIndex() with Container being the e.g. Form, Panel etc. you want do add your controls to.
Example:
//Container ------------------------------------
Panel Container = new Panel();
//Top-Docked Element ---------------------------
FlowLayoutPanel ButtonArea = new FlowLayoutPanel();
Container.Controls.Add(ButtonArea);
Container.Controls.SetChildIndex(ButtonArea, 1);
ButtonArea.Dock = DockStyle.Top;
//Fill-Docked Element --------------------------
RichTextBox box = new RichTextBox();
Container.Controls.Add(box);
Container.Controls.SetChildIndex(box, 0); //setting this to 0 does the trick
box.Dock = DockStyle.Fill;
A: Right click on the panel with Dock=Fill and click 'Bring to Front'.
This makes this control be created last, which takes into account the Dock settings on other controls in the same container.
A: I've had the same problem and I managed to solve it.
If you have a control with DockStyle.Fill, the others should also have a DockStyle, but Top or whatever you want.
The important thing is to add the control with DockStyle.Fill first in Controls then the others.
Example:
ComboBox cb = new ComboBox();
cb.Dock = DockStyle.Top;
DataGridView gv = new DataGridView();
gv.Dock = DockStyle.Fill;
Controls.Add(gv); // this is okay
Controls.Add(cb);
but if we put cb first
Controls.Add(cb);
Controls.Add(gv); // gv will overlap the combo box.
A: JacquesB had the idea with the document outline but the hierarchy didn't solve my problem.
My controls were not in a hierarchical style they were just listed with the same parent.
I learned that if you changed the order it will fix the way you want it to look.
The controls on the bottom of the list will overlap the controls on top of it in the Document Outline window. In your case you would make sure that the first panel is below the second panel and so forth.
A: Here is a trick that worked for me..
Place the Top item and dock it top.
Place a Splitter, and also dock it top, then set it disabled (unless you want to resize the top).
Then Place the Fill object and set Docking to Fill. The object will stay below the splitter.
A: I ran into the same issue. Mine was with adding new/custom controls below the menu strip during run time. The problem was the controls when docked, decided to dock from the top of the form and completely ignoring the menu strip entirely, very annoying if you ask me.
As this had to be done dynamically with code and not during design mode this became extremely frustrating. The simplest way I found is to create a panel during design mode and dock below the menu strip. From there you can just add/remove the controls to the panel and you can dock it during run time. No need to mess with all your controls on your form that do not really need to change, too much work depending on what you really need to do.
myControl.Dock = DockStyle.Fill;
panel.Controls.Add(myControl);
A: I know this is an old post but I discovered something useful. To adjust sibling control order programatically for dynamically created control(s), you can do something like:
parentForm.Controls.SetChildIndex (myPanel, 0)
In my case, I did this to move a Dock/Fill panel to be the first control in my form so that it would not overlap with another docked control set to Dock/Top (a menu strip).
A: Also may be a fast solution to take the "Filled" component and right click, cut and paste in the desired area.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "170"
} |
Q: Control for getting hotkeys like tab and space I have a dialog box that allows users to set hotkeys for use in a 3d program on windows. I'm using CHotKeyCtrl, which is pretty good, but doesn't handle some keys that the users would like to use - specifically, tab and space.
The hotkey handling is smart enough to be able to fire on those keys, I just need a UI to let them be set. A control similar to CHotKeyCtrl would be ideal, but other workarounds are also appreciated.
A: One workaround would be to use a stock standard edit control with a message hook function.
This would allow you to trap the keyboard WM_KEYDOWN messages sent to that edit control.
The hook function would look something like this:
LRESULT CALLBACK MessageHook(int code, WPARAM wParam, LPMSG lpMsg)
{
LRESULT lResult = 0;
if ((code >= 0) && (code == MSGF_DIALOGBOX))
{
if (lpMsg->message == WM_KEYDOWN)
{
//-- process the key down message
lResult = 1;
}
}
// do default processing if required
if (lResult == 0)
{
lResult = CallNextHookEx(FilterHook, code, wParam, (LPARAM)lpMsg);
}
return lResult;
}
The hook can then be attached to edit control when the edit control gets focus as follows:
//-- create an instance thunk for our hook callback
FARPROC FilterProc = (FARPROC) MakeProcInstance((HOOKPROC)(MessageHook),
hInstance);
//-- attach the message hook
FilterHook = SetWindowsHookEx(WH_MSGFILTER,
(HOOKPROC)FilterProc,
hInstance, GetCurrentThreadId());
and removed when the edit control loses focus as follows:
//-- remove a message hook
UnhookWindowsHookEx(MessageFilterHook);
Using this approach every key press will be sent to the hook, provided the edit control has focus.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Migrate SQL2000 Database to SQL2005 on another machine I've been beating myself over the head with this app migration for a few days now. What I have is an old MS SQL Server 2000-backed application that is being upgraded to a Windows 2003 Server running MS SQL Server 2005. I know a little about SQL Server, but obviously not enough.
I tried backing up the database on the old server by going to Databases->[Database]->All Tasks->Backup Database..., selecting 'Full', and saving the file. I moved that backup file to the new server and tried to do a restore but it complained saying that it was looking for the [Database].mdf file in the location it was on the old server.
So then I tried to do an Export Data, selected the local SQL 2000 database, pointed it to the new SQL 2005 database on the other machine, and it gets all the way to the end and dies complaining about the way one of the tables is being joined.
I then tried doing a 'Generate SQL' command on the 2000 box and running that under SQL 2005. It looks like there are a lot of outer joins using the old *= syntax that SQL Server 2005 doesn't support anymore and, this being a vendor database, I have no idea what their true intentions were when they set up these tables.
Is there any other way I can try migrating this database over?
A: The backup file has the "hard" location of the data files stored in it. You just need to update them:
When you restore in 2005, before you click the final "ok" to restore (after you have selected the .bak file), go to the options tab. This will have the mdf and ldf locations that were in the backup file. Change these to legitimate directories on your new machine.
A: You could detach the database from the old server, copy the mdf and ldf (and any other related files) to the new server, and then attach the database to the new server.
When you attach it, SQL Server will upgrade this to a 2005 formatted database. If you have problems with compatibility, you can change that, too. In SQL Server Management studio, Right click your database, click properties, click Options, and change the compatibility mode to 'SQL Server 2000 (80)'.
A: As Peter noted, you have to change the path to a new one that exists on the new server.
One trick I learned years ago is to click the last option button ("Leave the database in read-only ...") for a minute to view and copy where the data files are located on the new server. Just don't forget to put it back to the first option before restoring.
A: Create a backup of the SQL 2000 database to a file. Create a new database on SQL 2005 with the same name, restore the backup file into the new database with the "force restore over existing database" option, and set the compatibility level of the new database to "SQL Server 2000 (80)".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Volatile vs. Interlocked vs. lock Let's say that a class has a public int counter field that is accessed by multiple threads. This int is only incremented or decremented.
To increment this field, which approach should be used, and why?
*
*lock(this.locker) this.counter++;,
*Interlocked.Increment(ref this.counter);,
*Change the access modifier of counter to public volatile.
Now that I've discovered volatile, I've been removing many lock statements and the use of Interlocked. But is there a reason not to do this?
A: Worst (won't actually work)
Change the access modifier of counter to public volatile
As other people have mentioned, this on its own isn't actually safe at all. The point of volatile is that multiple threads running on multiple CPUs can and will cache data and re-order instructions.
If it is not volatile, and CPU A increments a value, then CPU B may not actually see that incremented value until some time later, which may cause problems.
If it is volatile, this just ensures the two CPUs see the same data at the same time. It doesn't stop them at all from interleaving their read and write operations, which is the problem you are trying to avoid.
Second Best:
lock(this.locker) this.counter++;
This is safe to do (provided you remember to lock everywhere else that you access this.counter). It prevents any other threads from executing any other code which is guarded by locker.
Using locks also, prevents the multi-CPU reordering problems as above, which is great.
The problem is, locking is slow, and if you re-use the locker in some other place which is not really related then you can end up blocking your other threads for no reason.
Best
Interlocked.Increment(ref this.counter);
This is safe, as it effectively does the read, increment, and write in 'one hit' which can't be interrupted. Because of this, it won't affect any other code, and you don't need to remember to lock elsewhere either. It's also very fast (as MSDN says, on modern CPUs, this is often literally a single CPU instruction).
I'm not entirely sure however if it gets around other CPUs reordering things, or if you also need to combine volatile with the increment.
Interlocked notes:
*
*INTERLOCKED METHODS ARE CONCURRENTLY SAFE ON ANY NUMBER OF COREs OR CPUs.
*Interlocked methods apply a full fence around instructions they execute, so reordering does not happen.
*Interlocked methods do not need, and do not even support, volatile fields, since volatile places only a half fence around operations on a given field while Interlocked uses a full fence.
Footnote: What volatile is actually good for.
As volatile doesn't prevent these kinds of multithreading issues, what's it for? A good example is saying you have two threads, one which always writes to a variable (say queueLength), and one which always reads from that same variable.
If queueLength is not volatile, thread A may write five times, but thread B may see those writes as being delayed (or even potentially in the wrong order).
A solution would be to lock, but you could also use volatile in this situation. This would ensure that thread B will always see the most up-to-date thing that thread A has written. Note however that this logic only works if you have writers who never read, and readers who never write, and if the thing you're writing is an atomic value. As soon as you do a single read-modify-write, you need to go to Interlocked operations or use a Lock.
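This writer/reader pattern can be sketched runnably on the JVM, where Java's volatile gives a comparable (in fact stronger) visibility guarantee than the C# keyword discussed above; all class and field names below are illustrative:

```java
public class VolatileFlag {
    // 'volatile' guarantees the reader sees the writer's updates; it does
    // NOT make compound operations like ++ atomic.
    public static volatile boolean running = true;
    public static volatile int queueLength = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 1; i <= 5; i++) queueLength = i; // single writer
            running = false;                              // signal completion
        });
        writer.start();
        while (running) { /* safe to spin: 'running' is volatile */ }
        writer.join();
        System.out.println(queueLength); // prints 5 - the last write is visible
    }
}
```

Without volatile on running, the spin loop could legally read a stale cached value forever; with it, the reader is guaranteed to observe the writer's final state.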
A: I second Jon Skeet's answer and want to add the following links for everyone who want to know more about "volatile" and Interlocked:
Atomicity, volatility and immutability are different, part one - (Eric Lippert's Fabulous Adventures In Coding)
Atomicity, volatility and immutability are different, part two
Atomicity, volatility and immutability are different, part three
Sayonara Volatile - (Wayback Machine snapshot of Joe Duffy's Weblog as it appeared in 2012)
A: "volatile" does not replace Interlocked.Increment! It just makes sure that the variable is not cached, but used directly.
Incrementing a variable requires actually three operations:
*
*read
*increment
*write
Interlocked.Increment performs all three parts as a single atomic operation.
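The lost-update problem those three steps cause, and the atomic fix, can be demonstrated on the JVM, where AtomicInteger.incrementAndGet() plays the role of Interlocked.Increment (a sketch with illustrative names):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IncrementRace {
    public static int plain = 0;                              // unsynchronized
    public static final AtomicInteger atomic = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    plain++;                  // read / increment / write: racy
                    atomic.incrementAndGet(); // one indivisible operation
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        // atomic is always exactly 400000; plain usually loses updates
        System.out.println("atomic=" + atomic.get() + " plain=" + plain);
    }
}
```

The atomic counter is guaranteed to end at 400000; the plain counter typically ends lower because increments from different threads interleave between the read and the write.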
A: Either lock or interlocked increment is what you are looking for.
Volatile is definitely not what you're after - it simply tells the compiler to treat the variable as always changing even if the current code path allows the compiler to optimize a read from memory otherwise.
e.g.
while (m_Var)
{ }
if m_Var is set to false in another thread but it's not declared as volatile, the compiler is free to make it an infinite loop (but doesn't mean it always will) by making it check against a CPU register (e.g. EAX because that was what m_Var was fetched into from the very beginning) instead of issuing another read to the memory location of m_Var (this may be cached - we don't know and don't care and that's the point of cache coherency of x86/x64).
All the posts earlier by others who mentioned instruction reordering simply show they don't understand x86/x64 architectures. Volatile does not issue read/write barriers as implied by the earlier posts saying 'it prevents reordering'. In fact, thanks again to MESI protocol, we are guaranteed the result we read is always the same across CPUs regardless of whether the actual results have been retired to physical memory or simply reside in the local CPU's cache. I won't go too far into the details of this but rest assured that if this goes wrong, Intel/AMD would likely issue a processor recall! This also means that we do not have to care about out of order execution etc. Results are always guaranteed to retire in order - otherwise we are stuffed!
With Interlocked Increment, the processor needs to go out, fetch the value from the address given, then increment and write it back -- all that while having exclusive ownership of the entire cache line (lock xadd) to make sure no other processors can modify its value.
With volatile, you'll still end up with just 1 instruction (assuming the JIT is efficient as it should) - inc dword ptr [m_Var]. However, the processor (cpuA) doesn't ask for exclusive ownership of the cache line while doing all it did with the interlocked version. As you can imagine, this means other processors could write an updated value back to m_Var after it's been read by cpuA. So instead of now having incremented the value twice, you end up with just once.
Hope this clears up the issue.
For more info, see 'Understand the Impact of Low-Lock Techniques in Multithreaded Apps' - http://msdn.microsoft.com/en-au/magazine/cc163715.aspx
p.s. What prompted this very late reply? All the replies were so blatantly incorrect (especially the one marked as answer) in their explanation I just had to clear it up for anyone else reading this. shrugs
p.p.s. I'm assuming that the target is x86/x64 and not IA64 (it has a different memory model). Note that Microsoft's ECMA specs is screwed up in that it specifies the weakest memory model instead of the strongest one (it's always better to specify against the strongest memory model so it is consistent across platforms - otherwise code that would run 24-7 on x86/x64 may not run at all on IA64 although Intel has implemented similarly strong memory model for IA64) - Microsoft admitted this themselves - http://blogs.msdn.com/b/cbrumme/archive/2003/05/17/51445.aspx.
A: I would like to add to what was mentioned in the other answers about the difference between volatile, Interlocked, and lock:
The volatile keyword can be applied to fields of these types:
*
*Reference types.
*Pointer types (in an unsafe context). Note that although the pointer itself can be volatile, the object that it points to cannot. In other
words, you cannot declare a "pointer" to be "volatile".
*Simple types such as sbyte, byte, short, ushort, int, uint, char, float, and bool.
*An enum type with one of the following base types: byte, sbyte, short, ushort, int, or uint.
*Generic type parameters known to be reference types.
*IntPtr and UIntPtr.
Other types, including double and long, cannot be marked "volatile"
because reads and writes to fields of those types cannot be guaranteed
to be atomic. To protect multi-threaded access to those types of
fields, use the Interlocked class members or protect access using the
lock statement.
A: Interlocked functions do not lock. They are atomic, meaning that they can complete without the possibility of a context switch during increment. So there is no chance of deadlock or wait.
I would say that you should always prefer it to a lock and increment.
Volatile is useful if you need writes in one thread to be read in another, and if you want the optimizer to not reorder operations on a variable (because things are happening in another thread that the optimizer doesn't know about). It's an orthogonal choice to how you increment.
This is a really good article if you want to read more about lock-free code, and the right way to approach writing it
http://www.ddj.com/hpc-high-performance-computing/210604448
A: EDIT: As noted in comments, these days I'm happy to use Interlocked for the cases of a single variable where it's obviously okay. When it gets more complicated, I'll still revert to locking...
Using volatile won't help when you need to increment - because the read and the write are separate instructions. Another thread could change the value after you've read but before you write back.
Personally I almost always just lock - it's easier to get right in a way which is obviously right than either volatility or Interlocked.Increment. As far as I'm concerned, lock-free multi-threading is for real threading experts, of which I'm not one. If Joe Duffy and his team build nice libraries which will parallelise things without as much locking as something I'd build, that's fabulous, and I'll use it in a heartbeat - but when I'm doing the threading myself, I try to keep it simple.
A: lock(...) works, but may block a thread, and could cause deadlock if other code is using the same locks in an incompatible way.
Interlocked.* is the correct way to do it ... much less overhead as modern CPUs support this as a primitive.
volatile on its own is not correct. A thread attempting to retrieve and then write back a modified value could still conflict with another thread doing the same.
A: I did some tests to see how the theory actually works: kennethxu.blogspot.com/2009/05/interlocked-vs-monitor-performance.html. My test was more focused on CompareExchange, but the result for Increment is similar. Interlocked is not necessarily faster in a multi-CPU environment. Here is the test result for Increment on a two-year-old 16-CPU server. Bear in mind that the test also involves the safe read after the increase, which is typical in the real world.
D:\>InterlockVsMonitor.exe 16
Using 16 threads:
InterlockAtomic.RunIncrement (ns): 8355 Average, 8302 Minimal, 8409 Maxmial
MonitorVolatileAtomic.RunIncrement (ns): 7077 Average, 6843 Minimal, 7243 Maxmial
D:\>InterlockVsMonitor.exe 4
Using 4 threads:
InterlockAtomic.RunIncrement (ns): 4319 Average, 4319 Minimal, 4321 Maxmial
MonitorVolatileAtomic.RunIncrement (ns): 933 Average, 802 Minimal, 1018 Maxmial
A: I'm just here to point out the mistake about volatile in Orion Edwards' answer.
He said:
"If it is volatile, this just ensures the two CPUs see the same data at
the same time."
It's wrong. Microsoft's documentation about volatile mentions:
"On a multiprocessor system, a volatile read operation does not
guarantee to obtain the latest value written to that memory location
by any processor. Similarly, a volatile write operation does not
guarantee that the value written would be immediately visible to other
processors."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "774"
} |
Q: minOccurs="0" on required parameters in WSDL on ASP.NET web service I'm writing a simple web service using Microsoft Visual Web Developer 2005 (Express Edition), and the dynamically generated WSDL has a minOccurs="0" for all the parameters.
How do I get minOccurs="1" for the required parameters without resorting to creating a static WSDL file?
I need to do this using a ASP.NET Web Service (.NET v2). So, no WCF.
A: I think that the XmlElement(IsNullable = true) attribute will do the job:
using System.Xml.Serialization;
[WebMethod]
public string MyService([XmlElement(IsNullable = true)] string arg)
{
return "1";
}
A: from an msdn forum
"If you are creating a new web service, I highly recommend building the web service using the Windows Communication Foundation (WCF) instead of using ASP.NET Web Services.
In WCF, when you specify the data contract for your service you can specify that a given data member is required using the IsRequired property on the DataMemberAttribute.
"
source -
http://social.msdn.microsoft.com/forums/en-US/asmxandxml/thread/40ab5748-d32c-42a6-a47f-984ba18a1fe2/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: splitting files for P2P application I have to implement a middleware system for file sharing, and it has to split the files not unlike what happens on BitTorrent, where it sends and receives separate pieces simultaneously from various sources. How do I do that? Is there a library, or do I have to implement the file splitting myself?
A: Split the files into blocks, let's say of 100KB each. Then calculate a SHA hash (or use some other hashing algorithm) on each of the blocks. So if the file is 905KB, you would have 10 such hashes calculated.
The server would contain a hash definition file for each file that it serves. This hash definition file would contain a list of all of the blocks of the file, along with the hash. So if the server is serving our 905KB file called test.exe. Then we would have another file called test.exe.hashes which contains a listing of the 10 hashes of the file.
The client would download the hash definition file, and ensure that it has all of the blocks. The client can request each block individually and after it's downloaded, it can calculate the hash on its end again to ensure there is no corruption.
You don't need to physically split the file, splitting a file is just reading the part of it that you are interested in. The first block of the file is from byte range 0 to 102399, the next block is from 102400 to 204800, and so on. So just open the file, seek to that position, read the data, and close the file.
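The block-hashing step described above can be sketched in Java (the class and method names are illustrative; the answer's 100 KB block size is kept): each block is hashed independently so a client can verify pieces as they arrive.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

public class BlockHasher {
    public static final int BLOCK_SIZE = 100 * 1024; // 100 KB, as in the answer

    /** Hash each fixed-size block of the file; the last block may be shorter. */
    public static List<String> hashBlocks(Path file)
            throws IOException, NoSuchAlgorithmException {
        List<String> hashes = new ArrayList<>();
        byte[] block = new byte[BLOCK_SIZE];
        try (InputStream in = Files.newInputStream(file)) {
            int read;
            while ((read = in.readNBytes(block, 0, BLOCK_SIZE)) > 0) {
                MessageDigest sha = MessageDigest.getInstance("SHA-256");
                sha.update(block, 0, read);
                StringBuilder hex = new StringBuilder();
                for (byte b : sha.digest()) hex.append(String.format("%02x", b));
                hashes.add(hex.toString());
            }
        }
        return hashes;
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[905 * 1024]);      // the answer's 905 KB file
        System.out.println(hashBlocks(tmp).size());  // prints 10
        Files.delete(tmp);
    }
}
```

Serializing this list to a companion file (test.exe.hashes in the answer's example) gives the hash definition file a client downloads first and verifies each block against.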
A: Look at the implementation of Split & Concat (GNU software).
A: You might want to consider using Reed-Solomon encoding. It will make getting the final blocks much easier. This is the route Microsoft took in Avalaunch.
A: Out of interest: why not just implement BitTorrent or something like that? There are many open-source clients (e.g. Azureus) and the protocol is really simple. There is also an article with a little more detail, but this contains some extensions - when in doubt, the official spec is always right.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Polymorphism vs Overriding vs Overloading In terms of Java, when someone asks:
what is polymorphism?
Would overloading or overriding be an acceptable answer?
I think there is a bit more to it than that.
If you had an abstract base class that defined a method with no implementation, and you defined that method in the subclass, is that still overriding?
I think overloading is not the right answer for sure.
A: The clearest way to express polymorphism is via an abstract base class (or interface)
public abstract class Human{
...
public abstract void goPee();
}
This class is abstract because the goPee() method is not definable for Humans. It is only definable for the subclasses Male and Female. Also, Human is an abstract concept — You cannot create a human that is neither Male nor Female. It’s got to be one or the other.
So we defer the implementation by using the abstract class.
public class Male extends Human{
...
@Override
public void goPee(){
System.out.println("Stand Up");
}
}
and
public class Female extends Human{
...
@Override
public void goPee(){
System.out.println("Sit Down");
}
}
Now we can tell an entire room full of Humans to go pee.
public static void main(String[] args){
ArrayList<Human> group = new ArrayList<Human>();
group.add(new Male());
group.add(new Female());
// ... add more...
// tell the class to take a pee break
for (Human person : group) person.goPee();
}
Running this would yield:
Stand Up
Sit Down
...
A: The classic example: dogs and cats are animals, and animals have the method makeNoise. I can iterate through an array of animals calling makeNoise on them and expect that they would each do their respective implementation.
The calling code does not have to know what specific animal they are.
That's what I think of as polymorphism.
A: Polymorphism simply means "Many Forms".
It does not REQUIRE inheritance to achieve...as interface implementation, which is not inheritance at all, serves polymorphic needs. Arguably, interface implementation serves polymorphic needs "Better" than inheritance.
For example, would you create a super-class to describe all things that can fly? I should think not. You would be best served to create an interface that describes flight and leave it at that.
So, since interfaces describe behavior, and method names describe behavior (to the programmer), it is not too far of a stretch to consider method overloading as a lesser form of polymorphism.
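A minimal sketch of the interface-based approach described above (the `Flyer` interface and the class names are illustrative, not from the answer):

```java
// Flight is a capability, not an ancestry: unrelated classes can share it.
interface Flyer {
    String fly();
}

class Sparrow implements Flyer {
    public String fly() { return "flaps wings"; }
}

class Airplane implements Flyer {
    public String fly() { return "spins turbines"; }
}

class FlightDemo {
    public static void main(String[] args) {
        // A bird and a plane share no superclass, yet both satisfy Flyer.
        Flyer[] flyers = { new Sparrow(), new Airplane() };
        for (Flyer f : flyers) {
            System.out.println(f.fly());
        }
    }
}
```

Each caller only depends on the `Flyer` contract, never on a concrete class.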
A: Although polymorphism is already explained in great detail in this post, I would like to put more emphasis on the why of it.
Why is polymorphism so important in any OOP language?
Let's try to build a simple application for a TV, with and without inheritance/polymorphism. After each version of the application, we do a small retrospective.
Suppose you are a software engineer at a TV company and you are asked to write software for the volume, brightness and colour controllers to increase and decrease their values on user command.
You start by writing classes for each of these features, adding:
*
*set: to set the value of a controller (supposing this has controller-specific code).
*get: to get the value of a controller (supposing this has controller-specific code).
*adjust: to validate the input and set a controller (generic validation, independent of controllers).
*user input mapping with controllers: to get user input and invoke the controllers accordingly.
Application Version 1
import java.util.Scanner;
class VolumeControllerV1 {
private int value;
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of VolumeController \t"+this.value);
this.value = value;
System.out.println("New value of VolumeController \t"+this.value);
}
void adjust(int value) {
int temp = this.get();
if(((value > 0) && (temp >= 100)) || ((value < 0) && (temp <= 0))) {
System.out.println("Can not adjust any further");
return;
}
this.set(temp + value);
}
}
class BrightnessControllerV1 {
private int value;
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of BrightnessController \t"+this.value);
this.value = value;
System.out.println("New value of BrightnessController \t"+this.value);
}
void adjust(int value) {
int temp = this.get();
if(((value > 0) && (temp >= 100)) || ((value < 0) && (temp <= 0))) {
System.out.println("Can not adjust any further");
return;
}
this.set(temp + value);
}
}
class ColourControllerV1 {
private int value;
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of ColourController \t"+this.value);
this.value = value;
System.out.println("New value of ColourController \t"+this.value);
}
void adjust(int value) {
int temp = this.get();
if(((value > 0) && (temp >= 100)) || ((value < 0) && (temp <= 0))) {
System.out.println("Can not adjust any further");
return;
}
this.set(temp + value);
}
}
/*
* There can be n number of controllers
* */
public class TvApplicationV1 {
public static void main(String[] args) {
VolumeControllerV1 volumeControllerV1 = new VolumeControllerV1();
BrightnessControllerV1 brightnessControllerV1 = new BrightnessControllerV1();
ColourControllerV1 colourControllerV1 = new ColourControllerV1();
OUTER: while(true) {
Scanner sc=new Scanner(System.in);
System.out.println(" Enter your option \n Press 1 to increase volume \n Press 2 to decrease volume");
System.out.println(" Press 3 to increase brightness \n Press 4 to decrease brightness");
System.out.println(" Press 5 to increase color \n Press 6 to decrease color");
System.out.println("Press any other Button to shutdown");
int button = sc.nextInt();
switch (button) {
case 1: {
volumeControllerV1.adjust(5);
break;
}
case 2: {
volumeControllerV1.adjust(-5);
break;
}
case 3: {
brightnessControllerV1.adjust(5);
break;
}
case 4: {
brightnessControllerV1.adjust(-5);
break;
}
case 5: {
colourControllerV1.adjust(5);
break;
}
case 6: {
colourControllerV1.adjust(-5);
break;
}
default:
System.out.println("Shutting down...........");
break OUTER;
}
}
}
}
Now you have the first version of your working application, ready to be deployed. Time to analyze the work done so far.
Issues in TV Application Version 1
*
*The adjust(int value) code is duplicated in all three classes. You would like to minimize this code duplication (but you did not think of moving the common code to some super class to avoid it).
You decide to live with that as long as your application works as expected.
After some time, your boss comes back to you and asks you to add reset functionality to the existing application. Reset would set all three controllers to their respective default values.
You start writing a new class (ResetFunctionV2) for the new functionality and map the user input to this new feature.
Application Version 2
import java.util.Scanner;
class VolumeControllerV2 {
private int defaultValue = 25;
private int value;
int getDefaultValue() {
return defaultValue;
}
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of VolumeController \t"+this.value);
this.value = value;
System.out.println("New value of VolumeController \t"+this.value);
}
void adjust(int value) {
int temp = this.get();
if(((value > 0) && (temp >= 100)) || ((value < 0) && (temp <= 0))) {
System.out.println("Can not adjust any further");
return;
}
this.set(temp + value);
}
}
class BrightnessControllerV2 {
private int defaultValue = 50;
private int value;
int get() {
return value;
}
int getDefaultValue() {
return defaultValue;
}
void set(int value) {
System.out.println("Old value of BrightnessController \t"+this.value);
this.value = value;
System.out.println("New value of BrightnessController \t"+this.value);
}
void adjust(int value) {
int temp = this.get();
if(((value > 0) && (temp >= 100)) || ((value < 0) && (temp <= 0))) {
System.out.println("Can not adjust any further");
return;
}
this.set(temp + value);
}
}
class ColourControllerV2 {
private int defaultValue = 40;
private int value;
int get() {
return value;
}
int getDefaultValue() {
return defaultValue;
}
void set(int value) {
System.out.println("Old value of ColourController \t"+this.value);
this.value = value;
System.out.println("New value of ColourController \t"+this.value);
}
void adjust(int value) {
int temp = this.get();
if(((value > 0) && (temp >= 100)) || ((value < 0) && (temp <= 0))) {
System.out.println("Can not adjust any further");
return;
}
this.set(temp + value);
}
}
class ResetFunctionV2 {
private VolumeControllerV2 volumeControllerV2 ;
private BrightnessControllerV2 brightnessControllerV2;
private ColourControllerV2 colourControllerV2;
ResetFunctionV2(VolumeControllerV2 volumeControllerV2, BrightnessControllerV2 brightnessControllerV2, ColourControllerV2 colourControllerV2) {
this.volumeControllerV2 = volumeControllerV2;
this.brightnessControllerV2 = brightnessControllerV2;
this.colourControllerV2 = colourControllerV2;
}
void onReset() {
volumeControllerV2.set(volumeControllerV2.getDefaultValue());
brightnessControllerV2.set(brightnessControllerV2.getDefaultValue());
colourControllerV2.set(colourControllerV2.getDefaultValue());
}
}
/*
* so on
* There can be n number of controllers
*
* */
public class TvApplicationV2 {
public static void main(String[] args) {
VolumeControllerV2 volumeControllerV2 = new VolumeControllerV2();
BrightnessControllerV2 brightnessControllerV2 = new BrightnessControllerV2();
ColourControllerV2 colourControllerV2 = new ColourControllerV2();
ResetFunctionV2 resetFunctionV2 = new ResetFunctionV2(volumeControllerV2, brightnessControllerV2, colourControllerV2);
OUTER: while(true) {
Scanner sc=new Scanner(System.in);
System.out.println(" Enter your option \n Press 1 to increase volume \n Press 2 to decrease volume");
System.out.println(" Press 3 to increase brightness \n Press 4 to decrease brightness");
System.out.println(" Press 5 to increase color \n Press 6 to decrease color");
System.out.println(" Press 7 to reset TV \n Press any other Button to shutdown");
int button = sc.nextInt();
switch (button) {
case 1: {
volumeControllerV2.adjust(5);
break;
}
case 2: {
volumeControllerV2.adjust(-5);
break;
}
case 3: {
brightnessControllerV2.adjust(5);
break;
}
case 4: {
brightnessControllerV2.adjust(-5);
break;
}
case 5: {
colourControllerV2.adjust(5);
break;
}
case 6: {
colourControllerV2.adjust(-5);
break;
}
case 7: {
resetFunctionV2.onReset();
break;
}
default:
System.out.println("Shutting down...........");
break OUTER;
}
}
}
}
So you have your application ready with the Reset feature. But now you start realizing that:
Issues in TV Application Version 2
*
*If a new controller is introduced to the product, you have to change the Reset feature code.
*If the number of controllers grows very large, you would have trouble holding the references to all the controllers.
*The Reset feature code is tightly coupled with all the controller classes' code (to get and set default values).
*The Reset feature class (ResetFunctionV2) can access other methods of the controller classes (adjust), which is undesirable.
At the same time, you hear from your boss that you might have to add a feature wherein each controller, on start-up, needs to check for the latest driver version from the company's hosted driver repository via the internet.
Now you start thinking that this new feature resembles the Reset feature, and that the issues of Application V2 will be multiplied if you don't re-factor your application.
You start thinking of using inheritance so that you can take advantage of the polymorphic ability of Java, and you add a new abstract class (ControllerV3) to:
*
*Declare the signatures of the get and set methods.
*Contain the adjust method implementation which was earlier replicated among all the controllers.
*Declare the setDefault method so that the Reset feature can be easily implemented by leveraging polymorphism.
With these improvements, you have version 3 of your TV application ready with you.
Application Version 3
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
abstract class ControllerV3 {
abstract void set(int value);
abstract int get();
void adjust(int value) {
int temp = this.get();
if(((value > 0) && (temp >= 100)) || ((value < 0) && (temp <= 0))) {
System.out.println("Can not adjust any further");
return;
}
this.set(temp + value);
}
abstract void setDefault();
}
class VolumeControllerV3 extends ControllerV3 {
private int defaultValue = 25;
private int value;
public void setDefault() {
set(defaultValue);
}
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of VolumeController \t"+this.value);
this.value = value;
System.out.println("New value of VolumeController \t"+this.value);
}
}
class BrightnessControllerV3 extends ControllerV3 {
private int defaultValue = 50;
private int value;
public void setDefault() {
set(defaultValue);
}
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of BrightnessController \t"+this.value);
this.value = value;
System.out.println("New value of BrightnessController \t"+this.value);
}
}
class ColourControllerV3 extends ControllerV3 {
private int defaultValue = 40;
private int value;
public void setDefault() {
set(defaultValue);
}
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of ColourController \t"+this.value);
this.value = value;
System.out.println("New value of ColourController \t"+this.value);
}
}
class ResetFunctionV3 {
private List<ControllerV3> controllers = null;
ResetFunctionV3(List<ControllerV3> controllers) {
this.controllers = controllers;
}
void onReset() {
for (ControllerV3 controllerV3 :this.controllers) {
controllerV3.setDefault();
}
}
}
/*
* so on
* There can be n number of controllers
*
* */
public class TvApplicationV3 {
public static void main(String[] args) {
VolumeControllerV3 volumeControllerV3 = new VolumeControllerV3();
BrightnessControllerV3 brightnessControllerV3 = new BrightnessControllerV3();
ColourControllerV3 colourControllerV3 = new ColourControllerV3();
List<ControllerV3> controllerV3s = new ArrayList<>();
controllerV3s.add(volumeControllerV3);
controllerV3s.add(brightnessControllerV3);
controllerV3s.add(colourControllerV3);
ResetFunctionV3 resetFunctionV3 = new ResetFunctionV3(controllerV3s);
OUTER: while(true) {
Scanner sc=new Scanner(System.in);
System.out.println(" Enter your option \n Press 1 to increase volume \n Press 2 to decrease volume");
System.out.println(" Press 3 to increase brightness \n Press 4 to decrease brightness");
System.out.println(" Press 5 to increase color \n Press 6 to decrease color");
System.out.println(" Press 7 to reset TV \n Press any other Button to shutdown");
int button = sc.nextInt();
switch (button) {
case 1: {
volumeControllerV3.adjust(5);
break;
}
case 2: {
volumeControllerV3.adjust(-5);
break;
}
case 3: {
brightnessControllerV3.adjust(5);
break;
}
case 4: {
brightnessControllerV3.adjust(-5);
break;
}
case 5: {
colourControllerV3.adjust(5);
break;
}
case 6: {
colourControllerV3.adjust(-5);
break;
}
case 7: {
resetFunctionV3.onReset();
break;
}
default:
System.out.println("Shutting down...........");
break OUTER;
}
}
}
}
Although most of the issues listed in the issue list of V2 were addressed, one remains:
Issues in TV Application Version 3
*
*The Reset feature class (ResetFunctionV3) can access other methods of the controller classes (adjust), which is undesirable.
Again, you think of solving this problem, as now you have another feature (driver update at startup) to implement as well. If you don’t fix it, it will get replicated to new features as well.
So you divide the contract defined in the abstract class and write two interfaces for:
*
*Reset feature.
*Driver Update.
And have your concrete classes implement them, as below.
Application Version 4
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
interface OnReset {
void setDefault();
}
interface OnStart {
void checkForDriverUpdate();
}
abstract class ControllerV4 implements OnReset,OnStart {
abstract void set(int value);
abstract int get();
void adjust(int value) {
int temp = this.get();
if(((value > 0) && (temp >= 100)) || ((value < 0) && (temp <= 0))) {
System.out.println("Can not adjust any further");
return;
}
this.set(temp + value);
}
}
class VolumeControllerV4 extends ControllerV4 {
private int defaultValue = 25;
private int value;
@Override
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of VolumeController \t"+this.value);
this.value = value;
System.out.println("New value of VolumeController \t"+this.value);
}
@Override
public void setDefault() {
set(defaultValue);
}
@Override
public void checkForDriverUpdate() {
System.out.println("Checking driver update for VolumeController .... Done");
}
}
class BrightnessControllerV4 extends ControllerV4 {
private int defaultValue = 50;
private int value;
@Override
int get() {
return value;
}
@Override
void set(int value) {
System.out.println("Old value of BrightnessController \t"+this.value);
this.value = value;
System.out.println("New value of BrightnessController \t"+this.value);
}
@Override
public void setDefault() {
set(defaultValue);
}
@Override
public void checkForDriverUpdate() {
System.out.println("Checking driver update for BrightnessController .... Done");
}
}
class ColourControllerV4 extends ControllerV4 {
private int defaultValue = 40;
private int value;
@Override
int get() {
return value;
}
void set(int value) {
System.out.println("Old value of ColourController \t"+this.value);
this.value = value;
System.out.println("New value of ColourController \t"+this.value);
}
@Override
public void setDefault() {
set(defaultValue);
}
@Override
public void checkForDriverUpdate() {
System.out.println("Checking driver update for ColourController .... Done");
}
}
class ResetFunctionV4 {
private List<OnReset> controllers = null;
ResetFunctionV4(List<OnReset> controllers) {
this.controllers = controllers;
}
void onReset() {
for (OnReset onreset :this.controllers) {
onreset.setDefault();
}
}
}
class InitializeDeviceV4 {
private List<OnStart> controllers = null;
InitializeDeviceV4(List<OnStart> controllers) {
this.controllers = controllers;
}
void initialize() {
for (OnStart onStart :this.controllers) {
onStart.checkForDriverUpdate();
}
}
}
/*
* so on
* There can be n number of controllers
*
* */
public class TvApplicationV4 {
public static void main(String[] args) {
VolumeControllerV4 volumeControllerV4 = new VolumeControllerV4();
BrightnessControllerV4 brightnessControllerV4 = new BrightnessControllerV4();
ColourControllerV4 colourControllerV4 = new ColourControllerV4();
List<ControllerV4> controllerV4s = new ArrayList<>();
controllerV4s.add(brightnessControllerV4);
controllerV4s.add(volumeControllerV4);
controllerV4s.add(colourControllerV4);
List<OnStart> controllersToInitialize = new ArrayList<>();
controllersToInitialize.addAll(controllerV4s);
InitializeDeviceV4 initializeDeviceV4 = new InitializeDeviceV4(controllersToInitialize);
initializeDeviceV4.initialize();
List<OnReset> controllersToReset = new ArrayList<>();
controllersToReset.addAll(controllerV4s);
ResetFunctionV4 resetFunctionV4 = new ResetFunctionV4(controllersToReset);
OUTER: while(true) {
Scanner sc=new Scanner(System.in);
System.out.println(" Enter your option \n Press 1 to increase volume \n Press 2 to decrease volume");
System.out.println(" Press 3 to increase brightness \n Press 4 to decrease brightness");
System.out.println(" Press 5 to increase color \n Press 6 to decrease color");
System.out.println(" Press 7 to reset TV \n Press any other Button to shutdown");
int button = sc.nextInt();
switch (button) {
case 1: {
volumeControllerV4.adjust(5);
break;
}
case 2: {
volumeControllerV4.adjust(-5);
break;
}
case 3: {
brightnessControllerV4.adjust(5);
break;
}
case 4: {
brightnessControllerV4.adjust(-5);
break;
}
case 5: {
colourControllerV4.adjust(5);
break;
}
case 6: {
colourControllerV4.adjust(-5);
break;
}
case 7: {
resetFunctionV4.onReset();
break;
}
default:
System.out.println("Shutting down...........");
break OUTER;
}
}
}
}
Now all of the issues you faced have been addressed, and you realize that with the use of inheritance and polymorphism you could:
*
*Keep various parts of the application loosely coupled. (The Reset and Driver Update feature components don't need to be aware of the actual controller classes (Volume, Brightness and Colour); any class implementing OnReset or OnStart will be acceptable to the Reset or Driver Update feature components, respectively.)
*Make application enhancement easier. (Adding a new controller won't impact the Reset or Driver Update feature components, and it is now really easy for you to add new ones.)
*Keep a layer of abstraction. (Now the Reset feature can see only the setDefault method of the controllers, and the Driver Update feature can see only the checkForDriverUpdate method of the controllers.)
Hope, this helps :-)
A: Polymorphism means more than one form: the same object performing different operations according to the requirement.
Polymorphism can be achieved in two ways:
*
*Method overriding
*Method overloading
Method overloading means writing two or more methods in the same class using the same method name but different parameter lists.
Method overriding means using the same method name in different classes; that is, a parent class method is redefined in the child class.
In Java, to achieve polymorphism, a superclass reference variable can hold a subclass object.
To achieve polymorphism, every developer must use the same method names across the project.
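The "superclass reference holds a subclass object" point can be sketched in a few lines (the class names here are made up for illustration):

```java
class Shape {
    String draw() { return "generic shape"; }
}

class Circle extends Shape {
    @Override
    String draw() { return "circle"; }
}

class RefDemo {
    public static void main(String[] args) {
        // The reference type is Shape, but the overriding method wins at run time.
        Shape s = new Circle();
        System.out.println(s.draw());
    }
}
```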
A: Polymorphism is the ability for an object to appear in multiple forms. This involves using inheritance and virtual functions to build a family of objects which can be interchanged. The base class contains the prototypes of the virtual functions, possibly unimplemented or with default implementations as the application dictates, and the various derived classes each implement them differently to effect different behaviors.
A: Overloading is when you define two methods with the same name but different parameters.
Overriding is where you change the behavior of the base class via a function with the same name and signature in a subclass.
So Polymorphism is related to overriding but not really overloading.
However if someone gave me a simple answer of "overriding" for the question "What is polymorphism?" I would ask for further explanation.
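The two terms can be contrasted side by side (class names here are invented for the illustration):

```java
class Greeter {
    // Overloading: same name, different parameter lists, resolved at compile time.
    String greet() { return "Hello"; }
    String greet(String name) { return "Hello, " + name; }
}

class LoudGreeter extends Greeter {
    // Overriding: same signature as the parent, resolved at run time.
    @Override
    String greet() { return "HELLO"; }
}

class ContrastDemo {
    public static void main(String[] args) {
        Greeter g = new LoudGreeter();
        System.out.println(g.greet());      // overridden version runs: HELLO
        System.out.println(g.greet("Bob")); // overloaded version, inherited from Greeter
    }
}
```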
A: Both overriding and overloading are used to achieve polymorphism.
You could have a method in a class that is overridden in one or more subclasses. The method does different things depending on which class was used to instantiate an object.
abstract class Beverage {
    protected int temperature;
    abstract boolean isAcceptableTemperature();
}
class Coffee extends Beverage {
boolean isAcceptableTemperature() {
return temperature > 70;
}
}
class Wine extends Beverage {
boolean isAcceptableTemperature() {
return temperature < 10;
}
}
You could also have a method that is overloaded with two or more sets of arguments. The method does different things based on the type(s) of argument(s) passed.
class Server {
public void pour (Coffee liquid) {
new Cup().fillToTopWith(liquid);
}
public void pour (Wine liquid) {
new WineGlass().fillHalfwayWith(liquid);
}
public void pour (Lemonade liquid, boolean ice) {
Glass glass = new Glass();
if (ice) {
glass.fillToTopWith(new Ice());
}
glass.fillToTopWith(liquid);
}
}
A: Here's an example of polymorphism in pseudo-C#/Java:
class Animal
{
abstract string MakeNoise ();
}
class Cat : Animal {
string MakeNoise () {
return "Meow";
}
}
class Dog : Animal {
string MakeNoise () {
return "Bark";
}
}
Main () {
Animal animal = Zoo.GetAnimal ();
Console.WriteLine (animal.MakeNoise ());
}
The Main function doesn't know the type of the animal and depends on a particular implementation's behavior of the MakeNoise() method.
Edit: Looks like Brian beat me to the punch. Funny we used the same example. But the above code should help clarify the concepts.
A: Neither:
Overloading is when you have the same function name that takes different parameters.
Overriding is when a child class replaces a parent's method with one of its own (this in itself does not constitute polymorphism).
Polymorphism is late binding: the base class (parent) method is called, but not until runtime does the application know what the actual object is; it may be a child class whose methods are different. This is because any child class can be used where a base class is expected.
In Java you see polymorphism a lot with the collections library:
int countStuff(List stuff) {
return stuff.size();
}
List is the base type; the compiler has no clue whether you're counting a linked list, vector, array, or a custom list implementation, as long as it acts like a List:
List myStuff = new MyTotallyAwesomeList();
int result = countStuff(myStuff);
If you were overloading you'd have:
int countStuff(LinkedList stuff) {...}
int countStuff(ArrayList stuff) {...}
int countStuff(MyTotallyAwesomeList stuff) {...}
etc...
and the correct version of countStuff() would be picked by the compiler to match the parameters.
A:
what is polymorphism?
From java tutorial
The dictionary definition of polymorphism refers to a principle in biology in which an organism or species can have many different forms or stages. This principle can also be applied to object-oriented programming and languages like the Java language. Subclasses of a class can define their own unique behaviors and yet share some of the same functionality of the parent class.
By considering the examples and definition, overriding should be accepted answer.
Regarding your second query:
If you had an abstract base class that defined a method with no implementation, and you defined that method in the subclass, is that still overriding?
It should be called overriding.
Have a look at this example to understand different types of overriding.
*
*Base class provides no implementation and the sub-class has to override the complete method (abstract).
*Base class provides a default implementation and the sub-class can change the behaviour.
*Sub-class adds an extension to the base class implementation by calling super.methodName() as the first statement.
*Base class defines the structure of the algorithm (Template method) and the sub-class overrides a part of the algorithm.
code snippet:
import java.util.HashMap;
abstract class Game implements Runnable{
protected boolean runGame = true;
protected Player player1 = null;
protected Player player2 = null;
protected Player currentPlayer = null;
public Game(){
player1 = new Player("Player 1");
player2 = new Player("Player 2");
currentPlayer = player1;
initializeGame();
}
/* Type 1: Let subclass define own implementation. Base class defines abstract method to force
sub-classes to define implementation
*/
protected abstract void initializeGame();
/* Type 2: Sub-class can change the behaviour. If not, base class behaviour is applicable */
protected void logTimeBetweenMoves(Player player){
System.out.println("Base class: Move Duration: player.PlayerActTime - player.MoveShownTime");
}
/* Type 3: Base class provides implementation. Sub-class can enhance base class implementation by calling
super.methodName() in first line of the child class method and specific implementation later */
protected void logGameStatistics(){
System.out.println("Base class: logGameStatistics:");
}
/* Type 4: Template method: Structure of base class can't be changed but sub-class can some part of behaviour */
protected void runGame() throws Exception{
System.out.println("Base class: Defining the flow for Game:");
while ( runGame) {
/*
1. Set current player
2. Get Player Move
*/
validatePlayerMove(currentPlayer);
logTimeBetweenMoves(currentPlayer);
Thread.sleep(500);
setNextPlayer();
}
logGameStatistics();
}
/* sub-part of the template method, which define child class behaviour */
protected abstract void validatePlayerMove(Player p);
protected void setRunGame(boolean status){
this.runGame = status;
}
public void setCurrentPlayer(Player p){
this.currentPlayer = p;
}
public void setNextPlayer(){
if ( currentPlayer == player1) {
currentPlayer = player2;
}else{
currentPlayer = player1;
}
}
public void run(){
try{
runGame();
}catch(Exception err){
err.printStackTrace();
}
}
}
class Player{
String name;
Player(String name){
this.name = name;
}
public String getName(){
return name;
}
}
/* Concrete Game implementation */
class Chess extends Game{
public Chess(){
super();
}
public void initializeGame(){
System.out.println("Child class: Initialized Chess game");
}
protected void validatePlayerMove(Player p){
System.out.println("Child class: Validate Chess move:"+p.getName());
}
protected void logGameStatistics(){
super.logGameStatistics();
System.out.println("Child class: Add Chess specific logGameStatistics:");
}
}
class TicTacToe extends Game{
public TicTacToe(){
super();
}
public void initializeGame(){
System.out.println("Child class: Initialized TicTacToe game");
}
protected void validatePlayerMove(Player p){
System.out.println("Child class: Validate TicTacToe move:"+p.getName());
}
}
public class Polymorphism{
public static void main(String args[]){
try{
Game game = new Chess();
Thread t1 = new Thread(game);
t1.start();
Thread.sleep(1000);
game.setRunGame(false);
Thread.sleep(1000);
game = new TicTacToe();
Thread t2 = new Thread(game);
t2.start();
Thread.sleep(1000);
game.setRunGame(false);
}catch(Exception err){
err.printStackTrace();
}
}
}
output:
Child class: Initialized Chess game
Base class: Defining the flow for Game:
Child class: Validate Chess move:Player 1
Base class: Move Duration: player.PlayerActTime - player.MoveShownTime
Child class: Validate Chess move:Player 2
Base class: Move Duration: player.PlayerActTime - player.MoveShownTime
Base class: logGameStatistics:
Child class: Add Chess specific logGameStatistics:
Child class: Initialized TicTacToe game
Base class: Defining the flow for Game:
Child class: Validate TicTacToe move:Player 1
Base class: Move Duration: player.PlayerActTime - player.MoveShownTime
Child class: Validate TicTacToe move:Player 2
Base class: Move Duration: player.PlayerActTime - player.MoveShownTime
Base class: logGameStatistics:
A: The term overloading refers to having multiple versions of something with the same name, usually methods with different parameter lists:
public int DoSomething(int objectId) { ... }
public int DoSomething(string objectName) { ... }
So these functions might do the same thing, but you have the option to call them with an ID or a name. This has nothing to do with inheritance, abstract classes, etc.
Overriding usually refers to polymorphism, as you described in your question.
A: I think some answers here are mixing concepts. Polymorphism is the ability of an object to behave differently at run time. For achieving this, you need two prerequisites:
*
*Late binding
*Inheritance
Having said that, overloading interacts with overriding differently depending on the language you are using. In Java, overloaded methods whose signatures differ from the base class method remain available in the subclass alongside it; only a subclass method with the same signature overrides the base class version.
In C++ that is not so: a method in a derived class with the same name as a base class method hides all the base class overloads, regardless of signature (different number or types of parameters). That is to say, the base class method is no longer directly available in the subclass when called from outside the subclass object.
So the answer is: when talking about Java, a subclass method with the same signature is an override. In other languages it may be different, as happens in C++ with name hiding.
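In Java, whether a subclass method overrides or overloads depends purely on its signature, which can be sketched like this (names invented for the example):

```java
class Base {
    String describe(int n) { return "Base:" + n; }
}

class Derived extends Base {
    // Same signature: this overrides Base.describe(int).
    @Override
    String describe(int n) { return "Derived:" + n; }

    // Different signature: this overloads describe; Base.describe(int) stays visible.
    String describe(String s) { return "Derived:" + s; }
}

class SignatureDemo {
    public static void main(String[] args) {
        Derived d = new Derived();
        System.out.println(d.describe(1));     // overridden version
        System.out.println(d.describe("one")); // overloaded version
    }
}
```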
A: Overriding is more like hiding an inherited method by declaring a method with the same name and signature as the upper-level method (the super method); this adds polymorphic behaviour to the class.
In other words, the decision about which level's method will be called is made at run time, not at compile time.
This leads to the concepts of interface and implementation.
A: As far as its meaning is concerned, polymorphism in Java is closer to OVERRIDING.
It's all about the different behavior of the SAME object in different situations (in programming terms, with different ARGUMENTS).
I think the example below will help you understand, though it's not pure Java code:
public void see(Friend friend)
{
System.out.println("Talk");
}
But if we change the ARGUMENT ... the BEHAVIOR will be changed ...
public void see(Enemy enemy)
{
System.out.println("Run");
}
The person (here the "object") is the same ...
A: Polymorphism means multiple implementations, or multiple forms, of an object. Let's say you have a class Animals as the abstract base class, and it has a method called movement() which defines the way the animal moves. Now, in reality, we have different kinds of animals and they move differently as well: some with two legs, some with four, and some with no legs, etc. To define a different movement() for each animal on earth, we need to apply polymorphism. You define more classes, i.e. Dogs, Cats, Fish, etc., extend them from the base class Animals, and override its movement() method with new movement behaviour based on each animal you have. You can also use interfaces to achieve this. The keyword here is overriding; overloading is different and is not considered polymorphism. With overloading you can define multiple methods "with the same name" but with different parameters on the same object or class.
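The Animals hierarchy described above might look like this in outline (the class names and movement strings are placeholders):

```java
abstract class Animals {
    // Each species supplies its own way of moving.
    abstract String movement();
}

class Dogs extends Animals {
    @Override
    String movement() { return "runs on four legs"; }
}

class Fish extends Animals {
    @Override
    String movement() { return "swims"; }
}

class MovementDemo {
    public static void main(String[] args) {
        // Callers iterate over Animals without knowing the concrete species.
        for (Animals a : new Animals[] { new Dogs(), new Fish() }) {
            System.out.println(a.movement());
        }
    }
}
```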
A: You are correct that overloading is not the answer.
Neither is overriding. Overriding is the means by which you get polymorphism. Polymorphism is the ability for an object to vary behavior based on its type. This is best demonstrated when the caller of an object that exhibits polymorphism is unaware of what specific type the object is.
A: Specifically saying overloading or overriding doesn't give the full picture. Polymorphism is simply the ability of an object to specialize its behavior based on its type.
I would disagree with some of the answers here in that overloading is a form of polymorphism (ad hoc polymorphism), in the sense that a method with the same name can behave differently given different parameter types. A good example is operator overloading. You can define "+" to accept different types of parameters -- say strings or ints -- and based on those types, "+" will behave differently.
Polymorphism also includes inheritance and overriding methods, though they can be abstract or virtual in the base type. In terms of inheritance-based polymorphism, Java only supports single class inheritance, limiting its polymorphic behavior to that of a single chain of base types. Java does support implementation of multiple interfaces, which is yet another form of polymorphic behavior.
A: Polymorphism is the ability of a class instance to behave as if it were an instance of another class in its inheritance tree, most often one of its ancestor classes. For example, in Java all classes inherit from Object. Therefore, you can create a variable of type Object and assign to it an instance of any class.
An override is a type of function which occurs in a class which inherits from another class. An override function "replaces" a function inherited from the base class, but does so in such a way that it is called even when an instance of its class is pretending to be a different type through polymorphism. Referring to the previous example, you could define your own class and override the toString() function. Because this function is inherited from Object, it will still be available if you copy an instance of this class into an Object-type variable. Normally, if you call toString() on your class while it is pretending to be an Object, the version of toString which will actually fire is the one defined on Object itself. However, because the function is an override, the definition of toString() from your class is used even when the class instance's true type is hidden behind polymorphism.
Overloading is the action of defining multiple methods with the same name, but with different parameters. It is unrelated to either overriding or polymorphism.
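To make the three terms concrete, here is a small, self-contained Java sketch (the class and method names are invented for illustration): speak() is overridden, which is what gives polymorphism, while greet(...) is merely overloaded.

```java
// Overriding vs. overloading vs. polymorphism in one place.
class Animal {
    public String speak() { return "..."; }
}

class Dog extends Animal {
    @Override
    public String speak() { return "woof"; }   // overriding
}

class Greeter {
    // Overloading: same name, different parameter lists,
    // resolved at COMPILE time from the argument types.
    public String greet(String name)          { return "Hello, " + name; }
    public String greet(String name, int age) { return "Hello, " + name + " (" + age + ")"; }
}

public class Demo {
    public static void main(String[] args) {
        Animal a = new Dog();                    // static type Animal, dynamic type Dog
        System.out.println(a.speak());           // prints "woof": dynamic dispatch
        Greeter g = new Greeter();
        System.out.println(g.greet("Ann", 30));  // prints "Hello, Ann (30)"
    }
}
```

Note that the caller of a.speak() only knows the Animal interface; the Dog behavior is selected at run time, which is the polymorphism part.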
A: import java.io.IOException;
class Super {
protected Super getClassName(Super s) throws IOException {
System.out.println(this.getClass().getSimpleName() + " - I'm parent");
return null;
}
}
class SubOne extends Super {
@Override
protected Super getClassName(Super s) {
System.out.println(this.getClass().getSimpleName() + " - I'm Perfect Overriding");
return null;
}
}
class SubTwo extends Super {
@Override
protected Super getClassName(Super s) throws NullPointerException {
System.out.println(this.getClass().getSimpleName() + " - I'm Overriding and Throwing Runtime Exception");
return null;
}
}
class SubThree extends Super {
@Override
protected SubThree getClassName(Super s) {
System.out.println(this.getClass().getSimpleName()+ " - I'm Overriding and Returning SubClass Type");
return null;
}
}
class SubFour extends Super {
@Override
protected Super getClassName(Super s) throws java.io.FileNotFoundException {
System.out.println(this.getClass().getSimpleName()+ " - I'm Overriding and Throwing Narrower Exception ");
return null;
}
}
class SubFive extends Super {
@Override
public Super getClassName(Super s) {
System.out.println(this.getClass().getSimpleName()+ " - I'm Overriding and have broader Access ");
return null;
}
}
class SubSix extends Super {
public Super getClassName(Super s, String ol) {
System.out.println(this.getClass().getSimpleName()+ " - I'm Perfect Overloading ");
return null;
}
}
class SubSeven extends Super {
public Super getClassName(SubSeven s) {
System.out.println(this.getClass().getSimpleName()+ " - I'm Perfect Overloading because Method signature (Argument) changed.");
return null;
}
}
public class Test{
public static void main(String[] args) throws Exception {
System.out.println("Overriding\n");
Super s1 = new SubOne(); s1.getClassName(null);
Super s2 = new SubTwo(); s2.getClassName(null);
Super s3 = new SubThree(); s3.getClassName(null);
Super s4 = new SubFour(); s4.getClassName(null);
Super s5 = new SubFive(); s5.getClassName(null);
System.out.println("Overloading\n");
SubSix s6 = new SubSix(); s6.getClassName(null, null);
s6 = new SubSix(); s6.getClassName(null);
SubSeven s7 = new SubSeven(); s7.getClassName(s7);
s7 = new SubSeven(); s7.getClassName(new Super());
}
}
A: Polymorphism relates to the ability of a language to have different objects treated uniformly through a single interface; as such it is related to overriding, so the interface (or the base class) is polymorphic and the implementor is the object which overrides (two faces of the same medal).
Anyway, the difference between the two terms is better explained using other languages, such as C++: a polymorphic object in C++ behaves like its Java counterpart if the base function is virtual, but if the method is not virtual the code jump is resolved statically and the true type is not checked at runtime. So polymorphism includes the ability for an object to behave differently depending on the interface used to access it. Let me make an example in pseudocode:
class animal {
public void makeRumor(){
print("thump");
}
}
class dog extends animal {
public void makeRumor(){
print("woff");
}
}
animal a = new dog();
dog b = new dog();
a.makeRumor() -> prints thump
b.makeRumor() -> prints woff
(supposing that makeRumor is NOT virtual)
Java doesn't offer this level of control: its instance methods are always dispatched dynamically (the static C++ behavior above is also what makes object slicing possible). In Java:
animal a = new dog();
dog b = new dog();
a.makeRumor() -> prints woff
b.makeRumor() -> prints woff
In both cases it prints woff, since a and b both refer to a dog instance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "375"
} |
Q: I am getting a "fatal error 7987" error from SQL Server 2000 The error happens trying to do an insert from a stored proc. I tried running DBCC CHECKDB as suggested by the kb article that Jonathan Holland suggested and it returned with the all clear.
A: Bummer dude.
http://support.microsoft.com/kb/828337
A: Ran a dbcc dbreindex ('tablename') against the tables that were being affected by the stored procedure that was being called. This forced all of the pages to be moved, which appears to have corrected the problem. This would indicate it was a page corruption that the DBCC CHECKDB didn't catch.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Anyone have a pointer for what is needed to make an MP3 tag application? I get FLAC files a lot and want to automate the tagging of the resulting MP3 files after I have converted them.
What is my best library to interface with? Vista machine and C# for my code base.
The flac files come with a text file for the show, and the numbers performed. I'll edit that any way possible.
I use winamp for a player but will try others if free. :)
TIA.
A: Check out libid3tag... http://sourceforge.net/project/showfiles.php?group_id=12349...
And, actually, the ID3 tag is pretty simple, it's just text (with fixed-length fields) tacked on to the MP3 file (at the end of the file for ID3v1, at the front for ID3v2)...
Just make sure you follow the standard, as not all players, etc. do. For more on that, check out the ID3 article on Wikipedia.
A: You might also want to check out TagLib# (google it). It's not too hard to use, and makes reading and writing ID3 tags and other metadata pretty easy.
I've used it in an ASP.NET project in the past, to automatically populate a database record using metadata from the ID3 tag of uploaded mp3 and mp4 files, with no major problems.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to alter a column and a computed column In a SQL Server database, I need to alter a column baseColumn and a computed column upperBaseColumn. The upperBaseColumn has an index on it.
This is how the table looks:
create table testTable (baseColumn varchar(10), upperBaseColumn AS (upper(baseColumn)))
create index idxUpperBaseColumn ON testTable (upperBaseColumn)
Now I need to increase the column length of both the baseColumn and the upperBaseColumn.
What's the best way to do it?
A: I suggest you drop the index, then drop the computed column. Alter the size, then re-add the computed column and the index. Using your example....
create table testTable (baseColumn varchar(10), upperBaseColumn AS (upper(baseColumn)))
create index idxUpperBaseColumn ON testTable (upperBaseColumn)
Drop Index TestTable.idxUpperBaseColumn
Alter Table testTable Drop Column upperBaseColumn
Alter Table testTable Alter Column baseColumn VarChar(20)
Alter Table testTable Add upperBaseColumn As Upper(BaseColumn)
create index idxUpperBaseColumn ON testTable (upperBaseColumn)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Python module for wiki markup Is there a Python module for converting wiki markup to other languages (e.g. HTML)?
A similar question was asked here, What's the easiest way to convert wiki markup to html, but no Python modules are mentioned.
Just curious. :) Cheers.
A: Django uses the following libraries for markup:
*
*Markdown
*Textile
*reStructuredText
You can see how they're used in Django.
A: You should look at a good parser for Creole syntax: creole.py. It can convert Creole (which is "a common wiki markup language to be used across different wikis") to HTML.
A: mwlib provides ways of converting MediaWiki formatted text into HTML, PDF, DocBook and OpenOffice formats.
A: with python-creole you can convert html to creole and creole to html... So you can convert other markups to html and then to creole...
https://code.google.com/p/python-creole/
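None of the libraries above is needed to see the basic shape of such a converter. Here is a deliberately tiny, dialect-agnostic sketch of wiki-to-HTML conversion (purely illustrative; it is not compatible with any real wiki syntax, and the function name is invented):

```python
import re

def wiki_to_html(text):
    """Convert a tiny wiki-like subset to HTML: '''bold''' and == headings ==."""
    html_lines = []
    for line in text.splitlines():
        # == Heading == -> <h2>Heading</h2>
        m = re.match(r"^==\s*(.+?)\s*==$", line)
        if m:
            html_lines.append("<h2>%s</h2>" % m.group(1))
            continue
        # '''bold''' -> <b>bold</b>, everything else wrapped in a paragraph
        line = re.sub(r"'''(.+?)'''", r"<b>\1</b>", line)
        html_lines.append("<p>%s</p>" % line)
    return "\n".join(html_lines)

print(wiki_to_html("== Title ==\nSome '''bold''' text."))
# -> <h2>Title</h2>
#    <p>Some <b>bold</b> text.</p>
```

Real modules like the ones listed above do essentially this, with a proper parser instead of line-by-line regexes.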
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Is there a way to define an action for unhandled exceptions in a WinForms .NET 3.5 app? Note, I realize that this has been addressed here. That post discusses exception handling in .NET 1.1 while implying that there is a better solution for >.NET 2.0 so this question is specifically about the more recent .NET versions.
I have a windows forms application which is expected to frequently and unexpectedly lose connectivity to the database, in which case it is to reset itself to its initial state.
I am already doing error logging, retry connection, etc. through a set of decorators on my custom DBWrapper object. After that is taken care of however, I would like to let the error fall through the stack. Once it reaches the top and is unhandled I would like it to be swallowed and my ApplicationResetter.Reset() method to be executed.
Can anyone tell me how to do this?
If this is impossible, then is there at least a way to handle this without introducing a dependency on ApplicationResetter to every class which might receive such an error and without actually shutting down and restarting my application (which would just look ugly)?
A: caveat: not familiar with 3.5 yet, there may be a better answer ...
...but my understanding is that by the time the event gets to the unhandled exception handler, the app is probably going to die - and if it doesn't die, it may be so corrupted that it should die anyway
if you are already handling a db-not-there case and are letting other exceptions pass through, then the app should die as it may be unstable
A: Perhaps the Application.ThreadException event will suit your needs:
static void Main()
{
Application.ThreadException += Application_ThreadException;
//...
}
static void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)
{
// call ApplicationResetter.Reset() here
}
A: There are the System.Windows.Forms.Application.ThreadException event and the System.AppDomain.CurrentDomain.UnhandledException events.
As mentioned by Steven, these will leave your application in an unknown state. There's really no other way to do this except putting the call that could throw the exception in a try/catch block.
A: A pretty in-depth explanation of unhandled exceptions in the latest MSDN issue:
September 2008
A: For Windows Forms threads (which call Application.Run()), assign a ThreadException handler at the beginning of Main(). Also, I found it was necessary to call SetUnhandledExceptionMode:
Application.SetUnhandledExceptionMode(UnhandledExceptionMode.Automatic);
Application.ThreadException += ShowUnhandledException;
Application.Run(...);
Here is an example handler. I know it's not what you're looking for, but it shows the format of the handler. Notice that if you want the exception to be fatal, you have to explicitly call Application.Exit().
static void ShowUnhandledException(object sender, ThreadExceptionEventArgs t)
{
Exception ex = t.Exception;
try {
// Build a message to show to the user
bool first = true;
string msg = string.Empty;
for (int i = 0; i < 3 && ex != null; i++) {
msg += string.Format("{0} {1}:\n\n{2}\n\n{3}",
first ? "Unhandled " : "Inner exception ",
ex.GetType().Name,
ex.Message,
i < 2 ? ex.StackTrace : "");
ex = ex.InnerException;
first = false;
}
msg += "\n\nAttempt to continue? (click No to exit now)";
// Show the message
if (MessageBox.Show(msg, "Unhandled exception", MessageBoxButtons.YesNo, MessageBoxIcon.Error) == DialogResult.No)
Application.Exit();
} catch (Exception e2) {
try {
MessageBox.Show(e2.Message, "Fatal error", MessageBoxButtons.OK, MessageBoxIcon.Stop);
} finally {
Application.Exit();
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Implementing a replacement for Ruby's IO.popen() and system() IO.popen() and system() in Ruby are sorely lacking several useful features, such as:
*
*obtaining the return value of the function
*capturing both stdout and stderr (separately and merged)
*running without spawning an extra cmd.exe or /bin/sh process
Python has a module "subprocess" which I was thinking about using as inspiration for a similar module in Ruby. Now to the questions:
*
*How are Ruby-programmers working around the issues above, for example obtaining the return value when doing a popen() call?
*Is this something which has already been implemented?
A: *
*system() exit status can be captured with $?.exitstatus
*stderr can be captured with something like system 'command 2>&1'
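For example (a small sketch assuming a Unix-like system with ruby itself on the PATH, used here as a portable child process):

```ruby
# Exit status of the last child process run with system():
system('ruby', '-e', 'exit 3')
puts $?.exitstatus          # prints 3

# Merging stderr into captured stdout with a shell 2>&1 redirection:
merged = `ruby -e 'warn "oops"' 2>&1`
puts merged                 # prints oops
```

This captures stdout and stderr merged, but not separately; for that, see open3 below.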
A: Take a look at the standard Ruby library open3. This will give you access to stdin, stdout and stderr.
There is also an external project called open4, which allows you to get the exit status without using a magic variable name.
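A short sketch of open3 in use, capturing both streams and the exit status separately (Open3.capture3 exists from Ruby 1.9.2 onward; on older versions use Open3.popen3 directly):

```ruby
require 'open3'

# Run a child process; collect its stdout, stderr and exit status separately.
stdout, stderr, status = Open3.capture3(
  'ruby', '-e', 'puts "to stdout"; warn "to stderr"; exit 2'
)

puts stdout             # prints to stdout
puts stderr             # prints to stderr
puts status.exitstatus  # prints 2
```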
A: I've felt the need to do exactly that when testing git_remote_branch. The tool calls out to the shell and I wanted to capture exactly what was displayed during test runs, no matter what git was displaying, and no matter if it was being spit out in stdout or stderr.
I have a module that's perfectly reusable that can be observed here (MIT license: use at will, just don't sue me ;-)
You can see it in action in the tests for git_remote_branch here.
Also, I've set up a repo specifically for capture_fu, it includes some tests and stuff. The project's not terribly well set up though. I haven't spent much time making it releasable ;-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: ${1:+"$@"} in /bin/sh I've noticed that sometimes wrapper scripts will use ${1:+"$@"} for the parameters rather than just "$@".
For example, http://svn.macosforge.org/repository/macports/trunk/dports/editors/vim-app/files/gvim.sh uses
exec "$binary" $opts ${1:+"$@"}
Can anyone break ${1:+"$@"} down into English and explain why it would be an advantage over plain "$@"?
A: From the bash man page:
${parameter:+word}
Use Alternate Value. If parameter is null or unset, nothing is
substituted, otherwise the expansion of word is substituted.
So, "$@" is substituted unless $1 is unset or null. I can't see why they couldn't have just used "$@".
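A quick experiment shows the two spellings behaving identically in a modern POSIX shell (the `show_args` helper is invented here to make the argument list visible):

```shell
# Print the number of arguments, then each one in angle brackets.
show_args() { printf '[%d]' "$#"; for a in "$@"; do printf ' <%s>' "$a"; done; printf '\n'; }

set --                      # no positional parameters at all
show_args ${1:+"$@"}        # prints [0]  (nothing substituted)
show_args "$@"              # prints [0]  (modern shells: also nothing)

set -- foo "bar baz"        # two parameters, one containing a space
show_args ${1:+"$@"}        # prints [2] <foo> <bar baz>
```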
A: To quote the relevant portion of man bash for the information that Jonathan Leffler referred to in his comment:
When not performing substring expansion, bash tests for a parameter that is unset or null; omitting the colon results in a test only for a parameter that is unset.
(emphasis mine)
A: 'Hysterical Raisins', aka Historical Reasons.
The explanation from JesperE (or the Bash man page on shell parameter expansion) is accurate for what it does:
*
*If $1 exists and is not an empty string, then substitute the quoted list of arguments.
Once upon 20 or so years ago, some broken minor variants of the Bourne Shell substituted an empty string "" for "$@" if there were no arguments, instead of the correct, current behaviour of substituting nothing. Whether any such systems are still in use is open to debate.
[Hmm: that expansion would not work correctly for:
command '' arg2 arg3 ...
In this context, the correct notation is:
${1+"$@"}
This works correctly whether $1 is an empty argument or not. So, someone remembered the notation incorrectly, accidentally introducing a bug.]
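The difference between the two notations is easy to reproduce: with an empty first argument, `${1:+"$@"}` drops everything while `${1+"$@"}` passes it all through (a small sketch using an invented `count` helper):

```shell
count() { echo "$#"; }

set -- '' arg2 arg3      # $1 is set but null -- the problem case above

count ${1+"$@"}          # prints 3: the colon-less form only asks "is $1 set?"
count ${1:+"$@"}         # prints 0: the ':' form also rejects a null $1,
                         #           so nothing at all is substituted
```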
A: Here are some other clues for a more complete answer...
The usage can concern the shebang line, which has never been thoroughly documented and where a single parameter is often expected.
It thereby seems to be a workaround for filenames that contain spaces or exceed the allowed length.
From Perl man page:
A better construct than $* would be ${1+"$@"}, which handles embedded
spaces and such in the filenames, but doesn't work if the script is
being interpreted by csh.
From TCL man page:
Many UNIX systems do not allow the #! line to exceed about 30
characters in length, so be sure that the tclsh executable can be
accessed with a short file name.
And finally, some utilities could support the abstruse but handy feature of the cat command:
The $@ special parameter finds use as a tool for filtering input into
shell scripts. The cat "$@" construction accepts input to a script
either from stdin or from files given as parameters to the script.
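The cat "$@" idiom can be wrapped in a function to see it in action (a toy sketch; the `upper` filter is invented here):

```shell
# A filter that uppercases its input: reads the files named as arguments,
# or stdin when no arguments are given -- exactly the cat "$@" idiom.
upper() {
    cat "$@" | tr 'a-z' 'A-Z'
}

# From a file:
tmp=$(mktemp)
printf 'hello\n' > "$tmp"
upper "$tmp"                 # prints HELLO

# From stdin:
printf 'world\n' | upper     # prints WORLD
rm -f "$tmp"
```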
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: performance testing for web services (Microsoft web application stress tool?) We need to provide a solution to do performance testing for our Web Services residing in our development environment. We were planning to create the test scripts using the object model of Microsoft Web Application Stress Tool. I have researched and not been able to find any examples.
Can anyone who has used this same tool for a similar solution please provide any advice and/or examples? Also, please provide any suggestions if a different tool can provide a more convenient solution.
Any help would be much appreciated.
Thank You
A: So a year or two back I had the same problem and was able to apply my experience with the grinder to the problem:
http://grinder.sourceforge.net/
Others in my environment reported success using ACT:
http://en.wikipedia.org/wiki/Application_Center_Test
A: I recommend Webload. I used it and it was a great tool. The IDE it provides allows you to record your browser actions. When doing stress testing, Webload will draw a nice graph on the fly.
edit: here is the link to the webload site: http://www.webload.org/
A: WAST is a very old tool. You might have better luck checking out some of the better open source load/performance tools like: OpenSTA, JMeter, Webload, Grinder, etc.
A: I advise using the new version of SOAPbox from Vordel. This tool is very simple to use but provides very interesting functionality.
You can find it here : http://www.vordel.com/products/soapbox/
A: VSTT provides load test option for web services - http://blogs.msdn.com/b/nikhiln/archive/2007/02/05/performance-testing-asp-net-web-services-using-vsts.aspx
A: You may have to stress test IIS itself, since it's a web service; just loop and open 5000 requests.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Recommended GCC warning options for C Other than -Wall, what other warnings have people found useful?
Options to Request or Suppress Warnings
A: I routinely use:
gcc -m64 -std=c99 -pedantic -Wall -Wshadow -Wpointer-arith -Wcast-qual \
-Wstrict-prototypes -Wmissing-prototypes
This set catches a lot for people unused to it (people whose code I get to compile with those flags for the first time); it seldom gives me a problem (though -Wcast-qual is occasionally a nuisance).
A: I also use:
-Wstrict-overflow=5
To catch those nasty bugs that may occur if I write code that relies on the overflow behaviour of integers.
And:
-Wextra
Which enables some options that are nice to have as well. Most are for C++ though.
A: As of 2011-09-01, with GCC version 4.6.1
My current "development" alias
gcc -std=c89 -pedantic -Wall \
-Wno-missing-braces -Wextra -Wno-missing-field-initializers \
-Wformat=2 -Wswitch-default -Wswitch-enum -Wcast-align \
-Wpointer-arith -Wbad-function-cast -Wstrict-overflow=5 \
-Wstrict-prototypes -Winline -Wundef -Wnested-externs \
-Wcast-qual -Wshadow -Wunreachable-code -Wlogical-op \
-Wfloat-equal -Wstrict-aliasing=2 -Wredundant-decls \
-Wold-style-definition -Werror \
-ggdb3 \
-O0 \
-fno-omit-frame-pointer -ffloat-store \
-fno-common -fstrict-aliasing \
-lm
The "release" alias
gcc -std=c89 -pedantic -O3 -DNDEBUG -lm
As of 2009-11-03
"development" alias
gcc -Wall -Wextra -Wformat=2 -Wswitch-default -Wcast-align \
-Wpointer-arith -Wbad-function-cast -Wstrict-prototypes \
-Winline -Wundef -Wnested-externs -Wcast-qual -Wshadow \
-Wwrite-strings -Wconversion -Wunreachable-code \
-Wstrict-aliasing=2 \
-ffloat-store -fno-common -fstrict-aliasing \
-lm -std=c89 -pedantic -O0 -ggdb3 -pg --coverage
"release" alias
gcc -lm -std=c89 -pedantic -O3 -DNDEBUG --combine \
-fwhole-program -funroll-loops
A: I usually compile with "-W -Wall -ansi -pedantic".
This helps ensure maximum quality and portability of the code.
A: -pedantic -Wall -Wextra -Wno-write-strings -Wno-unused-parameter
For "Hurt me plenty" mode, I leave away the -Wno...
I like to have my code warning free, especially with C++. While C compiler warnings can often be ignored, many C++ warnings show fundamental defects in the source code.
A: Right now I use:
-Wall -W -Wextra -Wconversion -Wshadow -Wcast-qual -Wwrite-strings -Werror
I took that list mostly from the book "An Introduction to GCC" (by rms) and then some from Ulrich Drepper's recommendation about defensive programming (slides for Defensive Programming).
But I don't have any science behind my list. It just felt like a good list.
Note: I don't like those pedantic flags though...
Note: I think that -W and -Wextra are more or less the same thing.
A: I started out with C++, so when I made the switch to learning C I made sure to be extra-anal:
*
*-fmessage-length=0
*-ansi -pedantic -std=c99
*-Werror
*-Wall
*-Wextra
*-Wwrite-strings
*-Winit-self
*-Wcast-align
*-Wcast-qual
*-Wpointer-arith
*-Wstrict-aliasing
*-Wformat=2
*-Wmissing-declarations
*-Wmissing-include-dirs
*-Wno-unused-parameter
*-Wuninitialized
*-Wold-style-definition
*-Wstrict-prototypes
*-Wmissing-prototypes
A: I like -Werror. It keeps the code warning free.
A: Get the manual for the GCC version you use, find all warning options available, and then deactivate only those for which you have a compelling reason to do so. (For example, non-modifiable third-party headers that would give you lots of warnings otherwise.) Document those reasons. (In the Makefile or wherever you set those options.) Review the settings at regular intervalls, and whenever you upgrade your compiler.
The compiler is your friend. Warnings are your friend. Give the compiler as much chance to tell you of potential problems as possible.
A: It would be the option -pedantic-errors.
A: -Wfloat-equal, -Wshadow, and -Wmissing-prototypes.
A: *
*-Wredundant-decls
*-Wnested-externs
*-Wstrict-prototypes
*-Wextra
*-Werror-implicit-function-declaration
*-Wunused
*-Wno-unused-value
*-Wreturn-type
A: I generally just use
gcc -Wall -W -Wunused-parameter -Wmissing-declarations -Wstrict-prototypes -Wmissing-prototypes -Wsign-compare -Wconversion -Wshadow -Wcast-align -Wparentheses -Wsequence-point -Wdeclaration-after-statement -Wundef -Wpointer-arith -Wnested-externs -Wredundant-decls -Werror -Wdisabled-optimization -pedantic -funit-at-a-time -o
With references:
*
*-Wall
*-W
*-Wunused-parameter
*-Wmissing-declarations
*-Wstrict-prototypes
*-Wmissing-prototypes
*-Wsign-compare
*-Wconversion
*-Wshadow
*-Wcast-align
*-Wparentheses
*-Wsequence-point
*-Wdeclaration-after-statement
*-Wundef
*-Wpointer-arith
*-Wnested-externs
*-Wredundant-decls
*-Werror
*-Wdisabled-optimization
*-pedantic
*-funit-at-a-time
*-o
A: The warning about uninitialized variables doesn't work unless you specify -O, so I include that in my list:
-g -O -Wall -Werror -Wextra -pedantic -std=c99
Documentation on each warning:
*
*-g
*-O
*-Wall
*-Werror
*-Wextra
*-pedantic
*-std=c99
A: I use this option:
-Wfatal-errors
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86"
} |
Q: Embedding the Java h2 database programmatically At the moment we use HSQLDB as an embedded database, but we are looking for a database with a smaller memory footprint as the data volume grows.
Derby / JavaDB is not an option at the moment because it stores properties globally in the system properties. So we thought of h2.
While we used HSQLDB we created a Server object, set the parameters and started it. This is described here (and given as an example in the class org.hsqldb.test.TestBase).
The question is: Can this be done analogous with the h2 database, too? Do you have any code samples for that? Scanning the h2-page, I did not find an example.
A: Yes, you can run H2 in embedded mode. You just use the JDBC driver and connect to an embedded url like this (their example):
This database can be used in embedded
mode, or in server mode. To use it in
embedded mode, you need to:
* Add h2.jar to the classpath
* Use the JDBC driver class: org.h2.Driver
* The database URL jdbc:h2:~/test opens the database 'test' in your user home directory
Example of connecting with JDBC to an embedded H2 database (adapted from http://www.h2database.com/javadoc/org/h2/jdbcx/JdbcDataSource.html ):
import org.h2.jdbcx.JdbcDataSource;
// ...
JdbcDataSource ds = new JdbcDataSource();
ds.setURL("jdbc:h2:~/test");
ds.setUser("sa");
ds.setPassword("sa");
Connection conn = ds.getConnection();
If you're looking to use H2 in a purely in-memory / embedded mode, you can do that too. See this link for more:
*
*http://www.h2database.com/html/features.html#in_memory_databases
You just need to use a special URL in normal JDBC code like "jdbc:h2:mem:db1".
A: If for some reason you need an embedded H2 database in server mode you can do it either manually using the API
at http://www.h2database.com/javadoc/org/h2/tools/Server.html - or by
appending ;AUTO_SERVER=TRUE to the database URL.
A: From the download, I see that the file tutorial.html has this
import org.h2.tools.Server;
...
// start the TCP Server
Server server = Server.createTcpServer(args).start();
...
// stop the TCP Server
server.stop();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Tips/Tricks for DataFlex Is anyone out there still using DataFlex? If so, what are you favorite tips and tricks for this venerable 4GL?
A: It all depends on the version of DF you're using, but here's a couple:
*
*Do not use "While" when traversing record sets. Always use repeat. (see example at bottom)
*The DataFlex newsgroups (news.dataaccess.com) are the best place to ask questions.
*Other useful sites include http://sture.dk/wasp and http://www.vdf-guidance.com
*Use entering_scope instead of activating to initialise values on forms.
*With deferred modal objects, use a container object above the deferred object to pass in parameters.
I've got loads more. But I'm just going to have to go and lie down. I can't believe someone asked a dataflex question.
clear orders
move const.complete to orders.status
find ge orders by index.2
repeat
if orders.status ne const.complete indicate finderr true
if (not(finderr)) begin
send doYourStuffHere
find gt orders by index.2
end
until (finderr)
A: The new Data Access World Wide forums!
http://support.dataaccess.com/forums/
A: long time no see!
Yes, DataFlex is still alive and well and being used by lots of people and organisations.
The current version is the "Visual" form (i.e. Widows GUI): Visual DataFlex (VDF) 14.1, although v15.0 is just about to release (I've been using alphas, betas and RCs for development for a few months now).
The character mode product (now v3.2) is still around as well, for DOS, Unix and Linux.
VDF now has good support for Web Applications, web services (since about v10), an Ajax library (which will come "in the box" with 15.0), CodeJock controls for nicer UI design, a development environment (VDF Studio) that has for some time (since v12.0) been so complete that I rarely step outside it any more (I even code my JavaScript in it, when doing that for VDF projects). It also comes with a free CMS called Electos (now itself in v4.0 with VDF 15.0).
It has connectivity kits in the box for Pervasive, MS SQL Server, DB2 and ODBC databases, with Oracle, MySQL and other drivers provided by Mertech Data Systems (Riaz Merchant's company: www.mertechdata.com).
You can download a free "Personal" edition (for non-commercial use) from here - it is a fully-featured product, but if you make money from it you are required to buy a kosher licence. Give it a whirl! ;-)
Good to hear from you again!
Mike
(Still fighting with the b4stard descendants of your thrice-damned DataSets!!! ;-) )
A: My "working language" (i.e. what I am working on as an employed developer) is Visual Dataflex, currently on version 14.0. It's not the best language/environment available, but it certainly isn't the worst either.
My number 1 tip would be, to quote Steve McConnell's Code Complete: "Program into your language, not in it. Don't limit your programming thinking only to the concepts that are supported automatically by your language. The best programmers think of what they want to do, and then they assess how to accomplish their objectives with the programming tools at their disposal."
A: Another good new site for VDF/DF tips is VDF Wiki.
A: The vdfguidance URL has a typo, it is http://www.vdf-guidance.com
A: mixin inheritance was an excellent feature - the methods of any other class could be reused in your class; as long as you provided the properties that they needed to work, everything was fine = multiple inheritance (MI) without the 'diamond problem', name conflicts, and other MI issues
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Question about XMLTextWriters and Streams We have a VXML project that a 3rd party parses to provide us with a phone navigation system. We require them to enter an id code to leave a message, which is later reviewed by our company.
We currently have this working as follows:
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Stream m = new MemoryStream(); //Create Memory Stream - Used to create XML document in Memory
XmlTextWriter XML_Writer = new XmlTextWriter(m, System.Text.Encoding.UTF8);
XML_Writer.Formatting = Formatting.Indented;
XML_Writer.WriteStartDocument();
/* snip - writing a valid XML document */
XML_Writer.WriteEndDocument();
XML_Writer.Flush();
m.Position = 0;
byte[] b = new byte[m.Length];
m.Read(b, 0, (int)m.Length);
XML_Writer.Close();
HttpContext.Current.Response.Write(System.Text.Encoding.UTF8.GetString(b, 0, b.Length));
I'm just maintaining this app, I didn't write it...but the end section seems convoluted to me.
I know it's taking the XML written to the memory stream and feeding it into the response output...but why is it first reading the entire document into a string? Isn't that inefficient?
Is there a better way to write the above code?
A: Yes, just write directly to the Response Output (IO.StreamWriter) or OutputStream (IO.Stream):
XmlTextWriter XML_Writer = new XmlTextWriter(HttpContext.Current.Response.OutputStream, HttpContext.Current.Response.Encoding);
//...
XML_Writer.Flush();
A: After that I can just call XML_Writer.Flush(), right? That'll flush the XML to the stream?
A: You can write directly to the response stream:
Response.Cache.SetCacheability(HttpCacheability.NoCache);
XmlWriter XML_Writer = XmlWriter.Create(HttpContext.Current.Response.Output);
To add settings to the writer you are better off using the newer XmlWriterSettings class. Give it as a parameter to the XmlWriter.Create function.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Pre-setting locations for looking for source files in Visual C++ 6.0 Due to the legacy nature of some of our code, we're still using Microsoft Visual 6.0 (SP6). When I attach to a running process to debug it for the first time, it has no knowledge of where the source files are located when I break into the process. It therefore asks me to navigate to the appropriate directory in my source tree, given a source file name. It remembers these directories, so I don't have to enter the same directory twice, but it's still painful.
Is there a way of pre-configuring VC6 with all the source file directories in my tree? Note that our project is built using makefiles (using nmake), rather than via DSPs.
A: The paths to the source files are recorded in the debugging information (Program Database, .pdb). Make the build tree on your machine the same as the machine it was built on.
A: Yes.
go into
TOOLS
OPTIONS
DIRECTORY (tab)
and you can set the SOURCES/LIBRARIES/INCLUDE directory locations. These values apply to all projects within the workspace.
I do not know if setting those values will allow the information to be seen using direct makefiles.
A: Absolute path information is not recorded in our PDB files, since we deliberately do not want to tie our source tree to a particular top-level directory; when it is deployed, it is not possible to drop the source tree in the same position as was used on the build machine.
EvilTeach's solution certainly gives the desired effect, though our source tree consists of literally hundreds of directories, making entering them manually somewhat cumbersome. There's also the problem that a developer may have multiple source trees that they're running from at any given time, so being able to switch between those trees when debugging a given executable is essential.
I subsequently found that you can programmatically (well, at least from the command line) switch a set of source directories by directly updating the registry:
REGEDIT4
[HKEY_CURRENT_USER\Software\Microsoft\Devstudio\6.0\Build
System\Components\Platforms\Win32 (x86)\Directories]
"Source Dirs"="<path1>;<path2>"
That's not too bad, and would certainly do the trick.
However, the solution I settled upon was setting the SOURCE environment variable to contain all the source paths (as a semicolon-separated list of directories). A very simple batch file could do this, and allow switching between different trees. Then, you run up Visual C++ from the command line, using the option telling it to read SOURCE (and INCLUDE, LIB and PATH) from the environment:
msdev /useenv
Looking under Tools->Options, you'll see that the directories from SOURCE have indeed been loaded. I was then able to attach to a running process, and the debugger was able to locate any code that I debugged into.
Life just got that much easier!
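A minimal sketch of such a batch file (the tree paths here are hypothetical; list one semicolon-separated entry per source directory):

```bat
@echo off
rem Select one particular source tree, then start VC6 with /useenv
rem so it reads SOURCE/INCLUDE/LIB/PATH from the environment.
set TREE=C:\work\tree1
set SOURCE=%TREE%\app;%TREE%\lib\core;%TREE%\lib\ui
msdev /useenv
```

Switching between trees is then just a matter of editing (or parameterizing) the TREE variable and relaunching.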
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What can I do to increase the performance of a Lua program? I asked a question about Lua performance, and one of the responses asked:
Have you studied general tips for keeping Lua performance high? i.e. know table creation and rather reuse a table than create a new one, use of 'local print=print' and such to avoid global accesses.
This is a slightly different question from Lua Patterns,Tips and Tricks because I'd like answers that specifically impact performance and (if possible) an explanation of why performance is impacted.
One tip per answer would be ideal.
A: In response to some of the other answers and comments:
It is true that as a programmer you should generally avoid premature optimization. But. This is not so true for scripting languages where the compiler does not optimize much -- or at all.
So, whenever you write something in Lua, and that is executed very often, is run in a time-critical environment or could run for a while, it is a good thing to know things to avoid (and avoid them).
This is a collection of what I found out over time. Some of it I found out over the net, but being of a suspicious nature when the interwebs are concerned I tested all of it myself. Also, I have read the Lua performance paper at Lua.org.
Some reference:
*
*Lua Performance Tips
*Lua-users.org Optimisation Tips
Avoid globals
This is one of the most common hints, but stating it once more can't hurt.
Globals are stored in a hashtable by their name. Accessing them means you have to access a table index. While Lua has a pretty good hashtable implementation, it's still a lot slower than accessing a local variable. If you have to use globals, assign their value to a local variable, this is faster at the 2nd variable access.
do
x = gFoo + gFoo;
end
do -- this actually performs better.
local lFoo = gFoo;
x = lFoo + lFoo;
end
(Note that simple testing may yield different results. E.g. with local x; for i=1, 1000 do x=i; end the for-loop header actually takes more time than the loop body, so profiling results could be distorted.)
Avoid string creation
Lua hashes all strings on creation, this makes comparison and using them in tables very fast and reduces memory use since all strings are stored internally only once. But it makes string creation more expensive.
A popular option to avoid excessive string creation is using tables. For example, if you have to assemble a long string, create a table, put the individual strings in there and then use table.concat to join it once
-- do NOT do something like this
local ret = "";
for i=1, C do
ret = ret..foo();
end
If foo() would return only the character A, this loop would create a series of strings like "", "A", "AA", "AAA", etc. Each string would be hashed and reside in memory until the application finishes -- see the problem here?
-- this is a lot faster
local ret = {};
for i=1, C do
ret[#ret+1] = foo();
end
ret = table.concat(ret);
This method does not create strings at all during the loop, the string is created in the function foo and only references are copied into the table. Afterwards, concat creates a second string "AAAAAA..." (depending on how large C is). Note that you could use i instead of #ret+1 but often you don't have such a useful loop and you won't have an iterator variable you can use.
Another trick I found somewhere on lua-users.org is to use gsub if you have to parse a string
some_string:gsub(".", function(m)
return "A";
end);
This looks odd at first, the benefit is that gsub creates a string "at once" in C which is only hashed after it is passed back to lua when gsub returns. This avoids table creation, but possibly has more function overhead (not if you call foo() anyway, but if foo() is actually an expression)
Avoid function overhead
Use language constructs instead of functions where possible
function ipairs
When iterating a table, the function overhead from ipairs does not justify its use. To iterate a table, instead use
for k=1, #tbl do local v = tbl[k];
It does exactly the same without the function call overhead (ipairs actually returns another function which is then called for every element in the table, while #tbl is only evaluated once). It's a lot faster, even if you need the value. And if you don't...
Note for Lua 5.2: In 5.2 you can actually define a __ipairs field in the metatable, which does make ipairs useful in some cases. However, Lua 5.2 also makes the __len field work for tables, so you might still prefer the above code to ipairs as then the __len metamethod is only called once, while for ipairs you would get an additional function call per iteration.
functions table.insert, table.remove
Simple uses of table.insert and table.remove can be replaced by using the # operator instead. Basically this is for simple push and pop operations. Here are some examples:
table.insert(foo, bar);
-- does the same as
foo[#foo+1] = bar;
local x = table.remove(foo);
-- does the same as
local x = foo[#foo];
foo[#foo] = nil;
For shifts (eg. table.remove(foo, 1)), and if ending up with a sparse table is not desirable, it is of course still better to use the table functions.
Use tables for SQL-IN alike compares
You might - or might not - have decisions in your code like the following
if a == "C" or a == "D" or a == "E" or a == "F" then
...
end
Now this is a perfectly valid case, however (from my own testing) starting with 4 comparisons and excluding table generation, this is actually faster:
local compares = { C = true, D = true, E = true, F = true };
if compares[a] then
...
end
And since hash tables have constant look up time, the performance gain increases with every additional comparison. On the other hand if "most of the time" one or two comparisons match, you might be better off with the Boolean way or a combination.
Avoid frequent table creation
This is discussed thoroughly in Lua Performance Tips. Basically the problem is that Lua allocates your table on demand, and doing it this way will actually take more time than cleaning its contents and filling it again.
However, this is a bit of a problem, since Lua itself does not provide a method for removing all elements from a table, and pairs() is not a performance beast itself. I have not done any performance testing on this problem myself yet.
If you can, define a C function that clears a table; this should be a good solution for table reuse.
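Short of a C helper, a pure-Lua sketch of the reuse pattern (the clear helper name is my own):

```lua
-- Reuse one scratch table instead of allocating a new one each iteration.
-- Clearing in Lua still costs a pairs() walk; a C-side clear would be faster.
local function clear(t)
    for k in pairs(t) do t[k] = nil end
    return t
end

local scratch = {}
for i = 1, 100 do
    clear(scratch)          -- reuse: no fresh allocation, less GC pressure
    scratch.index = i
    scratch.square = i * i
    -- ... use scratch here; do not keep references to it past the loop ...
end
```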
Avoid doing the same over and over
This is the biggest problem, I think. While a compiler in a non-interpreted language can easily optimize away a lot of redundancies, Lua will not.
Memoize
Using tables, this can be done quite easily in Lua. For single-argument functions you can even replace the function with a table plus an __index metamethod. Even though this destroys transparency, performance is better on cached values due to one less function call.
Here is an implementation of memoization for a single argument using a metatable. (Important: This variant does not support a nil value argument, but is pretty damn fast for existing values.)
function tmemoize(func)
return setmetatable({}, {
__index = function(self, k)
local v = func(k);
self[k] = v
return v;
end
});
end
-- usage (does not support nil values!)
local mf = tmemoize(myfunc);
local v = mf[x];
You could actually modify this pattern for multiple input values
Partial application
The idea is similar to memoization, which is to "cache" results. But here, instead of caching the results of the function, you cache intermediate values by putting their calculation in a constructor function that defines the calculation function in its block. In reality I would just call it clever use of closures.
-- Normal function
function foo(a, b, x)
return cheaper_expression(expensive_expression(a,b), x);
end
-- foo(a,b,x1);
-- foo(a,b,x2);
-- ...
-- Partial application
function foo(a, b)
local C = expensive_expression(a,b);
return function(x)
return cheaper_expression(C, x);
end
end
-- local f = foo(a,b);
-- f(x1);
-- f(x2);
-- ...
This way it is possible to easily create flexible functions that cache some of their work without too much impact on program flow.
An extreme variant of this would be Currying, but that is actually more a way to mimic functional programming than anything else.
Here is a more extensive ("real world") example with some code omissions, otherwise it would easily take up the whole page here (namely, get_color_values actually does a lot of value checking and accepts mixed values)
function LinearColorBlender(col_from, col_to)
local cfr, cfg, cfb, cfa = get_color_values(col_from);
local ctr, ctg, ctb, cta = get_color_values(col_to);
local cdr, cdg, cdb, cda = ctr-cfr, ctg-cfg, ctb-cfb, cta-cfa;
if not cfr or not ctr then
error("One of given arguments is not a color.");
end
return function(pos)
if type(pos) ~= "number" then
error("arg1 (pos) must be in range 0..1");
end
if pos < 0 then pos = 0; end;
if pos > 1 then pos = 1; end;
return cfr + cdr*pos, cfg + cdg*pos, cfb + cdb*pos, cfa + cda*pos;
end
end
-- Call
local blender = LinearColorBlender({1,1,1,1},{0,0,0,1});
object:SetColor(blender(0.1));
object:SetColor(blender(0.3));
object:SetColor(blender(0.7));
You can see that once the blender was created, the function only has to sanity-check a single value instead of up to eight. I even extracted the difference calculation, though it probably does not improve a lot, I hope it shows what this pattern tries to achieve.
A: *
*Making the most used functions locals
*Making good use of tables as HashSets
*Lowering table creation by reutilization
*Using luajit!
A: It must also be pointed out that using array fields of tables is much faster than using tables with other kinds of keys. This is because (almost) all Lua implementations (including LuaJ) store a so-called "array part" inside tables; array-style fields are accessed through it directly, without storing the field key or looking it up ;).
You can also imitate static aspects of other languages (C structs, C++/Java classes, etc.); locals and arrays are enough.
A: If your lua program is really too slow, use the Lua profiler and clean up expensive stuff or migrate to C. But if you're not sitting there waiting, your time is wasted.
The first law of optimization: Don't.
I'd love to see a problem where you have a choice between ipairs and pairs and can measure the effect of the difference.
The one easy piece of low-hanging fruit is to remember to use local variables within each module. It's generally not worth doing stuff like
local strfind = string.find
unless you can find a measurement telling you otherwise.
A: Keep tables short; the larger the table, the longer the search time.
Along the same lines, iterating over numerically indexed tables (arrays) is faster than iterating over key-based tables (thus ipairs is faster than pairs).
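As a rough way to check such claims yourself, here is a micro-benchmark sketch (timings depend heavily on the Lua version and machine, and os.clock has limited resolution, so run it several times):

```lua
-- Rough micro-benchmark: numeric for-loop vs. pairs over an array-like table.
local t = {}
for i = 1, 1000000 do t[i] = i end

local start = os.clock()
local sum1 = 0
for i = 1, #t do sum1 = sum1 + t[i] end
print(("numeric for: %.3fs"):format(os.clock() - start))

start = os.clock()
local sum2 = 0
for _, v in pairs(t) do sum2 = sum2 + v end
print(("pairs:       %.3fs"):format(os.clock() - start))

assert(sum1 == sum2)  -- both loops must visit the same elements
```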
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: LINQ query with multiple aggregates How would I create the equivalent Linq To Objects query?
SELECT MIN(CASE WHEN p.type = "In" THEN p.PunchTime ELSE NULL END ) AS EarliestIn,
MAX(CASE WHEN p.type = "Out" THEN p.PunchTime ELSE NULL END ) AS LatestOUt
FROM Punches p
A: Single enumeration yielding both min and max (and any other aggregate you want to throw in there). This is much easier in vb.net.
I know this doesn't handle the empty case. That's pretty easy to add.
List<int> myInts = new List<int>() { 1, 4, 2, 0, 3 };
var y = myInts.Aggregate(
new { Min = int.MaxValue, Max = int.MinValue },
(a, i) =>
new
{
Min = (i < a.Min) ? i : a.Min,
Max = (a.Max < i) ? i : a.Max
});
Console.WriteLine("{0} {1}", y.Min, y.Max);
A: You can't efficiently select multiple aggregates in vanilla LINQ to Objects. You can perform multiple queries, of course, but that may well be inefficient depending on your data source.
I have a framework which copes with this which I call "Push LINQ" - it's only a hobby (for me and Marc Gravell) but we believe it works pretty well. It's available as part of MiscUtil, and you can read about it in my blog post on it.
It looks slightly odd - because you define where you want the results to go as "futures", then push the data through the query, then retrieve the results - but once you get your head round it, it's fine. I'd be interested to hear how you get on with it - if you use it, please mail me at skeet@pobox.com.
A: It is possible to do multiple aggregates with LINQ-to-Objects, but it is a little ugly.
var times = punches.Aggregate(
new { EarliestIn = default(DateTime?), LatestOut = default(DateTime?) },
(agg, p) => new {
EarliestIn = Min(
agg.EarliestIn,
p.type == "In" ? (DateTime?)p.PunchTime : default(DateTime?)),
LatestOut = Max(
agg.LatestOut,
p.type == "Out" ? (DateTime?)p.PunchTime : default(DateTime?))
}
);
You would also need Min and Max functions for DateTime, since these are not available in the standard library.
public static DateTime? Max(DateTime? d1, DateTime? d2)
{
if (!d1.HasValue)
return d2;
if (!d2.HasValue)
return d1;
return d1.Value > d2.Value ? d1 : d2;
}
public static DateTime? Min(DateTime? d1, DateTime? d2)
{
if (!d1.HasValue)
return d2;
if (!d2.HasValue)
return d1;
return d1.Value < d2.Value ? d1 : d2;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Specify remote port to use for mail via exim4 I've got a stock Debian Etch system, using Exim4. The domains are mostly local but there are some that are remote. To handle the delivery of remote mail I use the Debian configuration file:
/etc/exim4/hubbed_hosts
This file lists the domain names, and remote MX machines to deliver to. For example:
example.org: mx.example.com
example.com: mx2.example.com
Looking at the exim4 configuration file I see that this used as follows:
hubbed_hosts:
debug_print = "R: hubbed_hosts for $domain"
driver = manualroute
domains = "${if exists{CONFDIR/hubbed_hosts}\
{partial-lsearch;CONFDIR/hubbed_hosts}\
fail}"
route_data = ${lookup{$domain}partial-lsearch{CONFDIR/hubbed_hosts}}
transport = remote_smtp
The issue I have is that some of the hosts I'm using need to have their mail delivered to a non-standard port. Unfortunately the Debian hubbed_hosts file doesn't work if I try to change it to include a port:
example.org: mx1.example.org:2525
example.com: 1.2.3.4:2525
Is it possible to dynamically allow the port to be specified?
A: This is actually supported by default without any changes to your exim4 config.
In hubbed_hosts, you separate hosts with a colon, and you specify a port number with a double-colon.
EX:
domain1: server1:server2::port:server3
domain2: server1::port
domain3: server1:server2
For more info check out http://www.exim.org/exim-html-current/doc/html/spec_html/ch20.html#SECID122
A: I was hoping for something a little more dynamic - and this solution works:
port = ${if exists{/etc/exim4/ports.list}\
{${lookup{$domain}lsearch{/etc/exim4/ports.list}\
{$value}{25}}}{25}}
Then a simple file may have a list of ports on a per-domain basis:
example.org: 2525
example.com: 26
A: You could probably use the ${extract} operator to let you combine the port numbers and host names, like in the example in your original question.
Something like (untested):
route_data = ${extract{1}{:}{${lookup{$domain}partial-lsearch{CONFDIR/hubbed_hosts}}}}
A: make a new transport that specifies the port
remote_hub_2525:
driver = smtp
port = 2525
and then create a router for the domains needing non-standard delivery
non_standard_hub:
driver = manualroute
domains = example.org : example.com
transport = remote_hub_2525
no_more
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Get Numeric Value from DataGridViewCell? I'm trying to retrieve numeric values from a DataGridView. So far, the only way I've found is to retrieve them as a string and convert them to numeric.
Convert.ToDouble(MyGrid.SelectedRows[0].Cells[0].Value.ToString());
There must be an easier way. The cell is originally populated from a DataSet with a numeric field value but since the DataGridViewCell object returns it as an object, I can't do a straight assignment. I must be missing something simple here.
Thanks.
A: I've actually just recently dealt with this problem, and TryParse is, I think, your best bet with respect to robustness. But don't forget to check whether the cell's Value is null first; otherwise the ToString() call will throw.
double d = 0;
if(grid[col,row].Value != null)
double.TryParse(grid[col,row].Value.ToString(), out d);
I would also recommend avoiding a straight cast, unless you know absolutely what type you are converting and that there will, in fact, be a value there, it will probably at some point cause an error in your code.
A: With DataGridViewCell you can just cast the .Value to your known type; the following is a complete example that shows this happening (using double) from a DataTable (like your example).
Additionally, Convert.To{blah}(...) and Convert.ChangeType(...) might be helpful.
using System.Data;
using System.Windows.Forms;
static class Program
{
static void Main()
{
Application.EnableVisualStyles();
DataTable table = new DataTable
{
Columns = {
{"Foo", typeof(double)},
{"Bar", typeof(string)}
},
Rows = {
{123.45, "abc"},
{678.90, "def"}
}
};
Form form = new Form();
DataGridView grid = new DataGridView {
Dock = DockStyle.Fill, DataSource = table};
form.Controls.Add(grid);
grid.CurrentCellChanged += delegate
{
form.Text = string.Format("{0}: {1}",
grid.CurrentCell.Value.GetType(),
grid.CurrentCell.Value);
if (grid.CurrentCell.Value is double)
{
double val = (double)grid.CurrentCell.Value;
form.Text += " is a double: " + val;
}
};
Application.Run(form);
}
}
A: DataGridViewCell has a ValueType property. You can use it to cast the value directly to that type, without first converting it to a string:
if(MyGrid.SelectedRows[0].Cells[0].ValueType != null &&
   MyGrid.SelectedRows[0].Cells[0].ValueType == typeof(double))
    return (double)MyGrid.SelectedRows[0].Cells[0].Value;
A: What is the error you are getting? Convert.ToDouble has an overloaded method that takes an object, so you shouldn't need the ToString()? Unless you are doing a TryParse?
A: Since DataGridViewCell.Value is an Object type, you really need to convert it to the appropriate type, Double or any numeric type in your case. The DataGridView contents aren't strongly typed so you have to cast the values retrieved from its cells.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I keep a class from being inherited in C#? In java, I could do this with the 'final' keyword. I don't see 'final' in C#. Is there a substitute?
A: Also be aware that "I don't think anybody will ever need to inherit from this" is not a good reason to use "sealed". Unless you've got a specific need to ensure that a particular implementation is used, leave the class unsealed.
A: You're looking for the sealed keyword. It does exactly what the final keyword in Java does. Attempts to inherit will result in a compilation error.
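A minimal sketch of that compile-time effect (class names are illustrative):

```csharp
public sealed class FinalClass
{
    public int Value { get; set; }
}

// This does not compile:
// error CS0509: 'Derived': cannot derive from sealed type 'FinalClass'
// public class Derived : FinalClass { }
```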
A: As Joel already advised, you can use sealed instead of final in C#.
http://en.csharp-online.net/CSharp_FAQ:_Does_CSharp_support_final_classes
A: The sealed modifier will do what final does in Java.
Also, although this probably isn't what you're looking for in this situation, marking a class as static also keeps it from being inherited (it becomes sealed behind the scenes).
A: The sealed keyword would work. Note that sealing is enforced by the runtime itself, so you cannot derive from a sealed class even via reflection; attempting it with Reflection.Emit fails at type-load time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: CrystalReportViewer Buttons Broken using MVC Framework We are using the MVC framework (release 5) and the CrystalReportViewer control to show our reports. I cannot get any of the buttons at the top of the report viewer control to work.
Say I'm working with the report 'HoursSummary'. If I hover over any of the buttons on the report viewer in IE, the link displayed at the bottom of the page is '../HoursSummary'. This creates a URL of 'http://localhost/HoursSummary'. There is no 'HoursSummary' controller, so I keep receiving 404 errors.
*
*I believe I want to redirect to 'http://localhost/reports/HoursSummary' since I do have a reports controller. If this is the correct method does anyone know which property I should set on the CrystalReportViewer control to make that happen?
*Is there an easier method to handle this situation?
A: If that's a server control, it won't work. ASP.NET MVC doesn't use postbacks, so most WebForms server controls don't function.
What you can do is embed the report viewer in an iFrame and output that in your MVC view. The iframe can point to a page outside of the MVC stuff, say in a subfolder called Legacy or something.
A: We were able to get the report viewer to work and have been using it for the past few months in production without any issues.
*
*We have a reports controller that lists links to the reports we want to run
*Clicking on one of the links will make an ajax call to the back end and return a partial page where we can fill in all the parameters we need.
*After the parameters are filled out we submit the form to '\reports\Name of Report'.
*Back in the Reports controller we call SQL, return our data, and then call a different view called 'Full Report'
*The 'Full Report' View only has a crystal report viewer control on it where it automatically takes the report data we pass over to it through ViewData, populates the report, renders it, and sends it to the user
Everything seems to work great.
UPDATE
I've added some code and clarification to the steps I originally listed above. The key item I left out was there is some code behind with the final View so it will work with Crystal Reports. The code behind is minimal, but needed. For Crystal Reports to work you are going to end up with the following files:
*
*A layout file.rpt where you design the report
*A aspx file that holds the Crystal Reports Report control. This is the file that will have some code behind.
Details on how to create a View that will work with Crystal Reports:
*
*Create the layout of your report using the Crystal Reports Designer. The resulting file will be an .rpt file. For the sake of this example, let's call this file AllJobsSummaryReportLayout.rpt.
*While designing your report, for the 'Database Fields' select one of the business entities or DTOs that holds the results coming back from SQL.
*A quick aside, we have a few data transfer objects (DTOs) in our system that contain nothing more than scalar values and strings, there is no intelligence in these DTOs. When the Controller is called, it calls the Model, the Model for most of these reports returns a List of DTOs that we then pass to the View to be rendered. These DTOs do not know how to query themselves, display themselves, they only contain actual values returned from SQL that someone else then renders.
*Once the layout Crystal Report file is completed, the AllJobsSummaryReportLayout.rpt, we design our Controller. In the Controller we take in any parameters needed to run the report, call the Model, Model returns our list of DTOs, as seen in the snippet below from the Controller:
var reportViewData = model.AllJobsSummaryQuery(startDate, endDate);
if (0 != reportViewData.Count())
{
var report = new AllJobsSummaryReportLayout();
report.SetDataSource(reportViewData);
report.SetParameterValue("startDate", startDate);
report.SetParameterValue("endDate", endDate);
ViewData["ReportData"] = report;
returnView = "AllJobsSummaryView";
}
else
returnView = "noReportView";
return View(returnView);
*Note a couple of items here: we are creating a variable 'report' that is an instance of the Crystal Reports layout class, AllJobsSummaryReportLayout, that we created above.
*Once we created the 'report' variable we set the data source values and any parameters we need, and bundle the item up into the ViewData.
*Now let's take a look at AllJobsSummaryView.aspx. This file has a form on it with a Crystal Reports Viewer and a code behind file:
<%@ Page Title="All Jobs Summary Report" Language="C#" AutoEventWireup="true" CodeBehind="AllJobsSummaryView.aspx.cs" Inherits="V.Views.Reports.AllJobsSummaryView"%>
<%@ Register Assembly="CrystalDecisions.Web, Version=10.5.3700.0, Culture=neutral, PublicKeyToken=692fbea5521e1304" Namespace="CrystalDecisions.Web" TagPrefix="CR" %>
<form id="form1" runat="server">
<div>
<a href="/Reports" id="Report"><< Return to Report Main
Page</a><br />
<CR:CrystalReportViewer ID="ReportViewer" runat="server" AutoDataBind="True" EnableDatabaseLogonPrompt="False"
EnableParameterPrompt="False" HasCrystalLogo="False" DisplayGroupTree="False"
HasDrillUpButton="False" HasToggleGroupTreeButton="False" HasViewList="False"
HasSearchButton="False" EnableDrillDown="False" EnableViewState="True"
Height="50px" ReportSourceID="CrystalReportSource1" Width="350px" />
<CR:CrystalReportSource ID="CrystalReportSource1" runat="server">
<Report FileName="AllJobsSummaryReportLayout.rpt">
</Report>
</CR:CrystalReportSource>
</div>
</form>
*
*And the code behind file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
namespace V.Views.Reports
{
public partial class AllJobsSummaryView : ViewPage
{
protected void Page_Init(object sender, EventArgs e)
{
ReportViewer.ReportSource = ViewData["ReportData"];
}
protected void Page_Unload(object sender, EventArgs e)
{
((AllJobsSummaryReportLayout)ViewData["ReportData"]).Close();
((AllJobsSummaryReportLayout)ViewData["ReportData"]).Dispose();
}
}
}
*The Page_Unload is key, without it you will have an error generated by Crystal Reports 'You have exceeded the max number of reports set by your administrator.'
This method has been working in a production environment for well over two years now.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Can I split a string in c# VB6-style without referencing Microsoft.VisualBasic? Unfortunately, there seems to be no string.Split(string separator), only string.Split(char separator).
I want to break up my string based on a multi-character separator, a la VB6. Is there an easy (that is, not by referencing Microsoft.VisualBasic or having to learn RegExes) way to do this in c#?
EDIT: Using .NET Framework 3.5.
A: String.Split() has other overloads. Some of them take string[] arguments.
string original = "first;&second;&third";
string[] splitResults = original.Split( new string[] { ";&" }, StringSplitOptions.None );
A: The regex for splitting a string is extremely simple, so I would go with that route.
http://msdn.microsoft.com/en-us/library/8yttk7sy.aspx
A: Which version of .Net? At least 2.0 onwards includes the following overloads:
.Split(string[] separator, StringSplitOptions options)
.Split(string[] separator, int count, StringSplitOptions options)
Now if they'd only fix it to accept any IEnumerable<string> instead of just array.
A: The regex version is probably prettier but this works too:
string[] y = { "bar" };
string x = "foobarfoo";
foreach (string s in x.Split(y, StringSplitOptions.None))
Console.WriteLine(s);
This'll print foo twice.
A: string[] stringSeparators = new string[] {"[stop]"};
string[] result;
result = someString.Split(stringSeparators, StringSplitOptions.None);
via http://msdn.microsoft.com/en-us/library/tabh47cf.aspx
A: I use this under .NET 2.0 all the time.
string[] args = "first;&second;&third".Split(";&".ToCharArray(),StringSplitOptions.RemoveEmptyEntries);
(Note that ToCharArray() makes this split on each character — ';' and '&' — individually, not on the two-character string; StringSplitOptions.RemoveEmptyEntries hides the difference for this particular input.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is the best way to store media files on a database? I want to store a large number of sound files in a database, but I don't know if it is a good practice. I would like to know the pros and cons of doing it in this way.
I also thought on the possibility to have "links" to those files, but maybe this will carry more problems than solutions. Any experience in this direction will be welcome :)
Note: The database will be MySQL.
A: Advantages of using a database:
*
*Easy to join sound files with other
data bits.
*Avoiding file i/o operations that
bypass database security.
*No need for separate operations to delete sound files when database records are deleted.
Disadvantages of using a database:
*
*Database bloat
*Databases can be more expensive than file systems
A: I've experimented in different projects with doing it both ways and we've finally decided that it's easier to use the file system as well. After all, the file system is already optimized for storing, retrieving, and indexing files.
The one tip that I would have about that is to only store a "root relative" path to the file in the database, then have your program or your queries/stored procedures/middle-ware use an installation specific root parameter to retrieve the file.
For example, if you store XYZ.Wav in C:\MyProgram\Data\Sounds\X\ the full path would be
C:\MyProgram\Data\Sounds\X\XYZ.Wav
But you would store the path and or filename in the database as:
X\XYZ.Wav
Elsewhere, in the database or in your program's configuration files, store a root path like SoundFilePath equal to
C:\MyProgram\Data\Sounds\
Of course, where you split the root from the database path is up to you. That way if you move your program installation, you don't have to update the database.
Also, if there are going to be lots of files, find some way of hashing the paths so you don't wind up with one directory containing hundreds or thousands of files (in my little example, there are subdirectories based on the first character of the filename, but you can go deeper or use random hashes). This makes search indexers happy as well.
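The root-relative storage plus first-character bucketing described above can be sketched as a small helper. This is a hypothetical Java illustration (class name, bucketing rule, and paths are all made up for the example, not taken from the answer):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class SoundPathResolver {
    private final Path root; // installation-specific, e.g. read from a config file

    public SoundPathResolver(String rootPath) {
        this.root = Paths.get(rootPath);
    }

    // The only value stored in the database: a root-relative path, bucketed by
    // the first character so no single directory collects thousands of files.
    public static String relativePathFor(String fileName) {
        char bucket = Character.toUpperCase(fileName.charAt(0));
        return bucket + "/" + fileName;
    }

    // Rejoin the stored value with the configured root at runtime.
    public Path resolve(String dbRelativePath) {
        return root.resolve(dbRelativePath);
    }
}
```

Because only the relative part lives in the database, moving the installation just means changing the one configured root.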
A: Some advantages of using blobs to store files
*
*Lower management overhead - use a single tool to backup / restore etc
*No possibility for database and filesystem to be out of sync
*Transactional capability (if needed)
Some disadvantages
*
*blows up your database servers' RAM with useless rubbish it could be using to store rows, indexes etc
*Makes your DB backups very large, hence less manageable
*Not as convenient as a filesystem to serve to clients (e.g. with a web server)
What about performance? Your mileage may vary. Filesystems are extremely varied, so are databases in their performance. In some cases a filesystem will win (probably with fewer larger files). In some cases a DB might be better (maybe with a very large number of smallish files).
In any case, don't worry, do what seems best at the time.
Some databases offer a built-in web server to serve blobs. At the time of writing, MySQL does not.
A: You could store them as BLOBs (or LONGBLOBs) and then retrieve the data out when you want to actually access the media files.
or
You could simply store the media files on a drive and store the metadata in the DB.
I lean toward the latter method. I don't know how this is done overall in the world, but I suspect that many others would do the same.
You can store links (partial paths to the data) and then retrieve this info. Makes it easy to move things around on drives and still access it.
I store off the relative path of each file in the DB along with other metadata about the files. The base path can then be changed on the fly if I need to relocate the actual data to another drive (either local or via UNC path).
That's how I do it. I'm sure others will have ideas too.
A: Store them as external files. Then save the path in a varchar field. Putting large binary blobs into a relational database is generally very inefficient: they only use up space and slow things down by filling caches with data that cannot be usefully cached. And there's nothing to be gained, since the blobs themselves cannot be searched. You might want to save media metadata in the database, though.
A: A simple solution would be to just store the relative locations of the files as strings and let the filesystem handle it. I've tried it on a project (we were storing office file attachments to a survey), and it worked fine.
A: I think storing them in the database is ok, as long as you use a good implementation. You can read this older but good article for ideas on how to keep the larger amounts of data in the database from affecting performance.
http://www.dreamwerx.net/phpforum/?id=1
I've had literally 100's of gigs loaded in mysql databases without any issues. The design and implementation is key, do it wrong and you'll suffer.
More DB Advantages (not already mentioned):
*
*Works better in a load balanced environment
*You can build in more backend storage scalability
A: Every system I know of that stores large numbers of big files stores them externally to the database. You store all of the queryable data for the file (title, artist, length, etc) in the database, along with a partial path to the file. When it's time to retrieve the file, you extract the file's path, prepend some file root (or URL) to it, and return that.
So, you'd have a "location" column, with a partial path in it, like "a/b/c/1000", which you then map to:
"http://myserver/files/a/b/c/1000.mp3"
Make sure that you have an easy way to point the media database at a different server/directory, in case you need that for data recovery. Also, you might need a routine that re-syncs the database with the contents of the file archive.
Also, if you're going to have thousands of media files, don't store them all in one giant directory - that's a performance bottleneck on some file systems. Instead, break them up into multiple balanced sub-trees.
A: The best way to store audio/video files is to use distributed storage, either local or in the cloud.
Self-hosted: https://min.io/
Cloud: AWS S3
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: Multi-Line group and search with Regex Ok, Regex wizards. I want to be able to search through my logfile and find any sessions with the word 'error' in it and then return the entire session log entry.
I know I can do this with a string/array but I'd like to learn how to do it with Regex but here's the question. If I decide to do this with Regex do I have one or two problems? ;o)
Here's the log:
PS: I'm using the perl Regex engine.
Note: I don't think I can get this done in Regex. In other words, I now have two problems. ;o) I've tried the solutions below but, since I've confused the issue by stating that I was using a Perl engine, many of the answers were in Perl (which cannot be used in my case). I did however post my solution below.
2008.08.27 08:04:21 (Wed)------------Start of Session-----------------
Blat v2.6.2 w/GSS encryption (build : Feb 25 2007 12:06:19)
Sending stdin.txt to foo@bar.com
Subject: test 1
Login name is foo@bar.com
The SMTP server does not require AUTH LOGIN.
Are you sure server supports AUTH?
The SMTP server does not like the sender name.
Have you set your mail address correctly?
2008.08.27 08:04:24 (Wed)-------------End of Session------------------
2008.08.27 08:05:56 (Wed)------------Start of Session-----------------
Blat v2.6.2 w/GSS encryption (build : Feb 25 2007 12:06:19)
Error: Wait a bit (possible timeout).
SMTP server error
Error: Not a socket.
Error: Not a socket.
2008.08.27 08:06:26 (Wed)-------------End of Session------------------
2008.08.27 08:07:58 (Wed)------------Start of Session-----------------
Blat v2.6.2 w/GSS encryption (build : Feb 25 2007 12:06:19)
Sending stdin.txt to foo@bar.com
Subject: Lorem Update 08/27/2008
Login name is foo@bar.com
2008.08.27 08:07:58 (Wed)-------------End of Session------------------
A: Kyle's answer is probably the most perlish, but in case you have it all in one string and want to use a single regex, here's a (tested) solution:
(Second update: fixed a bit, now more readable then ever ;-)
my $re = qr{
( # capture in $1
(?:
(?!\n\n). # Any character that's not at a paragraph break
)* # repeated
error
(?:
(?!\n\n).
)*
)
}msxi;
while ($s =~ m/$re/g){
print "'$1'\n";
}
Ugly, but you asked for it.
A: It looks as if your sessions are delimited by blank lines (in addition to the start/end markers). If that's the case, this is a one liner:
perl -ne 'BEGIN{$/=""} print if /error/i' < logfile
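The trick here is paragraph mode: setting $/ to the empty string makes Perl read one blank-line-delimited record (one session) at a time. The same blank-line-delimited approach can be sketched in Java as well — a hypothetical helper, not something from the thread:

```java
import java.util.ArrayList;
import java.util.List;

public class ErrorSessionFilter {
    // Treat a blank line as the record separator (what $/ = "" does in Perl),
    // then keep only the sessions that mention "error" in any case.
    public static List<String> errorSessions(String log) {
        List<String> hits = new ArrayList<>();
        for (String session : log.split("\\r?\\n\\r?\\n")) {
            if (session.toLowerCase().contains("error")) {
                hits.add(session);
            }
        }
        return hits;
    }
}
```

Each element of the returned list is a whole session block, which matches the goal of returning the entire log entry rather than just the matching line.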
A: /(?:[^\n\r]|\r?\n(?!\r|\n))*?Error:(?:[^\n\r]|\r?\n(?!\r|\n))*/g
This takes advantage of the blank lines in between the entries. It works for both unix and windows line breaks. You can replace the text "Error:" in the middle with almost anything else if you would like.
A: Like the last guy said, perl from the command line will work. So will awk from the command line:
awk '/-Start of Session-/ { text=""; gotError=0; } /Error/{gotError=1;}/-End of Session-/{ if(gotError) {print text}} { text=text "\n" $0}' logFileName.txt
Basically, start recording on a line with "-Start of Session-", set a flag on a line with "Error", and conditionally output on a line with "-End of Session-".
Or put this into errorLogParser.awk:
/-Start of Session-/{
text="";
gotError=0;
}
/Error/{
gotError=1;
}
/-End of Session-/{
if(gotError)
{
print text
}
}
{
text=text "\n" $0
}
... and invoke like so:
awk -f errorLineParser.awk logFileName.txt
A: With a perl regexp engine, the simple regexp
Error:.+
does the trick according to quickrex.
(With a java regexp engine, another regexp would have been required:
(?ms)^Error:[^\r\n]+$
)
a regexp with a capturing group would allow to redirect only the error message and not 'Error' itself, as in:
Error:\s*(\S.+)
The group n°1 capture only what follows 'Error: '
Anyhow, for more on regexps, see the regular-expressions.info tutorial, a first-class introduction to the technique.
A: If you want to understand or play with any of these solutions, I highly recommend downloading Regex Coach, which helps you build up and test regular expressions
A: What I did was read the entire log into a string, then went through it line by line, appending each line to a third variable until the line contained "--End of Session--". I then added that line to the 3rd var as well and searched the 3rd var for the word "error". If it contained it, I appended the 3rd var to a fourth, then cleared the 3rd var and resumed walking the log from the next line.
It looks like this:
str a b email gp lgf
lgf.getfile( "C:\blat\log.txt")
foreach a lgf
if(find(a "--End of Session--")>-1)
gp.from(gp "[]" a)
if(find(gp "error" 0 1)>-1)
gp.trim
email.from(email gp "[]")
gp=""
continue
gp.from(gp "[]" a)
email.trim
It turns out that regex can really be a bear-cat to implement when it doesn't fit well. Kind of like using a screwdriver instead of a hammer. It'll get the job done, but it takes a long time, breaks the screwdriver, and probably hurts you in the process.
A: Once in a while when only Vim was available (and sed, awk which I did not master at that time), I did something like:
Via Vim I had joined all the lines between - in your case - Start of Session/End of Session to a Single line:
*
*First replaced all the line endings to some specific char:
:%s:$:#
*Then turned the double enters into some other separator:
:%s:#\n#\n:#\r@\r
*Joining the lines:
:%s:#\n:#
*Displayed only the lines with Error:
:v/[Ee]rror/d
*Split lines to their original format:
:%s:#:\r
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Precedence: header in email My web application sends email fairly often, and it sends 3 kinds of emails: initiated by user, in response to an event in the system, and in automatic response to an email received by the application.
I would like to make sure that the third type of email does not get stuck in an endless loop of auto-responders talking to each other. Currently, I use the header:
Precedence: junk
but Yahoo! mail is treating these messages as spam. This is obviously not ideal, because we would like SOMEBODY to read our auto-response and make a decision on it, just not an out-of-office reply.
What is the best way to send an email without triggering either junk filters or auto-responders?
Precedence: junk?
Precedence: bulk?
Precedence: list?
X-Priority: 2?
A: You can set these headers:
Precedence: bulk
Auto-Submitted: auto-generated
Source: http://www.redmine.org/projects/redmine/repository/revisions/2655/diff
A: There is a RFC 3834 dedicated for automated email responses.
In short, it recommends:
*
*Send auto-responses only to address contained in the Return-Path header of an incoming message, if it is valid email address. Particularly "<>" (null address) in the Return-Path of the message means that auto-responses must not be sent for this message.
*When sending auto-response, MAIL FROM smtp command must contain "<>" (null address). This would lead to Return-Path:<> when message will be delivered.
*Use Auto-Submitted header with value other than "no" to explicitly indicate automated response.
One note: it is not worth explicitly setting a Return-Path header in the outgoing message, as this header will be rewritten from the envelope address (the MAIL FROM SMTP command) during delivery.
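As a rough illustration of what an RFC 3834-style automated reply carries, the header block can be assembled by hand. This is only a hedged sketch — a real application would build the message through a mail library such as JavaMail, and the addresses here are placeholders:

```java
public class AutoResponseHeaders {
    // Builds the header block for an automated reply. "Auto-Submitted" is the
    // RFC 3834 marker; "Precedence: bulk" is a legacy hint some responders honor.
    public static String headersFor(String to, String subject) {
        return "To: " + to + "\r\n"
             + "Subject: " + subject + "\r\n"
             + "Auto-Submitted: auto-generated\r\n"
             + "Precedence: bulk\r\n";
        // Note: the null reverse-path ("MAIL FROM:<>") that suppresses further
        // auto-responses is set at the SMTP envelope level, not as a header.
    }
}
```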
A: RFC 2076 discourages the use of the Precedence header. As you have noted, many clients will just filter that off (especially the Precedence: junk variety). It may be better to use a null path to avoid auto-responder wars:
Return-Path: <>
Ultimately you could use priority to try to get around this, but this seems like going against the spirit of the header. I'd suggest just using the Return-Path for this, and avoiding Precedence. In some cases you may have to build in some way to drop auto-responders in your application (to avoid getting into a responder war), but I can't remember a situation in which this happened using an appropriate Return-Path. (Most auto-responder wars I recall having to deal with were the result of very badly formed emails.)
Note: the Return-Path header is, in short, the destination for notifications (bounces, delayed delivery, etc.), and is described in RFC 2821 -- because it's required by SMTP. It's also one method to drop bad mail (as theoretically all good mail will set an appropriate Return-Path).
A: The traditional way of dealing with this is to send the email with a null envelope-sender (traditionally written as <>). This prevents the autoresponder on the other end from responding because there's no sender to respond to.
A: How about configuring a white list on your email account?
I would assume that any email key words could get flagged by a junk filter.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: Can anyone explain how the oracle "hash group" works? I've recently come across a feature of doing a large query in oracle, where changing one thing resulted in a query that used to take 10 minutes taking 3 hours.
To briefly summarise, I store a lot of coordinates in the database, with each coordinate having a probability. I then want to 'bin' these coordinates into 50 metre bins (basically round the coordinate down to the nearest 50 metres) and sum the probability.
To do this, part of the query is 'select x,y,sum(probability) from .... group by x,y'
Initially I was storing a large number of points with a probability of 0.1 and queries were running reasonably ok, taking about 10 minutes for each one.
Then I had a request to change how the probabilities were calculated to adjust the distribution, so rather than all of them being 0.1, they were different values (e.g. 0.03, 0.06, 0.12, 0.3, 0.12, 0.06, 0.03). Running exactly the same query resulted in queries of about 3 hours.
Changing back to all 0.1 brought the queries back to 10 minutes.
Looking at the query plan and performance of the system, it looked like the problem was with the 'hash group' functionality designed to speed up grouping in oracle. I'm guessing that it was creating hash entries for each unique x,y,probability value and then summing probability for each unique x,y value.
Can anyone explain this behaviour any better?
Additional Info
Thanks to the answers. They allowed me to verify what was going on. I'm currently running a query and the tempseg_size from v$sql_workarea_active is currently at 7502561280 and growing rapidly.
Given that the development server I'm running on only has 8gb of ram, it looks like the query needs to use temporary tables.
I've managed to workaround this for now by changing the types of queries and precalculating some of the information.
A: Hash group (and hash joins, as well as other operations such as sorts etc.) can use either optimal (i.e. in-memory), one-pass or multi-pass methods. The last two methods use TEMP storage and are thus much slower.
By increasing the number of possible items you might have exceeded the number of items that will fit in memory reserved for this type of operations.
Try looking at v$sql_workarea_active whilst the query is running, to see if this is the case. Or look at v$sql_workarea for historical information. It will also give you an indication of how much memory and/or temp space is needed for the operation.
If turns out to be the actual problem - try increasing the pga_aggregate_target initialization parameter, if possible. The amount of memory available for optimal hash/sort operations is usually around a 5% fraction of the pga_aggregate_target.
See the Performance Tuning Guide for more detail.
A: "'m guessing that it was creating hash entries for each unique x,y,probability value and then summing probability for each unique x,y value" -- almost certainly so, since that is what the query requires.
You can check for the likelihood of a query requiring temporary dfisk space to complete a sort or group-by (etc) by using the explain plan.
explain plan for
select x,y,sum(probability) from .... group by x,y
/
select * from table(dbms_xplan.display)
/
If the optimizer can correctly deduce from statistics the approximate unique number of combinations of x and y then there's a pretty good chance that in the TempSpc column of the output of the second query it will show you just how much disk space (if any) will be required to complete the query (no column = no disk space requirement).
Way too much information here: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_xplan.htm#i999234
If the temp space usage is high then as CaptP says, it may be time for some memory tweakage. On databases that perform a lot of sorts and aggregations it is common to specify a higher PGA target than an SGA target.
A: Is your PGA_AGGREGATE_TARGET set to zero by any chance? It's unlikely that it's the HASH GROUPBY on its own that caused the issue, it's probably something before it or after it. Downgrade your OPTIMIZER_FEATURES_ENABLE to 10.1.0.4 and rerun the query - you'll see that now you'll get a SORT GROUPBY which should pretty much always be outperformed by a HASH GROUPBY, unless your PGA sizing is set to MANUAL and your hash work area is undersized.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: When would you use a WeakHashMap or a WeakReference? The use of weak references is something that I've never seen an implementation of so I'm trying to figure out what the use case for them is and how the implementation would work. When have you needed to use a WeakHashMap or WeakReference and how was it used?
A:
One problem with strong references is
caching, particular with very large
structures like images. Suppose you
have an application which has to work
with user-supplied images, like the
web site design tool I work on.
Naturally you want to cache these
images, because loading them from disk
is very expensive and you want to
avoid the possibility of having two
copies of the (potentially gigantic)
image in memory at once.
Because an image cache is supposed to
prevent us from reloading images when
we don't absolutely need to, you will
quickly realize that the cache should
always contain a reference to any
image which is already in memory. With
ordinary strong references, though,
that reference itself will force the
image to remain in memory, which
requires you to somehow determine when
the image is no longer needed in
memory and remove it from the cache,
so that it becomes eligible for
garbage collection. You are forced to
duplicate the behavior of the garbage
collector and manually determine
whether or not an object should be in
memory.
Understanding Weak References, Ethan Nicholas
A: WeakReference versus SoftReference
One distinction to be clear on is the difference between a WeakReference and a SoftReference.
Basically a WeakReference will be GC-d by the JVM eagerly, once the referenced object has no hard references to it. A SoftReferenced object on the other hand, will tend to be left about by the garbage collector until it really needs to reclaim the memory.
A cache where the values are held inside WeakReferences would be pretty useless (in a WeakHashMap, it is the keys which are weakly referenced). SoftReferences are useful to wrap the values around when you want to implement a cache which can grow and shrink with the available memory.
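A minimal sketch of that difference follows. Note that System.gc() is only a hint, so whether the references are actually cleared afterwards is JVM-dependent; the only deterministic guarantee shown here is that both references resolve while a hard reference is still held:

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class RefDemo {
    // Deterministic part: a weak reference resolves while any hard reference
    // keeps the referent strongly reachable.
    public static boolean stillReachable(WeakReference<?> ref) {
        return ref.get() != null;
    }

    public static void main(String[] args) {
        Object payload = new Object();
        WeakReference<Object> weak = new WeakReference<>(payload);
        SoftReference<Object> soft = new SoftReference<>(payload);

        System.out.println(stillReachable(weak)); // true: payload is still held

        payload = null; // drop the hard reference
        System.gc();    // only a hint -- the JVM may or may not collect now

        // After a collection a WeakReference is cleared eagerly, while a
        // SoftReference tends to survive until memory is actually needed.
        System.out.println("weak cleared: " + (weak.get() == null));
        System.out.println("soft cleared: " + (soft.get() == null));
    }
}
```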
A: This blog post demonstrates the use of both classes: Java: synchronizing on an ID. The usage goes something like this:
private static IdMutexProvider MUTEX_PROVIDER = new IdMutexProvider();
public void performTask(String resourceId) {
IdMutexProvider.Mutex mutext = MUTEX_PROVIDER.getMutex(resourceId);
synchronized (mutext) {
// look up the resource and do something with it
}
}
IdMutexProvider provides id-based objects to synchronize on. The requirements are:
*
*must return a reference to the same object for concurrent use of equivalent IDs
*must return a different object for different IDs
*no release mechanism (objects are not returned to the provider)
*must not leak (unused objects are eligible for garbage collection)
This is achieved using an internal storage map of type:
WeakHashMap<Mutex, WeakReference<Mutex>>
The object is both key and value. When nothing external to the map has a hard reference to the object, it can be garbage collected. Values in the map are stored with hard references, so the value must be wrapped in a WeakReference to prevent a memory leak. This last point is covered in the javadoc.
A: If you for example want to keep track of all objects created of a certain class. To still allow these objects to be garbage collected, you keep a list/map of weak references to the objects instead of the objects themselves.
Now if someone could explain phantom-references to me, I'd be happy...
A: One common use of WeakReferences, and WeakHashMaps in particular, is for adding properties to objects. Occasionally you want to add some functionality or data to an object, but subclassing and/or composition are not an option. In that case, the obvious thing to do is create a hashmap linking the object you want to extend to the property you want to add; then whenever you need the property, you can just look it up in the map. However, if the objects you are adding properties to tend to get created and destroyed a lot, you can end up with a lot of old objects in your map taking up a lot of memory.
If you use a WeakHashMap instead, the objects will leave your map as soon as they are no longer used by the rest of your program, which is the desired behavior.
I had to do this to add some data to java.awt.Component to get around a change in the JRE between 1.4.2 and 1.5. I could have fixed it by subclassing every component I was interested in (JButton, JFrame, JPanel....), but this was much easier with much less code.
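A hedged sketch of that pattern — attaching data to objects you cannot subclass, with entries vanishing once the key is garbage collected. The class and names are illustrative; nothing here re-creates the JRE-version workaround from the answer, it only shows the shape of the idea:

```java
import java.util.Map;
import java.util.WeakHashMap;

public class ComponentTags {
    // Keys are held weakly: once the component itself becomes unreachable,
    // its entry silently disappears from the map -- no manual cleanup needed.
    private final Map<Object, String> tags = new WeakHashMap<>();

    public void tag(Object component, String label) {
        tags.put(component, label);
    }

    public String tagOf(Object component) {
        return tags.get(component);
    }
}
```

While a strong reference to the component exists, the entry behaves like a normal map entry; when the component is collected, the property goes with it.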
A: As stated above, a weak reference stays valid only for as long as some strong reference to the object exists.
An example usage would be to use WeakReference inside listeners, so that the listeners are no longer active once the main reference to their target object is gone.
Note that this does not mean the WeakReference is removed from the listeners list, cleaning up is still required but can be performed, for example, at scheduled times.
This has also the effect of preventing the object listened to from holding strong references and eventually be a source of memory bloat.
Example: Swing GUI components refering a model having a longer lifecycle than the window.
While playing with listeners as described above we rapidly realised that objects get collected "immediately" from a user's point of view.
A: Another useful case for WeakHashMap and WeakReference is a listener registry implementation.
When you create something which wants to listen to certain events, usually you register a listener, e.g.
manager.registerListener(myListenerImpl);
If the manager stores your listener with a WeakReference, that means you don't need to remove the registration, e.g. with a manager.removeListener(myListenerImpl), because it will be removed automatically once your listener or the component holding the listener becomes unreachable.
Of course you still can manually remove your listener, but if you don't or you forget it, it will not cause a memory leak, and it will not prevent your listener being garbage collected.
Where does WeakHashMap come into the picture?
The listener registry which wishes to store registered listeners as WeakReferences needs a collection to store these references. There is no WeakHashSet implementation in the standard Java library, only a WeakHashMap, but we can easily use the latter to "implement" the functionality of the former:
Set<ListenerType> listenerSet =
Collections.newSetFromMap(new WeakHashMap<ListenerType, Boolean>());
With this listenerSet to register a new listener you just have to add it to the set, and even if it is not removed explicitly, if the listener is no longer referenced, it will be removed automatically by the JVM.
A: One real world use I had for WeakReferences is if you have a single, very large object that's rarely used. You don't want to keep it in memory when it's not needed; but, if another thread needs the same object, you don't want two of them in memory either. You can keep a weak reference to the object somewhere, and hard references in the methods that use it; when the methods both finish, the object will be collected.
A: I did a google code search for "new WeakHashMap()".
I got a bunch of matches from the GNU classpath project and
*
*Apache xbean Project : WeakHashMapEditor.java
*Apache Lucene project : CachingWrapperFilter.java
A: you can use weakhashmap to implement a resource-free caching for expansive object creation.
but note that it is not desireable to have mutable objects.
i used it to cache query results (which take about 400 ms to execute) to a text-search engine, which is rarely updated.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "167"
} |
Q: TableLayoutPanel does the TableLayoutPanel exist in VS 2005?
A: Yes.
A: Yes, it first appeared in .NET 2.0
http://msdn.microsoft.com/en-us/library/system.windows.forms.tablelayoutpanel(VS.80).aspx
A: Yes, TableLayoutPanel is a standard component of the .NET 2.0, thus it is usable from VS2k5.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How do I use ResourceBundle to avoid hardcoded config paths in Java apps? I'd like to eliminate dependencies on hardcoded paths for configuration data in my Java apps, I understand that using ResourceBundle will help me use the classloader to find resources.
Can someone tell me how I would replace a hardcoded path to a resource (say a .properties configuration data file required by a class) with appropriate use of ResourceBundle? Simple clear example if possible, thanks all.
A: You will want to examine Resource.getBundle(String). You pass it the fully qualified name of a resource on the classpath to load as a properties file.
A: Prior to Java 6, ResourceBundle typically allowed:
*
*Strings from a group of localised properties files, using PropertyResourceBundle
*Objects from a group of localised classes, using ListResourceBundle
Java 6 comes with the ResourceBundle.Control class which opens the door to other sources of ResourceBundles, for example:
*
*XML files (see example 2 in Javadoc)
*Database rows
Hope this helps.
A: You don't need a ResourceBundle. A simple Properties object can do the job. Just use the classloader to get an input stream for your properties file and use it to load the values. If you need to handle more sophisticated user configurations, use the Preferences API.
Kind Regards
A: The trick behind ResourceBundle.getBundle(..) is the use of the classloader. You can load anything that's on your classpath by accessing it via this.getClass().getClassLoader().
ResourceBundle.getBundle(..) is a practical helper for using it in the resource/localization area.
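Putting the answers together — a classloader lookup instead of a hardcoded path — might look like the sketch below. The resource name "config.properties" is a placeholder for whatever file ships on your classpath, and fromReader exists only to make the parsing easy to exercise without packaging a file:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.Reader;
import java.util.Properties;

public class Config {
    // Loads a properties resource from anywhere on the classpath, so the
    // application never hardcodes an absolute filesystem path.
    public static Properties fromClasspath(String resource) throws IOException {
        try (InputStream in = Config.class.getClassLoader().getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("not found on classpath: " + resource);
            }
            Properties props = new Properties();
            props.load(in);
            return props;
        }
    }

    // The same parsing from any Reader -- identical format, no classpath needed.
    public static Properties fromReader(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        return props;
    }
}
```

Typical use would be `Config.fromClasspath("config.properties")`, letting the deployment decide where on the classpath that file actually lives.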
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Capturing video out of an OpenGL window in Windows I am supposed to provide my users a really simple way of capturing video clips out of my OpenGL application's main window. I am thinking of adding buttons and/or keyboard shortcuts for starting and stopping the capture; when starting, I could ask for a filename and other options, if any. It has to run in Windows (XP/Vista), but I also wouldn't like to close the Linux door which I've so far been able to keep open.
The application uses OpenGL fragment and shader programs, the effects due to which I absolutely need to have in the eventual videos.
It looks to me like there might be even several different approaches that could potentially fulfill my requirements (but I don't really know where I should start):
*
*An encoding library with functions like startRecording(filename), stopRecording, and captureFrame. I could call captureFrame() after every frame rendered (or every second/third/whatever). If doing so makes my program run slower, it's not really a problem.
*A standalone external program that can be programmatically controlled from my application. After all, a standalone program that can not be controlled almost does what I need... But as said, it should be really simple for the users to operate, and I would appreciate seamlessness as well; my application typically runs full-screen. Additionally, it should be possible to distribute as part of the installation package for my application, which I currently prepare using NSIS.
*Use the Windows API to capture screenshots frame-by-frame, then employ (for example) one of the libraries mentioned here. It seems to be easy enough to find examples of how to capture screenshots in Windows; however, I would love a solution which doesn't really force me to get my hands super-dirty on the WinAPI level.
*Use OpenGL to render into an offscreen target, then use a library to produce the video. I don't know if this is even possible, and I'm afraid it might not be the path of least pain anyway. In particular, I would not like the actual rendering to take a different execution path depending on whether video is being captured or not. Additionally, I would avoid anything that might decrease the frame rate in the normal, non-capture mode.
If the solution were free in either sense of the word, then that would be great, but it's not really an absolute requirement. In general, the less bloat there is, the better. On the other hand, for reasons beyond this question, I cannot link in any GPL-only code, unfortunately.
Regarding the file format, I cannot expect my users to start googling for any codecs, but as long as also displaying the videos is easy enough for a basic-level Windows user, I don't really care what the format is. However, it would be great if it were possible to control the compression quality of the output.
Just to clarify: I don't need to capture video from an external device like camcorder, nor am I really interested in mouse movements, even though getting them does not harm either. There are no requirements regarding audio; the application makes no noise whatsoever.
I write C++ using Visual Studio 2008, for this very application also making use of GLUT and GLUI. I have a solid understanding regarding C++ and linking in libraries and that sort of stuff, but on the other hand OpenGL is quite new for me: so far, I've really only learnt the necessary bits to actually get my job done.
I don't need a solution super-urgently, so feel free to take your time :)
A: There are two different questions here - how to grab frames from an OpenGL application, and how to turn them into a movie file.
The first question is easy enough; you just grab each frame with glReadPixels() (via a PBO if you need the performance).
The second question is a little harder since the cross-platform solutions (ffmpeg) tend to be GPL'd or LGPL'd. Is LGPL acceptable for your project? The Windows way of doing this (DirectShow) is a bit of a headache to use.
Edit: Since LGPL is ok and you can use ffmpeg, see here for an example of how to encode video.
A: This does look pretty relevant for merging into an AVI (as suggested by Andrew); however, I was really hoping to avoid the LPBITMAPINFOHEADERs etc.
Thanks for the answers, I will report on the success if there is going to be any :)
In the meantime, additional tips for encoding the raw frames from glReadPixels into video clips would be appreciated.
Edit: So far, ffmpeg suggested by Mike F seems to be the way to go. However, I didn't get into the actual implementation yet, but hopefully that will change in the near future!
A: The easiest option is going to be saving each rendered frame from within your app and then merging them into an AVI. When you have the AVI there are many libraries available that can convert it into a more optimal format, or possibly skip the AVI step altogether.
In terms of getting each frame, you could accomplish this either by rendering into an offscreen texture as you suggest or using the backbuffer directly as a source if your hardware supports this. Doing either of these (and saving each frame) is going to be difficult without a heavy penalty on framerate.
Providing your application is deterministic you could "record" the users actions as a series of inputs and then have an export mode that sequentially renders these to an offscreen surface to generate the AVI.
A: I had to create a demo project of recording an OpenGL rendering into a video. I used glReadPixels to get the pixel data and created the video with OpenCV's cvWriteFrame. OpenCV lets you write DivX or even x264/VP8 (with ffmpeg compiled in).
I have a more detailed writeup on my blog post along with a sample project.
http://tommy.chheng.com/2013/09/09/encode-opengl-to-video-with-opencv/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Understanding IIS6 permissions, ACL, and identity--how can I restrict access? When an ASP.NET application is running under IIS6.0 in Windows 2003 Server with impersonation, what user account is relevant for deciding file read/write/execute access privileges? I have two scenarios where I am trying to understand what access to grant/revoke. I thought the most relevant user is probably the identity specified in the Application Pool, but that doesn't seem to be the whole story.
The first issue concerns executing a local batch file via System.Diagnostics.Process.Start()--I can't do so when the AppPool is set to the IWAM_WIN2K3WEB user, but it works fine if it is set to the Network Service identity. I of course made sure that the IWAM user has execute rights on the file.
The second involves writing to a file on the local hard drive--I'd like to be able to prevent doing so via the access control list via folder properties, but even when I set up all users in the folder as "read" (no users/groups with "write" at all), our ASP.NET application still writes out the file without a problem. How can it, if it doesn't have write access?
Google search turns up bits and pieces but never the whole story.
A:
what user account is relevant for [..] file read/write/execute access
As a rule: Always the user account the application/page runs under.
The IWAM account is pretty limited. I don't think it has permissions to start an external process. File access rights are irrelevant at this point.
If a user account (Network Service in your case) owns a file (i.e. has created it), it can do anything to this file, even if not explicitly allowed. Check who owns your file.
Process Monitor from Microsoft is a great tool to track down subtleties like this one.
A: A bit more searching reveals that the IWAM user isn't that well documented and we should stick with NETWORK SERVICE or a manually-supplied identity if we want to specify permissions for that user.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/154735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |