I need to run the command `java -jar test.jar page.xml | mysql -u user -p base` from Ant. I tried this task: ``` <java jar="test.jar" fork="true"> <arg line="page.xml | mysql -u user -p base"/> </java> ``` But I got an exception because of the pipe "|": ``` java.lang.IllegalArgumentException: Input already set; can't set to | ``` So, that's the problem :)
The pipe (|) can only be used in a shell script. You're passing it as an argument to the java process. So you need to execute a shell script. You can do this by executing (say) `bash -c` and passing the above as a shell statement (albeit *inline* - you could write a separate script file but it seems a bit of an overhead here) ``` <exec executable="bash"> <arg value="-c"/> <arg line="java -jar test.jar page.xml | mysql -u user -p base"/> </exec> ```
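The same issue arises outside Ant: a pipe is shell syntax, not something the launched program understands. As a language-neutral illustration (a Python sketch assuming a POSIX shell, with `tr` standing in for the real `mysql` consumer), the difference is whether the command line is handed to a shell:

```python
import subprocess

# Passing "|" as an argument: the child program receives it literally.
literal = subprocess.run(["echo", "hello", "|", "tr", "a-z", "A-Z"],
                         capture_output=True, text=True)
# The pipe was NOT interpreted; echo simply printed it.
assert "|" in literal.stdout

# Handing the whole line to a shell: the pipe now connects two processes.
piped = subprocess.run("echo hello | tr a-z A-Z",
                       shell=True, capture_output=True, text=True)
print(piped.stdout.strip())  # HELLO
```

This is exactly what wrapping the Ant `<java>` call in `bash -c` achieves: the shell, not the JVM, interprets the `|`.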
I don't know if this was ever resolved, but I was having a similar problem which I solved by using the following: ``` <exec executable="bash"> <arg value="-c"/> <arg line='"java -jar test.jar page.xml | mysql -u user -p base"'/> </exec> ``` Just thought I would share.
Ant run command with pipes
[ "", "java", "ant", "" ]
I'm using PHP, and I want to put a text file above (outside) the website root so users can't access it. But how can I read it from my code? I want to open it, write/edit some data, and then save it. Please give me an example. Thanks.
In the File System section of PHP's manual you will find a lot of good examples of how to do that. Check these links: * <http://php.net/manual/en/ref.filesystem.php> * <http://php.net/fopen> * <http://php.net/file_get_contents>
You just need to use the full path instead of a relative path. To get the directory directly above the document root (where the website HTML begins), do this: ``` echo dirname($_SERVER['DOCUMENT_ROOT']); ``` Then take that value and use it in your `include`/`fopen`/`fgets`/`file_get_contents` calls: ``` include(dirname($_SERVER['DOCUMENT_ROOT'])."/file.php"); ```
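The idea of resolving the parent of the document root is language-neutral; here is a small Python sketch (where `DOC_ROOT` is a stand-in for PHP's `$_SERVER['DOCUMENT_ROOT']` and a temp directory stands in for the server filesystem) showing a file being written and read one level above the root:

```python
import os
import tempfile

# Pretend this is the web server's layout: parent dir + document root.
parent = tempfile.mkdtemp()
DOC_ROOT = os.path.join(parent, "htdocs")
os.makedirs(DOC_ROOT)

# The directory directly above the document root (dirname == PHP's dirname).
above = os.path.dirname(DOC_ROOT)

# Write a file there, then read it back via its absolute path.
secret = os.path.join(above, "secret.txt")
with open(secret, "w") as f:
    f.write("not web-accessible")

with open(secret) as f:
    content = f.read()
print(content)  # not web-accessible
```

Because the file lives outside the document root, the web server never maps a URL to it, yet server-side code can still reach it by absolute path.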
PHP - editing text file above root
[ "", "php", "header", "root", "" ]
Given a generic parameter TEnum which always will be an enum type, is there any way to cast from TEnum to int without boxing/unboxing? See this example code. This will box/unbox the value unnecessarily. ``` private int Foo<TEnum>(TEnum value) where TEnum : struct // C# does not allow enum constraint { return (int) (ValueType) value; } ``` The above C# is release-mode compiled to the following IL (note boxing and unboxing opcodes): ``` .method public hidebysig instance int32 Foo<valuetype .ctor ([mscorlib]System.ValueType) TEnum>(!!TEnum 'value') cil managed { .maxstack 8 IL_0000: ldarg.1 IL_0001: box !!TEnum IL_0006: unbox.any [mscorlib]System.Int32 IL_000b: ret } ``` Enum conversion has been treated extensively on SO, but I could not find a discussion addressing this specific case.
This is similar to answers posted here, but uses expression trees to emit il to cast between types. `Expression.Convert` does the trick. The compiled delegate (caster) is cached by an inner static class. Since source object can be inferred from the argument, I guess it offers cleaner call. For e.g. a generic context: ``` static int Generic<T>(T t) { int variable = -1; // may be a type check - if(... variable = CastTo<int>.From(t); return variable; } ``` The class: ``` /// <summary> /// Class to cast to type <see cref="T"/> /// </summary> /// <typeparam name="T">Target type</typeparam> public static class CastTo<T> { /// <summary> /// Casts <see cref="S"/> to <see cref="T"/>. /// This does not cause boxing for value types. /// Useful in generic methods. /// </summary> /// <typeparam name="S">Source type to cast from. Usually a generic type.</typeparam> public static T From<S>(S s) { return Cache<S>.caster(s); } private static class Cache<S> { public static readonly Func<S, T> caster = Get(); private static Func<S, T> Get() { var p = Expression.Parameter(typeof(S)); var c = Expression.ConvertChecked(p, typeof(T)); return Expression.Lambda<Func<S, T>>(c, p).Compile(); } } } ``` --- You can replace the `caster` func with other implementations. I will compare performance of a few: ``` direct object casting, ie, (T)(object)S caster1 = (Func<T, T>)(x => x) as Func<S, T>; caster2 = Delegate.CreateDelegate(typeof(Func<S, T>), ((Func<T, T>)(x => x)).Method) as Func<S, T>; caster3 = my implementation above caster4 = EmitConverter(); static Func<S, T> EmitConverter() { var method = new DynamicMethod(string.Empty, typeof(T), new[] { typeof(S) }); var il = method.GetILGenerator(); il.Emit(OpCodes.Ldarg_0); if (typeof(S) != typeof(T)) { il.Emit(OpCodes.Conv_R8); } il.Emit(OpCodes.Ret); return (Func<S, T>)method.CreateDelegate(typeof(Func<S, T>)); } ``` **Boxed casts**: 1. 
`int` to `int` > object casting -> 42 ms > caster1 -> 102 ms > caster2 -> 102 ms > caster3 -> 90 ms > caster4 -> 101 ms 2. `int` to `int?` > object casting -> 651 ms > caster1 -> fail > caster2 -> fail > caster3 -> 109 ms > caster4 -> fail 3. `int?` to `int` > object casting -> 1957 ms > caster1 -> fail > caster2 -> fail > caster3 -> 124 ms > caster4 -> fail 4. `enum` to `int` > object casting -> 405 ms > caster1 -> fail > caster2 -> 102 ms > caster3 -> 78 ms > caster4 -> fail 5. `int` to `enum` > object casting -> 370 ms > caster1 -> fail > caster2 -> 93 ms > caster3 -> 87 ms > caster4 -> fail 6. `int?` to `enum` > object casting -> 2340 ms > caster1 -> fail > caster2 -> fail > caster3 -> 258 ms > caster4 -> fail 7. `enum?` to `int` > object casting -> 2776 ms > caster1 -> fail > caster2 -> fail > caster3 -> 131 ms > caster4 -> fail --- `Expression.Convert` puts a direct cast from source type to target type, so it can work out explicit and implicit casts (not to mention reference casts). So this gives way for handling casting which is otherwise possible only when non-boxed (ie, in a generic method if you do `(TTarget)(object)(TSource)` it will explode if it is not identity conversion (as in previous section) or reference conversion (as shown in later section)). So I will include them in tests. **Non-boxed casts:** 1. `int` to `double` > object casting -> fail > caster1 -> fail > caster2 -> fail > caster3 -> 109 ms > caster4 -> 118 ms 2. `enum` to `int?` > object casting -> fail > caster1 -> fail > caster2 -> fail > caster3 -> 93 ms > caster4 -> fail 3. `int` to `enum?` > object casting -> fail > caster1 -> fail > caster2 -> fail > caster3 -> 93 ms > caster4 -> fail 4. `enum?` to `int?` > object casting -> fail > caster1 -> fail > caster2 -> fail > caster3 -> 121 ms > caster4 -> fail 5. 
`int?` to `enum?` > object casting -> fail > caster1 -> fail > caster2 -> fail > caster3 -> 120 ms > caster4 -> fail For the fun of it, I tested a **few reference type conversions:** 1. `PrintStringProperty` to `string` (representation changing) > object casting -> fail (quite obvious, since it is not cast back to original type) > caster1 -> fail > caster2 -> fail > caster3 -> 315 ms > caster4 -> fail 2. `string` to `object` (representation preserving reference conversion) > object casting -> 78 ms > caster1 -> fail > caster2 -> fail > caster3 -> 322 ms > caster4 -> fail Tested like this: ``` static void TestMethod<T>(T t) { CastTo<int>.From(t); //computes delegate once and stored in a static variable int value = 0; var watch = Stopwatch.StartNew(); for (int i = 0; i < 10000000; i++) { value = (int)(object)t; // similarly value = CastTo<int>.From(t); // etc } watch.Stop(); Console.WriteLine(watch.Elapsed.TotalMilliseconds); } ``` --- Note: 1. My estimate is that unless you run this at least a hundred thousand times, it's not worth it, and you have almost nothing to worry about boxing. Mind you caching delegates has a hit on memory. But beyond that limit, **the speed improvement is significant, especially when it comes to casting involving nullables**. 2. But the real advantage of the `CastTo<T>` class is when it allows casts that are possible non-boxed, like `(int)double` in a generic context. As such `(int)(object)double` fails in these scenarios. 3. I have used `Expression.ConvertChecked` instead of `Expression.Convert` so that arithmetic overflows and underflows are checked (ie results in exception). Since il is generated during run time, and checked settings are a compile time thing, there is no way you can know the checked context of calling code. This is something you have to decide yourself. Choose one, or provide overload for both (better). 4. If a cast doesn't exist from `TSource` to `TTarget`, exception is thrown while the delegate is compiled. 
If you want different behaviour, say getting a default value of `TTarget`, you can check type compatibility using reflection before compiling the delegate. You have full control over the code being generated. It's going to be extremely tricky though: you have to check for reference compatibility (`IsSubClassOf`, `IsAssignableFrom`), for the existence of conversion operators (hacky), and even for built-in convertibility between primitive types. It is easier to catch the exception and return a default-value delegate based on `ConstantExpression`. I'm just noting the possibility that you can mimic the behaviour of the `as` keyword, which doesn't throw. It's better to stay away from that and stick to convention.
I know I'm way late to the party, but if you just need to do a safe cast like this, you can use `Delegate.CreateDelegate`: ``` public static int Identity(int x){return x;} // later on.. Func<int,int> identity = Identity; Delegate.CreateDelegate(typeof(Func<int,TEnum>),identity.Method) as Func<int,TEnum> ``` Now, without writing `Reflection.Emit` or expression trees, you have a method that will convert int to enum without boxing or unboxing. Note that `TEnum` here must have an underlying type of `int`, or this will throw an exception saying it cannot be bound. Edit: Another method that works too and might be a little less to write... ``` Func<TEnum,int> converter = EqualityComparer<TEnum>.Default.GetHashCode; ``` This works to convert your 32-bit **or smaller** enum from a TEnum to an int, but not the other way around. In .NET 3.5+, the `EnumEqualityComparer` is optimized to basically turn this into `return (int)value;`. You are paying the overhead of using a delegate, but it will certainly be better than boxing. This question is fairly old, but if you're still coming back here looking for a solution that works on .NET 5/.NET Core (or netfx with the unsafe package) and remains optimal... ``` [JitGeneric(typeof(StringComparison), typeof(int))] [MethodImpl(MethodImplOptions.AggressiveInlining)] public static bool TryConvert<TEnum, T>(this TEnum @enum, out T val) where TEnum : struct, Enum where T : struct, IConvertible, IFormattable, IComparable { if (Unsafe.SizeOf<T>() == Unsafe.SizeOf<TEnum>()) { val = Unsafe.As<TEnum, T>(ref @enum); return true; } val = default; return false; } ``` An example usage might be like so: ``` public static int M(MethodImplOptions flags) => flags.TryConvert(out int v) ? v : 0; ``` Here we can see on SharpLab that this method gets inlined entirely away: <https://sharplab.io/#gist:802b8d21ee1de26e791294ba48f69d97>
C# non-boxing conversion of generic enum to int?
[ "", "c#", ".net", "enums", "boxing", "" ]
I have a problem with selecting and filtering elements inside a div. HTML: ``` <div id="wrapper"> <input type="text" value="you can edit me"> <input type="button" value="click me"> </div> ``` jQuery: ``` $("#wrapper").children().click(function() { alert("hi there"); }); ``` The problem is that I get alerted every time I click anything inside the div, but my requirement is to alert only when the user clicks on the button. I know that jQuery can filter elements with `:button`. This is what I have tried: ``` $("#wrapper").children(":button").click(function() { alert("hi there"); }); ``` and ``` $("#wrapper").children().filter(":button").click(function() { alert("hi there"); }); ``` Neither worked. Does anyone know how to do this?
``` $("#wrapper input[type=button]").click(function() { alert("hi there"); }); ```
Use an id for a specific button: ``` <div id="wrapper"> <input type="text" value="you can edit me"> <input type="button" id='btnMyButton' value="click me"> <input type="button" class='btnClass' id='btnMyButton2' value="click me 2"> <input type="button" class='btnClass' id='btnMyButton3' value="click me 3"> </div> $('#btnMyButton').click(function(){ alert("hi there"); }); ``` For all buttons in the div, follow John's answer. Use a class for a group of buttons: ``` $('.btnClass').click(function(){ alert("all class"); }); ``` By the way, I like to put all my jQuery code inside the ready function: ``` $(document).ready(function(){ }); ```
jQuery selecting and filtering elements inside a div
[ "", "javascript", "jquery", "filter", "css-selectors", "" ]
I'm basically trying to set up my own private pastebin where I can save HTML files on my private server to test and fool around: some sort of textarea for the initial input, save the file, and after saving I'd like to be able to view all the files I saved. I'm trying to write this in Python, and I'm wondering what the most practical way of storing the files or the code would be. SQLite? Straight-up flat files? One other thing I'm worried about is the uniqueness of the filenames; obviously I don't want conflicts (maybe save using the title and a timestamp?). How should I structure it?
I wrote something similar a while back in Django to test jQuery snippets. See: <http://jquery.nodnod.net/> I have the code available on GitHub at <http://github.com/dz/jquerytester/tree/master> if you're curious. If you're using straight Python, there are a couple of ways to approach naming: 1. If storing as files, ask for a name, salt it with the current time, and generate a hash for the filename. 2. If using SQLite or some other database, just use a unique numerical ID. Personally, I'd go for #2. It's easy, ensures uniqueness, and allows you to easily fetch various sets of 'files'.
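Option #1 above can be sketched in a few lines. This assumes salting the user-supplied title with a timestamp and hashing the result; the timestamp is taken as a parameter here (in real use you would pass `time.time()`), so the behaviour is easy to see:

```python
import hashlib

def paste_filename(title: str, timestamp: float) -> str:
    """Salt the title with a timestamp and hash it into a filename."""
    salt = f"{title}-{timestamp}"
    return hashlib.sha1(salt.encode("utf-8")).hexdigest() + ".html"

# Same title saved at two different moments -> two distinct filenames.
a = paste_filename("my snippet", 1700000000.0)
b = paste_filename("my snippet", 1700000001.0)
print(a)
print(a != b)  # True
```

The hash sidesteps filename-collision and filesystem-unsafe-character problems at once, which is why it is a common choice for file-backed pastebins.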
Have you considered trying [lodgeit](http://dev.pocoo.org/projects/lodgeit/)? It's a free pastebin which you can host yourself. I don't know how hard it is to set up. Looking at their code, they have gone with a database for storage (SQLite will do). They have structured their paste table like this (in SQLAlchemy table-declaration style); the code is just a text field: ``` pastes = Table('pastes', metadata, Column('paste_id', Integer, primary_key=True), Column('code', Text), Column('parent_id', Integer, ForeignKey('pastes.paste_id'), nullable=True), Column('pub_date', DateTime), Column('language', String(30)), Column('user_hash', String(40), nullable=True), Column('handled', Boolean, nullable=False), Column('private_id', String(40), unique=True, nullable=True) ) ``` They have also built a hierarchy (see the self-join on `parent_id`) which is used for versioning.
Storing files for testbin/pastebin in Python
[ "", "python", "web-applications", "" ]
I know the exception is kind of pointless, but I was trying to learn how to use/create exceptions, so I used this. The only problem is that, for some reason, the error message generated by my exception is printed to the console twice. ``` import java.io.File; import java.io.FileNotFoundException; import java.io.PrintStream; import java.util.Scanner; public class Project3 { public static void main(String[] args) { try { String inputFileName = null; if (args.length > 0) inputFileName = args[0]; File inputFile = FileGetter.getFile( "Please enter the full path of the input file: ", inputFileName); String outputFileName = null; if (args.length > 1) outputFileName = args[1]; File outputFile = FileGetter.getFile( "Please enter the full path of the output file: ", outputFileName); Scanner in = new Scanner(inputFile); PrintStream out = new PrintStream(outputFile); Person person = null; // Read records from input file, get an object from the factory, // output the class to the output file. while(in.hasNext()) { String personRecord = in.nextLine(); person = PersonFactory.getPerson(personRecord); person.display(); person.output(out); } } catch (Exception e) { System.err.println(e.getMessage()); } } } import java.util.Scanner; class Student extends Person { private double gpa; public Student() { super(); gpa = 0.0; } public Student(String firstName, String lastName, double gpa) { super(firstName, lastName); this.gpa = gpa; } public String toString(){ try{ if (gpa >= 0.0 && gpa <= 4.0){ return super.toString() + "\n\tGPA: " + gpa; } else { throw new InvalidGpaException(); } } catch (InvalidGpaException e){ System.out.println(e); return super.toString() + "\n\tGPA: " + gpa; } } public void display() { System.out.println("<<Student>>" + this); } @Override public void input(Scanner in) { super.input(in); if (in.hasNextDouble()) { this.gpa = in.nextDouble(); } } class InvalidGpaException extends Exception { public InvalidGpaException() { super("Invalid GPA: " + gpa); } } } ``` This is my
console readout. I'm not sure what's causing the exception to print twice. ``` project3.Student$InvalidGpaException: Invalid GPA: -4.0 << Student>> Id: 2 Doe, Junior GPA: -4.0 project3.Student$InvalidGpaException: Invalid GPA: -4.0 ``` Edit: The main code is at the top. The input is a file designated by the user. What I've shown here is my console printout, not what is written to the output file. The output file shows the exact same thing minus the error message. The error message from the exception (which I know is not necessary) is printed only to the console. I don't see where I'm printing it twice.
My guess is that your `Person.output()` method has a call to `toString()` in it, which will print the exception before returning the proper string, which doesn't show up because you're outputting it to `out`. E: If you want my deduction, here it is: The first error message and normal message are printed out within the call to `display()`, as it should be. Immediately after that is the `output()` call, which by the name I guess is meant to do what `display()` does, except to a file. However, you forgot that the exception is printed directly to `System.out`, so it appears in the console, while the string that `toString()` actually returns is written to the file.
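The failure mode described here, a side effect inside `toString()` firing once per call, is easy to reproduce in miniature. This Python sketch is not the original program: `__str__` with a `print` stands in for the Java `toString()` that prints the exception, and `display()`/`output()` mirror the two calls in the question:

```python
import io
from contextlib import redirect_stdout

class Student:
    def __init__(self, gpa):
        self.gpa = gpa

    def __str__(self):
        # Side effect: "logs" the validation error on EVERY call.
        if not (0.0 <= self.gpa <= 4.0):
            print(f"Invalid GPA: {self.gpa}")
        return f"GPA: {self.gpa}"

    def display(self):
        print(f"<<Student>>{self}")   # first call to __str__

    def output(self, out):
        out.write(str(self))          # second call to __str__

s = Student(-4.0)
outfile = io.StringIO()               # stand-in for the output file
console = io.StringIO()
with redirect_stdout(console):
    s.display()
    s.output(outfile)

# The error message hit the console twice: once per str() conversion.
print(console.getvalue().count("Invalid GPA"))  # 2
```

The fix in either language is the same: validate once (e.g. in the constructor or setter), not inside the string-conversion method.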
What is your main? What is your input? Try changing your exception message to something different. Where are you printing this data? ``` << Student>> Id: 2 Doe, Junior GPA: -4.0 ``` Are you sure you aren't calling person.toString() twice?
Java Exception printing twice
[ "", "java", "exception", "console", "printing", "" ]
I want to write code such that, if there is a text file in a specified path and one of the users edits it, enters new text, and saves it, I can then retrieve only the text that was appended last. I have the file size both before and after the append: my text file is 1204 KB, and I need to take just the last 200 KB of text. Is that possible?
You can keep track of the file pointer. For example, in C you can go to the end of the file using fseek(fp, 0, SEEK_END) and then call ftell(fp), which gives you the current position of the file pointer. After the user edits and saves the file, when you rerun the code you can compare the new end position with the original position. If the new position is greater than the original position, seek to the original offset and read from there to get only the appended bytes.
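The same bookkeeping is easy to demonstrate end to end. A Python sketch of the answer's fseek/ftell approach: record the size before the edit, and afterwards seek to that offset and read only what was appended:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.txt")
with open(path, "w") as f:
    f.write("original contents\n")

# Remember the size BEFORE the user edits the file
# (equivalent to fseek to end + ftell in C).
offset = os.path.getsize(path)

# ... later, the user appends some text and saves ...
with open(path, "a") as f:
    f.write("newly appended line\n")

# Read only the bytes past the remembered offset.
with open(path) as f:
    f.seek(offset)
    appended = f.read()
print(appended)  # newly appended line
```

Note the caveat from the other answer still applies: this only works if you captured the size before the edit, since the file itself keeps no history.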
This can only be done if you're monitoring the file size in real time, since files do not maintain their own histories. If watching the files as they are modified is a possibility, you could use a `FileSystemWatcher` and calculate the increase in file size upon each modification. You could then read the bytes appended since the file last changed, which would be very straightforward.
Find appended text from txt file
[ "", "c#", "" ]
Does anyone know why Google Analytics requires two separate script tags? Specifically, their instructions advise users to embed the following snippet of code into a web page for tracking purposes: ``` <!-- Google Analytics --> <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> try { var pageTracker = _gat._getTracker("UA-8720817-1"); pageTracker._trackPageview(); } catch(err) {}</script> ``` Why couldn't users use only one script block like this: ``` <!-- Google Analytics --> <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); try { var pageTracker = _gat._getTracker("UA-8720817-1"); pageTracker._trackPageview(); } catch(err) {}</script> ```
`<script>` tags are executed in sequence. A `<script>` block cannot execute if the previous one isn't done executing. The first `<script>` tag is in charge of creating the Google `<script>` tag which will load the external js. After the first `<script>` is finished executing, the DOM looks like the following: ``` <script></script> <!-- First Script Tag --> <script></script> <!-- Google Injected Script --> <script></script> <!-- Second Script Tag --> ``` This guarantees that the second `<script>` tag will not execute until the `.js` is done loading. If the first and second `<script>` would be combined, this would cause the `_gat` variable to be undefined (since the Google injected script will not start loading until the first script is done executing).
`document.write` occurs as soon as it is executed in code. So if we used your "one script block" example, the actual generated source code would end up looking like this: ``` <!-- Google Analytics --> <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); try { var pageTracker = _gat._getTracker("UA-8720817-1"); pageTracker._trackPageview(); } catch(err) {}</script> <script src='http://www.google-analytics.com/ga.js' type='text/javascript'></script> ``` Hence the `var pageTracker = _gat._getTracker("UA-8720817-1"); pageTracker._trackPageview();` code would fail because `_gat` wouldn't be defined until the ga.js file is loaded. Does that make sense?
Two separate script tags for Google Analytics?
[ "", "javascript", "google-analytics", "" ]
In JavaScript, I've created an object like so: ``` var data = { 'PropertyA': 1, 'PropertyB': 2, 'PropertyC': 3 }; ``` Is it possible to add further properties to this object after its initial creation if the property's name is not determined until run time? i.e. ``` var propName = 'Property' + someUserInput //imagine someUserInput was 'Z', how can I now add a 'PropertyZ' property to //my object? ```
Yes. ``` var data = { 'PropertyA': 1, 'PropertyB': 2, 'PropertyC': 3 }; data["PropertyD"] = 4; // dialog box with 4 in it alert(data.PropertyD); alert(data["PropertyD"]); ```
ES6 for the win! ``` const b = 'B'; const c = 'C'; const data = { a: true, [b]: true, // dynamic property [`interpolated-${c}`]: true, // dynamic property + interpolation [`${b}-${c}`]: true } ``` If you log `data` you get this: ``` { a: true, B: true, interpolated-C: true, B-C: true } ``` This makes use of the new [Computed Property](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Object_initializer#New_notations_in_ECMAScript_2015) syntax and [Template Literals](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals).
Is it possible to add dynamically named properties to JavaScript object?
[ "", "javascript", "" ]
Is it possible to get request.user data in a form class? I want to clean an email address to make sure that it's unique, but if it's the current users email address then it should pass. This is what I currently have which works great for creating new users, but if I want to edit a user I run into the problem of their email not validating, because it comes up as being taken already. If I could check that it's their email using request.user.email then I would be able to solve my problem, but I'm not sure how to do that. ``` class editUserForm(forms.Form): email_address = forms.EmailField(widget=forms.TextInput(attrs={'class':'required'})) def clean_email_address(self): this_email = self.cleaned_data['email_address'] test = UserProfiles.objects.filter(email = this_email) if len(test)>0: raise ValidationError("A user with that email already exists.") else: return this_email ```
As ars and Diarmuid have pointed out, you can pass `request.user` into your form, and use it in validating the email. Diarmuid's code, however, is wrong. The code should actually read: ``` from django import forms class UserForm(forms.Form): email_address = forms.EmailField( widget=forms.TextInput( attrs={ 'class': 'required' } ) ) def __init__(self, *args, **kwargs): self.user = kwargs.pop('user', None) super(UserForm, self).__init__(*args, **kwargs) def clean_email_address(self): email = self.cleaned_data.get('email_address') if self.user and self.user.email == email: return email if UserProfile.objects.filter(email=email).count(): raise forms.ValidationError( u'That email address already exists.' ) return email ``` Then, in your view, you can use it like so: ``` def someview(request): if request.method == 'POST': form = UserForm(request.POST, user=request.user) if form.is_valid(): # Do something with the data pass else: form = UserForm(user=request.user) # Rest of your view follows ``` ~~Note that you should pass request.POST as a keyword argument, since your constructor expects 'user' as the first positional argument.~~ Doing it this way, you need to pass `user` as a keyword argument. You can either pass `request.POST` as a positional argument, or a keyword argument (via `data=request.POST`).
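The key trick in this answer, popping a custom keyword argument before delegating to the parent constructor, is independent of Django. A framework-free Python sketch of the same pattern (`BaseForm`/`UserForm` here are toy classes invented for illustration, not Django's):

```python
class BaseForm:
    def __init__(self, data=None):
        self.data = data or {}

class UserForm(BaseForm):
    def __init__(self, *args, **kwargs):
        # Pop 'user' BEFORE delegating, so the base class never
        # sees a keyword argument it doesn't understand.
        self.user = kwargs.pop("user", None)
        super().__init__(*args, **kwargs)

    def clean_email(self, taken):
        email = self.data.get("email")
        if self.user and self.user.get("email") == email:
            return email          # unchanged address: always valid
        if email in taken:
            raise ValueError("That email address already exists.")
        return email

form = UserForm(data={"email": "me@example.com"},
                user={"email": "me@example.com"})
# Editing your own profile: your existing email passes validation.
print(form.clean_email(taken={"me@example.com"}))
```

The `kwargs.pop(..., None)` default also keeps the form usable without a user, which is what makes the same class work for both create and edit views.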
Here's the way to get the user in your form when using generic views: In the view, pass the `request.user` to the form using `get_form_kwargs`: ``` class SampleView(View): def get_form_kwargs(self): kwargs = super(SampleView, self).get_form_kwargs() kwargs['user'] = self.request.user return kwargs ``` In the form you will receive the `user` with the `__init__` function: ``` class SampleForm(Form): def __init__(self, user, *args, **kwargs): super(SampleForm, self).__init__(*args, **kwargs) self.user = user ```
get request data in Django form
[ "", "python", "django", "" ]
Say I have a function like so: ``` function foo(bar) { if (bar > 1) { return [1,2,3]; } else { return 1; } } ``` And say I call `foo(1)`; how do I know whether it returned an array or not?
I use this function: ``` function isArray(obj) { return Object.prototype.toString.call(obj) === '[object Array]'; } ``` This is the way that [jQuery.isArray](http://docs.jquery.com/Utilities/jQuery.isArray) is implemented. Check this article: * [isArray: Why is it so bloody hard to get right?](http://ajaxian.com/archives/isarray-why-is-it-so-bloody-hard-to-get-right)
``` if(foo(1) instanceof Array) // You have an Array else // You don't ``` **Update:** I have to respond to the comments made below, because people are still claiming that this won't work without trying it for themselves... For some other objects this technique does not work (e.g. "" instanceof String == false), but this works for Array. I tested it in IE6, IE8, FF, Chrome and Safari. **Try it and see for yourself before commenting below.**
What is the best way to check if an object is an array or not in Javascript?
[ "", "javascript", "arrays", "instanceof", "typeof", "" ]
During the last 10 minutes of Ander's talk [The Future of C#](http://channel9.msdn.com/pdc2008/tl16/) he demonstrates a really cool C# Read-Eval-Print loop which would be a tremendous help in learning the language. Several .NET4 related downloads are already available: [Visual Studio 2010 and .NET Framework 4.0 CTP](http://www.microsoft.com/downloads/details.aspx?FamilyId=922B4655-93D0-4476-BDA4-94CF5F8D4814&displaylang=en), [Visual Studio 2010 and .NET Framework 4 Training Kit](http://www.microsoft.com/downloads/details.aspx?FamilyId=922B4655-93D0-4476-BDA4-94CF5F8D4814&displaylang=en). Do you know what happened to this REPL? Is it somewhere hidden among examples? *I know about mono repl. Please, no alternative solutions.*
The REPL demo was part of "what might happen next", i.e. *after* 4.0; in .NET 5.0 or something similar. This is **not** 4.0 functionality, and never has been.
It's probably worth mentioning that the Mono project already **does** have a C# REPL, which I tend to use for those small checks you do now and then. [Take a look.](http://www.mono-project.com/CsharpRepl/) Also, if I'm testing an idea which I'm not confident Mono will handle well, and it's not worth starting a new test project, then [Snippet Compiler](http://www.sliver.com/dotnet/SnippetCompiler/) always comes in handy.
Anders Hejlsberg's C# 4.0 REPL
[ "", "c#", "c#-4.0", "read-eval-print-loop", "" ]
Recently I came across the following quiz. Imagine we have this table: ``` +--------+ | colors | +--------+ | red | | black | | white | | green | | orange | +--------+ ``` The task is to write a SQL query that will select all pairs without duplicates. The two orderings of a pair count as the same pair ({red, black} = {black, red}), so only one of them is allowed.
Try this ``` Select A.Color, B.Color From Colors A Cross Join Colors B Where A.Color > B.Color ```
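The `A.Color > B.Color` trick can be verified with an in-memory SQLite database (Python's sqlite3 module is used here purely as a test harness for the query):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE colors (color TEXT)")
con.executemany("INSERT INTO colors VALUES (?)",
                [("red",), ("black",), ("white",), ("green",), ("orange",)])

# Self-join; the strict inequality keeps exactly one ordering per pair
# and also excludes the (x, x) diagonal.
pairs = con.execute("""
    SELECT a.color, b.color
    FROM colors a
    JOIN colors b ON a.color > b.color
    ORDER BY a.color, b.color
""").fetchall()

print(len(pairs))  # 10, i.e. C(5,2) unordered pairs
```

With `>=` you would get the diagonal too, and with `<>` you would get each pair twice; the strict `>` is what makes both problems disappear at once.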
``` Select A.Colors, B.Colors From Colors A Cross Join Colors B Where A.Colors < B.Colors or B.colors < A.colors ```
Interesting SQL quiz
[ "", "sql", "" ]
I have this code: ``` Thread t = new Thread(() => UpdateImage(origin)); t.Name = "UpdateImageThread"; t.Start(); ``` This code lives in a custom control. I want to stop this thread (if it's running) when the object is about to be disposed. The custom control has the following method: ``` void IDisposable.Dispose() { /* My own code */ base.Dispose(true); } ``` I think this is the place to put the code, but: How can I know whether the thread is running? How can I get a reference to the thread and stop it? By the way, UpdateImage calls a web service, so it may spend most of its life waiting. How can I end that wait? Thank you!
It depends a lot on what `UpdateImage()` does and how well it copes with the image being disposed while it is still active. If `UpdateImage()` is your code and contains a loop, you can tell it to stop (using a field like `_stopping`). If not, the best thing may be to do nothing: in the rare case of disposing the control while the image is still updating, you take the penalty of leaving it to the GC. As for how to get the thread: save the reference when and where you create it, for instance in a private member `_updateThread`. Now, actually stopping (aborting) the thread is a (very) bad idea, so you'll need an indicator, like ``` private bool _stopping = false; ``` And it is up to the `UpdateImage()` method to react to `_stopping == true` and stop what it is doing. Your `Dispose()` can then use ``` _stopping = true; _updateThread.Join(); ```
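The stop-flag plus `Join()` pattern translates directly to other runtimes. A Python sketch, where `threading.Event` stands in for the `_stopping` boolean field (and sidesteps the memory-visibility questions a plain flag raises):

```python
import threading
import time

stopping = threading.Event()
updates = []

def update_image():
    # Cooperative loop: check the flag between units of work.
    while not stopping.is_set():
        updates.append("tick")   # stand-in for one update step
        time.sleep(0.01)

t = threading.Thread(target=update_image, name="UpdateImageThread")
t.start()
time.sleep(0.05)

# "Dispose": signal the worker, then wait for it to exit.
stopping.set()
t.join(timeout=1.0)
print(t.is_alive())  # False: the thread stopped cleanly
```

The essential property is the same as in the C# answer: the worker decides when it is safe to stop, and the disposer only signals and waits, never kills.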
Save your thread variable 't' so that you can re-use it later. Within your Dispose method you want something like: ``` void IDisposable.Dispose() { if(t.IsRunning) { cancelThreads = true; // Set some cancel signal that the thread should check to determine the end t.Join(500); // wait for the thread to tidy itself up t.Abort(); // abort the thread if its not finished } base.Dispose(true); } ``` You should be careful aborting threads though, ensure that you place critical section of code within regions that won't allow the thread to stop before it has finished, and catch ThreadAbortExceptions to tidy anything up if it is aborted. You can do something like this in the threads start method ``` public void DoWork() { try { while(!cancelThreads) { // Do general work Thread.BeginCriticalRegion(); // Do Important Work Thread.EndCriticalRegion(); } } catch(ThreadAbortException) { // Tidy any nastiness that occured killing thread early } } ```
Compact Framework 2.0: How can I stop a thread when an object is disposed?
[ "", "c#", "multithreading", "compact-framework", "" ]
I'd like to make use of the Vista+ feature of [I/O prioritization](https://stackoverflow.com/questions/301290/how-can-i-o-priority-of-a-process-be-increased). Is there a platform-independent way of setting I/O priority on an operation in Java (e.g. a library, in Java 7), or should I revert to a sleeping-filter or JNx solution? Do other platforms have a similar feature?
This is the kind of thing that is difficult for Java to support because it depends heavily on the capabilities of the underlying operating system. Java tries very hard to offer APIs that work the same across multiple platforms. (It doesn't always succeed, but that's a different topic.) In this case, a Java API would need to be implementable across multiple versions of Windows, multiple versions of Linux, Solaris, and various other third party platforms. Coming up with a platform independent model of IO prioritization that can be mapped to the functionality of the range of OS platforms would be hard. For now, I suggest that you look for a platform specific solution that goes outside of Java to make the necessary tuning adjustments; e.g. use Process et al to run an external command, or do the work in a wrapper script before starting your JVM.
If you really need to use this feature and you really want to do this in Java, you can always use [Java JNI](http://java.sun.com/docs/books/jni/) to hook the JVM into your own, custom C/C++ implementation of an I/O handler. It allows you to write native (OS specific) code and call it from a Java application.
I/O prioritization in Java
[ "", "java", "io", "" ]
I have an idea of reading values from config files instead of using hard-coded values, but I'm not sure whether it is good practice. First I created a utility class: ``` public class ConfigValues { public static int Read(string key, int defaultValue){....} public static string Read(string key, string defaultValue){....} public static bool Read(string key, bool defaultValue){....} ... } ``` The Read function tries to read the value for the given key. If the key does not exist or the value has a bad format, it returns the default value. And I'm going to use this class like: ``` public class MyClass { private int _age = ConfigValues.Read("Com.MyClass.Age", 0); ... } ``` This way, we can make almost all variables in the application customizable. Is this good practice? Comments welcome.
People who think you should make things configurable: * Some of the other answers * <http://www.ebizq.net/blogs/decision_management/2007/04/dont_softcode_use_business_rul.php> * Many good software development theories (I don't have links handy). People who think differently: * <http://ayende.com/Blog/archive/2008/08/21/Enabling-change-by-hard-coding-everything-the-smart-way.aspx> (And the [rest of his entries](http://www.google.com/search?q=site:ayende.com+JFHCI)) * <http://thedailywtf.com/Articles/Soft_Coding.aspx> * <http://benbro.com/blog/on-configuration/> * <http://jeffreypalermo.com/blog/hardcoding-considered-harmful-or-is-it/> The answer comes down to your requirements: why are you setting this value here? * Is it something that different users will want set differently? => config file. * Is it just a default value to be changed later? => Hardcode it. * Is it something which affects operational use of the program (i.e. default homepage for browser)? => Config file. * Is it something which might have complex impacts on various parts of the program? ... Answer depends on your userbase. Etc. It's not a simple yes-it's-good or no-it's-bad answer.
Configuration files are always a good idea. Think of the **`INI`** files, for example. It would be immensely useful to introduce a version numbering scheme in your config files. So you know what values to expect in a file and when to look for defaults when these are not around. You might have hardcoded defaults to be used when the configurations are missing from the config file. This gives you flexibility and fallback. Also decide if you will be updating the file from your application. If so, you need to be sure it can manage the format of the file. You might want to restrict the format beforehand to make life simpler. You could have **CSV** files or "`name=value`" **INI** style files. Keep it simple for your code and the user who will edit them.
Is it good practice to read config values rather than using magic numbers?
[ "", "c#", "" ]
I am using HTML Purifier to protect my application from XSS attacks. Currently I am purifying content from WYSIWYG editors because that is the only place where users are allowed to use XHTML markup. My question is, should I use HTML Purifier also on username and password in a login authentication system (or on input fields of sign up page such as email, name, address etc)? Is there a chance of XSS attack there?
You should Purify anything that will ever possibly be displayed on a page. Because with XSS attacks, hackers put in `<script>` tags or other malicious tags that can link to other sites. Passwords and emails should be fine. Passwords should never be shown and emails should have their own validator to make sure that they are in the proper format. Finally, always remember to call htmlentities() on content. Oh, and look at [filter\_var](https://www.php.net/filter) as well. A very nice way of filtering variables.
XSS risks exist wherever data entered by one user *may* be viewed by other users. Even if this data isn't currently viewable, don't assume that a need to do this won't arise. As far as the username and password go, you should never display a password, or even store it in a form that can be displayed (i.e. hash it with `sha1()`). For usernames, have a restriction on legal characters like `[A-Za-z0-9_]`. Finally, as the other answer suggests, use your language's HTML entity encoding function for any entered data that may contain reserved or special HTML characters, which prevents this data from causing syntax errors when displayed.
HTML Purifier - what to purify?
[ "", "php", "html", "xhtml", "xss", "htmlpurifier", "" ]
We're often told we should protect encapsulation by making getter and setter methods (properties in C#) for class fields, instead of exposing the fields to the outside world. But there are many times when a field is just there to hold a value and doesn't require any computation to get or set. For these we would all do this number: ``` public class Book { private string _title; public string Title { get => _title; set => _title = value; } } ``` Well, I have a confession, I couldn't bear writing all that (really, it wasn't having to write it, it was having to look at it), so I went rogue and used public fields. Then along comes C# 3.0 and I see they added automatic properties: ``` public class Book { public string Title { get; set; } } ``` Which is tidier, and I'm thankful for it, but really, what's so different than just making a public field? ``` public class Book { public string Title; } ```
In a [related question](https://stackoverflow.com/questions/9304/) I had some time ago, there was a link to a posting on Jeff's blog, explaining some differences. [Properties vs. Public Variables](http://blog.codinghorror.com/properties-vs-public-variables/) * Reflection works differently on variables vs. properties, so if you rely on reflection, it's easier to use all properties. * You can't databind against a variable. * Changing a variable to a property is a breaking change. For example: ``` TryGetTitle(out book.Title); // requires a variable ```
Ignoring the API issues, the thing I find most valuable about using a property is debugging. The CLR debugger does not support data break points (most native debuggers do). Hence it's not possible to set a break point on the read or write of a particular field on a class. This is very limiting in certain debugging scenarios. Because properties are implemented as very thin methods, it is possible to set breakpoints on the read and write of their values. This gives them a big leg up over fields.
Public Fields versus Automatic Properties
[ "", "c#", "class", "properties", "field", "automatic-properties", "" ]
I'm trying to extract a bunch of lines from a CSV file and write them into another, but I'm having some problems. ``` import csv f = open("my_csv_file.csv", "r") r = csv.DictReader(f, delimiter=',') fieldnames = r.fieldnames target = open("united.csv", 'w') w = csv.DictWriter(target, fieldnames=fieldnames) while True: try: row = r.next() if r.line_num <= 2: #first two rows don't matter continue else: w.writerow(row) except StopIteration: break f.close() target.close() ``` Running this, I get the following error: ``` Traceback (most recent call last): File "unify.py", line 16, in <module> w.writerow(row) File "C:\Program Files\Python25\lib\csv.py", line 12 return self.writer.writerow(self._dict_to_list(row File "C:\Program Files\Python25\lib\csv.py", line 12 if k not in self.fieldnames: TypeError: argument of type 'NoneType' is not iterable ``` Not entirely sure what I'm doing wrong.
I don't know either, but since all you're doing is copying lines from one file to another why are you bothering with the `csv` stuff at all? Why not something like: ``` f = open("my_csv_file.csv", "r") target = open("united.csv", 'w') f.readline() f.readline() for line in f: target.write(line) ```
To clear up the confusion about the error: you get it because `r.fieldnames` is only set once you read from the input file for the first time using `r`. Hence the way you wrote it, `fieldnames` will always be initialized to `None`. You may initialize `w = csv.DictWriter(united, fieldnames=fieldnames)` with `r.fieldnames` only after you read the first line from `r`, which means you would have to restructure your code. This behavior is documented in the [Python Standard Library documentation](http://docs.python.org/library/csv.html#csv.csvreader.fieldnames) > DictReader objects have the following public attribute: > > csvreader.fieldnames > > If not passed as a parameter when creating the object, this attribute is initialized upon first access or when the first record is read from the file.
Python CSV DictReader/Writer issues
[ "", "python", "csv", "" ]
I can't seem to find the answer to this question. It seems like I should be able to go from a number to a character in C# by simply doing something along the lines of (char)MyInt to duplicate the behaviour of vb's Chr() function; however, this is not the case: In VB Script w/ an asp page, if my code says this: ``` Response.Write(Chr(139)) ``` It outputs this: ``` ‹ (character code 8249) ``` As opposed to this: > (character code 139) I'm missing something somewhere with the encoding, but I can't find it. What encoding is Chr() using?
`Chr()` uses the system default encoding, I believe - so it's *roughly* equivalent to: ``` byte[] bytes = new byte[] { 139 }; char c = Encoding.Default.GetString(bytes)[0]; ``` On my box (Windows CP1252 as the default) that does indeed give Unicode 8249.
If you want to call something that has exactly the behaviour of VB's Chr from C#, then, why not simply call it rather than trying to deduce its behaviour? Just put a "using Microsoft.VisualBasic;" at the top of your C# program, add the VB runtime DLL to your references, and go to town.
Encoding issue: vbscript "Chr()" to .Net C#
[ "", "c#", ".net", "encoding", "vbscript", "" ]
I'm looking to write a Truth Table Generator as a personal project. There are several web-based online ones [here](http://www.brian-borowski.com/Software/Truth/) and [here](http://www-cs-students.stanford.edu/~silver/truth/). ![alt text](https://i.stack.imgur.com/Tv3nj.png) *(Example screenshot of an existing Truth Table Generator)* I have the following questions: * How should I go about parsing expressions like: *((P => Q) & (Q => R)) => (P => R)* * Should I use a parser generator like ANTLr or YACC, or use straight regular expressions? * Once I have the expression parsed, how should I go about generating the truth table? Each section of the expression needs to be divided up into its smallest components and re-built from the left side of the table to the right. How would I evaluate something like that? Can anyone provide me with tips concerning the parsing of these arbitrary expressions and eventually evaluating the parsed expression?
This sounds like a great personal project. You'll learn a lot about how the basic parts of a compiler work. I would skip trying to use a parser generator; if this is for your own edification, you'll learn more by doing it all from scratch. The way such systems work is a formalization of how we understand natural languages. If I give you a sentence: "The dog, Rover, ate his food.", the first thing you do is break it up into words and punctuation. "The", "SPACE", "dog", "COMMA", "SPACE", "Rover", ... That's "tokenizing" or "lexing". The next thing you do is analyze the token stream to see if the sentence is grammatical. The grammar of English is extremely complicated, but this sentence is pretty straightforward. SUBJECT-APPOSITIVE-VERB-OBJECT. This is "parsing". Once you know that the sentence is grammatical, you can then analyze the sentence to actually get meaning out of it. For instance, you can see that there are three parts of this sentence -- the subject, the appositive, and the "his" in the object -- that all refer to the same entity, namely, the dog. You can figure out that the dog is the thing doing the eating, and the food is the thing being eaten. This is the semantic analysis phase. Compilers then have a fourth phase that humans do not, which is they generate code that represents the actions described in the language. So, do all that. Start by defining what the tokens of your language are, define a base class Token and a bunch of derived classes for each. (IdentifierToken, OrToken, AndToken, ImpliesToken, RightParenToken...). Then write a method that takes a string and returns an `IEnumerable<Token>`. That's your lexer. Second, figure out what the grammar of your language is, and write a recursive descent parser that breaks up an `IEnumerable<Token>` into an abstract syntax tree that represents grammatical entities in your language. Then write an analyzer that looks at that tree and figures stuff out, like "how many distinct free variables do I have?" Then write a code generator that spits out the code necessary to evaluate the truth tables. Spitting IL seems like overkill, but if you wanted to be really buff, you could. It might be easier to let the expression tree library do that for you; you can transform your parse tree into an expression tree, and then turn the expression tree into a delegate, and evaluate the delegate. Good luck!
I think a parser generator is an overkill. You could use the idea of converting an expression to postfix and [evaluating postfix expressions](https://stackoverflow.com/questions/423898/postfix-notation-to-expression-tree) (or directly building an expression tree out of the infix expression and using that to generate the truth table) to solve this problem.
How can I build a Truth Table Generator?
[ "", "c#", "parsing", "boolean-logic", "parser-generator", "truthtable", "" ]
Is there a best-practice for scalable http session management? Problem space: * Shopping cart kind of use case. User shops around the site, eventually checking out; session must be preserved. * Multiple data centers * Multiple web servers in each data center * Java, linux I know there are tons of ways of doing that, and I can always come up with my own specific solution, but I was wondering whether stackoverflow's wisdom of crowds can help me focus on best practices. In general there seem to be a few approaches: * Don't keep sessions; Always run stateless, religiously [doesn't work for me...] * Use j2ee, ejb and the rest of that gang * use a database to store sessions. I suppose there are tools to make that easier so I don't have to craft it all by myself * Use memcached for storing sessions (or other kind of intermediate, semi persistent storage) * Use key-value DB. "more persistent" than memcached * Use "client side sessions", meaning all session info lives in hidden form fields, and is passed forward and backward from client to server. Nothing is stored on the server. Any suggestions? Thanks
I would go with some standard distributed cache solution. It could be provided by your application server, it could be memcached, or it could be [terracotta](http://www.terracotta.org/) Probably doesn't matter too much which one you choose, as long as you are using something sufficiently popular (so you know most of the bugs are already hunted down). As for your other ideas: * Don't keep session - as you said, not possible * Client Side Session - too insecure - suppose someone hacks the cookie to put discount prices in the shopping cart * Use database - databases are usually the hardest bottleneck to solve, don't put any more there than you absolutely have to. Those are my 2 cents :) Regarding multiple data centers - you will want to have some affinity of the session to the data center it started on. I don't think there are any solutions for distributed cache that can work between different data centers.
You seem to have missed out vanilla replicated http sessions from your list. Any servlet container worth its salt supports replication of sessions across the cluster. As long as the items you put into the session aren't huge, and are serializable, then it's very easy to make it work. <http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html> edit: It seems, however, that tomcat session replication doesn't scale well to large clusters. For that, I would suggest using JBoss+Tomcat, which gives the idea of "buddy replication": <http://www.jboss.org/community/wiki/BuddyReplicationandSessionData>
Scalable http session management (java, linux)
[ "", "java", "linux", "http", "session", "scalability", "" ]
I want to tunnel through an HTTP request from my server to a remote server, passing through all the cookies. So I create a new `HttpWebRequest` object and want to set cookies on it. `HttpWebRequest.CookieContainer` is type `System.Net.CookieContainer` which holds `System.Net.Cookies`. On my incoming request object: `HttpRequest.Cookies` is type `System.Web.HttpCookieCollection` which holds `System.Web.HttpCookies`. Basically I want to be able to assign them to each other, but the differing types makes it impossible. Do I have to convert them by copying their values, or is there a better way?
Here's the code I've used to transfer the cookie objects from the incoming request to the new HttpWebRequest... ("myRequest" is the name of my HttpWebRequest object.) ``` HttpCookieCollection oCookies = Request.Cookies; for ( int j = 0; j < oCookies.Count; j++ ) { HttpCookie oCookie = oCookies.Get( j ); Cookie oC = new Cookie(); // Convert between the System.Net.Cookie to a System.Web.HttpCookie... oC.Domain = myRequest.RequestUri.Host; oC.Expires = oCookie.Expires; oC.Name = oCookie.Name; oC.Path = oCookie.Path; oC.Secure = oCookie.Secure; oC.Value = oCookie.Value; myRequest.CookieContainer.Add( oC ); } ```
I had a need to do this today for a SharePoint site which uses Forms Based Authentication (FBA). If you try and call an application page without cloning the cookies and assigning a CookieContainer object then the request will fail. I chose to abstract the job to this handy extension method: ``` public static CookieContainer GetCookieContainer(this System.Web.HttpRequest SourceHttpRequest, System.Net.HttpWebRequest TargetHttpWebRequest) { System.Web.HttpCookieCollection sourceCookies = SourceHttpRequest.Cookies; if (sourceCookies.Count == 0) return null; else { CookieContainer cookieContainer = new CookieContainer(); for (int i = 0; i < sourceCookies.Count; i++) { System.Web.HttpCookie cSource = sourceCookies[i]; Cookie cookieTarget = new Cookie() { Domain = TargetHttpWebRequest.RequestUri.Host, Name = cSource.Name, Path = cSource.Path, Secure = cSource.Secure, Value = cSource.Value }; cookieContainer.Add(cookieTarget); } return cookieContainer; } } ``` You can then just call it from any HttpRequest object with a target HttpWebRequest object as a parameter, for example: ``` HttpWebRequest request; request = (HttpWebRequest)WebRequest.Create(TargetUrl); request.Method = "GET"; request.Credentials = CredentialCache.DefaultCredentials; request.CookieContainer = SourceRequest.GetCookieContainer(request); request.BeginGetResponse(null, null); ``` where TargetUrl is the Url of the page I am after and SourceRequest is the HttpRequest of the page I am on currently, retrieved via Page.Request.
Sending cookies using HttpCookieCollection and CookieContainer
[ "", "c#", ".net", "cookies", "cookiecontainer", "" ]
I've read about this issue on MSDN and in CLR via C#. Imagine we have a 2MB unmanaged HBITMAP allocated and an 8-byte managed bitmap object pointing to it. What's the point of telling the GC about it with AddMemoryPressure if it is never going to be able to do anything about the object, as it is allocated as an unmanaged resource and thus not susceptible to garbage collection?
The point of AddMemoryPressure is to tell the garbage collector that there's a large amount of memory allocated with that object. If it's unmanaged, the garbage collector doesn't know about it; only the managed portion. Since the managed portion is relatively small, the GC may let it pass for garbage collection several times, essentially wasting memory that might need to be freed. Yes, you still have to manually allocate and deallocate the unmanaged memory. You can't get away from that. You just use AddMemoryPressure to ensure that the GC knows it's there. **Edit:** > *Well, in case one, I could do it, but it'd make no big difference, as the GC wouldn't be able to do a thing about my type, if I understand this correctly: 1) I'd declare my variable, 8 managed bytes, 2mb unmanaged bytes. 2) I'd then use it, call dispose, so unmanaged memory is freed. Right now it will only occupy 8 bytes. Now, to my eyes, having called AddMemoryPressure in the beginning and RemoveMemoryPressure at the end wouldn't have made any difference. What am I getting wrong? Sorry for being so annoying about this.* -- Jorge Branco I think I see your issue. Yes, if you can guarantee that you always call `Dispose`, then yes, you don't need to bother with AddMemoryPressure and RemoveMemoryPressure. There is no equivalence, since the reference still exists and the type would never be collected. That said, you still want to use AddMemoryPressure and RemoveMemoryPressure, for completeness' sake. What if, for example, the user of your class forgot to call Dispose? In that case, assuming you implemented the Disposal pattern properly, you'll end up reclaiming your unmanaged bytes at finalization, i.e. when the managed object is collected. In that case, you want the memory pressure to still be active, so that the object is more likely to be reclaimed.
It is provided so that the GC knows the true cost of the object during collection. If the object is actually bigger than the managed size reflects, it may be a candidate for quick(er) collection. Brad Abrams [entry](http://blogs.msdn.com/brada/archive/2003/12/12/50948.aspx) about it is pretty clear: > Consider a class that has a very small > managed instance size but holds a > pointer to a very large chunk of > unmanaged memory. Even after no one > is referencing the managed instance it > could stay alive for a while because > the GC sees only the managed instance > size it does not think it is “worth > it” to free the instance. So we need > to “teach” the GC about the true cost > of this instance so that it will > accurately know when to kick of a > collection to free up more memory in > the process.
What is the point of using GC.AddMemoryPressure with an unmanaged resource?
[ "", "c#", ".net", "vb.net", "garbage-collection", "" ]
I've noticed with my source control that the content of the output files generated with ConfigParser is never in the same order. Sometimes sections will change place or options inside sections even without any modifications to the values. Is there a way to keep things sorted in the configuration file so that I don't have to commit trivial changes every time I launch my application?
Looks like this was fixed in [Python 3.1](http://docs.python.org/dev/py3k/whatsnew/3.1.html) and 2.7 with the introduction of ordered dictionaries: > The standard library now supports use > of ordered dictionaries in several > modules. The configparser module uses > them by default. This lets > configuration files be read, modified, > and then written back in their > original order.
If you want to take it a step further than Alexander Ljungberg's answer and also sort the sections and the contents of the sections you can use the following: ``` import sys import collections import ConfigParser config = ConfigParser.ConfigParser({}, collections.OrderedDict) config.read('testfile.ini') # Order the content of each section alphabetically for section in config._sections: config._sections[section] = collections.OrderedDict(sorted(config._sections[section].items(), key=lambda t: t[0])) # Order all sections alphabetically config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] )) # Write ini file to standard output config.write(sys.stdout) ``` This uses OrderedDict dictionaries (to keep ordering) and sorts the read ini file from outside ConfigParser by overwriting the internal \_sections dictionary.
Keep ConfigParser output files sorted
[ "", "python", "configuration", "configparser", "" ]
How can I find the order of nodes in an XML document? What I have is a document like this: ``` <value code="1"> <value code="11"> <value code="111"/> </value> <value code="12"> <value code="121"> <value code="1211"/> <value code="1212"/> </value> </value> </value> ``` and I'm trying to get this thing into a table defined like ``` CREATE TABLE values( code int, parent_code int, ord int ) ``` Preserving the order of the values from the XML document (they can't be ordered by their code). I want to be able to say ``` SELECT code FROM values WHERE parent_code = 121 ORDER BY ord ``` and the results should, deterministically, be ``` code 1211 1212 ``` I have tried ``` SELECT value.value('@code', 'varchar(20)') code, value.value('../@code', 'varchar(20)') parent, value.value('position()', 'int') FROM @xml.nodes('/root//value') n(value) ORDER BY code desc ``` But it doesn't accept the `position()` function ('`position()`' can only be used within a predicate or XPath selector). I guess it's possible some way, but how?
You can emulate the `position()` function by counting the number of sibling nodes preceding each node: ``` SELECT code = value.value('@code', 'int'), parent_code = value.value('../@code', 'int'), ord = value.value('for $i in . return count(../*[. << $i]) + 1', 'int') FROM @Xml.nodes('//value') AS T(value) ``` Here is the result set: ``` code parent_code ord ---- ----------- --- 1 NULL 1 11 1 1 111 11 1 12 1 2 121 12 1 1211 121 1 1212 121 2 ``` **How it works:** * The [`for $i in .`](http://msdn.microsoft.com/en-us/library/ms190945.aspx) clause defines a variable named `$i` that contains the current node (`.`). This is basically a hack to work around XQuery's lack of an XSLT-like `current()` function. * The `../*` expression selects all siblings (children of the parent) of the current node. * The `[. << $i]` predicate filters the list of siblings to those that precede ([`<<`](http://msdn.microsoft.com/en-us/library/ms190935.aspx)) the current node (`$i`). * We `count()` the number of preceding siblings and then add 1 to get the position. That way the first node (which has no preceding siblings) is assigned a position of 1.
SQL Server's `row_number()` actually accepts an xml-nodes column to order by. Combined with a [recursive CTE](https://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx) you can do this: ``` declare @Xml xml = '<value code="1"> <value code="11"> <value code="111"/> </value> <value code="12"> <value code="121"> <value code="1211"/> <value code="1212"/> </value> </value> </value>' ;with recur as ( select ordr = row_number() over(order by x.ml), parent_code = cast('' as varchar(255)), code = x.ml.value('@code', 'varchar(255)'), children = x.ml.query('./value') from @Xml.nodes('value') x(ml) union all select ordr = row_number() over(order by x.ml), parent_code = recur.code, code = x.ml.value('@code', 'varchar(255)'), children = x.ml.query('./value') from recur cross apply recur.children.nodes('value') x(ml) ) select * from recur where parent_code = '121' order by ordr ``` *As an aside, you can do this and it'll do what do you expect:* ``` select x.ml.query('.') from @Xml.nodes('value/value')x(ml) order by row_number() over (order by x.ml) ``` Why, if this works, you can't just `order by x.ml` directly without `row_number() over` is beyond me.
Finding node order in XML document in SQL Server
[ "", "sql", "sql-server", "xml", "xquery", "" ]
How can I detect server-side (C#, ASP.NET MVC) if the loaded page is within an iframe? Thanks
This is not possible directly. However, you can do: ``` <iframe src="mypage?iframe=yes"></iframe> ``` and then check server-side whether the query string contains iframe=yes, or use the Referer header sent by the browser.
Use the following code inside the form: ``` <asp:HiddenField ID="hfIsInIframe" runat="server" /> <script type="text/javascript"> var isInIFrame = (self != top); $('#<%= hfIsInIframe.ClientID %>').val(isInIFrame); </script> ``` Then you can easily check whether it's in an iframe in the code-behind: ``` bool bIsInIFrame = (hfIsInIframe.Value == "true"); ``` Tested and worked for me. Edit: Please note that you require jQuery to run my code above. To run it without jQuery just use some code like the following (untested) code to set the value of the hidden field: ``` document.getElementById('<%= hfIsInIframe.ClientID %>').value = isInIFrame; ``` Edit 2: This only works once the page has posted back at least once. If someone has ideas to improve this, let me know. In my case I luckily only need the value after a postback.
Detect if a page is within an iframe - server-side
[ "", "c#", "asp.net", "html", "asp.net-mvc", "" ]
I have a table like this ``` <tr> <td>No.</td> <td>Username</td> <td>Password</td> <td>Valid Until</td> <td>Delete</td> <td>Edit</td> </tr> <tr> <td>1</td> <td id="1"> <div class="1u" style="display: none;">Username</div> <input type="text" class="inputTxt" value="Username" style="display: block;"/> </td> <td id="1"> <div class="1p" style="display: none;">Password</div> <input type="text" class="inputTxt" value="Password" style="display: block;"/></td> <td>18 Jul 09</td> <td><button value="1" class="deleteThis">x</button></td> <td class="editRow">Edit</td> </tr> ``` When edit is clicked I run this function ``` $('.editRow').click(function() { var row = $(this).parent('tr'); row.find('.1u').slideUp('fast'); row.find('.1p').slideUp('fast'); row.find('.inputTxt').slideDown('fast'); }); ``` This replaces the text with an input field, so what I want is to change it back to text when somewhere else is clicked instead of save. How can I do this, and can anyone suggest improvements to my function `$('.editRow').click`? **//////////// Edited //////////** ``` $('.editRow').click(function() { var row = $(this).parent('tr'); row.find('.1u').slideUp('fast'); row.find('.1p').slideUp('fast'); row.find('.inputTxt').slideDown('fast'); }).blur(function() { row.find('.inputTxt').slideUp('fast'); row.find('.1u').slideDown('fast'); row.find('.1p').slideDown('fast'); }); ``` I am using this but the input fields are not changing back to text. Thank you.
This blur function was not working, so I just added a cancel button to do the job.
You could just handle the blur event just like you did for the click. ``` $('.editRow').click(function() { var row = $(this).parent('tr'); row.find('.1u').slideUp('fast'); row.find('.1p').slideUp('fast'); row.find('.inputTxt').slideDown('fast'); }).blur(function(){ do something else}); ``` hope this helps **UPDATE** ``` $('.editRow').click(function() { var row = $(this).parent('tr'); row.find('.1u').slideUp('fast'); row.find('.1p').slideUp('fast'); row.find('.inputTxt').slideDown('fast').blur(function(){ //change the .inputTxt control to a span }); }) ```
How to change element style back to normal if clicked somewhere else?
[ "", "javascript", "jquery", "" ]
I have an external style sheet with this in it: ``` .box { padding-left:30px; background-color: #BBFF88; border-width: 0; overflow: hidden; width: 400px; height: 150px; } ``` I then have this: ``` <div id="0" class="box" style="position: absolute; top: 20px; left: 20px;"> ``` When I then try to access the width of the div: ``` alert(document.getElementById("0").style.width); ``` A blank alert box comes up. How can I access the width property which is defined in my style sheet? NOTE: The div displays with the correct width.
You should use `window.getComputedStyle` to get that value. I would recommend against using `offsetWidth` or `clientWidth` if you're looking for the CSS value because those return a width which includes padding and other calculations. Using `getComputedStyle`, your code would be: ``` var e = document.getElementById('0'); var w = document.defaultView.getComputedStyle(e,null).getPropertyValue("width"); ``` The documentation for this is given at MDC : [window.getComputedStyle](https://developer.mozilla.org/En/DOM:window.getComputedStyle)
offsetWidth displays the actual width of your div: ``` alert(document.getElementById("0").offsetWidth); ``` This width can be different to what you have set in your css, though. The jQuery way would be (I really don't want to mention them all the time, but that's what all the libraries are there for): ``` $("#0").width(); // should return 400 $("#0").offsetWidth(); // should return 400 as well $("#0").css("width"); // returns the string 400px ```
How can I access style properties on javascript objects which are using external style sheets?
[ "", "javascript", "html", "css", "" ]
I have a `QWidget` which handles the `mouseevent`, i.e. it stores the mouse position in a list when the left mouse button is pressed. The problem is, I cannot tell the widget to take only one point every x ms. What would be the usual way to get these samples? Edit: since the `mouseevent` is not called very often, is it possible to increase the rate?
It sounds like you don't want asynchronous event handling at all, you just want to get the location of the cursor at fixed intervals. Set up a timer to fire every x milliseconds. Connect it to a slot which gets the value of `QCursor::pos()`. Use `QWidget::mapFromGlobal()` if you need the cursor position in coordinates local to your widget. If you only want to do this while the left mouse button is held down, use `mousePressEvent()` and `mouseReleaseEvent()` to start/stop the timer.
You have two choices. You could either put some logic in the event handler that stores the timestamp (in milliseconds) of the last event. You then check that timestamp with every event and only store the point if the proper timespan has passed. (this is the ugly way) You could always also have a process somewhere in your app that registers the event handler every x milliseconds (if one isn't already registered) and then have your event handler un-register for the event in your handler). That way, when the event happens, the event handler gets un-registered and the timer re-registers for the event at your specified interval.
How to acquire an event only at defined times?
[ "", "c++", "qt", "mouseevent", "timing", "" ]
I have this code ``` private static Set<String> myField; static { myField = new HashSet<String>(); myField.add("test"); } ``` and it works. But when I flip the order, I get an **illegal forward reference** error. ``` static { myField = new HashSet<String>(); myField.add("test"); // illegal forward reference } private static Set<String> myField; ``` I'm a little bit shocked, I didn't expect something like this from Java. :) What happens here? Why is the order of declarations important? Why does the assignment work but not the method call?
First of all, let's discuss what a "forward reference" is and why it is bad. A forward reference is a reference to a variable that has not yet been initialized, and it is not confined only to static initializers. These are bad simply because, if allowed, they'd give us unexpected results. Take a look at this bit of code: ``` public class ForwardRef { int a = b; // <--- Illegal forward reference int b = 10; } ``` What should `a` be when this class is initialized? When a class is initialized, initializations are executed in order from the first to the last encountered. Therefore, you'd expect the line ``` a = b; ``` to execute prior to: ``` b = 10; ``` In order to avoid this kind of problem, Java designers completely disallowed such uses of forward references. **EDIT** this behaviour is specified by [section 8.3.2.3 of Java Language Specifications](http://java.sun.com/docs/books/jls/second_edition/html/classes.doc.html): > The declaration of a member needs to appear before it is used only if the member is an instance (respectively static) field of a class or interface C and all of the following conditions hold: > > * The usage occurs in an instance (respectively static) variable initializer of C or in an instance (respectively static) initializer of C. > * The usage is not on the left hand side of an assignment. > * C is the innermost class or interface enclosing the usage. > > A compile-time error occurs if any of the three requirements above are not met.
try this: ``` class YourClass { static { myField = new HashSet<String>(); YourClass.myField.add("test"); } private static Set<String> myField; } ``` it should compile without errors according to the JLS... (doesn't really help, though, does it?)
Why is the order of declarations important for static initializers?
[ "", "java", "language-features", "" ]
What's the best way to pretty-print xml in JavaScript? I obtain xml content through ajax call and before displaying this request in textarea i want to format it so it looks nice to the eye :)
This does not take care of any indenting, but helps to encode the XML for use within `<pre>` or `<textarea>` tags: ``` /* hack to encode HTML entities */ var d = document.createElement('div'); var t = document.createTextNode(myXml); d.appendChild(t); document.write('<pre>' + d.innerHTML + '</pre>'); ``` And if, instead of a `<textarea>`, you'd want highlighting and the nodes to be collapsable/expandable, then see [Displaying XML in Chrome Browser](https://superuser.com/questions/972/displaying-xml-in-chrome-browser/) on Super User.
take a look at the **vkBeautify.js** plugin <http://www.eslinstructor.net/vkbeautify/> it is exactly what you need. it's written in plain javascript, less then 1.5K minified and very fast: takes less then 5 msec. to process 50K XML text.
How to print pretty xml in javascript?
[ "", "javascript", "xml", "pretty-print", "" ]
I need to detect where the user has just clicked from - as my AJAX content needs to be displayed differently depending on the source page it is to be inserted into. If it's to go into about.php it needs to be data only, but if it's to go into about-main.php it needs to be the whole middle column so needs a header/footer wrapper around the data. The html called via AJAX is held in a php page which uses this code to see who's asking, and then formats the HTML response appropriately. ``` $array[] = "/cf/about.php"; $array[] = "/cf/about/about-main.php"; $array[] = "/cf/about/other-page.php"; $ispage = ""; foreach ($array as $value) { if ($_SERVER['HTTP_REFERER'] != $value) { $ispage = ""; } else { $ispage = "woot"; } } if ($ispage == "woot") { echo $content; } else { include_once 'about-header.php'; echo $content; include_once 'about-footer.php'; } ``` The problem is... HTTP\_REFERER seems to be a bit hit and miss. It works just fine while I'm at work on the network, but I've tried it out on my computer at home and it's obviously completely failing to work - the results are horrible :o Is there another way of achieving this? I guess session variables could be used but then I've not got much experience of that! Any and all hints/tips are appreciated ;) Thanks! edit: The page is actually a staff profile page. Its normal location is about.php and the 2nd column div displays a grid of thumbnails which when clicked, load the profile in that place via AJAX. All nice and simple - back button reloads the grid of photos. The problem is, each staff member also needs a static page. I've created these at about/staff-name.php. The content is THE SAME though. I want the server to detect if someone has come to the about/staff-name.php directly and if so, wrap a header/footer around it. If the request has come from the grid of photos (ie AJAX) then it doesn't need the header/footer wrapper. Is that clear? 
1) If it's an AJAX request - no wrapper. 2) If it's not an AJAX request - add the header/footer wrapper.
Wouldn't it be easier to just pass a flag in your AJAX call to tell the script which type of content to display? Edit: So about/staff-name.php displays the content. Call it via AJAX as about/staff-name.php?FromAjax=1 Then in the about/staff-name.php file: ``` if (isset($_REQUEST['FromAjax']) ) { echo $content; } else { include_once 'about-header.php'; echo $content; include_once 'about-footer.php'; } ```
No matter what, the Referer is not an information you should base your whole website upon : it is sent by the client, which means (at least) : * the client does not necessarily have to send it + it can be disabled in the browser + some firewall / antivirus remove it * it can be forged / altered (an easy way with firefox is to use an extension to do that) You definitly must find a better/saffer/more reliable way to do what you need. One solution *(which you already discarded)* would be to pass an additionnal information in all your links, saying which page the request comes from. In my opinion, this would probably be the best thing to do... Maybe a simpler way would be to add a parameter to your Ajax request, saying where it comes from ? ie, instead of relying on the Referer in the PHP script, just have a look at a parameter in the request, which would act as some "kind of referer", but put by you ? It will not be more secure *(users could still forge request, with that parameter)*, but, at least, it would not be disabled / modified *(except if the user does it by hand)* --- In the end, you also say this : > 1) If AJAX request - no wrapper > 2) If > not AJAX request - add header/footer > wrapper Well, if it's only a matter of determining if the PHP script was called through an Ajax request, here too, two solutions : * Add a parameter to the Request when it's done through Ajax (you only add this parameter in the JS script doing the request ; and when the parameter is here, PHP knows it's an Ajax request) * Or, in the PHP script, look for an `X-Requested-With` HTTP header, which is often here with the value `XMLHttpRequest` when a request is made through an Ajax call. *(But you should check that it's set with your JS Framework / or maybe it depends on the browser -- not sure about that :-( )*
PHP HTTP_REFERRER - how to detect last page?
[ "", "php", "session-variables", "http-referer", "" ]
I have a querystring-like value set in a plain string. I started to split the string to get the values out, but then I wondered whether I could probably write this in one line instead. Could you please advise if there is a more optimal way to do this? I am trying to read "123" and "abc" like in Request.QueryString, but from a normal string. ``` protected void Page_Load(object sender, EventArgs e) { string qs = "id=123&xx=abc"; string[] urlInfo = qs.Split('&'); string id = urlInfo[urlInfo.Length - 2]; Response.Write(id.ToString()); } ```
You can do it this way: ``` using System.Web; // for HttpUtility using System.Collections.Specialized; NameValueCollection query = HttpUtility.ParseQueryString(queryString); Response.Write(query["id"]); ``` Hope it helps.
Look at [HttpUtility.ParseQueryString](http://msdn.microsoft.com/en-gb/library/system.web.httputility.parsequerystring.aspx). Don't reinvent the wheel.
Most optimal way to parse querystring within a string in C#
[ "", "c#", "string", "parsing", "" ]
Ok, I've read a couple of topics about it, but here it goes. Let's imagine I have an application where basically every now and then I will click on a button, a lot of things will happen for a couple of minutes, and then it'll stay idle possible for another hour, or maybe just 1 minute. Wouldn't just after that whole ended a good situation to call GC.Collect? I mean, I do know that just at that moment I will not being using my application, and GC cannot guess it.
I can see that several people have gone extreme about not recommending to call GC.Collect. GC.Collect is there for a reason, here are my recommendation of when and why to call GC.Collect. 1. In General, don't worry about calling it, GC tunes itself very well and will do the right thing. 2. Sometimes, you end up in situation where you know for sure that this is the right time to call it, the situation you described above is exactly the right time to call it, in fact Asp.Net calls GC.Collect at certain points that are similar to what you described. 3. The GC is smart about calling GC.Collect, if you called GC.Collect, the GC can override your decision and still doesn't collect ( you have to set a flag when you call GC.Collect to choose this behavior), this is the recommended way of calling GC.Collect, since you are still letting the GC decides if it is a good time to collect. 4. Don't take my recommendation is a general statement for calling GC.Collect, you should always avoid calling it, unless you REALLY sure you need to call it, situation like the one you described is exactly why GC.Collect is there. 5. The benefit you will get from calling it is freeing the garbage quickly, generally you will care about this situation if 1. You are in low memory situation and want to be eager about collection, if you are in low memory situation, the GC will be aggressive anyway and will kick in automatically if the memory pressure is high on the machine 2. If you want to avoid getting in low memory situation and you want to collect eagerly. Hope this helps. Thanks
It's almost always a premature optimization to worry about calling GC.Collect before you've prototyped or built the application and tested its performance. The GC is usually very good at collecting memory at the appropriate times. It will certainly run a collection while your application is idle, especially if there is memory pressure in the system. It is much more important that you follow good generational GC allocation practices (small objects, short usage, etc.) and you will likely get the performance characteristics you are wanting. If you still don't have the performance you need, after profiling and good design you might think about GC.Collect as a solution.
GC.Collect()
[ "", "c#", ".net", "vb.net", "garbage-collection", "" ]
This is a two part question; firstly, does anyone out there have some insight as to why PHP contract developers would be available at a much lower rate than their .NET counterparts (around a 30% premium for the Microsoft guys)? I have some theories relating to ease of learning and cost of tools and servers but would like to get some feedback from other people. Secondly, what is the impact of this on total project cost and ongoing maintenance? Generally speaking, would you consider the total effort for a typical website build similar for the two or does one technology impose a premium time wise?
I think this also has something to do with the nature of the clients. PHP (this is a BIG generalisation!) projects tend to be for small web sites for smallish companies - or -, mega web sites for companies whose main business is web based. In both cases development is a major cost which must be kept down if profits are to be made; they choose LAMP stacks because the costs are lower and they choose PHP because the rates are lower. .NET projects tend to be for large corporations. Development costs are not their main concern (indeed they often pay way over the top to ensure that software development does not delay the rest of the project). Also they expect more than just a coder for their money. A knowledge of some formal method (RUP etc.), business domain experience, and numerous other technical skills (SQL, CORBA, SOAP etc.) are expected. Contract rates are driven by supply, demand and fashion. They have very little to do with the difficulty or skill level required. For years one of the highest paying gigs has been implementing SAP - where the key skill for a SAP developer is the ability to withstand the mind-numbing tedium.
I am suspicious that two developers, one .NET and one PHP, of the same skill level and seniority, in the same region, have any statistically significant difference in their respective rates. It is possible that, given PHP's much lower barrier to entry, the PHP skills market is more saturated with beginner level and relatively unskilled programmers, compared to .NET. That could skew averages, but that is very different than saying "PHP developers are cheaper than .NET"
Why are PHP developers cheaper than .NET developers?
[ "", ".net", "php", "" ]
I have a list of questions in a table, some of which are only to be displayed if certain criteria are met. A record might have criteria as simple as 4002=Y where 4002 is the question number and Y is the answer. If 4002=Y then the question is to be displayed. For records with only one criteria I have no problem. But then there are records that have criteria like the following: ``` 402=Y OR 7003=Y OR 905=Y OR 7007=Y OR 7008=Y OR 7010=Y OR 7011=Y OR 7013=Y ``` In this case I would need to evaluate each option to see if the question is to be displayed or not. Other questions will have similar strings; some shorter, some longer. How would I best split the string up so I can evaluate each section at a time and still be able to compare them correctly? I can reformat the data to some degree, but I would prefer not to if at all possible. Is this a `regex()` task (I'm not very familiar with that yet)? I've tried `list()`, `split()` and `explode()` with little success. Any pointers would be appreciated.
If your input string really is just a bunch of simple criteria separated with " OR ", then a simple explode() will indeed do the trick: ``` $criteria = "402=Y OR 7003=Y OR 905=Y OR 7007=Y OR 7008=Y OR 7010=Y OR 7011=Y OR 7013=Y"; $split_criteria = explode("OR", $criteria); foreach ($split_criteria as $single) { echo trim($single) . "\n"; } ``` However if it is more complicated (if you allow AND as well as OR, say) then you will need a correspondingly smarter parser.
``` $criteria = '402=Y OR 7003=Y OR 905=Y OR 7007=Y OR 7008=Y OR 7010=Y OR 7011=Y OR 7013=Y'; $rules = array(); foreach (explode(' OR ', $criteria) as $criterium) { $rule = explode('=', $criterium); $rules[$rule[0]] = ($rule[1] == 'Y'); } var_dump($rules); // array() { // [402]=> bool(true) // [7003]=> bool(true) // [905]=> bool(true) // ... // } $isAnyRuleTrue = in_array(true, $rules); ```
Splitting a string up into parts
[ "", "php", "string", "extract", "" ]
I have written a user control for our SharePoint site that builds an HTML menu - this has been injected into the master page and as such ends up rendering on all pages that use it. There are some pretty computationally expensive calls made while generating this HTML and caching is the logical choice for keeping the page loads snappy. The HttpRuntime Cache has worked perfectly for this so far. Now we are embarking down version 1.1 of this user control and a new requirement has crept in to allow per-user customization of the menu. Not a problem, except that I can no longer blindly use the HttpRuntime Cache object - or at least, not use it without prefacing the user id and making it user specific someway. Ideally I would like to be able to use the ASP.NET Session collection to store the user specific code. I certainly don't need it hanging around in the cache if the user isn't active and further, this really is kind of session specific data. I have looked at several options including it in the ViewState or enabling Session management (by default it is disabled for [a good reason](http://www.bluedoglimited.com/SharePointThoughts/ViewPost.aspx?ID=69)). But I am really not all that happy with either of them. So my question is: How should I go about caching output like this on a per user basis? Right now, my best bet seems to be include their user id in the Cache key, and give it a sliding expiration.
It's worth pointing out that I believe the 'end of days' link provided is relevant for SPS2003, not MOSS 2007 - afaik MOSS's integration into ASP.NET means that the mentioned issue is not a problem in MOSS.
I use ViewState on a fairly large MOSS 2007 deployment (1000+ users) for custom webparts and pages, and I haven't noticed a detrimental effect on the deployment performance at all. Just use it, is my advice.
Options for storing Session state with SharePoint
[ "", "c#", "sharepoint", "moss", "wss", "" ]
I'm using C++ to write a ROOT script for some task. At some point I have an array of doubles in which many are quite similar and one or two are different. I want to average all the number except those sore thumbs. How should I approach it? For an example, lets consider: ``` x = [2.3, 2.4, 2.11, 10.5, 1.9, 2.2, 11.2, 2.1] ``` I want to somehow average all the numbers except `10.5` and `11.2`, the dissimilar ones. This algorithm is going to repeated several thousand times and the array of doubles has 2000 entries, so optimization (while maintaining readability) is desired. Thanks SO! Check out: <http://tinypic.com/r/111p0ya/3> The "dissimilar" numbers of the y-values of the pulse. The point of this to determine the ground value for the waveform. I am comparing the most negative value to the ground and hoped to get a better method for grounding than to average the first N points in the sample.
Given that you are using ROOT you might consider looking at the `TSpectrum` classes which have support for extracting backgrounds from under an unspecified number of peaks... I have never used them with so much baseline noise, but they ought to be robust. BTW: what is the source of this data. The peak looks like a particle detector pulse, but the high level of background jitter suggests that you could really improve things by some fairly minor adjustments in the DAQ hardware, which might be better than trying to solve a difficult software problem. Finally, unless you are restricted to some very primitive hardware (in which case why and how are you running ROOT?), if you only have a couple thousand such spectra you can afford a pretty slow algorithm. Or is that 2000 spectra per event and a high event rate?
If you can, maintain a sorted list; then you can easily chop off the head and the tail of the list each time you work out the average. This is much like removing outliers based on the median (ie, you're going to need two passes over the data, one to find the median - which is almost as slow as sorting for floating point data, the other to calculate the average), but requires less overhead at the time of working out the average at the cost of maintaining a sorted list. Which one is fastest will depend entirely on your circumstances. It may be, of course, that what you really want is the median anyway! If you had discrete data (say, bytes=256 possible values), you could use 256 histogram 'bins' with a single pass over your data putting counting the values that go in each bin, then it's really easy to find the median / approximate the mean / remove outliers, etc. This would be my preferred option, if you could afford to lose some of the precision in your data, followed by maintaining a sorted list, if that is appropriate for your data.
How to select an unlike number in an array in C++?
[ "", "c++", "average", "" ]
I have this code and it's giving me an undefined error if country is not a variable in the URL. I've never had this problem before so it's obviously something to do with server settings but I'd rather not change those because this is probably done for security purposes. I've searched Google and can't find the workaround, although I'm sure it's blindingly obvious! ``` $country = $_GET['country']; if(!$country){ $incoming = $outgoing = $sms = $free = "---"; } else { get_rates($country); } ```
you should use the following approach: ``` if (!isset($_GET['country'])) { $incoming = $outgoing = $sms = $free = "---"; } else { get_rates($_GET['country']); } ```
isset allows you to check if the variable exists (if not we give it the value false). ``` $country = isset($_GET['country'])? $_GET['country'] : false; ```
PHP Undefined Index
[ "", "php", "" ]
I created a sample project to disable the keyboard shortcuts of windows. Then I included the `.exe` of this sample project to the reference of my main project where I want to implement this feature. Problem is, it is not working in my main project. But working perfectly in my sample project. Am I missing something like invoking the referenced `.exe` ?? I don't want to implement the code of sample project to the main project I just want to reference the *.exe* to the main project. How? ``` namespace BlockShortcuts { public class DisableKeys { private delegate int LowLevelKeyboardProcDelegate(int nCode, int wParam, ref KBDLLHOOKSTRUCT lParam); [DllImport("user32.dll", EntryPoint = "SetWindowsHookExA", CharSet = CharSet.Ansi)] private static extern int SetWindowsHookEx(int idHook, LowLevelKeyboardProcDelegate lpfn, int hMod, int dwThreadId); [DllImport("user32.dll")] private static extern int UnhookWindowsHookEx(int hHook); [DllImport("user32.dll", EntryPoint = "CallNextHookEx", CharSet = CharSet.Ansi)] private static extern int CallNextHookEx(int hHook, int nCode, int wParam, ref KBDLLHOOKSTRUCT lParam); const int WH_KEYBOARD_LL = 13; private int intLLKey; private struct KBDLLHOOKSTRUCT { public int vkCode; int scanCode; public int flags; int time; int dwExtraInfo; } private int LowLevelKeyboardProc(int nCode, int wParam, ref KBDLLHOOKSTRUCT lParam) { bool blnEat = false; switch (wParam) { case 256: case 257: case 260: case 261: //Alt+Tab, Alt+Esc, Ctrl+Esc, Windows Key if (((lParam.vkCode == 9) && (lParam.flags == 32)) || ((lParam.vkCode == 27) && (lParam.flags == 32)) || ((lParam.vkCode == 27) && (lParam.flags == 0)) || ((lParam.vkCode == 91) && (lParam.flags == 1)) || ((lParam.vkCode == 92) && (lParam.flags == 1)) || ((true) && (lParam.flags == 32))) { blnEat = true; } break; } if (blnEat) return 1; else return CallNextHookEx(0, nCode, wParam, ref lParam); } public void DisableKeyboardHook() { intLLKey = SetWindowsHookEx(WH_KEYBOARD_LL, new 
LowLevelKeyboardProcDelegate(LowLevelKeyboardProc), System.Runtime.InteropServices.Marshal.GetHINSTANCE(System.Reflection.Assembly.GetExecutingAssembly().GetModules()[0]).ToInt32(), 0); } private void ReleaseKeyboardHook() { intLLKey = UnhookWindowsHookEx(intLLKey); } } } ```
From one of your comments I get the idea that you are just trying to instantiate the code and run methods on it. Have you tried: ``` namespace test { using BlockShortcuts; class MyTest { public static void Main(string[] args) { DisableKeys dk = new DisableKeys(); dk.DisableKeyboardHook(); } } } ``` Also, have you included all the required DLLs in your main project, or just the one?
Referencing the project just tells Visual Studio (or whatever IDE you're using) that you might want to use some code from that library; it will not include anything or execute anything on its own. If you wish to execute code from the sample project you can either build it as a DLL (if you have the source code) or you can execute the program from your main program (you will not need to include it as a reference in the latter case).
Invoking a .exe referenced to the c# windows application
[ "", "c#", "" ]
I have an XHTML document being passed to a PHP app via Greasemonkey AJAX. The PHP app uses UTF8. If I output the POST content straight back to a textarea in the AJAX receiving div, everything is still properly encoded in UTF8. When I try to parse using XPath ``` $dom = new DOMDocument(); $dom->loadHTML($raw2); $xpath = new DOMXPath($dom); $query = '//td/text()'; $nodes = $xpath->query($query); foreach($nodes as $node) { var_dump($node->wholeText); } ``` dumped strings are not utf8. How do I force DOM/XPath to use UTF8?
If it is a fully fledged valid xhtml document you shouldn't use loadhtml() but load()/loadxml(). Given the example xhtml document ``` <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>xhtml test</title> </head> <body> <h1>A Table</h1> <table> <tr><th>A</th><th>O</th><th>U</th></tr> <tr><td>Ä</td><td>Ö</td><td>Ü</td></tr> <tr><td>ä</td><td>ö</td><td>ü</td></tr> </table> </body> </html> ``` the script ``` <?php $raw2 = 'test.html'; $dom = new DOMDocument(); $dom->load($raw2); $xpath = new DOMXPath($dom); var_dump($xpath->registerNamespace('h', 'http://www.w3.org/1999/xhtml')); $query = '//h:td/text()'; $nodes = $xpath->query($query); foreach($nodes as $node) { foo($node->wholeText); } function foo($s) { for($i=0; $i<strlen($s); $i++) { printf('%02X ', ord($s[$i])); } echo "\n"; } ``` prints ``` bool(true) C3 84 C3 96 C3 9C C3 A4 C3 B6 C3 BC ``` i.e. the output/strings are utf-8 encoded
I had the same problem and I couldn't use tidy in my webserver. I found this solution and it worked fine: ``` $html = mb_convert_encoding($html, 'HTML-ENTITIES', "UTF-8"); $dom = new DomDocument(); $dom->loadHTML($html); ```
How to force XPath to use UTF8?
[ "", "php", "xhtml", "xpath", "utf-8", "" ]
I have to store hindi text in a MySQL database, fetch it using a PHP script and display it on a webpage. I did the following: I created a database and set its encoding to UTF-8 and also the collation to `utf8_bin`. I added a varchar field in the table and set it to accept UTF-8 text in the charset property. Then I set about adding data to it. Here I had to copy data from an [existing site](http://www.mypanchang.com/2009/hi/Jammu-Kashmir-India/0/july.html). The hindi text looks like this: सूर्योदय:05:30 I directly copied this text into my database and used the PHP code `echo(utf8_encode($string))` to display the data. Upon doing so the browser showed me "??????". When I inserted the UTF equivalent of the text by going to "view source" in the browser, however, सूर्योदय translates into `&#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;`. If I enter and store `&#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;` in the database, it converts perfectly. So what I want to know is how I can directly store सूर्योदय into my database and fetch it and display it in my webpage using PHP. Also, can anyone help me understand if there's a script which when I type in सूर्योदय, gives me `&#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;`? **Solution Found** I wrote the following sample script which worked for me. Hope it helps someone else too ``` <html> <head> <title>Hindi</title></head> <body> <?php include("connection.php"); //simple connection setting $result = mysql_query("SET NAMES utf8"); //the main trick $cmd = "select * from hindi"; $result = mysql_query($cmd); while ($myrow = mysql_fetch_row($result)) { echo ($myrow[0]); } ?> </body> </html> ``` The dump for my database storing hindi utf strings is ``` CREATE TABLE `hindi` ( `data` varchar(1000) character set utf8 collate utf8_bin default NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `hindi` VALUES ('सूर्योदय'); ``` Now my question is, how did it work without specifying "META" or header info? 
Thanks!
Did you set proper charset in the HTML Head section? ``` <meta http-equiv="Content-Type" content="text/html;charset=UTF-8"> ``` or you can set content type in your php script using - ``` header( 'Content-Type: text/html; charset=utf-8' ); ``` There are already some discussions here on StackOverflow - please have a look [How to make MySQL handle UTF-8 properly](https://stackoverflow.com/questions/202205/how-to-make-mysql-handle-utf-8-properly/202246) [setting utf8 with mysql through php](https://stackoverflow.com/questions/624301/setting-utf8-with-mysql-through-php) [PHP/MySQL with encoding problems](https://stackoverflow.com/questions/405684/php-mysql-with-encoding-problems) > So what i want to know is how can i > directly store सूर्योदय into my > database and fetch it and display in > my webpage using PHP. I am not sure what you mean by "directly storing in the database" .. did you mean entering data using PhpMyAdmin or any other similar tool? If yes, I have tried using PhpMyAdmin to input unicode data, so it has worked fine for me - You could try inputting data using phpmyadmin and retrieve it using a php script to confirm. If you need to submit data via a Php script just set the NAMES and CHARACTER SET when you create mysql connection, before execute insert queries, and when you select data. Have a look at the above posts to find the syntax. Hope it helps. \*\* UPDATE \*\* Just fixed some typos etc
``` <meta http-equiv="Content-Type" content="text/html;charset=UTF-8"> <?php $con = mysql_connect("localhost","root",""); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_query('SET character_set_results=utf8'); mysql_query('SET names=utf8'); mysql_query('SET character_set_client=utf8'); mysql_query('SET character_set_connection=utf8'); mysql_query('SET character_set_results=utf8'); mysql_query('SET collation_connection=utf8_general_ci'); mysql_select_db('onlinetest',$con); $nith = "CREATE TABLE IF NOT EXISTS `TAMIL` ( `data` varchar(1000) character set utf8 collate utf8_bin default NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1"; if (!mysql_query($nith,$con)) { die('Error: ' . mysql_error()); } $nithi = "INSERT INTO `TAMIL` VALUES ('இந்தியா நாட்டின் பக்கங்கள்')"; if (!mysql_query($nithi,$con)) { die('Error: ' . mysql_error()); } $result = mysql_query("SET NAMES utf8");//the main trick $cmd = "select * from TAMIL"; $result = mysql_query($cmd); while($myrow = mysql_fetch_row($result)) { echo ($myrow[0]); } ?> </body> </html> ```
Storing and displaying unicode string (हिन्दी) using PHP and MySQL
[ "", "php", "mysql", "unicode", "utf-8", "internationalization", "" ]
I've been reading an PHP5 book, and the author commonly used this syntax ``` ${"{$something}_somethingelse"}; ``` I have no idea what that means. Does it dynamically generate a variable name? Someone help me out?
It is a language feature called [**Variable variables**](http://www.php.net/manual/en/language.variables.variable.php). Consider the following piece of code: ``` $a = 'hello'; ``` This is pretty straight forward. It creates the variable `$a` and sets its value to `'hello'`. Let's move on with: ``` $$a = 'world'; ${$a} = 'world'; ``` Basically, since `$a = 'hello'`, those two statement are the equivalent of doing: ``` $hello = 'world'; ``` So the following: ``` echo "$a ${$a}"; ``` Is the equivalent of doing: ``` echo "$a $hello"; ``` --- # Braces `{ }` The braces are used to prevent ambiguity problems from occurring. Consider the following: ``` $$a[1] = 'hello world'; ``` Do you want to assign a variable named after the value of `$a[1]` or do you want to assign the index `1` of the variable named after `$a`? For the first choice, you would write it as such: ``` ${$a[1]} = 'hello world'; ``` For the second choice: ``` ${$a}[1] = 'hello world'; ``` --- # Your example Now, for your example. Let's consider that: ``` $something = 'hello'; ``` Using your example as such: ``` ${"{$something}_somethingelse"} = 'php rocks'; ``` Would essentially be equivalent of doing: ``` $hello_somethingelse = 'php rocks'; ```
They are 'variable variables'. See [this](http://www.php.net/manual/en/language.variables.variable.php).
PHP Syntax ${"{$type}_method"}
[ "", "php", "" ]
If this question is too broad, my apologies. I am a DBA and I am working with a developer that thinks column defaults are a bad idea and that just setting the column to disallow nulls is good enough. I am looking for the benefits of column defaults from the developers point of view. Thank you for your comments.
Column defaults allow you to remove a lot of the hassle of creating records from your code or application logic. **Advantage #1**: If you have a table with 2 columns of user information and 20 columns of tinyint/boolean fields, let's say they are privacy settings, and you go to create a new record in that table, without a `DEFAULT` value you're going to have to specify every column in your query. There's probably a common setup that you want these records to have by default, and you can set this up using `DEFAULT` values. When you `INSERT` a record, you only need to specify the two user information fields, and voila, your record is created with a nice set of common privacy settings. You can tweak the flags individually after that. **Advantage #2**: Forward compatibility! If you have a bunch of `INSERT`s in your code, and then some columns are added on, you're going to have to go back and modify all those `INSERT`s if you don't specify a `DEFAULT` value (assuming, as is common, that `NULL`s aren't going to cut it). Often it isn't necessary to update old code for a new column (since the old code by nature doesn't care about the new column), so this would be a **huge** pain in the @$$ if you had to start going back and handling every new column in your code as it came along.
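To make Advantage #1 concrete, here is a hedged sketch (the table and column names are invented for illustration; syntax shown is generic SQL):

```sql
-- Hypothetical settings table: the privacy flags get sensible defaults
CREATE TABLE user_settings (
    user_id      INT          NOT NULL,
    display_name VARCHAR(50)  NOT NULL,
    is_public    TINYINT      NOT NULL DEFAULT 0,
    allow_email  TINYINT      NOT NULL DEFAULT 1,
    created_at   TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Only the two user-information columns need to be specified;
-- every other column falls back to its DEFAULT
INSERT INTO user_settings (user_id, display_name) VALUES (42, 'alice');
```

Without the `DEFAULT` clauses, that `INSERT` would have to name and supply a value for every `NOT NULL` column, and any new column added later would break it.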
It helps with versioning. For example if old code (INSERT statements) expects a table with 10 columns, and that old code must work with the new table with 12, then you'll need to provide a default value for the new columns, or make the new columns nullable. And it also depends on the exact semantics of what NULL means for a given column, because a choice of defaults is also a choice for or against NULL in a column. Does the empty token mean unknown, unknowable, not changed yet, something in the unknown past, something in the unknown future. These are slightly different tokens and NULL can only mean one of them.
What are the benefits of putting a default value in a column?
[ "", "sql", "sql-server", "database", "database-design", "" ]
Does anyone know if there is a way to install a Windows service created in C# without making an installer?
You can use **installutil**. From the command line: ``` installutil YourWinService.exe ``` This utility is installed with the .NET Framework
You could try the Windows [sc command](http://support.microsoft.com/kb/251192):

```
C:\WINDOWS\system32>sc create
```

From its help text: "SC is a command line program used for communicating with the NT Service Controller and services."
How do I install a C# Windows service without creating an installer?
[ "", "c#", "windows-services", "" ]
What is the difference between a `DateTime?` and a `DateTime` (without a question mark) in C#?
`DateTime?` can be null as opposed to `DateTime`
A question mark after a value type is a shorthand notation for the [`Nullable<T>`](http://msdn.microsoft.com/en-us/library/b3h38hb0.aspx) structure. > Represents an object whose underlying > type is a value type that can also be > assigned null like a reference type. The `Nullable<T>` structure allows you to wrap value types (like `DateTime`, `Int32`, `Guid`, etc.) and treat them like reference types in certain respects. It does get a bit more complicated (in terms of assignment, lifted operators, and other things) and as such I would recommend that you read [Nullable Types (C# Programming Guide)](http://msdn.microsoft.com/en-us/library/1t3y8s4s(VS.80).aspx) and its related articles. > Nullable types are instances of the > `System.Nullable` struct. A nullable > type can represent the normal range of > values for its underlying value type, > plus an additional null value. For > example, a `Nullable<Int32>`, pronounced > "Nullable of Int32," can be assigned > any value from -2147483648 to > 2147483647, or it can be assigned the > null value. A `Nullable<bool>` can be > assigned the values true or false, or > null. The ability to assign null to > numeric and Boolean types is > particularly useful when dealing with > databases and other data types > containing elements that may not be > assigned a value. For example, a > Boolean field in a database can store > the values true or false, or it may be > undefined.
What is a "DateTime?" as opposed to just a DateTime in C#?
[ "", "c#", "datetime", "nullable", "" ]
I have a question about LINQ queries. Normally a query returns an `IEnumerable<T>`. If nothing matches, I'm not sure whether the result is null or just empty. I'm also not sure whether the following `ToList()` will throw an exception or simply return an empty `List<string>` if nothing is found in the `IEnumerable` result:

```
List<string> list = new List<string> { "a" };

// is the result null or something else?
IEnumerable<string> ilist = from x in list where x == "ABC" select x;

// Or directly to a list, exception thrown?
List<string> list1 = (from x in list where x == "ABC" select x).ToList();
```

I know it is a very simple question, but I don't have VS available for the time being.
It will return an empty enumerable. It won't be null. You can sleep soundly :)
You can also check the `.Any()` method: ``` if (!YourResult.Any()) ``` Just a note that `.Any` will still retrieve the records from the database; doing a `.FirstOrDefault()/.Where()` will be just as much overhead but you would then be able to catch the object(s) returned from the query
What does LINQ return when the results are empty
[ "", "c#", "linq", "" ]
I have a dictionary where each key maps to two values, and I want to set each value to a separate variable.

```
d = {'key' : ('value1, value2'), 'key2' : ('value3, value4'), 'key3' : ('value5, value6')}
```

I tried `d['key'][0]` in the hope it would return `"value1"`, but instead it returned `"v"`.

Any suggestions?
A better solution is to store your value as a two-tuple: ``` d = {'key' : ('value1', 'value2')} ``` That way you don't have to split every time you want to access the values.
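A quick sketch of the two-tuple approach (illustrative data only):

```python
# Store each value as a real two-tuple instead of one comma-joined string
d = {'key': ('value1', 'value2'),
     'key2': ('value3', 'value4')}

# No splitting needed: index or unpack directly
first = d['key'][0]
v1, v2 = d['key2']

print(first)  # value1
print(v2)     # value4
```

Indexing now returns a whole value, not a single character, and tuple unpacking assigns both values in one statement.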
Try something like this: ``` d = {'key' : 'value1, value2'} list = d['key'].split(', ') ``` `list[0]` will be "value1" and `list[1]` will be "value2".
Basic Python dictionary question
[ "", "python", "dictionary", "" ]
There are different JavaScript frameworks like jQuery, Dojo, mooTools, Google Web Toolkit(GWT), YUI, etc. Which one from this is suitable for high performance websites?
*(Full disclaimer: I am a Dojo developer and this is my unofficial perspective).* All major libraries can be used in high load scenarios. There are several things to consider: **Initial load** The initial load affects your response time: from requesting a web page to being responsive and in working mode. Trivial things to do are: * concatenate several JavaScript files together (works for CSS files too) * minimize and/or compress your JavaScript The idea is to send less — good for the server, and good for the client. The less trivial thing to do: * structure your program in such a way so it is operational without all modules loaded Example of the latter: divide your modules into essential (e.g., the core logic), and non-essential (e.g., helpers: tooltips, hints, verifiers, help facilities, various "gradual enhancers", and so on). The idea is that frequently there are things which are not important for frequent users, but nice for casual users ⇒ they can be delayed. We can load essential modules first and load the rest asynchronously. Example: if user wants to edit an object we need to show it first, after that we have several hundred milliseconds to load the rest: lookup tables, hints, and so on. Obviously it helps when asynchronous loading of modules is supported by the framework you use. Dojo has this facility built-in. 
**Distribute files** Everybody knows that due to browser restrictions on number of parallel downloads from the same site it is beneficial to load resources (images, CSS, JavaScript) from different domains: * we can download more in parallel, if user's line has enough bandwidth — these days it is almost always true * we can set up web servers optimized for serving static files: huge disk cache, small workers, keep-alive, async serving, and so on * we can remove all unnecessary features we don't need when serving static files: sessions, cookies, and so on One frequently overlooked optimization in JavaScript applications is to use [CDN](http://en.wikipedia.org/wiki/Content_delivery_network): * your web site can benefit from the geographical distribution of CDN (files can be served from the closest/fastest server) * user may have required files in her cache, if they were used by other application * intermediate/corporate caches increase the probability that required files are already cached * the last but not least: these are files that you don't serve — think about it Again, Dojo supports CDNs for a long time and distributed publicly by [AOL CDN](http://dev.aol.com/dojo) and [Google CDN](http://code.google.com/apis/ajaxlibs/documentation/). The latter carries practically all popular JavaScript toolkits too. Obviously you can create your own CDN and your very own CDN- and app- specific Dojo build, if you feel you need it — it is trivial and well documented. **Communication bandwidth** How that can be different for different toolkits? XHR is XHR. You need to reduce the load on your servers as much as possible. Analyze **all** traffic and consider how much static/immutable stuff is sent over the pipe. For example, typically a lot of HTML is redundant across several pages: a header, a footer, a menu, and so on. Do you really need all of these to be sent over every time? 
One obvious solution is to move from static HTML + "gradual enhancements" with JavaScript to real "one page" JavaScript applications. Again, this is a frequently overlooked, but the most rewarding optimization. While the idea sounds easy, in reality it is not as simple as it seems. As soon as we go from one-liners to apps we have a plethora of questions, and the biggest of them is the packaging: what your components are, what components are provided by the toolkit, and how to package and deliver them. Dojo provides modules, good OOP for general classes, widgets (a combination of an optional HTML and related behaviors), and a lot of facilities to work with them. You can: * load modules on demand rather than in the head * load modules asynchronously * find all dependencies between modules automatically and create a "build" — one file in simple cases, or more, if your app is big and requires several layers * while doing the "build" it can inline all HTML snippets for your widgets, optimize CSS, and minify/compress JavaScript * Dojo can automatically find and instantiate widgets in HTML saving a lot of boilerplate code * and much much more All these features help greatly when building applications on the client side. [That's why I like Dojo](https://stackoverflow.com/questions/394601/which-javascript-framework-jquery-vs-dojo-vs/394668#394668). Obviously there are more ways to optimize high load web sites but according to my practice these are the most specific for JavaScript frameworks.
Quite simply: **all of them**. All frameworks have been built in order to provide the fastest performance possible and provide the developers with useful functions and tools. **Your choice should be based on your requirements**. JavaScript runs on the client-side, so none will affect your server performance. The only difference server-side is the amount of bandwidth used to transfer the `.js` files to the client. I'm personally fond of [MooTools](http://mootools.net/) because it answers my requirements and also sticks to my coding ideals. A lot of people adopted [jQuery](http://jquery.com/) (I personally don't like it, doesn't mean it's not great). I haven't used the other ones. But none is better than the other, **it's all a question of requirements and personal preference**.
Which JavaScript framework is generally used for high performance websites?
[ "", "javascript", "jquery", "gwt", "frameworks", "dojo", "" ]
Is there a way to make the code continue (not exit) when you get a fatal error in PHP? For example I get a timeout fatal error and I want whenever it happens to skip this task and the continue with others. In this case the script exits.
There is a hack using output buffering that will let you log certain fatal errors, but there's no way to continue a script after a fatal error occurs - that's what makes it fatal! If your script is timing out you can use [`set_time_limit()`](http://php.net/set_time_limit) to give it more time to execute.
"Fatal Error", as it's name indicates, is Fatal : it stop the execution of the script / program. If you are using PHP to generate web pages and get a Fatal error related to [`max_execution_time`](http://fr.php.net/manual/en/info.configuration.php#ini.max-execution-time) which, by defaults, equals 30 seconds, you are certainly doing something that really takes too mych time : users won't probably wait for so long to get the page. If you are using PHP to do some heavy calculations, not in a webpage *(but via CLI, or a cron, or stuff like that)*, you can set another *(greater)* value for `max_execution_time`. You have two ways of doing that : First is to modify php.ini, to set this value (it's already in the file ; just edit the property's value). Problem is it'll modify it also for the web server, which is bad *(this is a security measure, after all)*. Better way is to create a copy of php.ini, called, for instance, phpcli.ini, and modify this file. Then, use it when invoking php : ``` php -c phpcli.ini myscript.php ``` This'll work great if you have many properties you need to configure for CLI execution. (Like `memory_limit`, which often has to be set to a higher value for long-running batches) The other way is to define a different value for `max_execution_time` when you invoke php, like this : ``` php -d max_execution_time=60 myscript.php ``` This is great if you launch this via the crontab, for instance.
Fatal error php
[ "", "php", "error-handling", "" ]
I have a shared object (a so - the Linux equivalent of a Windows dll) that I'd like to import and use with my test code. I'm sure it's not this simple ;) but this is the sort of thing I'd like to do.. ``` #include "headerforClassFromBlah.h" int main() { load( "blah.so" ); ClassFromBlah a; a.DoSomething(); } ``` I assume that this is a really basic question but I can't find anything that jumps out at me searching the web.
There are two ways of loading shared objects in C++ For either of these methods you would always need the header file for the object you want to use. The header will contain the definitions of the classes or objects you want to use in your code. Statically: ``` #include "blah.h" int main() { ClassFromBlah a; a.DoSomething(); } gcc yourfile.cpp -lblah ``` Dynamically (In Linux): ``` #include <stdio.h> #include <stdlib.h> #include <dlfcn.h> int main(int argc, char **argv) { void *handle; double (*cosine)(double); char *error; handle = dlopen ("libm.so", RTLD_LAZY); if (!handle) { fprintf (stderr, "%s\n", dlerror()); exit(1); } dlerror(); /* Clear any existing error */ cosine = dlsym(handle, "cos"); if ((error = dlerror()) != NULL) { fprintf (stderr, "%s\n", error); exit(1); } printf ("%f\n", (*cosine)(2.0)); dlclose(handle); return 0; } ``` \*Stolen from [dlopen Linux man page](http://linux.die.net/man/3/dlopen) The process under windows or any other platform is the same, just replace dlopen with the platforms version of dynamic symbol searching. For the dynamic method to work, all symbols you want to import/export must have extern'd C linkage. There are some words [Here](http://publib.boulder.ibm.com/infocenter/systems/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/when_dyn_linking_static_linking.htm) about when to use static and when to use dynamic linking.
It depends on the platform. To do it at runtime, on Linux, you use [dlopen](http://www.opengroup.org/onlinepubs/009695399/functions/dlopen.html), on windows, you use [LoadLibrary](http://msdn.microsoft.com/en-us/library/ms684175(VS.85).aspx). To do it at compile time, on windows you export the function name using [dllexport](http://msdn.microsoft.com/en-us/library/a90k134d(VS.80).aspx) and [dllimport](http://msdn.microsoft.com/en-us/library/8fskxacy(VS.80).aspx). On linux, gcc exports all public symbols so you can just link to it normally and call the function. In both cases, typically this requires you to have the name of the symbol in a header file that you then `#include`, then you link to the library using the facilities of your compiler.
How do I load a shared object in C++?
[ "", "c++", "load", "shared-objects", "" ]
I have a generic interface, say IGeneric. For a given type, I want to find the generic arguments which a class implements via IGeneric. It is more clear in this example:

```
class MyClass : IGeneric<Employee>, IGeneric<Company>, IDontWantThis<EvilType>
{ ... }

Type t = typeof(MyClass);
Type[] typeArgs = GetTypeArgsOfInterfacesOf(t);

// At this point, typeArgs must be equal to { typeof(Employee), typeof(Company) }
```

What is the implementation of GetTypeArgsOfInterfacesOf(Type t)?

Note: It may be assumed that the GetTypeArgsOfInterfacesOf method is written specifically for IGeneric.

**Edit:** Please note that I am specifically asking how to filter out the IGeneric interface from all the interfaces that MyClass implements.

Related: [Finding out if a type implements a generic interface](https://stackoverflow.com/questions/1121834/finding-out-if-a-type-implements-a-generic-interface)
To limit it to just a specific flavor of generic interface you need to get the generic type definition and compare to the "open" interface (`IGeneric<>` - note no "T" specified): ``` List<Type> genTypes = new List<Type>(); foreach(Type intType in t.GetInterfaces()) { if(intType.IsGenericType && intType.GetGenericTypeDefinition() == typeof(IGeneric<>)) { genTypes.Add(intType.GetGenericArguments()[0]); } } // now look at genTypes ``` Or as LINQ query-syntax: ``` Type[] typeArgs = ( from iType in typeof(MyClass).GetInterfaces() where iType.IsGenericType && iType.GetGenericTypeDefinition() == typeof(IGeneric<>) select iType.GetGenericArguments()[0]).ToArray(); ```
``` typeof(MyClass) .GetInterfaces() .Where(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IGeneric<>)) .SelectMany(i => i.GetGenericArguments()) .ToArray(); ```
Getting type arguments of generic interfaces that a class implements
[ "", "c#", "generics", "reflection", "" ]
I'm running a J2ME Application and run into some serious memory problems. So I built in another step to clear the huge input string and process its data and clear it. But it didn't solve the problem until I set **`input = null`** and not **`input = ""`**. Shouldn't it be the same in terms of memory management? Can somebody explain me the difference please? Thanks, rAyt ``` for(int x = 0; x <= ChunksPartCount; x++) { _model.setLoading_bar_progress((x * ChunkSize)); input += web_service.FullCompanyListChunksGet(x, ChunkSize); if((x * ChunkSize) > 5000) { ReadXML(input); input = null; } } ``` **Edit:** I still want to flag an answer as the solution. I think mmyers remarks are going in the right direction.
Every object variable is actually a reference (a pointer) to data in memory.

`input = ""` makes `input` reference a String object. That object has a length (0) and an empty character array, as well as a few other pieces of data associated with it; `input.length()` will return 0 at this point.

`input = null` makes `input` reference nothing at all. Null is a special case meaning the reference points to NOTHING - it's unassigned. `input.length()` will now throw a NullPointerException, because you are calling `.length()` on nothing.
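A small sketch of the difference (this is general Java behavior, not J2ME-specific):

```java
public class NullVsEmpty {
    public static void main(String[] args) {
        String empty = "";   // a real String object: has a length, occupies some memory
        String none = null;  // no object at all: the reference points to nothing

        System.out.println(empty.length()); // 0

        try {
            none.length(); // dereferencing a null reference throws
        } catch (NullPointerException e) {
            System.out.println("NullPointerException");
        }
    }
}
```

So `""` keeps an object alive on the heap, while `null` leaves nothing for the reference to hold onto, which is why nulling the large string made it eligible for collection.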
`input = null` lets the garbage collector reclaim the object in memory, while `input = ""` instantiates a `String` object containing the empty string `""`.

By setting `input` to null, you are making `input` reference no object at all, so it keeps no memory reachable, while setting `input = ""`, you are setting it to a new object that will definitely take *some* memory (obviously it should be minimal).

You could look into [this article](http://www.ibm.com/developerworks/java/library/j-jtp01274.html#3.2) from IBM talking about Java GC and performance, which discourages my previous recommendation. It says:

> Explicit nulling is simply the
> practice of setting reference objects
> to null when you are finished with
> them. **The idea behind nulling is that
> it assists the garbage collector by
> making objects unreachable earlier. Or
> at least that's the theory.**
>
> **There is one case where the use of
> explicit nulling is not only helpful,
> but virtually required, and that is
> where a reference to an object is
> scoped more broadly than it is used or
> considered valid by the program's
> specification.** This includes cases
> such as using a static or instance
> field to store a reference to a
> temporary buffer, rather than a local
> variable, or using an array to store
> references that may remain reachable
> by the runtime but not by the implied
> semantics of the program.

And furthermore,

> In the September 1997 "Java Developer Connection Tech Tips" column (see Resources), Sun warned of this risk and explained how explicit nulling was needed in cases like the pop() example above. Unfortunately, programmers often take this advice too far, using explicit nulling in the hope of helping the garbage collector. **But in most cases, it *doesn't help the garbage collector at all*, and in some cases, it can actually *hurt* your program's performance.**
Java input = "" isn't the same as input = null?
[ "", "java", "memory-management", "java-me", "garbage-collection", "" ]
Is it faster to do multiple SQL finds on one table with different conditions or to get all the items from the table in an array and then separate them out that way? I realize I'm not explaining my question that well, so here is an example: I want to pull records on posts and display them in categories based on when they were posted, say within one year, within one month, one week, etc. The nature of the categories results in lower level categories being entirely contained within upper level ones. Should I do a SQL find with different conditions for each category, resulting in multiple calls to the database, or should I do one search, returning all of the items and then sort them out from the array? Thanks for your responses, sorry I'm new at this.
Typically I would say that you are going to get better performance by letting your database engine do the sorting work. Each database engine has this functionality and typically it can do it faster than you can. So I would vote to use the database to get your multiple groups rather than trying to do it yourself in memory.
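For example, a single query can do the bucketing server-side in one pass (the schema and column names here are assumptions for illustration; the nested categories are labeled with the narrowest bucket each row falls into, MySQL syntax):

```sql
-- One scan of the table: the engine labels each post with its recency bucket
SELECT id,
       title,
       CASE
           WHEN created_at >= NOW() - INTERVAL 7 DAY   THEN 'past week'
           WHEN created_at >= NOW() - INTERVAL 1 MONTH THEN 'past month'
           WHEN created_at >= NOW() - INTERVAL 1 YEAR  THEN 'past year'
           ELSE 'older'
       END AS bucket
FROM posts
ORDER BY created_at DESC;
```

The application then only has to group rows by the `bucket` column instead of re-filtering the whole result set several times.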
I typically perform one large SQL query and then break the array up in Ruby to minimize the number or duration of database connections. This isn't necessarily any faster, and I have never benchmarked it, but fewer reads to the db hopefully means it will scale further.
Multiple SQL searches vs searching through one returned array
[ "", "sql", "mysql", "ruby-on-rails", "database", "performance", "" ]
I believe javascript can be anywhere (almost), but I almost always see it in between `<head></head>`. I am using jquery and wanted to know if it has to be in the head tags for some reason or if will break something if I move it. Thank you. EDIT: Why is it almost always in the head tags?
JavaScript is executed wherever it is found in the document. If you place inline JavaScript in the body, it will be executed when the browser comes to it. If you're using `$(document).ready(...)` to execute things, then the positioning shouldn't matter. Otherwise, you may find corner cases where it matters. In general, it does not matter. Scripts end up in the `head` tag mostly out of tradition.
No, it can be anywhere. In fact, it’s sometimes a good idea to put it at the bottom of the document. For an explanation why, see <http://developer.yahoo.com/performance/rules.html#js_bottom>.
Does javascript have to be in the head tags?
[ "", "javascript", "" ]
I am using inno setup to make a installation package for my application, and my application is written by C# + .Net 2.0 + VSTS 2008. Inno setup => <http://www.jrsoftware.org/isinfo.php> and I install my application under Program Files/Foo folder (Foo is my application name). My application is targeting to Windows Vista. The issue I found is my program cannot write to the folder Program Files/Foo. And I need the permission of write to this folder in order to save some configuration files. The strange thing I notice is the folder Program Files/Foo is marked as readonly and I have checked all folders under Program Files are marked with read only, such as Office. My questions are, 1. Why all folders are marked as read only under Program Files? It means we should not write to individual application folders under Program Files? If not, where should we write information to disk like user last selected configuration information of an individual application? 2. If we could write to individual application folders under Program Files, what is the solution? I do not want my application to Run As administrator to solve this issue, and if there are solution to write to this folder, I want to require minimal permission if possible.
You should write user-specific config data to the Application Data folder for the current user, using the [special folders enum](http://msdn.microsoft.com/en-us/library/system.environment.specialfolder(VS.100).aspx) and [Environment.GetFolderPath](http://msdn.microsoft.com/en-us/library/14tx8hby(VS.100).aspx).
Best practice is to not store config data in the Program Files folder. Instead, store your application's data in %AppData%\YourApplicationName. Depending on whether you want to store your config data per-user or in a shared common folder, use one of the following enums to get the folder path:

```
string userAppData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
string commonAppData = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
```

By default, Vista users *do not* run programs as Administrators and hence those programs have **only read access** to the folders under "Program Files". Users can change this behavior by disabling UAC and you *could* ask your users to do that, but in an office setting users might not have that option. That's why you use AppData instead -- applications can always read *and* write data to the AppData folder.

Information on UAC can be found at Microsoft's site. Although this page is fairly long, it's a starting point for understanding UAC:

<http://msdn.microsoft.com/en-us/library/bb530410.aspx>
file write permission issue under "Program Files" folder
[ "", "c#", ".net", "permissions", "" ]
**Edit**: What I really need to know is if there is any javascript event that will reliably fire when the user arrives at the page via the back button. I tried the `onload` event for the `body` element, but it doesn't fire on Firefox or Safari. --- I'm working with some old code that tries to prevent double-submission of form data by disabling all form submission buttons as soon as the user clicks one (by listening to the form's onSubmit event). This code is applied to every form in the app, regardless of whether double-submission would even be a problem. The problem is that if the user hits the back button in Firefox or Safari, all the submit buttons are still disabled when he gets to the page. On Firefox, they are even disabled after a refresh (only a Ctrl+F5 refresh will do it)! I've tried listening to `body`'s `onLoad` event, but FF and Safari don't fire that when you get to the page via back button. (Interestingly, it is fired on a soft refresh on FF, even though the button stays disabled.) --- Here's a very simple HTML doc that will show what I mean: ``` <html><body> <form name="theForm" id="theForm" action="test2.html" onSubmit="document.theForm.theButton.disabled = true;"> <input id="theButton" type="submit" name="theButton" value="Click me!" /> </form> </body></html> ``` Note: I've tested on WinXP with IE8, FF3.5, Safari 4, Opera 9, and Chrome 2.0. Only Safari and FF leave the button disabled when you use back to get to them.
I don't think you can test for the back button specifically, but I believe you can check the browser's history position with Javascript.
Isn't that the behaviour you want, so they don't double-submit using the back button? Anyway, you could set a cookie on the following page, and detect that on load on the page with the form.
Can I detect when a user gets to a page using the back button?
[ "", "javascript", "cross-browser", "" ]
I have some code like this:

```
Enumeration parameterEnum = request.getParameterNames();
while (parameterEnum.hasMoreElements()) {
}
```

What is the difference if I change it from a `while` statement into an `if` statement?
If you change it to an `if`, the body will either execute once or not at all - a `while` statement will keep executing until `parameterEnum.hasMoreElements()` returns false. (Note that, as written, the loop body is empty and never advances the enumeration, so the condition would never become false.)
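A hedged sketch of the difference outside a servlet, using a plain `Vector` to stand in for the request parameters (and a body that actually consumes elements, unlike the empty loop in the question):

```java
import java.util.Enumeration;
import java.util.Vector;

public class WhileVsIf {
    public static void main(String[] args) {
        Vector<String> params = new Vector<String>();
        params.add("a");
        params.add("b");
        params.add("c");

        // while: the body runs once per element, until hasMoreElements() is false
        Enumeration<String> e1 = params.elements();
        int whileCount = 0;
        while (e1.hasMoreElements()) {
            e1.nextElement();
            whileCount++;
        }
        System.out.println("while ran " + whileCount + " times"); // 3

        // if: the body runs at most once, regardless of how many elements remain
        Enumeration<String> e2 = params.elements();
        int ifCount = 0;
        if (e2.hasMoreElements()) {
            e2.nextElement();
            ifCount++;
        }
        System.out.println("if ran " + ifCount + " time"); // 1
    }
}
```

With three elements, the `while` body runs three times and the `if` body runs once, which is the whole difference.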
If I understand what you are asking, the current code would keep running whatever is in the brackets until there are no elements left. This assumes that what is in the brackets takes elements off. As literally shown above, it is an infinite loop and will never end. If you convert the `while` to an `if`, then what is in the brackets will run only once. If `request.getParameterNames()` returns an "empty" whatever-it-is, then neither case will do anything.
Java: Impact of using 'if' instead of while loop during an iteration
[ "", "java", "" ]
I'm thinking about developing a web app to visualize the agile wall. The reason is that the project I'm working on has multiple distributed teams, so it is very difficult to share the information on the agile wall across the teams. I know some tools like JIRA do have agile wall functionality built in; what I want to have is a dedicated agile wall web app which could potentially integrate with those popular project management systems. Does this idea sound sensible and interesting to you? Please let me know if you have a better idea about sharing the agile wall across distributed teams. Thanks. John
I'm not sure how well the card wall translates to a small screen. I've seen one similar implementation of something like this ([www.cardmeeting.com](http://www.cardmeeting.com)) that I was not impressed with. One of the drawbacks to it is that the cards are unreadable until you click on them. That being the case, the tools used by most commercial vendors to capture stories at least have the advantage that you can immediately see the titles on the stories even if they don't implement the wall format. Another concern that I have is trying to take a passive information radiator and insert it into an active presentation medium. The wall format works partly (or even mainly, I haven't really looked at the research closely) because it's highly visible but not intrusive. Whenever you see it you get an immediate snapshot of the current state of the project. Translated into a web browser, you lose this aspect. It's not clear to me that in an active medium, where users need to navigate to information rather than simply absorb it, the wall is still the right tool.
Have you looked at [Mingle](http://studios.thoughtworks.com/mingle-agile-project-management) from ThoughtWorks? I haven't looked at it recently, but I'd expect it to be open to integration with other systems. Even if you end up wanting to write your own, you should look at Mingle to see what a similar system looks like.
An idea about developing an agile wall web tool
[ "", "java", "agile", "" ]
I am using the String split method and I want to have the last element. The size of the array can change.

**Example:**

```
String one = "Düsseldorf - Zentrum - Günnewig Uebachs";
String two = "Düsseldorf - Madison";
```

I want to split the above Strings and get the last item:

```
lastone = one.split("-")[here the last item] // <- how?
lasttwo = two.split("-")[here the last item] // <- how?
```

I don't know the sizes of the arrays at runtime :(
Save the array in a local variable and use the array's `length` field to find its length. Subtract one to account for it being 0-based: ``` String[] bits = one.split("-"); String lastOne = bits[bits.length-1]; ``` Caveat emptor: if the original string is composed of only the separator, for example `"-"` or `"---"`, `bits.length` will be 0 and this will throw an ArrayIndexOutOfBoundsException. Example: <https://onlinegdb.com/r1M-TJkZ8>
You could use the `lastIndexOf()` method on `String`:

```
String last = string.substring(string.lastIndexOf('-') + 1);
```
Java: Get last element after split
[ "", "java", "string", "split", "" ]
I'm tinkering with a domain name finder and want to favour those words which are easy to pronounce. Example: nameoic.com (bad) versus namelet.com (good). Was thinking something to do with soundex may be appropriate but it doesn't look like I can use them to produce some sort of comparative score. PHP code for the win.
Here is a function which should work with the most common of words... It should give you a nice result between 1 (perfect pronounceability according to the rules) to 0. The following function far from perfect (it doesn't quite like words like Tsunami [0.857]). But it should be fairly easy to tweak for your needs. ``` <?php // Score: 1 echo pronounceability('namelet') . "\n"; // Score: 0.71428571428571 echo pronounceability('nameoic') . "\n"; function pronounceability($word) { static $vowels = array ( 'a', 'e', 'i', 'o', 'u', 'y' ); static $composites = array ( 'mm', 'll', 'th', 'ing' ); if (!is_string($word)) return false; // Remove non letters and put in lowercase $word = preg_replace('/[^a-z]/i', '', $word); $word = strtolower($word); // Special case if ($word == 'a') return 1; $len = strlen($word); // Let's not parse an empty string if ($len == 0) return 0; $score = 0; $pos = 0; while ($pos < $len) { // Check if is allowed composites foreach ($composites as $comp) { $complen = strlen($comp); if (($pos + $complen) < $len) { $check = substr($word, $pos, $complen); if ($check == $comp) { $score += $complen; $pos += $complen; continue 2; } } } // Is it a vowel? If so, check if previous wasn't a vowel too. if (in_array($word[$pos], $vowels)) { if (($pos - 1) >= 0 && !in_array($word[$pos - 1], $vowels)) { $score += 1; $pos += 1; continue; } } else { // Not a vowel, check if next one is, or if is end of word if (($pos + 1) < $len && in_array($word[$pos + 1], $vowels)) { $score += 2; $pos += 2; continue; } elseif (($pos + 1) == $len) { $score += 1; break; } } $pos += 1; } return $score / $len; } ```
I think the problem could be boiled down to parsing the word into a candidate set of [phonemes](http://en.wikipedia.org/wiki/Phoneme), then using a predetermined list of phoneme pairs to determine how pronouncible the word is. For example: "skill" phonetically is "/s/k/i/l/". "/s/k/", "/k/i/", "/i/l/" should all have high scores of pronouncibility, so the word should score highly. "skpit" phonetically is "/s/k/p/i/t/". "/k/p/" should have a low pronouncibility score, so the word should score low.
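That pair-scoring idea can be prototyped in a few lines. The sketch below (Python for brevity) fakes the phoneme split with letter bigrams, and the scores in the table are invented purely for illustration — a real version would need actual phonetic data:

```python
# Invented bigram scores -- illustrative only, not real phonetic data.
PAIR_SCORES = {'sk': 0.8, 'ki': 0.9, 'il': 0.9, 'll': 0.9,
               'kp': 0.1, 'pi': 0.9, 'it': 0.9}

def pronounceability(word, default=0.3):
    """Average the scores of adjacent letter pairs as a crude proxy
    for scoring phoneme pairs."""
    pairs = [word[i:i + 2] for i in range(len(word) - 1)]
    if not pairs:
        return default
    return sum(PAIR_SCORES.get(p, default) for p in pairs) / len(pairs)
```

With this table, "skill" averages 0.875 while "skpit" averages 0.675, matching the intuition that "/k/p/" should drag the score down.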
Measure the pronounceability of a word?
[ "", "php", "domain-name", "" ]
How do I get today's date one year ago in C#?
Today's date one year ago would be

```
DateTime lastYear = DateTime.Today.AddYears(-1);
```
What do you mean by "last year's date"? If you just want the date of today minus one year, try the following:

```
DateTime myDateTime = DateTime.Now.AddYears(-1);
```

I hope that is what you need.

**UPDATE:** Damn, I'm way too slow it seems :(
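Outside .NET the only wrinkle in "today minus one year" is Feb 29, which `AddYears` handles for you by falling back to the last valid day. For comparison, a sketch of the same operation in Python, with the leap-day clamp made explicit:

```python
from datetime import date

def one_year_ago(d):
    try:
        return d.replace(year=d.year - 1)
    except ValueError:
        # Feb 29 has no counterpart in the previous (non-leap) year;
        # clamp to Feb 28, mirroring what DateTime.AddYears does.
        return d.replace(year=d.year - 1, day=28)
```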
How do I get today's date one year ago in C#?
[ "", "c#", "datetime", "" ]
I have a PHP page which has a chart, a date picker (calendar) and a few buttons. I want to add another button, "Print Chart", which ONLY prints the chart and not the entire page, on a local printer. I am trying to do this by having another script (which only outputs a chart) and using the JavaScript function `window.print`.

html

```
<input type="button" onClick="printChart()" value="Print Chart">
```

javascript

```
function printChart(){
    var myParameters = window.location.search; // Get the parameters from the current page
    var URL = "http://my_server/my_folder/my_script_that_outputs_only_my_chart.php"+myParameters;
    var W = window.open(URL);
    W.window.print(); // Is this the right syntax? This prints a blank page and not the above URL
}
```

I tried the above code - it doesn't work. A blank page gets printed. Is there a way to print a target URL? If yes, is it possible to print it without having to open a new window?

Thanks for your time
You could use a print stylesheet... ``` <link rel="stylesheet" type="text/css" media="print" href="print.css" /> ``` ...in addition to your normal style sheet. In print.css, just make everything "display: none;" except what you want to print.
Try removing the W.window.print() call and adding a ``` <body onload="window.print();"> ... </body> ``` to your php document to make sure your document is ready before it prints, and just have that page print itself. As for printing from that page by itself, adding a ``` <input type="button" value="Print" onclick="window.print();" /> ``` should work.
Printing the contents of a URL - javascript
[ "", "javascript", "url", "printing", "" ]
I am trying to display all the names from the table `vocabulary` where the `vid`s do not match a `vid` in `collapse_menu`. How would I do this? ``` Table vocabulary vid name 1 Sections 2 Posts 6 Forums 5 Departments 13 Free Tags 8 Committees 9 Training and Workshops 10 Policies 12 Projects 14 Teams Table collapse_menu vid 8 5 10 ```
```
select name from vocabulary where vid not in (select vid from collapse_menu)
```
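The sample data makes the `NOT IN` query easy to sanity-check, for instance with Python's built-in sqlite3 used purely as a scratchpad:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE vocabulary (vid INTEGER, name TEXT);
    CREATE TABLE collapse_menu (vid INTEGER);
    INSERT INTO vocabulary VALUES
        (1,'Sections'),(2,'Posts'),(6,'Forums'),(5,'Departments'),
        (13,'Free Tags'),(8,'Committees'),(9,'Training and Workshops'),
        (10,'Policies'),(12,'Projects'),(14,'Teams');
    INSERT INTO collapse_menu VALUES (8),(5),(10);
""")
names = [row[0] for row in conn.execute(
    "SELECT name FROM vocabulary"
    " WHERE vid NOT IN (SELECT vid FROM collapse_menu)")]
```

The result leaves out Departments, Committees and Policies — exactly the three vids present in collapse_menu.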
I assume that you are asking for those names in vocabulary where the vid is not in the collapse_menu table. In which case:

```
SELECT name
FROM vocabulary
LEFT JOIN collapse_menu ON vocabulary.vid = collapse_menu.vid
WHERE collapse_menu.vid IS NULL
```
How do I select the rows of a table that don't match another table in SQL?
[ "", "sql", "" ]
I have a problem with setting cookies in PHP. I have to say that I'm not very experienced with PHP, so maybe this is a very stupid problem. I have an ajax rating system that should check a cookie to see if the photo has already been voted on. The page called with ajax checks for the cookie, adds the id of the photo you are voting for to it and calls this function:

```
setcookie("Name", $cookie, time()+(60*24*365), "/", $_SERVER['HTTP_HOST'], 0);
```

The page that displays the photo also reads the cookie

```
$cookie = $_COOKIE['Name'];
```

and checks to see if you have already voted. A problem may be the fact that the ajax page is in a different directory than the page that displays the photo. The page that displays the photo is in the root directory; the page that casts the vote is in /ajax/vote.php. The voting system works; before, I was checking the IPs, but now I need to check the cookies. It works in Firefox without any problem, but when I started testing on IE and Safari it seemed they didn't see the cookie. I checked with IECookieViewer, and when I cast a vote the cookie is created all right, but when I go back to the page, it looks like the page doesn't find the cookie. Also, if I cast another vote the cookie is replaced with a new one. Sorry for the bad English, I hope the problem is understandable. P.S. I forgot to point out something that might be related to the problem: the page is inside an iframe.
Check the cookie settings of the other browsers and whether they're set to block all cookies or empty them on exit. If the cookies work in one browser but not another, you will need to make sure that the other browser is letting you set cookies in the first place. Sometimes it will look like you can create the cookie, but then it will disappear or be deleted with each page reload.

### Cookies from an iframe

It's also possible that, because you're setting the cookies in an iframe, the browsers may view them as third-party cookies and reject them unless the browser preferences are explicitly set to [allow third-party cookies](http://viralpatel.net/blogs/2008/12/how-to-set-third-party-cookies-with-iframe.html). In that case you would need a compact privacy policy (or [a compact P3P header](https://stackoverflow.com/questions/98127/accessing-domain-cookies-within-an-iframe-on-internet-explorer)) on the pages you're trying to set the cookies from. For PHP, you would add this as your header for the page setting the cookie:

```
header('P3P:CP="IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT"');
```
I've had a similar problem in the past where cookies would be set by firefox but not IE if I was working with a local network server (192.168.0.102 or something like that) but when I ported it all over to the public server with a domain name it worked fine. IE might have some kind of strange rules about cookies and domain names. Not sure if this helps in your case.
PHP cookies problem, works in Firefox not in other browser
[ "", "php", "internet-explorer", "firefox", "cookies", "" ]
I became a professional programmer in the era of object-oriented code, and have years of experience programming in C++. I often work on large projects that have years of legacy code in a mix of C++ and C. I feel less comfortable working on the pure C parts of systems. From programming in C++ I understand all the C syntax, but there's a hole in my knowledge about how to organise a complex C program without objects, and what constitutes best practice for managing memory, that I would like to fill. I learnt C++ after working as a Java programmer, and think a bit more C would make me a better C++ programmer, and a bit less of a Java-translated-into-C++ programmer.
In terms of organization, looking at the [POSIX APIs](http://opengroup.org/onlinepubs/007908799/), especially [pthreads](http://opengroup.org/onlinepubs/007908799/xsh/pthread.h.html) will give you a good idea of how to organize C code. The basic rules of good C project organization are: * Don't expose your structures. Use opaque types only. * Use the library and data type names as prefixes for function names. * Provide "create" and "destroy" functions for allocation/construction and destruction/deallocation. * Pass in the opaque type as the first parameter to functions operating on that type. * Implement the C APIs using either C or C++ (it's up to you). Obviously, use non-opaque types there.
I was in basically the same boat as you (albeit with less experience, and I started with Python rather than Java), and what worked best for me was sitting down and reading Kernighan and Ritchie. You'll be able to skim the first half of the book since you're comfortable with the syntax, but you'll definitely walk away with a better understanding of low-level memory management. Reading the parts of the Linux kernel source that have to do with memory management also helps but is not for the faint of heart or the easily bored.
Are there any tutorials to help a proficient C++ programmer learn C?
[ "", "c++", "c", "" ]
I have written some code that uses attributes of an object: ``` class Foo: def __init__(self): self.bar = "baz" myFoo = Foo() print (myFoo.bar) ``` Now I want to do some fancy calculation to return `bar`. I could use `@property` to make methods act as the attribute `bar`, or I could refactor my code to use `myFoo.bar()`. Should I go back and add parens to all my `bar` accesses or use `@property`? Assume my code base is small now but due to entropy it will increase.
If it's logically a property/attribute of the object, I'd say keep it as a property. If it's likely to become parametrised, by which I mean you may want to invoke `myFoo.bar(someArgs)` then bite the bullet now and make it a method. Under most circumstances, performance is unlikely to be an issue.
Wondering about performance is needless when it's so easy to measure it: ``` $ python -mtimeit -s'class X(object): > @property > def y(self): return 23 > x=X()' 'x.y' 1000000 loops, best of 3: 0.685 usec per loop $ python -mtimeit -s'class X(object): def y(self): return 23 x=X()' 'x.y()' 1000000 loops, best of 3: 0.447 usec per loop $ ``` (on my slow laptop -- if you wonder why the 2nd case doesn't have secondary shell prompts, it's because I built it from the first with an up-arrow in bash, and that repeats the linebreak structure but not the prompts!-). So unless you're in a case where you know 200+ nanoseconds or so will matter (a tight inner loop you're trying to optimize to the utmost), you can afford to use the property approach; if you do some computations to get the value, the 200+ nanoseconds will of course become a smaller fraction of the total time taken. I do agree with other answers that if the computations become too heavy, or you may ever want parameters, etc, a method is preferable -- similarly, I'll add, if you ever need to stash the callable somewhere but only call it later, and other fancy functional programming tricks; but I wanted to make the performance point quantitatively, since `timeit` makes such measurements so easy!-)
Python @property versus method performance - which one to use?
[ "", "python", "performance", "properties", "" ]
Is there a standard way to do this? I realize this can be somewhat platform dependent. Our product right now is only supported on Windows - so I suppose that's what I'm interested in right now. The only things I can think of are to either scan the registry or crawl the file system. Scanning the file system seems like it can take a really long time - and the registry can be unreliable. Should I do both? Any other suggestions? I tried to look for an API to do this with no luck.
I would firstly start by looking for the JAVA\_HOME environment variable (and possibly JDK\_HOME, although that's far less common) and then determining what version that is and whether it's a JDK or JRE. After that, check for common install locations. Find out the system's program files directory (don't just assume it's C:\Program Files even though it is 99.5% of the time) and look for common install locations under that (e.g. Java). I wouldn't do an exhaustive search. It's worth asking: do you really need to find JDKs this way? Can't you just ask the user what JDK he or she wishes to use, possibly suggesting any easy ones you've found already?
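A rough sketch of that search order (Python; the directory patterns are typical Windows defaults and purely illustrative — on another machine the list may simply come back empty):

```python
import glob
import os

def candidate_java_homes():
    candidates = []
    # 1. Environment variables first
    for var in ('JAVA_HOME', 'JDK_HOME'):
        value = os.environ.get(var)
        if value:
            candidates.append(value)
    # 2. Then common install locations under Program Files
    program_files = os.environ.get('ProgramFiles', r'C:\Program Files')
    candidates.extend(glob.glob(os.path.join(program_files, 'Java', '*')))
    # A JDK ships javac; a bare JRE does not.
    return [(path, os.path.isfile(os.path.join(path, 'bin', 'javac.exe')))
            for path in candidates]
```

Each entry pairs a candidate directory with a boolean saying whether it looks like a full JDK rather than just a JRE.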
``` System.out.println(System.getProperty("java.version")); ``` Other properties [here](http://docs.oracle.com/javase/7/docs/api/java/lang/System.html#getProperties%28%29)
Programmatically determine what JDK/JRE's are installed on my box
[ "", "java", "windows", "installation", "" ]
Example: ``` $arr = array(1 => 'Foo', 5 => 'Bar', 6 => 'Foobar'); /*... do some function so $arr now equals: array(0 => 'Foo', 1 => 'Bar', 2 => 'Foobar'); */ ```
Use `array_values($arr)`. That will return a regular array of all the values (indexed numerically). [PHP docs for array\_values](http://php.net/array_values)
``` array_values($arr); ```
Built-in PHP function to reset the indexes of an array?
[ "", "php", "arrays", "" ]
This was working fine yesterday with no changes to the code. ``` echo date("M", strtotime("-3 month", time()) ); echo date("M", strtotime("-2 month", time()) ); echo date("M", strtotime("-1 month", time()) ); echo date("M", time()); ``` The output it was producing yesterday was as you would expect- i.e. Apr, May, Jun, Jul Today it echoes May May Jul Jul Any ideas? Thanks in advance.
It might be related to bug [#44073](http://bugs.php.net/bug.php?id=44073) You could try with something like this : ``` echo date("M", strtotime("-3 month", strtotime(date("F") . "1")) ) . "\n"; echo date("M", strtotime("-2 month", strtotime(date("F") . "1")) ) . "\n"; echo date("M", strtotime("-1 month", strtotime(date("F") . "1")) ) . "\n"; echo date("M", time()) . "\n"; ``` *(Solution found in the comments section of [`strtotime`](http://php.net/strtotime) ; [direct link](https://www.php.net/manual/en/function.strtotime.php#83558))* And the output : ``` Apr May Jun Jul ``` *Kind of "cheating" with the date format and month's name and all that...*
Gorden correctly identified the issue, but I wanted to give another solution that is helpful and not as technical. Just use "first day of" or "last day of" in your strtotime. For example, the following examples overcome the issue on the 31st of a month:

```
// Today is May 31st

// All the following return 2012-04-30
echo date('Y-m-d', strtotime("last day of -1 month"));
echo date('Y-m-d', strtotime("last day of last month"));
echo date_create("last day of -1 month")->format('Y-m-d');

// All the following return 2012-04-01
echo date('Y-m-d', strtotime("first day of -1 month"));
echo date('Y-m-d', strtotime("first day of last month"));
echo date_create("first day of -1 month")->format('Y-m-d');
```
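The end-of-month rollover both answers work around is not PHP-specific: any date library that adds calendar months has to decide what "May 31st minus one month" means. A clamping helper, sketched in Python for illustration:

```python
import calendar
from datetime import date

def months_ago(d, n):
    # Step back n calendar months, clamping the day-of-month so that
    # May 31st maps to April 30th instead of rolling over into May.
    total = d.month - 1 - n
    year = d.year + total // 12
    month = total % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    return d.replace(year=year, month=month, day=min(d.day, last_day))
```

This is effectively what the "first day of" / "last day of" strtotime phrases achieve: anchoring the arithmetic so the day-of-month can never overflow.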
PHP Strtotime -1month -2month
[ "", "php", "strtotime", "" ]
How can I cast

```
from ObservableCollection<TabItem> into ObservableCollection<object>
```

This doesn't work for me:

```
(ObservableCollection<object>)myTabItemObservableCollection
```
You should copy it like this:

```
return new ObservableCollection<object>(myTabItemObservableCollection);
```
Basically, you can't. Not now, and [not in .NET 4.0](http://marcgravell.blogspot.com/2009/02/what-c-40-covariance-doesn-do.html). What is the context here? What do you need? LINQ has `Cast<T>` which can get you the data as a *sequence*, or there are some tricks with generic methods (i.e. `Foo<T>(ObservableCollection<T> col)` etc). Or you can just use the non-generic `IList`?

```
IList untyped = myTypedCollection;
untyped.Add(someRandomObject); // hope it works...
```
How can I cast into an ObservableCollection<object>
[ "", "c#", "observablecollection", "covariance", "" ]
Has anyone done any development of Compass for CSS/SASS in a standard C# ASP.NET environment? Is there a single distribution I can just download that's ready to go for Windows, or do I need to install every piece of the equation and build Compass myself? Are there any plugins that make developing with Compass friendlier with VS2008, such as automagical handling of Compass/SASS in builds, syntax highlighting, and/or IntelliSense support? If there aren't any VS IDE plugins, what are the best options for a standalone text editor for handling coding in Compass?
To complete the last answers, you can install [**Web Workbench**](http://www.mindscapehq.com/products/web-workbench), a plugin for Visual Studio 2010 which adds **syntax highlighting**, **IntelliSense** and some other stuff for the SASS language (SCSS syntax only). If you prefer using Compass and/or some other tools to compile your CSS, you should disable the built-in compiler. I listed some other SASS compilers here: [Using SASS with ASP.NET](https://stackoverflow.com/questions/796788/using-sass-with-asp-net/8981789#8981789). To disable the built-in compiler: select the .scss file in Solution Explorer, go to the Properties window and delete the text from the Custom Tool box. Since Web Workbench 3 you can more easily manage what you want to compile with this plugin. See the Mindscape > Web Workbench Settings menu item.
Getting started with Compass: first, yes, I had to install Ruby and the Compass source and compile up my own version of Compass. I followed the instructions on Compass's wiki: [Getting Started](http://wiki.github.com/chriseppstein/compass/getting-started). After getting Compass and all its dependencies installed and built, I created my first project:

```
compass -f blueprint project-name
```

which creates a default project with Compass for the Blueprint CSS framework. Currently there's a bug in Compass with the creation of the grid.png file in the images directory, so you need to copy the original grid.png from the source folder

```
C:\Ruby\lib\ruby\gems\1.8\gems\chriseppstein-compass-0.8.10
\frameworks\blueprint\templates\project
```

or a similarly located file, depending on where you installed everything. One of the most important changes IMO for working with Compass on ASP.NET is to change the SASS cache directive of Compass. The SASS cache creates a bunch of temporary folders in your project directory, which would probably have poor outcomes if they ended up under source control. So open up config.rb and add this line:

```
sass_options = {:cache_location => "#{Compass.configuration.project_path}\\tmp\\sass-cache"}
```

Make sure to note the escaped backslashes. After this I modified the names of the folders that Compass uses for how I wanted them named inside the config.rb and started getting to it with SASS and Compass. I recommend watching the hour-long introduction to Compass video; it's very helpful and I learned a lot from it: [Watch the screen cast](http://wiki.github.com/chriseppstein/compass). One of the things this showed me was how to set Compass to watch for file system changes and automagically compile the SASS to CSS, by using

```
compass -w
```

This is working really well for me; just make sure you keep your CSS files checked out, or turn off read-only if they're under source control and your project doesn't support concurrent checkouts.
For editing, I'm using SciTE (which is included with Ruby by default) for the config.rb files, or just the editor window in VS2008. For SASS I came across a big list on the [HAML website](http://groups.google.com/group/haml/web/syntax-highlighting). jEdit with a syntax highlighting file for SASS was what I ended up using after trying a few. I'd still like to find a VS plugin for syntax highlighting so I don't need to use another editor, but jEdit is definitely getting the job done.
Using Compass on Windows with Visual Studio C# and ASP.NET
[ "", "c#", "asp.net", "windows", "visual-studio", "compass-sass", "" ]
I want to call a process via a Python program; however, this process needs some specific environment variables that are set by another process. How can I get the first process's environment variables to pass them to the second? This is what the program looks like:

```
import subprocess

subprocess.call(['proc1']) # this sets env. variables for proc2
subprocess.call(['proc2']) # this must have the env. variables set by proc1 to work
```

but the two processes don't share the same environment. Note that these programs aren't mine (the first is a big and ugly .bat file and the second a proprietary soft), so I can't modify them (ok, I can extract all that I need from the .bat, but it's very cumbersome). N.B.: I am using Windows, but I prefer a cross-platform solution (but my problem wouldn't happen on a Unix-like ...)
Since you're apparently in Windows, you need a Windows answer. Create a wrapper batch file, e.g. "run_program.bat", and run both programs:

```
@echo off
call proc1.bat
proc2
```

The script will run and set its environment variables. Both scripts run in the same interpreter (cmd.exe instance), so the variables proc1.bat sets *will* be set when proc2 is executed. Not terribly pretty, but it'll work. (Unix people, you can do the same thing in a bash script: "source file.sh".)
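The same trick can be driven from Python without a hand-written wrapper: run the script in a shell, dump the resulting environment, and feed it to the second process. A POSIX-flavoured sketch (on Windows you would swap in `cmd.exe /c` and a `.bat` file; the newline-based parsing assumes no values contain newlines):

```python
import subprocess

def env_after(script_path):
    """Source a shell script, then capture the environment it produced."""
    out = subprocess.check_output(['sh', '-c', '. "$1"; env', 'sh', script_path])
    env = {}
    for line in out.decode().splitlines():
        if '=' in line:
            key, _, value = line.partition('=')
            env[key] = value
    return env

# The captured environment can then be handed to the second process:
# subprocess.call(['proc2'], env=env_after('setup.sh'))
```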
Here's an example of how you can extract environment variables from a batch or cmd file without creating a wrapper script. Enjoy. ``` from __future__ import print_function import sys import subprocess import itertools def validate_pair(ob): try: if not (len(ob) == 2): print("Unexpected result:", ob, file=sys.stderr) raise ValueError except: return False return True def consume(iter): try: while True: next(iter) except StopIteration: pass def get_environment_from_batch_command(env_cmd, initial=None): """ Take a command (either a single command or list of arguments) and return the environment created after running that command. Note that if the command must be a batch file or .cmd file, or the changes to the environment will not be captured. If initial is supplied, it is used as the initial environment passed to the child process. """ if not isinstance(env_cmd, (list, tuple)): env_cmd = [env_cmd] # construct the command that will alter the environment env_cmd = subprocess.list2cmdline(env_cmd) # create a tag so we can tell in the output when the proc is done tag = 'Done running command' # construct a cmd.exe command to do accomplish this cmd = 'cmd.exe /s /c "{env_cmd} && echo "{tag}" && set"'.format(**vars()) # launch the process proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=initial) # parse the output sent to stdout lines = proc.stdout # consume whatever output occurs until the tag is reached consume(itertools.takewhile(lambda l: tag not in l, lines)) # define a way to handle each KEY=VALUE line handle_line = lambda l: l.rstrip().split('=',1) # parse key/values into pairs pairs = map(handle_line, lines) # make sure the pairs are valid valid_pairs = filter(validate_pair, pairs) # construct a dictionary of the pairs result = dict(valid_pairs) # let the process finish proc.communicate() return result ``` So to answer your question, you would create a .py file that does the following: ``` env = get_environment_from_batch_command('proc1') 
subprocess.Popen('proc2', env=env) ```
How to get environment from a subprocess?
[ "", "python", "windows", "subprocess", "environment-variables", "popen", "" ]
My Django site recently started throwing errors from my caching code and I can't figure out why... I call: ``` from django.core.cache import cache cache.set('blogentry', some_value) ``` And the error thrown by Django is: ``` TransactionManagementError: This code isn't under transaction management ``` But looking at the PostgreSQL database logs, it seems to stem from this error: ``` STATEMENT: INSERT INTO cache_table (cache_key, value, expires) VALUES (E'blogentry', E'pickled_version_of_some_value', E'2009-07-27 11:10:26') ERROR: duplicate key value violates unique constraint "cache_table_pkey" ``` For the life of me I can't figure out why Django is trying to do an INSERT instead of an UPDATE. Any thoughts?
That's a typical race. It checks if the key you inserted exists; if it doesn't, it does an insert, but someone else can insert the key between the count and the insert. Transactions don't prevent this. The code appears to expect this and to try to deal with it, but when I looked at the code to handle this case I could see immediately that it was broken. Reported here: <http://code.djangoproject.com/ticket/11569> I'd strongly recommend sticking to the memcache backend.
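The window is easy to reproduce deterministically by injecting a hook where the second request would sneak in (sqlite3 standing in for PostgreSQL here; the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE cache (cache_key TEXT PRIMARY KEY, value TEXT)')

def racy_cache_set(key, value, between_check_and_insert=None):
    # The check...
    exists = conn.execute(
        'SELECT 1 FROM cache WHERE cache_key = ?', (key,)).fetchone()
    if between_check_and_insert:
        between_check_and_insert()  # ...another request can run right here...
    # ...then the decision based on a now-stale check.
    if exists:
        conn.execute('UPDATE cache SET value = ? WHERE cache_key = ?',
                     (value, key))
    else:
        conn.execute('INSERT INTO cache VALUES (?, ?)', (key, value))
```

A second writer landing in that gap turns the INSERT into a duplicate-key error, which is exactly the failure showing up in the PostgreSQL log.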
The code in core/cache/backend/db.py reads in part:

```
cursor.execute("SELECT cache_key, expires FROM %s WHERE cache_key = %%s" % self._table, [key])
try:
    result = cursor.fetchone()
    if result and (mode == 'set' or (mode == 'add' and result[1] < now)):
        cursor.execute("UPDATE %s SET value = %%s, expires = %%s WHERE cache_key = %%s" % self._table,
                       [encoded, str(exp), key])
    else:
        cursor.execute("INSERT INTO %s (cache_key, value, expires) VALUES (%%s, %%s, %%s)" % self._table,
                       [key, encoded, str(exp)])
```

So I'd say that you are doing the INSERT INTO instead of the UPDATE because *result* evaluates to false. For some reason, cursor.fetchone() returns 0 rows when there is actually one there. If you can't break in a debugger here, I'd put trace statements into the source to confirm that this is actually happening.
Django cache.set() causing duplicate key error
[ "", "python", "django", "postgresql", "caching", "" ]
You've been excellent with my other questions previously - so here I am again, in need of some help! I've got a query which joins three tables and a strongly typed dataset which has the columns defined for everything which comes back from the query. When I go to fill the dataadapter, nothing gets filled. I've copied the code from another method, so I presume it's okay - the only difference is that this query has joins. Any help appreciated, code follows: **Query:** `select gsh.locid, locations.description, GSH.workorder, GSH.comstatus, GSH.teststatus, GSH.fireresult, GSH.lightresult, GSH.watercold, GSH.waterhot, GSH.responsedate, GSH.comments, GSH.testername from gsh_vhs_locations locs left outer join locations on locs.maximoloc = locations.location left outer join gsh_vhs_comms GSH on locs.LOCID = GSH.locid where gsh.insertdate > sysdate-7 order by locid, locations.description, GSH.workorder, GSH.comstatus, GSH.teststatus, GSH.fireresult, GSH.lightresult, GSH.watercold, GSH.waterhot, GSH.responsedate, GSH.comments, GSH.testername` **Code:**

```
ResponseSheet Tests = new ResponseSheet();
DataSet ReturData = new DataSet();
OracleDataAdapter da;
try
{
    using (OracleConnection conn = new OracleConnection(ConnString))
    {
        conn.Open();
        OracleCommand cmd = new OracleCommand();
        cmd.Connection = conn;
        cmd.CommandText = @"select gsh.locid, locations.description, GSH.workorder, GSH.comstatus, GSH.teststatus, GSH.fireresult, GSH.lightresult, GSH.watercold, GSH.waterhot, GSH.responsedate, GSH.comments, GSH.testername from gsh_vhs_locations locs left outer join locations on locs.maximoloc = locations.location left outer join gsh_vhs_comms GSH on locs.LOCID = GSH.locid where gsh.insertdate > sysdate-7 order by locid, locations.description, GSH.workorder, GSH.comstatus, GSH.teststatus, GSH.fireresult, GSH.lightresult, GSH.watercold, GSH.waterhot, GSH.responsedate, GSH.comments, GSH.testername ";
        da = new OracleDataAdapter(cmd.CommandText, conn);
        da.MissingMappingAction = MissingMappingAction.Error;
        da.TableMappings.Add("Table", "ResponseSheet");
        da.Fill(ReturData, "ResponseSheet");
    }
}
catch (Exception ex)
{
    Console.WriteLine(TimeStamp() + ex.Message.ToString() + "Get Capture Report (TraceCode: 00019)");
}
return ReturData;
}
```

As you can see, I've turned the error reporting for table mappings on, but I get no errors at run time, just an empty dataset (da = null) Anything you can help with guys, just poke random google phrases at me if needs be - thanks :) Gareth
Okay guys, I found the issue. I'm very sorry, but for some reason my TNSnames wasn't set up for the new dev database (I literally changed it that afternoon and had missed one letter off), so it was hitting the catch and I wasn't spotting it. I feel very stupid, but thank you all for your help anyway. Henk has tidied up my code considerably, and Beth made me take note that there was actually a problem with the query (the joins aren't right, but it still returns data). Thanks again all, Gareth EDIT: Further to this, the table mappings were out (a night's sleep seems to be the key to solving this one!) so for future reference, anyone: `da.MissingMappingAction = MissingMappingAction.Passthrough; da.MissingSchemaAction = MissingSchemaAction.Add;` adds all the relevant table mappings to the data adapter, and it fills properly now.
You should remove the call to conn.Open() and let .Fill() open the connection. Right now you are leaving the connection open. I am not sure if it is the main problem, but you may get better error reports. Furthermore, you can eliminate the OracleCommand object since you are not really using it. The adapter will create a new Command object.
Strongly typed dataset won't fill, table mappings problem? c#.net 2.0
[ "", "c#", ".net", "oracle", "strongly-typed-dataset", "tableadapter", "" ]
I need to get a hold of every flag, every switch used in the build process by the Visual Studio binaries. I tried to obtain a verbose output by using `vcbuild`, but I wasn't able to. What do I have to do to see everything performed by Visual Studio for me? It's not necessary to obtain the output in the build window. Anywhere would be fine.
1. Open the project properties dialog, then choose *Configuration Properties* → *C/C++* → *General* 2. Change the setting for `Suppress Startup Banner` to `No` 3. The `cl` command line(s) will be shown in the output window.
Menu *Tools* → *Options* → *Projects and Solutions* → *Build and Run* → *MSBuild project build output verbosity*: *Diagnostic*
How can I make Visual Studio's build be very verbose?
[ "", "c++", "visual-studio", "build-process", "" ]
Here's the problem, I have a bunch of directories like

> S:\HELLO\HI
> S:\HELLO2\HI\HElloAgain

On the file system it shows these directories as

> S:\hello\Hi
> S:\hello2\Hi\helloAgain

Is there any function in C# that will give me what the file system name of a directory is with the proper casing?
`string FileSystemCasing = new System.IO.DirectoryInfo("H:\...").FullName;` EDIT: As iceman pointed out, the FullName returns the correct casing only if the DirectoryInfo (or in general the FileSystemInfo) comes from a call to the GetDirectories (or GetFileSystemInfos) method. Now I'm posting a tested and performance-optimized solution. It works well on both directory and file paths, and has some fault tolerance on the input string. It's optimized for "conversion" of single paths (not the entire file system), and faster than getting the entire file system tree. Of course, if you have to renormalize the entire file system tree, you may prefer iceman's solution, but I tested it over 10000 iterations on paths of medium depth, and it takes just a few seconds ;)

```
private string GetFileSystemCasing(string path)
{
    if (Path.IsPathRooted(path))
    {
        path = path.TrimEnd(Path.DirectorySeparatorChar); // if you type c:\foo\ instead of c:\foo
        try
        {
            string name = Path.GetFileName(path);
            if (name == "") return path.ToUpper() + Path.DirectorySeparatorChar; // root reached

            string parent = Path.GetDirectoryName(path); // retrieving parent of element to be corrected
            parent = GetFileSystemCasing(parent); // to get correct casing on the entire string, and not only on the last element

            DirectoryInfo diParent = new DirectoryInfo(parent);
            FileSystemInfo[] fsiChildren = diParent.GetFileSystemInfos(name);
            FileSystemInfo fsiChild = fsiChildren.First();
            return fsiChild.FullName; // coming from GetFileSystemInfos() this has the correct case
        }
        catch (Exception ex)
        {
            Trace.TraceError(ex.Message);
            throw new ArgumentException("Invalid path");
        }
        return "";
    }
    else
        throw new ArgumentException("Absolute path needed, not relative");
}
```
Here's a basic and relatively fast solution, keep reading below for some commentary:

```
private static string GetCase(string path)
{
    DirectoryInfo dir = new DirectoryInfo(path);
    if (dir.Exists)
    {
        string[] folders = dir.FullName.Split(Path.DirectorySeparatorChar);
        dir = dir.Root;
        foreach (var f in folders.Skip(1))
        {
            dir = dir.GetDirectories(f).First();
        }
        return dir.FullName;
    }
    else
    {
        return path;
    }
}
```

The basic idea is that getting subdirectories from a DirectoryInfo object will get you the correct case, so we just need to split the directory name and walk from the root to the target directory, getting the proper case at each step. My initial answer relied on getting the casing for every folder on the drive, and it worked but was slow. I came up with a slight improvement that stored the results, but it was still too slow for everyday usage. You can see the edit history for this comment if you need to do this for every thing on the drive, and even then there are probably ways to speed up that code. It was "here's how you might do it" and not "here's a great way to do it." Bertu, in his answer, came up with the idea of splitting the path into its components and getting the casing piece by piece, which results in a **huge** speed increase since you're no longer checking *everything* as in my original answer. Bertu also generalized his solution to do files as well as directories. In my tests, the code posted above (which uses Bertu's "split the path and do it by pieces" idea but approaches it iteratively instead of recursively) runs in about half the time of Bertu's code. I'm not sure if that's because his method also handles files, because his use of recursion introduces extra overhead, or because he ends up calling `Path.GetFileName(path)` and `Path.GetDirectoryName(path)` in each iteration. Depending on your exact needs, some combination of his answer and mine will likely solve your problem as well as is possible in C#.
On that note, I should mention that there are [some limitations](http://pinvoke.net/default.aspx/kernel32/FindFirstFile.html) to .Net filename handling, and since doing this in .Net requires making a lot of DirectoryInfo objects, you might want to consider unmanaged code if this is your bottleneck.
Normalize directory names in C#
[ "", "c#", ".net", "" ]
I'm trying to write a simple RPG. So far, each time I try to start, it instantly becomes a mess and I don't know how to organize anything. So I'm starting over, trying to prototype a new structure that is basically the MVC framework. My app starts execution in the Controller, where it will create the View and Model. Then it will enter the game loop, and the first step in the game loop is to collect user input. User input will be collected by a part of the View, because it can vary (a 3D View will directly poll user input, whereas maybe a remote View will receive it over a telnet connection, or a command-line view would use System.in). The input will be translated into messages, and each message will be given to the Controller (by a method call) which can then interpret the message to modify Model data, or send data over the network (as I am hoping to have a networking option). This message handling technique can also be used, in the event of a networked game, to process network messages. Am I keeping the spirit of MVC so far? Anyway my question is, what is the best way to represent these messages? Here is a use case, with each message in italics: Let's say the user starts the game and *chooses character 2*. Then the user *moves to coordinates (5,2)*. Then he *says to public chat, "hi!"*. Then he chooses to *save and quit*. How should the view wrap up these messages into something the controller can understand? Or do you think I should have separate controller methods like chooseCharacter(), moveCharacterTo(), publicChat()? I'm not sure that such a simple implementation would work when I move to a networked game. But at the other end of the extreme, I don't want to just send strings to the Controller. It's just tough because the choose-character action takes one integer, the move-to takes two integers, and the chat takes a string (and a scope (public/private/global) and, in the case of private, a destination user); there's no real set data type to it all.
Also, any general suggestions are very welcome; am I worrying about this at the right time? Am I headed down the right path to a well-laid-out MVC app? Is there anything I've forgotten? Thanks!
*(Disclaimer: I never programmed games in Java, only in C++. But the general idea should be applicable in Java too. The ideas I present are not my own, but a mash-up of solutions I found in books or "on the internet", see references section. I employ all this myself and so far it results in a clean design where I know exactly where to put new features I add.)* I am afraid this will be a long answer, and it might not be clear on a first read: I can't describe it purely top-down, so there will be references back and forth. This is due to my lacking explaining skill, not because the design is flawed. In hindsight I overreached and may even be off-topic. But now that I have written all this, I can't bring myself to just throw it away. Just ask if something is unclear. Before starting to design any of the packages and classes, start with an analysis. What are the features you want to have in the game? Don't plan for a "maybe I'll add this later", because almost certainly the design decisions you make up-front, before you start to add the feature in earnest, will be wrong: the stub you planned for it will be insufficient. And for motivation, I speak from experience here: don't think of your task as writing a game engine, write a game! Whatever you ponder about what would be cool to have for a future project, reject it unless you put it in the game you are writing right now. No untested dead code, no motivation problems due to not being able to solve a problem that isn't even an issue for the immediate project ahead. There is no perfect design, but there is one good enough. Worth keeping this in mind. As said above, I don't believe that MVC is of any use when designing a game. Model/View separation is not an issue, and the controller stuff is pretty complicated, too much so as to be just called "controller". If you want to have subpackages named model, view, control, go ahead.
The following can be integrated into this packaging scheme, though others are at least as sensible. It is hard to find a starting point into my solution, so I just start top-most: In the main program, I just create the Application object, init it and start it. The application's `init()` will create the feature servers (see below) and initialize them. Also the first game state is created and pushed on top (also see below). Feature servers encapsulate orthogonal game features. These can be implemented independently and are loosely coupled by messages. Example features: sound, visual representation, collision detection, artificial intelligence/decision making, physics, and so on. How the features themselves are organized is described below.

## Input, control flow and the game loop

Game states present a way to organize input control. I usually have a single class that collects input events or captures input state to be polled later (InputServer/InputManager). If using the event-based approach, the events are given to the single registered active game state. When starting the game this will be the main menu game state. A game state has `init/destroy` and `resume/suspend` functions. `Init()` will initialize the game state; in the case of the main menu it will show the top-most menu level. `Resume()` will give control to this state; it now takes the input from the InputServer. `Suspend()` will clear the menu view from the screen, and `destroy()` will free any resources the main menu needs. GameStates can be stacked: when a user starts the game using the "new game" option, the MainMenu game state gets suspended and the PlayerControlGameState is put onto the stack and now receives the input events. This way you can handle input depending on the state of your game. With only one controller active at any given time you simplify control flow enormously. Input collection is triggered by the game loop.
The game loop basically determines the frame time for the current loop, updates feature servers, collects input and updates the game state. The frame time is either given to an update function of each of these or is provided by a Timer singleton. This is the canonical time used to determine the duration since the last update call.

## Game objects and features

The heart of this design is the interaction of game objects and features. As shown above, a feature in this sense is a piece of game functionality that can be implemented independently of the others. A game object is anything that interacts with the player or any other game objects in any way. Examples: the player avatar itself is a game object; a torch is a game object; NPCs are game objects, as are lighting zones and sound sources, or any combination of these. Traditionally, RPG game objects are organized into some sophisticated class hierarchy, but really this approach is just wrong. Many orthogonal aspects can't be put into a hierarchy, and even using interfaces, in the end you have to have concrete classes. An item is a game object; a pick-able item is a game object; a chest is a container, which is an item. But making a chest pick-able or not is an either-or decision with this approach, as you have to have a single hierarchy. And it gets more complicated when you want to have a talking magic riddle chest that only opens when a riddle is answered. There just is no single hierarchy that fits everything. A better approach is to have just a single game object class and put each orthogonal aspect, which usually is expressed in the class hierarchy, into its own component/feature class. Can the game object hold other items? Then add the ContainerFeature to it; can it talk? Add the TalkTargetFeature to it; and so on. In my design a GameObject only has an intrinsic unique id, name and location property; everything else is added as a feature component.
Components can be added at run-time through the GameObject interface by calling addComponent() and removeComponent(). So to make a game object visible, add a VisibleComponent; to make it emit sounds, add an AudibleComponent; to make it a container, add a ContainerComponent. The VisibleComponent is important for your question, as this is the class that provides the link between model and view. Not everything needs a view in the classical sense. A trigger zone will not be visible, and an ambient sound zone won't be either. Only game objects having the VisibleComponent will be visible. The visual representation is updated in the main loop, when the VisibleFeatureServer is updated. It then updates the view according to the VisibleComponents registered to it. Whether it queries the state of each or just queues messages received from them depends on your application and the underlying visualization library. In my case I use Ogre3D. Here, when a VisibleComponent is attached to a game object it creates a SceneNode that is attached to the scene graph, and to the scene node an Entity (the representation of a 3D mesh). Every TransformMessage (see below) is processed immediately. The VisibleFeatureServer then makes Ogre3D redraw the scene to the RenderWindow (in essence; the details are more complicated, as always).
It also registers itself with its feature server for every message it wants to receive from there. If the player tries to talk to a character he has in focus, then the user will somehow trigger the talk action. E.g. if the char in focus is a friendly NPC, then pressing the mouse button triggers the standard interaction. The target game object's standard action is queried by sending it a GetStandardActionMessage. The target game object receives the message and, starting with the first registered one, notifies its feature components that want to know about the message. The first component registered for this message will then set the standard action to the one that will trigger itself (the TalkTargetComponent will set the standard action to Talk, and it will also be the first to receive it) and then mark the message as consumed. The GameObject will test for consumption, see that the message is indeed consumed, and return to the caller. The now-modified message is then evaluated and the resulting action invoked. Yes, this example seems complicated, but it is already one of the more complicated ones. Others, like the TransformMessage for notifying about position and orientation changes, are easier to process. A TransformMessage is interesting to many feature servers. The VisualisationServer needs it to update the GameObject's visual representation on screen, the SoundServer to update the 3D sound position, and so on. The advantage of using messages rather than invoking methods should be clear. There is lower coupling between components. When invoking a method the caller needs to know the callee. But by using messages this is completely decoupled. If there is no receiver, then it doesn't matter. Also, how the receiver processes the message, if at all, is not a concern of the caller. Maybe delegates are a good choice here, but Java lacks a clean implementation of these, and in the case of a networked game you need to use some kind of RPC, which has rather high latency. And low latency is crucial for interactive games.
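To make the wiring concrete, here is a minimal, hypothetical Java sketch (Java being the asker's language) of the `registerMessageHandler`/consumption flow described in this section. Everything beyond the names mentioned in the text is an assumption for illustration, not a prescription:

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical message type: a string id plus a mutable payload and consumed flag.
class Message {
    final String id;          // e.g. "Transform", "GetStandardAction"
    boolean consumed = false; // handlers set this to stop further dispatch
    Object payload;
    Message(String id, Object payload) { this.id = id; this.payload = payload; }
}

class GameObject {
    private final Map<String, List<Consumer<Message>>> handlers = new HashMap<>();

    // Components call this for each message id they care about.
    void registerMessageHandler(String messageId, Consumer<Message> handler) {
        handlers.computeIfAbsent(messageId, k -> new ArrayList<>()).add(handler);
    }

    // Deliver to registered handlers in registration order until one consumes it.
    void sendMessage(Message msg) {
        for (Consumer<Message> h : handlers.getOrDefault(msg.id, Collections.emptyList())) {
            h.accept(msg);
            if (msg.consumed) return;
        }
    }
}
```

A TalkTargetComponent-style handler would register for "GetStandardAction", set the payload to Talk and mark the message consumed, exactly as in the friendly-NPC example above.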
## Persistence and marshalling

This brings us to how to pass messages over the network. By encapsulating GameObject/Feature interaction in messages, we only have to worry about how to pass messages over the network. Ideally you bring messages into a universal form, put them into a UDP packet and send it. The receiver unpacks the message into an instance of the proper class and channels it to its target or broadcasts it, depending on the message. I don't know whether Java's built-in serialization is up to the task. But even if not, there are lots of libs that can do this. GameObjects and components make their persistent state available via properties (C++ doesn't have serialization built in). They have an interface similar to a PropertyBag in Java with which their state can be retrieved and restored.

## References

* [The Brain Dump](http://flohofwoe.blogspot.com/): The blog of a professional game developer, and one of the authors of the open source Nebula engine, a game engine used in commercially successful games. Most of the design I presented here is taken from Nebula's application layer.
* [Noteworthy article](http://flohofwoe.blogspot.com/2007/11/nebula3s-application-layer-provides.html) on the above blog; it lays out the application layer of the engine. Another angle on what I tried to describe above.
* [A lengthy discussion](http://www.ogre3d.org/forums/viewtopic.php?f=1&t=36015) on how to lay out game architecture. Mostly Ogre-specific, but general enough to be useful for others too.
* [Another argument for component based designs](http://gamearchitect.net/Articles/GameObjects1.html), with useful references at the bottom.
Make mine another answer for "MVC considered potentially harmful in games". If your 3D rendering is 'a view' and your network traffic is 'a view', then don't remote clients end up essentially treating a view as a model? (Network traffic may look like just another view mechanism when you're sending it, but at the receiving end that's your definitive model upon which your game is based.) Keep MVC where it belongs - separation of visual presentation from logic. Generally, you want to work by sending a message to the server, and waiting until you get a response. Whether that server is on another continent or within the same process doesn't matter if you handle it the same way.

> Let's say the user starts the game and chooses character 2. Then the user moves to coordinates (5,2). Then he says to public chat, "hi!". Then he chooses to save and quit.

Keep it simple. MUDs used to simply send the command in plain text (eg. "SELECT character2", "MOVE TO 5,2", "SAY Hi") and there's little reason why you couldn't do that, if you're comfortable writing the text parser. A more structured alternative would be to send a simple XML object, since I know you Java guys love XML ;)
Data structures for message passing within a program?
[ "", "java", "model-view-controller", "" ]
I'm writing a .NET web application in which administrators can customize the various data entry forms presented to their users. There are about half a dozen different field types that admins can create and customize (e.g. text, numeric, dropdown, file upload). All fields share a set of base attributes/behaviors (is the field required? Will it have a default field value?). There are also a series of field-specific attributes/behaviors (e.g. dropdown has a data source attribute, but text field does not). I'm leaving out many other characteristics of the problem domain for simplicity's sake. The class hierarchy is straightforward: an abstract superclass that encapsulates common behaviors/attributes, and about half a dozen concrete subclasses that deal with field-specific stuff. Each field type is rendered (i.e. mapped to) as a specific type of .NET server control, all of which derive from System.Web.UI.Control. I created the following code to map values between the field domain objects and their corresponding UI control:

```
public static void Bind(Control control, IList<DocumentFieldBase> fieldBaseList)
{
    foreach (DocumentFieldBase fieldBase in fieldBaseList)
    {
        if (typeof (DocumentFieldText).IsInstanceOfType(fieldBase)){
            TextBox textbox = (TextBox) control;
            textbox.Text = (fieldBase as DocumentFieldText).GetValue();
        }

        if (typeof (DocumentFieldDropDown).IsInstanceOfType(fieldBase)){
            DropDown dropDown = (DropDown) control;
            dropDown.Text = (fieldBase as DocumentFieldSelectOne).GetValue().Text;
            dropDown.DataSource = (fieldBase as DocumentFieldSelectOne).DataSource;
            dropDown.Id = (fieldBase as DocumentFieldSelectOne).GetValue().Id;
        }

        //more if statements left out for brevity
    }
}
```

I want to ditch those ungodly if statements that perform type checking. The approach I was shooting for was to create a method overload for each combination of field/control using subclass typing.
For example:

```
public static void Bind(TextBox control, DocumentFieldText fieldText){
    //some implementation code
}

public static void Bind(DropDown control, DocumentFieldDropDown fieldDropDown){
    //some implementation code
}
```

I was hoping that I could then rely on .NET to call the appropriate overload at **runtime** using the specific subclass being used. For example:

```
foreach (DocumentFieldBase field in fields){
    Control control = FindControl(field.Identifier);
    Bind(control, field);
}
```

Unfortunately, the compiler chokes when I try this: Argument '1': cannot convert from 'System.Web.UI.Control' to 'TextBox'. If I have to cast the first argument to TextBox, I'm back to performing type checking myself, which defeats the whole purpose of this exercise. Is what I'm trying to achieve a) possible and b) a good idea?
The "dispatch" tag on this question is quite appropriate: what you want is called "multiple dispatch". C# (like most mainstream languages) only supports "single dispatch", where the method to be executed is selected solely on the (runtime) type of the object you call the method on, not on the (runtime) type of its arguments. The visitor pattern can often be used to work around this. The idea is that you give `DocumentFieldBase` a method (that you override in concrete subclasses) which calls a method on `Control` (also overridden in concrete subclasses) that does the actual work. Unfortunately, the source code of the `Control` class is probably not under your control\*... so you'll have to resort to an even more hackish solution. The accepted answer to [this question](https://stackoverflow.com/questions/42587/double-dispatch-in-c) provides one that uses reflection. \*Extension methods are just syntactic sugar for static methods, and are thus resolved at compile time and of no use in this scenario.
Prior to C# 4, all overload resolution is done at compile time. You have to use double dispatch or the visitor pattern to effectively overload at execution time, and that gets messy quickly. In C# 4, you could declare a variable as dynamic and let it all get sorted out at execution time:

```
foreach (DocumentFieldBase field in fields){
    dynamic control = FindControl(field.Identifier);
    Bind(control, field);
}
```

Obviously that's not much help at the moment though (unless you're using VS2010b1). One option is to use a map from `Type` to `Action<object>`, but then you get inheritance issues... (you'd potentially have to keep working up the type hierarchy from the concrete type up to object until you found an entry in the map). You'd also still need to cast to the right type within the action :(
method overloading and polymorphism
[ "", "c#", ".net", "polymorphism", "overloading", "dispatch", "" ]
I'm receiving a string from an external process. I want to use that String to make a filename, and then write to that file. Here's my code snippet to do this:

```
String s = ... // comes from external source
File currentFile = new File(System.getProperty("user.home"), s);
PrintWriter currentWriter = new PrintWriter(currentFile);
```

If s contains an invalid character, such as '/' in a Unix-based OS, then a java.io.FileNotFoundException is (rightly) thrown. How can I safely encode the String so that it can be used as a filename? Edit: What I'm hoping for is an API call that does this for me. I can do this:

```
String s = ... // comes from external source
File currentFile = new File(System.getProperty("user.home"), URLEncoder.encode(s, "UTF-8"));
PrintWriter currentWriter = new PrintWriter(currentFile);
```

But I'm not sure whether URLEncoder is reliable for this purpose.
If you want the result to resemble the original file, SHA-1 or any other hashing scheme is not the answer. If collisions must be avoided, then simple replacement or removal of "bad" characters is not the answer either. Instead you want something like this. (Note: this should be treated as an illustrative example, not something to copy and paste.)

```
char fileSep = '/'; // ... or do this portably.
char escape = '%'; // ... or some other legal char.
String s = ...
int len = s.length();
StringBuilder sb = new StringBuilder(len);
for (int i = 0; i < len; i++) {
    char ch = s.charAt(i);
    if (ch < ' ' || ch >= 0x7F || ch == fileSep || ... // add other illegal chars
            || (ch == '.' && i == 0) // we don't want to collide with "." or ".."!
            || ch == escape) {
        sb.append(escape);
        if (ch < 0x10) {
            sb.append('0');
        }
        sb.append(Integer.toHexString(ch));
    } else {
        sb.append(ch);
    }
}
File currentFile = new File(System.getProperty("user.home"), sb.toString());
PrintWriter currentWriter = new PrintWriter(currentFile);
```

This solution gives a reversible encoding (with no collisions) where the encoded strings resemble the original strings in most cases. I'm assuming that you are using 8-bit characters. `URLEncoder` works, but it has the disadvantage that it encodes a whole lot of legal file name characters. If you want a not-guaranteed-to-be-reversible solution, then simply remove the 'bad' characters rather than replacing them with escape sequences.

---

The reverse of the above encoding should be equally straight-forward to implement.
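As a follow-up to that last remark, here is one hypothetical Java sketch of the reverse mapping, assuming the same `'%'` escape character and two-hex-digit escapes (i.e. 8-bit characters) as the encoding described above; it is an illustrative addition, not part of the original answer:

```java
class FilenameDecoder {
    // Undo the %XX escaping: every escape is assumed to be followed by
    // exactly two hex digits (the encoder pads codes below 0x10 with '0').
    static String decode(String encoded) {
        char escape = '%';
        StringBuilder sb = new StringBuilder(encoded.length());
        for (int i = 0; i < encoded.length(); i++) {
            char ch = encoded.charAt(i);
            if (ch == escape) {
                int code = Integer.parseInt(encoded.substring(i + 1, i + 3), 16);
                sb.append((char) code);
                i += 2; // skip the two hex digits
            } else {
                sb.append(ch);
            }
        }
        return sb.toString();
    }
}
```

So a stored name like `a%2fb` maps back to `a/b`, and a literal escape survives the round trip as `%25`.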
My suggestion is to take a "white list" approach, meaning don't try and filter out bad characters. Instead define what is OK. You can either reject the filename or filter it. If you want to filter it:

```
String name = s.replaceAll("\\W+", "");
```

What this does is replace any character that *isn't* a number, letter or underscore with nothing. Alternatively you could replace them with another character (like an underscore). The problem is that if this is a shared directory then you don't want file name collision. Even if user storage areas are segregated by user you may end up with a colliding filename just by filtering out bad characters. The name a user put in is often useful if they ever want to download it too. For this reason I tend to allow the user to enter what they want, store the filename based on a scheme of my own choosing (e.g. userId_fileId) and then store the user's filename in a database table. That way you can display it back to the user, store things how you want, and you don't compromise security or wipe out other files. You can also hash the file name (e.g. an MD5 hash) but then you can't list the files the user put in (not with a meaningful name anyway). EDIT: Fixed regex for Java
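A quick, hypothetical illustration of that white-list filter, and of the collision risk the answer warns about (two different inputs mapping to the same cleaned name):

```java
class WhitelistDemo {
    // Keep only word characters (letters, digits, underscore), as in the answer.
    static String clean(String s) {
        return s.replaceAll("\\W+", "");
    }
}
```

For example, `clean("a/b.txt")` and `clean("ab.txt")` both yield `"abtxt"`, demonstrating a collision.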
How can I safely encode a string in Java to use as a filename?
[ "", "java", "string", "file", "encoding", "" ]
I tried to create a Win32 DLL using C++. It has a map declared globally. But when I try to access the map using the DLL, it's giving a runtime error: **WindowsError: exception: access violation reading 0x00000008**. How can I solve it? Declaration: `static map<int,urllib> url_container;` The urllib is a class. Error occurrence: `url_container[ucid] = urllib();` The error occurs at the above point.
Does this code

```
url_container[ucid] = urllib()
```

get called in a static initialiser for another global object? If so, there is no guarantee that `url_container` has been constructed before the other global object. Use an accessor function to control when the object is created, or use a singleton library like [boost singleton](http://www.boost.org/doc/libs/1_39_0/libs/pool/doc/implementation/singleton.html) Accessor example:

```
map<int,urllib> & get_url_container()
{
    static map<int,urllib> url_container;
    return url_container;
}
```

As an aside, I would suggest you try to avoid global objects, as you could spend the rest of your life debugging issues like this. Eventually the construction of one global object will depend on another etc., and the order of construction is not defined, so it might work on one platform/compiler and fail on another.
I assume urllib is a type or class and not a function? It doesn't look like there's anything wrong with your code. In the debugger, what do you see on the call stack when the exception happens? It would be helpful to see exactly where it's running into the access violation.
access violation error when using map in dll
[ "", "c++", "dll", "" ]
I try to parse articles from wikipedia. I use the \*page-articles.xml file, where they backup all their articles in a wikicode-format. To strip the format and get the raw text, I try to use Regular Expressions, but I am not very used to it. I use C# as programming language. I tried a bit around with Expresso, a designer for Regular Expressions, but I am at the end of my wits. Here is what I want to achieve: The text can contain the following structures: [[TextN]] or [[Text1|TextN]] or [[Text1|Text2|...|TextN]] the [[ .... ]] pattern can appear within the Texti aswell. I want to replace these structure with TextN For identifing the structures withhin the text I tried the following RegEx: ``` \[\[ ( .* \|?)* \]\] ``` Expresso seems to run and endless loop with this one. After 5 minutes for a relative small text, I canceled the Test Run. Then I tried something more simple, I want to capture anything between the brackets: ``` \[\[ .* \]\] ``` but when I have a line like: ``` [[Word1]] text inbetween [[Word2]] ``` the expression returns the whole line, not [[Word1]] [[Word2]] Any tips from Regex-Experts here to solve the problem? Thanks in advance, Frank
`\[\[(.*?)\]\]` would do it. The key is the `.*?` which means get any characters, but as few as possible. EDIT For nested tags one approach would be: ``` \[\[(?<text>(?>\[\[(?<Level>)|\]\](?<-Level>)|(?! \[\[ | \]\] ).)+(?(Level)(?!)))\]\] ``` This ensures that the [[ and ]] match across the text as well.
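The greedy-versus-lazy difference is easy to verify; here is a small check in Python, whose `.*?` quantifier has the same semantics:

```python
import re

text = "[[Word1]] text inbetween [[Word2]]"

greedy = re.findall(r"\[\[(.*)\]\]", text)   # grabs as much as possible
lazy = re.findall(r"\[\[(.*?)\]\]", text)    # stops at the first ]]
```

`greedy` returns the single span `'Word1]] text inbetween [[Word2'`, while `lazy` returns `['Word1', 'Word2']` — though neither handles nesting, which needs a different approach.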
I wouldn't use regular expressions (since they don't handle recursion/nesting well). Instead I would parse the text *by hand*\*, which isn't particularly difficult in this case. You could represent the text as a stream of **elements** whereas each **element** is either * a plain text **chunk**, or * a **tag** A **tag** might contain multiple **element streams**, separated by `|`. ``` elementStream ::= element* element ::= chunk | tag chunk ::= TEXT tag ::= "[[" elementStream otherStreams "]]" otherStreams ::= "|" elementStream otherStreams ``` Your parser could represent each of those definitions with a method. So you'd have an `elementStream` method that would call `element` as long as there is text available and the next two characters are not "]]" or "|" (if you are inside a tag). Each call to `element` would return the element parsed, either a chunk or a tag. etc. This would essentially be a **recursive descent parser**. Wikipedia: <http://en.wikipedia.org/wiki/Recursive_descent_parser> (the article is rather long/complicated, unfortunately)
Regex for parsing Wikicode in C#
[ "", "c#", "regex", "" ]
In Python you can have multiple iterators in a list comprehension, like ``` [(x,y) for x in a for y in b] ``` for some suitable sequences a and b. I'm aware of the nested loop semantics of Python's list comprehensions. My question is: Can one iterator in the comprehension refer to the other? In other words: Could I have something like this: ``` [x for x in a for a in b] ``` where the current value of the outer loop is the iterator of the inner? As an example, if I have a nested list: ``` a=[[1,2],[3,4]] ``` what would the list comprehension expression be to achieve this result: ``` [1,2,3,4] ``` ?? (Please only list comprehension answers, since this is what I want to find out).
Gee, I guess I found the answer: I was not paying enough attention to which loop is inner and which is outer. The list comprehension should be: ``` [x for b in a for x in b] ``` to get the desired result, and yes, one current value can be the iterator for the next loop.
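As a runnable restatement of the fix (the equivalent explicit loops are shown for comparison):

```python
a = [[1, 2], [3, 4]]

# The loops read left to right: the outer "for b in a" comes first,
# so b is already bound when the inner "for x in b" runs.
flat = [x for b in a for x in b]

# The equivalent explicit nested loops:
flat2 = []
for b in a:
    for x in b:
        flat2.append(x)
```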
Suppose you have a text full of sentences and you want an array of words. ``` # Without list comprehension list_of_words = [] for sentence in text: for word in sentence: list_of_words.append(word) return list_of_words ``` I like to think of list comprehension as stretching code horizontally. Try breaking it up into: ``` # List Comprehension [word for sentence in text for word in sentence] ``` **Example:** ``` >>> text = (("Hi", "Steve!"), ("What's", "up?")) >>> [word for sentence in text for word in sentence] ['Hi', 'Steve!', "What's", 'up?'] ``` This also works for generators ``` >>> text = (("Hi", "Steve!"), ("What's", "up?")) >>> gen = (word for sentence in text for word in sentence) >>> for word in gen: print(word) Hi Steve! What's up? ```
Double Iteration in List Comprehension
[ "", "python", "list-comprehension", "" ]
I'm currently working on a WPF (with C# behind the scenes) system which requires rendering of data from many different files. Most of those files are AutoCAD documents. Each file comes with a set of data that we need to draw on screen essentially on the same canvas. Think of each file as a potential "layer" or overlay that needs to appear on screen. At the moment, each graphics source is parsed and converted to a set of [Path](http://msdn.microsoft.com/en-us/library/system.windows.shapes.path.aspx) objects. Each collection of paths is rendered to it's own [Canvas](http://msdn.microsoft.com/en-us/library/system.windows.controls.canvas.aspx) so that its visibility can be toggled on or off. Each of these canvases is made a child of a parent canvas which has a set of transforms applied to it. Those transforms are basic scale and translate render transforms which are used to support panning and zooming of the image that is being viewed. This functionality currently works fine, but it's slow. We're rendering quite a few Path objects on screen and loading/creating those Path instances is taking quite a while. The load speed in itself isn't so much of an issue; what really is the issue is that I need to create the Path instances on the UI thread, otherwise I can't render them all on the same canvas. Hence, while loading, the **entire UI is locked up** and the user can't do anything. I have searched extensively on the web but can't seem to find a solution to the problem. I did stumble on one article (unfortunately I don't have the link anymore) which described a method of hosting items created on different threads on the same *window*. This didn't work for me at all. I tried a combination of things that I found in the article but I couldn't get anything to render at all. 
So I guess the crux of my question is: Is it possible to create a set of UI objects, in particular Path objects, on different threads, then load them into a parent canvas on the main UI thread and have them all play nicely together? Any references, articles or tutorials would be greatly appreciated. I'm looking forward to your help! Thanks for reading. OJ **Edit 1:** Each of the Path instances is just a single line with a colour. They aren't complicated. But it seems that creation of those objects themselves is what is taking the time (I might be wrong). Thanks!
One possibility is to create the PathGeometry (the bulk of the work) on a separate thread, Freeze it, and set it into a Path created on your UI thread. (I haven't done this myself, only read about it.) Here is an [MSDN article](http://msdn.microsoft.com/en-us/library/ms750509.aspx) on Freezable objects, of which PathGeometry is one, which states that they can be shared across threads (but no longer modified) once frozen. This may or may not suit your scenario.
Are you using the `Dispatcher` which exists on every UIElement and provides the `BeginInvoke` method to run code on the right thread? Have you read this MSDN article: [Build More Responsive Apps With The Dispatcher](http://msdn.microsoft.com/en-us/magazine/cc163328.aspx) The Windows Presentation Foundation Threading Model is described [here](http://msdn.microsoft.com/en-us/library/ms741870.aspx).
Creating Paths on Different Threads and Rendering to the Same Canvas
[ "", "c#", "wpf", "" ]
I'm dealing specifically with C++, but this is really language-agnostic. Just to give some context into the problem... I have a tree/graph based structure where each node holds a collection of multiple items. I have a class which encapsulates some generic collection (list, map, vector, LinkedList, whatever). I want to be able to fetch all the items from that collection in the most efficient way possible without the client being able to edit the private collection. Here is a simplified example of what I have: ``` class MyClass { public: // Basic constructors and such void AddItem(int item) { _myItems->push_back(item); } private: list<int>* _myItems; }; ``` So obviously a getter which retrieves the pointer to \_myItems does not work as this will allow the client to edit \_myItems. **Attempt 1:** I could create a new list and return a pointer to that one instead... however I don't like that idea as the responsibility of creating and deleting now lie with different objects. **Attempt 2:** I'm thinking it would be best to create some CopyTo methods such as: ``` void CopyItemsToList(list<int>* inList) { // copy the items to client list } ``` This way, the client takes care of mem management and it is easy to extend this to allow more data structures. My concern with this approach is cost... The list can be very big, and the cost of copying the items could get large **Attempt 3:** Instead of keeping a pointer to a list in the class, just use a value type and return that (letting the copy constructor take care of it). But, this seems like the performance cost will be the same as #2... Anyways, any other way to go about this? Let me know what you guys think... Thanks.
The [Iterator pattern](http://en.wikipedia.org/wiki/Iterator_pattern) is generally considered a proper method to expose a list of elements from a collection agnostic to the actual internal representation of the collection. You could create your iterator in such a way that it returns read-only (const) references, or make copies of items on the fly as they're requested instead of copying the entire list.
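A hypothetical Python sketch of the iterator approach (names invented for illustration): the collection yields its items one at a time, so callers iterate without ever receiving the private list itself.

```python
class MyClass:
    def __init__(self):
        self._items = []            # private storage, never handed out

    def add_item(self, item):
        self._items.append(item)

    def __iter__(self):
        # Yield items one by one; the caller cannot replace or resize
        # self._items through this iterator.
        for item in self._items:
            yield item
```

Consumers write `for x in obj:` or `list(obj)`; a full copy only happens if they explicitly ask for one.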
You can return a const reference to your list instead of a pointer (or just a pointer to const). Clients will still be able to `const_cast` it, but that would be their own problem then. ``` const list<int>& GetList() const { return *_myItems; } ```
Best practice for fetching a collection of items from an object?
[ "", "c++", "oop", "" ]
I am new to generics. I want to implement my own collection by deriving it from `IList<T>` interface. Can you please provide me some link to a class that implements `IList<T>` interface or provide me a code that at least implements `Add` and `Remove` methods?
Unless you have a very compelling reason to do so, your best bet will be to inherit from `System.Collections.ObjectModel.Collection<T>` since it has everything you need. Please note that although implementors of `IList<T>` are not required to implement `this[int]` (indexer) to be O(1) (basically, constant-time access), it's strongly recommended you do so.
In addition to deriving from `List<T>`, you can facade `List<T>` and add more features to your facade class. ``` class MyCollection<T> : IList<T> { private readonly IList<T> _list = new List<T>(); #region Implementation of IEnumerable public IEnumerator<T> GetEnumerator() { return _list.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } #endregion #region Implementation of ICollection<T> public void Add(T item) { _list.Add(item); } public void Clear() { _list.Clear(); } public bool Contains(T item) { return _list.Contains(item); } public void CopyTo(T[] array, int arrayIndex) { _list.CopyTo(array, arrayIndex); } public bool Remove(T item) { return _list.Remove(item); } public int Count { get { return _list.Count; } } public bool IsReadOnly { get { return _list.IsReadOnly; } } #endregion #region Implementation of IList<T> public int IndexOf(T item) { return _list.IndexOf(item); } public void Insert(int index, T item) { _list.Insert(index, item); } public void RemoveAt(int index) { _list.RemoveAt(index); } public T this[int index] { get { return _list[index]; } set { _list[index] = value; } } #endregion #region Your Added Stuff // Add new features to your collection. #endregion } ```
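The same facade idea translates directly; here is a rough Python sketch (method names are illustrative) that delegates to a private list and leaves room for extra features:

```python
class MyCollection:
    """Facade over a private list: expose only the delegated surface."""

    def __init__(self):
        self._list = []

    def add(self, item):
        self._list.append(item)

    def remove(self, item):
        self._list.remove(item)

    def __contains__(self, item):
        return item in self._list

    def __len__(self):
        return len(self._list)

    def __getitem__(self, index):
        return self._list[index]

    # "Your added stuff": new features the wrapped list lacks
    def add_all(self, items):
        for item in items:
            self.add(item)
```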
Implementing IList interface
[ "", "c#", "generics", "collections", "ilist", "" ]
when I try to click on the designer tab to get the designer view, I get this error: > To prevent possible data loss before > loading the designer, the following > errors must be resolved: > The designer could not be shown for this file because none of the > classes within it can be designed. The > designer inspected the following > classes in the file: Form1 --- The > base class 'System.Windows.Forms.Form' > could not be loaded. Ensure the > assembly has been referenced and that > all projects have been built. I'd like to get into the designer and have no idea what's wrong. My Form1 code looks like this at the top: ``` using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; using System.Drawing.Drawing2D; using System.Diagnostics; namespace foobar { public partial class Form1 : Form { List<CharLabel> allChars = new List<CharLabel>(); public Form1() { ... etc ... ``` Any ideas?
When you change the namespace or other items inside a partial class (like Forms) directly from the code editor you are making an invitation for madness. As the name suggests, a partial class is a class that is defined "partially" in the code view, but there is another part which is generated automatically by VS, and that is the other part of the class. That part contains the definition of all UI elements, fonts, default values, etc. When you change the namespace in one part of the class, the other part doesn't know what to do, and then the interesting errors start. When changing namespaces, class names, or event method names, always use the Refactor option in VS. In your case, I would probably go back to the old name it had, and then use the Refactor option VS provides (highlight the component name, right click, Refactor -> Rename). Hope this helps.
Have you checked the recommendation in the message? That is, have you verified that System.Windows.Forms.dll is referenced in your project? To add the reference if it's missing, do the following: * Click: View -> Solution Explorer * Right-click on the References node and select "Add Reference" * Go to the .NET tab * Scroll until you see System.Windows.Forms.dll * Select that and hit OK
Can't see the designer view in Visual Studio 2008 C# Windows Forms due to weird error
[ "", "c#", ".net", "visual-studio", "winforms", "visual-studio-2008", "" ]
I'm having some trouble with making list of objects based on a condition on an `enum`. It seems that after I have completed the list, every item in the list is equivalent to the last item. It's the classic case of different references pointing to the same object, but I don't know how to avoid it: I've pared things down as much as I can while maintaining readability: ``` public class Foo { Digit[] array = new Digit[2]; ArrayList<Foo> foozlets; Foo() { array[0] = Digit.ZERO; foozlets = new ArrayList<Foo>(); } Foo(Foo old, Digit num) { this.array = old.array; \\This line is a problem, what should it say? array[1] = num; } public static void main(String[] args) { Foo f = new Foo(); System.out.println("Initial Foo:"); System.out.println(f); f.listFoozlets(); } void listFoozlets() { for (Digit k : Digit.values()) { if (k == Digit.TWO || k == Digit.FIVE) { foozlets.add(new Foo(this, k)); System.out.println("** Foozlet being added **"); Foo foo = new Foo(this, k); System.out.println(foo); } } System.out.println("** List of Foozlets **"); for (Foo foo : foozlets) { System.out.println(foo); } } public String toString() { return array[0].toString() + " " + array[1].toString(); } } enum Digit { ZERO, ONE, TWO, THREE, FOUR, FIVE } ``` Here is the output: ``` Initial Foo: ZERO NULL ** Foozlet being added ** ZERO TWO ** Foozlet being added ** ZERO FIVE ** List of Foozlets ** ZERO FIVE ZERO FIVE ``` If someone can explain why the first instance of `Foo` on the list changes, and how I can make a list that doesn't change, I'd be grateful. **EDIT:** Ok, I see where the problem is now. In the real, much larger program, I have a much larger array, and I want to keep the old information when I create a new Foo for the list. I've changed the code to reflect that there is additional information that I want to maintain. How do I accomplish this? **EDIT 2021:** Wandered upon this, one of my first SO questions, twelve years later. If this helped you, I'm glad for that. 
If you are wondering what in the world I was trying to do, the answer is "I have no idea."
This bit is the culprit: ``` Foo(Foo old, Digit num) { this.array = old.array; array[0] = num; } ``` You're copying a *reference* to the old Foo's array, and then changing the value in that array. Why do you even have an array of size 1 instead of just a `Digit`? If you really want the array, you possibly want to clone it instead of just copying the reference, but we can't really tell what the intention is. Here's a shorter example: ``` enum Digit { ZERO, ONE, TWO, THREE, FOUR, FIVE } public class Foo { Digit[] array = new Digit[1]; Foo() { array[0] = Digit.ZERO; } Foo(Foo old, Digit num) { this.array = old.array; array[0] = num; } public String toString() { return array[0].toString(); } public static void main(String[] args) { Foo f = new Foo(); System.out.println(f); Foo other = new Foo(f, Digit.ONE); System.out.println(f); } } ```
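The reference-versus-copy distinction the answer describes is not Java-specific; the same effect, sketched in Python:

```python
old_array = ["ZERO", None]

alias = old_array        # copies the *reference*: one object, two names
alias[1] = "TWO"         # the change is visible through old_array too

clone = list(old_array)  # copies the *contents*: an independent object
clone[1] = "FIVE"        # old_array is untouched by this
```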
In your second constructor: ``` Foo(Foo old, Digit num) { this.array = old.array; array[0] = num; } ``` You are re-using the list from `old`. You want to create a copy of that list rather than using the same list. You can do that by changing the assignment to: ``` Foo(Foo old, Digit num) { this.array = new ArrayList<Foo>(old.array); array[0] = num; } ```
Why are these Java enums changing values?
[ "", "java", "enums", "reference", "" ]
I derive DataGridViewEx class from DataGridView like this: ``` public class DataGridViewEx : DataGridView { // ... [DefaultValue(false)] [Browsable(true)] public new bool AutoGenerateColumns { get { return base.AutoGenerateColumns; } set { base.AutoGenerateColumns = value; } } public DataGridViewEx() { AutoGenerateColumns = false; } // ... } ``` But, when I add my DataGridViewEx control to a form, I see that AutoGenerateColumns property is set to true! My code doesn't set it to true anywhere, so "someone else" :) sets it to true. Of course the code listed above is executed and AutoGenerateColumns is set to false for a moment, but later it becomes "true". Any ideas?
I downloaded .NET sources to be able to debug .NET framework as explained here: <http://referencesource.microsoft.com/serversetup.aspx> Then I put a breakpoint on DataGridView.AutoGenerateColumns property and found out that it is set to true in System.Windows.Forms.Design.DataGridViewDesigner.Initialize() method. I opened this method in reflector and saw the following: ``` public override void Initialize(IComponent component) { ... view.AutoGenerateColumns = view.DataSource == null; ... } ``` So, as DataSource is null, Initialize() method sets AutoGeneratedColumns to true :( I wanted to derive my own designer from DataGridViewDesigner class to override this behavior, but DataGridViewDesigner is internal, so I can't. So it seems like there is no way to solve this problem properly :(
That's to be expected, unfortunately. Because you're declaring `AutoGenerateColumns` as `new`, calls to it don't get virtualized. When the parent code sets `AutoGenerateColumns` to `true`, it does not pass down into your property setter. While this won't have any direct effect upon the behavior (since you don't do anything but defer to the parent property), it *does* limit your ability to debug, since you can't do a `Console.WriteLine(Environment.StackTrace)` or something similar in your setter. You will likely need to use a tool like Reflector to try to find where it's getting set to true, but this will be problematic for all but the simplest of scenarios (for example, if the parent sets the backing variable directly). You'll really need to do some trial and error to find where the value is being set, *then* call your `AutoGenerateColumns = false;` code. I would override `OnCreateControl` and inspect the value there as a start.
DataGridView AutoGenerateColumns is set to true somehow
[ "", "c#", ".net", "winforms", "datagridview", "" ]
I am looking to do what the title says. As I am new to client side programming with java script all to together I do not know the "right" and "proper" way of achieveing what I need done. I wish to use a simple javascript function ``` var x; var items = {}; for (x = 0, x < 7; x++) { items[x] = new num; } $("li").addclass("items" + num); ``` Is this right? Am I on the right track even?
I don't know what `num` is in your code, but I suspect you want to get something like this: ``` $('li').each(function (i) { $(this).addClass('items' + i); }); ``` This will add a class with an incrementing index to every `li` element. If you run `$("li").addClass("items" + num)`, this will add the same class to all `li` elements. BTW, JavaScript is case sensitive, so you must write `addClass` instead of `addclass`.
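Outside the DOM the pattern is just "index plus element"; here is a minimal Python sketch with dictionaries standing in for the `li` elements:

```python
elements = [{"tag": "li"} for _ in range(3)]

# Like $('li').each(function (i) {...}): each element sees its own index
for i, el in enumerate(elements):
    el["class"] = "items" + str(i)
```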
RaYell is correct. You need to use `$.each` or you could use `.eq(i)`, i.e.: ``` for(var x = 0; x < $(".nav li").length; x++){ $(".nav li").eq(x).addClass("items_" + x); } ```
How do I use raw javascript in jquery and add unique functions to jquery selectors?
[ "", "javascript", "jquery", "html", "css", "" ]
Why, with a generic constraint on type parameter T of class P of "must inherit from A", does the first call succeed but the second call fail with the type conversion error detailed in the comment: ``` abstract class A { } static class S { public static void DoFirst(A argument) { } public static void DoSecond(ICollection<A> argument) { } } static class P<T> where T : A, new() { static void Do() { S.DoFirst(new T()); // this call is OK S.DoSecond(new List<T>()); // this call won't compile with: /* cannot convert from 'System.Collections.Generic.List<T>' to 'System.Collections.Generic.ICollection<A>' */ } } ``` Shouldn't the generic constraint ensure that `List<T>` *is* indeed `ICollection<A>`?
This is an example of C#'s lack of [covariance](http://en.wikipedia.org/wiki/Covariance_and_contravariance_(computer_science)) on generic types (C# *does* support array covariance). C# 4 will add this feature on interface types and also will update several BCL interface types to support it as well. Please see [C# 4.0: Covariance and Contravariance](http://blog.t-l-k.com/dot-net/2009/c-sharp-4-covariance-and-contravariance): > In this article I’ll try to cover one > of the C# 4.0 innovations. One of the > new features is covariance and > contravariance on type parameters that > is now supported by generic delegates > and generic interfaces. First let’s > see what does these words mean :)
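The reason the compiler forbids the conversion can be shown dynamically; this hypothetical Python sketch demonstrates the hole that invariance closes (Python happily lets the unsafe write through):

```python
class A: pass
class Dog(A): pass
class Cat(A): pass

def do_second(collection):
    # In C# terms this parameter is ICollection<A>, so adding any A is "legal"
    collection.append(Cat())

dogs = [Dog()]
do_second(dogs)   # if List<Dog> were an ICollection<A>, this must be allowed
# ...and now a supposed list-of-Dog contains a Cat
```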
The constraint has no effect on the problem; the issue is that you're passing a List in a parameter that requires ICollection--C# doesn't support covariance so you need to explicitly cast the list to an ICollection: ``` S.DoSecond((ICollection<A>) new List<T>()); // this call will be happy ```
C# Type Conversion Error Despite Generic Constraint
[ "", "c#", "generics", "type-conversion", "covariance", "constraints", "" ]
I have some doubt about how programs use shared libraries. When I build a shared library (with `-shared -fPIC` switches) I make some functions available to an external program. Usually I do a `dlopen()` to load the library and then `dlsym()` to link the said functions to some function pointers within my program. This approach does not involve including any `.h` file. Is there a way to avoid doing `dlopen()` & `dlsym()` and just include the `.h` of the shared library? I *guess* this may be how C++ programs use code stored in system shared libraries, i.e. just by including `stdlib.h` etc.
Nick, I think all the other answers are actually answering your question, which is how you link libraries, but the way you phrase your question suggests you have a misunderstanding of the difference between header files and libraries. They are not the same. You need *both*, and they are not doing the same thing. Building an executable has two main phases, compilation (which turns your source into an intermediate form, containing executable binary instructions, but is not a runnable program), and linking (which combines these intermediate files into a single running executable or library). When you do `gcc -c program.c`, you are compiling, and you generate `program.o`. This step is where *headers* matter. You need to `#include <stdlib.h>` in `program.c` to (for example) use `malloc` and `free`. (Similarly you need `#include <dlfcn.h>` for `dlopen` and `dlsym`.) If you don't do that the compiler will complain that it doesn't know what those names are, and halt with an error. But if you do `#include` the header the compiler does *not* insert the code for the function you call into `program.o`. It merely inserts a *reference* to it. The reason is to avoid duplication of code: the code is only going to need to be accessed once by every part of your program, so if you needed further files (`module1.c`, `module2.c` and so on), even if they *all* used `malloc` you would merely end up with many references to a single copy of `malloc`. That single copy is present in the standard *library* in either its shared or static form (`libc.so` or `libc.a`) but these are not referenced in your source, and the compiler is not aware of them. The linker *is*. In the linking phase you do `gcc -o program program.o`. The linker will then search all libraries you pass it on the command line and find the *single* definition of all functions you've called which are not defined in your own code.
That is what the `-l` does (as the others have explained): tell the linker the list of libraries you need to use. Their names often have little to do with the headers you used in the previous step. For example to get use of `dlsym` you need `libdl.so` or `libdl.a`, so your command-line would be `gcc -o program program.o -ldl`. To use `malloc` or most of the functions in the `std*.h` headers you need `libc`, but because that library is used by *every* C program it is *automatically* linked (as if you had done `-lc`). Sorry if I'm going into a lot of detail but if you don't know the difference you will want to. It's very hard to make sense of how C compilation works if you don't. One last thing: `dlopen` and `dlsym` are not the normal method of linking. They are used for special cases where you want to dynamically determine what behavior you want based on information that is, for whatever reason, only available at runtime. If you know what functions you want to call at compile time (true in 99% of the cases) you do not need to use the `dl*` functions.
You can link shared libraries like static one. They are then searched for when launching the program. As a matter of fact, by default -lXXX will prefer libXXX.so to libXXX.a.
Shared libraries and .h files
[ "", "c++", "c", "shared-libraries", "fpic", "" ]
``` <p class="acp3"> <label>Status</label> <select> <% if <option>Active</option> %> <%= account["status"] == "Active" %> <% elsif <option>Disabled</option> %> <%= account["status"] == "Disabled" %> <% end %> </select> </p> ``` I am creating a selector with the html and I want to be able to have it access that variable when the specific selector is choosed. Is there a way to do this with Rails or is there a javascript way of doing it?
I figured out how to do it; I needed to do this. ``` <select name="status"> <option value="active">Active</option> <option value="disabled" selected="selected">Disabled</option> </select> ``` That allowed me to access the options like I wanted to.
Is the Ruby code trying to capture the value you selected and posted, or are you trying to do something different? Also, any reason you are creating the raw HTML directly rather than just using the `select_tag` helper?
HTML select tag allowing you to access Rails variable
[ "", "javascript", "html", "ruby-on-rails", "ajax", "html-select", "" ]
This question is kind of anecdotical but still interesting to me; I was wondering why Visual Studio 2008 is not loving the following use of *constants*: ``` public class Service101 : ServiceBase { /// <remarks> /// Shown at Start -> Settings -> Control Panel -> Administrative Tools -> Services /// </remarks> internal const string SERVICE_NAME = "WinSvc101"; /// <remarks> /// Shown at Start -> Settings -> Control Panel -> Administrative Tools -> Services /// </remarks> internal const string DISPLAY_NAME = "Windows Service 101"; /// <summary> /// Public constructor for Service101. /// </summary> public Service101() { InitializeComponent(); } private void InitializeComponent() { this.ServiceName = Service101.SERVICE_NAME; this.EventLog.Source = Service101.DISPLAY_NAME; this.EventLog.Log = "Application"; if (!EventLog.SourceExists(Service101.DISPLAY_NAME)) { EventLog.CreateEventSource(Service101.DISPLAY_NAME, "Application"); } } #region Events /// <summary> /// Dispose of objects that need it here. /// </summary> /// <param name="disposing">Whether or not disposing is going on.</param> protected override void Dispose(bool disposing) { // TODO: Add cleanup code here (if required) base.Dispose(disposing); } ``` As it's showing the following *Warning* at design time: ``` Warning 1 The designer cannot process the code at line 68: if (!EventLog.SourceExists(DISPLAY_NAME)) { EventLog.CreateEventSource(DISPLAY_NAME, "Application"); } The code within the method 'InitializeComponent' is generated by the designer and should not be manually modified. Please remove any changes and try opening the designer again. E:\Proyectos\beanstalk\dotnetfx\trunk\WinSvc101\WinSvc101\Service101.cs 69 0 ``` Any comment would be quite appreciated. Thanks much in advance.
The designer is not happy when you add code to `InitializeComponent()`. Try something like this instead: ``` public Service101() { InitializeComponent(); this.createEventSource(); } private void InitializeComponent() { this.ServiceName = SERVICE_NAME; this.EventLog.Source = DISPLAY_NAME; this.EventLog.Log = "Application"; } void createEventSource() { if (!EventLog.SourceExists(DISPLAY_NAME)) { EventLog.CreateEventSource(DISPLAY_NAME, "Application"); } } ```
It actually told you. That code is generated by the designer. The designer needs it to be the way it left it. Do not change that code, unless you want the designer to do unpleasant things with it. --- There's a sort of equilibrium between what you see in a visual designer and the code that it has generated. 1. You start with an empty design surface, so there's no generated code 2. You drag something onto the design surface. The designer generates the code necessary to create it. 3. You set properties of that object, and the designer generates the code that sets the properties as you have specified. 4. You Save and close 5. You reopen the document in the designer. The designer has to figure out what to display on the design surface. It reads the code that it generated, and since it knows the code was generated by itself, it knows what that code means in terms of the design surface. 6. Next time there's a change or save, it will regenerate the code. Now, let's say you make some modification to the generated code. Unless you make that change in *exactly* the same way the designer would have done, it will not recognize the change. Your change will not show on the design surface. Next time there's a change or save, the designer will regenerate the code *without your changes*. So, if you don't want to lose your changes to generated code, then don't make any changes to generated code.
Visual Studio 2008 Warning About Changes to Designer-Generated Code
[ "", "c#", "visual-studio-2008", "designer", "visual-studio-designer", "" ]
I checked the passwords for the users against the DB. What is faster, the **MySQL** MD5 function ``` ... pwd = MD5('.$pwd.') ``` OR the **PHP** MD5 function ``` ... pwd = '.md5($pwd).' ``` What is the right way between the two options?
If your application is only calculating md5 when someone registers on your site, or is logging in, how many calls to md5 will you do per hour? A couple of hundred? If so, I don't think the **really small** difference between PHP and MySQL will be significant at all. The question should be more like "where do I put the fact that passwords are stored using md5" than "which one wins me almost nothing". And, as a sidenote, another question could be: where can you afford to spend resources on that kind of calculation? If you have 10 PHP servers and one DB server already under heavy load, you get your answer ;-) But, just for fun: ``` mysql> select benchmark(1000000, md5('test')); +---------------------------------+ | benchmark(1000000, md5('test')) | +---------------------------------+ | 0 | +---------------------------------+ 1 row in set (2.24 sec) ``` And in PHP: ``` $before = microtime(true); for ($i=0 ; $i<1000000 ; $i++) { $a = md5('test'); } $after = microtime(true); echo ($after-$before) . "\n"; ``` gives: ``` $ php ~/developpement/tests/temp/temp.php 3.3341760635376 ``` But you probably won't be calculating a million md5s like this, will you? *(And this has nothing to do with preventing SQL injection: just escape/quote your data, always! Or use prepared statements.)*
I don't know which is faster, but if you do it in PHP you avoid the possibility of SQL injection.
What's faster/better to use: the MySQL or PHP md5 function?
[ "", "php", "mysql", "performance", "function", "md5", "" ]
I have a function which is responsible for some validations and for disabling and enabling some text boxes and a group of radio buttons. Now I want to generate the name of the function to be called from another function, using a for loop and appending the loop index to the function name, something like this: ``` function unCheckRadio(num) { var cont = num; var form = document.angular; for (var i = 0; i < cont; i++) { alert(form['lim_set'+i].length); for(var j = 0; j < form['lim_set'+i].length; j++ ) { form['lim_set'+i][j].checked = form['lim_set'+i][j].defaultChecked; } makeChoice_ang(); } } ``` Here I want to append the index `i` to the `makeChoice_ang()` call. I have tried many ways, including `'makeChoice_ang'+i;`, but that just produces a string. Please help me out, or at least point me in the right direction. Thanks a million in advance!
*I'm adding this as a different answer because, well, it's different!* If you want to call a global function (or any function you know the scope of) given a suffix like that, then you could just use array notation: ``` window['makeChoice_ang' + i](); ``` Remember... eval is evil [mostly](https://stackoverflow.com/questions/197769/when-is-javascripts-eval-not-evil)
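To make the lookup-by-name idea concrete, here is a minimal runnable sketch. Note that the handler names and return values below are stand-ins for illustration (the original page's `makeChoice_ang` variants aren't shown); if your functions don't have to be globals, keeping them in a plain object gives the same name-by-index dispatch without touching `window`:

```javascript
// Dispatch table: handlers keyed by the generated name.
// These two handlers are illustrative stand-ins for the real
// makeChoice_ang variants.
const handlers = {
  makeChoice_ang0: function () { return "choice 0"; },
  makeChoice_ang1: function () { return "choice 1"; },
};

function callChoice(i) {
  const fn = handlers["makeChoice_ang" + i];
  if (typeof fn !== "function") {
    throw new Error("no handler for index " + i);
  }
  return fn();
}

console.log(callChoice(1)); // "choice 1"
```

The `window['makeChoice_ang' + i]()` form works the same way; the object version just scopes the lookup and turns a missing handler into an explicit error instead of a generic "is not a function".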
It seems like it would be easier to re-do `makeChoice_ang()` as a generic function that accepts an index as a parameter. Alternatively, if you can't change the function or if the behavior will vary wildly from function to function you could just use `eval()` to evaluate your string.
how to append a number to a function name to create many functions?
[ "", "javascript", "" ]
I was reading this page: [C++ Tip: How To Get Array Length](http://www.dev102.com/2009/01/12/c-tip-how-to-get-array-length/). The writer presented a piece of code to determine the size of static arrays. ``` template<typename T, int size> int GetArrLength(T(&)[size]){return size;} // what does '(&)' mean ? . . . int arr[17]; int arrSize = GetArrLength(arr); // arrSize = 17 ``` Could anyone please shed some light on this code, because I couldn't understand how it really works.
The function is passed a reference (`&`) to an array of type `T`, and size `size`.
`sizeof(x)/sizeof(x[0])` won't catch errors if the array decays to a pointer type, but it will still compile! The templated version is bullet-proof.
Mysterious oneliner template code, anyone?
[ "", "c++", "arrays", "templates", "metaprogramming", "" ]
Any examples demonstrating where source compatibility is broken yet binary compatibility is maintained is welcome.
Old version: ``` struct inner { int bar; }; struct foo { struct inner i; }; void quux(struct foo *p); ``` New version: ``` struct inner2 { int bar; }; struct foo { struct inner2 i; }; void quux(struct foo *p); ``` Broken code: ``` struct foo x; struct inner *i = &x.i; i->bar = 42; quux(&x); ``` Since the only difference is the name of the struct, and the inner struct's type name is erased during compilation, there's no binary incompatibility.
Different versions of statically linked libraries on various machines might result in a binary compiled on machine A working properly on machine B while attempts to compile it from source on machine B fail. But aside from that, source incompatibility generally implies binary incompatibility.
Does source incompatibility always imply binary incompatibility?
[ "", "c++", "binary-compatibility", "" ]
I've found many similar questions but nothing with quite the answer I'm after. Part of my problem is due to the inability to use generics in attributes. I am probably trying to over-complicate things, so if you can think of an easier way of doing this I'm all ears. My specific problem relates to ASP.NET MVC, using attributes (filters) on an action method. I'm trying to create a filter that will paginate the results passed to the ViewData.Model like this: ``` [PagedList(PageSize = 2, ListType = typeof(Invoice))] public ViewResult List() { var invoices = invoicesRepository.Invoices; // Returns an IQueryable<Invoice> return View(invoices); } ``` My filter's **OnActionExecuted** override then looks like: ``` public override void OnActionExecuted(ActionExecutedContext filterContext) { ViewResult result = (ViewResult)filterContext.Result; var list = (IQueryable<?>)result.ViewData.Model; // Here I want to use the ListType in place of the ? // Perform pagination result.ViewData.Model = list.Skip((Page - 1) * PageSize).Take(PageSize).ToList(); } ``` I realise I could replace my ``` var list = (IQueryable<?>)result.ViewData.Model; ``` With ``` var list = (IQueryable)result.ViewData.Model; IQueryable<Object> oList = list.Cast<Object>(); ``` But my view is strongly typed to expect an ``` IQueryable<Invoice> ``` not an ``` IQueryable<Object> ```
Could you potentially create a generic method in your filter that does what you need to do, and then within `OnActionExecuted` use reflection to call that generic method with `ListType`? --- EDIT: For example, you would create a new method with this signature: ``` private void GenericOnActionExecuted<T>( ActionExecutedContext filterContext ) ``` This method would have the same code as you posted. Then you would rewrite `OnActionExecuted` like so: ``` public void OnActionExecuted( ActionExecutedContext filterContext ) { MethodInfo genericMethod = GetType().GetMethod( "GenericOnActionExecuted", BindingFlags.Instance | BindingFlags.NonPublic ); MethodInfo constructedMethod = genericMethod.MakeGenericMethod( ListType ); constructedMethod.Invoke( this, new object[] { filterContext } ); } ```
I have never used ASP.NET MVC myself but based on your explanation I would say you have two options. Both of these solutions require you to add a generic version of OnActionExecuted to your attribute. ``` protected void DoActionExecuted<T>(ActionExecutedContext filterContext) { var result = (ViewResult) filterContext.Result; var list = (IQueryable<T>) result.ViewData.Model; result.ViewData.Model = list.Skip((Page - 1)*PageSize) .Take(PageSize) .ToList(); } ``` You can either use reflection to call a method with a dynamic type parameter: ``` public override void OnActionExecuted(ActionExecutedContext filterContext) { GetType().GetMethod("DoActionExecuted", BindingFlags.NonPublic | BindingFlags.Instance) .MakeGenericMethod(ListType) .Invoke(this, new[] {filterContext}); } ``` Or you can create a private attribute subclass that knows about the generic type at compile time: ``` [PagedInvoiceList(PageSize = 2, ListType = typeof (Invoice))] public ViewResult List() { ... } private class PagedInvoiceListAttribute : PagedListAttribute { public override void OnActionExecuted(ActionExecutedContext filterContext) { DoActionExecuted<Invoice>(filterContext); } } ```
IList<variableType>; Solve without Generics
[ "", "c#", "asp.net-mvc", "generics", "attributes", "" ]