We've found that [compiling our Linq queries](http://www.dotnetfunda.com/articles/article469-how-to-improve-your-linq-query-performance-by-5-x-times-.aspx) is much, much faster than having them compile each time, so we would like to start using compiled queries. The problem is that this makes the code harder to read, because the actual syntax of the query lives off in some other file, away from where it's being used.

It occurred to me that it might be possible to write a method (or extension method) that uses reflection to determine what query is being passed in and cache the compiled version automatically for use in the future.

```
var foo = (from f in db.Foo where f.ix == bar select f).Cached();
```

`Cached()` would have to reflect over the query object passed in and determine the table(s) selected from and the parameter types for the query. Obviously, reflection is a bit slow, so it might be better to use names for the cache object (but you'd still have to use reflection the first time to compile the query).

```
var foo = (from f in db.Foo where f.ix == bar select f).Cached("Foo.ix");
```

Does anyone have any experience with doing this, or know if it's even possible?

**UPDATE:** For those who have not seen it, you can compile LINQ to SQL queries with the following code:

```
public static class MyCompiledQueries
{
    public static Func<DataContext, int, IQueryable<Foo>> getFoo =
        CompiledQuery.Compile(
            (DataContext db, int ixFoo) =>
                (from f in db.Foo
                 where f.ix == ixFoo
                 select f)
        );
}
```

What I am trying to do is have a cache of these `Func<>` objects that I can call into after automatically compiling the query the first time around.
You can't have extension methods invoked on anonymous lambda expressions, so you'll want to use a Cache class. In order to properly cache a query you'll also need to 'lift' any parameters (including your DataContext) into parameters for your lambda expression. This results in very verbose usage like:

```
var results = QueryCache.Cache((MyModelDataContext db) =>
    from x in db.Foo where !x.IsDisabled select x);
```

In order to clean that up, we can instantiate a QueryCache on a per-context basis if we make it non-static:

```
public class FooRepository
{
    readonly QueryCache<MyModelDataContext> q =
        new QueryCache<MyModelDataContext>(new MyModelDataContext());
}
```

Then we can write a Cache method that enables us to write the following:

```
var results = q.Cache(db => from x in db.Foo where !x.IsDisabled select x);
```

Any arguments in your query will also need to be lifted:

```
var results = q.Cache((db, bar) =>
    from x in db.Foo where x.id != bar select x, localBarValue);
```

Here's the QueryCache implementation I mocked up:

```
public class QueryCache<TContext> where TContext : DataContext
{
    private readonly TContext db;

    public QueryCache(TContext db)
    {
        this.db = db;
    }

    private static readonly Dictionary<string, Delegate> cache =
        new Dictionary<string, Delegate>();

    public IQueryable<T> Cache<T>(Expression<Func<TContext, IQueryable<T>>> q)
    {
        string key = q.ToString();
        Delegate result;
        lock (cache)
            if (!cache.TryGetValue(key, out result))
            {
                result = cache[key] = CompiledQuery.Compile(q);
            }
        return ((Func<TContext, IQueryable<T>>)result)(db);
    }

    public IQueryable<T> Cache<T, TArg1>(
        Expression<Func<TContext, TArg1, IQueryable<T>>> q, TArg1 param1)
    {
        string key = q.ToString();
        Delegate result;
        lock (cache)
            if (!cache.TryGetValue(key, out result))
            {
                result = cache[key] = CompiledQuery.Compile(q);
            }
        return ((Func<TContext, TArg1, IQueryable<T>>)result)(db, param1);
    }

    public IQueryable<T> Cache<T, TArg1, TArg2>(
        Expression<Func<TContext, TArg1, TArg2, IQueryable<T>>> q,
        TArg1 param1, TArg2 param2)
    {
        string key = q.ToString();
        Delegate result;
        lock (cache)
            if (!cache.TryGetValue(key, out result))
            {
                result = cache[key] = CompiledQuery.Compile(q);
            }
        return ((Func<TContext, TArg1, TArg2, IQueryable<T>>)result)(db, param1, param2);
    }
}
```

This can be extended to support more arguments. The great bit is that by passing the parameter values into the Cache method itself, you get implicit typing for the lambda expression.

EDIT: Note that you cannot apply new query operators to the compiled queries. Specifically, you cannot do something like this:

```
var allresults = q.Cache(db => from f in db.Foo select f);
var page = allresults.Skip(currentPage * pageSize).Take(pageSize);
```

So if you plan on paging a query, you need to do it in the compile operation instead of doing it later. This is necessary not only to avoid an exception, but also in keeping with the whole point of Skip/Take (avoiding returning all rows from the database). This pattern would work:

```
public IQueryable<Foo> GetFooPaged(int currentPage, int pageSize)
{
    return q.Cache((db, cur, size) =>
        (from f in db.Foo select f).Skip(cur * size).Take(size),
        currentPage, pageSize);
}
```

Another approach to paging would be to return a `Func`:

```
public Func<int, int, IQueryable<Foo>> GetPageableFoo()
{
    return (cur, size) => q.Cache((db, c, s) =>
        (from f in db.Foo select f).Skip(c * s).Take(s),
        cur, size);
}
```

This pattern is used like:

```
var results = GetPageableFoo()(currentPage, pageSize);
```
I had to deal with rescuing a more-than-15-year-old project that was developed using LINQ to SQL and was too CPU hungry. Benchmarking showed that using compiled queries is about 7x faster for complex queries, and about 2x for simple queries (given that running the query itself is negligible here; this is purely about the throughput of compiling the query).

Caching is NOT done automatically by the .NET Framework (no matter what version). That only happens for Entity Framework, NOT for LINQ to SQL, and these are different technologies.

Usage of compiled queries is tricky, so here are two important highlights:

* You MUST compile the query including the materialization instructions (FirstOrDefault/First/Any/Take/Skip/ToList), otherwise you risk bringing your whole database into memory: [LINQ to SQL \*compiled\* queries and when they execute](https://stackoverflow.com/questions/6592386/linq-to-sql-compiled-queries-and-when-they-execute)
* You cannot iterate TWICE over a compiled query's result (if it's an IQueryable), but this is basically solved once you properly consider the previous point

Considering that, I came up with this cache class. Using the static approach proposed in other comments has some maintainability drawbacks (it's mainly less readable), plus it is harder to migrate an existing huge codebase to it.

```
LinqQueryCache<VCDataClasses>
    .KeyFromQuery()
    .Cache(
        dcs.CurrentContext,
        (ctx, courseId) =>
            (from p in ctx.COURSEs
             where p.COURSEID == courseId
             select p).FirstOrDefault(),
        5);
```

In very tight loops, using a cache key derived from the call site instead of from the query itself yielded about 10% better performance:

```
LinqQueryCache<VCDataClasses>
    .KeyFromStack()
    .Cache(
        dcs.CurrentContext,
        (ctx, courseId) =>
            (from p in ctx.COURSEs
             where p.COURSEID == courseId
             select p).FirstOrDefault(),
        5);
```

And here is the code. For safety, the cache prevents the coder from returning an IQueryable from a compiled query.
```
public class LinqQueryCache<TContext>
    where TContext : DataContext
{
    protected static readonly ConcurrentDictionary<string, Delegate> CacheValue =
        new ConcurrentDictionary<string, Delegate>();

    protected string KeyValue = null;

    protected string Key
    {
        get => this.KeyValue;
        set
        {
            if (this.KeyValue != null)
            {
                throw new Exception("This object cannot be reused for another key.");
            }

            this.KeyValue = value;
        }
    }

    private LinqQueryCache(string key)
    {
        this.Key = key;
    }

    public static LinqQueryCache<TContext> KeyFromStack(
        [System.Runtime.CompilerServices.CallerFilePath] string sourceFilePath = "",
        [System.Runtime.CompilerServices.CallerLineNumber] int sourceLineNumber = 0)
    {
        return new LinqQueryCache<TContext>(
            Encryption.GetMd5(sourceFilePath + "::" + sourceLineNumber));
    }

    public static LinqQueryCache<TContext> KeyFromQuery()
    {
        return new LinqQueryCache<TContext>(null);
    }

    public T Cache<T>(TContext db, Expression<Func<TContext, T>> q)
    {
        // Debug-time guard: compiled queries must materialize their results,
        // so refuse queries that would return an IQueryable.
        if (Debugger.IsAttached && typeof(IQueryable).IsAssignableFrom(typeof(T)))
        {
            throw new Exception("Cannot compile queries that return an IQueryable result");
        }

        if (this.Key == null)
        {
            this.Key = q.ToString();
        }

        if (!CacheValue.TryGetValue(this.Key, out var result))
        {
            result = CompiledQuery.Compile(q);
            CacheValue.TryAdd(this.Key, result);
        }

        return ((Func<TContext, T>)result)(db);
    }

    public T Cache<T, TArg1>(TContext db, Expression<Func<TContext, TArg1, T>> q, TArg1 param1)
    {
        if (Debugger.IsAttached && typeof(IQueryable).IsAssignableFrom(typeof(T)))
        {
            throw new Exception("Cannot compile queries that return an IQueryable result");
        }

        if (this.Key == null)
        {
            this.Key = q.ToString();
        }

        if (!CacheValue.TryGetValue(this.Key, out var result))
        {
            result = CompiledQuery.Compile(q);
            CacheValue.TryAdd(this.Key, result);
        }

        return ((Func<TContext, TArg1, T>)result)(db, param1);
    }
}
```
Automatically Compile Linq Queries
[ "", "c#", "asp.net-mvc", "linq", "linq-to-sql", "iqueryable", "" ]
I have 3 text box fields to represent a date, e.g. DD MM YYYY. How can I validate that only correct data is entered into each text box? Is it a regular expression? **I need to do this inside the ascx/aspx file rather than the .cs code-behind.** Thanks
You could validate each field with regexes, but that wouldn't take into account that different months have different numbers of days: you could still enter invalid dates. On the server side it could be validated with something like this:

```
DateTime D;
string CombinedDate = String.Format("{0}-{1}-{2}",
    YearField.Text, MonthField.Text, DayField.Text);

if (DateTime.TryParseExact(CombinedDate, "yyyy-M-d",
    DateTimeFormatInfo.InvariantInfo, DateTimeStyles.None, out D))
{
    // valid
}
else
{
    // not valid
}
```
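If the check has to happen on the page itself rather than in the code-behind, the same round-trip idea works in client-side JavaScript: build a date from the three boxes and verify it survives unchanged, which lets the Date object apply the real calendar rules (month lengths, leap years). A minimal sketch; the function and field names here are illustrative, not from the question:

```javascript
// Hypothetical helper: validates the day/month/year strings entered in
// three separate text boxes.
function isValidDate(dayStr, monthStr, yearStr) {
  // Reject anything that is not purely digits of the expected length.
  if (!/^\d{1,2}$/.test(dayStr) ||
      !/^\d{1,2}$/.test(monthStr) ||
      !/^\d{4}$/.test(yearStr)) {
    return false;
  }

  var day = parseInt(dayStr, 10);
  var month = parseInt(monthStr, 10);
  var year = parseInt(yearStr, 10);

  // Date silently normalizes overflow (e.g. 31 April becomes 1 May),
  // so the input was valid only if every component comes back unchanged.
  var d = new Date(year, month - 1, day);
  return d.getFullYear() === year &&
         d.getMonth() === month - 1 &&
         d.getDate() === day;
}
```

You would wire this up to the form's submit handler (or a CustomValidator's client function); the server-side check above should still be kept, since client-side validation can be bypassed.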
**Wouldn't validation in the aspx file introduce logic code into the presentation layer?**

I would suggest an AJAX control (there is a MaskedEdit-style extender). They're usually okay for this sort of thing; look into the AJAX Control Toolkit, provided the server you're deploying to can support it.
validate 3 textfields representing date of birth
[ "", "c#", ".net", "asp.net", "" ]
How do I parse a tnsnames.ora file using Visual C# (Visual Studio 2008 Express Edition) to get the TNS names? For instance, my tnsnames.ora file contains:

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = shaman)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

BILL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.58)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```

How can I parse this file to get the TNS names (i.e., ORCL, BILL, etc.)? Forgive me if this question sounds too obvious; I'm learning and trying my hand at C#.
First of all, you will need [the syntax rules for this file](http://download.oracle.com/docs/cd/B19306_01/network.102/b14213/syntax.htm). There is probably a hack for this, but I would personally go with a full parser, like [ANTLR](http://www.antlr.org/) combined with [the proper grammar](http://www.harward.us/~nharward/antlr/OracleNetServicesV3.g) (a complete list of ANTLR grammars [can be found here](http://www.antlr.org/grammar/list)).
```
public List<string> ReadTextFile(string FP)
{
    // FP is the file path of the tnsnames.ora file
    List<string> names = new List<string>();

    try
    {
        StreamReader streamReader = File.OpenText(FP.Trim());
        string inputString = streamReader.ReadToEnd();
        string[] lines = inputString.Split(
            new string[] { Environment.NewLine }, StringSplitOptions.None);

        // A TNS name is the token on the line preceding each "(DESCRIPTION" line.
        // Start at 1 so lines[i - 1] can never be out of range.
        for (int i = 1; i < lines.Length; i++)
        {
            if (lines[i].Trim(' ', '(').Contains("DESCRIPTION"))
            {
                names.Add(lines[i - 1].Trim('=', ' '));
            }
        }

        streamReader.Close();
    }
    catch (Exception ex)
    {
        // Don't swallow exceptions silently in real code; at minimum, log them.
        Console.Error.WriteLine(ex.Message);
    }

    return names;
}
```
Parsing tnsnames.ora in Visual C# 2008
[ "", "c#", "visual-studio-2008", "oracle", "tnsnames", "" ]
Recently I decided to expand my programming horizons and learn the Python programming language. While I have used Python a little bit for classes in college and for a project or two at work, I am by no means an expert. My question is as follows: should I bother with the 2.x releases, or should I jump straight to 3.0? I am leaning towards 3.0, since I will be programming applications more for personal/learning use, but I wanted to see if there were any good arguments against it before I began.
Absolutely not 3.0: 3.1 is out and is stabler, better, and faster in every respect. It makes absolutely no sense to start with 3.0 at this time; if you want to take up the 3 series, it should by all accounts be 3.1.

As for 2.6 vs 3.1: 3.1 is a better language (especially because some cruft that had accumulated over the years was removed, but has to stay in 2.\* for backwards compatibility), but all the rest of the ecosystem (from extensions to tools, from books to collective knowledge) is still very much in favor of 2.6. If you don't care about being able to use (e.g.) certain GUIs or scientific extensions, deploy on App Engine, script Windows with COM, have a spiffy third-party IDE, and so on, 3.1 is advisable; but if you care about such things, stick with 2.\* for now.
Use 3.1. Why?

1) Because as long as everyone is still using 2.6, the libraries will have fewer reasons to migrate to 3.1. As long as those libraries are not ported to 3.1, you are stuck with the choice of either not using the strengths of 3.1, or only doing the job halfway by using the hackish solution of a back-ported feature set. **Be a forward thinker and help push Python forward.**

2) If you learn and use 3.1 now, you won't have to relearn it later when the mass port is complete. I know some people say you won't have to learn much, but why learn the old crap at all? **Python itself is moving towards 3.1**, the libraries will move toward 3.1, and it sucks to have to play catch-up and relearn a language you are already using.

3) **3.1 is all around a better language**, more stable and more consistent than 2.6... this is normal. The lessons learned from 2.6 were all poured into 3.1 to make it better. **It is a process called PROGRESS.** This is why nobody still uses Windows 3.1. It is the way things move FORWARD. Why else do you think they went to the trouble of back-porting a feature set in the first place?

4) If you are learning Python and learn 2.6, then by the time you are really comfortable with the language, the ports will be out, and you will have to learn the libraries and the language all over again. If you start with 3.1, then by the time you are comfortable with the language, the ports will be out, and then you can learn the libraries that you are interested in. **It is a smoother process.**

5) **To be a better developer.** If you have to learn and relearn the same things, your understanding will not be very deep. By learning this language and its libraries only once, you will have more time to work with them rather than relearning syntax. This allows you to understand them better. If you are really missing some pieces by forgoing the libraries? WRITE THEM.
You will probably not need an entire library; you can usually write only the pieces that you need, and develop tools for yourself. This, again, helps you understand the language better, and more deeply.
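The differences the answers above allude to can be made concrete with a short snippet (run under Python 3; nothing here is specific to 3.0 vs 3.1):

```python
# A few of the 3.x cleanups that would need relearning if you started on 2.x.

# print is an ordinary function, not a statement:
print("hello")                 # Python 2 would also accept: print "hello"

# / is true division, // is floor division:
assert 3 / 2 == 1.5            # in Python 2, 3 / 2 == 1
assert 3 // 2 == 1

# text strings are Unicode by default; bytes are a distinct type:
assert isinstance("text", str)
assert isinstance(b"bytes", bytes)
```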
Should I Start With Python 3.0?
[ "", "python", "python-3.x", "" ]
Does Oracle have an equivalent column type to MySQL's `TEXT` type? If not, how are larger blobs of text typically stored? `BLOB`, `varchar(32767)`? It's Oracle 10 being accessed via PHP, if it matters. Historical context is more than welcome.
Oracle has BLOB, CLOB and NCLOB for storing binary, character and Unicode character data, respectively. You can also specify the LOB storage area, which allows a DBA to fine-tune the storage if necessary (e.g. putting the LOB data on separate disks). This page gives a bit more info: <http://www.dba-oracle.com/t_blob.htm>
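For illustration, a CLOB column is declared like any other column type; the table and column names below are made up:

```sql
-- Hypothetical table; names are illustrative only.
CREATE TABLE articles (
    id    NUMBER PRIMARY KEY,
    title VARCHAR2(200),
    body  CLOB          -- large character data, the closest analogue of MySQL's TEXT
);
```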
I think you probably want the [CLOB](http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements001.htm#sthref152) datatype.
Oracle Equivalent of MySQL's TEXT type
[ "", "php", "oracle", "text", "types", "" ]
I'm wondering how in MATLAB you can get a reference to a Java `enum` or static public field. In MATLAB, if you are trying to use Java objects/methods, there are equivalents to Java object creation / method call / etc.: Java: `new com.example.test.Foo();` MATLAB: `javaObject('com.example.test.Foo');` Java: `com.example.test.Foo.staticMethod();` MATLAB: `javaMethod('staticMethod', 'com.example.test.Foo');` Java: `SomeEnum e = com.example.test.SomeEnum.MY_FAVORITE_ENUM;` MATLAB: ????? Java: `int n = com.example.test.Foo.MAX_FOO`; MATLAB: ?????
You can reference Java enum constants from Matlab using the package.class.FIELD syntax, as with any other static Java field. Let's say you have an enum:

```
package com.example;

public enum MyEnum { FOO, BAR, BAZ }
```

You can get at the enum constants in Matlab using a direct reference. (The Java classes must be on Matlab's javaclasspath, of course.)

```
% Static reference
foo = com.example.MyEnum.FOO

% Import it if you want to omit the package name
import com.example.MyEnum;
foo = MyEnum.FOO
bar = MyEnum.BAR
```

If you want a "dynamic" reference determined at runtime, you can just build a string containing the equivalent static reference and pass it to eval(). This works on almost any Matlab code.

```
% Dynamic reference
foo = eval('com.example.MyEnum.FOO')
```

And if you want to get really fancy, you can use Java reflection to get at all the enumerated constants at run time. Make a thin wrapper to put with your other custom classes to get around quirks with Matlab's classloader. (There's no Matlab javaClass() equivalent; IMHO this is a Matlab oversight.)

```
// In Java
package com.example;

public class Reflector {
    public static Class forName(String className) throws Exception {
        return Class.forName(className);
    }
}
```

Then you can enumerate the constants in Matlab.

```
% Constant enumeration using reflection
klass = com.example.Reflector.forName('com.example.MyEnum');
enums = klass.getEnumConstants();
```
Inner classes require conversion of '.' to '$' in Matlab. This may actually be due to the way the Java compiler stores inner class objects; it behaves this way for all internal classes (e.g. `javax.swing.plaf.basic.BasicTextUI$UpdateHandler`). Matlab is not as smart as the JVM at automatically converting these internal '$'s into '.'s. Therefore, we can't use the regular simple dot notation in these cases in Matlab, and since '$' is an invalid character in Matlab syntax, we must resort to using the '$' within [`javaObject`](http://www.mathworks.com/help/matlab/ref/javaobject.html), [`javaMethod`](http://www.mathworks.com/help/matlab/ref/javamethod.html), `awtinvoke` and their relatives. For example:

```
Java:   InnerClass c = new com.example.test.SomeEnum.InnerClass();
MATLAB: c = javaObject('com.example.test.SomeEnum$InnerClass')
```

Enums need special handling too. MATLAB's `javaObject` function calls the class constructor, and since enums have no public constructor, we get the following error:

> *No constructor with appropriate signature exists in Java class*

Luckily, [enum has the built-in method `valueOf()`](http://docs.oracle.com/javase/6/docs/api/java/lang/Enum.html) that we can use with `javaMethod`:

```
Java:   SomeEnum e = com.example.test.SomeEnum.MY_FAVORITE_ENUM;
MATLAB: e = javaMethod('valueOf', 'com.example.test.SomeEnum', 'MY_FAVORITE_ENUM');
```

Static fields can be read directly in Matlab using simple dot notation:

```
Java:   int n = com.example.test.Foo.MAX_FOO;
MATLAB: n = com.example.test.Foo.MAX_FOO;
```

For example:

```
redColor = java.awt.Color.red;
```

The full list of static fields can be gotten using Matlab's built-in `struct` function:

```
>> staticFields = struct(java.awt.Color.red)
staticFields =
          white: [1x1 java.awt.Color]
          WHITE: [1x1 java.awt.Color]
      lightGray: [1x1 java.awt.Color]
     LIGHT_GRAY: [1x1 java.awt.Color]
           gray: [1x1 java.awt.Color]
           GRAY: [1x1 java.awt.Color]
       darkGray: [1x1 java.awt.Color]
      DARK_GRAY: [1x1 java.awt.Color]
          black: [1x1 java.awt.Color]
          BLACK: [1x1 java.awt.Color]
            red: [1x1 java.awt.Color]
            RED: [1x1 java.awt.Color]
           pink: [1x1 java.awt.Color]
           PINK: [1x1 java.awt.Color]
         orange: [1x1 java.awt.Color]
         ORANGE: [1x1 java.awt.Color]
         yellow: [1x1 java.awt.Color]
         YELLOW: [1x1 java.awt.Color]
          green: [1x1 java.awt.Color]
          GREEN: [1x1 java.awt.Color]
        magenta: [1x1 java.awt.Color]
        MAGENTA: [1x1 java.awt.Color]
           cyan: [1x1 java.awt.Color]
           CYAN: [1x1 java.awt.Color]
           blue: [1x1 java.awt.Color]
           BLUE: [1x1 java.awt.Color]
         OPAQUE: 1
        BITMASK: 2
    TRANSLUCENT: 3
```

MATLAB's `javaObject` function may not work if the default constructor is private (hidden), and `javaMethod` probably won't work either. If the class with the static methods is nested, you may be out of luck. For my systray utility on the File Exchange, I used the reflection approach, as described in this post: <http://UndocumentedMatlab.com/blog/setting-system-tray-popup-messages/>

Credit: edited by [Mark Mikofski](https://stackoverflow.com/users/1020470/mark-mikofski)
using Java enums or public static fields in MATLAB
[ "", "java", "matlab", "enums", "" ]
Let's say I have a $friends array with 2,000 different friendID numbers, **plus** a $bulletins array with 10,000 bulletinID numbers; the $bulletins array also has another value with the userID of who posted each bulletin entry.

Now, is it possible to get all the bulletinID numbers whose userID matches a userID in the friends array? And if it is possible, would this be fast or slow, or not generally a good method? I am trying to show bulletin-type posts on my site, but only ones posted by a friend of the user; some users have a few thousand friends, and bulletins could number in the thousands, only some of which a user is allowed to view.

Also, if this is possible, could I limit it to only get the first 50 bulletin IDs that match a friendID?
Ok, so it sounds like you have an array of friend IDs that is not associative, i.e. `array(0 => 'userid0', 1 => 'userid1', etc)`, and an array of bulletin IDs that *is* associative, i.e. `array('bulletin1' => 'userid1', 'bulletin2' => 'userid2', etc)`.

Going with that assumption, you can get all the matching bulletins using [array\_intersect()](http://php.net/array_intersect). You can then take the first fifty bulletin keys with [array\_slice()](http://php.net/array_slice):

```
$matchingBulletins = array_intersect($bulletins, $friends);
$first50 = array_slice(array_keys($matchingBulletins), 0, 50);
```

It sounds like you might be getting this data out of a database, however, in which case it would be much more prudent to filter your database results somehow and avoid returning 10,000 IDs each time. You could do the sorting and filtering using `JOIN`s and `WHERE`s on the right tables.
Where are you getting these arrays of thousands of friends/bulletins? If the answer is a relational database (MySQL, PostgreSQL), then this should be done with a SQL query, as it is quite trivial that way and much more efficient than anything you could do in PHP. Here is an example of how this could be done in SQL:

```
SELECT posts.id
FROM posts
JOIN users ON posts.user_id = users.id
JOIN user_friends ON user_friends.user_id = users.id
WHERE posts.type = 'bulletin'
  AND user_friends.user_id = 7
LIMIT 50;
```

Obviously it is written with no knowledge of your actual database structure, if any, and thus will not work as-is, but it should get you on the right path.
Can PHP arrays be used to map two arrays together based on shared values?
[ "", "php", "arrays", "mapping", "" ]
I'm at a stage at which I have not learned internet scripting languages yet, but I do understand JavaScript well enough to edit scripts I find on the web to suit my needs. Lately, I've been searching for an RSS-to-HTML converter, and was surprised to find out that it usually involves PHP. I don't see a reason for JavaScript not to be adequate for the task, so my question is: is it really not? And if so, why? Also, if you can show me some code examples I'll greatly appreciate it. (I do plan to learn JavaScript eventually; I'm not just leeching, I just lack the time at the moment.)
I think the reason most examples use server side scripting is that, since the Javascript same domain policy means you have to request the RSS from your own server anyway, then you may as well transform it into 'display format' on the server side too. Also, if you're doing some sort of Ajaxy stuff then there are better ways of getting the data to the script in the browser than just handing off a full RSS feed. Having said all that, there are ways to parse RSS and similar XML feeds on the client side. One option is to just style the RSS directly using [CSS](http://michiel.wordpress.com/2007/07/21/css-for-xml-in-firefox/) and/or XSLT. I don't think using CSS for this is too common in the real world because you have to use different methods in different browsers, but [transforming XML with XSLT in Firefox is fairly straightforward](http://www.ibm.com/developerworks/library/x-think41/) and I'm fairly sure it's possible in IE and the other browsers too, but XSLT may be a bit beyond your comfort zone. A good source for Javascript examples is the [Google Data APIs](http://code.google.com/apis/gdata/overview.html) as they use the [Atom Publishing Protocol](http://www.atomenabled.org/developers/protocol/) which is conceptually similar to RSS. For example, [here is the Javascript documentation for the Analytics API](http://code.google.com/apis/analytics/docs/gdata/1.0/gdataJavascript.html).
[JQuery](http://jquery.com/) has an XML parser built in. [Here](http://marcgrabanski.com/article/jquery-makes-parsing-xml-easy)'s a great tutorial that details the use of the built-in feature. :)
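To make the client-side option concrete, here is a deliberately naive JavaScript sketch that pulls item titles out of an RSS 2.0 string with a regular expression. It is only a demo of the idea; a real XML parser (jQuery's, as above, or the browser's DOMParser) is the robust choice, since regexes break down on CDATA sections, attributes, and nesting:

```javascript
// Naive sketch: extract the <title> of each <item> from an RSS 2.0 string.
// Assumes plain, well-formed markup with no CDATA; use a real parser otherwise.
function rssItemTitles(rss) {
  var titles = [];
  var re = /<item>[\s\S]*?<title>([\s\S]*?)<\/title>[\s\S]*?<\/item>/g;
  var match;
  while ((match = re.exec(rss)) !== null) {
    titles.push(match[1].trim());
  }
  return titles;
}
```

Once you have the titles (and links, extracted the same way), turning them into HTML is just a matter of building list elements and appending them to the page. The same-domain restriction mentioned above still applies: the RSS string has to come from your own server.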
Converting RSS to HTML
[ "", "javascript", "html", "rss", "" ]
So the clock is 18:37 right now in Sweden, but this prints out 16:37. Why is that?

```
$timestamp = time();
date('M d, H:i', $timestamp);
```

What can be wrong?
Your [`date.timezone`](http://www.php.net/date.timezone) setting in your `php.ini` file is incorrect. Make sure that it is set to the proper value for your timezone:

```
date.timezone = Europe/Stockholm
```

If you do not have access to the `php.ini` file, you can use [`date_default_timezone_set()`](http://www.php.net/date_default_timezone_set) to set it at runtime:

```
date_default_timezone_set('Europe/Stockholm');
```

For a list of supported timezones, refer to the [PHP Documentation](http://www.php.net/timezones).

---

If it still doesn't work, make sure your server is set to the proper timezone. If you've set the time manually and the timezone is incorrect (but since the time has been corrected manually it still shows the proper time), PHP has no way to derive the `UTC` time properly and therefore returns the incorrect time.
It is possible that your server is located in a timezone that is 2 hours behind yours. You can use this page of the [documentation](https://www.php.net/manual/en/datetime.settimezone.php) to fix the timezone issue.
Php clock 2 hours back
[ "", "php", "date", "time", "" ]
I'm trying to get the field info of an array value from within a struct. So far I have the following, but I don't see how to get the information I want.

```
[StructLayout(LayoutKind.Sequential)]
public struct Test
{
    public byte Byte1;

    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 3)]
    public Test2[] Test1;
}

BindingFlags struct_field_flags = BindingFlags.Public |
    BindingFlags.NonPublic | BindingFlags.Static |
    BindingFlags.Instance | BindingFlags.DeclaredOnly;

FieldInfo[] all_struct_fields = typeof(Test).GetFields(struct_field_flags);

foreach (FieldInfo struct_field in all_struct_fields)
{
    if (struct_field.FieldType.IsArray)
    {
        // Get FieldInfo for each value in the Test1 array within the Test structure
    }
}
```

So if I did:

```
Type array_type = struct_field.FieldType.GetElementType();
```

this would return the Test2 type, but I don't want the type of the array; I want the FieldInfo or fields of that structure so I can set values within it.
Sorry for the initial wrong answer. I was too lazy to create my own Test2 type, so I used a string instead. Here's the right answer (hopefully). I did what you want to do with the following code:

```
class Program
{
    static void Main(string[] args)
    {
        object sampleObject = GetSampleObject();

        FieldInfo[] testStructFields = typeof(Test).GetFields();
        foreach (FieldInfo testStructField in testStructFields)
        {
            if (testStructField.FieldType.IsArray)
            {
                // We can cast to IList because arrays implement it and we
                // verified that it is an array in the if statement
                System.Collections.IList sampleObject_test1 =
                    (System.Collections.IList)testStructField.GetValue(sampleObject);

                // We can now get the first element of the array of Test2s:
                object sampleObject_test1_Element0 = sampleObject_test1[0];

                // I hope this is the FieldInfo that you want to get:
                FieldInfo myValueFieldInfo =
                    sampleObject_test1_Element0.GetType().GetField("MyValue");

                // Now it is possible to read and write values
                object sampleObject_test1_Element0_MyValue =
                    myValueFieldInfo.GetValue(sampleObject_test1_Element0);
                Console.WriteLine(sampleObject_test1_Element0_MyValue); // prints 99

                myValueFieldInfo.SetValue(sampleObject_test1_Element0, 55);
                sampleObject_test1_Element0_MyValue =
                    myValueFieldInfo.GetValue(sampleObject_test1_Element0);
                Console.WriteLine(sampleObject_test1_Element0_MyValue); // prints 55
            }
        }
    }

    static object GetSampleObject()
    {
        Test sampleTest = new Test();
        sampleTest.Test1 = new Test2[5];
        sampleTest.Test1[0] = new Test2() { MyValue = 99 };
        object sampleObject = sampleTest;
        return sampleObject;
    }
}

[StructLayout(LayoutKind.Sequential)]
public struct Test2
{
    public int MyValue;
}

[StructLayout(LayoutKind.Sequential)]
public struct Test
{
    public byte Byte1;

    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 3)]
    public Test2[] Test1;
}
```

This is the most important line:

```
FieldInfo myValueFieldInfo =
    sampleObject_test1_Element0.GetType().GetField("MyValue");
```

It should give you the FieldInfo that you are talking about.
What exactly are you after? There *is* no `FieldInfo` for the items in the array... you can iterate the values by getting the array (as `Array`) and iterating it... just use: ``` Array arr = (Array)field.GetValue(obj); ```
How do I get the FieldInfo of an array field?
[ "", "c#", "reflection", "struct", "fieldinfo", "" ]
How do you develop UIs in MFC? Do you use any free library, or do you usually develop from scratch? There are always so many DLL files in C++-developed software; what are they used for? What's the difference between an MFC ActiveX Control and an MFC DLL?
Visual Studio 2008 enhances MFC by adding the 'Feature Pack'. This allows you to create MS Office 2007 style GUIs (amongst others), complete with a Ribbon Bar. <http://msdn.microsoft.com/en-us/library/bb982354.aspx> I cut my C++ teeth using MFC, but I'd recommend you look at **Qt** instead - it's a much more modern framework plus you get cross-platform support (Linux, Mac, etc.) for free. MFC is pretty much a dead framework IMHO (the Feature Pack was bought in and is actually a cut-down version of the BCG library.) <http://www.bcgsoft.com/> If you want to stick with MFC there is another popular GUI framework, by CodeJock: <http://www.codejock.com/products/overview.asp?platform=mfc>
[Get yourself a good book](http://www.amazon.co.uk/MFC-Internals-Microsoft-Foundation-Architecture/dp/0201407213/ref=pd_bxgy_b_img_b) to begin with. There are still some [third parties controls](http://www.codejock.com/products/overview.asp?platform=mfc) if you do not mind spending a bit of money. Finally, [codeproject](http://www.codeproject.com/) has hundreds of MFC examples.
some questions about MFC development?
[ "", "c++", "dll", "mfc", "user-interface", "activex", "" ]
What would be a better choice for making a database-driven Adobe AIR (desktop) application?

1. Using PHP + MySQL with AIR, OR
2. Using SQLite

If I choose SQLite, then I cannot reuse my code for an online application. If I choose option 1, I have to block a few port numbers on the user's machine. Also, I am using XAMPP to provide the user with PHP and MySQL, so XAMPP opens a command window for as long as it's running, and users get confused about what that window is for. It makes the end-user experience slightly confusing.
I'd definitely use SQLite, as it's included in AIR. May I suggest: write your code in two sections, the UI (which uses a JSON feed to populate itself) and the API (which provides the JSON data). When it comes time to port the application to the web, you can use the same UI with a rewritten API.
Whatever you do, don't open up a command window while the program is running. If you do that, your customers will uninstall like there's no tomorrow. As far as mysql vs sqlite, the standard approach is - if it communicates remotely, feel free to use mysql, but if you're installing the db on the client, you should use an embedded standalone db (sqlite). How complex do you expect your app to be that you can't use sqlite (besides not being able to reuse some of the code that you mentioned)?
Adobe AIR with PHP/MySql or SQLite
[ "", "php", "mysql", "sqlite", "air", "xampp", "" ]
Please have a look at the following code: ``` var splashForm = new SplashForm(); m_Thread = new Thread( () => System.Windows.Forms.Application.Run( splashForm ) ); m_Thread.Start(); // Do some initialization // ... // the following method just invokes `Close()` on the right thread splashForm.Shutdown(); // Loop until the thread is no longer alive // ... System.Windows.Forms.Application.Run( mainForm ); ``` It looks as if all works fine: first I see the splashscreen, later the mainform gets started. But somehow I get strange errors, like: graphical elements (an endless ProgressBar) are not showing up correctly. **EDIT**: I have two progressbars, one on the splashscreen, one on the mainform. They both show the same (wrong) behaviour in endless mode: no progress, just the pure background.**/EDIT** In my opinion this is due to the call of `Application.Run()` on different threads. These errors can be eliminated by calling any function/property of the mainForm before starting the splashscreen - like, for instance ``` mainForm.Text = mainForm.Text; ``` Can anyone please confirm that this code can cause problems - or that it should behave alright and I have to look for the error somewhere else? I already looked for splashscreen implementations and I know that it can be done differently. But I am interested in understanding this implementation and its possible problems. Thanks!
OMG, I found the answer: `Application.EnableVisualStyles();` was called in the ctor of my mainForm (WHY?). It has to be called before any controls are created. Moving it to the `static Main()` did the trick. The visual styles are required for endless (ProgressBarStyle.Marquee) progressbars. Now this splashscreen solution works as it should.
The thread on which your SplashForm displays needs to have a Windows message pump in order to process the messages that each window/control produces and consumes. To do that you need to make the thread an STA thread. Try calling `SetApartmentState` **before** starting the thread.
Using Application.Run() on different threads
[ "", "c#", "multithreading", "" ]
I need to explode by "," then ":". ``` $alttitle = "35:title1,36:title2, 59:title5"; $replacetitles = explode(",",$alttitle); for ($i = 0 ; $i < count($replacetitles) ; $i++) { $replacetitle[] = explode(":", $replacetitles[$i]); } ``` Which produces... ``` Array ( [0] => Array ( [0] => 35 [1] => title1 ) [1] => Array ( [0] => 36 [1] => title2 ) [2] => Array ( [0] => 59 [1] => title5 ) ) ``` ...but the numbers 35, 36, 59 are unique, so I want them to become the keys of the array. ``` Array ( [35] => title1 [36] => title2 [59] => title5 ) ```
Simply set it while looping: ``` $alttitle = "35:title1,36:title2, 59:title5"; $tmptitles = explode(",",$alttitle); $replacetitle = array(); foreach($tmptitles as $tmptitle) { $tmparr = explode(":", trim($tmptitle)); $replacetitle[intval($tmparr[0])] = trim($tmparr[1]); } ``` With the above, you create your array in a minimum number of iterations.
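For readers comparing across languages, the same split-then-key idea fits in a one-liner elsewhere; here is a Python sketch of the algorithm (not part of the original PHP answer, just an illustration):

```python
s = "35:title1,36:title2, 59:title5"

# Split on "," first, then split each item on ":" once; int() tolerates
# the stray leading space in " 59", and strip() cleans the title text.
result = {int(k): v.strip()
          for k, v in (item.split(":", 1) for item in s.split(","))}

print(result)  # {35: 'title1', 36: 'title2', 59: 'title5'}
```

The `split(":", 1)` limit mirrors the PHP `explode(":", ...)` call and keeps any further colons inside the title intact.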
Here is another but probably overkill alternative. ``` if (!preg_match_all('/([0-9]+)\s*:\s*([^,]+)/', $alttitle, $matches)) { //not valid } $array = array_combine($matches[1], $matches[2]); print_r($array); ```
Parse string with two sets of delimiters and inconsistent spacing and create a flat associative array
[ "", "php", "arrays", "associative-array", "text-parsing", "delimited", "" ]
I would like to modify the way my C#/.NET application works internally. I have dug into the .NET framework with Reflector and found a pretty good place where I could use a different implementation of a method. This is an internal class in the System.Windows.Forms namespace. You obviously cannot alter the code of this class with the usual means so I thought it would be possible to replace a method in there through reflection at runtime. The method I would like to entirely replace for my application is this: `public static WindowsFontQuality WindowsFontQualityFromTextRenderingHint(Graphics g)` in the class: `internal sealed class System.Windows.Forms.Internal.WindowsFont` Is there any way to load that type and replace the method at runtime, not affecting any other applications that are currently running or started afterwards? I have tried to load the type with `Type.GetType()` and similar things but failed so far.
You *may* be able to do this with the debugger API - but it's a really, really bad idea. For one thing, running with the debugger hooks installed may well be slower - but more importantly, tampering with the framework code could easily lead to unexpected behaviour. Do you know exactly how this method is used, in all possible scenarios, even within your own app? It would also quite possibly have undesirable legal consequences, although you should consult a lawyer about that. I would personally abandon this line of thinking and try to work out a different way to accomplish whatever it is you're trying to do.
1. Anything you do to make this happen would be an unsupported, unreliable hack that could break with any .NET Framework update 2. There's another, more correct, way to do what you are trying to accomplish (and I don't need to know what you're trying to do to know this for certain). Edit: If editing core Framework code is your interest, feel free to experiment with Mono, but don't expect to redistribute your modifications if they are application-specific. :)
Modify an internal .NET class' method implementation
[ "", "c#", ".net", "reflection", "" ]
I started a small project which includes working with MIDI files. I've been wondering, is there any C# or VB.Net code that performs the conversion between MIDI and WAV files?
You could try to somehow interface with [Timidity](http://timidity.sourceforge.net/#info), which is Open Source: > TiMidity++ is a software synthesizer. It can play MIDI files by converting them into PCM waveform data; give it a MIDI data along with digital instrument data files, then it synthesizes them in real-time, and plays. It can not only play sounds, but also can save the generated waveforms into hard disks as various audio file formats. [FluidSynth](http://fluidsynth.resonance.org/trac) is a more recently updated Open Source project in a similar vein: > FluidSynth is a real-time software synthesizer based on the SoundFont 2 specifications. You can download some free SoundFonts (the actual PCM data used by these synthesizers to "render" the MIDI files) from the sites on [this list](http://en.wikipedia.org/wiki/SoundFont#Free_SoundFont_downloads).
[**MIDI files**](http://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) contain only note and controller information, not sounds. In order to get sounds from a MIDI file, you have to pass the file through a music synthesizer or sampler, which will convert the note and controller information into actual sounds. In practice this means that any given MIDI file doesn't have a specific sound to it. The sound that results from converting a MIDI file to audio will vary depending on the quality of the synthesizer or sample library, and the sounds that are selected to perform the conversion. Many sound cards have the capability of producing sound from MIDI files. They can do this because many MIDI files follow a standard called the [**General MIDI specification**](http://en.wikipedia.org/wiki/General_MIDI). The General MIDI Specification provides a standardized way to map specific instrument assignments. If your MIDI file conforms to this standard, you can play it through a General MIDI sound generator and expect a snare drum to sound like a snare drum, and not like a trumpet. If you have a sophisticated music production package like [**Cakewalk**](http://www.cakewalk.com/), you can load a MIDI file into it, and it will use its on-board sound libraries to render a sound file for you, and this can actually be done faster than real-time (i.e. it doesn't have to play the sound through the sound card and capture the output). I guess what I'm trying to say is there's a lot of moving parts to this. There isn't a single piece of code or a class module that will do this for you.
How can I convert between midi to wav/mp3 in c#?
[ "", "c#", "wav", "midi", "" ]
I would like to open into memory an existing .sln file. Example of a non-working method: ``` private Solution2 OpenSolution(string filePath) { Solution2 sln; sln.Open(filePath); return sln; } ``` If I have an instance of Solution2 then I can call the method Open; but **how can I get an instance of Solution2**? My goal is then to get the adequate project and read some of its settings... but that's easy once I have access to the solution.
You can programmatically create a hidden instance of Visual Studio, and then use it to manipulate your solution. This example will list out all the projects that live in the given solution. ``` using System; using System.Runtime.InteropServices; using EnvDTE; using EnvDTE80; namespace so_sln { class Program { [STAThread] static void Main(string[] args) { System.Type t = System.Type.GetTypeFromProgID("VisualStudio.DTE.8.0", true); DTE2 dte = (EnvDTE80.DTE2)System.Activator.CreateInstance(t, true); // See http://msdn.microsoft.com/en-us/library/ms228772.aspx for the // code for MessageFilter - just paste it into the so_sln namespace. MessageFilter.Register(); dte.Solution.Open(@"C:\path\to\my.sln"); foreach (Project project in dte.Solution.Projects) { Console.WriteLine(project.Name); } dte.Quit(); } } public class MessageFilter : IOleMessageFilter { ... Continues at http://msdn.microsoft.com/en-us/library/ms228772.aspx ``` (The nonsense with STAThread and MessageFilter is "due to threading contention issues between external multi-threaded applications and Visual Studio", whatever that means. Pasting in the code from <http://msdn.microsoft.com/en-us/library/ms228772.aspx> makes it work.)
Solution2 is an interface, not a class. You cannot directly make an object of type Solution2, only reference objects as a Solution2 that contain the Solution2 interface. As far as I'm aware, classes that implement the Solution2 interface are only available as part of the interface collection in the Visual Studio integration, so you will have to do something similar to what RichieHindle mentions, and create a new hidden Visual Studio instance to load the solution. If you are just wanting to grab a couple settings out of the sln file, I'd potentially recommend parsing it yourself, the file format is pretty simple. If you are trying to pull a setting out, chances are the odd edge case where parsing the sln yourself wouldn't work also wouldn't work with what you are trying to do if Visual Studio parsed the sln for you.
Open a VS 2005 Solution File (.sln) into memory
[ "", "c#", "visual-studio", "visual-studio-2005", "projects-and-solutions", "vsx", "" ]
If so, is there any limitation to this ability? Specifically, I need to target Mac OSX.
As it turns out, they can.
I have used this before to launch things on a Windows system; I've never tried it on a Mac, though. ``` public void launchScript(String args) { String cmd = null; try { cmd = getParameter(PARAM_CMD); System.out.println("args value : = " + args); System.out.println("cmd value : = " + cmd); System.out.println("Full command: = " + cmd + " " + args); if (cmd != null && !cmd.trim().equals("")) { if (args == null || args.trim().equals("")) { final String tempcmd = cmd; AccessController.doPrivileged(new PrivilegedAction() { public Object run() { try { Runtime.getRuntime().exec(tempcmd); } catch (Exception e) { System.out.println("Caught exception in privileged block, Exception:" + e.toString()); } return null; // nothing to return } }); System.out.println(cmd); } else { final String tempargs = args; final String tempcmd1 = cmd; AccessController.doPrivileged(new PrivilegedAction() { public Object run() { try { Runtime.getRuntime().exec(tempcmd1 + " " + tempargs); } catch (Exception e) { System.out.println("Caught exception in privileged block, Exception:" + e.toString()); } return null; // nothing to return } }); System.out.println(cmd + " " + args); } } else { System.out.println("execCmd parameter is null or empty"); } } catch (Exception e) { System.out.println("Error executing command --> " + cmd + " (" + args + ")"); System.out.println(e); } } ```
Can trusted 1.5 applets execute system commands?
[ "", "java", "security", "macos", "applet", "" ]
I am running applications on my Windows server that run on Java 1.4. In the bottom right-hand corner, Java wants me to update. Is this going to update to Java 1.6, or do these auto-updates only put out non-breaking patches so my system will still be Java 1.4? Any insight would be great! Thanks.
Allowing the update will install a new Java 1.6 JVM on your machine. It very much depends on how your application is set up - if the path to your Java Home folder is coded into run scripts or configuration files, then it probably won't affect you. However, it will probably change the PATH to point to the new Java executable, so if your programs are run just using "java -jar SomeApp.jar" then it will affect you. Java 1.6 should be backwards compatible with Java 1.4 but it would be worth testing on a separate machine first!
There is no such thing as a "non-breaking patch" - it's always possible that your app somehow depends on unspecified behaviour or even an outright bug that was changed/fixed even in a so-called "bugfix release". But apart from that, Java generally has excellent backward compatibility, and the chance is very good that your 1.4 app will run on a Java 6 VM without any changes.
applications running on java 1.4 but windows server says new java updates available
[ "", "java", "updates", "" ]
I need some help in creating an Oracle SQL statement which I will execute in .NET. I need to update a column in a table, but the value used for the update depends on two other values. To give an example: ``` Dim sqlcmd as String Dim collCmd as Collection For x = 1 to intCount sqlcmd = "Update tableA Set col1 = :val1, col2 = :val2 Where...." collcmd.add(sqlcmd) SELECT col1, col2 FROM tableA Where ..... If col1 = 0 and col2 = 0 then sqlcmd = "Update tableB Set col1 = :value Where...." Else sqlcmd = "Update tableB Set col1 = :value Where.." End If collcmd.add(sqlcmd) Next ``` 'Perform the update with transaction here for the collcmd collection. Apparently, I need to place the update in one SQL statement where the condition is met. Kindly advise. I cannot do a one-time ExecuteNonQuery here, since if one of the updates fails I would need to perform a transaction rollback. I am placing all the update statements in one collection and performing the update in one transaction. But the value for tableA may be different on the next iteration. Kindly take note that I cannot place this inside a stored proc, since there are other SQL commands which are executed prior to the statements above. Is there a way to create an SQL statement where the update would go something like: ``` sqlcmd = "UPDATE tableB b IF select a.col1 = 0 and select a.col2 = 0 from tableA a SET b.col1 = "this value" ELSE SET b.col1 = "other value" WHERE...." ``` Thanks.
You need to use the CASE expression, like so: ``` UPDATE tableB SET col1 = CASE WHEN (0, 0) = (SELECT col1, col2 FROM tableA WHERE <tableA constraints> ) THEN 'this value' ELSE 'that value' END WHERE <tableB constraints>; ``` Note also that the CASE is using the (a, b, c) = (SELECT A, B, C ...) syntax, which is a handy but underused SQL feature: you can compare not only single values (scalars) but multiple values (vectors).
Why not write whatever logic you need in a stored procedure/package and call that from .NET?
Help Constructing an Oracle SQL with Condition
[ "", "sql", "oracle", "" ]
I'm using cython for a correlation calculation in my python program. I have two audio data sets and I need to know the time difference between them. The second set is cut based on onset times and then slid across the first set. There are two for-loops: one slides the set and the inner loop calculates correlation at that point. This method works very well and it's accurate enough. The problem is that with pure python this takes more than one minute. With my cython code, it takes about 17 seconds. This still is too much. Do you have any hints on how to speed up this code: ``` import numpy as np cimport numpy as np cimport cython FTYPE = np.float ctypedef np.float_t FTYPE_t @cython.boundscheck(False) def delay(np.ndarray[FTYPE_t, ndim=1] f, np.ndarray[FTYPE_t, ndim=1] g): cdef int size1 = f.shape[0] cdef int size2 = g.shape[0] cdef int max_correlation = 0 cdef int delay = 0 cdef int current_correlation, i, j # Move second data set frame by frame for i in range(0, size1 - size2): current_correlation = 0 # Calculate correlation at that point for j in range(size2): current_correlation += f[<unsigned int>(i+j)] * g[j] # Check if current correlation is highest so far if current_correlation > max_correlation: max_correlation = current_correlation delay = i return delay ```
**Edit:** There's now [`scipy.signal.fftconvolve`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html) which would be the preferred approach to doing the FFT based convolution approach that I describe below. I'll leave the original answer to explain the speed issue, but in practice use `scipy.signal.fftconvolve`. **Original answer:** Using **FFTs** and the **[convolution theorem](http://en.wikipedia.org/wiki/Convolution#Convolution_theorem)** will give you dramatic speed gains by converting the problem from O(n^2) to O(n log n). This is particularly useful for long data sets, like yours, and can give speed gains of 1000s or much more, depending on length. It's also easy to do: just FFT both signals, multiply, and inverse FFT the product. `numpy.correlate` doesn't use the FFT method in the cross-correlation routine and is better used with very small kernels. Here's an example ``` from timeit import Timer from numpy import * times = arange(0, 100, .001) xdata = 1.*sin(2*pi*1.*times) + .5*sin(2*pi*1.1*times + 1.) ydata = .5*sin(2*pi*1.1*times) def xcorr(x, y): return correlate(x, y, mode='same') def fftxcorr(x, y): fx, fy = fft.fft(x), fft.fft(y[::-1]) fxfy = fx*fy xy = fft.ifft(fxfy) return xy if __name__ == "__main__": N = 10 t = Timer("xcorr(xdata, ydata)", "from __main__ import xcorr, xdata, ydata") print 'xcorr', t.timeit(number=N)/N t = Timer("fftxcorr(xdata, ydata)", "from __main__ import fftxcorr, xdata, ydata") print 'fftxcorr', t.timeit(number=N)/N ``` Which gives the running times per cycle (in seconds, for a 10,000 long waveform) ``` xcorr 34.3761689901 fftxcorr 0.0768054962158 ``` It's clear the fftxcorr method is much faster. If you plot out the results, you'll see that they are very similar near zero time shift. Note, though, as you get further away the xcorr will decrease and the fftxcorr won't. 
This is because it's a bit ambiguous what to do with the parts of the waveform that don't overlap when the waveforms are shifted. xcorr treats it as zero and the FFT treats the waveforms as periodic, but if it's an issue it can be fixed by zero padding.
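To make the zero-padding fix concrete: padding both signals out to at least len(x) + len(y) - 1 samples turns the FFT's circular convolution into a linear one, so the result matches numpy.correlate exactly for real-valued inputs. A short sketch:

```python
import numpy as np

def fft_xcorr_padded(x, y):
    # Pad to the full linear-correlation length so the FFT's implicit
    # periodicity cannot wrap one end of the waveform onto the other.
    n = len(x) + len(y) - 1
    fx = np.fft.rfft(x, n)
    fy = np.fft.rfft(y[::-1], n)  # correlation = convolution with reversed y
    return np.fft.irfft(fx * fy, n)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 0.5])
print(np.allclose(fft_xcorr_padded(x, y), np.correlate(x, y, mode='full')))
```

In practice you would round n up to a power of two (or use scipy's FFT helpers) for speed; the padding argument to rfft does the zero-filling for you.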
The trick with this sort of thing is to find a way to divide and conquer. Currently, you're sliding to every position and check every point at every position -- effectively an **O**( n ^ 2 ) operation. You need to reduce the check of *every* point and the comparison of *every* position to something that does less work to determine a non-match. For example, you could have a shorter "is this even close?" filter that checks the first few positions. If the correlation is above some threshold, then keep going otherwise give up and move on. You could have a "check every 8th position" that you multiply by 8. If this is too low, skip it and move on. If this is high enough, then check all of the values to see if you've found the maxima. The issue is the time required to do all these multiplies -- (`f[<unsigned int>(i+j)] * g[j]`) In effect, you're filling a big matrix with all these products and picking the row with the maximum sum. You don't want to compute "all" the products. Just enough of the products to be sure you've found the maximum sum. The issue with finding maxima is that you have to sum *everything* to see if it's biggest. If you can turn this into a minimization problem, it's easier to abandon computing products and sums once an intermediate result exceeds a threshold. (I think this might work. I haven't tried it.) If you used `max(g)-g[j]` to work with negative numbers, you'd be looking for the smallest, not the biggest. You could compute the correlation for the first position. Anything that summed to a bigger value could be stopped immediately -- no more multiplies or adds for that offset, shift to another.
Cython and numpy speed
[ "", "python", "numpy", "cython", "" ]
What is the procedure for signing my code so that when the user clicks on the installer it does not prompt about an unknown vendor? My setup: the application is Java-based, I wrap the jar with Launch4j, and the installer is based on NSIS. My build platform is XP. One other thing: when installation is finished I get a pop-up saying the installation was not successful. It comes from Vista, not NSIS. How can I get rid of it?
To get rid of the warning, you need to sign the setup.exe. Get the Microsoft signtool.exe and a code signing certificate (make sure it is imported on your PC), then just create a batch file: ``` signtool sign /v /s my /n "My Display Company" /d "My Display Application" /du "My Support URL" /t "http://timestamp.verisign.com/scripts/timstamp.dll" path_to_setup.exe ``` SignTool.exe is installed by Visual Studio and by the Platform SDKs; just google a bit.
Have a look at the [JarSigner](http://java.sun.com/j2se/1.3/docs/tooldocs/win32/jarsigner.html). It works using the public key infrastructure, so you'll need to get a key signed by a CA somewhere, which costs $$$. Both Ant and Maven have plugins to do this for you when you build your jar. This needs to be done to your jars before they are wrapped with launch4j and nsis. I'm not familiar with nsis, but if the installer is java based, you may need to sign its output jar as well.
Code Signing for my Lauch4J/NSIS Installer on Windows XP?
[ "", "java", "windows", "nsis", "code-signing", "launch4j", "" ]
I have an application where the memory profile looks something like this: [![Jaggy](https://i.stack.imgur.com/yVNdD.png)](https://i.stack.imgur.com/yVNdD.png) (source: [kupio.com](http://kupio.com/image-dump/spikeymem.png)) The slow upwards crawl of memory usage is caused by the allocation of lots and lots of small, simple, transient objects. In low-memory situations (This is a mobile app) the GC overhead is noticeable when compared to less restrictive memory amounts. Since we know, due to the nature of the app, that these spikes will just keep on coming, I was considering some sort of pool of multitudinous transient objects (Awesome name). These objects would live for the lifetime of the app and be re-used wherever possible (Where the lifetime of the object is short and highly predictable). Hopefully this would mitigate against the effects of GC by reducing the number of objects collected and improve performance. Obviously this would also have its own performance limits since "allocation" would be more expensive and there would be an overhead in maintaining the cache itself. Since this would be a rather large and intrusive change into a large amount of code, I was wondering if anyone had tried something similar and if it was a benefit, or if there were any other known ways of mitigating against GC in this sort of situation. Ideas for efficient ways to manage a cache of re-usable objects are also welcome.
Normally, I'd say this was a job for tuning the GC parameters of the VM to reduce the spikiness, but for mobile apps that isn't really an option. So if the JVMs you are using cannot have their GC behaviour modified, then old-fashioned object pooling may be the best solution. The Apache Commons Pool library is good for that, although if this is a mobile app, then you may not want the library dependency overhead.
This is similar to the [**flyweight**](http://www.dofactory.com/Patterns/PatternFlyweight.aspx) pattern detailed in the GoF patterns book (see edit below). *Object pools* have gone out of favour in a "normal" virtual machine due to the advances made in reducing the object creation, synchronization and GC overhead. However, these have certainly been around for a long time and it's certainly fine to try them to see if they help! Certainly *Object Pools* are still in use for objects which have a very expensive creation overhead when compared with the pooling overheads mentioned above (**database connections** being one obvious example). Only a test will tell you whether the pooling approach works for you on your target platforms! **EDIT** - I took the OP *"re-used wherever possible"* to mean that the objects were immutable. Of course this might not be the case and the flyweight pattern is really about immutable objects being shared (`Enum`s being one example of a flyweight). A mutable (read: unshareable) object is not a candidate for the flyweight pattern but is (of course) for an object pool.
Is it worth mitigating against the effects of garbage collection?
[ "", "java", "memory-management", "caching", "java-me", "garbage-collection", "" ]
I am using Apache DBCP with JNDI in my Tomcat container to pool connections to a MySQL database. Everything works fine, but a problem I am seeing is that once a connection is pooled it is never released. So after a load spike, connections sit there forever, sleeping. Is there a way to shrink the pool through context.xml parameters? Here is my configuration: ``` defaultAutoCommit="false" defaultTransactionIsolation="REPEATABLE_READ" auth="Container" type="javax.sql.DataSource" logAbandoned="true" removeAbandoned="true" removeAbandonedTimeout="300" maxActive="-1" initialSize="15" maxIdle="10" maxWait="10000" username="user" password="password" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost/mydb"/> ```
Try to set `minEvictableIdleTimeMillis` to a lower value than its default, which is `1000*60*30`.
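One caveat worth adding: in commons-dbcp the idle-object evictor thread is disabled by default (`timeBetweenEvictionRunsMillis` defaults to a negative value), so `minEvictableIdleTimeMillis` alone has no effect. A hedged sketch of the relevant `<Resource>` attributes; the attribute names are standard commons-dbcp settings, but the values here are purely illustrative, not tuned recommendations:

```xml
<!-- Merge these eviction attributes into the existing <Resource> element.
     minIdle: floor the pool may shrink to; the evictor runs every 30 s
     and removes connections idle longer than 60 s. Values are examples. -->
<Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
          minIdle="5"
          timeBetweenEvictionRunsMillis="30000"
          minEvictableIdleTimeMillis="60000"
          numTestsPerEvictionRun="5" />
```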
Try using the c3p0 connection pool.
How to shrink DB Connection Pool?
[ "", "java", "tomcat", "jndi", "connection-pooling", "apache-commons-dbcp", "" ]
I need to find a regular expression that would be able to work around an issue I am having. > Query: barfly london > > Should match: Camden **Barfly**, 49 Chalk Farm Road, **London**, NW1 8AN I've tried many, many regexes for this, but none have worked so far. I am considering that maybe I will need to split the search into two separate queries for it to work. Could anyone point me in the right direction? I'm a bit new to this area.
Try this: ``` var str = "Camden Barfly, 49 Chalk Farm Road, London, NW1 8AN"; alert(/barfly/i.test(str) && /london/i.test(str)); // true only when both terms appear ``` Note that a single alternation such as /barfly|london/gi combined with a match-count check is unreliable: it would also report success when the same word matches twice.
I'd suggest not using regexs if you want to search for two string literals, and instead use normal string searching twice: ``` var test="Camden Barfly, 49 Chalk Farm Road, London, NW1 8AN" if ((test.indexOf("Barfly") != -1) && (test.indexOf("London") != -1)) { alert("Matched!"); } ``` If you're not concerned about case-sensitivity, then you can just lowercase/uppercase your test string and your string literals accordingly.
JavaScript: Searching for a string inside a string
[ "", "javascript", "regex", "" ]
Is there any way for all my PHP and/or HTML file output to be "filtered" before being displayed in the browser? I figured that I could pass it through a global function before it is displayed but I'm stuck on the implementation. Please help. If there is a better way to achieve the same result, I'd be happy to know. Thanks.
Check out [ob\_start](http://php.net/ob_start) which lets you pass a callback handler for post-processing your script output. For example, PHP includes a built-in callback [ob\_gzhandler](http://php.net/ob_gzhandler) for use in compressing the output: ``` <?php ob_start("ob_gzhandler"); ?> <html> <body> <p>This should be a compressed page.</p> </html> <body> ``` Here's a fuller example illustrating how you might tidy your HTML with the [tidy extension](http://php.net/manual/en/book.tidy.php): ``` function tidyhtml($input) { $config = array( 'indent' => true, 'output-xhtml' => true, 'wrap' => 200); $tidy = new tidy; $tidy->parseString($input, $config, 'utf8'); $tidy->cleanRepair(); // Output return $tidy; } ob_start("tidyhtml"); //now output your ugly HTML ``` If you wanted to ensure all your PHP scripts used the same filter without including it directly, check out the [auto\_prepend\_file](http://php.net/manual/en/ini.core.php#ini.auto-prepend-file) configuration directive.
You can use output buffering and specify a callback when you call [`ob_start()`](http://php.net/manual/en/function.ob-start.php) ``` <?php function filterOutput($str) { return strtoupper($str); } ob_start('filterOutput'); ?> <html> some stuff <?php echo 'hello'; ?> </html> ```
Making all PHP file output pass through a "filter file" before being displayed
[ "", "php", "filter", "" ]
I'm using `DataContractJsonSerializer`, which likes to output to a Stream. I want to top-and-tail the outputs of the serializer so I was using a StreamWriter to alternately write in the extra bits I needed. ``` var ser = new DataContractJsonSerializer(typeof (TValue)); using (var stream = new MemoryStream()) { using (var sw = new StreamWriter(stream)) { sw.Write("{"); foreach (var kvp in keysAndValues) { sw.Write("'{0}':", kvp.Key); ser.WriteObject(stream, kvp.Value); } sw.Write("}"); } using (var streamReader = new StreamReader(stream)) { return streamReader.ReadToEnd(); } } ``` When I do this I get an `ArgumentException` "Stream was not readable". I'm probably doing all sorts wrong here so all answers welcome. Thanks.
Three things: * Don't close the `StreamWriter`. That will close the `MemoryStream`. You do need to flush the writer though. * Reset the position of the stream before reading. * If you're going to write directly to the stream, you need to flush the writer first. So: ``` using (var stream = new MemoryStream()) { var sw = new StreamWriter(stream); sw.Write("{"); foreach (var kvp in keysAndValues) { sw.Write("'{0}':", kvp.Key); sw.Flush(); ser.WriteObject(stream, kvp.Value); } sw.Write("}"); sw.Flush(); stream.Position = 0; using (var streamReader = new StreamReader(stream)) { return streamReader.ReadToEnd(); } } ``` There's another simpler alternative though. All you're doing with the stream when reading is converting it into a string. You can do that more simply: ``` return Encoding.UTF8.GetString(stream.GetBuffer(), 0, (int) stream.Length); ``` Unfortunately `MemoryStream.Length` will throw if the stream has been closed, so you'd probably want to call the `StreamWriter` constructor that doesn't close the underlying stream, or just don't close the `StreamWriter`. I'm concerned by you writing directly to the the stream - what is `ser`? Is it an XML serializer, or a binary one? If it's binary, your model is somewhat flawed - you shouldn't mix binary and text data without being very careful about it. If it's XML, you may find that you end up with byte-order marks in the middle of your string, which could be problematic.
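The flush-then-rewind ordering is not specific to .NET; any in-memory stream API behaves the same way. A small Python illustration of the pitfall, with io.BytesIO standing in for MemoryStream:

```python
import io

buf = io.BytesIO()
buf.write(b'{"key": "value"}')

# Reading immediately yields nothing: the position sits at the end of the data.
print(buf.read())  # b''

# Rewind first (the equivalent of stream.Position = 0), then read.
buf.seek(0)
print(buf.read())  # b'{"key": "value"}'
```

The same two-step pattern (flush any buffering writer, then reset the position) applies before handing the stream to a reader.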
Setting the memory stream's position to the beginning might help. ``` stream.Position = 0; ``` But the core problem is that the StreamWriter is closing your memory stream when it is closed. Simply flushing that stream where you end the using block for it, and only disposing of it *after* you have read the data out of the memory stream, will solve this for you. You may also want to consider using a [StringWriter](http://msdn.microsoft.com/en-us/library/system.io.stringwriter.aspx) instead... ``` using (var writer = new StringWriter()) { using (var sw = new StreamWriter(stream)) { sw.Write("{"); foreach (var kvp in keysAndValues) { sw.Write("'{0}':", kvp.Key); ser.WriteObject(writer, kvp.Value); } sw.Write("}"); } return writer.ToString(); } ``` This would require that your serialization WriteObject call can accept a TextWriter instead of a Stream.
Writing to then reading from a MemoryStream
[ "", "c#", "" ]
We currently have a number of columns in the database which are of type `varchar`. The application that uses them are in C# and it uses Linq2Sql for the communication (or what to call it). We would like to support unicode characters, which means we would have to convert the `varchar` columns into `nvarchar`. Is this a safe operation? Is it just a matter of changing the column type and updating the dbml file, or are there more stuff that needs to be done? Any changes in the C# code? Do I need to somehow convert the text that already exist in the database manually, or is it handled for me?
It should be pretty safe to do from varchar/char to nvarchar/nchar. I have done it several times and have not experienced any problems. But this is based only on my experience!
Also bear in mind that the maximum length for an nvarchar column is 4000, which is half of varchar's 8000.
Is it safe to convert varchar and char into nvarchar and nchar in SQL Server?
[ "", "c#", "sql-server", "linq-to-sql", "unicode", "types", "" ]
Given the table: ``` CREATE TABLE Table1 ( UniqueID int IDENTITY(1,1) ...etc ) ``` Now why would you ever set the increment to something other than 1? I can understand setting the **initial seed value** differently. For example if, say, you're creating one database table per month of data (e.g. `Table1_082009`, `Table1_092009`) and want to start the UniqueID of the new table where the old one left off. (I probably wouldn't use that strategy myself, but hey, I can see people doing it). But for the **increment**? I can only imagine it being of any use in really odd situations, for example: * after the initial data is inserted, maybe later someone will want to turn identity insert on and insert new rows in the gaps, but for efficient lookup on the index will want the rows to be close to each other? * if you're looking up ids based directly off a URL, and want to make it harder for people to arbitrarily access the other items (for example, instead of the user being able to work out that changing the URL suffix from `/GetData?id=1000` to `/GetData?id=1001`, you set an increment of 437 so that the next url is actually `/GetData?id=1437`)? Of course if this is your "security" then you're probably already in trouble... I can't think of anything else. Has anyone used an increment that wasn't 1, and why? I'm really just curious.
One idea might be using this to facilitate partitioning of data *(though there might be more "automated" ways to do that)* : * Considering you have two servers : + On one server, you start at 1 and increment by 2 + On the other server, you start at 2 and increment by 2. * Then, from your application, you send half the inserts to one server, and the other half to the second server + some kind of software load-balancing This way, you still have the ability to identify your entries : the "UniqueID" is still unique, even if the data is split on two servers / tables. But that's only a wild idea -- there are probably some other uses for that...
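The interleaving described above can be sketched quickly (illustrative Python, not tied to any particular database): two identity sequences with the same increment but different seeds never collide, so the combined key space stays unique.

```python
def identity(seed, increment):
    """Generator mimicking an IDENTITY(seed, increment) column."""
    value = seed
    while True:
        yield value
        value += increment

server_a = identity(1, 2)   # 1, 3, 5, ...
server_b = identity(2, 2)   # 2, 4, 6, ...

ids_a = [next(server_a) for _ in range(5)]
ids_b = [next(server_b) for _ in range(5)]

combined = ids_a + ids_b
assert len(combined) == len(set(combined))   # no collisions across "servers"
print(ids_a, ids_b)
```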
Once, for pure fun (oh yeah, we have a *wild* side to us), we decided to use a negative increment. It was strange to see the numbers grow in size and smaller in value at the same time. I could hardly sit still in my chair. ***edit*** (afterthought): You think the creator of the IDENTITY was in love with FOR loops? You know.. ``` for (i = 0; i<=99; i+=17) ``` or for those non semi-colon folks out there ``` For i = 0 to 100 step 17 ```
When would you ever set the increment value on a database identity field?
[ "", "sql", "sql-server", "database", "identity", "" ]
I am trying to automate testing of a winform application. I am running it in the same process as the test code, so it is fairly easy to find the .Net controls and simulate user action on them. However, I got stuck with a message box (created using the standard MessageBox.Show method). How can I get hold of it and simulate that a button is pressed?
You will probably have to use WinAPI calls (FindWindowEx, etc.) and send left-mouse-button down and up messages to the button's handle.
I'd advise treating the underlying disease rather than the symptom. Take a few minutes to read these * [the Humble Dialog box by Michael Feathers](http://www.objectmentor.com/resources/articles/TheHumbleDialogBox.pdf) * [User Interrogator by Tim Haughton](http://osdir.com/ml/programming.test-first-user-interfaces/2005-11/msg00021.html) In short, use an interface to separate out all modal dialog pop-ups - which are a pain in the neck for UI test automation. You can then substitute a mock implementation of the interface that does nothing or returns predetermined test values. The real implementation of course pops up the actual dialog modally... something like this (from the 2nd link) ``` public class UserInterrogator : IUserInterrogator { private Form owner; public UserInterrogator(Form owner) { this.owner = owner; } public Font GetFontFromUser() // member of the IUserInterrogator interface { FontDialog fd = new FontDialog(); fd.ShowDialog( owner ); return fd.Font; } } ``` The easier approach is of course to write some code that finds the dialog and closes/kills it. I've seen some people have some success with Win32 APIs and NUnitForms ...
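The mock-the-dialog idea above is language-neutral. Here is a minimal sketch in Python (names are hypothetical, not from the linked articles) of the same pattern: the code under test depends only on an interrogator interface, and the test substitutes a stub that returns a canned value instead of showing a modal dialog.

```python
class StubInterrogator:
    """Test double: answers 'user questions' without showing a modal dialog."""
    def get_font_from_user(self):
        return "Arial 12pt"   # canned value for the test

def apply_user_font(interrogator, document):
    # Code under test: depends only on the interrogator interface,
    # so it never blocks on a real dialog during automated runs.
    document["font"] = interrogator.get_font_from_user()
    return document

doc = apply_user_font(StubInterrogator(), {"text": "hello"})
print(doc["font"])   # Arial 12pt
```

In production you would pass an implementation that actually shows the dialog; the test suite never does.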
Programmatically finding a message box and generating a click on a button
[ "", "c#", ".net", "winforms", "user-interface", "testing", "" ]
I use a [class for detecting email addresses](http://www.leshazlewood.com/?p=5) which uses static final booleans to configure the matching behavior. Since I upgraded to Eclipse 3.5 I get warnings about dead code, since Eclipse notices that one branch in this can not be reached: ``` private static final boolean ALLOW_DOMAIN_LITERALS = false; private static final String domain = ALLOW_DOMAIN_LITERALS ? rfc2822Domain : rfc1035DomainName; ``` Oddly enough it is happy with this: ``` private static final String domain; static { if(ALLOW_DOMAIN_LITERALS) { domain = rfc2822Domain; } else { domain= rfc1035DomainName; } } ``` since it seems to recognize the common `if(DEBUG)` pattern, but the ternary operator doesn't seem to count. Since I'd rather not fork the class too much just to keep Eclipse happy, I'd prefer putting an `@SuppressWarnings` at the top instead of changing the code. Unfortunately I can't find a matching one apart from the brute-force `"all"`. Is there a value just for the dead code detection?
**UPDATE**: from Adam's comment: In Eclipse 3.6 and newer Eclipse versions `@SuppressWarnings("unused")` can now be used to suppress 'dead code' warnings. See [Christopher Stock's answer](https://stackoverflow.com/a/17998010/122442). See also [Eclipse 4.4 (Luna) help](http://help.eclipse.org/luna/topic/org.eclipse.jdt.doc.user/tasks/task-suppress_warnings.htm) for @SuppressWarnings. *Original answer:* All SuppressWarnings values Eclipse 3.5 "knows" are listed on [this page](http://help.eclipse.org/galileo/topic/org.eclipse.jdt.doc.user/tasks/task-suppress_warnings.htm). It seems that there is no value for suppressing only the new dead-code detection. But you can use `@SuppressWarnings("all")` just before the `domain` declaration so it will suppress warnings only for that line, not for the whole class: ``` private static final boolean ALLOW_DOMAIN_LITERALS = false; @SuppressWarnings("all") private static final String domain = ALLOW_DOMAIN_LITERALS ? rfc2822Domain : rfc1035DomainName; ``` Because the dead code check is a new one, you can also suggest an enhancement in the [Eclipse bug database](https://bugs.eclipse.org/bugs/) for supporting the ternary operator as well.
Select `Ignore` in `Windows -> Preferences > Java > Compiler > Errors/Warnings` under `Potential programming problems` section
How do I suppress Eclipse 3.5's warnings of dead code
[ "", "java", "eclipse", "warnings", "suppress-warnings", "" ]
I'm trying to write a custom method to populate a ListView control using Generics: ``` private void BindDataToListView(List<T> containerItems) { this.View = View.Details; this.GridLines = true; this.FullRowSelect = true; if (this.Items.Count > 0) this.Items.Clear(); this.BeginUpdate(); int i = 0; foreach (T item in containerItems) { // do something } this.EndUpdate(); } ``` The parameter containerItems can have many items since I'm using generics. But I get stuck in the foreach loop. How do I access the values in containerItems? Do I have to use reflection on each instance of T in the foreach loop? I think I do to retrieve the property name. But once I have the property name of the type T, how do I retrieve the value?
The most common way of doing this (with winforms) is via `TypeDescriptor`; this allows you to treat things like `DataTable` the same as classes; the "full" pattern is quite complex (and involves checking for `IListSource`, `ITypedList`, etc.); however, the short version is: to get the available properties: ``` PropertyDescriptorCollection props = TypeDescriptor.GetProperties(typeof(T)); ``` To get a named property: ``` PropertyDescriptor prop = props[propName]; ``` To get a value for an instance (`sourceObject`): ``` object val = prop.GetValue(sourceObject); ``` To render a value as a string (using the designated converter): ``` string s = prop.Converter.ConvertToString(val); ```
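As a rough analogue of the lookup-by-name step (illustrative Python, not the .NET `TypeDescriptor` API), the idea is: find a property on an arbitrary object by its name, read its value, then convert it to a display string.

```python
class Foo:
    """Hypothetical item type; stands in for the generic T above."""
    def __init__(self, name, count):
        self.name = name
        self.count = count

def display_value(obj, prop_name):
    value = getattr(obj, prop_name)   # reflection-style lookup by name
    return str(value)                 # stand-in for the type-converter step

item = Foo("widget", 3)
print(display_value(item, "count"))   # 3
```

The same two steps — resolve the property by name, then convert its value for display — are what the `TypeDescriptor` snippets above perform for each ListView cell.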
You could limit T to an interface, and use that interface in the iteration.
How do I write a method that can output data from any class using generics?
[ "", "c#", ".net", "generics", "data-binding", "" ]
I am using [this](http://www.weberdev.com/get_example-4595.html) script for the email attachment, but when I execute the form on my server it shows two warnings: > Warning: fread(): supplied argument is not a valid stream resource > > Warning: fclose(): supplied argument is not a valid stream resource Please help.
Change this: ``` $fp = fopen($strresume, "rb"); $file = fread($fp, $strresume_size); ``` to this: ``` $fp = fopen($filetemp, "rb"); $file = fread($fp, $strresume_size); ```
I use this function to send HTML email with an attachment: <http://www.barattalo.it/2010/01/10/sending-emails-with-attachment-and-html-with-php/>
php email attachment
[ "", "php", "email", "" ]
How do I go about looking into a table and searching to see if a row exists? The background behind it: the table is called enemies. Every row has a unique id and is set to auto\_increment. Each row also has a unique value called monsterid. The monsterid isn't auto\_increment. When a monster dies the row is deleted and replaced by a new row, so the id is always changing, and the monsterid is changed too. In PHP I am using the $\_GET method, and the monsterid is passed through it. Basically I am trying to do this: $monsterID = 334322 //this is the id passed through the $\_GET checkMonsterId = "check to see if the monster id exists within the enemies table" if monsterid exists then {RUN PHP} else {RUN PHP} If you need any more clarity please ask, and thanks for the help in advance.
Use `count`! If it returns > 0, it exists; else, it doesn't. ``` select count(*) from enemies where monsterid = 334322 ``` You would use it in PHP thusly (after connecting to the database): ``` $monsterID = mysql_real_escape_string($monsterID); $res = mysql_query('select count(*) from enemies where monsterid = ' . $monsterID) or die(); $row = mysql_fetch_row($res); if ($row[0] > 0) { //Monster exists } else { //It doesn't } ```
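A safer variant of the same check uses parameter binding instead of manual escaping. A hedged sketch (Python with SQLite purely for illustration — the answer above targets MySQL's PHP API, and the table layout here is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enemies (id INTEGER PRIMARY KEY, monsterid INTEGER)")
conn.execute("INSERT INTO enemies (monsterid) VALUES (?)", (334322,))

def monster_exists(conn, monster_id):
    # The ? placeholder lets the driver handle quoting/escaping,
    # which removes the SQL-injection risk entirely.
    row = conn.execute(
        "SELECT COUNT(*) FROM enemies WHERE monsterid = ?", (monster_id,)
    ).fetchone()
    return row[0] > 0

print(monster_exists(conn, 334322))  # True
print(monster_exists(conn, 999999))  # False
```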
The `mysql_real_escape_string` is important to prevent [SQL injection](http://en.wikipedia.org/wiki/Sql_injection). ``` $monsterid = mysql_real_escape_string($_GET['monsterid']); $query = intval(mysql_query("SELECT count(*) FROM enemies WHERE monsterid = '$monsterid')); if (mysql_result > 0) { // monster exists } else { // monster doesn't exist } ```
checking to see if data exist with in a table
[ "", "php", "mysql", "" ]
For my site I have a RewriteRule that points the URL <http://www.mysite.com/work> to a work.php file. I also have a directory called "work" that has files in it, like project1.php, project2.php, etc... What rules would I have to write so that the URL <http://www.mysite.com/work> knows to go to the work.php file, but the URL <http://www.mysite.com/work/project1> knows I mean to go inside the directory "work" and display the project1.php file? **EDIT**: Should point out, this is what I'm currently working with: ``` RewriteEngine On RewriteBase /beta/ RewriteRule ^([a-z]+)$ $1.php [L] ``` Any additional tips to improve this security-wise? (Stopping directory jumping, etc...)
Try this: ``` RewriteEngine On RewriteBase / RewriteRule ^work$ /work.php [QSA,L] ``` That will ensure that `http://www.mysite.com/work` (no trailing slash) will go to your `work.php` file. If you also want `http://www.mysite.com/work/` (with trailing slash) to go to `work.php`, add this line just above the last `RewriteRule`. ``` RewriteRule ^work/$ /work [R=301,QSA,L] ``` That will redirect it to the URL with no trailing slash, thus displaying the `work.php` file. **UPDATE:** Since you already have a `RewriteBase` directive, just put the `RewriteRule` line(s) right after your `RewriteBase` but before your `RewriteRule`, as the rule you're using is a catch-all and will match everything.
This is the answer to your question. It works for me. ``` RewriteEngine On RewriteBase / RewriteRule ^([a-z]+)$ $1.php [L] ``` Simply remove `beta/`
.htaccess rewriting url to page or directory
[ "", "php", ".htaccess", "" ]
How do I have actions occur when a field gets changed in one of my models? In this particular case, I have this model: ``` class Game(models.Model): STATE_CHOICES = ( ('S', 'Setup'), ('A', 'Active'), ('P', 'Paused'), ('F', 'Finished') ) name = models.CharField(max_length=100) owner = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) started = models.DateTimeField(null=True) state = models.CharField(max_length=1, choices=STATE_CHOICES, default='S') ``` and I would like to have Units created, and the 'started' field populated with the current datetime (among other things), when the state goes from Setup to Active. I suspect that a model instance method is needed, but the docs don't seem to have much to say about using them in this manner. **Update:** I've added the following to my Game class: ``` def __init__(self, *args, **kwargs): super(Game, self).__init__(*args, **kwargs) self.old_state = self.state def save(self, force_insert=False, force_update=False): if self.old_state == 'S' and self.state == 'A': self.started = datetime.datetime.now() super(Game, self).save(force_insert, force_update) self.old_state = self.state ```
Basically, you need to override the `save` method, check if the `state` field was changed, set `started` if needed and then let the model base class finish persisting to the database. The tricky part is figuring out if the field was changed. Check out the mixins and other solutions in this question to help you out with this: * [Dirty fields in django](https://stackoverflow.com/questions/110803/dirty-fields-in-django)
It has been answered, but here's an example of using signals, post\_init and post\_save. ``` from django.db.models.signals import post_save, post_init class MyModel(models.Model): state = models.IntegerField() previous_state = None @staticmethod def post_save(sender, instance, created, **kwargs): if instance.previous_state != instance.state or created: do_something_with_state_change() @staticmethod def remember_state(sender, instance, **kwargs): instance.previous_state = instance.state post_save.connect(MyModel.post_save, sender=MyModel) post_init.connect(MyModel.remember_state, sender=MyModel) ```
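Stripped of the framework, the remember-then-compare pattern that both answers above rely on looks like this (plain Python with illustrative names, not actual Django models): snapshot the field at load time, compare at save time, re-snapshot after saving.

```python
import datetime

class Game:
    """Framework-free sketch of the state-change detection pattern."""
    def __init__(self, state="S"):
        self.state = state
        self.started = None
        self._old_state = state        # snapshot at "load" time

    def save(self):
        # Fires only on the Setup -> Active transition.
        if self._old_state == "S" and self.state == "A":
            self.started = datetime.datetime.now()
        # ... persist to storage here ...
        self._old_state = self.state   # re-snapshot after saving

game = Game()
game.state = "A"
game.save()
print(game.started is not None)   # True
```

The same two hooks map onto Django's `__init__`/`post_init` (take the snapshot) and `save`/`post_save` (compare and act).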
Actions triggered by field change in Django
[ "", "python", "django", "django-models", "" ]
I have a list of around 5,000 to 10,000 (individual user supplied) email addresses from people all over the world, each associated with their username and language codes. I also have a single message translated into the different languages of the users that I want to email. Now, I would like to send a single plain text email to each of the address, with the actual text of the email varying based on the user language, and personalised with the username of the person I'm emailing. Because of the personalised requirement, and the fact that they will only be emailed once (per year or two with a overlapped but different user list), formal mailing list is probably (& preferably) out. 3rd party bulk email service is also out. Ignoring programming time, what is the least manually time consuming way to do this in (preferably) PHP? (I am writing the script(s), but not necessarily the person that end up "pressing the button" to send it.) The ideal result is the person sending only having to type a single command to run the script (supplying the email list) and all the email will be sent with no more user intervention. This mean I am looking to avoid things like setting up cron jobs to run the script repeatedly until the email list is exhausted. When this was done before a year ago, I wrote a PHP script that simply read in the email list line by line processing the username, email address, and language code and build the desired email text out of that before supplying it to PHPMailer to send individually. The problem I had was the script timing out and me not knowing where it got up to so that I can trim the email list at the right place to start again. I ended up manually splitting up the 1 email list into several sub-list that was short enough so that the script doesn't time out. How do I either avoid timing out, or keep track of where the script is up to email address wise so that it can be restarted manually and no person is sent emails more than once? 
What other issues are there to take into account, such as avoiding blacklisting etc.?
You need to read about the function set\_time\_limit and maybe ini\_set('memory\_limit', 'xxMB') for memory; You can run a php cli script from your web process, which forks ([pcntl\_fork](http://php.net/manual/en/function.pcntl-fork.php)) then kills the parent (the parent cli process exits). The webserver thread running the script can then continue with other code or exit. Now the cli child process (A) can fork (and become the parent) and monitor a child (B) which sends out the emails. If child B dies, the now-parent A process can fork again and the new child continues where the previous one left off. You have to keep track of who you sent email to in a file/db/shared memory or through paired sockets ([socket\_create\_pair](http://php.net/manual/en/function.socket-create-pair.php)) with the parent process. I hope you get the idea.
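The keep-track-of-progress idea from the answer above can be sketched in a few lines (illustrative Python, not PHP; the function names are hypothetical): append each address to a log as soon as it is processed, and skip logged addresses on restart, so an interrupted run never double-sends.

```python
import os

def send_all(addresses, sent_log, send):
    """Process addresses, skipping any already recorded in sent_log."""
    done = set()
    if os.path.exists(sent_log):
        with open(sent_log) as f:
            done = {line.strip() for line in f}
    with open(sent_log, "a") as log:
        for addr in addresses:
            if addr in done:
                continue              # handled in a previous run
            send(addr)                # stand-in for the real mail call
            log.write(addr + "\n")
            log.flush()               # so a crash mid-run loses nothing
```

Rerunning the script with the same log file picks up exactly where the previous run stopped, which removes the need to split the list manually after a timeout.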
PHP's not the right tool for the job here - you'll want something that runs independent of a web server (although potentially triggered by it) or you'll certainly run into timeouts. For example, you could have PHP [system()](http://us.php.net/system) out to another script (Perl? Python?), which reads job information from a database then forks into the background to do its work.
Personalised bulk email programmatically without timing out
[ "", "php", "email", "timeout", "queue", "bulk", "" ]
I'm thinking of creating an application that can use Firefox as a download manager. Is there any way to control Firefox (add downloads, start/stop downloads, etc) from an external program in C/C++? If that is not possible, then perhaps an extension that can do that? If an extension is the only way, then how do I communicate with the extension from outside of Firefox?
You're starting with a solution, not a problem. The easier idea is to use XulRunner, the platform on which FireFox is built. You'd effectively implement your own application as a XulRunner plugin and use Necko (the network layer of XulRunner and FireFox) from there.
First of all I suggest that you familiarize yourself with [developer.mozilla.org](https://developer.mozilla.org/). As far as I understand, most Mozilla platform functions are available through a cross-language API known as [XPCOM](https://developer.mozilla.org/en/XPCOM_API_Reference). There's also a [plugin API](https://developer.mozilla.org/en/Plugins) but its primary aim is to visualize stuff (used by Flash, etc.).
Controlling Firefox from C/C++
[ "", "c++", "firefox-addon", "" ]
This is generally how I manage progressive enhancement whilst keeping the experience clean, but how safe is it? Is there potential for a race condition and this not working? Imagine the simple abstract scenario, you want to display something differently if you have javascript support.. this is generally what I will end up doing: ``` <div id="test">original</div> <script type="text/javascript"> var t = document.getElementById('test'); t.innerHTML = 'changed'; </script> ``` Many may claim you should use a framework and wait for a domready event, and do changes there.. however there is a significant delay where the 'test' element will have already been rendered before the end of the document and the css are ready and a domready triggers.. thus causing a noticeable flicker of 'original'. Is this code liable to race condition failures? Or can I guarantee that an element is discoverable and modifiable if it exists before the script? Thanks in advance.
You can, but there are issues surrounding doing it. First off, in IE if you attempt to manipulate a node that has not been closed (e.g. BODY before its close tag which should be below your JS) then you can encounter IE's "OPERATION ABORTED" error which will result in a blank page. Manipulation of a node includes appending nodes, moving nodes, etc. In other browsers the behavior is undefined, however they do usually behave as you would expect. The main issue is that as your page evolves, the page may load/parse/run differently. This may cause some script to run before a browser defines referenced elements have actually been created and made available for DOM manipulation. If you are attempting to enhance your user perceived performance (i.e. snappiness). I highly suggest that you avoid this path and look into lightening your pages. You can use Yahoo's YSlow/Google's Page Performance Firebug to help you get started. [Google's Page Speed](http://code.google.com/speed/page-speed/) [Yahoo's YSlow](http://developer.yahoo.com/yslow/)
You can manipulate the DOM before it has fully loaded, but it can be risky. You obviously can't guarantee that the bit of the DOM you are trying to manipulate actually exists yet, so your code may fail intermittently.
Is it ok to manipulate dom before ready state?
[ "", "javascript", "dom", "progressive-enhancement", "domready", "" ]
I wrote a blogging system from scratch (<http://seanhess.net>). I have the last 10 posts displayed on the index page `/`, and each post has its own page `/post/a_simple_post`. I'm getting a good rank in google when I search for specific info from my posts, but google links to the index page instead of the post page. How do I get the search engine to drill into those post links? ``` <div class="blog_post"> <div class="info"> <span class="tags"> <a href="/tag/framework">Framework</a> <a href="/tag/php">PHP</a> <a href="/tag/tutorial">Tutorial</a> </span> <span class="date">August 03, 2009</span> </div> <div class="content"> <h1><a href="/posts/example_post">Example Post</a></h1> <p>Paragraph</p> </div> </div> ```
Google probably displays your home page because it thinks it is a relevant answer to the search the user did... One way that would help make your "post" pages more important is to display the full content of the post on the home page for only the last post ; and, for the next ones, only display some excerpt, or a summary, or something like that. It would make your post pages more important... But that would also mean having a "less important" home page... Which may or may not be good. Google also uses links from other websites : if many important sites link to your homepage, and only a few link to your post pages, google will think the home page is more important than those. As your blog is about PHP, one nice thing could be to be syndicated on <http://www.planet-php.net/> : it allows many people to see your blog entries -- and is nice for visibility too *(both to users and to google, as it has a high pagerank, I suppose)* Still, google is probably already exploring your whole site : if there are links to your post pages *(there are on the home page, at least)*, google will visit those one day or another... One thing I just noticed, though : your first post was in june, and you've been active only for something like a month and a half ; it is not **that long**, especially if not many websites have links to yours... In the end, there is only one secret : the more you'll write interesting stuff, the more people will find your blog interesting, the more they'll talk about it and include links to it, the more google will see about it too, the higher you'll be in results, and so on ;-) *But, yes, it takes time... Especially if you want to only write interesting posts -- and you should not post crap just to have lots of content !* I just saw you have a first blog on <http://code.seanhess.net/> and that you now have another one on <http://seanhess.net/> ; do you think it would be wise *(depends on your content, on what you want and all that !)* to move all blog-posts from the first one to the new one, adding permanent redirections on the old pages to the new ones ? You might also want to take a look at some articles on the net, as well as some questions/answers here on SO, that could give you some useful advice. For instance : * [Is SEO knowledge important for web developers?](https://stackoverflow.com/questions/312347/is-seo-knowledge-important-for-web-developers) * [Using SEO-friendly links](https://stackoverflow.com/questions/975240/using-seo-friendly-links) * [What kind of SEO Technique need to apply for my website…](https://stackoverflow.com/questions/1005320/what-kind-of-seo-technique-need-to-apply-for-my-website) * [SEO URL Structure](https://stackoverflow.com/questions/621380/seo-url-structure) * and probably lots of others -- provided you know what you are searching for, you'll find lots of interesting "tips/techniques" And if you search with... google for instance... you might find many interesting articles on the net about that too...
The search engines already do it, unless your `robots.txt` file specifies otherwise, or some special attributes in your `<a>` tag (which it seems you are not using). I think the only problem you are having is that your index page ranks much better than your sub-pages. This might be because people link to your index page.
Get Search Engines to link to sub-pages instead of index
[ "", "php", "html", "ruby", "seo", "" ]
I wrote a little command-line application in Java and wanted to use the new class java.io.Console for this. I use System.console() to get an instance of this class. This call returns a working console if I call my application via 'java -jar MyApp.jar', but it is unset if I execute the application via the java task of ant. fork is true and spawn is false for this call. Why is there this difference (System.out.print() works fine under ant)? How can I use a Console even if I start my App via ant?
The Javadoc for [this method](http://java.sun.com/javase/6/docs/api/java/lang/System.html#console()) states: > Returns the unique Console object associated with the current Java virtual machine, if any. And the docs for the [`System.Console`](http://java.sun.com/javase/6/docs/api/java/io/Console.html) class state: > Whether a virtual machine has a console is dependent upon the underlying platform and also upon the manner in which the virtual machine is invoked. If the virtual machine is started from an interactive command line without redirecting the standard input and output streams then its console will exist and will typically be connected to the keyboard and display from which the virtual machine was launched. If the virtual machine is started automatically, for example by a background job scheduler, then it will typically not have a console. I would imagine that when Ant forks a new Java process it redirects standard output.
System.console() returns *null* if input or output is redirected. Ant just does that.
Why is System.console() set if executed with java and unset if executed via ant?
[ "", "java", "ant", "console", "" ]
How would I check if all the div's with class test are hidden. And if they are all hidden set wrap1 to hidden. Thanks. ``` <div id='wrap1'> <div class="header">Header 1</div> <div class='test'><a href="#">Test 1</a></div> <div class='test'><a href="#">Test 2</a></div> <div class='test'><a href="#">Test 3</a></div> </div> ``` UPDATE: Thanks everyone for the really quick answers, I got it working. They were all very helpful.
You can do the check by using a selector as suggested above, and do it like this: ``` if ( $("div.test:visible").length === 0) $("#wrap1").hide( ); ```
This snippet will loop all `<div id="wrap#">` and hide them if the test are hidden. ``` $("div[id^='wrap']").each(function() { var wrap = $(this); if(wrap.children("div[class^='test']:visible").length == 0) { wrap.hide(); } else { wrap.show(); } }); ``` If you still want to keep your `<div id="wrap#">` visible if there are no test at all (as in none in the markup), you can use the following modified snippet: ``` $("div[id^='wrap']").each(function() { var wrap = $(this); if(wrap.children("div[class^='test']").length > 0 && wrap.children("div[class^='test']:visible").length == 0) { wrap.hide(); } else { wrap.show(); } }); ``` There is no compelling reason to number classes (other than in edge cases). Your numbering complicates the above code as well as your CSS. It would be easier just to remove the numbering from `test`. (You don't need it as you can always select a subset of them using `:lt(index)`, `:gt(index)`, `:eq(index)`, `:first` and `:last`. As for numbering ids, that's fine since each id must be unique.
Use jQuery to check if all div's are hidden
[ "", "javascript", "jquery", "html", "css", "dom", "" ]
Log4net is a little too good at not throwing errors. I am trying to create some kind of handler that fires if log4net can not start or dies and can no longer log. I am aware of the app settings key to turn on log4net's internal debugging (log4net.Internal.Debug). I don't need all the debugging information all the time though, just if there is an issue with log4net. Does anyone have a way they have programmatically captured and handled errors in log4net?
Well, log4net will make it very hard for you to do this, since it will (by design) suppress exceptions which are thrown during logging operations. That's because a production system shouldn't fail because of an error while formatting a log message. It's worth trying with very simple pattern layouts, as sometimes it's the use of %P{XYZ} elements in pattern layouts which cause problems if the corresponding properties are not properly set. If everything works as expected with a simple pattern layout, you can add the things you need back in one at a time and see if you can pinpoint the problem that way.
Something that may work for you is to create a class that implements [IErrorHandler](http://logging.apache.org/log4net/release/sdk/log4net.Core.IErrorHandler.html) for each appender, then configure each appender to use the custom error handling class. This should give you more control than enabling log4net.Internal.Debug. I've just given this a try and it works (note that in my sample `Logger` is a log4net logger defined elsewhere - the aim behind this is to capture errors from an SMTP appender and log them to a file): ``` using System; using log4net.Core; namespace Test { public class SmtpErrorHandler : IErrorHandler { public void Error(string message) { Logger.Log.Error(message); } public void Error(string message, Exception ex) { Logger.Log.Error(message, ex); } public void Error(string message, Exception ex, ErrorCode errorCode) { Logger.Log.Error(String.Format("{0}{1}Error code: {2}", message, Environment.NewLine, errorCode), ex); } } } ``` Where I configure my appender (of course you can do this in the config too): ``` emailAppender.ErrorHandler = new SmtpErrorHandler(); ```
How to receive events if log4net stops logging
[ "", "c#", "log4net", "" ]
I have developed a C# COM component which I am using from managed C++. On my dev machine everything works fine. However, when I distribute the files, I receive an error that the component has not been registered. When I try regsvr32 on the DLL it gives me an error (C# DLLs cannot be registered). How do I properly register this COM DLL?
You use [regasm](http://msdn.microsoft.com/en-us/library/tzat5yw6%28VS.71%29.aspx) with `/codebase` (and it needs to be ComVisible [but as [Patrick McDonald](https://stackoverflow.com/users/61989/patrick-mcdonald) correctly points out, you've already got past that as it works locally])
Use RegAsm with the switch `/codebase` if your assembly is not installed in the GAC (Global Assembly Cache). [Details of further switches are here](http://msdn.microsoft.com/en-us/library/tzat5yw6(VS.71).aspx)
Register a C# COM component?
[ "", "c++", "com", "unmanaged", "regsvr32", "" ]
I would like to get a list of all the users who have been assigned a certain role. I could write my own SQL but I would like to use the api as much as possible.
There are generally no Drupal API functions for this sort of task (pulling up entities that match certain criteria). It tends to focus on single-entity CRUD functions in the API; everything else is up to a developer's own SQL skills. The Views module allows you to build lists of users filtered by role, permission, etc -- but it could easily be overkill.
You can use entity\_load to get an array of users. Here is a sample that builds a list of all emails for admin users (used to send a notification): ``` <?php $users = entity_load('user'); $emails = ''; foreach($users as $user) { if (array_key_exists(3, $user->roles)) { if (strlen($emails) > 0) { $emails .= ' ' . $user->mail; } else { $emails = $user->mail; } } } ?> ```
Is it possible to use the Drupal api to get a list of users?
[ "", "php", "drupal", "" ]
According to the question: [Is C# code faster than Visual Basic.NET code](https://stackoverflow.com/questions/1223660/is-c-code-faster-than-visual-basic-net-code)? It was said that C# and VB.NET generate the same CLR code in the end. But when I'm using code-behind versus inline code, is there a performance difference (ignoring the language used)?
Inline code *can* require compilation the first time the request is made. After that (or if it's precompiled), there's absolutely zero difference between them. By the way, even if it requires compilation, the speed difference should be insignificant as ASP.NET will have to compile a source file anyway. The difference will come down to adding a few lines of code in a large source file!
Yes, ish... If you're compiling at run-time you're always going to be more expensive than something which doesn't have to, but that compile will be cached (if you will) after the first request so you'll get zero difference from then on. There's probably someone who knows another reason, but to my mind the only realistic purpose for inline is the ability to make hot fixes without a rebuild + redeploy: the kind of thing you might do in small or early-stage dev projects. Personally I also find inline just a little... aesthetically displeasing.
Is codebehind faster than inline code?
[ "", "c#", "asp.net", "" ]
This question is based [on the answer](https://stackoverflow.com/questions/1182798/to-improve-a-relation-figure-for-a-database/1182807#1182807). I would like to know how you can hash your password by SHA1 and then remove the clear-text password in a MySQL database by Python. **How can you hash your password in a MySQL database by Python?**
As the documentation says, you should use the [hashlib](http://docs.python.org/library/hashlib.html) library, not the sha module, since Python 2.5. It is pretty easy to make a hash. ``` hexhash = hashlib.sha512("some text").hexdigest() ``` This hex string will be easy to store in a database.
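If you are storing these hashes for passwords, a per-user salt is usually added so that identical passwords don't share a digest. A minimal sketch of that idea (the function names here are illustrative, not a standard API):

```python
import hashlib
import os


def hash_password(password, salt=None):
    """Return (salt_hex, digest_hex) suitable for storing in the database."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.sha512(salt + password.encode("utf-8")).hexdigest()
    return salt.hex(), digest


def verify_password(password, salt_hex, digest_hex):
    """Recompute the digest with the stored salt and compare."""
    recomputed = hashlib.sha512(
        bytes.fromhex(salt_hex) + password.encode("utf-8")
    ).hexdigest()
    return recomputed == digest_hex
```

Both hex strings fit in ordinary text columns, and the clear-text password never needs to be written to the database.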
If you're storing passwords in a database, a recommended article to read is Jeff's [You're Probably Storing Passwords Incorrectly](http://www.codinghorror.com/blog/archives/000953.html). This article describes the use of *salt* and some of the things about storing passwords that are deceptively easy to get wrong.
To SHA512-hash a password in MySQL database by Python
[ "", "python", "mysql", "database", "hash", "" ]
Which way is better practice: return a value from a method inside a `using` statement, or declare a variable before, set it inside, and return it after? ``` public int Foo() { using(..) { return bar; } } ``` or ``` public int Foo() { int b = 0; using(..) { b = bar; } return b; } ```
I prefer the first example. Fewer variables, fewer lines of code, easier to follow, easier to maintain... ``` public int Foo() { using(..) { return bar; } } ```
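The reason the first form is safe is that a `using` block guarantees disposal even when you return from inside it. Python's `with` statement gives the same guarantee, so the behavior can be sketched and run directly (a rough analogy to the C# semantics, not a translation):

```python
class Resource:
    def __init__(self):
        self.disposed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs even if the enclosing function returns from inside the block,
        # just like Dispose() at the end of a C# using statement.
        self.disposed = True
        return False


def foo(resource):
    with resource:
        return 42  # returning here still triggers __exit__


r = Resource()
result = foo(r)
print(result, r.disposed)  # 42 True
```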
Following the "less is more" principle (really just a variant of [KISS](http://en.wikipedia.org/wiki/KISS_principle)), the former. There are fewer lines of code to maintain, no change in semantics and no loss of readability (arguably this style is easier to read).
Best practice regarding returning from using blocks
[ "", "c#", "coding-style", "using-statement", "" ]
I'm creating a jQuery plugin that is rather large in scope. In fact, the plugin technically consists of a few plugins that all work together. ``` (function($){ $.fn.foo = function(){ //plugin part A } $.fn.bar = function(){ //plugin part B } $.fn.baz = function(){ //plugin part C } }(jQuery)) ``` Is it possible to namespace jQuery plugins such that the minor plugins could be functions of the larger plugin? ``` $.fn.foo.bar = function(){} $.fn.foo.baz = function(){} ``` This would avoid polluting the jQuery function namespace. You could then call the plugins like ``` $('#example').foo() $('#other_example').foo.bar() ``` The issue I have run into when trying this out myself is that the functions declared as properties of the foo() plugin function don't have their references to 'this' set properly. 'this' ends up referring to the parent object and not the jQuery object. Any ideas or opinions would be appreciated. -Matt
As soon as you use `$.fn.foo.bar()` -- `this` points to `$.fn.foo`, which is what you would expect in JavaScript (`this` being the object that the function is called on.) I have noticed in plugins from jQuery UI (like sortable) where you call functions like: ``` $(...).sortable("serialize"); $(...).sortable({options}); ``` If you were doing something like this - you could extend jQuery itself: ``` $.foo_plugin = { bar: function() {}, baz: function() {} } $.fn.foo = function(opts) { if (opts == 'bar') return $.foo_plugin.bar.call(this); if (opts == 'baz') return $.foo_plugin.baz.call(this); } ```
I know this has already been answered but I have created a plugin that does exactly what you want: <http://code.google.com/p/jquery-plugin-dev/source/browse/trunk/jquery.plugin.js> I've included a small example below, but check out this jQuery Dev Group post for a more in-depth example: <http://groups.google.com/group/jquery-dev/browse_thread/thread/664cb89b43ccb92c/72cf730045d4333a?hl=en&q=structure+plugin+authoring#72cf730045d4333a> It allows you to create an object with as many methods as you want: ``` var _myPlugin = function() { // return the plugin namespace return this; } _myPlugin.prototype.alertHtml = function() { // use this the same way you would with jQuery alert($(this).html()); } $.fn.plugin.add('myPlugin', _myPlugin); ``` now you can go: ``` $(someElement).myPlugin().alertHtml(); ``` There are, of course, many, many other possibilities with this as explained in the dev group post.
jQuery Plugin Namespacing Functions
[ "", "javascript", "jquery", "jquery-plugins", "" ]
I am starting to learn C# and wanted to create an actual app. Which of the two would you start with?
Personally, I would learn WPF. We use a lot of Winforms, but we're in the process of migrating to WPF. I think that's a more future proof set of skills. [WPF Virtual Labs](http://msdn.microsoft.com/en-us/aa740389.aspx) are a good place to start.
My rule of thumb is to choose the most recent technology that doesn't require your users to go through extra effort. If you have to support Windows XP, then WinForms is the way to go as it doesn't require XP users to install .NET updates. If you don't have to worry about XP, then WPF is probably the way to go.
Should I continue learning C# with Windows Forms or WPF Applications?
[ "", "c#", "wpf", "winforms", "" ]
Okay, now I've got this thing I need to do with Javascript but I have no idea the best way of doing it! I'll explain by telling you how I "think" it should work... 1. Inside the web page there will be multiple span tag like this: `<span onload="myfunction(1);"></span>` 2. myfunction() (hosted externally) will wait until the page finishes loading and collect all the information from each of the onload events. 3. Then myfunction() will send this information to a external php page for processing. 4. The php page will return some html. (Probably in an array because I want to do this in 1 call) 5. myfunction() will then replace the span tags with the html. Steps 3, 4, and 5 I know how to do but steps 1 and 2 I'm not sure how to achieve? I've listed all my steps here just in case someone sees another big problem I might run into.
You can try to use code similar to the following: ``` function collector(){ var spans = document.getElementsByTagName("SPAN"); var commands = []; for (var i=0; i<spans.length; i++) if (spans[i].getAttribute("onload")) commands.push(spans[i].getAttribute("onload")); //here you have all onload commands in the array and can go to stage 3 } if (window.addEventListener) window.addEventListener("load",collector,false); else window.attachEvent("onload",collector); ```
Since the onload event is supported only by `<body>`, `<frame>`, `<frameset>`, `<iframe>`, and `<img>`, nothing will happen for spans. I would recommend you put an id on every span and also add something like this: ``` <body onload="collector.run()"> <span id="s1"></span> <script> collector.delayFunction("s1",/data/) </script> <span id="s2"></span> <script> collector.delayFunction("s2",/data/) </script> <span id="s3"></span> <script> collector.delayFunction("s3",/data/) </script> <span id="s4"></span> <script> collector.delayFunction("s4",/data/) </script> </body> //function in js file somewhere above var collector=(function (){ this.stack={}; this.delayFunction= function(id,data){ this.collector.stack[id]=data; } this.run=function(){// this function will process all collected data } })(); ```
Javascript: collect info from multiple function onload events until web page finishes loading
[ "", "javascript", "onload-event", "" ]
Does anyone know of a way to get only POST parameters from an HttpServletRequest object? For example, PHP has the $\_POST superglobal, and Perl's CGI.pm will only retrieve POST parameters if the HTTP method is POST (by default). HttpServletRequest.getParameter(String) will include the ~~GET~~ URL parameters even if the HTTP method is POST.
I guess one way might be to manually parse `HttpServletRequest.getQueryString()` and check that a parameter is not present in it. A naive implementation (ignoring url-escaped key values) would go something like this (untested) : ``` public boolean isInQuery(HttpServletRequest request, String key) { String query = request.getQueryString(); String[] nameValuePairs = query.split("&"); for(String nameValuePair: nameValuePairs) { if(nameValuePair.startsWith(key + "=")) { return true; } } return false; } ```
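The same filtering idea can be sketched in Python, where the standard library handles the URL-escaping that the naive version above ignores (the function name and the shape of the merged parameter map are illustrative):

```python
from urllib.parse import parse_qs


def body_only_params(query_string, all_params):
    """Return the parameters that did NOT come from the URL query string.

    all_params maps every parameter name to its value, the way a servlet
    container merges body and URL parameters together.
    """
    url_params = parse_qs(query_string)  # handles URL-escaped keys/values
    return {k: v for k, v in all_params.items() if k not in url_params}


# A POST to /action?id=3 with body "name=bob&id=4":
merged = {"id": "3", "name": "bob"}  # the container merges both sources
print(body_only_params("id=3", merged))  # {'name': 'bob'}
```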
From my understanding, there are no such things as POST parameters and GET parameters in HTTP; there are POST and GET methods. When a request is made using the POST method, parameters go within the message body. In the case of a GET request, parameters go in the URL. My first thought was that it was an implementation bug in your servlet container. But, since things are not always as you expect, the Java servlet specification (at least the 2.4 version) does not differentiate between the two kinds of parameters. So, there is no way to obtain POST-only or URL-only parameters using the servlet API. Surely you already have a plan B. But, just in case, I post two alternatives that came to my mind: 1. If you have access to the parameter name definition, you could use a prefix to differentiate between the two when you iterate the getParameterNames() result. 2. You could parse the URL, creating a URL object and using the getQuery() method to obtain just the parameters. Then, parse the parameters in the query string using a utility class such as [ParameterParser](http://hc.apache.org/httpclient-3.x/apidocs/org/apache/commons/httpclient/util/ParameterParser.html) in the [HttpClient](http://hc.apache.org/httpclient-3.x/) library. And finally, subtract those names from the getParameterNames() result.
Retrieve POST parameters only (Java)
[ "", "java", "post", "servlets", "" ]
In C#, can you make a class visible only within its own namespace without living in a different assembly? This seems useful for typical helper classes that shouldn't be used elsewhere. (i.e. what Java calls package-private classes)
I don't think that what you want is possible.
You can make the classes `internal` but this only prevents anyone outside of the *assembly* from using the class. But you still have to make a *separate assembly* for each namespace that you want to do this with. I'm assuming that is why you *wouldn't* want to do it. **Getting the C# Compiler to Enforce Namespace Visibility** There is an article here ([Namespace visibility in C#](http://fonp.blogspot.com/2008/12/namespace-visibility-in-c.html)) that shows a method of using partial classes as a form of "fake namespace" that you might find helpful. The author points out that this doesn't work perfectly and he discusses the shortcomings. The main problem is that C# designers **designed C# *not* to work this way.** This deviates heavily from expected coding practices in C#/.NET, which is one of the .NET Frameworks greatest advantages. It's a neat trick… now ***don't* do it.**
Namespace-only class visibility in C#/.NET?
[ "", "c#", ".net", "" ]
I am having the following problem under Internet Explorer 7/8: I have a popup that gets activated when user mouseover a link. The popup is a simple `<div>` that contains some data. Inside this `<div>` tag there is a `<select>` tag with some `<option>`s. I have attached mouseover/mouseout events to the `<div`>, so that this popup will stay open while cursor is over it. The problem comes when you click on the `<select>` and then move the cursor over any of the `<option>`s. This triggers the mouseout event of the `<div>` tag and respectively closes it. How can I prevent the closing of the popup in IE ?
You should be able to detect if the situation is the one you want just with the values off the event. It is a little convoluted but it seems to work. In the event handler of your outer `div`, do something like this: ``` <div onmouseover="if (isReal()) { toggle(); }" onmouseout="if (isReal()) { toggle(); }"> </div> ``` Then implement the `isReal` method: ``` function isReal() { var evt = window.event; if (!evt) { return true; } var el; if (evt.type === "mouseout") { el = evt.toElement; } else if (evt.type === "mouseover") { el = evt.fromElement; } if (!el) { return false; } while (el) { if (el === evt.srcElement) { return false; } el = el.parentNode; } return true; } ``` Basically the `isReal` method just detects if the event was coming from within the div. If so, then it returns false which avoids calling the hide toggle.
My suggestion would be to set another flag while the select box has focus. Do not close the div while the flag is set.
Internet Explorer and <select> tag problem
[ "", "javascript", "jquery", "internet-explorer", "jquery-events", "" ]
I have a design and object structuring related question. Here is the problem statement: 1. I have a Robot object which is suppose to traverse the ground on its own. It would be provided movement instructions and it must parse accordingly. For example sample input would be: a. RotateRight|Move|RotateLeft|Move|Move|Move Where move is a unit movement on a grid. I did a very basic design in java. (Complete Code Pasted below) ``` package com.roverboy.entity; import com.roverboy.states.RotateLeftState; import com.roverboy.states.RotateRightState; import com.roverboy.states.State; public class Rover { private Coordinate roverCoordinate; private State roverState; private State rotateRight; private State rotateLeft; private State move; public Rover() { this(0, 0, Compass.NORTH); } public Rover(int xCoordinate, int yCoordinate, String direction) { roverCoordinate = new Coordinate(xCoordinate, yCoordinate, direction); rotateRight = new RotateRightState(this); rotateLeft = new RotateLeftState(this); move = new MoveState(this); } public State getRoverState() { return roverState; } public void setRoverState(State roverState) { this.roverState = roverState; } public Coordinate currentCoordinates() { return roverCoordinate; } public void rotateRight() { roverState = rotateRight; roverState.action(); } public void rotateLeft() { roverState = rotateLeft; roverState.action(); } public void move() { roverState = move; roverState.action(); } } package com.roverboy.states; public interface State { public void action(); } package com.roverboy.entity; import com.roverboy.states.State; public class MoveState implements State { private Rover rover; public MoveState(Rover rover) { this.rover = rover; } public void action() { rover.currentCoordinates().setXCoordinate( (Compass.EAST).equalsIgnoreCase(rover.currentCoordinates() .getFacingDirection()) ? 
rover.currentCoordinates() .getXCoordinate() + 1 : rover.currentCoordinates() .getXCoordinate()); rover.currentCoordinates().setXCoordinate( (Compass.WEST).equalsIgnoreCase(rover.currentCoordinates() .getFacingDirection()) ? rover.currentCoordinates() .getXCoordinate() - 1 : rover.currentCoordinates() .getXCoordinate()); rover.currentCoordinates().setYCoordinate( (Compass.NORTH).equalsIgnoreCase(rover.currentCoordinates() .getFacingDirection()) ? rover.currentCoordinates() .getYCoordinate() + 1 : rover.currentCoordinates() .getYCoordinate()); rover.currentCoordinates().setYCoordinate( (Compass.SOUTH).equalsIgnoreCase(rover.currentCoordinates() .getFacingDirection()) ? rover.currentCoordinates() .getYCoordinate() - 1 : rover.currentCoordinates() .getYCoordinate()); } } package com.roverboy.states; import com.roverboy.entity.Rover; public class RotateRightState implements State { private Rover rover; public RotateRightState(Rover rover) { this.rover = rover; } public void action() { rover.currentCoordinates().directionOnRight(); } } package com.roverboy.states; import com.roverboy.entity.Rover; public class RotateLeftState implements State { private Rover rover; public RotateLeftState(Rover rover) { this.rover = rover; } public void action() { rover.currentCoordinates().directionOnLeft(); } } package com.roverboy.entity; public class Coordinate { private int xCoordinate; private int yCoordinate; private Direction direction; { Direction north = new Direction(Compass.NORTH); Direction south = new Direction(Compass.SOUTH); Direction east = new Direction(Compass.EAST); Direction west = new Direction(Compass.WEST); north.directionOnRight = east; north.directionOnLeft = west; east.directionOnRight = north; east.directionOnLeft = south; south.directionOnRight = west; south.directionOnLeft = east; west.directionOnRight = south; west.directionOnLeft = north; direction = north; } public Coordinate(int xCoordinate, int yCoordinate, String direction) { this.xCoordinate = 
xCoordinate; this.yCoordinate = yCoordinate; this.direction.face(direction); } public int getXCoordinate() { return xCoordinate; } public void setXCoordinate(int coordinate) { xCoordinate = coordinate; } public int getYCoordinate() { return yCoordinate; } public void setYCoordinate(int coordinate) { yCoordinate = coordinate; } public void directionOnRight() { direction.directionOnRight(); } public void directionOnLeft() { direction.directionOnLeft(); } public String getFacingDirection() { return direction.directionValue; } } class Direction { String directionValue; Direction directionOnRight; Direction directionOnLeft; Direction(String directionValue) { this.directionValue = directionValue; } void face(String directionValue) { for(int i=0;i<4;i++) { if(this.directionValue.equalsIgnoreCase(directionValue)) break; else directionOnRight(); } } void directionOnRight() { directionValue = directionOnRight.directionValue; directionOnRight = directionOnRight.directionOnRight; directionOnLeft = directionOnRight.directionOnLeft; } void directionOnLeft() { directionValue = directionOnLeft.directionValue; directionOnRight = directionOnLeft.directionOnRight; directionOnLeft = directionOnLeft.directionOnLeft; } } ``` Now my doubt is with this last class "Direction" and "Coordinate". coordinate represents a coordinate object for rover which helps it maintain its direction. Currently to keep track of direction I am using a doubly linked list of Direction objects, which pretty much work like a compass. Rotate left or right. Here are the questions that I have. 1. I have used state pattern and shown design for direction tracking. Is there a better approach to simplify even this? Rem. I need to maintain coordinates correctly; such that if you move towards +y axis, my coordinates should be in + else in minus. Same for X axis. 2. Currently the responsibility for changing the face of the rover is indirectly delegated to coordinates and to direction class. Is this really correct? 
Isn't rover responsible for maintaining direction? Am I really right in my design to delegate that responsibility down to coordinate and direction class; just because it is easier to manipulate it there? 3. Any simple design improvements and suggestions on code will be most welcome. Feel free to critique. Thanks for your patience and feedback; in advance.
You're asking how to simplify. If I may suggest something bold, why not use an opaque int for direction and have a static class to deal with it? By "opaque int" I mean that your code would never use it directly, but only as an argument to the Direction class. Here's some partial Java-styled pseudocode to show what I mean. ``` // 0 = east, 1 = north, 2 = west, ... public class Direction { static int[] moveX = {1, 0, -1, 0}; static int[] moveY = {0, 1, 0, -1}; static final int NORTH = 1; // coordinates after moving one step in the given direction static Pair move(int direction, Pair old) { return new Pair(old.x + moveX[direction], old.y + moveY[direction]); } static int turnLeft(int direction) { return (direction+1) % 4; } static int turnRight(int direction) { return (direction+3) % 4; } } ``` This way of doing things would have the advantage of using fewer allocations, so the garbage collector won't need to run as often. Another advantage is that the design remains object-oriented in the sense that you can easily change the direction class if later you want to be able to rotate by e.g. 45 degrees at a time. To answer your other questions, I think it's perfectly fine to delegate to the Direction class the task of changing a coordinate along a certain direction. The rover would be responsible for maintaining direction only in the sense that the rover object would contain an int field to store the direction it's facing.
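The table-driven arithmetic is easy to sanity-check; here is the same idea as runnable Python, using the 0 = east, 1 = north, 2 = west, 3 = south ordering from the sketch:

```python
MOVE_X = [1, 0, -1, 0]   # east, north, west, south
MOVE_Y = [0, 1, 0, -1]


def turn_left(d):
    return (d + 1) % 4   # counter-clockwise: east -> north -> west -> south


def turn_right(d):
    return (d + 3) % 4   # adding 3 mod 4 is the same as subtracting 1


def move(d, x, y):
    return x + MOVE_X[d], y + MOVE_Y[d]


EAST, NORTH = 0, 1
print(turn_left(EAST) == NORTH)             # True
print(move(NORTH, 0, 0))                    # (0, 1)
print(turn_right(turn_left(EAST)) == EAST)  # True
```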
Here's a Direction enum I came up with the other day, of which I am perhaps inordinately fond. Perhaps you will find it useful in your code. ``` import java.awt.Point; public enum Direction { E(1, 0), N(0, 1), W(-1, 0), S(0, -1); private final int dy; private final int dx; private Direction(int dx, int dy) { this.dx = dx; this.dy = dy; } public Direction left() { return skip(1); } public Direction right() { return skip(3); } public Direction reverse() { return skip(2); } private Direction skip(int n) { final Direction[] values = values(); return values[(ordinal() + n) % values.length]; } public Point advance(Point point) { return new Point(point.x + dx, point.y + dy); } } ```
A question of design and object responsibility
[ "", "java", "design-patterns", "oop", "" ]
I like ASP.Net MVC Authorize attribute, I can extend it and build my own logic and decorate my controller with it. BUT, In my architecture, I have one common service layer(C# Class Library). End user can access my application via ASP.Net MVC web site or via my exposed REST WCF Webservice layer. My asp.net MVC application and REST WCF service layer both in turn access my common service layer. I want authorization to happen in this common service layer and not in ASP.Net MVC Controller or in my exposed REST Service layer. Can I create ASP.Net MVC Authorize attribute like thing to decorate my methods in the common C# class library? This attribute will take parameters and will decide if the current user has access to perform that function or not? Thanks & Regards, Ajay
What you're looking for can be achieved using an AOP library like PostSharp (<http://www.postsharp.org/>). It's more complex than using the Authorize attribute in MVC, but it is still quite simple.
Another way to handle this is to use the `[PrincipalPermission]` attribute in your service layer. This can prevent callers from executing a method (or accessing an entire class) without the defined authorization.
Writing Custom Attribute in C# like ASP.Net MVC Authorize attribute
[ "", "c#", "asp.net-mvc", "wcf", "attributes", "" ]
I'm using `file_get_contents` to get a certain file's contents -- so far that is working. Now I want to search the file and replace all `<a href="` with `<a href="site.php?url=` before showing the file. How can I do this? I know I should use some kind of `str_replace` or even `preg_replace`. But I don't know how to actually search and do it for the file I'm getting with `file_get_contents`.
[`file_get_contents`](http://php.net/file_get_contents) returns a string containing the file's content. So, you can work on this string using whichever string manipulation function you'd want, like the ones you talked about. Something like this, using str\_replace, would probably do: ``` $content = file_get_contents('http://www.google.com'); $new_content = str_replace('<a href="', '<a href="site.php?url=', $content); echo $new_content; ``` But note it will only replace the URL in the `href` attribute when that attribute is the first one of the `<a` tag... Using a regex **might** help you a bit more; but it probably won't be perfect either, I'm afraid... If you are working with an HTML document and want a "full" solution, using [`DOMDocument::loadHTML`](http://php.net/manual/en/domdocument.loadhtml.php) and working with DOM manipulation methods might be another (a bit more complex, but probably more powerful) solution. The answers given to those two questions might also be able to help you, depending on what you are willing to do: * [underlinks also](https://stackoverflow.com/questions/1245452/php-filegetcontents-also-get-the-pictures) * [file\_get\_contents - also get the pictures](https://stackoverflow.com/questions/1245452/php-filegetcontents-also-get-the-pictures) --- **EDIT** after seeing the comment: If you want to replace two strings, you can pass arrays as the first two parameters of `str_replace`. For instance: ``` $new_content = str_replace( array('<a href="', 'Pages'), array('<a href="site.php?url=', 'TEST'), $content); ``` With that: * '`<a href="`' will be replaced by '`<a href="site.php?url=`' * and '`Pages`' will get replaced by '`TEST`' And, quoting the manual: > If search and replace are arrays, then str\_replace() takes a value from each array and uses them to do search and replace on subject. If replace has fewer values than search, then an empty string is used for the rest of replacement values. If search is an array and replace is a string, then this replacement string is used for every value of search. If you want to replace all instances of '`<a href="`', well, that's what `str_replace` does by default :-)
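The array behavior quoted from the manual can be mimicked for experimentation; here is a rough Python analog of str\_replace with array arguments (Python's str.replace handles one pair at a time, so the pairs are applied in order):

```python
def str_replace_pairs(search, replace, subject):
    """Apply each (search, replace) pair in order, like PHP's str_replace
    with array arguments. Missing replacements default to the empty
    string, as the manual describes."""
    for i, needle in enumerate(search):
        repl = replace[i] if i < len(replace) else ""
        subject = subject.replace(needle, repl)
    return subject


html = '<p>Pages</p><a href="index.php">Pages</a>'
print(str_replace_pairs(
    ['<a href="', "Pages"],
    ['<a href="site.php?url=', "TEST"],
    html,
))
```

Note that each pair is applied to the result of the previous one, so a later search value can match text introduced by an earlier replacement; PHP's array form has the same left-to-right caveat.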
``` $text = file_get_contents('some_file'); $text = str_replace('<a href="', '<a href="site.php?url=', $text); ```
Change the href attribute of <a> tags in an html document
[ "", "php", "html", "dom", "replace", "hyperlink", "" ]
Just wondering if there is any way to check if the value of a select box drop-down matches the original value at the time of page load (when the value was set using `selected = "yes"`) ? I guess I could use PHP to create the original values as JavaScript variables and check against them, but there are a few select boxes and I'm trying to keep the code as concise as possible!
That's not too hard at all. This will keep track of the value for each `select` on the page: ``` $(document).ready(function() { $("select").each(function() { var originalValue = $(this).val(); $(this).change(function() { if ($(this).val() != originalValue) $(this).addClass('value-has-changed-since-page-loaded'); else $(this).removeClass('value-has-changed-since-page-loaded'); }); }); }); ``` This will apply a new class `value-has-changed-since-page-loaded` (which presumably you'd rename to something more relevant) to any select box whose value is different than it was when the page loaded. You can exploit that class whenever it is you're interested in seeing that the value has changed.
``` $(document).ready(function() { var initialSelectValue = $('#yourselect').val(); // call this function when you want to check the value // returns true if value match, false otherwise function checkSelectValue() { return $('#yourselect').val() === initialSelectValue; } }); ``` PS. You should use `selected="selected"` not `selected="yes"`.
Using JavaScript or jQuery, how do I check if select box matches original value?
[ "", "javascript", "jquery", "" ]
First of all, I am aware that this question is dangerously close to: [How to MapPath in a unit test in C#](https://stackoverflow.com/questions/1231860/how-to-mappath-in-a-unit-test-in-c) I'm hoping however, that it has a different solution. My issue follows: In my code I have an object that needs to be validated. I am creating unit tests for each validation method to make sure it is validating correctly. I am creating mock data and loading it into the object, then validating it. The problem is that within the validation, when an error occurs, an error code is assigned. This error code is used to gather information about the error from an xml file using Server.MapPath. However, when trying to get the xml file, an exception is thrown meaning the file cannot be found. Since MapPath is in my validation code, and not my unit test, how do I get my unit test to recognize the path? Does this question make sense? Error Line (In my Validation code NOT my unit test): ``` XDocument xdoc = XDocument.Load(HttpContext.Current.Server.MapPath("App_Data/ErrorCodes.xml")); ``` Simplified: The Unit Test calls a method in my program that calls Server.MapPath which then fails.
I would abstract out the "filename provider" into an class that simply returns a location, then you can mock it much, much easier. ``` public class PathProvider { public virtual string GetPath() { return HttpContext.Current.Server.MapPath("App_Data/ErrorCodes.xml"); } } ``` Then, you can either use the PathProvider class directly... ``` PathProvider pathProvider = new PathProvider(); XDocument xdoc = XDocument.Load(pathProvider.GetPath()); ``` Or mock it out in your tests: ``` PathProvider pathProvider = new MockPathProvider(); // using a mocking framework XDocument xdoc = XDocument.Load(pathProvider.GetPath()); ```
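The same seam works outside C#; a compact Python sketch of the pattern (class names and paths are illustrative) shows how the test double slots in without ever touching Server.MapPath:

```python
class PathProvider:
    """Production version would resolve against the web application root."""

    def get_path(self):
        return "/var/www/app/App_Data/ErrorCodes.xml"


class FakePathProvider(PathProvider):
    """Test double that points at a local fixture instead."""

    def get_path(self):
        return "tests/fixtures/ErrorCodes.xml"


class Validator:
    def __init__(self, path_provider):
        # The dependency is injected, so unit tests never need a web context.
        self.path_provider = path_provider

    def error_file(self):
        return self.path_provider.get_path()


validator = Validator(FakePathProvider())
print(validator.error_file())  # tests/fixtures/ErrorCodes.xml
```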
After some rigorous googling and some help from a colleague, we came up with a simple solution already built into .NET. Above the unit tests that access the validation process, I added: ``` [TestMethod()] [HostType("ASP.NET")] [UrlToTest("http://localhost:###/upload_file.aspx")] [AspNetDevelopmentServerHost("Path To Web Application", "Path To Web Root")] ``` This works perfectly. Basically, when the test is called, it loads the URL with the specified unit test in the page load. Since it is a web site that is now calling the unit test, the validation will have access to Server.MapPath. This solution may not work for everyone, but it was perfect for this. Thanks to all who contributed.
C# Unit Testing: Testing a method that uses MapPath
[ "c#", ".net", "unit-testing", "server.mappath" ]
I am trying to insert PowerPoint slides into HTML while preserving the links in the PowerPoint slide. I was just wondering if any of you knew a good method for exporting a PowerPoint slide to HTML and then displaying it in a `div` on your page, with a link that, say, runs a JavaScript function on that same page.
Though not quite what you asked, Google Docs or Slideshare both let you embed Powerpoint as flash - perhaps that would be a more elegant way to do this. That is, if embedding is the aim!
I've never used it, but [PPT2HTML](http://www.pptools.com/ppt2html/index.html) might help. There's also this [blog](http://www.soniacoleman.com/Tutorials/PowerPoint/loop_html.htm) that describes how to save your presentation for the web then modify it. This isn't exactly what you're after, but there are some html-based presentation tools like [S5](http://meyerweb.com/eric/tools/s5/), [DOMSlides](http://icant.co.uk/domslides/) you might want to consider as they wouldn't suffer from being translated.
How to convert a PowerPoint slide into HTML?
[ "javascript", "html", "powerpoint" ]
I'm dealing with a legacy code base where a class which is not wired up in spring needs to obtain a class that *is* wired up in spring. I was hoping to create a factory class that was wired up on startup and then I could just call the getInstance() method to obtain a wired up object. What is the best way to go about this? Example: ``` public class LegacyA { public void doSomething() { ... Foo foo = FooFactory.getInstance(); ... } } public class FooFactory { private static Foo foo; public static Foo getInstance() { if (foo == null) throw new IllegalStateException(); return foo; } } ``` I need FooFactory to be wired up on startup so that LegacyA can simply call getInstance() so that it returns an instance of Foo (which is also a bean defined in the application context). ``` <bean id="legacyA" class="LegacyA"/> <bean id="foo" class="Foo"/> <!-- I need this bean to be injected with foo so that the FooFactory can return a foo --> <bean id="fooFactory" class="FooFactory"/> ``` Edit: I had to re-work my example a bit as I got it a bit confuzzled in my own head...
Using statics like this really goes against the grain of Spring IoC, but if you really *have* to use them, then I would suggest writing a simple Spring hook which takes the `Foo` and injects it into the `FooFactory`, e.g.

```
public class FooFactoryProcessor implements InitializingBean {
    private Foo foo;

    public void setFoo(Foo foo) {
        this.foo = foo;
    }

    public void afterPropertiesSet() throws Exception {
        FooFactory.setFoo(foo);
    }
}
```

And in your XML:

```
<bean id="foo" class="Foo"/>
<bean class="FooFactoryProcessor">
    <property name="foo" ref="foo"/>
</bean>
```

No need to modify `Foo`; `FooFactory` just needs a static `setFoo(Foo)` method for the hook to call.
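To make the wiring concrete, here is a minimal, container-free sketch of what that hook does at startup. This is plain Java with no Spring on the classpath; `Foo`, `FooFactory`, and the processor are simplified stand-ins for the beans in the question, and Spring itself would be the one calling `afterPropertiesSet()`:

```java
class Foo {
    String ping() { return "pong"; }
}

class FooFactory {
    private static Foo foo;

    static void setFoo(Foo f) { foo = f; }  // called once by the startup hook

    static Foo getInstance() {
        if (foo == null) throw new IllegalStateException("FooFactory not wired yet");
        return foo;
    }
}

// Stand-in for the InitializingBean: Spring would inject the Foo
// dependency and then invoke afterPropertiesSet().
class FooFactoryProcessor {
    private final Foo foo;

    FooFactoryProcessor(Foo foo) { this.foo = foo; }

    void afterPropertiesSet() { FooFactory.setFoo(foo); }
}

public class LegacyWiringDemo {
    public static void main(String[] args) {
        // "Container startup": the processor pushes the bean into the static holder.
        new FooFactoryProcessor(new Foo()).afterPropertiesSet();
        // Legacy, non-Spring code can now reach the bean statically:
        System.out.println(FooFactory.getInstance().ping());
    }
}
```

Once the processor has run, any legacy class can call `FooFactory.getInstance()` exactly as in the question; before it runs, the factory fails fast with an `IllegalStateException`.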
Is defining the bean as a [singleton](http://static.springsource.org/spring/docs/1.2.9/reference/beans.html#beans-factory-modes) in the Spring configuration of use here? You can then inject it into `LegacyA` using property or constructor injection (my preference is the latter), and then only the one instance is available.

EDIT: Re. your changed question (!) I'm not sure why you don't simply inject Foo again as a singleton into your factory. Note also that you can use the `getInstance()` method via the Spring configs by using [factory-method](http://static.springsource.org/spring/docs/2.0.x/reference/beans.html#beans-factory-class-instance-factory-method), and maintain injection through all classes.
Spring Wire a Static Class
[ "java", "spring" ]
I am currently writing an application using Drools 5.0. This application seems to be running a little slow, but I have a theory why. This application receives many updates for facts already stored in the knowledge session. The Drools update function under the hood really does a retraction then an insertion.

This application has over 200 rules. Some of the rules are written to fire when certain facts are removed. Other rules are written to fire when certain facts are asserted into the knowledge session. Since update really does a retraction then an insertion, will the retraction- and insertion-related rules still fire during an update? Even though nothing is really being 'inserted' or retracted from the knowledge session?

One thing to note: I 'hooked' up the WorkingMemoryFileLogger to my knowledge session to get a better idea about what's going on. That's when I saw lots of unexpected retraction/insertion rule-activation creations being added to the Agenda, but it seems they don't ever get activated.

It seems to me that updating facts can be expensive, especially depending on your fact model, and should be used sparingly. Is this correct?
I think you have understood it correctly. An update is kind of like a retract plus an assert. The first thing to be sure of is whether your rules are giving you what you want - i.e., does it work, and you just want to improve performance?

In some ways you can think of an update (and also check out the "modify" keyword..) as part of the evils of immutability ;)

When you update - you tell the network that the fact has changed, but it doesn't yet track the change at a field level (that is TBD), so it may cause more work than is necessary, as activations are created that are not really needed (because they use fields that didn't actually change in value).

Hard to be more specific - if you provided some sample rules/fact model (if you can in a safe way, of course!) we might be able to suggest some ideas to break it down to be more granular. Good luck!
The best way to know is to profile the app and find out exactly what's happening. Use something like OptimizeIt or JProbe in combination with the jvisualvm.exe that ships with JDK 1.6. Don't guess - get more data.
Could the Drools update method potentially be causing my problems
[ "java", "drools" ]
So I have this 2d dynamic array which content I want to free when I am done with it. However I keep running into a heap corruption after the destructor. The code works fine (of course with memory leaks) if I comment out the destructor. (Visual Studio 2005) ``` FrameData::FrameData(int width, int height) { width_ = width; height_ = height; linesize[0] = linesize[1] = linesize[2] = linesize[3] = 0; // Initialise the 2d array // Note: uint8_t is used by FFMPEG (typedef unsigned char uint8_t) red = new uint8_t* [height]; green = new uint8_t* [height]; blue = new uint8_t* [height]; for (int i=0; i < height; i++) { red[i] = new uint8_t [width]; green[i] = new uint8_t [width]; blue[i] = new uint8_t [width]; } } FrameData::~FrameData() { // Delete each column for (int i=0; i < height_; i++) { delete[] ((uint8_t*) red[i]); delete[] ((uint8_t*)green[i]); delete[] ((uint8_t*)blue[i]); } // Final cleanup delete[] red; red = NULL; delete[] green; green = NULL; delete[] blue; blue = NULL; } ``` I have no idea what is wrong with the code. The only another thing is somewhere else, I did this in a loop where the crash occurred ``` FrameData myFrame; std::vector<FrameData> frames; ...snipped... frames.push_back(myFrame); ``` This shouldn't be causing any problem, right? If I remember correct, push\_back makes a copy instead of storing a pointer or a reference. PS. Yes, I should use vectors. But I am not allowed to. Additional Info: The operator= and copy constructor are not defined. I guess that's a reason for the problem.
Your problem is, as you guessed, here:

```
FrameData myFrame;
std::vector<FrameData> frames;
...snipped...
frames.push_back(myFrame);
```

The vector makes copies of the elements that you push in. What do you have for your copy constructor and/or `operator=` for your class? If you have none defined, the default version that the compiler creates for you simply makes copies of the members of your class. This will copy the pointer members `red`, `green` and `blue` to the new instance. Then the old instance that you copied will be destroyed when it goes out of scope, causing the arrays those pointers point to to be deleted. The one you copied into the vector will then have invalid pointers, since the target of each pointer has been deleted.

A good rule of thumb is that if you have any raw pointer members, then you need to write a copy constructor and `operator=` that handle this situation correctly, by making sure that the pointers are given new values and not shared, or that ownership is transferred between the instances.

For example, the `std::auto_ptr` class has a raw pointer - the semantics of the copy constructor are to transfer ownership of the pointer to the target. The `boost::shared_ptr` class has a raw pointer - the semantics are to share ownership by means of reference counting. This is a nice way to handle `std::vectors` containing pointers to your class - the shared pointers will control the ownership for you.

Another way might be to use vectors to take the place of your member pointers - the member pointers are simply aliases for your arrays anyway, so the vector is a good substitute.
Unless you have a deep copy constructor and assignment operator for the FrameData class my gut feeling is that the compiler generates a copy constructor to use with push\_back. Automatically generated copy constructors and assignment operators will do a memberwise copy, which will result in a shallow copy in this instance. Unfortunately your destructor doesn't know about the copy so during the copying, there is a good chance that a temporary copy of FrameData gets destroyed and that will take all your data with it. Calling the destructor again later in the process will result in a double free, plus other allocations might have used part of the "free" memory. That looks like a good reason for heap corruption from here. Best way to find problems like these is usually to use a tool like ValGrind or Purify to pinpoint the problem.
Problems deleting a 2D dynamic array in C++ (which is eventually store in a vector)
[ "c++", "multidimensional-array", "dynamic-arrays" ]
A client is insisting that we store some vital and complex configuration data as PHP arrays, while I want it to be stored in the database. He brought up the issue of efficiency/optimization, saying that file I/O will be much faster than database queries. I'm pretty sure I heard somewhere that file includes are actually slow in PHP. Any stats/real info on this?
I don't think that performance is a compelling argument either way. On my Mac, I ran the following tests. First 10,000 includes of a file that doesn't do anything but set a variable: ``` <?php $mtime = microtime(); $mtime = explode(' ', $mtime); $mtime = $mtime[1] + $mtime[0]; $starttime = $mtime; for ($i = 0; $i < 10000; $i++) { include("foo.php"); } $mtime = microtime(); $mtime = explode(" ", $mtime); $mtime = $mtime[1] + $mtime[0]; $endtime = $mtime; $totaltime = ($endtime - $starttime); echo 'Rendered in ' .$totaltime. ' seconds.'; ?> ``` It took about .58 seconds to run each time. (Remember, that's 10,000 includes.) Then I wrote another script that queries the database 10,000 times. It doesn't select any real data, just does a `SELECT NOW()`. ``` <?php mysql_connect('127.0.0.1', 'root', ''); mysql_select_db('test'); $mtime = microtime(); $mtime = explode(' ', $mtime); $mtime = $mtime[1] + $mtime[0]; $starttime = $mtime; for ($i = 0; $i < 10000; $i++) { mysql_query("select now()"); } $mtime = microtime(); $mtime = explode(" ", $mtime); $mtime = $mtime[1] + $mtime[0]; $endtime = $mtime; $totaltime = ($endtime - $starttime); echo 'Rendered in ' .$totaltime. ' seconds.'; ?> ``` This script takes roughly 0.76 seconds to run on my computer each time. Obviously there are a lot of factors that could make a difference in your specific case, but there is no meaningful performance difference in running MySQL queries versus using includes. (Note that I did not include the MySQL connection overhead in my test -- if you're connecting to the database only to get the included data, that would make a difference.)
It's gonna vary heavily based on your specific case. If the database is stored in memory and/or the data you're looking for is cached, then database I/O should be pretty fast. A really complex query on a large database can take a fair bit of time if it's not cached or it has to go to disk, though. File I/O does have to read from the disk, which is slow, though there are also smart caching mechanisms for keeping often-accessed files in memory as well. Profiling on your actual system is gonna be the most definitive.
Are include()s faster or database queries?
[ "php", "file-io" ]
I'm trying to remove the unsightly embedded `<STYLE>` tag the built-in Recent Comments widget puts in my `<HEAD>`, but I can't seem to get the syntax right. It originally calls ``` add_action( 'wp_head', array(&$this, 'recent_comments_style') ); ``` to add it (in [wp-includes/default-widgets.php, line 609](http://core.trac.wordpress.org/browser/tags/2.8.3/wp-includes/default-widgets.php?rev=11770#L609)), and I'm trying to undo it. I think it should be something like this: ``` remove_action('wp_head', 'WP_Widget_Recent_Comments::recent_comments_style'); ``` but with all the variations I've tried I still can't get it right. Does anyone know how to achieve this? ### Possibly Helpful: * [Function Reference: `remove_action`](http://codex.wordpress.org/Function_Reference/remove_action)
This is the correct code: ``` add_action('wp_head', 'remove_widget_action', 1); function remove_widget_action() { global $wp_widget_factory; remove_action( 'wp_head', array($wp_widget_factory->widgets['WP_Widget_Recent_Comments'], 'recent_comments_style') ); } ``` However, it doesn't work because of [this bug](http://core.trac.wordpress.org/ticket/10535).
``` remove_action('wp_head', array(&$this, 'recent_comments_style')); ``` This should work because Wordpress uses the same functions to create the unique IDs whether you remove or add it.
Need help with remove_action()
[ "php", "wordpress", "wordpress-theming" ]
This question is based on [this answer](https://stackoverflow.com/questions/1184888/to-understand-a-line-about-a-login-variable-in-a-session/1184931#1184931). I'm looking for a function similar to PHP's `session_start()` for Python. I want to access a dictionary like `$_SESSION` in PHP which becomes available after running the command.
Let me address some things that might be related to your question...it may not be relevant for you, but I think others might come here with the exact same question and might benefit from my (limited) experience...because I also had this question at one time. Speaking as someone who went from PHP to Python (and never looked back), I think it's useful to understand how sessions work under the hood. It's probably *not* a good idea to implement your own session framework unless you (a) want to understand more about sessions management by doing or (b) need something that existing frameworks don't offer. [Wikipedia](http://en.wikipedia.org/wiki/Session_cookie) is always a good place to start. Bottom line: session data gets stored somewhere on the server and indexed by a unique identifier (hash of some sort). This identifier gets passed back and forth between the client and server, usually as a cookie or as part of the query string (the URL). For security's sake, you'll want to use an SSL connection or validate the session ID with some other piece of data (e.g. IP address). By default PHP stores sessions as files, but on a shared server that could pose a security risk, so you might want to override the session engine so you store sessions in a database. Python web frameworks have similar functionality. When I started doing web programming in Python, I noticed two things. First, PHP wrapped a lot of magic into the language, making it easy for a beginning programmer (me in 2003) to learn the language, but not teaching me much about how everything worked. Therefore, I found myself researching many topics about web applications, specifically database connection pooling, URL mapping, sessions, and threading. PHP (and Django, from what I understand) abstract that away for you. Second, PHP is a really crappy language ;) but it gets the job done!! Personally I use CherryPy for web development. It has session management as a "tool" that you can turn on.
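As a toy illustration of the bottom line above, here is a hedged Python sketch of a server-side session store keyed by an opaque ID. It is not any real framework's API — just the mechanism: the ID travels back and forth as a cookie, while the data never leaves the server.

```python
import secrets

# Server-side storage: session ID -> data dict. A real app would use a
# database or file store, add expiry, and validate the session with
# something extra (SSL, IP address) as noted above.
_sessions = {}

def session_start(cookie_id=None):
    """Return (session_id, data). Creates a fresh session when the
    client sent no cookie, or an unknown one."""
    if cookie_id is None or cookie_id not in _sessions:
        cookie_id = secrets.token_hex(16)   # unpredictable opaque identifier
        _sessions[cookie_id] = {}
    return cookie_id, _sessions[cookie_id]
```

On the first request `session_start()` mints an ID and an empty dict; on later requests, sending the same ID back returns the same dict, which is exactly what PHP's `$_SESSION` gives you after `session_start()`.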
As someone who comes from PHP and is working his way into Python, I can tell you that [Django](http://www.djangoproject.com/) is a good way to start dealing with Python on the web. This is especially true if you've been using [MVC frameworks in PHP](http://www.phpwact.org/php/mvc_frameworks).

That said, Django has built-in support for session management, documented here: <http://docs.djangoproject.com/en/dev/topics/http/sessions/>

And, out of curiosity, I took a look around for session management with plain Python and found this: <http://code.activestate.com/recipes/325484/>

Judging by the comments, it would seem that you're better off using one of the tried and true frameworks to handle this for you. If you're not interested in Django, you can also [check out some of the others](http://wiki.python.org/moin/WebFrameworks).
How do I start a session in a Python web application?
[ "python", "session" ]
What I need is a collection which allows multiple keys to access a single object. I need to apply frequent alterations to this object. It also must be efficient for 500k+ entries.
Any implementation of `java.util.Map<K,V>` will do this - there is *no restriction* on how many times a particular value can be added under separate keys: ``` Map<String,Integer> m = new HashMap<String, Integer>(); m.put("Hello", 5); m.put("World", 5); System.out.println(m); // { Hello->5, World->5 } ``` If you want a map where a single key is associated with multiple values, this is called a *multi-map* and you can get one from the [google java collections API](http://google-collections.googlecode.com/svn/trunk/javadoc/index.html?com/google/common/collect/Multimap.html) or from [Apache's commons-collections](http://commons.apache.org/collections/api-release/org/apache/commons/collections/MultiMap.html)
I sort of interpreted his request differently. What if one wants two completely different keysets to access the same underlying values. For example: ``` "Hello" ------| |----> firstObject 3 ------| "Monkey" ------| |----> secondObject 72 ------| 14 -----------> thirdObject "Baseball" ------| |----> fourthObject 18 ------| ``` Obviously having two maps, one for the integer keys and one for the String keys, isn't going to work, since an update in one map won't reflect in the other map. Supposing you modified the `Map<String,Object>`, updating "Monkey" to map to fifthObject. The result of this modification is to change the `Entry<String,Object>` within that map, but this of course has no effect on the other map. So whilst what you intended was: ``` "Monkey" ------| |----> fifthObject 72 ------| ``` what you'd get in reality would be this: ``` "Monkey" -----------> fifthObject 72 -----------> secondObject ``` what I do in this situation is to have the two side by side maps, but instead of making them say `Map<String, Integer>` I would make them `Map<String, Integer[]>`, where the associated array is a single member array. The first time I associate a key with a value, if no array exists yet and the key returns null, I create the array, and associate any other key I wish to with it (in that key's map). Subsequently, I only modify the array's contents, but never the reference to the array itself, and this works a charm. ``` "Monkey" -------> fifthObjectArray ------| |-----> fifthObjectArray[0] 72 -------> fifthObjectArray ------| ```
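A hedged sketch of that single-element-array trick (the class and method names here are hypothetical), showing that an update made through one key set is visible through the other because both maps hold the same holder reference:

```java
import java.util.HashMap;
import java.util.Map;

public class SharedValueMaps {
    // Two independent key sets pointing at the same one-element holders.
    static final Map<String, Object[]> byName = new HashMap<String, Object[]>();
    static final Map<Integer, Object[]> byId = new HashMap<Integer, Object[]>();

    static void put(String name, int id, Object value) {
        Object[] holder = new Object[] { value };  // created once, never replaced
        byName.put(name, holder);
        byId.put(id, holder);
    }

    // Only the array's contents change; both maps keep the same reference.
    static void updateByName(String name, Object value) {
        byName.get(name)[0] = value;
    }

    static Object getById(int id) {
        return byId.get(id)[0];
    }

    public static void main(String[] args) {
        put("Monkey", 72, "secondObject");
        updateByName("Monkey", "fifthObject");
        System.out.println(getById(72));  // fifthObject -- not stale
    }
}
```

The key discipline is the one described above: mutate only the array's contents, never reassign a new array into one map, or the two key sets fall out of sync again.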
Need a Java map/table with multiple keys to one value. Value is commonly altered
[ "java", "collections", "hashmap" ]
If I have a Javascript list which will have only numeric keys, which takes less memory? ``` var array = []; array[0] = 'hello'; array[5] = 'world'; array[50] = 'foobar'; var obj = {}; obj[0] = 'hello'; obj[5] = 'world'; obj[50] = 'foobar'; ``` I don't know a ton about Javascript engine internals, so... The reason I ask is because that array, when converted to a string, will have a bunch of undefined's in the middle of it. Are those actually stored in some fashion, or is that just put in at string conversion?
An array is basically an ordered set of values associated with a single variable name. In your example I think you are trying to create an associative array, and for that you should use an object; Array is not meant to be used for key/value pairs. Also, the array length is indirectly increased when you assign a value to an index higher than the current array length:

```
var array = new Array();
array[99] = "Test";  // array.length is now 100
```

Check this [detailed article](http://andrewdupont.net/2006/05/18/javascript-associative-arrays-considered-harmful/) on the subject.
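To the asker's follow-up about the `undefined`s: the holes in a sparse array are not stored as values; they only surface when the array is read or converted. A quick sketch (runnable in any modern JS engine; `Object.keys` is assumed available):

```javascript
var arr = [];
arr[0] = 'hello';
arr[5] = 'world';

// length is stretched to highest index + 1, but indexes 1..4 hold nothing.
console.log(arr.length);     // 6
console.log(1 in arr);       // false -- the hole is not a stored value
console.log(String(arr));    // "hello,,,,,world" -- holes appear only on conversion

var obj = {};
obj[0] = 'hello';
obj[5] = 'world';
console.log(Object.keys(obj).length);  // 2 -- only the assigned keys exist
```

So the `undefined`s you see are produced at read/conversion time, not stored in memory for each missing index.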
Probably the JavaScript array, because you can 'only' use numeric key values, whereas object literals provide space for arbitrary key values, and even if you use numeric keys there, they are probably handled differently than the numeric keys of arrays. Most likely the *reason* arrays can't have text-based key values is that they are treated differently than object literals. I'm guessing that because they are probably treated differently, the processing for the array is probably more optimized for numeric key values, whereas an object literal is optimized to use strings or numbers as its keys.
Which takes less memory: a Javascript array or Javascript object?
[ "javascript" ]
Say I've got a class called "Base", and a class called "Derived" which is a subclass of Base and accesses protected methods and members of Base. What I want to do now is make it so that no other classes can subclass Derived. In Java I can accomplish that by declaring the Derived class "final". Is there some C++ trick that can give me the same effect? (Ideally I'd like to make it so that no class other than Derived can subclass Base as well. I can't just put all the code into the same class or use the friend keyword, since Base and Derived are both templated, with Base having fewer template arguments than Derived does....)
As of C++11, you can add the final keyword (technically a special identifier since it is not actually a keyword) to your class, eg ``` class Derived final { ... ``` You can read more about the final keyword at <http://en.wikipedia.org/wiki/C++11#Explicit_overrides_and_final>
You can have a private constructor for '`Derived`' and a public static Create function for instantiation
Is there a way to forbid subclassing of my class?
[ "c++", "final", "keyword", "protected", "subclassing" ]
I am a little confused about how to do error handling in Struts2. I wish to make one central page where users will be directed if an error occurs. Furthermore, when an error occurs I wish to log it; since I am using log4j I'll be logging it as `log.error(e.getMessage(), e);`

However, in my action class, if I catch the error (put all my code in try/catch) then the central/common error page does not come up. So I decided against catching the error; if I don't catch the error then the central error page comes up. But now how do I put the error message/stack trace into the logs??

After reading this [link](http://struts.apache.org/2.x/docs/exception-configuration.html) I did the following:

```
<global-results>
    <result name="Exception" type="chain">
        <param name="actionName">ErrorPage</param>
        <param name="namespace">/error</param>
    </result>
</global-results>

<global-exception-mappings>
    <exception-mapping exception="java.lang.Exception" result="Exception"/>
</global-exception-mappings>

<action name="selectionPage" class="reports.ReportSelection">
    <result>/reports/SelectionPage.jsp</result>
</action>
</package>

<package name="secure" namespace="/error">
    <action name="ErrorPage" class="com.myErrorClass">
        <result>errorpage.jsp</result>
    </action>
</package>
```

According to the above configuration, originally the error is thrown in reports.ReportSelection (but I am not catching it there), so finally the control comes to com.myErrorClass. I CAN log the errors in this class, but my question is whether the exception's message and stack trace will still be available... since it was originally thrown in reports.ReportSelection?
After you catch and log it, are you rethrowing it? If you do, then the framework exception management should kick in. Your error handling code should look something like:

```
catch (Exception e) {
    log.error(e.getMessage(), e);
    throw e;
}
```

With that in place you should be able to go back to your simplified approach of logging and rethrowing it in the action class, and configuring a single global error page.
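Here is a hedged, framework-free sketch of that catch/log/rethrow flow. A plain `StringBuilder` stands in for the log4j logger, an unchecked exception replaces the checked one so the sketch compiles without `throws` clauses, and all names are hypothetical:

```java
public class LoggingAction {
    // Stand-in for a log4j Logger, so the sketch stays self-contained.
    static final StringBuilder log = new StringBuilder();

    static void doWork() {
        throw new IllegalStateException("report query failed");
    }

    // The action method: log the error locally, then rethrow so the
    // framework's global exception mapping still sees it.
    static void execute() {
        try {
            doWork();
        } catch (RuntimeException e) {
            log.append("ERROR: ").append(e.getMessage()).append('\n');
            throw e;  // without this rethrow, the error-page mapping never fires
        }
    }

    public static void main(String[] args) {
        try {
            execute();
        } catch (RuntimeException e) {
            // Stand-in for the global-results chain to the central error page.
            System.out.println("error page shown for: " + e.getMessage());
        }
        System.out.print(log);
    }
}
```

The point of the pattern is that logging and propagation are independent: the catch block records the stack trace, and the rethrow hands the same exception onward for the global mapping to route.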
Here is how you log errors that your actions throw. I don't know why this isn't turned on by default. Put this in your struts.xml file. ``` <interceptors> <interceptor-stack name="errorloggingStack"> <interceptor-ref name="defaultStack"> <param name="exception.logEnabled">true</param> <param name="exception.logLevel">ERROR</param> </interceptor-ref> </interceptor-stack> </interceptors> <default-interceptor-ref name="errorloggingStack"/> ``` You don't need to have try-catch blocks around every action method.
Error handling in Struts2
[ "java", "struts2" ]
I am getting "The object cannot be deleted because it was not found in the ObjectStateManager" while deleting an object. Here is the code:

```
// First I fill the ListView control.
private void Form1_Load(object sender, EventArgs e)
{
    FirebirdEntity asa = new FirebirdEntity();
    ObjectQuery<NEW_TABLE> sorgu = asa.NEW_TABLE;
    foreach (var item in sorgu)
    {
        ListViewItem list = new ListViewItem();
        list.Text = item.AD;
        list.SubItems.Add(item.SOYAD);
        list.Tag = item;
        listView1.Items.Add(list);
    }
}

// Then I get the NEW_TABLE entity from the ListView's Tag property.
private void button3_Click(object sender, EventArgs e)
{
    using (FirebirdEntity arama = new FirebirdEntity())
    {
        NEW_TABLE del = (NEW_TABLE)listView1.SelectedItems[0].Tag;
        arama.DeleteObject(del);
        arama.SaveChanges();
    }
}
```
You need to [attach](http://msdn.microsoft.com/en-us/library/system.data.objects.objectcontext.attach.aspx) the object to the `ObjectContext`. Try: ``` NEW_TABLE del = (NEW_TABLE)listView1.SelectedItems[0].Tag; arama.Attach(del); arama.DeleteObject(del); arama.SaveChanges(); ``` Attached objects are tracked by the `ObjectContext`. This is needed for performing deletes and updates. You can read more about [attaching objects](http://msdn.microsoft.com/en-us/library/bb896271.aspx) on MSDN. Edit to clarify attach/detach: ``` private void Form1_Load(object sender, EventArgs e) { FirebirdEntity asa = new FirebirdEntity(); ObjectQuery<NEW_TABLE> sorgu = asa.NEW_TABLE; foreach (var item in sorgu) { asa.Detach(item); // add to listView1 } } ``` Also, you should wrap your use of `ObjectContext`s in `using` blocks.
In your method "Form1_Load" you create a FIRST instance of your "FirebirdEntity" context and fill the ListViewItems with entities selected from this context.

In your method "button3_Click" you create a NEW, SECOND instance of your "FirebirdEntity" context. Then you try to delete an entity in this SECOND context which was selected from the FIRST context.

Use the same instance of your context in both of your methods and everything will work fine.

(Alternatively, you can select the entity you want to delete from your SECOND context and then delete that entity instead of the original one.)
Entity Framework Delete Object Problem
[ "c#", "entity-framework", "ado.net" ]
This may be a short & simple question, but I've never found a satisfying answer to it: What code does the main() function usually consist of in a large C++ project? Would it be an incorrect assumption to think that it is usually just initializing a (wrapping) class object and calling a function inside of it to set things off? Why is main() not a method in the first place? Is it to preserve backwards-compatibility with C?
In my code, it's basically a constructor call, possibly a method call, and some exception handling. This is the main for own of my projects (headers and comments omitted, and formatting messed up by SO, as usual): ``` int main( int argc, char * argv[] ) { int result = 0; try { CLIHandler ch( argc, argv ); result = ch.ExecCommand(); } catch( const Exception & ex ) { result = ExceptionHandler::HandleMyError( ex ); } catch( const std::exception & ex ) { result = ExceptionHandler::HandleOtherError( ex ); } catch( ... ) { result = ExceptionHandler::HandleUnknownError(); } return result; } ```
Mine usually do * Command-line parsing * Initialization of top-level objects * Exception handling * entering main 'exec' loop As I understand it, `int main(int argc, char *argv[])` is essentially a convention due to the `C` heritage. Never struck me as odd, but rather as useful. C++ extends C after all ... (and yes there are fine difference but that wasn't the question here).
C++ main() in a large OOP project
[ "c++", "oop", "program-entry-point" ]
If I set a breakpoint on a method, how can I see what called the method, when the breakpoint is hit in Visual Studio 2008?
Check the Call Stack window (from the menu: Debug > Windows > Call Stack). Double clicking each entry there will take you to the calling statement. You can also right click on it to enable/disable showing external code items and calls from other threads.
When the breakpoint is hit, you can view the entire call stack. You can bring that window up by going through the Debug menu->Windows->Call Stack. You can also bring it up by the shortcut Alt+Ctrl+C EDIT: You can also right-click on a function name, and view the "Callers Graph", which will show you all the callers for your method. Alternatively, you can bring the Call Browser (by going to View->Other windows->Call Browser ) and search for your method's name.
How can I find a method caller when stepping through C# in Visual Studio 2008?
[ "c#", ".net", "visual-studio", "debugging", "breakpoints" ]
```
<?php
$id = intval($_GET['id']);
$sql = mysql_query("SELECT username FROM users WHERE id = $id");
$row = mysql_fetch_assoc($sql);
$user = htmlspecialchars($row['username']);
?>

<h1>User:<?php echo $user ?></h1>
```

Can you see any threats in the above code? Do I have to use htmlspecialchars on everything I output? And should I use is_numeric or intval to check that the GET value is numeric? I'm just building a minimal site. I'm just wondering if the above code is vulnerable to SQL injection or XSS.
Generally speaking [`mysql_real_escape_string()`](https://www.php.net/mysql_real_escape_string) is preferred but since it's a number, [`intval()`](http://au.php.net/intval) is OK. So yes, it looks OK from a security perspective. One thing though, on many platforms, ints are limited to 32 bits so if you want to deal in numbers larger than ~2.1 billion then it won't work. Well, it won't work how you expect anyway. These sorts of security precautions apply to any form of user input including cookies (something many people forget).
I would strongly recommend using PDO and prepared statements. While your statement above looks safe, you're going to have problems as soon as you do more complex queries. Instead of puzzling over whether a particular query is safe, learn about prepared statements and you won't have to worry. Here is your example, re-written with PDO: ``` # Make a database connection $db = new PDO('mysql:dbname=your_db;host=your_db_server', 'username', 'password'); # The placeholder (:id) will be replaced with the actual value $sql = 'SELECT username FROM users WHERE id=:id'; # Prepare the statement $stmt = $db->prepare($sql); # Now replace the placeholder (:id) with the actual value. This # is called "binding" the value. Note that you don't have to # convert it or escape it when you do it this way. $stmt->bindValue(':id', $id); # Run the query $stmt->execute(); # Get the results $row = $stmt->fetch(); # Clean up $stmt->closeCursor(); # Do your stuff $user = htmlspecialchars($row['username']); ``` I've added a lot of comments; it's not as much code as it looks like. When you use `bindValue`, you never have to worry about SQL injection.
PHP security, intval and htmlspecialchars
[ "", "php", "mysql", "security", "" ]
I have a class Booking ``` public class Booking { public int Id { get; set; } public string From { get; set; } public string To { get; set; } } ``` I create a List bookings with the help of linq and I want some mechanism with which I want to autogenerate the 'Id' property to increment by 1. I.e. if the List bookings contains 10 Booking object then the first object's Id = 1, second Id = 2 and so one... any suggestion
The following will give you a list of NEW bookings with the index projected into your ID property. You could probably do something similar to this to update the existing list with the index... ``` var myBookings = myExistingListOfTen.Select((b, index) => new Booking { Id = index + 1, From=b.From, To=b.To }); ```
Not nice, but it will work. The trick is to use the overload providing the index of the item. ``` list = list.Select((item, index) => { item.Id = index; return item; }); ``` This will update the existing bookings, but you could also select a new instance with the id set and avoid this ugly `return`, at the cost of duplicating the bookings and losing the references, as Scott Ivey suggests. And of course you have to add one if you want one-based ids. I find it a bit strange to generate ids this way, too, but it might be an acceptable solution if you get a list of new bookings without ids and want to generate them. In that case the ids should obviously not start with zero or one, but with the largest already-assigned id plus one.
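For readers coming from other languages, the same index-projection idea — pairing each item with its position and writing position + 1 into the id — can be sketched in Python; the `Booking` stand-in below is invented for illustration and is not part of the original C# code.

```python
class Booking:
    """Minimal stand-in for the C# Booking class."""
    def __init__(self, src="", dst=""):
        self.id = 0
        self.src = src
        self.dst = dst

bookings = [Booking("A", "B"), Booking("C", "D"), Booking("E", "F")]

# Like the indexed Select overload: enumerate yields (index, item) pairs,
# and we write index + 1 into the Id property of each existing object.
for index, booking in enumerate(bookings):
    booking.id = index + 1

print([b.id for b in bookings])  # [1, 2, 3]
```

As with the C# version, this mutates the existing objects in place rather than building a duplicated list.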
sequential autogenerated Id with help of linq
[ "", "c#", "linq", "sequence", "auto-generate", "" ]
Given a table `Records` with the columns `id int`, `Type int` and `Name varchar(50)`, I can make search queries like so: ``` SELECT id, Type, Name FROM Records WHERE Name LIKE '%Foo%' ``` To tweak the performance a little, I'd like to give only a limited amount of results; currently 100 — simply by adding `TOP 100` to the statement. This, however, can cause records of some types to be underrepresented, or not represented at all, as shown by the following query: ``` SELECT Type, COUNT(Type) FROM (SELECT id, Type, Name FROM Records WHERE Name LIKE '%Foo%') x GROUP BY Type ORDER BY Type ``` Without the `TOP 100`, I might get: ``` 42 5 49 1 50 1 52 1 59 1 76 40 87 567 90 3 ``` …and with it: ``` 42 5 49 1 50 1 52 1 59 1 76 26 87 65 ``` This could lead the user to conclude that no record of type `90` exists. I'd prefer `TOP` to behave differently: give me at least one result of *any* type for which there are some, then keep adding to them until the count is reached. E.g., `42`, `76` and `87` would have fewer results, but `90` would show up. Ideally, I'd also like to provide the user with a "x more results of this type" UI element. Do I have to forego `TOP` altogether to accomplish this?
``` WITH RecordsWithRn AS ( SELECT id, Type, Name, ROW_NUMBER() OVER (PARTITION BY Type ORDER BY ... intra-type ordering ...) as rn FROM Records WHERE Name LIKE '%Foo%') SELECT TOP 100 id, Type, Name FROM RecordsWithRn ORDER BY RN, ... inter-type ordering ... ``` This gives you one hundred records, with at least one of each type, assuming fewer than one hundred types. Use the ORDER BY inside ROW_NUMBER() to control the ordering of records within a type. The final ORDER BY orders by the previously assigned row number; add other criteria there, if you wish, to control the order of records between types at each row number. **EDIT:** To get the number of records of a type not shown: ``` WITH RecordsWithRn AS ( SELECT id, Type, Name, ROW_NUMBER() OVER (PARTITION BY Type ORDER BY Type) as rn, COUNT(*) OVER (PARTITION BY Type) as CountType FROM Records WHERE Name LIKE '%Foo%') , Top100Records as ( SELECT TOP 100 id, Type, Name, CountType FROM RecordsWithRn ORDER BY RN) select Id, Type, Name, CountType - (COUNT(*) over (PARTITION BY Type)) as CountTypeNotIncluded from Top100Records ```
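The ranking technique above can be tried end-to-end with SQLite's window functions (available in SQLite 3.25+), using `LIMIT` in place of T-SQL's `TOP`; the table contents below are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Records (id INTEGER PRIMARY KEY, Type INTEGER, Name TEXT);
    INSERT INTO Records (Type, Name) VALUES
        (42, 'Foo1'), (42, 'Foo2'), (42, 'Foo3'),
        (87, 'Foo4'), (87, 'Foo5'), (87, 'Foo6'),
        (90, 'Foo7');
""")

# Rank rows within each Type, then take the overall "TOP 4" by rank:
# every Type surfaces once before any Type gets a second row.
rows = conn.execute("""
    WITH RecordsWithRn AS (
        SELECT id, Type, Name,
               ROW_NUMBER() OVER (PARTITION BY Type ORDER BY id) AS rn
        FROM Records
        WHERE Name LIKE '%Foo%')
    SELECT Type, Name FROM RecordsWithRn
    ORDER BY rn, Type
    LIMIT 4
""").fetchall()

print(rows)
```

Type 90 appears in the result despite having only one matching row, which is exactly the behaviour the plain `TOP 100` query could not guarantee.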
``` WITH cte AS ( SELECT id, Type, Name, ROW_NUMBER() OVER (PARTITION BY Type ORDER BY Type) as rn FROM Records WHERE Name LIKE '%Foo%') SELECT * FROM cte WHERE rn <= 100; ``` This will get at most 100 records from each Type. Note that this by no means implies the query will be faster; in fact, it may well be slower. It depends on the available indexes, the data in the tables, and the speed at which the client is able to process the result. If you want to show the total count of each type, then you must compute it: ``` WITH totals AS ( SELECT Type, COUNT(*) AS count FROM Records WHERE Name LIKE '%Foo%' GROUP BY Type) SELECT * FROM totals; ``` and then join the two results: ``` WITH cte AS ( SELECT id, Type, Name, ROW_NUMBER() OVER (PARTITION BY Type ORDER BY Type) as rn FROM Records WHERE Name LIKE '%Foo%') , totals AS ( SELECT Type, COUNT(*) AS count FROM Records WHERE Name LIKE '%Foo%' GROUP BY Type) SELECT * FROM cte c JOIN totals t on c.Type = t.Type WHERE c.rn <= 100; ```
Balancing different types of result records limited by a TOP 100
[ "", "sql", "t-sql", "sql-server-2008", "" ]
Consider the following class: ``` class Token: def __init__(self): self.d_dict = {} def __setattr__(self, s_name, value): self.d_dict[s_name] = value def __getattr__(self, s_name): if s_name in self.d_dict.keys(): return self.d_dict[s_name] else: raise AttributeError('No attribute {0} found !'.format(s_name)) ``` In my code, Token has some other functions (like get_all(), which returns d_dict, and has(s_name), which tells me whether my token has a particular attribute). Anyway, I think there is a flaw in my plan, since it doesn't work: when I create a new instance, Python tries to call `__setattr__('d_dict', '{}')`. How can I achieve similar behaviour (maybe in a more Pythonic way?) without having to write something like Token.set(name, value) and get(name) each time I want to set or get an attribute on a token? Criticism about design flaws and/or stupidity welcome :) Thanks!
You need to special-case d\_dict. Although of course, in the above code, all you do is replicate what any object does with `__dict__` already, so it's pretty pointless. Do I guess correctly if you intended to special case some attributes and actally use methods for those? In that case, you can use properties. ``` class C(object): def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" return self._x @x.setter def x(self, value): self._x = value @x.deleter def x(self): del self._x ```
The special-casing of `__dict__` works like this: ``` def __init__(self): self.__dict__['d_dict'] = {} ``` There is no need to use a new-style class for that.
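Putting the two answers together, a minimal sketch of the fixed class (names follow the question; the rest of the class is assumed):

```python
class Token:
    def __init__(self):
        # Writing through self.__dict__ bypasses our __setattr__,
        # so the backing dict can be created without recursion.
        self.__dict__['d_dict'] = {}

    def __setattr__(self, s_name, value):
        self.d_dict[s_name] = value

    def __getattr__(self, s_name):
        # __getattr__ is only called when normal lookup fails,
        # so d_dict itself is always found the normal way.
        try:
            return self.__dict__['d_dict'][s_name]
        except KeyError:
            raise AttributeError('No attribute {0} found !'.format(s_name))

t = Token()
t.color = 'red'
print(t.color)    # red
print(t.d_dict)   # {'color': 'red'}
```

As both answers note, this mostly replicates what `__dict__` already does for any object, but the same special-casing pattern is what you need whenever a `__setattr__` override has to store into an attribute of its own.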
Controlling getter and setter for a python's class
[ "", "python", "" ]
I am still visiting school and will finish my exams next year. Since two years I am working as (the only :-( ) in-house dev for a company providing financial services to Laboratories and doctors. After spending the first year fixing their existing Application and realizing, communicating and agreeing that it won't meet future requirements i rewrote it from scratch. This is my first LOB application. I needed a "IEnumerable.ToDataTable()" method to do simplify certain things in the Application. I realized that existing solutions wouldn't meet my performance and flexibility requirements, so i came up with a solution based on Dynamically injected IL code myself. I thought that this might be a good way to contribute to the community, thats why i asked my employer if i may take some of those code and release it under LGPL. They agreed and that's where my first project is: [ModelShredder](http://code.google.com/p/modelshredder/) Since this is my first OSS project and i am relatively unexperienced with running an OSS project on my own I am asking you for some "best-pratices" and what i can improve on it.
First read this book: [![Producing Open Source Software](https://i.stack.imgur.com/qTBE7.gif)](https://i.stack.imgur.com/qTBE7.gif) You can download it free of charge here: <http://producingoss.com/> There are also some nice screencasts there that may be some use too. It covers everything you need to know about looking for, contributing to, starting and maintaining an open source project,
It all depends on whether you're going to have a team help you or not. It'll be simpler to start doing it yourself if you have the time, if for no other reason than that you can work out how you want to proceed without worrying about politics. For a start, any code used as a framework or a library typically needs to be developed to a much higher standard than what you might write for an internal application. This means you need: * Sufficient user and developer documentation; * Unit tests with decent coverage; * A license; * Tagged versions in source control; and * Released binaries and source code with checksums. Additionally you'll need: * A method of communicating your project status (release notes, goals, etc.); and * A means for people to raise and track issues. [Google Code](http://code.google.com/) (as just one example) can do pretty much all of this for you. I would also suggest you register the domain name for your project (typically projectname.org for open source). If the one you want is taken already, you may want to change the project name, particularly as there might be cause for confusion.
How can I improve my first OSS project
[ "", "c#", "linq", "open-source", "datatable", "" ]
I am developing a site where users can post comments and each comment is categorized. I have a page where users can go and see a list of all the categories on the site with the latest 5 comments posted in them. The information I need to retrieve from the database is: * A list of categories + 5 comments in each category This is what I have right now (simplified to basic PHP): ``` echo "<ul>"; $query = mysql_query("SELECT * FROM categories"); while($result = mysql_fetch_assoc($query)){ echo "<li><h2>{$result['category_name']}</h2>"; $query_comments = mysql_query( "SELECT * FROM comments WHERE ". "category_id = '{$result['id']}' ". "ORDER BY created_at DESC LIMIT 5"); while($result_comments = mysql_fetch_assoc($query_comments)){ echo "{$result_comments['username']} wrote {$result_comments['text']} on {$result_comments['created_at']}<br>"; } echo "</li>"; } echo "</ul>"; ``` It would look like this (assuming my categories are fruit names) ``` Apple Jay wrote blah blah blah - August 5, 2009 Bob wrote hello hello hello - August 5, 2009 Tom wrote super super - August 5, 2009 Edward wrote no no no - August 5, 2009 Kaysie wrote super no! - August 5, 2009 Orange Cassie wrote ye ye ye ye - August 5, 2009 Alfonce wrote whoohoo - August 5, 2009 Arthur wrote love oranges - August 5, 2009 Alice wrote yes yes yes - August 5, 2009 Xavier wrote Lorem ipsum dolor sit amet - August 5, 2009 Strawberry Chris wrote Lorem ipsum dolor sit amet - August 5, 2009 Hubert wrote Lorem ipsum dolor sit amet - August 5, 2009 Martin wrote Lorem ipsum dolor sit amet - August 5, 2009 Lyon wrote Lorem ipsum dolor sit amet - August 5, 2009 Paris wrote Lorem ipsum dolor sit amet - August 5, 2009 Blueberry etc... ``` The problem is that if there are a lot of categories, I will have performance issues, especially if there are a lot of users using the site. I am struggling to find a way to reduce the number of queries needed. Does anyone have an idea of how I could do that? 
**UPDATE** I tried to LEFT JOIN the category table with the comments table; however, I didn't find a way to limit the number of comments per category, because if I use LIMIT 5 it only limits the total number of comments returned, not five per category.
I talked to a database guy and it turns out to be a bit of a pain in MySQL. Something like this would work well in PostgreSQL: ``` SELECT * FROM categories LEFT JOIN comments ON categories.id = comments.category_id WHERE comments.id IS NULL OR comments.id IN ( SELECT id FROM comments AS a2 WHERE categories.id = a2.category_id ORDER BY id DESC LIMIT 5 ) ``` Unfortunately MySQL doesn't support LIMIT in subqueries. He scratched his head and said there's a workaround available but it didn't sound pretty. At that point I figured you may as well use multiple queries. If it's a performance issue it could be data that you cache temporarily. Sorry, not much help :) **Old wrong answer:** Try using a LEFT JOIN in your query, with categories on the left (so all categories are always returned, regardless of if they have comments) and the comments table on the right. This will reduce it to one query.
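Since MySQL (at the time) had no clean per-group LIMIT, the other common workaround is to fetch all matching comments in one query and take the newest five per category in application code. The grouping step is language-agnostic; here is a Python sketch of the idea, with invented rows shaped like the question's data:

```python
from collections import defaultdict
from operator import itemgetter

# Rows as a single query might return them:
# (category_name, username, text, created_at)
rows = [
    ("Apple",  "Jay",    "blah",  "2009-08-05 10:00"),
    ("Apple",  "Bob",    "hello", "2009-08-05 11:00"),
    ("Orange", "Cassie", "ye",    "2009-08-05 09:00"),
    ("Apple",  "Tom",    "super", "2009-08-05 12:00"),
]

# One pass to bucket the rows by category...
by_category = defaultdict(list)
for row in rows:
    by_category[row[0]].append(row)

# ...then keep only the newest five per category.
latest = {
    cat: sorted(items, key=itemgetter(3), reverse=True)[:5]
    for cat, items in by_category.items()
}

print([r[1] for r in latest["Apple"]])  # newest first
```

This trades one round-trip per category for a single larger result set, which is usually the right trade when the per-category limit is small.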
You could use a join: ``` SELECT categories.category_name, comments.* FROM comments LEFT JOIN categories ON categories.category_id=comments.category_id ``` and change the way you iterate through the results.
Listing categories with the latest comments/activities with MYSQL
[ "", "php", "mysql", "" ]
I've been programming C, mainly in an embedded environment, for years now and have a perfectly good mental model of pointers - I don't have to explicitly think about how to use them, am 100% comfortable with pointer arithmetic, arrays of pointers, pointers-to-pointers etc. I've written very little C++ and really don't have a good way of thinking about references. I've been advised in the past to "think of them as pointers that can't be NULL" but [this question](https://stackoverflow.com/questions/1164266/why-arrays-of-references-are-illegal) shows that that is far from the full story. So for more experienced C++ programmers - how do you think of references? Do you think of them as a special sort of pointer, or as their own thing entirely? What's a good way for a C programmer to get their head round the concept?
I've gotten used to thinking about references as aliases for the original object. **EDIT (due to a request in the comments):** The reason I think of a reference as a kind of alias is that it behaves in exactly the same way as the original variable, without any extra manipulation needed to affect the variable being referenced.
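The alias idea is easiest to see in a language where two names can be bound to one object. This Python sketch is only an analogy — mutation through either name shows through both, as with a reference, though Python names can be rebound while a C++ reference can never be reseated:

```python
a = [1, 2, 3]
b = a              # b is another name (an "alias") for the same list object

b.append(4)        # a mutation through the alias...
print(a)           # [1, 2, 3, 4]  ...is visible through the original name
print(b is a)      # True: one object, two names

b = [9]            # rebinding merely detaches the Python name --
print(a)           # [1, 2, 3, 4]  a C++ reference has no analogue of this
```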
For me, when I see a pointer in code (as a local variable in a function or a member on a class), I have to think about 1. Is the pointer null, or is it valid 2. Who created the object it points to (is it me?, have I done it yet?) 3. Who is responsible for deleting the object 4. Does it always point to the same object I don't have to think about any of that stuff if it's a reference, it's somebody else's problem (i.e. think of a reference as an [SEP Field](http://www.economicexpert.com/a/Somebody:Else:s:Problem:field.htm) for a pointer) P.S. Yes, it's probably still my problem, *just not right now*
What is a good way to think about C++ references?
[ "", "c++", "reference", "" ]
Can anyone recommend a plug and play PHP/MySQL software that will enable me to manage users, protect pages, prompt for logins, handle lost passwords, store their info such as name/addy/email etc. Does anything like that exist? Preferably, I'd like it to be as easy as including a file in my existing pages to make them a part of the system.
I have used Pear::Auth to achieve this end (building a custom CMS) with some success. There is another package with more features called LiveUser. They both have methods for creating/deleting users (in MySQL), storing sessions, and accessing session data, so they do quite a bit of the heavy lifting for you. That said, neither of these packages is "plug-and-play," but they're about as close as you're going to get to your stated goal without using some kind of premade CMS. Pear packages are not very well documented as a whole, but the Auth docs aren't that bad, and there are some useful examples out there. This is the one I worked from: <http://forums.devshed.com/php-development-5/pear-auth-example-94752.html>
Since you're not going to use WordPress, Joomla, or something similar to fulfil your project's needs, I recommend you google for some membership-system scripts and tutorials and attach them to your existing project. For example, take a look at these: > <http://evolt.org/PHP-Login-System-with-Admin-Features> > <http://gigaspartan.com/?p=6> > <http://www.devarticles.com/c/a/PHP/Creating-a-Membership-System/> and [Google](http://www.google.com/search?q=php%20membership%20system) of course. Also, using the `Pear` packages could be a better and more reliable solution. As a whole, no other programmer's code will ever fully match your own needs, so we have to adapt ourselves to the best CMSs/frameworks available, I think. WordPress is pretty easy to start with: ``` require './wp-blog-header.php'; //and your story begins! ``` Also, this [article](http://www.problogdesign.com/wordpress/use-wordpress-as-a-php-framework-for-your-static-html-pages/) is worth a look.
Plug and Play PHP/MySQL User Management/Login/Security System
[ "", "php", "mysql", "" ]
I wrote an application using Django 1.0. It works fine with the Django test server, but when I tried to move it into a more realistic production environment, the Apache server failed to run the app. The server I use is [WAMP2.0](http://www.wampserver.com "WAMPServer's WebSite"). I've been a PHP programmer for years now and I've been using WAMPServer since long ago. I installed mod_wsgi.so and it seems to work just fine (no service errors), but I can't configure httpd.conf to look at my Python scripts located outside the server root. For now, I'm fine with overriding the document root and serving the Django app from the document root instead, so the httpd.conf line should look like this: ``` WSGIScriptAlias / C:/Users/Marcos/Documents/mysite/apache/django.wsgi ``` but the server's response is a 403 Forbidden
You have: ``` WSGIScriptAlias / /C:/Users/Marcos/Documents/mysite/apache/django.wsgi ``` That is wrong as RHS is not a valid Windows pathname. Use: ``` WSGIScriptAlias / C:/Users/Marcos/Documents/mysite/apache/django.wsgi ``` That is, no leading slash before the Windows drive specifier. Other than that, follow the mod\_wsgi documentation others have pointed out. --- Poster edited question to change what now would appear to be a typo in the post and not a problem with his configuration. If that is the case, next causes for a 403 are as follows. First is that you need to also have: ``` <Directory C:/Users/Marcos/Documents/mysite/apache> Order deny,allow Allow from all </Directory> ``` If you don't have that then Apache isn't being granted rights to serve a script from that directory and so will return FORBIDDEN (403). Second is that you do have that, but don't acknowledge that you do, and that that directory or the WSGI script file is not readable by the user that the Apache service runs as under Windows.
Have you seen <http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango> ? You need more than one line to assure that Apache will play nicely. ``` Alias /media/ /usr/local/django/mysite/media/ <Directory /usr/local/django/mysite/media> Order deny,allow Allow from all </Directory> WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi <Directory /usr/local/django/mysite/apache> Order deny,allow Allow from all </Directory> ``` The `<Directory>`, as well as appropriate file system ownership and permissions are essential. The `usr/local/django/mysite/apache` directory has your Python/Django app and the all-important `django.wsgi` file. You must provide permissions on this directory.
Installing Django with mod_wsgi
[ "", "python", "django", "apache", "mod-wsgi", "" ]
I have 3 ComboBoxes in a Form and a list of objects. I need to bind the comboboxes to 3 different members of the class from the list. (C# 3.0, .NET 3.5) I am currently doing this ``` Title_Combo.DataSource = ListContaining.GroupBy(item => item.Title).Where(item => !item.Key.Equals(string.Empty)).ToList(); Title_Combo.DisplayMember = "Key"; ``` where ListContaining is a subset of the main list of objects. Every time an item is selected in any one of those comboboxes, ListContaining is repopulated from the main list of objects based on the selected value, and all the comboboxes are reloaded: ``` ListContaining = ListFiles.Where(item => item.GetType().GetProperty(name).GetValue(item, null).Equals(int.Parse(Sender.SelectedItem.ToString()))).ToList(); ``` It loads perfectly, but the next selection in the comboboxes throws a NullReferenceException. Is this due to the fact that the list ListContaining is being rewritten, or something else? I can't figure it out. And is there a better way to handle the 3 comboboxes from the list? Your help is appreciated. EDITED: I have given up debugging this. But can anyone suggest a way to bind 3 comboboxes to a single list of objects with 3 different properties, where the controls update on index change?
Well, I got the answer. You can use a subset of objects to bind a control, that was not the cause of the problem. And I am able to handle multiple comboboxes in the way described.
This problem may occur if the style of your second combobox is DropDown rather than DropDownList; that setting normally produces the same exception you mention. Please check your controls. On second thought, if your comboboxes are related to each other, as follows: **One-to-many relation** * ComboBox: CompanyGroup * ComboBox: Company * ComboBox: Person --> If one of the above changes, the ones below are triggered. Your case is like: **Many-to-many relation** * ComboBox: Tags * ComboBox: Questions --> If the question changes, it triggers its own tags; and if the tags change, it triggers only the questions that have that tag. For this purpose, you should search the entire collection each time your combobox item changes, because, as I understand from your question, one choice triggers another choice.
Multiple Comboboxes with a list of objects
[ "", "c#", ".net", "list", "combobox", "" ]
I'm generating a unique filename for uploaded files with the following code ``` $date = date( 'U' ); $user = $_SERVER[REMOTE_ADDR]; $filename = md5($date.$user); ``` The problem is that I want to use this filename again later on in the script, but if the script takes a second to run, I'm going to get a different filename the second time I try to use this variable. For instance, I'm using an upload/resize/save image upload script. The first operation of the script is to copy and save the resized image, which I use a date function to assign a unique name to. Then the script processses the save and saves the whole upload, and assigns it a name. At the end of the script (`$thumb` and `$full` are the variables), I need to insert into a MySQL database, the filenames I used when i saved the uploads. Problem is, sometimes on large images it takes more than a second (or during the process, the seconds change) resulting in a different filename being put into the database than is what the file is actually saved under. Is this just not a good idea to use this method of naming?
AFAIK it's a great way to name the files, although I would check `file_exists()` and maybe tack on a random number. You need to store that filename in a variable and reference it again later, instead of relying on the algorithm each time. This could be stored in the user `$_SESSION`, a cookie, a GET variable, etc between pageloads. Hope that helps
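The failure mode is easy to reproduce: the name is a pure function of (second, IP), so recomputing it a second later yields a different string. This language-neutral sketch (Python for illustration; the helper name and inputs are invented) shows why computing the name once and reusing the variable fixes the mismatch:

```python
import hashlib

def make_name(timestamp, ip):
    # Mirrors md5($date.$user): the name depends only on these two inputs.
    return hashlib.md5(f"{timestamp}{ip}".encode()).hexdigest()

ip = "203.0.113.7"

# Recomputing at a later second gives a DIFFERENT name -> the DB row
# no longer matches the file actually written to disk.
name_at_save = make_name(1249500000, ip)
name_at_insert = make_name(1249500001, ip)   # one second later
print(name_at_save == name_at_insert)        # False

# Fix: compute once, reuse the same variable for the file AND the DB row.
filename = make_name(1249500000, ip)
saved_to_disk = filename
stored_in_db = filename
print(saved_to_disk == stored_in_db)         # True
```

The stored variable can live in the session, a cookie, or just a local, as the answer suggests; the point is that the hash is never recomputed.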
I would recommend storing the file name in the session (as per AI). If you store it in one of the other variables, it is more likely for the end user to be able to attack the system through it. MD5 of user concatenated with rand() would be a nice way to get a long list of unique values. Just using rand() would probably have a higher percentage of conflicts. I am not sure about the process that you are following for uploading files, but another way to handle file uploads is with PHP's built in handlers. You can upload the file and then use the "secure" methods for pulling uploaded files out of the temporary space. (the temporary space in this instance can be safely located outside of the open base dir directive to prevent tampering). is\_uploaded\_file() and move\_uploaded\_file() from: <http://php.net/manual/en/features.file-upload.post-method.php> example 2 might handle the problem you are encountering. Definitely check for an existing file in that location if you are choosing a filename on the fly. If user input is allowed in any way shape or form, validate and filter the argument to make sure it is safe. Also, if the storage folder is web accessible, make sure you munge the name and probably the extension as well. You do not want someone to be able to upload code and then be able to execute it. That officially leads to BAD activities.
Generating a time-based unique file name for uploads without creating a race condition
[ "", "php", "uniqueidentifier", "" ]
I've just decided to venture off into PHP land for fun and to learn. Having read that PHP is loosely typed and that $vars can be reused, is it true that the code below will pose no problem? ``` $x = 996; $x = mysql_query("SELECT aString FROM table1"); ``` The variable x will be stored as an int datatype with 996, then after the second line it will be stored as a string datatype with the string from the query? There won't be any casting errors?
You are correct; that's the definition of being "loosely typed". However, that may not be the best practice. <http://drupaler.co.uk/blog/baby-dont-you-loose-your-type-me/66> is a good read on your subject.
There will be no errors, except that the second line won't give you a string, [mysql\_query](http://php.net/mysql_query) returns an internal PHP type called a [resource](https://www.php.net/manual/en/language.types.resource.php) (generally some kind of opaque handle/pointer for library functions)
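For comparison, the same rebinding is unremarkable in any dynamically typed language; a Python sketch of the idea:

```python
x = 996
print(type(x).__name__)        # int

x = "result of some query"     # rebinding to a different type: no cast, no error
print(type(x).__name__)        # str
```

As in PHP, the name simply refers to whatever value was assigned last; the "type" belongs to the value, not to the variable.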
php variable datatypes
[ "", "php", "" ]
I need to catch some warnings being thrown from some php native functions and then handle them. Specifically: ``` array dns_get_record ( string $hostname [, int $type= DNS_ANY [, array &$authns [, array &$addtl ]]] ) ``` It throws a warning when the DNS query fails. `try`/`catch` doesn't work because a warning is not an exception. I now have 2 options: 1. `set_error_handler` seems like overkill because I have to use it to filter every warning in the page (is this true?); 2. Adjust error reporting/display so these warnings don't get echoed to screen, then check the return value; if it's `false`, no records is found for hostname. What's the best practice here?
**Set and restore error handler** One possibility is to set your own error handler before the call and restore the previous error handler later with `restore_error_handler()`. ``` set_error_handler(function() { /* ignore errors */ }); dns_get_record(); restore_error_handler(); ``` You could build on this idea and write a re-usable error handler that logs the errors for you. ``` set_error_handler([$logger, 'onSilencedError']); dns_get_record(); restore_error_handler(); ``` **Turning errors into exceptions** You can use `set_error_handler()` and the `ErrorException` class to turn all php errors into exceptions. ``` set_error_handler(function($errno, $errstr, $errfile, $errline) { // error was suppressed with the @-operator if (0 === error_reporting()) { return false; } throw new ErrorException($errstr, 0, $errno, $errfile, $errline); }); try { dns_get_record(); } catch (ErrorException $e) { // ... } ``` The important thing to note when using your own error handler is that it will bypass the `error_reporting` setting and pass all errors (notices, warnings, etc.) to your error handler. You can set a second argument on `set_error_handler()` to define which error types you want to receive, or access the current setting using `... = error_reporting()` inside the error handler. **Suppressing the warning** Another possibility is to suppress the call with the @ operator and check the return value of `dns_get_record()` afterwards. **But I'd advise against this** as errors/warnings are triggered to be handled, not to be suppressed.
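The "promote warnings to exceptions" pattern described above exists in other languages too; for instance, Python's warnings module can be told to raise instead of print. This is an analogy to the ErrorException approach, not PHP code, and `dns_lookup` is a made-up stand-in for a call that warns on failure:

```python
import warnings

def dns_lookup(host):
    # Stand-in for a call that merely warns on failure instead of raising.
    warnings.warn(f"lookup failed for {host}", RuntimeWarning)
    return None

with warnings.catch_warnings():
    warnings.simplefilter("error")   # promote all warnings to exceptions
    try:
        dns_lookup("nonexistent.example")
        handled = False
    except RuntimeWarning as exc:
        handled = True
        message = str(exc)

print(handled)   # True
```

The context manager also plays the role of `restore_error_handler()`: the previous filter state comes back automatically on exit.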
The solution that really works turned out to be setting simple error handler with `E_WARNING` parameter, like so: ``` set_error_handler("warning_handler", E_WARNING); dns_get_record(...) restore_error_handler(); function warning_handler($errno, $errstr) { // do something } ```
Can I try/catch a warning?
[ "", "php", "error-handling", "try-catch", "" ]
I learned a trick a while back from a DBA friend to speed up certain SQL queries. I remember him mentioning that it had something to do with how SQL Server compiles the query, and that the query path is forced to use the indexed value. Here is my original query (takes 20 seconds): ``` select Part.Id as PartId, Location.Id as LocationId FROM Part, PartEvent PartEventOuter, District, Location WHERE PartEventOuter.EventType = '600' AND PartEventOuter.AddressId = Location.AddressId AND Part.DistrictId = District.Id AND Part.PartTypeId = 15 AND District.SubRegionId = 11 AND PartEventOuter.PartId = Part.Id AND PartEventOuter.EventDateTime <= '4/28/2009 4:30pm' AND NOT EXISTS ( SELECT PartEventInner.EventDateTime FROM PartEvent PartEventInner WHERE PartEventInner.PartId = PartEventOuter.PartId AND PartEventInner.EventDateTime > PartEventOuter.EventDateTime AND PartEventInner.EventDateTime <= '4/30/2009 4:00pm') ``` Here is the "optimized" query (less than 1 second): ``` select Part.Id as PartId, Location.Id as LocationId FROM Part, PartEvent PartEventOuter, District, Location WHERE PartEventOuter.EventType = '600' AND PartEventOuter.AddressId = Location.AddressId AND Part.DistrictId = District.Id AND Part.PartTypeId = 15 AND District.SubRegionId = 11 AND PartEventOuter.PartId = Part.Id AND PartEventOuter.EventDateTime <= '4/28/2009 4:30pm' AND NOT EXISTS ( SELECT PartEventInner.EventDateTime FROM PartEvent PartEventInner WHERE PartEventInner.PartId = PartEventOuter.PartId AND EventType = EventType /* the added line */ AND PartEventInner.EventDateTime > PartEventOuter.EventDateTime AND PartEventInner.EventDateTime <= '4/30/2009 4:00pm') ``` Can anyone explain in detail why this runs so much faster? I'm just trying to get a better understanding of this.
probably because you are getting a Cartesian product without your EventType = EventType From WikiPedia: <http://en.wikipedia.org/wiki/SQL> "[SQL] makes it too easy to do a Cartesian join (joining all possible combinations), which results in "run-away" result sets when WHERE clauses are mistyped. Cartesian joins are so rarely used in practice that requiring an explicit CARTESIAN keyword may be warranted. (SQL 1992 introduced the CROSS JOIN keyword that allows the user to make clear that a Cartesian join is intended, but the shorthand "comma-join" with no predicate is still acceptable syntax, which still invites the same mistake.)" you are actually going through more rows than necessary with your first query. <http://www.fluffycat.com/SQL/Cartesian-Joins/>
Are there a large number of records with EventType = Null? Before you added the aditional restriction your subquery would have been returning all those Null records, which would then have to be scanned by the Not Exists predicate for every row in the outer query... So the more you restrict what the subquery returns, the fewer the rows that have to be scanned to verify the Not Exists... If this is the issue, it would probably be even faster if you restricted the records to EventType = '600' in the subquery as well.... ``` Select Part.Id as PartId, Location.Id as LocationId FROM Part, PartEvent PartEventOuter, District, Location WHERE PartEventOuter.EventType = '600' AND PartEventOuter.AddressId = Location.AddressId AND Part.DistrictId = District.Id AND Part.PartTypeId = 15 AND District.SubRegionId = 11 AND PartEventOuter.PartId = Part.Id AND PartEventOuter.EventDateTime <= '4/28/2009 4:30pm' AND NOT EXISTS (SELECT PartEventInner.EventDateTime FROM PartEvent PartEventInner WHERE PartEventInner.PartId = PartEventOuter.PartId AND EventType = '600' AND PartEventInner.EventDateTime > PartEventOuter.EventDateTime AND PartEventInner.EventDateTime <= '4/30/2009 4:00pm') ```
Why does this speed up my SQL query?
[ "", "sql", "optimization", "query-optimization", "performance", "" ]
I've managed to get something like this to work in another project, but not in my current one. The file locks and cannot be overwritten when saving again, even though I dispose of the bitmap b before calling Save. Any idea what I might be doing wrong? ``` Bitmap b = (Bitmap)Image.FromFile("image.png"); Bitmap bClone = (Bitmap)b.Clone(); // modify bClone here.. b.Dispose(); b = null; GC.Collect(); bClone.Save("image.png"); ```
The "Clone" method doesn't do what you want. You can use the Bitmap's copy constructor instead to create a separate image that contains the same pixels. ``` Bitmap bClone = null; using (Bitmap b = (Bitmap)Image.FromFile("image.png")) { bClone = new Bitmap(b); // modify bClone here.. } bClone.Save("image.png"); bClone.Dispose(); ```
Using FromFile you really don't have any control over the lifetime of the file object. Try FromStream instead.
GDI+ error on Bitmap.Save
[ "", "c#", "" ]
``` SELECT username, (SUM(rating)/count(*)) as TheAverage, count(*) as TheCount FROM ratings WHERE month ='Aug' AND TheCount > 1 GROUP BY username ORDER BY TheAverage DESC, TheCount DESC ``` I know that's really close (I think) but it's saying 'TheCount' doesn't exist in the WHERE clause and the ORDER clause. The table is: id, username, rating, month And I'm trying to work out the average rating for each user then order the results by average rating and number of ratings.
If you group and count, you need having: ``` SELECT username, (SUM(rating)/COUNT(*)) as TheAverage, Count(*) as TheCount FROM rating WHERE month='Aug' GROUP BY username HAVING TheCount > 1 ORDER BY TheAverage DESC, TheCount DESC ```
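The WHERE-versus-HAVING distinction above can be sanity-checked against a tiny in-memory table (sqlite3 in Python; the ratings are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ratings (username TEXT, rating REAL, month TEXT)")
conn.executemany(
    "INSERT INTO ratings VALUES (?, ?, ?)",
    [("alice", 4, "Aug"), ("alice", 2, "Aug"),
     ("alice", 9, "Jul"), ("bob", 5, "Aug")],
)

# WHERE filters rows before grouping; HAVING filters the groups afterwards,
# which is why the aggregate alias TheCount is only usable in HAVING.
rows = conn.execute(
    """
    SELECT username, SUM(rating) / COUNT(*) AS TheAverage, COUNT(*) AS TheCount
    FROM ratings
    WHERE month = 'Aug'
    GROUP BY username
    HAVING TheCount > 1
    ORDER BY TheAverage DESC, TheCount DESC
    """
).fetchall()

print(rows)  # [('alice', 3.0, 2)]
```

The July row never reaches the grouping stage (WHERE), and bob's single August rating is dropped by `HAVING TheCount > 1`.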
``` SELECT username, (SUM(rating)/count(*)) as TheAverage, count(*) as TheCount FROM ratings WHERE month ='Aug' GROUP BY username HAVING TheCount > 1 ORDER BY TheAverage DESC, TheCount DESC ``` **EDIT:** Seems I didn't look closely enough. I think it'll work now.
MySQL query: work out the average rating for each user then order the results by average rating and number of ratings
[ "", "sql", "mysql", "" ]
How does the conversion to UTC from the standard `DateTime` format work? More specifically, if I create a `DateTime` object in one time zone and then switch to another time zone and run `ToUniversalTime()` on it, how does it know the conversion was done correctly and that the time is still accurately represented?
There is no implicit timezone attached to a `DateTime` object. If you run `ToUniversalTime()` on it, it uses the timezone of the context that the code is running in. For example, if I create a `DateTime` from the epoch of 1/1/1970, it gives me the same `DateTime` object no matter where in the world I am. If I run `ToUniversalTime()` on it when I'm running the code in Greenwich, then I get the same time. If I do it while I live in Vancouver, then I get an offset `DateTime` object of -8 hours. This is why it's important to store time related information in your database as UTC times when you need to do any kind of date conversion or localization. Consider if your codebase got moved to a server facility in another timezone ;) Edit: note from Joel's answer - `DateTime` objects by default are typed as `DateTimeKind.Local`. If you parse a date and set it as `DateTimeKind.Utc`, then `ToUniversalTime()` performs no conversion. And here's an article on ["Best Practices Coding with Date Times"](http://msdn.microsoft.com/en-us/library/ms973825.aspx), and an article on [Converting DateTimes with .Net](http://msdn.microsoft.com/en-us/library/bb397769.aspx).
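The same "no implicit zone until conversion time" idea can be sketched outside .NET; a rough Python analogy (the fixed UTC-8 offset is a stand-in for a Vancouver machine's local zone and deliberately ignores DST):

```python
from datetime import datetime, timedelta, timezone

# Pretend the machine is in Vancouver (UTC-8, ignoring DST for simplicity).
vancouver = timezone(timedelta(hours=-8))

# A "local" 4:00pm in Vancouver...
local = datetime(2009, 4, 28, 16, 0, tzinfo=vancouver)

# ...normalized to UTC gains the 8-hour offset, much like ToUniversalTime().
utc = local.astimezone(timezone.utc)
print(utc.isoformat())  # 2009-04-29T00:00:00+00:00
```

The same wall-clock time interpreted under a different zone would normalize to a different UTC instant, which is exactly why storing UTC in the database is the safe choice.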
Firstly, it checks whether the `Kind` of the `DateTime` is known to be UTC already. If so, it returns the same value. Otherwise, it's assumed to be a local time - that's local to the computer it's running on, and in particular in the time zone that the computer was using when some private property was first lazily initialized. That means if you change the time zone *after* your application was started, there's a good chance it will still be using the old one. The time zone contains enough information to convert a local time to a UTC time or vice versa, although there are times that that's ambiguous or invalid. (There are local times which occur twice, and local times which never occur due to daylight saving time.) The rules for handling these cases are specified in [the documentation](http://msdn.microsoft.com/en-us/library/system.datetime.touniversaltime.aspx): > If the date and time instance value is > an ambiguous time, this method assumes > that it is a standard time. (An > ambiguous time is one that can map > either to a standard time or to a > daylight saving time in the local time > zone) If the date and time instance > value is an invalid time, this method > simply subtracts the local time from > the local time zone's UTC offset to > return UTC. (An invalid time is one > that does not exist because of the > application of daylight saving time > adjustment rules.) The returned value will have a `Kind` of `DateTimeKind.Utc`, so if you call `ToUniveralTime` on that it won't apply the offset again. (This is a vast improvement over .NET 1.1!) If you want a non-local time zone, you should use [`TimeZoneInfo`](http://msdn.microsoft.com/en-us/library/system.timezoneinfo.aspx) which was introduced in .NET 3.5 (there are hacky solutions for earlier versions, but they're not nice). 
To represent an instant in time, you should consider using [`DateTimeOffset`](http://msdn.microsoft.com/en-us/library/system.datetimeoffset.aspx) which was introduced in .NET 2.0SP1, .NET3.0SP1 and .NET 3.5. However, that still doesn't have an actual time zone associated with it - just an offset from UTC. That means you don't know what local time will be one hour later, for example - the DST rules can vary between time zones which happened to use the same offset for that particular instant. `TimeZoneInfo` is designed to take historical and future rules into account, as opposed to [`TimeZone`](http://msdn.microsoft.com/en-us/library/system.timezone.aspx) which is somewhat simplistic. Basically the support in .NET 3.5 is a lot better than it was, but still leaves something to be desired for proper calendar arithmetic. Anyone fancy porting [Joda Time](http://joda-time.sourceforge.net) to .NET? ;)
How does DateTime.ToUniversalTime() work?
[ "", "c#", ".net", "datetime", "utc", "" ]
Pastebin link: <http://pastebin.com/f40ae1bcf> The problem: I made a wx.Gauge, with the range of 50. Then a function that updates the Gauge's value when the program is idle. When the gauge is filled by around 50% it empties and doesn't show anything for a while. The value is actually 50 when it does this, and I think that when the value is 50 it should be full. Why does it do this? I also tried with a wx.Timer instead of binding to wx.EVT_IDLE but I didn't have any luck.
A few things. * I can't reproduce this on my iMac, it goes all the way to full. Python 2.5.4, wxPython 2.8.9.2 * Idle events can come at strange times. Try adding `print event` to your idle handler to see exactly when those events are coming. A timer would be best. Is the gauge moving really fast or flickering? * You can try calling gauge.Update() to force a complete redraw too. * I always just use 100 as my gauge limit, maybe just try that. An easier way than a timer could be: ``` import wx class GaugeFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, -1, "Gauge example", size=(350, 150)) panel = wx.Panel(self, -1) self.count = 0 self.gauge = wx.Gauge(panel, -1, 50, (20, 50), (250, 25)) self.update_gauge() def update_gauge(self): self.count = self.count + 1 if self.count >= 50: self.count = 0 self.gauge.SetValue(self.count) wx.CallLater(100, self.update_gauge) app = wx.PySimpleApp() GaugeFrame().Show() app.MainLoop() ```
After more tests I discovered that I must override the range of 2 units to display the gauge when completely full. On windows vista it seems not to cause problems. Does it cause problems on linux or mac?
wxPython gauge problem (skipping)
[ "", "python", "wxpython", "" ]
I'm not massively experienced with JavaScript and I'm having trouble with variable scope and jquery. I have the following structure: ``` function pass_variables() { username = "efcjoe" response = post_variables(username) alert(response) } function post_variables(username) { $.post( '/path/to/url/', { 'username': username, }, function(data) { valid = (data != 0) ? true : false // OPTION 1: If I put return here... return valid; // ... the alert box in pass_variables says "undefined" }, "text" ); // OPTION 2: If I put return here... return valid; // ... The alert box does not pop up, and Safari debug gives // me the error: "Can't find variable: valid" } ``` Am I missing something there? I think `valid` should be a global variable, and therefore option 2 should work fine. I'm really not sure about option 1. Can anyone give me any advice on the best way to get this working? Thanks a lot.
In your *option 1* you are returning from the callback function, and its return value is never used because this function is only called when the Ajax request ends. In the *option 2*, you are returning from your main function, but that return happens before the callback function assign any value to your `valid` variable. I would refactor your code in this way, without using global variables: ``` function post_variables(username){ $.post('/path/to/url/',{ 'username': username, }, function(data){ var valid = data != 0; // OPTION 3: Work in the callback function alert(username); alert(valid); // OPTION 4: Pass the values and work on another function otherFunction(username, valid); },"text"); } function otherFunction(username, isValid){ //... } ```
Ajax calls are asynchronous, which means they get called but do not wait around for execution to complete. Basically your alert is firing before the ajax request has completed and run the callback function to change your variable. The best thing you can do is actually pass a function to run when the ajax request has completed. This also negates the need for global variables, which are frowned upon since other plugins/scripts can alter their state and leave your script open to errors, flaws etc. E.g. ``` function foobar(){ //call function to do post request and also pass a function to run //when post has returned runPostRequest( callbackFn ); } function runPostRequest(callback){ $.post( '/foo', callback ); } function callbackFn( data ){ console.log('post request complete'); } ```
Jquery $.post() variable scope
[ "", "javascript", "jquery", "scope", "" ]
I understand the theoretical concept that when assigning one reference type variable to another, only the reference is copied, not the object; and when assigning one value type variable to another, the object is copied. But I cannot spot the difference in the code. Would someone kindly point out the difference between the following two code blocks? Thank you! REFERENCE TYPE ASSIGNMENT ``` using System; class Employee { private string m_name; public string Name { get { return m_name; } set { m_name = value; } } } class Program { static void Main() { Employee joe = new Employee(); joe.Name = "Joe"; Employee bob = new Employee(); bob.Name = "Bob"; Console.WriteLine("Original Employee Values:"); Console.WriteLine("joe = " + joe.Name); Console.WriteLine("bob = " + bob.Name); // assign joe reference to bob variable bob = joe; Console.WriteLine(); Console.WriteLine("Values After Reference Assignment:"); Console.WriteLine("joe = " + joe.Name); Console.WriteLine("bob = " + bob.Name); joe.Name = "Bobbi Jo"; Console.WriteLine(); Console.WriteLine("Values After Changing One Instance:"); Console.WriteLine("joe = " + joe.Name); Console.WriteLine("bob = " + bob.Name); Console.ReadKey(); } } ``` VALUE TYPE ASSIGNMENT ``` using System; struct Height { private int m_inches; public int Inches { get { return m_inches; } set { m_inches = value; } } } class Program { static void Main() { Height joe = new Height(); joe.Inches = 71; Height bob = new Height(); bob.Inches = 59; Console.WriteLine("Original Height Values:"); Console.WriteLine("joe = " + joe.Inches); Console.WriteLine("bob = " + bob.Inches); // assign joe reference to bob variable bob = joe; Console.WriteLine(); Console.WriteLine("Values After Value Assignment:"); Console.WriteLine("joe = " + joe.Inches); Console.WriteLine("bob = " + bob.Inches); joe.Inches = 65; Console.WriteLine(); Console.WriteLine("Values After Changing One Instance:"); Console.WriteLine("joe = " + joe.Inches); Console.WriteLine("bob = " + bob.Inches); Console.ReadKey();
} } ```
Well, the obvious difference is that with the class example, it appears both joe and bob changed in the last part there, to the same value. In the struct example, they keep their separate values, simply because each variable is a whole struct value by itself, not just a reference to a common object in memory somewhere. The main code-wise difference is the type you use, *class* or *struct*: this dictates whether you're creating a reference type or a value type.
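The same aliasing-versus-copy behaviour can be illustrated outside C#; a loose Python analogy (dicts stand in for the class; ints only behave value-like here because they are immutable, so this is an analogy, not the same mechanism):

```python
# Reference-like: both names point at the same dict.
joe = {"name": "Joe"}
bob = joe          # copies the reference, not the dict
joe["name"] = "Bobbi Jo"
print(bob["name"])  # Bobbi Jo -- both "changed"

# Value-like: rebinding one name leaves the other untouched.
joe_inches = 71
bob_inches = joe_inches  # copies the value
joe_inches = 65
print(bob_inches)   # 71 -- unchanged
```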
One is a structure and the other is a class. This seems like an overly complicated example, involving not just value and reference differences but the differences between classes and structs as well. When one struct is assigned to another a copy is made. When one class is assigned to another only the reference changes.
C# Reference type assignment VS value type assignment
[ "", "c#", "" ]
I'm developing an app that handles sets of financial series data (input as csv or open document), one set could be say 10's x 1000's up to double precision numbers (simplifying, but that's what matters). I plan to do operations on that data (eg. sum, difference, averages etc.), as well as generation of, say, another column based on computations on the input. This will be between columns (row level operations) on one set and also between columns on many (potentially all) sets at the row level also. I plan to write it in Python and it will eventually need an intranet facing interface to display the results/graphs etc.; for now, csv output based on some input parameters will suffice. What is the best way to store the data and manipulate it? So far I see my choices as being either (1) to write csv files to disk and trawl through them to do the math or (2) I could put them into a database and rely on the database to handle the math. My main concern is speed/performance as the number of datasets grows, as there will be inter-dataset row level math that needs to be done. -Has anyone had experience going down either path and what are the pitfalls/gotchas that I should be aware of? -What are the reasons why one should be chosen over another? -Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design? -Is there any project or framework out there to help with this type of task? -Edit- More info: The rows will all be read in order, BUT I may need to do some resampling/interpolation to match the differing input lengths as well as differing timestamps for each row. Since each dataset will always have a differing length that is not fixed, I'll have some scratch table/memory somewhere to hold the interpolated/resampled versions. I'm not sure if it makes more sense to try to store this (and try to upsample/interpolate to a common higher length) or just regenerate it each time it's needed.
**"I plan to do operations on that data (eg. sum, difference, averages etc.) as well including generation of say another column based on computations on the input."** This is the standard use case for a data warehouse star-schema design. Buy Kimball's The Data Warehouse Toolkit. Read (and understand) the star schema before doing anything else. **"What is the best way to store the data and manipulate?**" A Star Schema. You can implement this as flat files (CSV is fine) or RDBMS. If you use flat files, you write simple loops to do the math. If you use an RDBMS you write simple SQL *and* simple loops. **"My main concern is speed/performance as the number of datasets grows"** Nothing is as fast as a flat file. Period. RDBMS is slower. The RDBMS value proposition stems from SQL being a relatively simple way to specify `SELECT SUM(), COUNT() FROM fact JOIN dimension WHERE filter GROUP BY dimension attribute`. Python isn't as terse as SQL, but it's just as fast and just as flexible. Python competes against SQL. **"pitfalls/gotchas that I should be aware of?"** DB design. If you don't get the star schema and how to separate facts from dimensions, all approaches are doomed. Once you separate facts from dimensions, all approaches are approximately equal. **"What are the reasons why one should be chosen over another?"** RDBMS slow and flexible. Flat files fast and (sometimes) less flexible. Python levels the playing field. **"Are there any potential speed/performance pitfalls/boosts that I need to be aware of before I start that could influence the design?"** Star Schema: central fact table surrounded by dimension tables. Nothing beats it. **"Is there any project or framework out there to help with this type of task?"** Not really.
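The "flat files plus simple loops" path above can be sketched in a few lines (csv module; the column layout is invented for the example):

```python
import csv
import io

# Made-up fact data: timestamp, series value. A real app would open a file.
raw = io.StringIO("ts,price\n1,10.0\n2,12.0\n3,14.0\n")

rows = list(csv.DictReader(raw))
values = [float(r["price"]) for r in rows]

# Row-level math is just loops/comprehensions over the parsed rows.
total = sum(values)
average = total / len(values)
diffs = [b - a for a, b in zip(values, values[1:])]  # first differences

print(total, average, diffs)  # 36.0 12.0 [2.0, 2.0]
```

Derived columns (like `diffs`) can be written back out with `csv.writer`, which keeps the whole pipeline in plain files.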
For speed optimization, I would suggest two other avenues of investigation beyond changing your underlying storage mechanism: **1) Use an intermediate data structure.** If maximizing speed is more important than minimizing memory usage, you may get good results out of using a different data structure as the basis of your calculations, rather than focusing on the underlying storage mechanism. This is a strategy that, in practice, has reduced runtime in projects I've worked on dramatically, regardless of whether the data was stored in a database or text (in my case, XML). While sums and averages will require runtime in only [O(n)](http://en.wikipedia.org/wiki/Big_O_notation), more complex calculations could easily push that into O(n^2) without applying this strategy. O(n^2) would be a performance hit that would likely have far more of a perceived speed impact than whether you're reading from CSV or a database. An example case would be if your data rows reference other data rows, and there's a need to aggregate data based on those references. So if you find yourself doing calculations more complex than a sum or an average, you might explore data structures that can be created in O(n) and would keep your calculation operations in O(n) or better. As Martin pointed out, it sounds like your whole data sets can be held in memory comfortably, so this may yield some big wins. What kind of data structure you'd create would be dependent on the nature of the calculation you're doing. **2) Pre-cache.** Depending on how the data is to be used, you could store the calculated values ahead of time. As soon as the data is produced/loaded, perform your sums, averages, etc., and store those aggregations alongside your original data, or hold them in memory as long as your program runs. If this strategy is applicable to your project (i.e. if the users aren't coming up with unforeseen calculation requests on the fly), reading the data shouldn't be prohibitively long-running, whether the data comes from text or a database.
Store data series in file or database if I want to do row level math operations?
[ "", "python", "database", "database-design", "file-io", "" ]
I realize this is probably a fundamental thing I should know, but I am teaching myself C# and asp.net so I am a little lost at this point. Right now I have 2 pages. One is an .aspx (with aspx.cs file included) that is blank, and HTML is generated for it from a Page_Load function in the cs file. The HTML is very simple and it is just an image and some text. The second file is a shtml file which has lots of things: serverside includes, editable and noneditable areas. I want to put my webapp into this file. My asp.net app uses Response.Write to just write out the html. This does not flow well with this page, as all that does is write it at the top of the page, because it is run first and generates the output at the top. How can I make it so that I can generate HTML code inside the page, like within a specific DIV, so it does not mess up the page? Where would a starting point be in learning how to do that? I should note that I do not need any interaction from the user. All of this should generate right away.
So, if I have understood the question correctly: you already have an existing page/application (the shtml file) that you want to extend with some new ASP.NET components, by including output from the ASP.NET page in the existing page? This is not something that is supported out of the box by ASP.NET, and you won't be able to execute the aspx page using SSI. But you can do the opposite: an ASP.NET page does support [SSI](http://dotnetperls.com/ssi-use-aspnet). So if you are not using any other scripts in the shtml file this might be a solution. Otherwise the only common solutions would be either to use an AJAX framework and let it call the ASP.NET from within the existing pages, or to use an iframe solution. In both cases the client will be responsible for making the calls to the ASP.NET pages and merging the results. And then you have an issue with controlling the output from the ASP.NET page. The Polymorphic Podcast has a good article on [Controlling HTML in ASP.NET WebForms](http://polymorphicpodcast.com/shows/controlhtml/).
I think you need to read up on some basic ASP.Net documentation and tutorials. Response.Write is not the correct approach - you need to understand how the ASP.Net page lifecycle works and how WebControls are used to render the html. ASP.Net tries to abstract away having to create your html manually for the most part.
ASP.NET C# Properly Generating HTML for Page
[ "", "c#", "asp.net", "html", "" ]
I have a program in C# (Windows Forms) which draws some rectangles on a picturebox. They can be drawn at an angle too (rotated). I know each of the rectangles' start point (upper-left corner), their size(width+height) and their angle. Because of the rotation, the start point is not necessarely the upper-left corner, but that does not matter here. Then when I click the picturebox, I need to check in which rectangle (if any) I have clicked. So I need some way of checking if a point is in a rectangle, but I also need to take into account the rotation of each rectangle. Does anybody know of a way to do this in C#?
Is it possible to apply the same rotation applied to the rectangle to the point, in reverse? For example, if Rectangle A is rotated 45 degrees clockwise from its origin (upper left corner), you would then just rotate point B around the same origin 45 degrees COUNTER clockwise, then check to see if it falls within Rectangle A pre-rotation.
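That rotate-the-point-backwards test is only a few lines of trigonometry; a sketch (assuming the angle is in radians, counterclockwise-positive in the usual math convention, and rotation about the rectangle's upper-left corner):

```python
import math

def point_in_rotated_rect(px, py, rx, ry, width, height, angle):
    """Rotate the point about the rectangle's origin by -angle, then do an
    ordinary axis-aligned containment test. `angle` is the rectangle's
    rotation in radians about its upper-left corner (rx, ry)."""
    # Translate so the rectangle's origin sits at (0, 0).
    tx, ty = px - rx, py - ry
    # Apply the inverse rotation to the point.
    cos_a, sin_a = math.cos(-angle), math.sin(-angle)
    ux = tx * cos_a - ty * sin_a
    uy = tx * sin_a + ty * cos_a
    # Now the rectangle is axis-aligned at the origin.
    return 0 <= ux <= width and 0 <= uy <= height

# Unrotated sanity check...
print(point_in_rotated_rect(5, 5, 0, 0, 10, 10, 0.0))          # True
# ...and points that are only classified correctly once the tilt is undone.
print(point_in_rotated_rect(0, 7, 0, 0, 10, 10, math.pi / 4))  # True
print(point_in_rotated_rect(7, 0, 0, 0, 10, 10, math.pi / 4))  # False
```

If the on-screen convention is clockwise-positive (as in GDI+), negate the angle before calling; the structure of the test is the same.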
You could keep a second, undisplayed image where you draw duplicates of the rectangles, each uniquely colored. When the user clicks on the picturebox, find the color of the corresponding pixel in the 2nd image, which will identify which rectangle was clicked.
Check if a point is in a rotated rectangle (C#)
[ "", "c#", "winforms", "math", "graphics", "geometry", "" ]
Looking at an open source code base I came across this code: ``` #include "StableHeaders.h" #include "polygon.h" #include "exception.h" #include "vector.h" ... ``` Now StableHeaders.h is a precompiled header which is included by a 'control' cpp to force its generation. The three includes that appear after the precompiled header are also included in the StableHeaders.h file anyway. My question is, are these files included twice so that the code base will build on compilers that don't support precompiled headers? As I'm assuming that include guards/header caching will make the multiple includes redundant anyway... **EDIT** btw, the stableheaders.h file has a check for win32 (roughly) so again I'm assuming that the includes inside stableheaders.h won't be included on compilers that don't support precompiled headers.
Compilers that don't support precompiled headers would just include StableHeaders.h and reparse it every time (rather than using the precompiled file). It won't cause any problems, nor does it fix any problems for certain compilers, as you asked. I think it's just a minor 'mistake' that probably happened over time during development.
I think you answered the question yourself! Precompiled headers are a compiler feature. If the guards are present the headers will not be included twice, in any case.
Small question about precompiled headers
[ "", "c++", "precompiled-headers", "" ]
I have a class: ``` public abstract class SendAgencyFileComponent : ISendAgencyFileComponent { public AgencyOutput agencyOutput; public TDXDataTypes.DB.Entity client; public TDXDataTypes.DB.Entity agency; public SendAgencyFileComponent(AgencyOutput agencyOutput, TDXDataTypes.DB.Entity client, TDXDataTypes.DB.Entity agency) { this.agencyOutput = agencyOutput; this.client = client; this.agency = agency; } } ``` I have a number of classes that inherit from this class, that reside in various DLLs (including the one this is being called from, but in a different location). I need to be able to instantiate an instance of this class from the DLL location and class name. Currently I am using: ``` System.Reflection.Assembly assembly = System.Reflection.Assembly.LoadFrom( "C:\\Program Files\\RMIS\\" + format.AssemblyName + ".dll"); return assembly.CreateInstance( format.ClassName, true, System.Reflection.BindingFlags.CreateInstance, null, new Object[] { agencyOutput, client, agency }, System.Globalization.CultureInfo.CurrentCulture, null ) as DotNet_WS_Components.ISendAgencyFileComponent; ``` But I keep getting the error: Constructor on type 'TDXDataTypes.DotNet\_WS\_Components.InternationalAgencyFileOut' not found. I'm sure my arguments match the constructor perfectly, and when loading the class from the same assembly using Activator.CreateInstance, it works fine: ``` System.Runtime.Remoting.ObjectHandle sendFilehandle = Activator.CreateInstance( format.AssemblyName, format.ClassName, true, System.Reflection.BindingFlags.CreateInstance, null, new Object[] { agencyOutput, client, agency }, System.Globalization.CultureInfo.CurrentCulture, null, null); return (TDXDataTypes.DotNet_WS_Components.ISendAgencyFileComponent)sendFilehandle.Unwrap(); ``` The particular example I'm working on at the moment is: ``` Webservice -> calls TDXDataTypes dll method through referenced DLL -> calls TDXDataTypes dll class (above) using reflection from a different folder ```
I managed to get this working with a simple workaround. For any calls to the classes within the same assembly, I use Activator.CreateInstance, and for any calls to a different DLL, I use the other method.
Since your BindingFlags deviate in the two examples you're giving, and since getting them wrong will give you the stated exception, I'm guessing that's your problem. UPDATE: If that's not the case I'd compare the assembly versions ``` System.Reflection.Assembly assembly = System.Reflection.Assembly.LoadFrom( "C:\\Program Files\\RMIS\\" + format.AssemblyName + ".dll"); ``` might not return the same version of the assembly (or even the same assembly) that the call to Activator is using, so there's a potential risk that the type definition differs. Try dumping the value of format.ClassName, the assembly name/version/culture/key-hash (from the call to LoadFrom) and obj.GetType().AssemblyQualifiedName of the ISendAgencyFileComponent object you can create, and compare.
Instantiate an instance from a specified assembly that inherits from a base class - apparently very difficult problem
[ "", "c#", "reflection", "inheritance", "" ]
I'm trying to find out if an Element in a Django model exists. I think that should be very easy to do, but couldn't find any elegant way in the [Making queries](http://docs.djangoproject.com/en/dev/topics/db/queries/) section of the Django documentation. The problem I have is that I've thousands of screenshots in a directory and need to check if they are in the database that is supposed to store them. So I'm iterating over the filenames and want to see for each of them if a corresponding element exists. Having a model called Screenshot, the only way I could come up with is ``` filenames = os.listdir(settings.SCREENSHOTS_ON_DISC) for filename in filenames: exists = Screenshot.objects.filter(filename=filename) if exists: ... ``` Is there a nicer/ faster way to do this? Note that a screenshot can be in the database more than once (thus I didn't use .get).
If your `Screenshot` model has a lot of attributes, then the code you showed is doing unnecessary work for your specific need. For example, you can do something like this: ``` files_in_db = Screenshot.objects.values_list('filename', flat=True).distinct() ``` which will give you a list of all filenames in the database, and generate SQL to only fetch the filenames. It won't try to create and populate Screenshot objects. If you have ``` files_on_disc = os.listdir(settings.SCREENSHOTS_ON_DISC) ``` then you can iterate over one list looking for membership in the other, or make one or both lists into sets to find common members etc.
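Once the filename list is out of the database, the disk-versus-database comparison is a plain set operation; a sketch with made-up stand-ins for the queryset result and `os.listdir`:

```python
# Stand-ins for Screenshot.objects.values_list(...) and os.listdir(...).
files_in_db = ["a.png", "b.png", "b.png", "c.png"]   # duplicates allowed in the DB
files_on_disc = ["a.png", "b.png", "d.png"]

# Set membership is O(1) per lookup, so this beats one query per file.
known = set(files_in_db)
missing_from_db = sorted(f for f in files_on_disc if f not in known)
orphaned_in_db = sorted(set(files_in_db) - set(files_on_disc))

print(missing_from_db, orphaned_in_db)  # ['d.png'] ['c.png']
```

The whole job then costs one database query plus one directory listing, instead of a query per screenshot.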
You could try: ``` Screenshot.objects.filter(filename__in = filenames) ``` That will give you a list of all the screenshots you do have. You could compare the two lists and see what doesn't exist between the two. That should get you started, but you might want to tweak the query for performance/use.
Check if an element exists
[ "", "python", "django", "performance", "" ]
I'm making a login system, but when a user logs in, it doesn't actually store any of the data I want it to in the session. I even checked the session's file, and it was empty. I have session_start(); on all the pages. What else could I be doing wrong? Here's the code for the two main pages. The login code: ``` <? if ($DEBUG == true) { error_reporting(E_ALL); } require "header.php"; require_once "dbinterface.php"; require_once "user.class.php"; require_once "config.inc.php"; $db = new db($DB['host'], $DB['user'], $DB['pass'], $DB['database']); $u_result = $db->run("select user_id from users where user_name = '" . $db->escape($_POST['user_name']) . "'"); if ($u_result == false) { $url = 'Location: error.php?id=8'; header($url); } if (count($u_result) < 1) { $url = 'Location: error.php?id=3'; header($url); } $user = new user($u_result[0]['user_id']); if ($user->match_password($_POST['pass']) == true) { $_SESSION['authenticated'] = true; $_SESSION['user_id'] = $u_result[0]['user_id']; $_SESSION['user'] = $user; } else { $url = 'Location: error.php?id=4'; header($url); } session_write_close(); header('Location: index.php'); ?> ``` The header that gets included in every page: ``` <?php if (!session_start()) { $url = "Location: error.php?id=13"; header($url); } ?> ``` A little background: * windows 7 (also tried on windows server 2008, but currently on 7) * PHP 5 * locally hosted * problem is present for everyone * problem exists in all browsers
Here are a couple suggestions *(I don't really know what's happening and/or why ; so they are only suggestions ; maybe one will solve the problem ^^ )*. First of all, a couple of questions : *(They matter at least if none of these suggestions does the trick)* * Which version of PHP / Apache are you using ? * Are you on Windows ? Linux ? * If you are on your "production" server, what hosting service are you using ? Maybe there's something special about it ? * Is the problem present for **every one** ? + Is there always a problem when you are browsing the site ? + Is it still present when you are accessing the site from another browser ? + What about from another computer ? * If you use something like `var_dump($_SESSION); die;` at the end of the script that sets data in session, what does it give ? First idea : what if you set some header to disable caching by the browser ? Stuff like this, for instance : ``` session_start(); header("Cache-control: private"); ``` Second idea (at least if you are on windows) : did you try disabling your antivirus / firewall ? Is the session cookie correctly created in the client's browser ? If you are using sub-domains (or not) : is the cookie's domain OK ? What about its expiration date ? Third idea : * you said `error_reporting` is set to `E_ALL`, which is nice * what about `display_errors` ? Is it set to On so that errors get displayed ? * Is there anything interesting in PHP/Apache's `error_log` ? Another one : Are you sure there is absolutely nothing that gets to the output before the `session_start` ? Not even white spaces ? Yet another one : Are you sure about permissions on the directories / files ? * Permission to write in a directory means you can create new files, and/or delete old ones. 
+ But, if I remember correctly, not that you can modify them * To modify files, you need write access on the files too + Actually, your webserver needs write access to those files ^^ What are the permissions on the session's directory, and on the (empty) files that get created ? I'm beginning to run out of ideas... With a bit of luck, maybe one of those will be the right one... Or help you find out what the right one would be ! Good luck !
A probable cause is that execution continues after the header('Location...') statements. However it looks like you want it to stop, so you should add 'exit;' after redirecting to error.php. E.g.: ``` if ($u_result == false) { $url = 'Location: error.php?id=8'; header($url); exit; } ``` This could also be part of your problem, as you never go to error.php and see the error code. The last line is always executed: ``` header('Location: index.php'); ``` And since header()'s default behavior is to replace existing headers, you always go to index.php no matter what.
PHP Session Data Not Being Stored
[ "", "php", "session", "" ]