I'm looking for a robust, easy-to-use, regular expression evaluator for native C++, either platform-independent or Windows-specific. Requirements: * Can't use Boost or ACE regex libraries (unfortunately) * Can't use .NET Regex (or any managed code) The main requirement is that it should be standalone and open.
C++11 now includes support for regular expressions, and it is platform independent. You just need a recent compiler. Check the following list to see which ones support it: <http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport> Hope it helps
Try [libpcre](http://www.pcre.org/). If you're stuck on Windows, they have a Windows port which should work. I know [e-texteditor](http://e-texteditor.com) uses it, so at least that's proof it works :-)
Easy-to-Use Regular Expression Support in C++?
[ "c++", "regex", "open-source" ]
I'd like to use a database to store i18n key/value pairs so we can modify / reload the i18n data at runtime. Has anyone done this? Or does anyone have an idea of how to implement this? I've read several threads on this, but I haven't seen a workable solution. I'm specifically referring to something that would work with the JSTL tags such as

```
<fmt:setLocale>
<fmt:bundle>
<fmt:setBundle>
<fmt:message>
```

I think this will involve extending ResourceBundle, but when I tried this I ran into problems that had to do with the way the JSTL tags get the resource bundle.
Are you just asking how to store UTF-8/16 characters in a DB? In MySQL it's just a matter of making sure you build with UTF8 support and setting that as the default, or specifying it at the column or table level. I've done this in Oracle and MySQL before. Create a table, cut and paste some i18n data into it, and see what happens... you might be set already.. or am I completely missing your point?

Edit: to be more explicit... I usually implement a three column table... language, key, value... where "value" contains potentially foreign language words or phrases... "language" contains some language key and "key" is an English key (i.e. login.error.password.dup)... language and key are indexed... I've then built interfaces on a structure like this that shows each key with all its translations (values)... it can get fancy and include audit trails and "dirty" markers and all the other stuff you need to enable translators and data entry folk to make use of it..

Edit 2: Now that you added the info about the JSTL tags, I understand a bit more... I've never done that myself.. but I found this old info on [theserverside](http://www.theserverside.com/discussions/thread.tss?thread_id=27390)...

```
HttpSession session = .. [get hold of the session]
ResourceBundle bundle = new PropertyResourceBundle(toInputStream(myOwnProperties))
    [toInputStream just stores the properties into an inputstream]
Locale locale = .. [get hold of the locale]
javax.servlet.jsp.jstl.core.Config.set(session, Config.FMT_LOCALIZATION_CONTEXT,
    new LocalizationContext(bundle, locale));
```
I finally got this working with danb's help above. This is my resource bundle class and resource bundle control class. I used this code from @danb's answer:

```
ResourceBundle bundle = ResourceBundle.getBundle("AwesomeBundle", locale, DbResourceBundle.getMyControl());
javax.servlet.jsp.jstl.core.Config.set(actionBeanContext.getRequest(), Config.FMT_LOCALIZATION_CONTEXT,
        new LocalizationContext(bundle, locale));
```

and wrote this class:

```
public class DbResourceBundle extends ResourceBundle {
    private Properties properties;

    public DbResourceBundle(Properties inProperties) {
        properties = inProperties;
    }

    @Override
    @SuppressWarnings(value = { "unchecked" })
    public Enumeration<String> getKeys() {
        return properties != null ? ((Enumeration<String>) properties.propertyNames()) : null;
    }

    @Override
    protected Object handleGetObject(String key) {
        return properties.getProperty(key);
    }

    public static ResourceBundle.Control getMyControl() {
        return new ResourceBundle.Control() {

            @Override
            public List<String> getFormats(String baseName) {
                if (baseName == null) {
                    throw new NullPointerException();
                }
                return Arrays.asList("db");
            }

            @Override
            public ResourceBundle newBundle(String baseName, Locale locale, String format, ClassLoader loader,
                    boolean reload) throws IllegalAccessException, InstantiationException, IOException {
                if ((baseName == null) || (locale == null) || (format == null) || (loader == null))
                    throw new NullPointerException();
                ResourceBundle bundle = null;
                if (format.equals("db")) {
                    Properties p = new Properties();
                    DataSource ds = (DataSource) ContextFactory.getApplicationContext().getBean("clinicalDataSource");
                    Connection con = null;
                    Statement s = null;
                    ResultSet rs = null;
                    try {
                        con = ds.getConnection();
                        StringBuilder query = new StringBuilder();
                        query.append("select label, value from i18n where bundle='"
                                + StringEscapeUtils.escapeSql(baseName) + "' ");
                        if (StringUtils.isNotBlank(locale.getCountry())) {
                            query.append("and country='" + StringEscapeUtils.escapeSql(locale.getCountry()) + "' ");
                        }
                        if (StringUtils.isNotBlank(locale.getLanguage())) {
                            query.append("and language='" + StringEscapeUtils.escapeSql(locale.getLanguage()) + "' ");
                        }
                        if (StringUtils.isNotBlank(locale.getVariant())) {
                            query.append("and variant='" + StringEscapeUtils.escapeSql(locale.getVariant()) + "' ");
                        }
                        s = con.createStatement();
                        rs = s.executeQuery(query.toString());
                        while (rs.next()) {
                            p.setProperty(rs.getString(1), rs.getString(2));
                        }
                    } catch (Exception e) {
                        throw new RuntimeException("Can not build properties: " + e, e);
                    } finally {
                        DbUtils.closeQuietly(con, s, rs);
                    }
                    bundle = new DbResourceBundle(p);
                }
                return bundle;
            }

            @Override
            public long getTimeToLive(String baseName, Locale locale) {
                return 1000 * 60 * 30;
            }

            @Override
            public boolean needsReload(String baseName, Locale locale, String format, ClassLoader loader,
                    ResourceBundle bundle, long loadTime) {
                return true;
            }
        };
    }
}
```
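Since the class above can't run without a live `DataSource`, here is a stripped-down, self-contained sketch of the same `ResourceBundle` mechanics backed by an in-memory `Properties` object (the class name and sample key are invented for illustration):

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.Properties;
import java.util.ResourceBundle;

public class InMemoryBundleDemo {
    // Same idea as DbResourceBundle, minus the JDBC loading.
    static class MapResourceBundle extends ResourceBundle {
        private final Properties props;

        MapResourceBundle(Properties props) {
            this.props = props;
        }

        @Override
        protected Object handleGetObject(String key) {
            return props.getProperty(key);
        }

        @Override
        public Enumeration<String> getKeys() {
            return Collections.enumeration(props.stringPropertyNames());
        }
    }

    public static void main(String[] args) {
        // Pretend these rows came from the i18n table.
        Properties p = new Properties();
        p.setProperty("login.error.password.dup", "Duplicate password");
        ResourceBundle bundle = new MapResourceBundle(p);
        System.out.println(bundle.getString("login.error.password.dup")); // Duplicate password
    }
}
```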
Database backed i18n for java web-app
[ "java", "internationalization" ]
Why should I or shouldn't I use dirty reads: ``` set transaction isolation level read uncommitted ``` in SQL Server?
From [MSDN](http://msdn.microsoft.com/en-us/library/aa259216(SQL.80).aspx): > When this option is set, it is possible to read uncommitted or dirty data; values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. Simply put, when you are using this isolation level, and you are performing multiple queries on an active table as part of one transaction, there is no guarantee that the information returned to you within different parts of the transaction will remain the same. You could query the same data twice within one transaction and get different results (this might happen in the case where a different user was updating the same data in the midst of your transaction). This can obviously have severe ramifications for parts of your application that rely on data integrity.
Generally, when you need to do sizeable (or frequent) queries against busy tables, where read committed would possibly be blocked by locks from uncommitted transactions, but ONLY when you can live with inaccurate data. As an example, on a gaming web site I worked on recently there was a summary display of some stats about recent games. This was all based on dirty reads; it was more important for us to include than exclude the transactional data not yet committed (we knew anyway that few, if any, transactions would be backed out), and we felt that on average the data would be more accurate that way.
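If you do decide dirty reads are acceptable, there are two ways to get them in SQL Server; a sketch (the table and column names here are invented):

```sql
-- Session level: every read in this session may see uncommitted data
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT GameId, Score FROM RecentGames;

-- Per-table: the NOLOCK hint gives the same behavior for just one table in one query
SELECT GameId, Score FROM RecentGames WITH (NOLOCK);
```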
SQL Server - Dirty Reads Pros & Cons
[ "sql", "sql-server" ]
When running a Java app from eclipse my ImageIcon shows up just fine. But after creating a jar the path to the image obviously gets screwed up. Is there a way to extract an image from the jar at runtime so I can then open it up? Or, is there a better way to do this? I'd like to distribute a single jar file if possible.
To create an `ImageIcon` from an image file within the same jar your code is loaded from:

```
new javax.swing.ImageIcon(getClass().getResource("myimage.jpeg"))
```

`Class.getResource` returns a URL of a resource (or `null`!). `ImageIcon` has a constructor that loads from a URL. To construct a URL for a resource in a jar not on your "classpath", see the documentation for `java.net.JarURLConnection`.
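A small defensive wrapper makes the failure mode obvious when the resource path inside the jar is wrong. This is only a sketch; the class name and image path are placeholders:

```java
import javax.swing.ImageIcon;
import java.net.URL;

public class IconLoader {
    // Returns an ImageIcon for a classpath resource, or throws with a clear
    // message instead of the bare NullPointerException you get otherwise.
    public static ImageIcon loadIcon(Class<?> anchor, String path) {
        URL url = anchor.getResource(path);
        if (url == null) {
            throw new IllegalArgumentException("Resource not found on classpath: " + path);
        }
        return new ImageIcon(url);
    }

    public static void main(String[] args) {
        // "myimage.jpeg" is a placeholder; with no such resource on the
        // classpath, this demonstrates the error path.
        try {
            ImageIcon icon = loadIcon(IconLoader.class, "myimage.jpeg");
            System.out.println("Loaded: " + icon.getIconWidth() + "x" + icon.getIconHeight());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```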
You can try something like:

```
InputStream stream = this.getClass().getClassLoader().getResourceAsStream("images/image.jpg");
```

(Note: `ClassLoader.getResourceAsStream` expects the path *without* a leading slash; the leading-slash form is for `Class.getResourceAsStream`.) In your JAR file, you might have a directory structure of:

> MyJAR.jar
> - com (class files in here)
> - images
>   - image.jpg
Java Swing: Displaying images from within a Jar
[ "java", "swing", "embedded-resource", "imageicon" ]
I want to return `StudentId` to use elsewhere outside of the *scope* of the `$.getJSON()` ``` j.getJSON(url, data, function(result) { var studentId = result.Something; }); //use studentId here ``` I would imagine this has to do with scoping, but it doesn't seem to work the same way *c#* does
Yeah, my previous answer does not work because I didn't pay any attention to your code. :) The problem is that the anonymous function is a callback function - i.e. getJSON is an async operation that will return at some indeterminate point in time, so even if the scope of the variable were outside of that anonymous function (i.e. a closure), it would not have the value you would think it should:

```
var studentId = null;
j.getJSON(url, data, function(result) {
    studentId = result.Something;
});
// studentId is still null right here, because this line
// executes before the line that sets its value to result.Something
```

Any code that you want to execute with the value of studentId set by the getJSON call needs to happen either **within** that callback function or **after** the callback executes.
> it doesn't seem to work the same way c# does

To accomplish scoping similar to C#, disable async operations and set dataType to json:

```
var mydata = [];
$.ajax({
    url: 'data.php',
    async: false,
    dataType: 'json',
    success: function (json) {
        mydata = json.whatever;
    }
});
alert(mydata); // has value of json.whatever
```
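The same point can be demonstrated without jQuery. Below, `getJSONMock` is an invented stand-in for `$.getJSON` (the `42` payload is also made up); wrapping the callback in a Promise, a language feature added well after this question was asked, gives you a value you can effectively "return" to the caller:

```javascript
// Stand-in for $.getJSON: invokes the callback asynchronously.
function getJSONMock(url, data, callback) {
  setTimeout(() => callback({ Something: 42 }), 0);
}

// Wrap the callback API in a Promise so callers can await the value.
function fetchStudentId(url, data) {
  return new Promise((resolve) => {
    getJSONMock(url, data, (result) => resolve(result.Something));
  });
}

let studentId = null;
fetchStudentId("/students", {}).then((id) => {
  studentId = id;
  console.log("inside callback:", studentId); // inside callback: 42
});
console.log("right after the call:", studentId); // right after the call: null
```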
How can I return a variable from a $.getJSON function
[ "javascript", "jquery", "ajax", "scope", "return-value" ]
How can you allow a PHP script to write to a file with high-security restrictions, such as only allowing a single user to write to it? The difficulty seems to be that a PHP script is running as a low-permissions user (maybe apache, or www, or nobody?), and even if I `chown` apache the\_writable\_file, the directory it's in might not be writable for the low-level user. In general, what's the usual way that PHP can work with local files in a secure way?
Unfortunately, in shared hosts that use **mod\_php, there is no way to restrict access** to secure files to your web app and login user. The solution is to **run your web app as your login user**. When you do that, UNIX file permissions can correctly lock everyone else out. There are several ways to implement that, including [SuExec](http://httpd.apache.org/docs/2.0/suexec.html), [suPHP](http://www.suphp.org/Home.html), or running PHP with FastCGI with [mod\_fcgid](http://fastcgi.coremail.cn/) or [mod\_proxy\_fcgid](http://mproxyfcgi.sourceforge.net/). FastCGI is my favorite way. Another solution is to use a dedicated host or virtual private server.
Sure, `chgrp apache the_writable_file` and `chmod g+w the_writable_file`. After that, only your secure user and the apache user will be able to write to the file. Since the apache user is typically forbidden from logging in, you only have to worry about web users writing to your secure file through the http daemon.
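A quick sketch of those permission commands on a throwaway file (the `apache` group is assumed to exist on your system, so that line is commented out; the rest only changes mode bits and runs as any user):

```shell
f=$(mktemp)            # stand-in for the_writable_file
# chgrp apache "$f"    # run as root, once the apache group exists
chmod 640 "$f"         # owner rw, group r, others nothing
chmod g+w "$f"         # now the group (e.g. apache) can write too
stat -c %a "$f"        # prints 660 on GNU coreutils
rm -f "$f"
```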
php scripts writing to non-world-writable files
[ "php", "permissions" ]
I was wondering if anybody knew of a method to configure Apache to fall back to returning a static HTML page, should it (Apache) be able to determine that PHP has died? This would provide the developer with an elegant solution to displaying an error page and not (worst case scenario) the source code of the PHP page that should have been executed. Thanks.
The PHP source code is only displayed when Apache is not configured correctly to handle PHP files, that is, when a proper handler has not been defined. On errors, what is shown is configured in php.ini, mainly the `display_errors` variable. That should be set to off, and `log_errors` to on, in a production environment. If PHP actually dies, Apache will return the appropriate HTTP status code (usually 500) with the page defined by the ErrorDocument directive. If it didn't die but got stuck in a loop, there is not much you can do as far as I know. You can specify a different page for different error codes.
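For reference, a minimal sketch of the relevant httpd.conf pieces (paths are hypothetical): make sure a PHP handler is defined so source is never served raw, and point server errors at static pages:

```
# Make sure .php files are actually handled by PHP (mod_php style)
AddType application/x-httpd-php .php

# Serve static pages when PHP dies with a server error
ErrorDocument 500 /errors/500.html
ErrorDocument 503 /errors/503.html
```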
I would assume that this typically results in a 500 error, and you can configure Apache's 500 handler to show a static page:

```
ErrorDocument 500 /500error.html
```

You can also read about error handlers on [Apache's documentation site](http://httpd.apache.org/docs/2.0/custom-error.html).
Apache Fall Back When PHP Fails
[ "php", "apache", "configuration" ]
The compiler usually chokes when an event doesn't appear beside a `+=` or a `-=`, so I'm not sure if this is possible. I want to be able to identify an event by using an Expression tree, so I can create an event watcher for a test. The syntax would look something like this:

```
using(var foo = new EventWatcher(target, x => x.MyEventToWatch)) {
    // act here
} // throws on Dispose() if MyEventToWatch hasn't fired
```

My questions are twofold:

1. Will the compiler choke? And if so, any suggestions on how to prevent this?
2. How can I parse the Expression object from the constructor in order to attach to the `MyEventToWatch` event of `target`?
**Edit:** As [Curt](https://stackoverflow.com/questions/35211/identify-an-event-via-a-linq-expression-tree#36255) has pointed out, my implementation is rather flawed in that it can only be used from within the class that declares the event :) Instead of "`x => x.MyEvent`" returning the event, it was returning the backing field, which is only accessible by the class. Since expressions cannot contain assignment statements, a modified expression like "`( x, h ) => x.MyEvent += h`" cannot be used to retrieve the event, so reflection would need to be used instead. A correct implementation would need to use reflection to retrieve the `EventInfo` for the event (which, unfortunately, will not be strongly typed). Otherwise, the only updates that need to be made are to store the reflected `EventInfo`, and use the `AddEventHandler`/`RemoveEventHandler` methods to register the listener (instead of the manual `Delegate` `Combine`/`Remove` calls and field sets). The rest of the implementation should not need to be changed. Good luck :)

---

**Note:** This is demonstration-quality code that makes several assumptions about the format of the accessor. Proper error checking, handling of static events, etc, is left as an exercise to the reader ;)

```
public sealed class EventWatcher : IDisposable {
    private readonly object target_;
    private readonly string eventName_;
    private readonly FieldInfo eventField_;
    private readonly Delegate listener_;
    private bool eventWasRaised_;

    public static EventWatcher Create<T>( T target, Expression<Func<T,Delegate>> accessor ) {
        return new EventWatcher( target, accessor );
    }

    private EventWatcher( object target, LambdaExpression accessor ) {
        this.target_ = target;

        // Retrieve event definition from expression.
        var eventAccessor = accessor.Body as MemberExpression;
        this.eventField_ = eventAccessor.Member as FieldInfo;
        this.eventName_ = this.eventField_.Name;

        // Create our event listener and add it to the declaring object's event field.
        this.listener_ = CreateEventListenerDelegate( this.eventField_.FieldType );
        var currentEventList = this.eventField_.GetValue( this.target_ ) as Delegate;
        var newEventList = Delegate.Combine( currentEventList, this.listener_ );
        this.eventField_.SetValue( this.target_, newEventList );
    }

    public void SetEventWasRaised( ) {
        this.eventWasRaised_ = true;
    }

    private Delegate CreateEventListenerDelegate( Type eventType ) {
        // Create the event listener's body, setting the 'eventWasRaised_' field.
        var setMethod = typeof( EventWatcher ).GetMethod( "SetEventWasRaised" );
        var body = Expression.Call( Expression.Constant( this ), setMethod );

        // Get the event delegate's parameters from its 'Invoke' method.
        var invokeMethod = eventType.GetMethod( "Invoke" );
        var parameters = invokeMethod.GetParameters( )
            .Select( ( p ) => Expression.Parameter( p.ParameterType, p.Name ) );

        // Create the listener.
        var listener = Expression.Lambda( eventType, body, parameters );
        return listener.Compile( );
    }

    void IDisposable.Dispose( ) {
        // Remove the event listener.
        var currentEventList = this.eventField_.GetValue( this.target_ ) as Delegate;
        var newEventList = Delegate.Remove( currentEventList, this.listener_ );
        this.eventField_.SetValue( this.target_, newEventList );

        // Ensure event was raised.
        if( !this.eventWasRaised_ )
            throw new InvalidOperationException( "Event was not raised: " + this.eventName_ );
    }
}
```

Usage is slightly different from that suggested, in order to take advantage of type inference:

```
try {
    using( EventWatcher.Create( o, x => x.MyEvent ) ) {
        //o.RaiseEvent( ); // Uncomment for test to succeed.
    }
    Console.WriteLine( "Event raised successfully" );
}
catch( InvalidOperationException ex ) {
    Console.WriteLine( ex.Message );
}
```
I too wanted to do this, and I have come up with a pretty cool way that does something like Emperor XLII's idea. It doesn't use Expression trees though; as mentioned, this can't be done, as Expression trees do not allow the use of `+=` or `-=`. We can however use a neat trick where we use a .NET Remoting proxy (or any other proxy, such as LinFu or Castle DP) to intercept a call to add/remove handler on a very short lived proxy object. The role of this proxy object is simply to have some method called on it, and to allow its method calls to be intercepted, at which point we can find out the name of the event. This sounds weird but here is the code (which by the way ONLY works if you have a `MarshalByRefObject` or an interface for the proxied object).

Assume we have the following interface and class:

```
public interface ISomeClassWithEvent {
    event EventHandler<EventArgs> Changed;
}

public class SomeClassWithEvent : ISomeClassWithEvent {
    public event EventHandler<EventArgs> Changed;

    protected virtual void OnChanged(EventArgs e) {
        if (Changed != null)
            Changed(this, e);
    }
}
```

Then we can have a very simple class that expects an `Action<T>` delegate that will get passed some instance of `T`. Here is the code:

```
public class EventWatcher<T> {
    public void WatchEvent(Action<T> eventToWatch) {
        CustomProxy<T> proxy = new CustomProxy<T>(InvocationType.Event);
        T tester = (T) proxy.GetTransparentProxy();
        eventToWatch(tester);
        Console.WriteLine(string.Format("Event to watch = {0}", proxy.Invocations.First()));
    }
}
```

The trick is to pass the proxied object to the `Action<T>` delegate provided. Where we have the following `CustomProxy<T>` code, which intercepts the call to `+=` and `-=` on the proxied object:

```
public enum InvocationType {
    Event
}

public class CustomProxy<T> : RealProxy {
    private List<string> invocations = new List<string>();
    private InvocationType invocationType;

    public CustomProxy(InvocationType invocationType) : base(typeof(T)) {
        this.invocations = new List<string>();
        this.invocationType = invocationType;
    }

    public List<string> Invocations {
        get { return invocations; }
    }

    [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)]
    [DebuggerStepThrough]
    public override IMessage Invoke(IMessage msg) {
        String methodName = (String) msg.Properties["__MethodName"];
        Type[] parameterTypes = (Type[]) msg.Properties["__MethodSignature"];
        MethodBase method = typeof(T).GetMethod(methodName, parameterTypes);

        switch (invocationType) {
            case InvocationType.Event:
                invocations.Add(ReplaceAddRemovePrefixes(method.Name));
                break;
            // You could deal with other cases here if needed
        }

        IMethodCallMessage message = msg as IMethodCallMessage;
        Object response = null;
        ReturnMessage responseMessage = new ReturnMessage(response, null, 0, null, message);
        return responseMessage;
    }

    private string ReplaceAddRemovePrefixes(string method) {
        if (method.Contains("add_"))
            return method.Replace("add_", "");
        if (method.Contains("remove_"))
            return method.Replace("remove_", "");
        return method;
    }
}
```

And then all that's left is to use this as follows:

```
class Program {
    static void Main(string[] args) {
        EventWatcher<ISomeClassWithEvent> eventWatcher = new EventWatcher<ISomeClassWithEvent>();
        eventWatcher.WatchEvent(x => x.Changed += null);
        eventWatcher.WatchEvent(x => x.Changed -= null);
        Console.ReadLine();
    }
}
```

Doing this I will see this output:

```
Event to watch = Changed
Event to watch = Changed
```
Identify an event via a Linq Expression tree
[ "c#", "linq", "expression-trees" ]
I am currently writing a simple, timer-based mini app in C# that performs an action `n` times every `k` seconds. I am trying to adopt a test-driven development style, so my goal is to unit-test all parts of the app. So, my question is: Is there a good way to unit test a timer-based class? The problem, as I see it, is that there is a big risk that the tests will take uncomfortably long to execute since they must wait so and so long for the desired actions to happen. Especially if one wants realistic data (`seconds`), instead of using the minimal time resolution allowed by the framework (`1 ms`?). I am using a mock object for the action, to register the number of times the action was called, and so that the action takes practically no time.
What I have done is to mock the timer, and also the current system time, so that my events could be triggered immediately, but as far as the code under test was concerned time elapsed was seconds.
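One way to make that concrete is to hide the timer behind an interface so the test can fire ticks synchronously. This is only a sketch; `ITimer`, `FakeTimer`, and `RepeatingAction` are hypothetical names, not part of any framework:

```
// Sketch: abstract the timer so tests never wait in real time.
public interface ITimer {
    event EventHandler Tick;
    void Start();
    void Stop();
}

// Test double: the test raises ticks on demand.
public class FakeTimer : ITimer {
    public event EventHandler Tick;
    public void Start() { }
    public void Stop() { }
    public void FireTick() => Tick?.Invoke(this, EventArgs.Empty);
}

// The class under test performs its action once per tick, up to n times.
public class RepeatingAction {
    private int remaining;

    public RepeatingAction(ITimer timer, Action action, int n) {
        this.remaining = n;
        timer.Tick += (s, e) => {
            if (remaining-- > 0) action();
            else timer.Stop();
        };
    }
}
```

A test can then construct `RepeatingAction` with a `FakeTimer` and a counting mock action, call `FireTick()` repeatedly, and assert the action ran exactly `n` times, all without any real elapsed time.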
[Len Holgate](http://www.lenholgate.com/) has a series of [20 articles on testing timer based code](http://www.lenholgate.com/archives/000306.html).
Unit testing a timer based application?
[ "c#", ".net", "unit-testing", "timer" ]
I've heard of a few ways to implement tagging; using a mapping table between TagID and ItemID (makes sense to me, but does it scale?), adding a fixed number of possible TagID columns to ItemID (seems like a bad idea), Keeping tags in a text column that's comma separated (sounds crazy but could work). I've even heard someone recommend a sparse matrix, but then how do the tag names grow gracefully? Am I missing a best practice for tags?
Three tables (one for storing all items, one for all tags, and one for the relation between the two), properly indexed, with foreign keys set, running on a proper database, should work well and scale properly.

```
Table: Item
Columns: ItemID, Title, Content

Table: Tag
Columns: TagID, Title

Table: ItemTag
Columns: ItemID, TagID
```
Normally I would agree with Yaakov Ellis, but in this special case there is another viable solution: use two tables:

```
Table: Item
Columns: ItemID, Title, Content
Indexes: ItemID

Table: Tag
Columns: ItemID, Title
Indexes: ItemID, Title
```

This has some major advantages: First, it makes development much simpler: in the three-table solution, for insert and update of `item` you have to look up the `Tag` table to see if there are already entries. Then you have to join them with new ones. This is no trivial task. Second, it makes queries simpler (and perhaps faster). There are three major database queries which you will do: output all `Tags` for one `Item`, draw a tag cloud, and select all items for one tag title.

**All Tags for one Item:**

3-Table:

```
SELECT Tag.Title
FROM Tag
JOIN ItemTag ON Tag.TagID = ItemTag.TagID
WHERE ItemTag.ItemID = :id
```

2-Table:

```
SELECT Tag.Title
FROM Tag
WHERE Tag.ItemID = :id
```

**Tag-Cloud:**

3-Table:

```
SELECT Tag.Title, count(*)
FROM Tag
JOIN ItemTag ON Tag.TagID = ItemTag.TagID
GROUP BY Tag.Title
```

2-Table:

```
SELECT Tag.Title, count(*)
FROM Tag
GROUP BY Tag.Title
```

**Items for one Tag:**

3-Table:

```
SELECT Item.*
FROM Item
JOIN ItemTag ON Item.ItemID = ItemTag.ItemID
JOIN Tag ON ItemTag.TagID = Tag.TagID
WHERE Tag.Title = :title
```

2-Table:

```
SELECT Item.*
FROM Item
JOIN Tag ON Item.ItemID = Tag.ItemID
WHERE Tag.Title = :title
```

But there are some drawbacks, too: it could take more space in the database (which could lead to more disk operations, which is slower), and it's not normalized, which could lead to inconsistencies.

The size argument is not that strong, because the very nature of tags is that they are normally pretty small, so the size increase is not a large one. One could argue that the query for the tag title is much faster in a small table which contains each tag only once, and this certainly is true. But taking into account the savings from not having to join, and the fact that you can build a good index on them, this could easily be compensated for. This of course depends heavily on the size of the database you are using.

The inconsistency argument is a little moot too. Tags are free text fields and there is no expected operation like 'rename all tags "foo" to "bar"'.

So, tl;dr: I would go for the two-table solution. (In fact I'm going to. I found this article to see if there are valid arguments against it.)
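The two-table queries above can be tried end to end with SQLite from the standard library (the sample rows are invented):

```python
import sqlite3

# Two-table tag schema from the answer, in an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Item (ItemID INTEGER PRIMARY KEY, Title TEXT, Content TEXT);
    CREATE TABLE Tag  (ItemID INTEGER, Title TEXT);
    CREATE INDEX idx_tag_item  ON Tag(ItemID);
    CREATE INDEX idx_tag_title ON Tag(Title);
""")
con.executemany("INSERT INTO Item VALUES (?, ?, ?)",
                [(1, "First post", "..."), (2, "Second post", "...")])
con.executemany("INSERT INTO Tag VALUES (?, ?)",
                [(1, "sql"), (1, "tags"), (2, "sql")])

# All tags for one item
tags = sorted(r[0] for r in con.execute(
    "SELECT Title FROM Tag WHERE ItemID = ?", (1,)))
print(tags)    # ['sql', 'tags']

# Tag cloud: each tag with its usage count
cloud = dict(con.execute("SELECT Title, COUNT(*) FROM Tag GROUP BY Title"))
print(cloud)   # {'sql': 2, 'tags': 1}

# All items carrying one tag
items = sorted(r[0] for r in con.execute(
    "SELECT Item.Title FROM Item JOIN Tag ON Item.ItemID = Tag.ItemID"
    " WHERE Tag.Title = ?", ("sql",)))
print(items)   # ['First post', 'Second post']
```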
Recommended SQL database design for tags or tagging
[ "sql", "database-design", "tags", "data-modeling", "tagging" ]
I'm new to SQL Server Reporting Services, and was wondering the best way to do the following: > * Query to get a list of popular IDs > * Subquery on each item to get properties from another table Ideally, the final report columns would look like this: ``` [ID] [property1] [property2] [SELECT COUNT(*) FROM AnotherTable WHERE ForeignID=ID] ``` There may be ways to construct a giant SQL query to do this all in one go, but I'd prefer to compartmentalize it. Is the recommended approach to write a VB function to perform the subquery for each row? Thanks for any help.
I would recommend using a [SubReport](http://msdn.microsoft.com/en-us/library/ms160348.aspx). You would place the SubReport in a table cell.
Simplest method is this:

```
select *,
    (select count(*) from tbl2 t2 where t2.tbl1ID = t1.tbl1ID) as cnt
from tbl1 t1
```

here is a workable version (using table variables):

```
declare @tbl1 table
(
    tbl1ID int,
    prop1 varchar(1),
    prop2 varchar(2)
)

declare @tbl2 table
(
    tbl2ID int,
    tbl1ID int
)

select *,
    (select count(*) from @tbl2 t2 where t2.tbl1ID = t1.tbl1ID) as cnt
from @tbl1 t1
```

Obviously this is just a raw example - standard rules apply like don't select \*, etc ...

---

**UPDATE from Aug 21 '08 at 21:27:** @AlexCuse - Yes, totally agree on the performance. I started to write it with the outer join, but then saw in his sample output the count and thought that was what he wanted, and the count would not return correctly if the tables are outer joined. Not to mention that joins can cause your records to be multiplied (1 entry from tbl1 that matches 2 entries in tbl2 = 2 returns), which can be unintended. So I guess it really boils down to the specifics on what your query needs to return.

---

**UPDATE from Aug 21 '08 at 22:07:** To answer the other parts of your question - is a VB function the way to go? No. Absolutely not. Not for something this simple. Functions are very bad on performance; each row in the return set executes the function. If you want to "compartmentalize" the different parts of the query, you have to approach it more like a stored procedure: build a temp table, do part of the query and insert the results into the table, then do any further queries you need and update the original temp table (or insert into more temp tables).
Best way to perform dynamic subquery in MS Reporting Services?
[ "sql", "sql-server", "reporting-services", "service", "reporting" ]
I found this open-source library that I want to use in my Java application. The library is written in C and was developed under Unix/Linux, and my application will run on Windows. It's a library of mostly mathematical functions, so as far as I can tell it doesn't use anything that's platform-dependent, it's just very basic C code. Also, it's not that big, less than 5,000 lines. What's the easiest way to use the library in my application? I know there's JNI, but that involves finding a compiler to compile the library under Windows, getting up-to-date with the JNI framework, writing the code, etc. Doable, but not that easy. Is there an easier way? Considering the small size of the library, I'm tempted to just translate it to Java. Are there any tools that can help with that? **EDIT** I ended up translating the part of the library that I needed to Java. It's about 10% of the library so far, though it'll probably increase with time. C and Java are pretty similar, so it only took a few hours. The main difficulty is fixing the bugs that get introduced by mistakes in the translation. Thank you everyone for your help. The proposed solutions all seemed interesting and I'll look into them when I need to link to larger libraries. For a small piece of C code, manual translation was the simplest solution.
Your best bet is probably to grab a good C book (K&R: The C Programming Language), a cup of tea, and start translating! I would be skeptical about trusting a translation program; more often than not the best translator is yourself! If you do this once, then it's done and you don't need to keep re-doing it. There might be some complications if the library is open source: you'll need to check the licence carefully about this. Another point to consider is that there is always going to be some element of risk and potential error in the translation, therefore it might be necessary to consider writing some tests to ensure that the translation is correct. Are there no Java equivalent math functions? As you yourself comment, the JNI way is possible; as for a C compiler you could probably use Bloodshed Dev-C++, but it is a lot of effort for ~5000 lines.
On the [Java GNU Scientific Library](http://sf.net/projects/jgsl) project I used [Swig](http://www.swig.org/) to generate the JNI wrapper classes around the C libraries. Great tool, and can also generate wrapper code in several languages including Python. Highly recommended.
What's the easiest way to use C source code in a Java application?
[ "java", "c", "translation" ]
We're looking for a transformation library or engine which can read any input (EDIFACT files, CSV, XML, stuff like that; that is, files (or webservice results) that contain data which must be transformed to a known business object structure). This data should be transformed to an existing business object using custom rules. XSLT is both too complex (to learn) and too simple (not enough features). Can anybody recommend a C# library or engine? I have seen Altova MapForce but would like something I can send out to dozens of people who will build / design their own transformations without having to pay dozens of Altova licenses.
If you think that XSLT is too difficult for you, you can try LINQ to XML for parsing XML files. It is integrated into the .NET framework, and you can use C# (or VB.NET 9.0, which is even better here because of its XML literals) instead of learning another language. You can integrate it with the existing application without much effort and without the paradigm mismatch between the language and the file management that occurs with XSLT. [Microsoft LINQ to XML](http://www.google.it/url?sa=t&source=web&ct=res&cd=3&url=http%3A%2F%2Fmsdn.microsoft.com%2Fen-us%2Flibrary%2Fbb387098.aspx&ei=6VrOSLmTN4qO8wThtpDsBQ&usg=AFQjCNEk8C3xS7mIRT5zr-2zfmDe9tYDqQ&sig2=uY2Nb-rBbWq9lstqUXyARQ) Sure, it's not a framework or library for parsing files, but neither is XSLT, so...
XSLT is not going to work for EDI and CSV. If you want a completely generic transformation engine, you might have to shell out some cash. I have used [Symphonia](http://www.symphonia3.com/) for dealing with EDI, and it worked, but it is not free. The thing is the problem you are describing sounds "enterprisey" (I am sure nobody uses EDI for fun), so there's no open source/free tooling for dealing with this stuff.
Is there a Transformation engine or library using .NET?
[ "", "c#", "transformation", "altova", "" ]
I am using Adobe Flex/Air here, but as far as I know this applies to all of JavaScript. I have come across this problem a few times, and there must be an easy solution out there! Suppose I have the following XML (using e4x): ``` var xml:XML = <root><example>foo</example></root> ``` I can change the contents of the example node using the following code: ``` xml.example = "bar"; ``` However, if I have this: ``` var xml:XML = <root>foo</root> ``` How do i change the contents of the root node? ``` xml = "bar"; ``` Obviously doesn't work as I'm attempting to assign a string to an XML object.
It seems you are confusing variables with the values they contain. The assignment ``` node = textInput.text; ``` changes the value the *variable* `node` points to; it doesn't change anything about the object that `node` currently points to. To do what you want to do you can use the `setChildren` method of the `XML` class: ``` node.setChildren(textInput.text) ```
Ah, thank you Theo - indeed it seems I was confused there. I think the root of the confusion came from the fact that I was able to assign ``` textInput.text = node; ``` which I now guess is just implicitly calling XML.toString() to convert XML->String. setChildren() is what I was looking for.
E4X : Assigning to root node
[ "", "javascript", "apache-flex", "air", "e4x", "" ]
If I inherit from a base class and want to pass something from the constructor of the inherited class to the constructor of the base class, how do I do that? For example, if I inherit from the Exception class I want to do something like this: ``` class MyExceptionClass : Exception { public MyExceptionClass(string message, string extraInfo) { //This is where it's all falling apart base(message); } } ``` Basically what I want is to be able to pass the string message to the base Exception class.
Modify your constructor to the following so that it calls the base class constructor properly: ``` public class MyExceptionClass : Exception { public MyExceptionClass(string message, string extrainfo) : base(message) { //other stuff here } } ``` Note that a constructor is not something that you can call anytime within a method. That's the reason you're getting errors in your call in the constructor body.
Note that you can use **static** methods within the call to the base constructor. ``` class MyExceptionClass : Exception { public MyExceptionClass(string message, string extraInfo) : base(ModifyMessage(message, extraInfo)) { } private static string ModifyMessage(string message, string extraInfo) { Trace.WriteLine("message was " + message); return message.ToLowerInvariant() + Environment.NewLine + extraInfo; } } ```
Calling the base constructor in C#
[ "", "c#", ".net", "inheritance", "constructor", "" ]
I know that just using `rand()` is predictable, if you know what you're doing, and have access to the server. I have a project that is **highly** dependent upon choosing a random number that is as unpredictable as possible. So I'm looking for suggestions, either other built-in functions or user functions, that can generate a *better* random number. I used this to do a little test: ``` $i = 0; while($i < 10000){ $rand = rand(0, 100); if(!isset($array[$rand])){ $array[$rand] = 1; } else { $array[$rand]++; } sort($array); $i++; } ``` I found the results to be evenly distributed, and there is an odd pattern to the number of times each number is generated.
Adding, multiplying, or truncating a poor random source will give you a poor random result. See [Introduction to Randomness and Random Numbers](http://random.org/randomness/) for an explanation. You're right about PHP's rand() function. See the second figure on [Statistical Analysis](http://random.org/analysis/) for a striking illustration. (The first figure is striking, but it's been drawn by Scott Adams, not plotted with rand()). One solution is to use a true random generator such as [random.org](http://random.org/integers/). Another, if you're on Linux/BSD/etc., is to use [/dev/random](http://en.wikipedia.org/wiki/Urandom). If the randomness is mission-critical, you will have to use a [hardware random generator](http://en.wikipedia.org/wiki/Hardware_random_number_generator).
[random.org](http://random.org/clients/http/) has an API you can access via HTTP. > RANDOM.ORG is a true random number service that generates randomness > via atmospheric noise.
Better Random Generating PHP
[ "", "php", "security", "random", "" ]
I've created a simple desktop application in C# 3.0 to learn some C#, wpf and .Net 3.5. My application essentially reads data from a csv file and stores it in a SQL server CE database. I use sqlmetal to generate the ORM code for the database. My first iteration of this app is ugly as hell and I'm in the process of refactoring it. Which brings me to my question. How would you architect a desktop database app in C#? What are the best practices? Do you create a Database Abstraction Layer (DAL) which uses the sqlmetal generated code? Or is the generated code enough of an abstraction? If you use DAL pattern, do you make it a singleton or a static member? Do you use the View-Model-ModelView pattern with the DAL pattern? Apologies if this seems like a long open ended question, but I have been giving this a lot of thought recently. I see a lot of examples on how to architect an enterprise n-tier app in C# but not that many on architecting standalone desktop apps.
I would start with the [Composite Application Guidance for WPF](http://codeplex.com/CompositeWPF) (*cough* PRISM *cough*) from Microsoft's P&P team. With the download comes a great reference application that is the starting point for most of my WPF development today. The [DotNetRocks crew](http://www.dotnetrocks.com/default.aspx?showNum=374) just interviewed [Glenn Block](http://blogs.msdn.com/gblock/) and [Brian Noyes](http://www.softinsight.com/bnoyes/) about this if you're interested in hearing more from them. Even better, Prism is not nearly as heavy as the CAB was, if you're familiar at all with that from the WinForms days.
The answer is "it depends" as always. A few things to think about: You may want to make this fat client app a web app (for example) at some point. If so, you should be sure to keep separation between the business layer (and below) and the presentation. The simplest way to do this is to be sure all calls to the business logic go through an interface of some kind. A more complex way is to implement a full MVC setup. Another thing you may consider is making the data access layer independent of the business logic and user interface. By this I mean that all calls from business logic into the DAL should be generic "get me this data" rather than "get me this data from SQL" or even worse "run this SQL statement". In this way, you can replace your DAL with one that accesses a different database, XML files, or even something icky like flat files. In short, separation of concerns. This allows you to grow in the future by adding a different UI, segmenting all three areas into their own tier, or changing the relevant technology.
How would you architect a desktop application in C# 3.0
[ "", "c#", "wpf", "architecture", "" ]
I've just heard the term covered index in some database discussion - what does it mean?
A *covering index* is an index that contains all of the columns you need for your query (and possibly more). For instance, this: ``` SELECT * FROM tablename WHERE criteria ``` will typically use indexes to speed up the resolution of which rows to retrieve using *criteria*, but then it will go to the full table to retrieve the rows. However, if the index contained the columns *column1*, *column2* and *column3*, then for this sql: ``` SELECT column1, column2 FROM tablename WHERE criteria ``` provided that particular index could be used to speed up the resolution of which rows to retrieve, the index already contains the values of the columns you're interested in, so it won't have to go to the table to retrieve the rows, but can produce the results directly from the index. This can also be used when you see that a typical query uses 1-2 columns to resolve which rows and then typically adds another 1-2 columns: it could be beneficial to append those extra columns (if they're the same all over) to the index, so that the query processor can get everything from the index itself. Here's an [article: Index Covering Boosts SQL Server Query Performance](http://www.devx.com/dbzone/Article/29530) on the subject.
A covering index is just an ordinary index. It's called "covering" if it can satisfy a query without needing to read the table data. For example: ``` CREATE TABLE MyTable ( ID INT IDENTITY PRIMARY KEY, Foo INT ) CREATE NONCLUSTERED INDEX index1 ON MyTable(ID, Foo) SELECT ID, Foo FROM MyTable -- All requested data are covered by index ``` This is one of the fastest methods to retrieve data from SQL Server.
What is a Covered Index?
[ "", "sql", "database", "indexing", "" ]
I'm working on a **multithreaded** C++ application that is corrupting the heap. The usual tools to locate this corruption seem to be inapplicable. Old builds (18 months old) of the source code exhibit the same behavior as the most recent release, so this has been around for a long time and just wasn't noticed; on the downside, source deltas can't be used to identify when the bug was introduced - there are *a lot* of code changes in the repository. The prompt for crashing behavior is to generate throughput in this system - socket transfer of data which is munged into an internal representation. I have a set of test data that will periodically cause the app to exception (various places, various causes - including heap alloc failing, thus: heap corruption). The behavior seems related to CPU power or memory bandwidth; the more of each the machine has, the easier it is to crash. Disabling a hyper-threading core or a dual-core core reduces the rate of (but does not eliminate) corruption. This suggests a timing-related issue. Now here's the rub: When it's run under a lightweight debug environment (say `Visual Studio 98 / AKA MSVC6`) the heap corruption is reasonably easy to reproduce - ten or fifteen minutes pass before something fails horrendously and exceptions, like an `alloc;` when running under a sophisticated debug environment (Rational Purify, `VS2008/MSVC9` or even Microsoft Application Verifier) the system becomes memory-speed bound and doesn't crash (Memory-bound: CPU is not getting above `50%`, disk light is not on, the program's going as fast it can, box consuming `1.3G` of 2G of RAM). So, **I've got a choice between being able to reproduce the problem (but not identifying the cause) or being able to identify the cause of a problem I can't reproduce.** My current best guesses as to where to next is: 1. 
Get an insanely grunty box (to replace the current dev box: 2Gb RAM in an `E6550 Core2 Duo`); this will make it possible to repro the crash causing misbehavior when running under a powerful debug environment; or 2. Rewrite operators `new` and `delete` to use `VirtualAlloc` and `VirtualProtect` to mark memory as read-only as soon as it's done with. Run under `MSVC6` and have the OS catch the bad guy who's writing to freed memory. Yes, this is a sign of desperation: who the hell rewrites `new` and `delete`?! I wonder if this is going to make it as slow as under Purify et al. And, no: Shipping with Purify instrumentation built in is not an option. A colleague just walked past and asked "Stack Overflow? Are we getting stack overflows now?!?" And now, the question: **How do I locate the heap corruptor?** --- Update: balancing `new[]` and `delete[]` seems to have gotten a long way toward solving the problem. Instead of 15mins, the app now goes about two hours before crashing. Not there yet. Any further suggestions? The heap corruption persists. Update: a release build under Visual Studio 2008 seems dramatically better; current suspicion rests on the `STL` implementation that ships with `VS98`. ---
My first choice would be a dedicated heap tool such as [pageheap.exe](https://support.microsoft.com/en-us/kb/286470). Rewriting new and delete might be useful, but that doesn't catch the allocs committed by lower-level code. If this is what you want, better to Detour the `low-level alloc API`s using Microsoft Detours. Also sanity checks such as: verify your run-time libraries match (release vs. debug, multi-threaded vs. single-threaded, dll vs. static lib), look for bad deletes (eg, delete where delete [] should have been used), make sure you're not mixing and matching your allocs. Also try selectively turning off threads and see when/if the problem goes away. What does the call stack etc look like at the time of the first exception?
I have the same problems in my work (we also use `VC6` sometimes), and there is no easy solution for it. I have only some hints: * Try automatic crash dumps on the production machine (see [Process Dumper](http://www.microsoft.com/downloads/details.aspx?FamilyID=e089ca41-6a87-40c8-bf69-28ac08570b7e&displaylang=en)). My experience says Dr. Watson is **not perfect** for dumping. * Remove all **catch(...)** from your code. They often hide serious memory exceptions. * Check [Advanced Windows Debugging](https://rads.stackoverflow.com/amzn/click/com/0321374460) - there are lots of great tips for problems like yours. I recommend this with all my heart. * If you use `STL`, try `STLPort` and checked builds. Invalid iterators are hell. Good luck. Problems like yours take us months to solve. Be ready for this...
Heap corruption under Win32; how to locate?
[ "", "c++", "windows", "multithreading", "debugging", "memory", "" ]
I've got two models: Message and Attachment. Each attachment is attached to a specific message, using a ForeignKey on the Attachment model. Both models have an auto\_now DateTimeField called updated. I'm trying to make it so that when any attachment is saved, it also sets the updated field on the associated message to now. Here's my code: ``` def save(self): super(Attachment, self).save() self.message.updated = self.updated ``` Will this work, and if you can explain it to me, why? If not, how would I accomplish this?
You would also need to save the message afterwards; then it should work.
Proper version to work is: (attention to last line `self.message.save()`) ``` class Message(models.Model): updated = models.DateTimeField(auto_now = True) ... class Attachment(models.Model): updated = models.DateTimeField(auto_now = True) message = models.ForeignKey(Message) def save(self): super(Attachment, self).save() self.message.save() ```
Updating an auto_now DateTimeField in a parent model in Django
[ "", "python", "database", "django", "orm", "" ]
I'm trying to unit test a custom ConfigurationSection I've written, and I'd like to load some arbitrary configuration XML into a [System.Configuration.Configuration](http://msdn.microsoft.com/en-us/library/system.configuration.configuration.aspx) for each test (rather than put the test configuration xml in the Tests.dll.config file. That is, I'd like to do something like this: ``` Configuration testConfig = new Configuration("<?xml version=\"1.0\"?><configuration>...</configuration>"); MyCustomConfigSection section = testConfig.GetSection("mycustomconfigsection"); Assert.That(section != null); ``` However, it looks like [ConfigurationManager](http://msdn.microsoft.com/en-us/library/system.configuration.configurationmanager.aspx) will only give you Configuration instances that are associated with an EXE file or a machine config. Is there a way to load arbitrary XML into a Configuration instance?
There is actually a way I've discovered.... You need to define a new class inheriting from your original configuration section as follows: ``` public class MyXmlCustomConfigSection : MyCustomConfigSection { public MyXmlCustomConfigSection (string configXml) { XmlTextReader reader = new XmlTextReader(new StringReader(configXml)); DeserializeSection(reader); } } ``` You can then instantiate your ConfigurationSection object as follows: ``` string configXml = "<?xml version=\"1.0\"?><configuration>...</configuration>"; MyCustomConfigSection config = new MyXmlCustomConfigSection(configXml); ``` Hope it helps someone :-)
I think what you're looking for is ConfigurationManager.[OpenMappedExeConfiguration](http://msdn.microsoft.com/en-us/library/system.configuration.configurationmanager.openmappedexeconfiguration.aspx) It allows you to open a configuration file that you specify with a file path (wrapped inside a [ExeConfigurationFileMap](http://msdn.microsoft.com/en-us/library/system.configuration.execonfigurationfilemap.aspx)) If what the other poster said is true, and you don't wish to create a whole new XML file for testing, then I'd recommend you put your Configuration edits in the Test method itself, then run your tests against the freshly changed configuration data.
Is there a way to get a System.Configuration.Configuration instance based on arbitrary xml?
[ "", "c#", ".net", "testing", "configuration", "configurationmanager", "" ]
In PHP I have opened a .php file and want to evaluate certain lines, specifically when the $table\_id and $line variables are assigned a value. Within the text file I have: ``` ... $table_id = 'crs_class'; // table name $screen = 'crs_class.detail.screen.inc'; // file identifying screen structure ... ``` amongst other lines. The if statement below never detects the occurrence of `$table_id` or `$screen` (even without the $ prepended). I can't understand why it won't work, as the strpos statement below looking for 'require' works fine. So, why isn't this if statement getting a hit? ``` while ($line=fgets($fh)) { //echo "Evaluating... $line <br>"; if ((($pos = stripos($line, '$table_id')) === true) || (($pos = stripos($line, '$screen'))===true)) { // TODO: Not evaluating tableid and screen lines correctly fix. // Set $table_id and $screen variables from task scripts eval($line); } if (($pos=stripos($line, 'require')) === true) { $controller = $line; } } ```
Use `!== false` instead of `=== true`. stripos returns the position as an integer if the needle is found, and an integer is never identical (`===`) to a boolean. Note that it can also legitimately return 0 (needle at the start of the haystack), which is falsy, so you must compare strictly with `!== false` rather than treating the result as truthy. You might also be interested in PHP's [tokenizer module](https://www.php.net/tokenizer) or the [lexer package](http://pear.php.net/package/PHP_LexerGenerator) in the PEAR repository.
Variable interpolation is only performed on "strings", not 'strings' (note the quotes). i.e. ``` <?php $foo = "bar"; print '$foo'; print "$foo"; ?> ``` prints $foobar. Change your quotes, and all should be well.
strpos function issue in PHP not finding the needle
[ "", "php", "string", "" ]
I am looking for a good JavaScript library for parsing XML data. It should be much easier to use than the built-in [XML DOM parsers](http://www.w3schools.com/Xml/xml_parser.asp) bundled with the browsers. I got spoiled a bit working with JSON and am looking forward to something on similar lines for XML.
I use [jQuery](http://jquery.com/) for this. Here is a good example: (EDIT: Note - the following blog seems to have gone away.) <http://blog.reindel.com/2007/09/24/jquery-and-xml-revisited/> There are also lots and lots of good examples in the [jQuery](http://jquery.com/) documentation: <http://www.webmonkey.com/tutorial/Easy_XML_Consumption_using_jQuery?oldid=20032> EDIT: Due to the blog for my primary example going away, I wanted to add another example that shows the basics and helps with namespace issues: <http://www.zachleat.com/web/selecting-xml-with-javascript/>
**Disclaimer:** I am the author of the open-source [Jsonix](https://github.com/highsource/jsonix) library which *may* be suitable for the task. --- A couple of years ago I was also looking for a good XML<->JSON parsing/serialization library for JavaScript. I needed to process XML documents conforming to rather complex XML Schemas. In Java, I routinely use [JAXB](https://jaxb.java.net/) for the task so I was looking for something similar: > [Is there a JavaScript API for XML binding - analog to JAXB for Java?](https://stackoverflow.com/questions/3819192/is-there-a-javascript-api-for-xml-binding-analog-to-jaxb-for-java) I failed to find such a tool back then. So I wrote [**Jsonix**](https://github.com/highsource/jsonix) which I consider to be a JAXB analog for JavaScript. You may find [Jsonix](https://github.com/highsource/jsonix) suitable, if you're interested in the following features: * XML<->JSON conversion is based on a **declarative mapping** between XML and JSON structures * This **mapping** can be **generated from an XML Schema** or written manually * **Bidirectional** - supports parsing as well as serialization (or unmarshalling/marshalling in other terms). * Supports **elements** and **attributes**, and also considers **namespaces** defined in the XML document. * Strictly typed. * Strictly structured. * Supports almost all of the **XML Schema built-in types** (including special types like `QName`). * Works in **browsers** as well as **Node.js**, also compatible with [**RequireJS**](http://requirejs.org/)/[**AMD**](https://github.com/amdjs/amdjs-api/wiki/AMD) (and with [`amdefine`](https://github.com/jrburke/amdefine) in Node.js) * Has [extensive documentation](http://confluence.highsource.org/display/JSNX/User+Guide). However, Jsonix *may be overkill* if your XML is rather simple, does not have an XML Schema, or if you're not interested in strict typing or structures. Check your requirements. **Example** Try it [in JSFiddle](http://jsfiddle.net/lexi/LP3DC/). 
You can take a [purchase order schema](http://www.w3.org/TR/xmlschema-0/#po.xsd) and generate a mapping for it using the following command: ``` java -jar node_modules/jsonix/lib/jsonix-schema-compiler-full.jar -d mappings -p PO purchaseorder.xsd ``` You'll get a `PO.js` file which describes mappings between XML and JavaScript structures. Here's a snippet from this mapping file to give you an impression: ``` var PO = { name: 'PO', typeInfos: [{ localName: 'PurchaseOrderType', propertyInfos: [{ name: 'shipTo', typeInfo: 'PO.USAddress' }, { name: 'billTo', typeInfo: 'PO.USAddress' }, { name: 'comment' }, { name: 'orderDate', typeInfo: 'Calendar', type: 'attribute' }, ...] }, { localName: 'USAddress', propertyInfos: [ ... ] }, ...], elementInfos: [{ elementName: 'purchaseOrder', typeInfo: 'PO.PurchaseOrderType' }, ... ] }; ``` Having this mapping file you can parse the [XML](https://github.com/highsource/jsonix/blob/master/fiddles/po/demo.response.xml): ``` // First we construct a Jsonix context - a factory for unmarshaller (parser) // and marshaller (serializer) var context = new Jsonix.Context([PO]); // Then we create a unmarshaller var unmarshaller = context.createUnmarshaller(); // Unmarshal an object from the XML retrieved from the URL unmarshaller.unmarshalURL('po.xml', // This callback function will be provided // with the result of the unmarshalling function (unmarshalled) { // Alice Smith console.log(unmarshalled.value.shipTo.name); // Baby Monitor console.log(unmarshalled.value.items.item[1].productName); }); ``` Or serialize your JavaScript object as XML: ``` // Create a marshaller var marshaller = context.createMarshaller(); // Marshal a JavaScript Object as XML (DOM Document) var doc = marshaller.marshalDocument({ name: { localPart: "purchaseOrder" }, value: { orderDate: { year: 1999, month: 10, day: 20 }, shipTo: { country: "US", name: "Alice Smith", street: "123 Maple Street", city: "Mill Valley", state: "CA", zip: 90952 }, billTo: { /* ... 
*/ }, comment: 'Hurry, my lawn is going wild!', items: { /* ... */ } } }); ``` You can try it [in JSFiddle](http://jsfiddle.net/lexi/LP3DC/) to see how it works in practice. --- **Additional disclaimer:** this answer is high-voted because of the [following discussion](https://meta.stackoverflow.com/questions/276942/correcting-my-own-spam-answer-is-it-fine-now) on meta. *So please be aware of the "meta-effect".* High votes here do not necessarily mean that Jsonix is good, applicable or recommended by the community. Do not be mislead by the high votes.
XML parser for JavaScript
[ "", "javascript", "xml", "json", "browser", "parsing", "" ]
As I get more and more namespaces in my solution, the list of using statements at the top of my files grows longer and longer. This is especially the case in my unit tests, where for each component that might be called I need to include the using for the interface, the IoC container, and the concrete type. With upward of 17 lines of usings in my integration test files, it's just getting downright messy. Does anyone know if there's a way to define a macro for my base using statements? Any other solutions?
Some people enjoy hiding the usings in a `#region`. Otherwise, I think you're out of luck. Unless you want to put the namespace on all your referents.
I know I shouldn't say this out loud, but, maybe reconsider your design. 17 usings in 1 file = a lot of coupling (on the namespace level).
Is there any way to get rid of the long list of usings at the top of my .cs files?
[ "", "c#", ".net", "visual-studio", "namespaces", "" ]
An instance of class A instantiates a couple of other objects, say for example from class B: ``` $foo = new B(); ``` I would like to access A's public class variables from methods within B. Unless I'm missing something, the only way to do this is to pass the current object to the instances of B: ``` $foo = new B($this); ``` Is this best practice or is there another way to do this?
That looks fine to me, I tend to use a rule of thumb of "would someone maintaining this understand it?" and that's an easily understood solution. If there's only one "A", you could consider using the registry pattern, see for example <http://www.phppatterns.com/docs/design/the_registry>
I would first check if you are not using the wrong pattern: From your application logic, should B really know about A? If B needs to know about A, a parent-child relationship seems not quite adequate. For example, A could be the child, or part of A's logic could go into a third object that is "below" B in the hierarchy (i. e. doesn't know about B). That said, I would suggest you have a method in B to register A as a data source, or create a method in A to register B as an [Observer](http://en.wikipedia.org/wiki/Observer_pattern "Wikipedia: Observer pattern") and a matching method in B that A uses to notify B of value changes.
How do I access class variables of a parent object in PHP?
[ "", "php", "oop", "" ]
I'm quoting part of an answer which I received for [another question of mine](https://stackoverflow.com/questions/83088/phpmysql-regular-recalcuation-of-benchmark-values-as-new-users-submit-their-data): > In the PHP/MySQL world I would say > stored procedures are no-go I would like to know: Is that so? Why? Why not? [edit]I mean this as a general question without a specific need in mind[/edit]
I develop and maintain a large PHP/MySQL application. Here is my experience with stored procedures. Over time our application has grown very complex. And with all the logic on the php side, some operations would query the database with over 100 short queries. MySQL is so quick that the performance was still acceptable, but not great. We made the decision in our latest version of the software to move some of the logic to stored procedures for complex operations. We did achieve a significant performance gain due to the fact that we did not have to send data back and forth between PHP and MySQL. I do agree with the other posters here that PL/SQL is not a modern language and is difficult to debug. Bottom Line: Stored Procedures are a great tool for certain situations. But I would not recommend using them unless you have a good reason. For simple applications, stored procedures are not worth the hassle.
When using stored procedures with MySQL, you will often need to use the [mysqli](https://www.php.net/manual/en/book.mysqli.php) interface in PHP and not the regular [mysql](https://www.php.net/manual/en/book.mysql.php) interface. The reason is that stored procedures will often return more than one result set. If one does, the mysql API cannot handle it and you will get errors. The mysqli interface has functions for handling these multiple result sets, such as [mysqli\_more\_results](https://www.php.net/manual/en/mysqli.more-results.php) and [mysqli\_next\_result](https://www.php.net/manual/en/mysqli.next-result.php). Keep in mind that if you return any result set at all from the stored procedure, then you need to use these APIs, as the stored procedure generates one result set for the actual execution, and then one additional result set for each result set intentionally returned from the stored procedure.
Stored procedures a no-go in the php/mysql world?
[ "", "php", "mysql", "stored-procedures", "" ]
Is there an easy way to avoid dealing with text encoding problems?
You can't really avoid dealing with the text encoding issues, but there are existing solutions in Apache Commons: * `Reader` to `InputStream`: [`ReaderInputStream`](https://commons.apache.org/proper/commons-io/javadocs/api-release/org/apache/commons/io/input/ReaderInputStream.html) * `Writer` to `OutputStream`: [`WriterOutputStream`](https://commons.apache.org/proper/commons-io/javadocs/api-release/org/apache/commons/io/output/WriterOutputStream.html) You just need to pick the encoding of your choice.
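If pulling in Commons isn't an option, here is a naive stdlib-only sketch for the Reader-to-InputStream direction (my own helper, not part of any library). Unlike the streaming Commons classes, it buffers the entire Reader contents in memory before re-encoding, so it is only suitable for small payloads:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.Reader;
import java.io.StringWriter;
import java.nio.charset.Charset;

class ReaderToStream {
    // Drains the whole Reader into memory, then re-encodes the characters
    // with the given charset and wraps the resulting bytes in a stream.
    static InputStream toInputStream(Reader reader, Charset charset) throws IOException {
        StringWriter buffer = new StringWriter();
        char[] chunk = new char[4096];
        int read;
        while ((read = reader.read(chunk)) != -1) {
            buffer.write(chunk, 0, read);
        }
        return new ByteArrayInputStream(buffer.toString().getBytes(charset));
    }
}
```

Note that the charset still has to be chosen explicitly, just as with the Commons classes; there is no way around that decision.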
If you are starting off with a String you can also do the following: ``` new ByteArrayInputStream(inputString.getBytes("UTF-8")) ```
How to convert a Reader to InputStream and a Writer to OutputStream?
[ "", "java", "stream", "" ]
Sometimes a labeled break or continue can make code a lot more readable. ``` OUTERLOOP: for ( ;/*stuff*/; ) { //...lots of code if ( isEnough() ) break OUTERLOOP; //...more code } ``` I was wondering what the common convention for the labels was. All caps? first cap?
If you have to use them, use capitals; this draws attention to them and keeps them from being mistakenly interpreted as "Class" names. Drawing attention to them has the additional benefit of catching the eye of someone who will come along, refactor your code, and remove them. ;)
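As a concrete illustration of the all-caps convention (a hypothetical nested-loop search, not code from the question):

```java
class LabelDemo {
    // Returns true if target appears anywhere in the grid.
    static boolean contains(int[][] grid, int target) {
        boolean found = false;
        SEARCH:                       // all-caps label, per the convention above
        for (int[] row : grid) {
            for (int cell : row) {
                if (cell == target) {
                    found = true;
                    break SEARCH;     // exits both loops at once
                }
            }
        }
        return found;
    }
}
```

The capitalized `SEARCH:` stands out at a glance from surrounding identifiers, which is exactly the point.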
I don't understand where this "don't use labels" rule comes from. When doing non-trivial looping logic, the test to break or continue isn't always neatly at the end of the surrounding block. ``` outer_loop: for (...) { // some code for (...) { // some code if (...) continue outer_loop; // more code } // more code } ``` Yes, cases like this do happen all the time. What are people suggesting I use instead? A boolean condition like this? ``` for (...) { // some code boolean continueOuterLoop = false; for (...) { // some code if (...) { continueOuterLoop = true; break; } // more code } if (continueOuterLoop) continue; // more code } ``` **Yuck!** Refactoring it as a method doesn't alleviate that either: ``` boolean innerLoop (...) { for (...) { // some code if (...) { return true; } // more code } return false; } for (...) { // some code if (innerLoop(...)) continue; // more code } ``` Sure it's a little prettier, but it's still passing around a superfluous boolean. And if the inner loop modified local variables, refactoring it into a method isn't always the correct solution. So why are you all against labels? Give me some solid reasons, and practical alternatives for the above case.
Java Coding standard / best practices - naming convention for break/continue labels
[ "", "java", "label", "convention", "" ]
So I came across an interesting problem today. We have a WCF web service that returns an IList. Not really a big deal until I wanted to sort it. Turns out the IList interface doesn't have a sort method built in. I ended up using the `ArrayList.Adapter(list).Sort(new MyComparer())` method to solve the problem but it just seemed a bit "ghetto" to me. I toyed with writing an extension method, also with inheriting from IList and implementing my own Sort() method as well as casting to a List but none of these seemed overly elegant. So my question is, does anyone have an elegant solution to sorting an IList
How about using LINQ To Objects to sort for you? Say you have an `IList<Car>`, and the car had an `Engine` property, I believe you could sort as follows: ``` var sortedCars = from c in list orderby c.Engine select c; ``` *Edit: You do need to be quick to get answers in here. As I presented a slightly different syntax to the other answers, I will leave my answer - however, the other answers presented are equally valid.*
You can use LINQ: ``` using System.Linq; IList<Foo> list = new List<Foo>(); IEnumerable<Foo> sortedEnum = list.OrderBy(f=>f.Bar); IList<Foo> sortedList = sortedEnum.ToList(); ```
Sorting an IList in C#
[ "c#", "generics", "sorting", "ilist" ]
What tool would you recommend to detect **Java package cyclic dependencies**, knowing that the goal is to *list explicitly the specific classes involved in the detected 'across-packages cycle'*? I know about [classycle](http://classycle.sourceforge.net/) and [JDepend](http://clarkware.com/software/JDepend.html), but they both fail to list the classes involved in a cyclic package dependency. [Metrics](http://metrics.sourceforge.net/) has an interesting graphical representation of cycles, but it is again limited to packages, and quite difficult to read sometimes. I am getting tired of getting a: > *" you have a package cycle dependency between those 3 packages* > *you have xxx classes in each* > *good luck finding the right classes and break this cycle "* Do you know any tool that takes the extra step to actually explain to you why the cycle is detected (i.e. 'list the involved classes')? --- Riiight... Time to proclaim the results: @l7010.de: Thank you for the effort. I will vote you up (when I have enough rep), especially for the 'CAP' answer... but CAP is dead in the water and no longer compatible with my Eclipse 3.4. The rest is commercial and I am looking only for freeware. @daniel6651: Thank you but, as said, freeware only (sorry for not having mentioned it in the first place). @izb as a frequent user of findbugs (using the latest 1.3.5 right now), I am one click away from accepting your answer... if you could explain to me what option to activate for findbugs to detect any cycle. That feature is only mentioned for the [0.8.7 version in passing](http://findbugs.sourceforge.net/Changes.html) (look for '*New Style detector to find circular dependencies between classes*'), and I am not able to test it. Update: It works now, and I had an old findbugs configuration file in which that option was not activated. I still like CDA though ;) THE ANSWER is... 
see [my own (second) answer below](https://stackoverflow.com/questions/62276/java-package-cycle-detection-how-to-find-the-specific-classes-involved#71610)
Findbugs can detect circular class dependencies and has an Eclipse plugin too. <http://findbugs.sourceforge.net/>
Well... after testing [DepFinder presented above](https://stackoverflow.com/questions/62276/java-package-cycle-detection-how-to-find-the-specific-classes-involved#66059), it turns out it is great for a quick detection of simple dependencies, but it does not scale well with the number of classes... So the REAL ACTUAL ANSWER is: **[CDA - Class Dependency Analyzer](http://www.dependency-analyzer.org/)** It is fast, up-to-date, easy to use and provides with graphical representation of classes and their circular dependencies. A dream come true ;) You have to create a workset in which you enter only the directory of your classes (.class) (no need to have a complete classpath) The option "Detect circular dependencies - `ALT`-`C`" works as advertise and does not take 100% of the CPU for hours to analyze my 468 classes. Note: to refresh a workspace, you need to open it again(!), in order to trigger a new scan of your classes. ![screenshot](https://i.stack.imgur.com/mS8rC.jpg)
Java package cycle detection: how do I find the specific classes involved?
[ "java", "class", "dependencies", "package" ]
I got this error today when trying to open a Visual Studio 2008 **project** in Visual Studio 2005: > The imported project "C:\Microsoft.CSharp.targets" was not found.
Open your csproj file in notepad (or notepad++) Find the line: ``` <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> ``` and change it to ``` <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" /> ```
> This is a global solution, not dependent on a particular package or bin. In my case, I removed the **Packages** folder from my root directory. > Maybe it happens because your packages are there but the compiler is not finding their references, so remove the older packages first and then add the new packages. Steps to **add new packages**: * First remove the packages folder (**it will be nearby, or one step up from your current project folder**). * Then restart the project or solution. * Now, rebuild the solution file. * The project will get new references from the NuGet package manager, and your issue will be resolved. This is not a proper solution, but **I posted it here because I faced the same issue.** In my case, **I wasn't even able to open my solution in Visual Studio and didn't get any help from other SO answers.**
The imported project "C:\Microsoft.CSharp.targets" was not found
[ "c#", "visual-studio" ]
I wonder if anyone uses commercial/free java obfuscators on his own commercial product. I know only about one project that actually had an obfuscating step in the ant build step for releases. Do you obfuscate? And if so, why do you obfuscate? Is it really a way to protect the code or is it just a better feeling for the developers/managers? **edit:** Ok, to be exact about my point: Do you obfuscate to protect your IP (your algorithms, the work you've put into your product)? I won't obfuscate for security reasons, that doesn't feel right. So I'm only talking about protecting your application's code against competitors. [@staffan](https://stackoverflow.com/users/988/staffan) has a good point: > The reason to stay away from changing code flow is that some of those changes make it impossible for the JVM to efficiently optimize the code. In effect it will actually degrade the performance of your application.
If you do obfuscate, stay away from obfuscators that modify the code by changing code flow and/or adding exception blocks and such to make it hard to disassemble. To make the code unreadable it is usually enough to just change all names of methods, fields and classes. The reason to stay away from changing code flow is that some of those changes make it impossible for the JVM to efficiently optimize the code. In effect it will actually degrade the performance of your application.
I think that the old (classical) way of obfuscation is gradually losing its relevance, because in most cases a classical obfuscator breaks the stack trace (which is not good for supporting your clients). Nowadays the main point is not to protect algorithms, but to protect sensitive data: API logins/passwords/keys, the code responsible for licensing (piracy is still here, especially in Western Europe, Russia and Asia, IMHO), advertisement account IDs, etc. Interesting fact: we have all this sensitive data in Strings. Actually Strings make up about 50-80% of the logic of our applications. It seems to me that the future of obfuscation is "String encryption tools". But currently the "String encryption" feature is available only in commercial obfuscators, such as: [Allatori](http://www.allatori.com/), [Zelix KlassMaster](http://www.zelix.com/klassmaster/index.html), [Smokescreen](http://www.leesw.com/smokescreen/), [Stringer Java Obfuscation Toolkit](https://jfxstore.com/stringer), [DashO](http://www.preemptive.com/). N.B. I'm CEO at Licel LLC. Developer of Stringer Java Obfuscator.
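To illustrate what a "string encryption tool" does, here is a toy sketch under simplifying assumptions — the single hard-coded XOR key and the `StringGuard` class name are invented, and real products use far stronger, per-string schemes:

```java
import java.util.Base64;

public class StringGuard {
    private static final byte KEY = 0x5A; // toy key; real tools derive per-string keys

    // At build time the obfuscator would replace each sensitive literal
    // with the output of encode()...
    public static String encode(String plain) {
        byte[] bytes = plain.getBytes();
        for (int i = 0; i < bytes.length; i++) bytes[i] ^= KEY;
        return Base64.getEncoder().encodeToString(bytes);
    }

    // ...and wrap every use of that literal in a call to decode().
    public static String decode(String encoded) {
        byte[] bytes = Base64.getDecoder().decode(encoded);
        for (int i = 0; i < bytes.length; i++) bytes[i] ^= KEY;
        return new String(bytes);
    }
}
```

The effect is that the plain text never appears in the class file's constant pool, even though the program recovers it at runtime.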
Do you obfuscate your commercial Java code?
[ "java", "obfuscation" ]
In PHP, how can I replicate the expand/contract feature for Tinyurls as on search.twitter.com?
If you want to find out where a tinyurl is going, use fsockopen to get a connection to tinyurl.com on port 80, and send it an HTTP request like this ``` GET /dmsfm HTTP/1.0 Host: tinyurl.com ``` The response you get back will look like ``` HTTP/1.0 301 Moved Permanently Connection: close X-Powered-By: PHP/5.2.6 Location: http://en.wikipedia.org/wiki/TinyURL Content-type: text/html Content-Length: 0 Date: Mon, 15 Sep 2008 12:29:04 GMT Server: TinyURL/1.6 ``` example code... ``` <?php $tinyurl="dmsfm"; $fp = fsockopen("tinyurl.com", 80, $errno, $errstr, 30); if (!$fp) { echo "$errstr ($errno)<br />\n"; } else { $out = "GET /$tinyurl HTTP/1.0\r\n"; $out .= "Host: tinyurl.com\r\n"; $out .= "Connection: Close\r\n\r\n"; $response=""; fwrite($fp, $out); while (!feof($fp)) { $response.=fgets($fp, 128); } fclose($fp); //now parse the Location: header out of the response } ?> ```
And here is how to *contract* an arbitrary URL using the TinyURL API. The general call pattern goes like this, it's a simple HTTP request with parameters: <http://tinyurl.com/api-create.php?url=http://insertyourstuffhere.com> This will return the corresponding TinyURL for <http://insertyourstuffhere.com>. In PHP, you can wrap this in an fsockopen() call or, for convenience, just use the file() function to retrieve it: ``` function make_tinyurl($longurl) { return(implode('', file( 'http://tinyurl.com/api-create.php?url='.urlencode($longurl)))); } // make an example call print(make_tinyurl('http://www.joelonsoftware.com/items/2008/09/15.html')); ```
PHP: How to expand/contract Tinyurls
[ "php" ]
What is the preferred way to remove spaces from a string in C++? I could loop through all the characters and build a new string, but is there a better way?
The best thing to do is to use the algorithm [`remove_if`](http://en.cppreference.com/w/cpp/algorithm/remove) and isspace: ``` remove_if(str.begin(), str.end(), isspace); ``` Now the algorithm itself can't change the container(only modify the values), so it actually shuffles the values around and returns a pointer to where the end now should be. So we have to call string::erase to actually modify the length of the container: ``` str.erase(remove_if(str.begin(), str.end(), isspace), str.end()); ``` We should also note that remove\_if will make at most one copy of the data. Here is a sample implementation: ``` template<typename T, typename P> T remove_if(T beg, T end, P pred) { T dest = beg; for (T itr = beg;itr != end; ++itr) if (!pred(*itr)) *(dest++) = *itr; return dest; } ```
``` std::string::iterator end_pos = std::remove(str.begin(), str.end(), ' '); str.erase(end_pos, str.end()); ```
Remove spaces from std::string in C++
[ "c++", "string", "stl", "whitespace" ]
``` std::vector<int> ints; // ... fill ints with random values for(std::vector<int>::iterator it = ints.begin(); it != ints.end(); ) { if(*it < 10) { *it = ints.back(); ints.pop_back(); continue; } it++; } ``` This code is not working because when `pop_back()` is called, `it` is invalidated. But I don't find any doc talking about invalidation of iterators in `std::vector::pop_back()`. Do you have some links about that?
The call to [`pop_back()`](http://en.cppreference.com/w/cpp/container/vector/pop_back) removes the last element in the vector and so the iterator to that element is invalidated. The `pop_back()` call does *not* invalidate iterators to items before the last element, only reallocation will do that. From Josuttis' "C++ Standard Library Reference": > Inserting or removing elements > invalidates references, pointers, and > iterators that refer to the following > element. If an insertion causes > reallocation, it invalidates all > references, iterators, and pointers.
Here is your answer, directly from The Holy Standard:

> 23.2.4.2 A vector satisfies all of the requirements of a container and of a reversible container (given in two tables in 23.1) and of a sequence, including most of the optional sequence requirements (23.1.1).

> 23.1.1.12 Table 68
> expression: `a.pop_back()`
> return type: `void`
> operational semantics: **`a.erase(--a.end())`**
> container: vector, list, deque

Notice that a.pop\_back is equivalent to a.erase(--a.end()). Looking at vector's specifics on erase:

> 23.2.4.3.3 - iterator erase(iterator position) - effects - **Invalidates all the iterators and references after the point of the erase**

Therefore, once you call pop\_back, any iterators to the previously final element (which now no longer exists) are invalidated.

Looking at your code, the problem is that when you remove the final element and the list becomes empty, you still increment it and walk off the end of the list.
Does pop_back() really invalidate *all* iterators on an std::vector?
[ "c++", "stl" ]
Say I have an ASMX web service, MyService. The service has a method, MyMethod. I could execute MyMethod on the server side as follows: ``` MyService service = new MyService(); service.MyMethod(); ``` I need to do similar, with service and method not known until runtime. I'm assuming that reflection is the way to go about that. Unfortunately, I'm having a hard time making it work. When I execute this code: ``` Type.GetType("MyService", true); ``` It throws this error: > Could not load type 'MyService' from assembly 'App\_Web\_\_ktsp\_r0, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null'. Any guidance would be appreciated.
I'm not sure if this would be the best way to go about it. The most obvious way to me, would be to make an HTTP Request, and call the webservice using an actual HTTP GET or POST. Using your method, I'm not entirely sure how you'd set up the data you are sending to the web service. I've added some sample code in VB.Net ``` Dim HTTPRequest As HttpWebRequest Dim HTTPResponse As HttpWebResponse Dim ResponseReader As StreamReader Dim URL AS String Dim ResponseText As String URL = "http://www.example.com/MyWebSerivce/MyMethod?arg1=A&arg2=B" HTTPRequest = HttpWebRequest.Create(URL) HTTPRequest.Method = "GET" HTTPResponse = HTTPRequest.GetResponse() ResponseReader = New StreamReader(HTTPResponse.GetResponseStream()) ResponseText = ResponseReader.ReadToEnd() ```
Try this:

```
Type t = System.Web.Compilation.BuildManager.GetType("MyServiceClass", true);
object act = Activator.CreateInstance(t);
object o = t.GetMethod("hello").Invoke(act, null);
```
Using reflection to call an ASP.NET web service
[ "c#", "asp.net", "web-services", "reflection" ]
If you've used Oracle, you've probably gotten the helpful message "ORA-00942: Table or view does not exist". Is there a legitimate technical reason the message doesn't include the name of the missing object? Arguments about this being due to security sound like they were crafted by the TSA. If I'm an attacker, I'd know what table I just attempted to exploit, and be able to interpret this unhelpful message easily. If I'm a developer working with a complex join through several layers of application code, it's often very difficult to tell. My guess is that when this error was originally implemented, someone neglected to add the object name, and now, people are afraid it will break compatibility to fix it. (Code doing silly things like parsing the error message will be confused if it changes.) Is there a developer-friendly (as opposed to recruiting your DBA) way to determine the name of the missing table? --- Although I've accepted an answer which is relevant to the topic, it doesn't really answer my question: *Why isn't the name part of the error message?* If anyone can come up with the real answer, I'll be happy to change my vote.
You can set an EVENT in your parameter file (plain text or spfile) to force Oracle to dump a detailed trace file in the user\_dump\_dest, the object name might be in there, if not the SQL should be. EVENT="942 trace name errorstack level 12" If you are using a plain text file you need to keep all your EVENT settings on consecutive lines. Not sure how that applied to spfile.
SQL\*Plus does tell you the table that doesn't exist. For example: ``` SQL> select 2 * 3 from 4 user_tables a, 5 non_existent_table b 6 where 7 a.table_name = b.table_name; non_existent_table b * ERROR at line 5: ORA-00942: table or view does not exist ``` Here it shows that the name of the missing table and the line number in the SQL statement where the error occurs. Similarly, in a one-line SQL statement you can see the asterisk highlighting the name of the unknown table: ``` SQL> select * from user_tables a, non_existent_table b where a.table_name = b.table_name; select * from user_tables a, non_existent_table b where a.table_name = b.table_name * ERROR at line 1: ORA-00942: table or view does not exist ``` In terms of your question, I guess the reason the error message doesn't include the name of the table is that the error message itself needs to be static text. The line number and location in the line of the error is clearly passed back to SQL\*Plus (somehow).
Why doesn't Oracle tell you WHICH table or view does not exist?
[ "sql", "database", "oracle", "ora-00942" ]
What's the best way to delete all rows from a table in sql but to keep n number of rows on the top?
``` DELETE FROM Table WHERE ID NOT IN (SELECT TOP 10 ID FROM Table) ``` **Edit:** Chris brings up a good performance hit, since the TOP 10 query would be run for each row. If this is a one-time thing, then it may not be as big of a deal, but if it is a common thing, then it is worth a closer look.
I would select the ID column(s) of the set of rows that you want to keep into a temp table or table variable, then delete all the rows that do not exist in the temp table. The syntax mentioned by another user: ``` DELETE FROM Table WHERE ID NOT IN (SELECT TOP 10 ID FROM Table) ``` has a potential problem: the "SELECT TOP 10" query would be executed for each row in the table, which could be a huge performance hit. You want to avoid making the same query over and over again. This syntax should work, based on what you listed as your original SQL statement: ``` create table #nuke(NukeID int) insert into #nuke(NukeID) select top 1000 id from article delete article where not exists (select 1 from #nuke where NukeID = id) drop table #nuke ```
Delete all but top n from database table in SQL
[ "sql" ]
What are the advantages/disadvantages between MS VS C++ 6.0 and MSVS C++ 2008? The main reason for asking such a question is that there are still many decent programmers that prefer using the older version instead of the newest version. Is there any reason the might prefer the older over the new?
Well, for one thing it may be because the executables built with MSVS 6 require only msvcrt.dll (C runtime) which is shipped with Windows now. The MSVS 2008 executables need msvcrt9 shipped with them (or already installed). Plus, you have a lot of OSS libraries already compiled for Windows 32 bit with the 6.0 C runtime, while for the 2008 C runtime you have to take the source and compile them yourself. (most of those libraries are actually compiled with MinGW, which too uses the 6.0 C runtime - maybe that's another reason).
Advantages of Visual Studio 2008 over Visual C++ 6.0: * Much more standards compliant C++ compiler, with better template handling * Support for x64 / mobile / XBOX targets * Improved STL implementation * Support for C++0x TR1 (smart pointers, regular expressions, etc) * Secure C runtime library * Improved code navigation * Improved debugger; possibility to run remote debug sessions * Better compiler optimizations * Many bug fixes * Faster builds on multi-core/multi-CPU systems * Improved IDE user interface, with many nice features * Improved macro support in the IDE; DTE allows access to more IDE methods and variables * Updated MFC library (in VS2008 Service Pack 1) * support for OPENMP (easy multithreading)(only in VS2008 pro.) Disadvantages of moving to Visual Studio 2008: * The IDE is a lot slower than VS6 * Intellisense still has performance issues (replacing it with VisualAssistX can help) * Side-by-side assemblies make app deployment much more problematic * The local (offline) MSDN library is extremely slow * As mentioned [here](https://stackoverflow.com/questions/61669/profiling-in-visual-studio-2008-pro), there's no profiler in the Professional version --- In the spirit of Joel's recent [blog post](http://www.joelonsoftware.com/items/2008/09/15.html), I've combined some of the other answers posted into a single answer (and made this a community-owned post, so I won't gain rep from it). I hope you don't mind. Many thanks to Laur, NeARAZ, 17 of 26, me.yahoo.com, and everyone else who answered. -- ChrisN
What are the differences between Visual C++ 6.0 and Visual C++ 2008?
[ "c++", "visual-studio", "visual-c++-6" ]
Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to the developer's judgment whether to catch them using try/catch (unlike in Java). Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
No. See [A Pragmatic Look at Exception Specifications](http://www.gotw.ca/publications/mill22.htm) for reasons why not. The only way you can "help" this is to document the exceptions your function can throw, say as a comment in the header file declaring it. This is not enforced by the compiler or anything. Use code reviews for that purpose.
You shouldn't be using an exception here. This obviously isn't an exceptional case if you need to be expecting it everywhere you use this function! A better solution would be to get the function to return an instance of something like this. In debug builds (assuming developers exercise code paths they've just written), they'll get an assert if they forget to check whether the operation succeeded or not. ``` class SearchResult { private: ResultType result_; bool succeeded_; bool successChecked_; public: SearchResult(ResultType& result, bool succeeded) : result_(result) , succeeded_(succeeded) , successChecked_(false) { } ~SearchResult() { ASSERT(successChecked_); } ResultType& Result() { return result_; } bool Succeeded() { successChecked_ = true; return succeeded_; } }; ```
Ensuring that Exceptions are always caught
[ "c++", "exception", "try-catch", "exception-safety" ]
Given that indexing is so important as your data set increases in size, can someone explain how indexing works at a database-agnostic level? For information on queries to index a field, check out [How do I index a database column](https://stackoverflow.com/questions/1156/).
**Why is it needed?** When data is stored on disk-based storage devices, it is stored as blocks of data. These blocks are accessed in their entirety, making them the atomic disk access operation. Disk blocks are structured in much the same way as linked lists; both contain a section for data, a pointer to the location of the next node (or block), and both need not be stored contiguously. Due to the fact that a number of records can only be sorted on one field, we can state that searching on a field that isn’t sorted requires a Linear Search which requires `(N+1)/2` block accesses (on average), where `N` is the number of blocks that the table spans. If that field is a non-key field (i.e. doesn’t contain unique entries) then the entire tablespace must be searched at `N` block accesses. Whereas with a sorted field, a Binary Search may be used, which has `log2 N` block accesses. Also since the data is sorted given a non-key field, the rest of the table doesn’t need to be searched for duplicate values, once a higher value is found. Thus the performance increase is substantial. **What is indexing?** Indexing is a way of sorting a number of records on multiple fields. Creating an index on a field in a table creates another data structure which holds the field value, and a pointer to the record it relates to. This index structure is then sorted, allowing Binary Searches to be performed on it. The downside to indexing is that these indices require additional space on the disk since the indices are stored together in a table using the MyISAM engine, this file can quickly reach the size limits of the underlying file system if many fields within the same table are indexed. 
**How does it work?** Firstly, let’s outline a sample database table schema; ``` Field name Data type Size on disk id (Primary key) Unsigned INT 4 bytes firstName Char(50) 50 bytes lastName Char(50) 50 bytes emailAddress Char(100) 100 bytes ``` **Note**: char was used in place of varchar to allow for an accurate size on disk value. This sample database contains five million rows and is unindexed. The performance of several queries will now be analyzed. These are a query using the *id* (a sorted key field) and one using the *firstName* (a non-key unsorted field). ***Example 1*** - *sorted vs unsorted fields* Given our sample database of `r = 5,000,000` records of a fixed size giving a record length of `R = 204` bytes and they are stored in a table using the MyISAM engine which is using the default block size `B = 1,024` bytes. The blocking factor of the table would be `bfr = (B/R) = 1024/204 = 5` records per disk block. The total number of blocks required to hold the table is `N = (r/bfr) = 5000000/5 = 1,000,000` blocks. A linear search on the id field would require an average of `N/2 = 500,000` block accesses to find a value, given that the id field is a key field. But since the id field is also sorted, a binary search can be conducted requiring an average of `log2 1000000 = 19.93 = 20` block accesses. Instantly we can see this is a drastic improvement. Now the *firstName* field is neither sorted nor a key field, so a binary search is impossible, nor are the values unique, and thus the table will require searching to the end for an exact `N = 1,000,000` block accesses. It is this situation that indexing aims to correct. Given that an index record contains only the indexed field and a pointer to the original record, it stands to reason that it will be smaller than the multi-field record that it points to. So the index itself requires fewer disk blocks than the original table, which therefore requires fewer block accesses to iterate through. 
The schema for an index on the *firstName* field is outlined below; ``` Field name Data type Size on disk firstName Char(50) 50 bytes (record pointer) Special 4 bytes ``` **Note**: Pointers in MySQL are 2, 3, 4 or 5 bytes in length depending on the size of the table. ***Example 2*** - *indexing* Given our sample database of `r = 5,000,000` records with an index record length of `R = 54` bytes and using the default block size `B = 1,024` bytes. The blocking factor of the index would be `bfr = (B/R) = 1024/54 = 18` records per disk block. The total number of blocks required to hold the index is `N = (r/bfr) = 5000000/18 = 277,778` blocks. Now a search using the *firstName* field can utilize the index to increase performance. This allows for a binary search of the index with an average of `log2 277778 = 18.08 = 19` block accesses. To find the address of the actual record, which requires a further block access to read, bringing the total to `19 + 1 = 20` block accesses, a far cry from the 1,000,000 block accesses required to find a *firstName* match in the non-indexed table. **When should it be used?** Given that creating an index requires additional disk space (277,778 blocks extra from the above example, a ~28% increase), and that too many indices can cause issues arising from the file systems size limits, careful thought must be used to select the correct fields to index. Since indices are only used to speed up the searching for a matching field within the records, it stands to reason that indexing fields used only for output would be simply a waste of disk space and processing time when doing an insert or delete operation, and thus should be avoided. Also given the nature of a binary search, the cardinality or uniqueness of the data is important. Indexing on a field with a cardinality of 2 would split the data in half, whereas a cardinality of 1,000 would return approximately 1,000 records. 
With such a low cardinality the effectiveness is reduced to that of a linear search, and the query optimizer will avoid using the index if the cardinality is less than 30% of the record number, effectively making the index a waste of space.
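The block-access arithmetic used throughout this answer is easy to re-derive programmatically; the sketch below simply replays the worked examples (5M rows, 204-byte records, 54-byte index entries, 1,024-byte blocks):

```python
import math

def table_stats(rows, record_bytes, block_bytes=1024):
    """Return (blocking factor, total blocks) for fixed-size records."""
    bfr = block_bytes // record_bytes      # records per disk block
    blocks = math.ceil(rows / bfr)         # blocks needed to hold the table
    return bfr, blocks

def linear_accesses(blocks):
    return (blocks + 1) / 2                # average, searching a key field

def binary_accesses(blocks):
    return math.ceil(math.log2(blocks))    # sorted field or index

# Unindexed table: 5,000,000 rows of 204 bytes
bfr, blocks = table_stats(5_000_000, 204)
print(bfr, blocks)                  # 5 records/block, 1,000,000 blocks
print(binary_accesses(blocks))      # 20 accesses on the sorted id field

# Index on firstName: 54-byte entries
ibfr, iblocks = table_stats(5_000_000, 54)
print(iblocks)                      # 277,778 blocks
print(binary_accesses(iblocks) + 1) # 19 index accesses + 1 record fetch = 20
```

Changing the record or block sizes shows how quickly the index advantage grows with table size.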
Classic example **"Index in Books"** Consider a "Book" of 1000 pages, divided into 10 chapters, each chapter with 100 pages. Simple, huh? Now, imagine you want to find a particular chapter that contains the word "**Alchemist**". Without an index page, you have no option other than scanning through the entire book/chapters. i.e: 1000 pages. This analogy is known as a **"Full Table Scan"** in the database world. [![enter image description here](https://i.stack.imgur.com/Mnuvr.jpg)](https://i.stack.imgur.com/Mnuvr.jpg) But with an index page, you know where to go! And what's more, to look up any particular chapter that matters, you just need to look over the index page, again and again, every time. After finding the matching index you can efficiently jump to that chapter by skipping the rest. But then, in addition to the actual 1000 pages, you will need another ~10 pages to show the indices, so totally 1010 pages. > Thus, the index is a separate section that stores values of the indexed > column + a pointer to the indexed row, in sorted order, for efficient > look-ups. Things are simple in schools, isn't it? :P
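The analogy maps directly onto a toy data structure — a dict standing in for the index page (the chapter texts below are invented):

```python
book = {
    "Chapter 1": "the hero meets an old king",
    "Chapter 2": "a journey through the desert",
    "Chapter 3": "the alchemist reveals the secret",
}

def full_scan(book, word):
    """No index: read every chapter (a full table scan)."""
    return [title for title, text in book.items() if word in text]

def build_index(book):
    """One-time cost: an extra structure mapping word -> chapter titles."""
    index = {}
    for title, text in book.items():
        for word in text.split():
            index.setdefault(word, []).append(title)
    return index

index = build_index(book)
print(full_scan(book, "alchemist"))  # reads all 3 chapters -> ['Chapter 3']
print(index.get("alchemist"))        # one lookup -> ['Chapter 3']
```

The trade-off is visible even here: `build_index` costs extra space and must be maintained on every insert, exactly like the ~10 extra index pages.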
How does database indexing work?
[ "sql", "database", "performance", "indexing", "database-indexes" ]
A reliable coder friend told me that Python's current multi-threading implementation is seriously buggy - enough to avoid using altogether. What can said about this rumor?
Python threads are good for **concurrent I/O programming**. Threads are swapped out of the CPU as soon as they block waiting for input from file, network, etc. This allows other Python threads to use the CPU while others wait. This would allow you to write a multi-threaded web server or web crawler, for example. However, Python threads are serialized by the [GIL](http://en.wikipedia.org/wiki/Global_Interpreter_Lock) when they enter interpreter core. This means that if two threads are crunching numbers, only one can run at any given moment. It also means that you can't take advantage of multi-core or multi-processor architectures. There are solutions like running multiple Python interpreters concurrently, using a C based threading library. This is not for the faint of heart and the benefits might not be worth the trouble. Let's hope for an all Python solution in a future release.
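The concurrent-I/O point can be sketched in a few lines — `time.sleep` stands in for a blocking network or file read, during which CPython releases the GIL, so the waits overlap:

```python
import threading
import time

def fetch(results, i):
    time.sleep(0.2)      # stands in for a blocking network/file read;
    results[i] = i * i   # the GIL is released while the thread is blocked

def fetch_all(n):
    results = [None] * n
    threads = [threading.Thread(target=fetch, args=(results, i))
               for i in range(n)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results, time.time() - start

results, elapsed = fetch_all(5)
print(results)           # [0, 1, 4, 9, 16]
print(elapsed < 0.9)     # True: ~0.2s total wall time, not 5 * 0.2s sequential
```

With CPU-bound work in place of the sleep, the same code would show no speedup — exactly the GIL limitation described above.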
The standard implementation of Python (generally known as CPython as it is written in C) uses OS threads, but since there is the [Global Interpreter Lock](http://en.wikipedia.org/wiki/Global_Interpreter_Lock), only one thread at a time is allowed to run Python code. But within those limitations, the threading libraries are robust and widely used. If you want to be able to use multiple CPU cores, there are a few options. One is to use multiple python interpreters concurrently, as mentioned by others. Another option is to use a different implementation of Python that does not use a GIL. The two main options are [Jython](http://en.wikipedia.org/wiki/Jython) and [IronPython](http://en.wikipedia.org/wiki/IronPython). Jython is written in Java, and is now fairly mature, though some incompatibilities remain. For example, the web framework [Django does not run perfectly yet](http://zyasoft.com/pythoneering/2008/01/django-on-jython-minding-gap.html), but is getting closer all the time. Jython is [great for thread safety](http://mail.python.org/pipermail/python-list/2001-December/116555.html), comes out [better in benchmarks](http://blogs.warwick.ac.uk/dwatkins/entry/benchmarking_parallel_python_1_2/) and has a [cheeky message for those wanting the GIL](http://cgwalters.livejournal.com/17956.html). IronPython uses the .NET framework and is written in C#. Compatibility is reaching the stage where [Django can run on IronPython](http://www.infoq.com/news/2008/03/django-and-ironpython) (at least as a demo) and there are [guides to using threads in IronPython](http://www.voidspace.org.uk/ironpython/threading.shtml).
Are Python threads buggy?
[ "", "python", "multithreading", "" ]
I'm working on a side project that would be a simple web application to maintain a list of classes and their upcoming schedules. I would really like to use Linq to SQL for this project, but unfortunately the server environment I'm developing for only has MySql available. I've dabbled briefly with Subsonic but it just doesn't get the job done. The database requirements for this application aren't that great, though, so I'm curious if using an MDF file in App\_Data would be a workable solution. Otherwise, it appears I'm going to have to hand-code sql queries which I want to avoid at all costs.
I've long since completed the project which prompted this question, but recently I've had another project come along with very minor data requirements, so I spent some more time experimenting with this. I had assumed that Sql Server Express required licensing fees to deploy, but this is not in fact the case. According to Microsoft's website, you are free to use it with certain restrictions: * Maximum database size: 4 GB * Maximum memory used: 1 GB * Maximum CPUs used: 1 (complete procs, not cores) Sql Server Compact is a bad idea for web applications because it requires a hack to make it work, and it isn't built for the concurrent access you'd need for the web. But if your application can fit within the modest limitations of Sql Server Express, it works pretty well. And since it speaks regular T-SQL like its larger siblings, you can use Linq to SQL with it. I hear that [Linq to Sql support is now in the Mono trunk](http://twitter.com/migueldeicaza/status/2368465000) for the 2.6 release, so L2S' tight-coupling to Sql Server will likely be a moot point in the near future. I will either end up porting my code to use Mono's superior Linq to Sql implementation on the db of my choice, or go [another route entirely](http://subsonicproject.com/) (SubSonic has improved by leaps and bounds since I last tried it). But for the time being, Sql Server Express is a valid choice for very small database-driven apps.
Take a look at Microsoft SQL Server Compact Edition. I believe you can work with MDF files without having to run a server. All code runs in process. I believe it has some limitations but it may work for you and I think it's free.
Would building an application using a Sql Server Database File (mdf) be a terrible idea?
[ "", "asp.net", "sql", "mysql", "sql-server", "" ]
One simple method I've used in the past is basically just creating a second table whose structure mirrors the one I want to audit, and then create an update/delete trigger on the main table. Before a record is updated/deleted, the current state is saved to the audit table via the trigger. While effective, the data in the audit table is not the most useful or simple to report off of. I'm wondering if anyone has a better method for auditing data changes? There shouldn't be too many updates of these records, but it is highly sensitive information, so it is important to the customer that all changes are audited and easily reported on.
How much writing vs. reading of this table(s) do you expect? I've used a single audit table, with columns for Table, Column, OldValue, NewValue, User, and ChangeDateTime - generic enough to work with any other changes in the DB, and while a LOT of data got written to that table, reports on that data were sparse enough that they could be run at low-use periods of the day. **Added:** If the amount of data vs. reporting is a concern, the audit table could be replicated to a read-only database server, allowing you to run reports whenever necessary without bogging down the master server from doing their work.
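The single generic audit table described above can be sketched with a trigger. SQLite is used here purely as a convenient stand-in (SQL Server trigger syntax differs: the `OLD`/`NEW` row references become the `deleted`/`inserted` pseudo-tables), and the table and column names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, salary INTEGER);
CREATE TABLE audit (
    table_name TEXT, column_name TEXT,
    old_value TEXT, new_value TEXT,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TRIGGER employees_salary_audit
AFTER UPDATE OF salary ON employees
BEGIN
    INSERT INTO audit (table_name, column_name, old_value, new_value)
    VALUES ('employees', 'salary', OLD.salary, NEW.salary);
END;
""")
conn.execute("INSERT INTO employees VALUES (1, 50000)")
conn.execute("UPDATE employees SET salary = 55000 WHERE id = 1")
row = conn.execute(
    "SELECT table_name, column_name, old_value, new_value FROM audit"
).fetchone()
print(row)
```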
We are using a two-table design for this. One table holds data about the transaction (database, table name, schema, column, the application that triggered the transaction, the host name for the login that started the transaction, date, number of affected rows and a couple more). The second table is only used to store data changes so that we can undo changes if needed and report on old/new values. Another option is to use a third party tool for this such as [ApexSQL Audit](http://www.apexsql.com/sql_tools_audit.aspx) or the Change Data Capture feature in SQL Server.
Suggestions for implementing audit tables in SQL Server?
[ "", "sql", "sql-server", "database", "audit", "" ]
I seem right now to be embroiled in a debate with another programmer on this project who thinks that views have no merits. He proposes a system that PHP looks something like this: ``` $draw = new Draw; $nav = $draw->wideHeaderBox(). $draw->left(). $draw->image(). Image::get($image,60,array('id'=>'header_image')). $draw->imageEnd(). $draw->leftEnd(). $draw->left(10). '<div id="header_text">'. self::defaultSectionText(). '</div>'. $draw->leftEnd(). ``` and so on (this is in the controller btw). Now his arguments for this actually make some sense, he claims that if there is a redesign all we need to do is change the HTML in one place and it changes everywhere automatically. For some reason however, this method still rubs me the wrong way, is there any merit to views over this method? I mean besides not having to retype HTML by hand.
HTML time-savers are useful, but they're only useful when they're intuitive and easy-to-understand. Having to instantiate a `new Draw` just doesn't sound very natural. Furthermore, `wideHeaderBox` and `left` will only have significance to someone who intimately knows the system. And what if there *is* a redesign, like your co-worker muses? What if the `wideHeaderBox` becomes very narrow? Will you change the markup (and styles, presumably) generated by the PHP method but leave a very inaccurate method name to call the code? If you guys just *have* to use HTML generation, you should use it interspersed in view files, and you should use it where it's really necessary/useful, such as something like this: ``` HTML::link("Wikipedia", "http://en.wikipedia.org"); HTML::bulleted_list(array( HTML::list_item("Dogs"), HTML::list_item("Cats"), HTML::list_item("Armadillos") )); ``` In the above example, the method names actually make sense to people who aren't familiar with your system. They'll also make more sense to you guys when you go back into a seldom-visited file and wonder what the heck you were doing.
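A rough Python transliteration of the `HTML::` helper idea above, to show how little machinery the readable version needs (the function names mirror that hypothetical API, not any real library):

```python
def list_item(text):
    # Wrap a single item; real helpers would also escape the text
    return f"<li>{text}</li>"

def bulleted_list(items):
    return "<ul>" + "".join(items) + "</ul>"

html = bulleted_list([list_item(x) for x in ["Dogs", "Cats", "Armadillos"]])
print(html)
```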
The argument he uses is the argument you need to *have* views. Both result in only changing it in one place. However, in his version, you are mixing view markup with business code. I would suggest using more of a templated design. Do all your business logic in the PHP, setup all variables that are needed by your page. Then just have your page markup reference those variables (and deal with no business logic whatsoever). Have you looked at smarty? <http://smarty.php.net>
To use views or not to use views
[ "", "php", "model-view-controller", "" ]
What is the RegEx pattern for DateTime (2008-09-01 12:35:45 ) ? I get this error: > No ending delimiter '^' found Using: ``` preg_match('(?n:^(?=\d)((?<day>31(?!(.0?[2469]|11))|30(?!.0?2)|29(?(.0?2)(?=.{3,4}(1[6-9]|[2-9]\d)(0[48]|[2468][048]|[13579][26])|(16|[2468][048]|[3579][26])00))|0?[1-9]|1\d|2[0-8])(?<sep>[/.-])(?<month>0?[1-9]|1[012])\2(?<year>(1[6-9]|[2-9]\d)\d{2})(?:(?=\x20\d)\x20|$))?(?<time>((0?[1-9]|1[012])(:[0-5]\d){0,2}(?i:\ [AP]M))|([01]\d|2[0-3])(:[0-5]\d){1,2})?$)', '2008-09-01 12:35:45'); ``` Gives this error: > Warning: preg\_match() [function.preg-match]: Compilation failed: nothing to repeat at offset 0 in E:\www\index.php on line 19
@Espo: I just have to say that regex is incredible. I'd hate to have to write the code that did something useful with the matches, such as if you wanted to actually find out what date and time the user typed. It seems like Tom's solution would be more tenable, as it is about a zillion times simpler and with the addition of some parentheses you can easily get at the values the user typed: ``` (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) ``` If you're using perl, then you can get the values out with something like this: ``` $year = $1; $month = $2; $day = $3; $hour = $4; $minute = $5; $second = $6; ``` Other languages will have a similar capability. Note that you will need to make some minor mods to the regex if you want to accept values such as single-digit months.
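The same capture-group extraction carries over to most languages; here is a Python sketch of the simpler pattern, run against the sample string from the question:

```python
import re

pattern = re.compile(r"^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})$")
m = pattern.match("2008-09-01 12:35:45")
# groups() returns the six captured fields in order
year, month, day, hour, minute, second = m.groups()
print(year, month, day, hour, minute, second)
```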
A simple version that will work for the format mentioned, but not all the others as per @Espos: ``` (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) ```
What is the regex pattern for datetime (2008-09-01 12:35:45 )?
[ "", "php", "regex", "datetime", "" ]
When a java based application starts to misbehave on a windows machine, you want to be able to kill the process in the task manager if you can't quit the application normally. Most of the time, there's more than one java based application running on my machine. Is there a better way than just randomly killing java.exe processes in hope that you'll hit the correct application eventually? **EDIT:** Thank you to all the people who pointed me to Sysinternal's Process Explorer - Exactly what I'm looking for!
Download [Sysinternal's Process Explorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx). It's a task manager much more powerful than Windows's own manager. One of its features is that you can see all the resources that each process is using (like registry keys, hard disk directories, named pipes, etc). So, browsing the resources that each java.exe process holds might help you determine which one you want to kill. I usually find out by looking for the one that's using a certain log file directory.
Run `jps -lv` which shows PIDs and command lines of all running Java processes. Determine PID of the task you want to kill. Then use command: ``` taskkill /PID <pid> ``` to kill the misbehaving process.
Knowing which java.exe process to kill on a windows machine
[ "", "java", "windows", "" ]
I recently encountered a problem where a value was null if accessed with Request.Form but fine if retrieved with Request.Params. What are the differences between these methods that could cause this?
Request.Form only includes variables posted through a form, while Request.Params includes both posted form variables and get variables specified as URL parameters.
Request.Params contains a combination of QueryString, Form, Cookies and ServerVariables (added in that order). The difference is that if you have a form variable called "key1" that is in both the QueryString and Form then Request.Params["key1"] will return the QueryString value and Request.Params.GetValues("key1") will return an array of [querystring-value, form-value]. If there are multiple form values or cookies with the same key then those values will be added to the array returned by GetValues (ie. GetValues will not return a jagged array)
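The precedence described above (QueryString first, then Form, Cookies, ServerVariables) behaves like a chained lookup; a hypothetical Python sketch of that behaviour using `ChainMap`, with made-up request data:

```python
from collections import ChainMap

# The first mapping wins on key collisions, mirroring Request.Params
# returning the QueryString value for "key1" when both sources define it
query_string = {"key1": "query-value"}
form = {"key1": "form-value", "key2": "posted"}
params = ChainMap(query_string, form)
print(params["key1"], params["key2"])
```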
When do Request.Params and Request.Form differ?
[ "", "c#", "asp.net", "request", "" ]
Hopefully, I can get answers for each database server. For an outline of how indexing works check out: [How does database indexing work?](https://stackoverflow.com/questions/1108/how-does-database-indexing-work)
The following is SQL92 standard so should be supported by the majority of RDMBS that use SQL: ``` CREATE INDEX [index name] ON [table name] ( [column name] ) ```
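The standard form above can be exercised from Python's stdlib against SQLite (used here only as a convenient stand-in for a full RDBMS; the table and index names are invented). `EXPLAIN QUERY PLAN` confirms the optimizer actually picks the new index up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (my_col1 INTEGER, my_col2 TEXT)")
conn.execute("CREATE INDEX my_idx ON my_table (my_col1)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM my_table WHERE my_col1 = 1"
).fetchall()
# The plan's detail column names my_idx instead of a full table scan
print(plan)
```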
`Sql Server 2005` gives you the ability to specify a covering index. This is an index that includes data from other columns at the leaf level, so you don't have to go back to the table to get columns that aren't included in the index keys. ``` create nonclustered index my_idx on my_table (my_col1 asc, my_col2 asc) include (my_col3); ``` This is invaluable for a query that has `my_col3` in the select list, and `my_col1` and `my_col2` in the where clause.
How do I index a database column
[ "", "sql", "database", "indexing", "" ]
I'm using namespaces in a project and Eclipse PDT, my IDE of choice, recognizes them as syntax errors. Not only it renders its convenient error checking unusable, but it also ruins Eclipse's PHP explorer. 5.3 features are coming to PDT 2.0 scheduled for release in December. Are there any alternatives for the present moment? I'm looking for 5.3 syntax highlighting and error checking at the least.
Some threads that have been addressed by the various PHP IDE developers regarding the status of 5.3 syntax support: * **PHPEclipse**: <http://www.phpeclipse.net/ticket/636> or [google](http://www.google.com.au/search?q=php+5.3+phpeclipse) * **Aptana**: <http://forums.aptana.com/viewtopic.php?t=6538> or [google](http://www.google.com.au/search?q=php+5.3+aptana) * **PDT**: <http://bugs.eclipse.org/bugs/show_bug.cgi?id=234938> or [google](http://www.google.com.au/search?q=php+pdt+5.3) * **TextMate**: <http://www.nabble.com/PHP-Namespace-Support-td19784898.html> (Namespace support) or [google](http://www.google.com.au/search?q=php+pdt+5.3)
This [blog](http://spektom.blogspot.com/2009/03/php-53-support-in-pdt-2nd-stage-is-over.html) states that PHP 5.3 support is already present in the [latest integration](http://www.eclipse.org/pdt/downloads/) of PDT 2.1.0.
Any PHP editors supporting 5.3 syntax?
[ "", "php", "ide", "" ]
I'm trying to write a stored procedure to select employees who have birthdays that are upcoming. `SELECT * FROM Employees WHERE Birthday > @Today AND Birthday < @Today + @NumDays` This will not work because the birth year is part of Birthday, so if my birthday was '09-18-1983' that will not fall between '09-18-2008' and '09-25-2008'. Is there a way to ignore the year portion of date fields and just compare month/days? This will be run every monday morning to alert managers of birthdays upcoming, so it possibly will span new years. Here is the working solution that I ended up creating, thanks Kogus. ``` SELECT * FROM Employees WHERE Cast(DATEDIFF(dd, birthdt, getDate()) / 365.25 as int) - Cast(DATEDIFF(dd, birthdt, futureDate) / 365.25 as int) <> 0 ```
*Note: I've edited this to fix what I believe was a significant bug. The currently posted version works for me.* This should work after you modify the field and table names to correspond to your database. ``` SELECT BRTHDATE AS BIRTHDAY ,FLOOR(DATEDIFF(dd,EMP.BRTHDATE,GETDATE()) / 365.25) AS AGE_NOW ,FLOOR(DATEDIFF(dd,EMP.BRTHDATE,GETDATE()+7) / 365.25) AS AGE_ONE_WEEK_FROM_NOW FROM "Database name".dbo.EMPLOYEES EMP WHERE 1 = (FLOOR(DATEDIFF(dd,EMP.BRTHDATE,GETDATE()+7) / 365.25)) - (FLOOR(DATEDIFF(dd,EMP.BRTHDATE,GETDATE()) / 365.25)) ``` Basically, it gets the # of days from their birthday to now, and divides that by 365 (to avoid rounding issues that come up when you convert directly to years). Then it gets the # of days from their birthday to a week from now, and divides that by 365 to get their age a week from now. If their birthday is within a week, then the difference between those two values will be 1. So it returns all of those records.
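The same month/day-only comparison can also be checked outside SQL. A Python sketch using an inclusive date window, which survives the New Year boundary the question worries about (a Feb 29 birthday would need extra care, and the names/dates are made up):

```python
from datetime import date, timedelta

def upcoming_birthdays(birthdays, today, num_days):
    """Return birthdays falling within the next num_days, ignoring the year."""
    window = {today + timedelta(days=n) for n in range(num_days + 1)}
    window_md = {(d.month, d.day) for d in window}
    return [b for b in birthdays if (b.month, b.day) in window_md]

people = [date(1983, 9, 18), date(1975, 1, 2), date(1990, 12, 30)]
# A window that spans New Year still catches both nearby birthdays
print(upcoming_birthdays(people, date(2008, 12, 29), 7))
```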
In case someone is still looking for a solution in **MySQL** (slightly different commands), here's the query: ``` SELECT name,birthday, FLOOR(DATEDIFF(DATE(NOW()),birthday) / 365.25) AS age_now, FLOOR(DATEDIFF(DATE_ADD(DATE(NOW()),INTERVAL 30 DAY),birthday) / 365.25) AS age_future FROM user WHERE 1 = (FLOOR(DATEDIFF(DATE_ADD(DATE(NOW()),INTERVAL 30 DAY),birthday) / 365.25)) - (FLOOR(DATEDIFF(DATE(NOW()),birthday) / 365.25)) ORDER BY MONTH(birthday),DAY(birthday) ```
SQL Select Upcoming Birthdays
[ "", "sql", "sql-server", "t-sql", "" ]
I have a decimal number (let's call it **goal**) and an array of other decimal numbers (let's call the array **elements**) and I need to find all the combinations of numbers from **elements** which sum to goal. I have a preference for a solution in C# (.Net 2.0) but may the best algorithm win irrespective. Your method signature might look something like: ``` public decimal[][] Solve(decimal goal, decimal[] elements) ```
Interesting answers. Thank you for the pointers to Wikipedia - whilst interesting - they don't actually solve the problem as stated as I was looking for exact matches - more of an accounting/book balancing problem than a traditional bin-packing / knapsack problem. I have been following the development of stack overflow with interest and wondered how useful it would be. This problem came up at work and I wondered whether stack overflow could provide a ready-made answer (or a better answer) quicker than I could write it myself. Thanks also for the comments suggesting this be tagged homework - I guess that is reasonably accurate in light of the above. For those who are interested, here is my solution which uses recursion (naturally). I also changed my mind about the method signature and went for List<List<decimal>> rather than decimal[][] as the return type: ``` public class Solver { private List<List<decimal>> mResults; public List<List<decimal>> Solve(decimal goal, decimal[] elements) { mResults = new List<List<decimal>>(); RecursiveSolve(goal, 0.0m, new List<decimal>(), new List<decimal>(elements), 0); return mResults; } private void RecursiveSolve(decimal goal, decimal currentSum, List<decimal> included, List<decimal> notIncluded, int startIndex) { for (int index = startIndex; index < notIncluded.Count; index++) { decimal nextValue = notIncluded[index]; if (currentSum + nextValue == goal) { List<decimal> newResult = new List<decimal>(included); newResult.Add(nextValue); mResults.Add(newResult); } else if (currentSum + nextValue < goal) { List<decimal> nextIncluded = new List<decimal>(included); nextIncluded.Add(nextValue); List<decimal> nextNotIncluded = new List<decimal>(notIncluded); nextNotIncluded.Remove(nextValue); RecursiveSolve(goal, currentSum + nextValue, nextIncluded, nextNotIncluded, startIndex++); } } } } ``` If you want an app to test this works, try this console app code: ``` class Program { static void Main(string[] args) { string input; decimal goal; decimal element; do { Console.WriteLine("Please enter the goal:"); input = Console.ReadLine(); } while (!decimal.TryParse(input, out goal)); Console.WriteLine("Please enter the elements (separated by spaces)"); input = Console.ReadLine(); string[] elementsText = input.Split(' '); List<decimal> elementsList = new List<decimal>(); foreach (string elementText in elementsText) { if (decimal.TryParse(elementText, out element)) { elementsList.Add(element); } } Solver solver = new Solver(); List<List<decimal>> results = solver.Solve(goal, elementsList.ToArray()); foreach(List<decimal> result in results) { foreach (decimal value in result) { Console.Write("{0}\t", value); } Console.WriteLine(); } Console.ReadLine(); } } ``` I hope this helps someone else get their answer more quickly (whether for homework or otherwise). Cheers...
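For comparison, the same depth-first recursion is compact in Python (assuming, as the C# version effectively does, that all elements are positive so the `<=` pruning is valid):

```python
def solve(goal, elements):
    """Find every combination of elements (each used at most once) summing to goal."""
    results = []

    def recurse(remaining, start, chosen):
        if remaining == 0:
            results.append(list(chosen))
            return
        for i in range(start, len(elements)):
            if elements[i] <= remaining:  # prune: assumes positive elements
                chosen.append(elements[i])
                recurse(remaining - elements[i], i + 1, chosen)
                chosen.pop()

    recurse(goal, 0, [])
    return results

print(solve(10, [1, 2, 3, 4, 5, 6, 7]))
```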
I think you've got a [bin packing problem](http://en.wikipedia.org/wiki/Bin_packing_problem) on your hands (which is NP-hard), so I think the only solution is going to be to try every possible combination until you find one that works. Edit: As pointed out in a comment, you won't *always* have to try *every* combination for *every* set of numbers you come across. However, any method you come up with has worst-case-scenario sets of numbers where you *will* have to try *every* combination -- or at least a subset of combinations that grows exponentially with the size of the set. Otherwise, it wouldn't be NP-hard.
Algorithm to find which numbers from a list of size n sum to another number
[ "", "c#", "algorithm", "math", "np-complete", "" ]
What JavaScript keywords (function names, variables, etc) are reserved?
We should be linking to the actual sources of info, rather than just the top google hit. <http://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Reserved_Words> JScript 8.0: <http://msdn.microsoft.com/en-us/library/ttyab5c8.aspx>
Here is my poem, which includes all of the reserved keywords in JavaScript, and is dedicated to those who remain honest in the moment, and not just try to score: ``` Let this long package float, Goto private class if short. While protected with debugger case, Continue volatile interface. Instanceof super synchronized throw, Extends final export throws. Try import double enum? - False, boolean, abstract function, Implements typeof transient break! Void static, default do, Switch int native new. Else, delete null public var In return for const, true, char …Finally catch byte. ```
Reserved keywords in JavaScript
[ "", "javascript", "reserved-words", "" ]
I'm using an XmlSerializer to deserialize a particular type in mscorlib.dll ``` XmlSerializer ser = new XmlSerializer( typeof( [.Net type in System] ) ); return ([.Net type in System]) ser.Deserialize( new StringReader( xmlValue ) ); ``` This throws a caught `FileNotFoundException` when the assembly is loaded: > "Could not load file or assembly > 'mscorlib.XmlSerializers, > Version=2.0.0.0, Culture=neutral, > PublicKeyToken=b77a5c561934e089' or > one of its dependencies. The system > cannot find the file specified." FusionLog: ``` === Pre-bind state information === LOG: User = ### LOG: DisplayName = mscorlib.XmlSerializers, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=x86 (Fully-specified) LOG: Appbase = file:///C:/localdir LOG: Initial PrivatePath = NULL Calling assembly : System.Xml, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089. === LOG: This bind starts in default load context. LOG: Using application configuration file: C:\localdir\bin\Debug\appname.vshost.exe.Config LOG: Using machine configuration file from c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\config\machine.config. LOG: Post-policy reference: mscorlib.XmlSerializers, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=x86 LOG: Attempting download of new URL file:///C:/localdir/bin/Debug/mscorlib.XmlSerializers.DLL. LOG: Attempting download of new URL file:///C:/localdir/bin/Debug/mscorlib.XmlSerializers/mscorlib.XmlSerializers.DLL. LOG: Attempting download of new URL file:///C:/localdir/bin/Debug/mscorlib.XmlSerializers.EXE. LOG: Attempting download of new URL file:///C:/localdir/bin/Debug/mscorlib.XmlSerializers/mscorlib.XmlSerializers.EXE. ``` As far as I know there is no mscorlib.XmlSerializers.DLL; I think the DLL name has been auto-generated by .NET looking for the serializer. You have the option of creating a myApplication.XmlSerializers.DLL when compiling to optimise serializations, so I assume this is part of the framework's checking for it. The problem is that this appears to be causing a delay in loading the application - it seems to hang for a few seconds at this point. Any ideas how to avoid this or speed it up?
I'm guessing now, but: 1. The system might be generating a serializer for the whole of mscorlib, which could be very slow. 2. You could probably avoid this by wrapping the system type in your own type and serialising that instead - then you'd get a serializer for your own assembly. 3. You might be able to build the serializer for mscorlib with sgen.exe, which was the old way of building serializer dlls before it got integrated into VS.
The delay is because, having been unable to find the custom serializer dll, the system is building the equivalent code (which is very time-consuming) on the fly. The way to avoid the delay is to have the system build the DLL, and make sure it's available to the .EXE - have you tried this?
FileNotFoundException for mscorlib.XmlSerializers.DLL, which doesn't exist
[ "", "c#", ".net", "serialization", "assemblies", "" ]
I am developing an application that controls a machine. When I receive an error from the machine, the users should be able to notice it directly; one way that is done is flashing the tray on the taskbar. When the machine clears the error the tray should stop flashing. There's one little annoyance using the `FlashWindowEx` function: when I clear the flashing of the window, it stays (in my case WinXP) orange (not flashing). [![](https://i.stack.imgur.com/GOS2r.gif)](https://i.stack.imgur.com/GOS2r.gif) ``` [Flags] public enum FlashMode { /// <summary> /// Stop flashing. The system restores the window to its original state. /// </summary> FLASHW_STOP = 0, /// <summary> /// Flash the window caption. /// </summary> FLASHW_CAPTION = 1, /// <summary> /// Flash the taskbar button. /// </summary> FLASHW_TRAY = 2, /// <summary> /// Flash both the window caption and taskbar button. /// This is equivalent to setting the FLASHW_CAPTION | FLASHW_TRAY flags. /// </summary> FLASHW_ALL = 3, /// <summary> /// Flash continuously, until the FLASHW_STOP flag is set. /// </summary> FLASHW_TIMER = 4, /// <summary> /// Flash continuously until the window comes to the foreground. /// </summary> FLASHW_TIMERNOFG = 12 } public static bool FlashWindowEx(IntPtr hWnd, FlashMode fm) { FLASHWINFO fInfo = new FLASHWINFO(); fInfo.cbSize = Convert.ToUInt32(Marshal.SizeOf(fInfo)); fInfo.hwnd = hWnd; fInfo.dwFlags = (UInt32)fm; fInfo.uCount = UInt32.MaxValue; fInfo.dwTimeout = 0; return FlashWindowEx(ref fInfo); } [StructLayout(LayoutKind.Sequential)] public struct FLASHWINFO { public UInt32 cbSize; public IntPtr hwnd; public UInt32 dwFlags; public UInt32 uCount; public UInt32 dwTimeout; } ``` In my case I use `FLASHW_TRAY` to start flashing and `FLASHW_STOP` to stop the flashing. Am I doing something wrong or is this a known bug of WinXP, and is there a fix for it?
Behaviour is the same when a window finishes flashing for as long as it's supposed to: the taskbar button stays coloured. I don't think this is a bug. If you think about it, when you use `FLASHW_STOP`, the flashing does in fact stop, but the point of the flashing is to get the user's attention. The button stays coloured because the user still may not have looked down and discovered *which* window was trying to get her attention. Keeping the button coloured keeps that information available.
Here's an error: > fInfo.uCount = UInt32.MaxValue; You should set fInfo.uCount to zero when calling with FLASHW\_STOP parameter. Otherwise when you try to call stop when taskbar button is active it will stay active. You can check a note about undefined behavior here: <http://msdn.microsoft.com/en-us/library/windows/desktop/ms679348(v=vs.85).aspx> I know that's an old post but it can help other people to solve this problem fast.
FlashWindowEx FLASHW_STOP still keeps taskbar colored
[ "", "c#", "winapi", "pinvoke", "" ]
Nokia has stopped offering its Developer's Suite, relying on other IDEs, including Eclipse. Meanwhile, Nokia changed its own development tools again and EclipseMe has also changed. This leaves most documentation irrelevant. I want to know what it takes to make a simple Hello-World. (I already found out myself, so this is a Q&A for other people to use)
Here's what's needed to make a simple hello world - 1. Get [Eclipse](http://www.eclipse.org/downloads/) IDE for Java. I used Ganymede. Set it up. 2. Get Sun's [Wireless Toolkit](http://java.sun.com/products/sjwtoolkit/download.html). I used 2.5.2. Install it. 3. Get Nokia's SDK ([found here](http://developers.nokia.com/info/sw.nokia.com/id/cc48f9a1-f5cf-447b-bdba-c4d41b3d05ce/Series_40_Platform_SDKs.html)), in my case for S40 6230i Edition, and install it choosing the option to **integrate with Sun's WTK** 4. Follow the instructions at <http://www.eclipseme.org/> to download and install Mobile Tools Java (MTJ). I used version 1.7.9. 5. When configuring devices profiles in MTJ (inside Eclipse) use the Nokia device from the WTK folder and NOT from Nokia's folder. 6. Set the WTK root to the main installation folder - for instance c:\WTK2.5.2; Note that the WTK installer creates other folders apparently for backward compatibility. 7. Get [Antenna](http://antenna.sourceforge.net/) and set its location in MTJ's property page (in Eclipse). [Here's an HelloWorld sample to test the configuration.](http://wiki.forum.nokia.com/index.php/Hello_World_in_Java_ME) Note: It worked for me on WindowsXP. Also note: This should work for S60 as well. Just replace the S40 SDK in phase 3 with S60's.
Unless you need to do something Nokia-specific, I suggest avoiding the Nokia device definitions altogether. Develop for a generic device, then download your application to real, physical devices for final testing. The steps I suggest: 1. Download and install Sun's Wireless Toolkit. 2. Install EclipseME, using the method ["installing via a downloaded archive"](http://eclipseme.org/docs/installEclipseME.html#step2c). 3. [Configure EclipseME](http://eclipseme.org/docs/configuring.html). Choose a generic device, such as the "DefaultColorPhone" to develop on. 4. Create a new project "J2ME Midlet Suite" 5. Right-click on the project, and create a new Midlet "HelloWorld" 6. Enter the code, for example: ``` public HelloWorld() { super(); myForm = new Form("Hello World!"); myForm.append( new StringItem(null, "Hello, world!")); myForm.addCommand(new Command("Exit", Command.EXIT, 0)); myForm.setCommandListener(this); } protected void startApp() throws MIDletStateChangeException { Display.getDisplay(this).setCurrent(myForm); } protected void pauseApp() {} protected void destroyApp(boolean arg0) throws MIDletStateChangeException {} public void commandAction(Command arg0, Displayable arg1) { notifyDestroyed(); } ```
How to create J2ME midlets for Nokia using Eclipse
[ "", "java", "eclipse", "java-me", "nokia", "java-wireless-toolkit", "" ]
I need to write a program used internally where different users will have different abilities within the program. Rather than making users have a new username and password, how do I tie into an existing domain server's login system? Assume .NET (C#, VB, ASP, etc) -Adam
For WinForms, use System.Threading.Thread.CurrentPrincipal with the IsInRole() method to check which groups they are a member of. You do need to set the principal policy of the AppDomain to WindowsPrincipal first. Use this to get the current user name: ``` private string getWindowsUsername() { AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal); return Thread.CurrentPrincipal.Identity.Name; } ``` And then something like this to check a role: ``` if (Thread.CurrentPrincipal.IsInRole("Domain Users") == true) {} ``` In ASP.NET, the thread will belong to IIS, so instead you should 1. Set the virtual folder or website to require authentication 2. Get the user name supplied by the browser with Request.ServerVariables("LOGON\_USER") 3. Use the [DirectorySearcher](http://msdn.microsoft.com/en-us/library/system.directoryservices.directorysearcher.aspx) class to find the users groups
I would use LDAP and the DirectorySearcher Class: <http://msdn.microsoft.com/en-us/library/system.directoryservices.directorysearcher.aspx>
How to tie into a domain server's login for program access rights
[ "", "c#", "" ]
I need to remotely install a Windows service on a number of computers, so I use CreateService() and other service functions from the WinAPI. I know the admin password and user name for the machines that I need access to. In order to gain access to a remote machine I impersonate the calling process with the help of LogonUser like this: ``` //all variables are initialized correctly int status = 0; status = LogonUser(lpwUsername, lpwDomain, lpwPassword, LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_DEFAULT, &hToken); if (status == 0) { //here comes an error } status = ImpersonateLoggedOnUser(hToken); if (status == 0) { //once again an error } //ok, now we are impersonated, do all service work there ``` So, I gain access to machines in the domain, but some of the computers are outside the domain. On machines that are outside the domain this code doesn't work. Is there any way to access the service manager on a machine outside the domain?
You can do it; the account needs to exist on the remote machine, and you need to use the machine name for the domain name in the LogonUser call.
Rather than rolling your own, why not just use the SC built-in command?
Remote installing of windows service
[ "", "c++", "windows", "windows-services", "" ]
I've got an Active Directory synchronization tool (.NET 2.0 / C#) written as a Windows Service that I've been working on for a while and have recently been tasked with adding the ability to drive events based on changes in group membership. The basic scenario is that users are synchronized with a security database and, when group membership changes, the users need to have their access rights changed (ie. if I am now a member of "IT Staff" then I should automatically receive access to the server room, if I am removed from that group then I should automatically lose access to the server room). The problem is that when doing a DirectorySynchronization against groups you receive back the group that has had a member added/removed, and from there when you grab the members list you get back the list of all members in that group currently not just the members that have been added or removed. This leads me to quite an efficiency problem - that being that in order to know if a user has been added or removed I will have to keep locally a list of each group and all members and compare that against the current list to see who has been added (not in local list), and who has been deleted (in local list, not in current members list). I'm debating just storing the group membership details in a DataSet in memory and writing to disk each time I've processed new membership changes. That way if the service stops/crashes or the machine is rebooted I can still get to the current state of the Active Directory within the security database by comparing the last information on disk to that from the current group membership list. However, this seems terrible inefficient - running through every member in the group to compare against what is in the dataset and then writing out changes to disk each time there are changes to the list. Has anyone dealt with this scenario before? Is there some way that I haven't found to retrieve only a delta of group members? 
What would you do in this situation to ensure that you never miss any changes while taking the smallest performance hit possible? **Edit:** The AD might contain 500 users, or it might contain 200,000 users; it depends on the customer, and on top of that, how many groups the average user is a member of.
I'd say it depends on how many Active Directory objects you need to keep track of. If it's a small number (fewer than 1000 users) you can probably serialize your state data to disk with little noticeable performance hit. If you're dealing with a very large number of objects it might be more efficient to create a simple persistence schema in something like SQL Express and use that.
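Whichever store the snapshot lives in, the comparison itself is just a set difference. A minimal Python sketch of the snapshot-diff idea described above (the group and user names are purely illustrative, and in real code the previous snapshot would be loaded from disk or a database rather than written as a literal):

```python
def membership_delta(previous, current):
    """Return (added, removed) members given two membership snapshots."""
    prev, curr = set(previous), set(current)
    return curr - prev, prev - curr

# Hypothetical snapshot persisted by the last run vs. what AD reports now.
last_run = {"IT Staff": {"alice", "bob"}}
now = {"IT Staff": {"bob", "carol"}}

added, removed = membership_delta(last_run["IT Staff"], now["IT Staff"])
# added -> carol gains server-room access; removed -> alice loses it
```

The diff is O(n) in the group size, so the dominant cost stays in reading AD and persisting the snapshot, not in the comparison.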
You can set up auditing for the success of account modifications in the Group Policy editor. You may then monitor the Security log for entries and handle log entries on account modifications. E.g. ``` EventLog myLog = new EventLog("Security"); // set event handler myLog.EntryWritten += new EntryWrittenEventHandler(OnEntryWritten); myLog.EnableRaisingEvents = true; ``` Make sure that you have privileges to access the Security event log: <http://support.microsoft.com/kb/323076>
Monitoring group membership in Active Directory more efficiently (C# .NET)
[ "", "c#", ".net", "active-directory", "" ]
OK, so I don't want to start a holy-war here, but we're in the process of trying to consolidate the way we handle our application configuration files and we're struggling to make a decision on the best approach to take. At the moment, every application we distribute is using its own ad-hoc configuration files, whether it's property files (ini style), XML or JSON (internal use only at the moment!). Most of our code is Java at the moment, so we've been looking at [Apache Commons Config](http://commons.apache.org/configuration/), but we've found it to be quite verbose. We've also looked at [XMLBeans](http://xmlbeans.apache.org/), but it seems like a lot of faffing around. I also feel as though I'm being pushed towards XML as a format, but my clients and colleagues are apprehensive about trying something else. I can understand it from the client's perspective: everybody's heard of XML. But at the end of the day, shouldn't we be using the right tool for the job? What formats and libraries are people using in production systems these days? Is anyone else trying to avoid the [angle bracket tax](http://www.codinghorror.com/blog/archives/001114.html)? ***Edit:*** it really needs to be a cross-platform solution: Linux, Windows, Solaris, etc., and the choice of library used to interface with configuration files is just as important as the choice of format.
XML XML XML XML. We're talking *config files* here. There is no "angle bracket tax" if you're not serializing objects in a performance-intensive situation. Config files must be human-readable and human-understandable, in addition to machine-readable. XML is a good compromise between the two. If your shop has people that are afraid of that new-fangled XML technology, I feel bad for you.
YAML, for the simple reason that it makes for very readable configuration files compared to XML. XML: ``` <user id="babooey" on="cpu1"> <firstname>Bob</firstname> <lastname>Abooey</lastname> <department>adv</department> <cell>555-1212</cell> <address password="xxxx">ahunter@example1.com</address> <address password="xxxx">babooey@example2.com</address> </user> ``` YAML: ``` babooey: computer : cpu1 firstname: Bob lastname: Abooey cell: 555-1212 addresses: - address: babooey@example1.com password: xxxx - address: babooey@example2.com password: xxxx ``` The examples were taken from this page: <http://www.kuro5hin.org/story/2004/10/29/14225/062>
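Whichever format wins, it helps that consuming it needs no extra dependencies. A small sketch (in Python, purely for brevity; the original question is Java) reading the XML record above with nothing but the standard library:

```python
import xml.etree.ElementTree as ET

# The record from the XML example above, inlined here for the sketch.
xml_config = """
<user id="babooey" on="cpu1">
    <firstname>Bob</firstname>
    <lastname>Abooey</lastname>
    <department>adv</department>
    <cell>555-1212</cell>
    <address password="xxxx">ahunter@example1.com</address>
    <address password="xxxx">babooey@example2.com</address>
</user>
"""

user = ET.fromstring(xml_config)
firstname = user.findtext("firstname")                 # child element text
addresses = [a.text for a in user.findall("address")]  # repeated elements
```

Java's built-in DOM/SAX parsers offer the same zero-dependency story, which is part of why XML keeps winning these debates in practice.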
Application configuration files
[ "", "java", "xml", "json", "cross-platform", "configuration-files", "" ]
Why are pointers such a leading factor of confusion for many new, and even old, college-level students in C or C++? Are there any tools or thought processes that helped you understand how pointers work at the variable, function, and beyond level? What are some good practice things that can be done to bring somebody to the level of, "Ah-hah, I got it," without getting them bogged down in the overall concept? Basically, drill-like scenarios.
Pointers are a concept that many find confusing at first, in particular when it comes to copying pointer values around and still referencing the same memory block. I've found that the best analogy is to consider the pointer as a piece of paper with a house address on it, and the memory block it references as the actual house. All sorts of operations can thus be easily explained. I've added some Delphi code down below, and some comments where appropriate. I chose Delphi since my other main programming language, C#, does not exhibit things like memory leaks in the same way. If you only wish to learn the high-level concept of pointers, then you should ignore the parts labelled "Memory layout" in the explanation below. They are intended to give examples of what memory could look like after operations, but they are more low-level in nature. However, in order to accurately explain how buffer overruns really work, it was important that I added these diagrams. *Disclaimer: For all intents and purposes, this explanation and the example memory layouts are vastly simplified. There's more overhead and many more details you would need to know if you need to deal with memory on a low-level basis. However, for the purpose of explaining memory and pointers, it is accurate enough.* --- Let's assume the THouse class used below looks like this: ``` type THouse = class private FName : array[0..9] of Char; public constructor Create(name: PChar); end; ``` When you initialize the house object, the name given to the constructor is copied into the private field FName. There is a reason it is defined as a fixed-size array. In memory, there will be some overhead associated with the house allocation; I'll illustrate it below like this: ``` ---[ttttNNNNNNNNNN]--- ^ ^ | | | +- the FName array | +- overhead ``` The "tttt" area is overhead; there will typically be more of this for various types of runtimes and languages, like 8 or 12 bytes. 
It is imperative that whatever values are stored in this area never gets changed by anything other than the memory allocator or the core system routines, or you risk crashing the program. --- **Allocate memory** Get an entrepreneur to build your house, and give you the address to the house. In contrast to the real world, memory allocation cannot be told where to allocate, but will find a suitable spot with enough room, and report back the address to the allocated memory. In other words, the entrepreneur will choose the spot. ``` THouse.Create('My house'); ``` Memory layout: ``` ---[ttttNNNNNNNNNN]--- 1234My house ``` --- **Keep a variable with the address** Write the address to your new house down on a piece of paper. This paper will serve as your reference to your house. Without this piece of paper, you're lost, and cannot find the house, unless you're already in it. ``` var h: THouse; begin h := THouse.Create('My house'); ... ``` Memory layout: ``` h v ---[ttttNNNNNNNNNN]--- 1234My house ``` --- **Copy pointer value** Just write the address on a new piece of paper. You now have two pieces of paper that will get you to the same house, not two separate houses. Any attempts to follow the address from one paper and rearrange the furniture at that house will make it seem that *the other house* has been modified in the same manner, unless you can explicitly detect that it's actually just one house. *Note* This is usually the concept that I have the most problem explaining to people, two pointers does not mean two objects or memory blocks. ``` var h1, h2: THouse; begin h1 := THouse.Create('My house'); h2 := h1; // copies the address, not the house ... ``` ``` h1 v ---[ttttNNNNNNNNNN]--- 1234My house ^ h2 ``` --- **Freeing the memory** Demolish the house. You can then later on reuse the paper for a new address if you so wish, or clear it to forget the address to the house that no longer exists. ``` var h: THouse; begin h := THouse.Create('My house'); ... 
h.Free; h := nil; ``` Here I first construct the house, and get hold of its address. Then I do something to the house (use it, the ... code, left as an exercise for the reader), and then I free it. Lastly I clear the address from my variable. Memory layout: ``` h <--+ v +- before free ---[ttttNNNNNNNNNN]--- | 1234My house <--+ h (now points nowhere) <--+ +- after free ---------------------- | (note, memory might still xx34My house <--+ contain some data) ``` --- **Dangling pointers** You tell your entrepreneur to destroy the house, but you forget to erase the address from your piece of paper. When later on you look at the piece of paper, you've forgotten that the house is no longer there, and goes to visit it, with failed results (see also the part about an invalid reference below). ``` var h: THouse; begin h := THouse.Create('My house'); ... h.Free; ... // forgot to clear h here h.OpenFrontDoor; // will most likely fail ``` Using `h` after the call to `.Free` *might* work, but that is just pure luck. Most likely it will fail, at a customers place, in the middle of a critical operation. ``` h <--+ v +- before free ---[ttttNNNNNNNNNN]--- | 1234My house <--+ h <--+ v +- after free ---------------------- | xx34My house <--+ ``` As you can see, h still points to the remnants of the data in memory, but since it might not be complete, using it as before might fail. --- **Memory leak** You lose the piece of paper and cannot find the house. The house is still standing somewhere though, and when you later on want to construct a new house, you cannot reuse that spot. ``` var h: THouse; begin h := THouse.Create('My house'); h := THouse.Create('My house'); // uh-oh, what happened to our first house? ... h.Free; h := nil; ``` Here we overwrote the contents of the `h` variable with the address of a new house, but the old one is still standing... somewhere. After this code, there is no way to reach that house, and it will be left standing. 
In other words, the allocated memory will stay allocated until the application closes, at which point the operating system will tear it down. Memory layout after first allocation: ``` h v ---[ttttNNNNNNNNNN]--- 1234My house ``` Memory layout after second allocation: ``` h v ---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN] 1234My house 5678My house ``` A more common way to get this method is just to forget to free something, instead of overwriting it as above. In Delphi terms, this will occur with the following method: ``` procedure OpenTheFrontDoorOfANewHouse; var h: THouse; begin h := THouse.Create('My house'); h.OpenFrontDoor; // uh-oh, no .Free here, where does the address go? end; ``` After this method has executed, there's no place in our variables that the address to the house exists, but the house is still out there. Memory layout: ``` h <--+ v +- before losing pointer ---[ttttNNNNNNNNNN]--- | 1234My house <--+ h (now points nowhere) <--+ +- after losing pointer ---[ttttNNNNNNNNNN]--- | 1234My house <--+ ``` As you can see, the old data is left intact in memory, and will not be reused by the memory allocator. The allocator keeps track of which areas of memory has been used, and will not reuse them unless you free it. --- **Freeing the memory but keeping a (now invalid) reference** Demolish the house, erase one of the pieces of paper but you also have another piece of paper with the old address on it, when you go to the address, you won't find a house, but you might find something that resembles the ruins of one. Perhaps you will even find a house, but it is not the house you were originally given the address to, and thus any attempts to use it as though it belongs to you might fail horribly. Sometimes you might even find that a neighbouring address has a rather big house set up on it that occupies three address (Main Street 1-3), and your address goes to the middle of the house. 
Any attempts to treat that part of the large 3-address house as a single small house might also fail horribly. ``` var h1, h2: THouse; begin h1 := THouse.Create('My house'); h2 := h1; // copies the address, not the house ... h1.Free; h1 := nil; h2.OpenFrontDoor; // uh-oh, what happened to our house? ``` Here the house was torn down, through the reference in `h1`, and while `h1` was cleared as well, `h2` still has the old, out-of-date, address. Access to the house that is no longer standing might or might not work. This is a variation of the dangling pointer above. See its memory layout. --- **Buffer overrun** You move more stuff into the house than you can possibly fit, spilling into the neighbours house or yard. When the owner of that neighbouring house later on comes home, he'll find all sorts of things he'll consider his own. This is the reason I chose a fixed-size array. To set the stage, assume that the second house we allocate will, for some reason, be placed before the first one in memory. In other words, the second house will have a lower address than the first one. Also, they're allocated right next to each other. Thus, this code: ``` var h1, h2: THouse; begin h1 := THouse.Create('My house'); h2 := THouse.Create('My other house somewhere'); ^-----------------------^ longer than 10 characters 0123456789 <-- 10 characters ``` Memory layout after first allocation: ``` h1 v -----------------------[ttttNNNNNNNNNN] 5678My house ``` Memory layout after second allocation: ``` h2 h1 v v ---[ttttNNNNNNNNNN]----[ttttNNNNNNNNNN] 1234My other house somewhereouse ^---+--^ | +- overwritten ``` The part that will most often cause crash is when you overwrite important parts of the data you stored that really should not be randomly changed. 
For instance it might not be a problem that parts of the name of the h1-house was changed, in terms of crashing the program, but overwriting the overhead of the object will most likely crash when you try to use the broken object, as will overwriting links that is stored to other objects in the object. --- **Linked lists** When you follow an address on a piece of paper, you get to a house, and at that house there is another piece of paper with a new address on it, for the next house in the chain, and so on. ``` var h1, h2: THouse; begin h1 := THouse.Create('Home'); h2 := THouse.Create('Cabin'); h1.NextHouse := h2; ``` Here we create a link from our home house to our cabin. We can follow the chain until a house has no `NextHouse` reference, which means it's the last one. To visit all our houses, we could use the following code: ``` var h1, h2: THouse; h: THouse; begin h1 := THouse.Create('Home'); h2 := THouse.Create('Cabin'); h1.NextHouse := h2; ... h := h1; while h <> nil do begin h.LockAllDoors; h.CloseAllWindows; h := h.NextHouse; end; ``` Memory layout (added NextHouse as a link in the object, noted with the four LLLL's in the below diagram): ``` h1 h2 v v ---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL] 1234Home + 5678Cabin + | ^ | +--------+ * (no link) ``` --- **In basic terms, what is a memory address?** A memory address is in basic terms just a number. If you think of memory as a big array of bytes, the very first byte has the address 0, the next one the address 1 and so on upwards. This is simplified, but good enough. 
So this memory layout: ``` h1 h2 v v ---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN] 1234My house 5678My house ``` Might have these two addresses (the leftmost - is address 0): * h1 = 4 * h2 = 23 Which means that our linked list above might actually look like this: ``` h1 (=4) h2 (=28) v v ---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL] 1234Home 0028 5678Cabin 0000 | ^ | +--------+ * (no link) ``` It is typical to store an address that "points nowhere" as a zero-address. --- **In basic terms, what is a pointer?** A pointer is just a variable holding a memory address. You can typically ask the programming language to give you its number, but most programming languages and runtimes try to hide the fact that there is a number beneath, just because the number itself does not really hold any meaning to you. It is best to think of a pointer as a black box, i.e. you don't really know or care about how it is actually implemented, just as long as it works.
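The "two pieces of paper, one house" idea is not Delphi-specific. A small Python sketch of the same behaviour, where names play the role of the address slips and a dict plays the house (this illustrates reference semantics only; Python has no dangling pointers or manual freeing):

```python
# h1 and h2 are two "pieces of paper" holding the address of one "house".
h1 = {"name": "My house"}
h2 = h1                      # copies the address, not the house

h2["name"] = "Renovated"     # rearrange furniture via the second paper...
same_house = h1["name"]      # ...and the change is visible through the first

# A real copy builds a second house at a new address.
h3 = dict(h1)
h3["name"] = "Other house"   # does not affect h1/h2
```

`h1 is h2` is true (one house, two papers), while `h1 is h3` is false (a second house), which is exactly the distinction the analogy above is driving at.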
In my first Comp Sci class, we did the following exercise. Granted, this was a lecture hall with roughly 200 students in it... Professor writes on the board: `int john;` John stands up Professor writes: `int *sally = &john;` Sally stands up, points at john Professor: `int *bill = sally;` Bill stands up, points at John Professor: `int sam;` Sam stands up Professor: `bill = &sam;` Bill now points to Sam. I think you get the idea. I think we spent about an hour doing this, until we went over the basics of pointer assignment.
What are the barriers to understanding pointers and what can be done to overcome them?
[ "", "c++", "c", "pointers", "" ]
I was looking for a tree or graph data structure in C#, but I guess there isn't one provided. [An Extensive Examination of Data Structures Using C# 2.0](http://msdn.microsoft.com/en-us/library/ms379574.aspx) explains a bit about why. Is there a convenient library which is commonly used to provide this functionality? Perhaps through a [strategy pattern](https://en.wikipedia.org/wiki/Strategy_pattern) to solve the issues presented in the article. I feel a bit silly implementing my own tree, just as I would implementing my own ArrayList. I just want a generic tree which can be unbalanced. Think of a directory tree. [C5](https://itu.dk/research/c5/) looks nifty, but their tree structures seem to be implemented as balanced red-black trees, better suited to search than to representing a hierarchy of nodes.
My best advice would be that there is no standard tree data structure because there are so many ways you could implement it that it would be impossible to cover all bases with one solution. The more specific a solution, the less likely it is applicable to any given problem. I even get annoyed with LinkedList - what if I want a circular linked list? The basic structure you'll need to implement will be a collection of nodes, and here are some options to get you started. Let's assume that the class Node is the base class of the entire solution. If you need to only navigate down the tree, then a Node class needs a List of children. If you need to navigate up the tree, then the Node class needs a link to its parent node. Build an AddChild method that takes care of all the minutiae of these two points and any other business logic that must be implemented (child limits, sorting the children, etc.)
``` delegate void TreeVisitor<T>(T nodeData); class NTree<T> { private T data; private LinkedList<NTree<T>> children; public NTree(T data) { this.data = data; children = new LinkedList<NTree<T>>(); } public void AddChild(T data) { children.AddFirst(new NTree<T>(data)); } public NTree<T> GetChild(int i) { foreach (NTree<T> n in children) if (--i == 0) return n; return null; } public void Traverse(NTree<T> node, TreeVisitor<T> visitor) { visitor(node.data); foreach (NTree<T> kid in node.children) Traverse(kid, visitor); } } ``` Simple recursive implementation... < 40 lines of code... You just need to keep a reference to the root of the tree outside of the class, or wrap it in another class, maybe rename to TreeNode??
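For comparison, the same structure is only a few lines in Python as well. A hedged sketch mirroring the NTree idea above (method names are adapted for the example, not taken from any library, and children are appended rather than prepended as the C# LinkedList version does):

```python
class NTree:
    """Minimal n-ary tree: each node holds data and a list of children."""

    def __init__(self, data):
        self.data = data
        self.children = []

    def add_child(self, data):
        node = NTree(data)
        self.children.append(node)
        return node                      # return the node so callers can nest

    def traverse(self, visit):
        """Pre-order traversal, calling visit(data) on every node."""
        visit(self.data)
        for child in self.children:
            child.traverse(visit)

# Build a small directory-like hierarchy, as in the question.
root = NTree("root")
docs = root.add_child("docs")
docs.add_child("readme.txt")
root.add_child("src")

seen = []
root.traverse(seen.append)               # ["root", "docs", "readme.txt", "src"]
```

As with the C# version, the root reference lives outside the class; wrapping it in a `Tree` class is a matter of taste.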
Tree data structure in C#
[ "", "c#", "data-structures", "" ]
What's the most efficient way to concatenate strings?
The `StringBuilder.Append()` method is much better than using the `+` operator. But I've found that, when executing 1000 concatenations or less, `String.Join()` is even more efficient than `StringBuilder`. ``` StringBuilder sb = new StringBuilder(); sb.Append(someString); ``` The only problem with `String.Join` is that you have to concatenate the strings with a common delimiter. **Edit:** as *@ryanversaw* pointed out, you can make the delimiter `string.Empty`. ``` string key = String.Join("_", new String[] { "Customers_Contacts", customerID, database, SessionID }); ```
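The same trade-off exists outside .NET. A small Python sketch of the equivalent idiom, where `str.join` plays the role of `String.Join` (and, as with `string.Empty` above, the delimiter can be an empty string):

```python
parts = ["Customers_Contacts", "42", "mydb", "abc123"]

# join builds the result in a single pass...
joined = "_".join(parts)

# ...while repeated + creates an intermediate string on every iteration.
looped = parts[0]
for p in parts[1:]:
    looped = looped + "_" + p

# An empty delimiter gives plain concatenation, like string.Empty above.
glued = "".join(parts)
```

Both approaches produce the same string; the difference is purely in how many temporaries get allocated along the way.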
[Rico Mariani](https://blogs.msdn.microsoft.com/ricom/), the .NET Performance guru, had [an article](https://blogs.msdn.microsoft.com/ricom/2003/12/15/more-stringbuilder-advice/) on this very subject. It's not as simple as one might suspect. The basic advice is this: > If your pattern looks like: > > `x = f1(...) + f2(...) + f3(...) + f4(...)` > > that's one concat and it's zippy, StringBuilder probably won't help. > > If your pattern looks like: > > `if (...) x += f1(...)` > `if (...) x += f2(...)` > `if (...) x += f3(...)` > `if (...) x += f4(...)` > > then you probably want StringBuilder. [Yet another article to support this claim](http://ericlippert.com/2013/06/17/string-concatenation-behind-the-scenes-part-one/) comes from Eric Lippert where he describes the optimizations performed on one line `+` concatenations in a detailed manner.
Most efficient way to concatenate strings?
[ "", "c#", ".net", "string", "performance", "optimization", "" ]
I need to be able to manipulate a large (10^7 nodes) graph in Python. The data corresponding to each node/edge is minimal, say, a small number of strings. What is the most efficient, in terms of ***memory and speed***, way of doing this? A dict of dicts is more flexible and simpler to implement, but I intuitively expect a list of lists to be faster. The list option would also require that I keep the data separate from the structure, while dicts would allow for something of the sort: ``` graph[I][J]["Property"]="value" ``` What would you suggest? --- Yes, I should have been a bit clearer on what I mean by efficiency. In this particular case I mean it in terms of random-access retrieval. Loading the data into memory isn't a huge problem. That's done once and for all. The time-consuming part is visiting the nodes so I can extract the information and measure the metrics I'm interested in. I hadn't considered making each node a class (properties are the same for all nodes) but it seems like that would add an extra layer of overhead? I was hoping someone would have some direct experience with a similar case that they could share. After all, graphs are one of the most common abstractions in CS.
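For reference, the dict-of-dicts option sketched concretely (a hedged pure-Python illustration; the `add_edge` helper name is made up for the example, not part of any library):

```python
# Adjacency stored as graph[i][j] -> dict of per-edge properties.
graph = {}

def add_edge(g, i, j, **props):
    """Insert a directed edge i -> j carrying arbitrary properties."""
    g.setdefault(i, {})[j] = dict(props)
    g.setdefault(j, {})          # ensure the endpoint exists even if isolated

add_edge(graph, 1, 2, weight=3, label="road")
add_edge(graph, 2, 3, weight=1)

graph[1][2]["Property"] = "value"    # the access pattern from the question

neighbors_of_2 = sorted(graph[2])    # neighbor lookup is one dict access
```

Random access (`graph[i][j]`) is two hash lookups, which is hard to beat in pure Python; a list-of-lists only wins if node IDs are dense integers and properties live in parallel arrays.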
I would strongly advocate you look at [NetworkX](http://networkx.github.com/). It's a battle-tested war horse and the first tool most 'research' types reach for when they need to do analysis of network-based data. I have manipulated graphs with 100s of thousands of edges without problem on a notebook. It's feature-rich and very easy to use. You will find yourself focusing more on the problem at hand rather than the details of the underlying implementation. **Example of [Erdős-Rényi](http://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model) random graph generation and analysis** ``` """ Create a G{n,m} random graph with n nodes and m edges and report some properties. This graph is sometimes called the Erdős-Rényi graph but is different from G{n,p} or binomial_graph which is also sometimes called the Erdős-Rényi graph. """ __author__ = """Aric Hagberg (hagberg@lanl.gov)""" __credits__ = """""" # Copyright (C) 2004-2006 by # Aric Hagberg # Dan Schult # Pieter Swart # Distributed under the terms of the GNU Lesser General Public License # http://www.gnu.org/copyleft/lesser.html from networkx import * import sys n=10 # 10 nodes m=20 # 20 edges G=gnm_random_graph(n,m) # some properties print "node degree clustering" for v in nodes(G): print v,degree(G,v),clustering(G,v) # print the adjacency list to terminal write_adjlist(G,sys.stdout) ``` Visualizations are also straightforward: ![enter image description here](https://i.stack.imgur.com/5biM9.jpg) More visualization: <http://jonschull.blogspot.com/2008/08/graph-visualization.html>
Even though this question is now quite old, I think it is worthwhile to mention my own Python module for graph manipulation called [graph-tool](http://graph-tool.skewed.de). It is very efficient, since the data structures and algorithms are implemented in C++, with template metaprogramming, using the Boost Graph Library. Therefore its performance (both in memory usage and runtime) is comparable to a pure C++ library, and can be orders of magnitude better than typical Python code, without sacrificing ease of use. I use it myself constantly to work with very large graphs.
What is the most efficient graph data structure in Python?
[ "", "python", "performance", "data-structures", "graph-theory", "" ]
Short version: I want to trigger the Form\_Load() event without making the form visible. This doesn't work because Show() ignores the current value of the Visible property: ``` tasksForm.Visible = false; tasksForm.Show(); ``` Long version: I have a WinForms application with two forms: main and tasks. The main form is always displayed. The user can either click a button to open the tasks form, or click some buttons that just run a task directly without opening the tasks form. When a user asks to run a task directly, I'd like to just call some public methods on the tasks form without showing it. Unfortunately, the task logic depends on stuff that happens in the Form\_Load() event. The only way I can find to trigger Form\_Load() is to call Show(). The best I've been able to do is to show the form in the minimized state: ``` tasksForm.WindowState = FormWindowState.Minimized; tasksForm.Show(); ``` I suppose the cleanest solution would be to pull the tasks logic out of the tasks form and into a controller class. Then I can use that class from the main form and from the tasks form, and only load the tasks form when I need it visible for the user. However, if it's an easy thing to load the form without displaying it, that would be a smaller change.
I totally agree with Rich B: you need to look at where you are placing your application logic rather than trying to kludge the WinForms mechanisms. All of those operations and data that your Tasks form is exposing should really be in a separate class, say some kind of Application Controller, held by your main form and then used by your tasks form to read and display data when needed; it doesn't need a form to be instantiated in order to exist. It probably seems a pain to rework it, but you'll be improving the structure of the app and making it more maintainable, etc.
Perhaps it should be noted here that you **can** cause the form's window to be created without showing the form. I think there could be legitimate situations for wanting to do this. Anyway, good design or not, you can do that like this: ``` MyForm f = new MyForm(); IntPtr dummy = f.Handle; // forces the form Control to be created ``` I don't think this will cause Form\_Load() to be called, but you will be able to call f.Invoke() at this point (which is what I was trying to do when I stumbled upon this SO question).
Load a form without showing it
[ "", "c#", "winforms", "" ]
So you have a String that is retrieved from an admin web UI (so it is definitely a String). How can you find out whether this string is an IP address or a hostname in Java? Update: I think I didn't make myself clear; I was asking more whether there is anything in the Java SDK that I can use to distinguish between IPs and hostnames. Sorry for the confusion, and thanks to everybody who took/will take the time to answer this.
You can use a regular expression with this pattern: ``` \b(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b ``` (In a Java string literal, every backslash must be doubled: `\\b`, `\\.`, and so on.) That will tell you if it's an IPv4 address.
You can see if the string matches the number.number.number.number format, for example: ``` \b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b ``` will match anything from `0 - 999`. Anything else you can have it default to hostname.
How do you tell whether a string is an IP or a hostname
[ "", "java", "ip-address", "hostname", "" ]
I have a list of 2-item tuples and I'd like to convert them to 2 lists where the first contains the first item in each tuple and the second list holds the second item. **For example:** ``` original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)] # and I want to become... result = (['a', 'b', 'c', 'd'], [1, 2, 3, 4]) ``` Is there a builtin function that does that?
In 2.x, [`zip`](http://docs.python.org/library/functions.html#zip) is its own inverse! Provided you [use the special \* operator](https://stackoverflow.com/questions/36901). ``` >>> zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4)]) [('a', 'b', 'c', 'd'), (1, 2, 3, 4)] ``` This is equivalent to calling `zip` with each element of the list as a separate argument: ``` zip(('a', 1), ('b', 2), ('c', 3), ('d', 4)) ``` except the arguments are passed to `zip` directly (after being converted to a tuple), so there's no need to worry about the number of arguments getting too big. In 3.x, `zip` [returns a lazy iterator](https://stackoverflow.com/questions/27431390), but this is trivially converted: ``` >>> list(zip(*[('a', 1), ('b', 2), ('c', 3), ('d', 4)])) [('a', 'b', 'c', 'd'), (1, 2, 3, 4)] ```
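One edge case worth knowing, sketched below: with an empty input list, `zip(*original)` yields nothing at all rather than two empty sequences, so unpacking the result into two names would raise. A guard restores the pair shape:

```python
original = [('a', 1), ('b', 2), ('c', 3), ('d', 4)]

letters, numbers = zip(*original)        # round-trips as the answer shows
restored = list(zip(letters, numbers))   # back to the original pairs

# The empty-list edge case: no tuples come back at all.
empty_result = list(zip(*[]))

# Guard that falls back to a pair of empty lists.
safe = tuple(zip(*[])) or ([], [])
```

The `or` fallback works because `tuple(zip(*[]))` is an empty tuple, which is falsy.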
You could also do ``` result = ([ a for a,b in original ], [ b for a,b in original ]) ``` It *should* scale better. Especially if Python makes good on not expanding the list comprehensions unless needed. (Incidentally, it makes a 2-tuple (pair) of lists, rather than a list of tuples, like `zip` does.) If generators instead of actual lists are ok, this would do that: ``` result = (( a for a,b in original ), ( b for a,b in original )) ``` The generators don't munch through the list until you ask for each element, but on the other hand, they do keep references to the original list.
Transpose/Unzip Function (inverse of zip)?
[ "", "python", "list", "matrix", "transpose", "" ]
One of the sites I maintain relies heavily on the use of `ViewState` (it isn't my code). However, on certain pages where the `ViewState` is extra-bloated, Safari throws a `"Validation of viewstate MAC failed"` error. This appears to only happen in Safari. Firefox, IE and Opera all load successfully in the same scenario.
I've been doing a little research into this and, whilst I'm not entirely sure it's the cause, I believe it is because Safari is not returning the full result set (hence cropping it). I have been in discussion with another developer and found the following post on Channel 9 as well, which recommends making use of the SQL state service to store the viewstate, avoiding the postback issue and also the page size. <http://channel9.msdn.com/forums/TechOff/250549-ASPNET-ViewState-flawed-architecture/?CommentID=270477#263702> Does this seem like the best solution?
While I second the Channel 9 solution, also be aware that in some hosted environments Safari is not considered an up-level browser. You may need to add it to your application's browscap in order to make use of some ASP.Net features. That was the root cause of some headaches we had for a client's site that used the ASP Menu control.
ViewState invalid only in Safari
[ "", "c#", ".net", "safari", "viewstate", "" ]
I'd like to find the different ways to solve a real-life problem I had: imagine a contest, or a game, during which the users collect points. You have to build a query to show the list of users with the best "n" scores. I'm making an example to clarify. Let's say that this is the Users table, with the points earned: ``` UserId - Points 1 - 100 2 - 75 3 - 50 4 - 50 5 - 50 6 - 25 ``` If I want the top 3 scores, the result will be: ``` UserId - Points 1 - 100 2 - 75 3 - 50 4 - 50 5 - 50 ``` This can be realized in a view or a stored procedure, as you want. My target DB is SQL Server. I actually solved this, but I think there are different ways to obtain the result... faster or more efficient than mine.
Untested, but should work: ``` select * from users where points in (select distinct top 3 points from users order by points desc) ```
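To sanity-check the shape of the result, here is the same query rephrased for SQLite (a hedged translation: SQLite has no `TOP`, so `LIMIT` stands in; the data comes from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userid INTEGER, points INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(1, 100), (2, 75), (3, 50), (4, 50), (5, 50), (6, 25)],
)

# Keep every user whose score is among the 3 highest distinct scores,
# so ties at the cutoff survive -- exactly the output the question wants.
rows = conn.execute("""
    SELECT userid, points FROM users
    WHERE points IN (SELECT DISTINCT points FROM users
                     ORDER BY points DESC LIMIT 3)
    ORDER BY points DESC, userid
""").fetchall()
```

Five rows come back even though only three distinct scores were requested, because the three users tied at 50 all qualify.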
Here's one that works - I don't know if it's more efficient, and it's SQL Server 2005+ ``` with scores as ( select 1 userid, 100 points union select 2, 75 union select 3, 50 union select 4, 50 union select 5, 50 union select 6, 25 ), results as ( select userid, points, RANK() over (order by points desc) as ranking from scores ) select userid, points, ranking from results where ranking <= 3 ``` Obviously the first "with" is just there to set up the values, so you can test that the second "with" and the final select work; you could start at "with results as..." if you were querying against an existing table.
SQL query to get the top "n" scores out of a list
[ "", "sql", "sql-server", "puzzle", "" ]
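Both answers above implement the same "top n distinct scores, keep ties" idea in SQL. As a quick sanity check of that logic (an illustrative in-memory sketch in plain JavaScript, not part of the original thread), here is the same computation using the data from the question:

```javascript
// In-memory illustration of the "top n distinct scores, keep ties" logic
// that the SQL answers implement; data taken from the question.
const users = [
  { userId: 1, points: 100 },
  { userId: 2, points: 75 },
  { userId: 3, points: 50 },
  { userId: 4, points: 50 },
  { userId: 5, points: 50 },
  { userId: 6, points: 25 },
];

// "select distinct top 3 points ... order by points desc"
const topPoints = [...new Set(users.map(u => u.points))]
  .sort((a, b) => b - a)
  .slice(0, 3);

// "where points in (...)"
const winners = users.filter(u => topPoints.includes(u.points));
console.log(winners.map(u => u.userId)); // [1, 2, 3, 4, 5]
```

The `distinct top 3` subquery corresponds to `topPoints`, and the outer `where points in (...)` corresponds to the final `filter`.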
A friend and I were discussing C++ templates. He asked me what this should do: ``` #include <iostream> template <bool> struct A { A(bool) { std::cout << "bool\n"; } A(void*) { std::cout << "void*\n"; } }; int main() { A<true> *d = 0; const int b = 2; const int c = 1; new A< b > (c) > (d); } ``` The last line in main has two reasonable parses. Is 'b' the template argument or is `b > (c)` the template argument? Although it is trivial to compile this and see what we get, we were wondering what resolves the ambiguity.
AFAIK it would be compiled as `new A<b>(c) > d`. This is the only reasonable way to parse it, IMHO. If the parser couldn't assume that, under normal circumstances, a `>` ends a template argument list, that would result in much more ambiguity. If you want it the other way, you should have written: ``` new A<(b > c)>(d); ```
As stated by Leon & Lee, 14.2/3 (C++ '03) explicitly defines this behaviour. C++ '0x adds to the fun with a similar rule applying to `>>`. The basic concept, is that when parsing a template-argument-list a non nested `>>` will be treated as two distinct `>` `>` tokens and not the right shift operator: ``` template <bool> struct A { A(bool); A(void*); }; template <typename T> class C { public: C (int); }; int main() { A<true> *d = 0; const int b = 2; const int c = 1; new C <A< b >> (c) > (d); // #1 new C <A< b > > (c) > (d); // #2 } ``` '#1' and '#2' are equivalent in the above. This of course fixes that annoyance with having to add spaces in nested specializations: ``` C<A<false>> c; // Parse error in C++ '98, '03 due to "right shift operator" ```
C++ Template Ambiguity
[ "", "c++", "templates", "grammar", "" ]
I have three tables: page, attachment, page-attachment I have data like this: ``` page ID NAME 1 first page 2 second page 3 third page 4 fourth page attachment ID NAME 1 foo.word 2 test.xsl 3 mm.ppt page-attachment ID PAGE-ID ATTACHMENT-ID 1 2 1 2 2 2 3 3 3 ``` I would like to get the number of attachments per page **also when that number is 0**. I have tried with: ``` select page.name, count(page-attachment.id) as attachmentsnumber from page inner join page-attachment on page.id=page-id group by page.id ``` I am getting this output: ``` NAME ATTACHMENTSNUMBER second page 2 third page 1 ``` I would like to get this output: ``` NAME ATTACHMENTSNUMBER first page 0 second page 2 third page 1 fourth page 0 ``` How do I get the 0 part?
Change your "inner join" to a "left outer join", which means "get me all the rows on the left of the join, even if there isn't a matching row on the right." ``` select page.name, count(page-attachment.id) as attachmentsnumber from page left outer join page-attachment on page.id=page-id group by page.name ```
Here's another solution using sub-querying. ``` SELECT p.name, ( SELECT COUNT(*) FROM [page-attachment] pa WHERE pa.[PAGE-ID] = p.id ) as attachmentsnumber FROM page p ```
SQL Query, Count with 0 count
[ "", "sql", "count", "" ]
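To see why the `inner join` in the question drops the zero-count pages while the `left outer join` keeps them, here are the join-and-count semantics sketched in plain JavaScript (an in-memory illustration with the question's data, not SQL):

```javascript
// In-memory sketch of why the inner join loses pages and the
// left outer join keeps them; data taken from the question.
const pages = [
  { id: 1, name: "first page" },
  { id: 2, name: "second page" },
  { id: 3, name: "third page" },
  { id: 4, name: "fourth page" },
];
const pageAttachments = [
  { pageId: 2, attachmentId: 1 },
  { pageId: 2, attachmentId: 2 },
  { pageId: 3, attachmentId: 3 },
];

// Left outer join semantics: every page appears, whether it matches or not,
// so unmatched pages naturally get a count of 0.
const counts = pages.map(p => ({
  name: p.name,
  attachmentsNumber: pageAttachments.filter(pa => pa.pageId === p.id).length,
}));
console.log(counts);
// first page 0, second page 2, third page 1, fourth page 0
```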
I've been tasked with the maintenance of a nonprofit website that recently fell victim to a SQL injection attack. Someone exploited a form on the site to add text to every available text-like field in the database (varchar, nvarchar, etc.) which, when rendered as HTML, includes and executes a JavaScript file. A Google search of the URL indicates that it's from email spammers based out of Romania or China, but that's not what's important right now. I went through and manually removed the information from the text fields that render on the most visible and popular pages on the site but I'm curious as to what would be the best programmatic way of removing the text from the other text fields on the site. Obviously there's more that needs to be done (hardening the site against SQL injections, using something like markdown instead of storing HTML, etc.) and I am working on those but for the time being what I really need is a good way to go in and programmatically remove the injected text. I know what the exact text is, it's the same every time, and it's always appended to the end of any text field. I can't afford to strip out all HTML in the database at this time and I don't know when this happened exactly so I can't just roll back to a backup. Also, the site is on shared hosting and I cannot connect to the database directly with SQL Server tools. I can execute queries against it though, so if there's any way of constructing a SQL update statement to the effect of "hey find all the text fields in all of the tables in the entire database and do this to clean them" that would be the best.
Restore the data from a recent backup.
I was a victim of this too, and you can use the following to clean up: ``` UPDATE Table SET TextField = SUBSTRING(TextField, 1, CHARINDEX('</title', TextField) - 1) WHERE (ID IN (SELECT ID FROM Table WHERE (CHARINDEX('</title', Textfield, 1) > 0))) ```
What's the best way of cleaning up after a SQL Injection?
[ "", "sql", "sql-server", "database", "security", "" ]
What's the easiest way to profile a PHP script? I'd love tacking something on that shows me a dump of all function calls and how long they took but I'm also OK with putting something around specific functions. I tried experimenting with the [microtime](http://php.net/microtime) function: ``` $then = microtime(); myFunc(); $now = microtime(); echo sprintf("Elapsed: %f", $now-$then); ``` but that sometimes gives me negative results. Plus it's a lot of trouble to sprinkle that all over my code.
The [PECL APD](http://www.php.net/apd) extension is used as follows: ``` <?php apd_set_pprof_trace(); //rest of the script ?> ``` After, parse the generated file using `pprofp`. Example output: ``` Trace for /home/dan/testapd.php Total Elapsed Time = 0.00 Total System Time = 0.00 Total User Time = 0.00 Real User System secs/ cumm %Time (excl/cumm) (excl/cumm) (excl/cumm) Calls call s/call Memory Usage Name -------------------------------------------------------------------------------------- 100.0 0.00 0.00 0.00 0.00 0.00 0.00 1 0.0000 0.0009 0 main 56.9 0.00 0.00 0.00 0.00 0.00 0.00 1 0.0005 0.0005 0 apd_set_pprof_trace 28.0 0.00 0.00 0.00 0.00 0.00 0.00 10 0.0000 0.0000 0 preg_replace 14.3 0.00 0.00 0.00 0.00 0.00 0.00 10 0.0000 0.0000 0 str_replace ``` **Warning: the latest release of APD is dated 2004, the extension [is no longer maintained](https://pecl.php.net/package/apd) and has various compability issues (see comments).**
You want [xdebug](http://www.xdebug.org/) I think. Install it on the server, turn it on, pump the output through [kcachegrind](http://kcachegrind.github.io/html/Home.html) (for linux) or [wincachegrind](http://sourceforge.net/projects/wincachegrind/) (for windows) and it'll show you a few pretty charts that detail the exact timings, counts and memory usage (but you'll need another extension for that). It rocks, seriously :D
Simplest way to profile a PHP script
[ "", "php", "profiling", "" ]
Here is the input I am getting from my Flash file: process.php?Q2=898&Aa=Grade1&Tim=0%3A0%3A12&Q1=908&Bb=lkj&Q4=jhj&Q3=08&Cc=North%20America&Q0=1 In PHP I use this code: ``` foreach ($_GET as $field => $label) { $datarray[] = $_GET[$field]; echo "$field :"; echo $_GET[$field]; echo "<br>"; } ``` I get this output: Q2 :898 Aa :Grade1 Tim :0:0:12 Q1 :908 Bb :lkj Q4 :jhj Q3 :08 Cc :North America Q0 :1 Now my question is: how do I sort it alphabetically so that it looks like this: Aa :Grade1 Bb :lkj Cc :North America Q0 :1 Q1 :908 and so on... before I can insert it into the DB?
``` ksort($_GET); ``` This should [ksort](http://php.net/manual/en/function.ksort.php) the `$_GET` array by its keys. Use [krsort](http://php.net/manual/en/function.krsort.php) for reverse order.
What you're looking for is [ksort](http://php.net/manual/en/function.ksort.php). Dig into the PHP manual! ;)
How to sort an array by keys in an ascending direction?
[ "", "php", "arrays", "sorting", "query-string", "" ]
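For comparison only (the question is PHP, so `ksort` is the right answer there): the same "rebuild the collection with its keys in ascending order" idea can be sketched in JavaScript, using the keys from the question:

```javascript
// Sketch of ksort's behaviour in JavaScript: sort the entries by key,
// then rebuild the object in that order.
const params = { Q2: "898", Aa: "Grade1", Tim: "0:0:12", Q1: "908",
                 Bb: "lkj", Q4: "jhj", Q3: "08", Cc: "North America", Q0: "1" };

const sorted = Object.fromEntries(
  Object.entries(params).sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
);
console.log(Object.keys(sorted));
// ["Aa", "Bb", "Cc", "Q0", "Q1", "Q2", "Q3", "Q4", "Tim"]
```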
For data types such as std::set and std::map where lookup occurs in logarithmic time, is the implementation required to maintain the begin and end iterators? Does accessing begin and end imply a lookup that could occur in logarithmic time? I have always assumed that begin and end always occur in constant time, however I can't find any confirmation of this in Josuttis. Now that I'm working on something where I need to be anal about performance, I want to make sure to cover my bases. Thanks
They happen in constant time. I'm looking at page 466 of the ISO/IEC 14882:2003 standard: **Table 65 - Container Requirements** a.begin(); *(constant complexity)* a.end(); *(constant complexity)* **Table 66 - Reversible Container Requirements** a.rbegin(); *(constant complexity)* a.rend(); *(constant complexity)*
Yes, according to <http://www.cplusplus.com/reference/stl/>, begin(), end() etc are all O(1).
C++ does begin/end/rbegin/rend execute in constant time for std::set, std::map, etc?
[ "", "c++", "stl", "" ]
We are currently working in a private beta and so are still in the process of making fairly rapid changes, although obviously as usage is starting to ramp up, we will be slowing down this process. That being said, one issue we are running into is that after we push out an update with new JavaScript files, the client browsers still use the cached version of the file and they do not see the update. Obviously, on a support call, we can simply inform them to do a `ctrl``F5` refresh to ensure that they get the up-to-date files from the server, but it would be preferable to handle this before that time. Our current thought is to simply attach a version number onto the name of the JavaScript files and then when changes are made, increment the version on the script and update all references. This definitely gets the job done, but updating the references on each release could get cumbersome. As I'm sure we're not the first ones to deal with this, I figured I would throw it out to the community. How are you ensuring clients update their cache when you update your code? If you're using the method described above, are you using a process that simplifies the change?
As far as I know a common solution is to add a `?<version>` to the script's src link. For instance: ``` <script type="text/javascript" src="myfile.js?1500"></script> ``` --- > I assume at this point that there isn't a better way than find-replace to increment these "version numbers" in all of the script tags? You might have a version control system do that for you? Most version control systems have a way to automatically inject the revision number on check-in for instance. It would look something like this: ``` <script type="text/javascript" src="myfile.js?$$REVISION$$"></script> ``` --- Of course, there are always better solutions like [this one](http://blog.greenfelt.net/2009/09/01/caching-javascript-safely/).
Appending the current time to the URL is indeed a common solution. However, you can also manage this at the web server level, if you want to. The server can be configured to send different HTTP headers for javascript files. For example, to force the file to be cached for no longer than 1 day, you would send: ``` Cache-Control: max-age=86400, must-revalidate ``` For beta, if you want to force the user to always get the latest, you would use: ``` Cache-Control: no-cache, must-revalidate ```
How can I force clients to refresh JavaScript files?
[ "", "javascript", "caching", "versioning", "" ]
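The answers above all boil down to "append a token that changes when the file changes". A minimal sketch of that helper (the names `BUILD_VERSION` and `versionedUrl` are made up for illustration; in practice the token would come from your version control system or build step, as suggested above):

```javascript
// Hypothetical build-time constant; stamp in your revision number or
// deploy timestamp here so the URL changes on every release.
const BUILD_VERSION = "1500";

function versionedUrl(src, version) {
  // Append ?v=<version>, or &v=<version> if the URL already has a query string.
  const sep = src.includes("?") ? "&" : "?";
  return src + sep + "v=" + encodeURIComponent(version);
}

console.log(versionedUrl("myfile.js", BUILD_VERSION));    // myfile.js?v=1500
console.log(versionedUrl("lib.js?min=1", BUILD_VERSION)); // lib.js?min=1&v=1500
```

In a page you would then emit your `<script>` tags with `versionedUrl(...)` instead of the raw path, so clients re-fetch scripts only when the version token changes.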
After hours of debugging, it appears to me that in Firefox, the innerHTML of a DOM element reflects what is actually in the markup, but in IE, the innerHTML reflects what's in the markup PLUS any changes made by the user or dynamically (i.e. via Javascript). Has anyone else found this to be true? Any interesting work-arounds to ensure both behave the same way?
I agree with Pat. At this point in the game, writing your own code to deal with cross-browser compatibility given the available Javascript frameworks doesn't make a lot of sense. There's a framework for nearly any taste (some really quite tiny) and they've focused on really abstracting out all of the differences between the browsers. They're doing WAY more testing of it than you're likely to. Something like jQuery or Yahoo's YUI (think how many people hit the Yahoo Javascript in a day and the variety of browsers) is just way more road-tested than any snippet you or I come up with.
I use jQuery's [.html()](http://docs.jquery.com/Attributes/html) to get a consistent result across browsers.
Firefox vs. IE: innerHTML handling
[ "", "javascript", "internet-explorer", "firefox", "dom", "" ]
I'm writing a C# POS (point of sale) system that takes input from a keyboard wedge magcard reader. This means that any data it reads off of a mag stripe is entered as if it were typed on the keyboard very quickly. Currently I'm handling this by attaching to the KeyPress event and looking for a series of very fast key presses that contain the card swipe sentinel characters. Is there a better way to deal with this sort of input? Edit: The device does simply present the data as keystrokes and doesn't interface through some other driver. Also We use a wide range of these types of devices so ideally a method should work independent of the specific model of wedge being used. However if there is no other option I'll have to make do.
One thing you can do is configure your wedge reader so that it presents one or more escape characters before or after the string. You would use these escape characters to know that you are about to have (or just had) a magcard input. This same technique is used by barcode reader devices so your application knows to get focus or handle the data input from the device. The negative to this approach is that you have to properly configure your external devices, which can be a deployment issue. This assumes that your devices simply present the data as keystrokes and don't interface through some other driver.
You can also use the Raw Input API if you know the Hardware IDs of the devices ahead of time. [I blogged about this recently.](http://nicholas.piasecki.name/blog/2009/02/distinguishing-barcode-scanners-from-the-keyboard-in-winforms/) It may be insane but it satisfied my requirement: the primary goal in my case was to be able to receive input even when the application lost focus because someone accidentally bumped into something while rummaging around to scan items on a pallet. The secondary goal was that I couldn't add any sentinel characters because that would have broken existing third-party applications being used with the scan guns. I've done the sentinel character method before, however, both via a `KeyPress` attach or a low-level keyboard hook via `SetWindowsHookEx()` or via `KeyPreview` on your application's main form. If it meets your requirements, it's definitely much simpler and easier to use that method and to that end I second the recommendations already given.
Best way to handle input from a keyboard "wedge"
[ "", "c#", "point-of-sale", "" ]
I'm tasked with building a .NET client app to detect silence in a WAV files. Is this possible with the built-in Windows APIs? Or alternately, any good libraries out there to help with this?
Audio analysis is a difficult thing requiring a lot of complex math (think Fourier Transforms). The question you have to ask is "what is silence". If the audio that you are trying to edit is captured from an analog source, the chances are that there isn't any silence... there will only be areas of soft noise (line hum, ambient background noise, etc). All that said, an algorithm that should work would be to determine a minimum volume (amplitude) threshold and duration (say, <10 dBA for more than 2 seconds) and then simply do a volume analysis of the waveform looking for areas that meet these criteria (with perhaps some filters for millisecond spikes). I've never written this in C#, but this [CodeProject article](http://www.codeproject.com/Articles/20025/Sound-visualizer-in-C) looks interesting; it describes C# code to draw a waveform... that is the same kind of code which could be used to do other amplitude analysis.
I'm using [NAudio](https://github.com/naudio/NAudio), and I wanted to detect the silence in audio files so I can either report or truncate. After a lot of research, I came up with this basic implementation. So, I wrote an extension method for the [`AudioFileReader`](https://github.com/naudio/NAudio/blob/master/NAudio/Wave/WaveStreams/AudioFileReader.cs) class which returns the silence duration at the start/end of the file, or starting from a specific position. Here: ``` static class AudioFileReaderExt { public enum SilenceLocation { Start, End } private static bool IsSilence(float amplitude, sbyte threshold) { double dB = 20 * Math.Log10(Math.Abs(amplitude)); return dB < threshold; } public static TimeSpan GetSilenceDuration(this AudioFileReader reader, SilenceLocation location, sbyte silenceThreshold = -40) { int counter = 0; bool volumeFound = false; bool eof = false; long oldPosition = reader.Position; var buffer = new float[reader.WaveFormat.SampleRate * 4]; while (!volumeFound && !eof) { int samplesRead = reader.Read(buffer, 0, buffer.Length); if (samplesRead == 0) eof = true; for (int n = 0; n < samplesRead; n++) { if (IsSilence(buffer[n], silenceThreshold)) { counter++; } else { if (location == SilenceLocation.Start) { volumeFound = true; break; } else if (location == SilenceLocation.End) { counter = 0; } } } } // reset position reader.Position = oldPosition; double silenceSamples = (double)counter / reader.WaveFormat.Channels; double silenceDuration = (silenceSamples / reader.WaveFormat.SampleRate) * 1000; return TimeSpan.FromMilliseconds(silenceDuration); } } ``` This will accept almost any audio file format *not just WAV*. **Usage:** ``` using (AudioFileReader reader = new AudioFileReader(filePath)) { TimeSpan duration = reader.GetSilenceDuration(AudioFileReaderExt.SilenceLocation.Start); Console.WriteLine(duration.TotalMilliseconds); } ``` **References:** * [How audio dB levels are calculated](https://stackoverflow.com/a/9812267/4934172). 
* [Floating-point samples range](https://stackoverflow.com/q/29761331/4934172). * [More about amplitude](https://sound.stackexchange.com/q/23746).
Detecting audio silence in WAV files using C#
[ "", "c#", ".net", "audio", "" ]
My source code needs to support both .NET version 1.1 and 2.0... how do I test for the different versions, and what is the best way to deal with this situation? I'm wondering if I should have the two sections of code inline, in separate classes, methods, etc. What do you think?
If you want to do something like this you will need to use preprocessor commands and conditional compilation symbols. I would use symbols that clearly indicate the version of .NET you are targeting (say NET11 and NET20) and then wrap the relevant code like this: ``` #if NET11 // .NET 1.1 code #elif NET20 // .NET 2.0 code #endif ``` The reason for doing it this way rather than a simple if/else is an extra layer of protection in case someone forgets to define the symbol. That being said, you should really drill down to the heart of the reason why you want/need to do this.
There are a lot of different options here. Where I work we use #if pragmas but it could also be done with separate assemblies for the separate versions. Ideally you would at least keep the version dependant code in separate partial class files and make the correct version available at compile time. I would enforce this if I could go back in time, our code base now has a whole lot of #if pragmas and sometimes it can be hard to manage. The worst part of the whole #if pragma thing is that Visual Studio just ignores anything that won't compile with the current defines and so it's very easy to check in breaking changes. [NUnit](http://www.nunit.org/index.php) supports both 1.1 and 2.0 and so is a good choice for a test framework. It's not too hard to use something like [NAnt](http://nant.sourceforge.net/) to make separate 1.1 and 2.0 builds and then automatically run the NUnit tests.
How should I write code with unique sections for different versions of .NET
[ "", "c#", ".net", "c-preprocessor", "" ]
Code: ``` <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Unusual Array Lengths!</title> <script type="text/javascript"> var arrayList = new Array(); arrayList = [1, 2, 3, 4, 5, ]; alert(arrayList.length); </script> </head> <body> </body> </html> ``` Notice the extra comma in the array declaration. The code above gives different outputs for various browsers: Safari: 5 Firefox: 5 IE: 6 The extra comma in the array is being ignored by Safari and FF while IE treats it as another object in the array. On some search, I have found mixed opinions about which answer is correct. Most people say that IE is correct but then Safari is also doing the same thing as Firefox. I haven't tested this on other browsers like Opera but I assume that there are discrepancies. My questions: i. **Which one of these is correct?** *Edit: By general consensus (and ECMAScript guidelines) we assume that IE is again at fault.* ii. **Are there any other such Javascript browser quirks that I should be wary of?** *Edit: Yes, there are loads of Javascript quirks. [www.quirksmode.org](http://www.quirksmode.org) is a good resource for the same.* iii. **How do I avoid errors such as these?** *Edit: Use [JSLint](http://www.jslint.com/) to validate your javascript. Or, use some external [libraries](http://openjsan.org/). Or, [sanitize](https://stackoverflow.com/questions/29053/javascript-browser-quirks-arraylength#29062) your code.* *Thanks to [DamienB](https://stackoverflow.com/users/3069/damien-b), [JasonBunting](https://stackoverflow.com/users/1790/jasonbunting), [John](https://stackoverflow.com/users/2168/john) and [Konrad Rudolph](https://stackoverflow.com/users/1968/konrad-rudolph) for their inputs.*
It seems to me that the Firefox behavior is correct. What is the value of the 6th element in IE (sorry, I don't have it handy to test)? Since there is no actual value provided, I imagine it's filling it with something like 'null' which certainly doesn't seem to be what you intended to have happen when you created the array. At the end of the day though, it doesn't really matter which is "correct" since the reality is that either you are targeting only one browser, in which case you can ignore what the others do, or you are targeting multiple browsers in which case your code needs to work on all of them. In this case the obvious solution is to never include the dangling comma in an array initializer. If you have problems avoiding it (e.g. for some reason you have developed a (bad, imho) habit of including it) and other problems like this, then something like [JSLint](http://www.jslint.com/) might help.
I was intrigued so I looked it up in the definition of [ECMAScript 262 ed. 3](http://www.ecma-international.org/publications/standards/Ecma-262.htm) which is the basis of JavaScript 1.8. The relevant definition is found in section 11.1.4 and unfortunately is not very clear. The section explicitly states that elisions (= omissions) at the beginning or in the middle don't define an element *but do contribute to the overall length*. There is no explicit statements about redundant commas at the end of the initializer but by omission I conclude that the above statement implies that they do *not* contribute to the overall length so I conclude that **MSIE is wrong**. The relevant paragraph reads as follows: > Array elements may be elided at the beginning, middle or end of the element list. Whenever a comma in the element list is not preceded by an Assignment Expression (i.e., a comma at the beginning or after another comma), the missing array element contributes to the length of the Array and increases the index of subsequent elements. Elided array elements are not defined.
Javascript Browser Quirks - array.Length
[ "", "javascript", "" ]
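For what it's worth, ECMAScript 5 later settled this in favour of the Firefox/Safari behaviour: a trailing comma does not contribute to the array's length, while elisions at the beginning or middle still do. A quick demonstration, runnable in any modern engine:

```javascript
const trailing = [1, 2, 3, 4, 5, ];
console.log(trailing.length); // 5 — the trailing comma adds no element

const elided = [1, , 3];
console.log(elided.length); // 3 — a middle elision still counts toward length
console.log(1 in elided);   // false — it creates a hole, not a defined element
```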
The following returns > Type of conditional expression cannot be determined because there is no implicit conversion between 'double' and '<null>' ``` aNullableDouble = (double.TryParse(aString, out aDouble) ? aDouble : null) ``` --- The reason why I can't just use aNullableBool instead of the roundtrip with aDouble is because aNullableDouble is a property of a generated EntityFramework class which cannot be used as an out par.
``` aNullableDouble = double.TryParse(aString, out aDouble) ? (double?)aDouble : null; ```
Just blow the syntax out into the full syntax instead of the shorthand ... it'll be easier to read: ``` aNullableDouble = null; if (double.TryParse(aString, out aDouble)) { aNullableDouble = aDouble; } ```
Shorthand if + nullable types (C#)
[ "", "c#", "conditional-operator", "" ]
This query works great: ``` var pageObject = (from op in db.ObjectPermissions join pg in db.Pages on op.ObjectPermissionName equals page.PageName where pg.PageID == page.PageID select op) .SingleOrDefault(); ``` I get a new type with my 'op' fields. Now I want to retrieve my 'pg' fields as well, but ``` select op, pg).SingleOrDefault(); ``` doesn't work. How can I select everything from both tables so that they appear in my new pageObject type?
You can use anonymous types for this, i.e.: ``` var pageObject = (from op in db.ObjectPermissions join pg in db.Pages on op.ObjectPermissionName equals page.PageName where pg.PageID == page.PageID select new { pg, op }).SingleOrDefault(); ``` Because of the `SingleOrDefault()` call, this will make pageObject a single instance of an anonymous type, so AFAIK you won't be able to pass it around to other methods; however, if you're simply obtaining data to play with in the method you're currently in, it's perfectly fine. You can also name properties in your anonymous type, i.e.:- ``` var pageObject = (from op in db.ObjectPermissions join pg in db.Pages on op.ObjectPermissionName equals page.PageName where pg.PageID == page.PageID select new { PermissionName = pg, ObjectPermission = op }).SingleOrDefault(); ``` This will enable you to say:- ``` if (pageObject.PermissionName.FooBar == "golden goose") Application.Exit(); ``` For example :-)
If you don't want to use anonymous types b/c let's say you're passing the object to another method, you can use the LoadWith load option to load associated data. It requires that your tables are associated either through foreign keys or in your Linq-to-SQL dbml model. ``` db.DeferredLoadingEnabled = false; DataLoadOptions dlo = new DataLoadOptions(); dlo.LoadWith<ObjectPermissions>(op => op.Pages) db.LoadOptions = dlo; var pageObject = from op in db.ObjectPermissions select op; // no join needed ``` Then you can call ``` pageObject.Pages.PageID ``` Depending on what your data looks like, you'd probably want to do this the other way around, ``` DataLoadOptions dlo = new DataLoadOptions(); dlo.LoadWith<Pages>(p => p.ObjectPermissions) db.LoadOptions = dlo; var pageObject = from p in db.Pages select p; // no join needed var objectPermissionName = pageObject.ObjectPermissions.ObjectPermissionName; ```
Creating a LINQ select from multiple tables
[ "", "c#", "linq", "" ]
I'm currently creating an explicit reference to `this` in the outer class so that I have a name to refer to in the anonymous inner class. Is there a better way to do this?
I just found this recently. Use `OuterClassName.this`. ``` class Outer { void foo() { new Thread() { public void run() { Outer.this.bar(); } }.start(); } void bar() { System.out.println("BAR!"); } } ``` **Updated** If you just want the object itself (instead of invoking members), then `Outer.this` is the way to go.
Use `EnclosingClass.this`
How do you get a reference to the enclosing class from an anonymous inner class in Java?
[ "", "java", "oop", "" ]
In my example below I'm using a `dijit.form.DateTextBox`: ``` <input type="text" name="startDate" dojoType="dijit.form.DateTextBox" constraints="{datePattern:'MM/dd/yyyy'}" value='<c:out value="${sessionScope.adminMessageForm.startDate}"/>' /> ``` So for example, if the user starts to enter "asdf" into the date the field turns yellow and a popup error message appears saying `The value entered is not valid.`. Even if I remove the `constraints="{datePattern:'MM/dd/yyyy'}"` it still validates. Without going into details as to why, I would like to be able keep the dojoType and still prevent validation in particular circumstances.
Try overriding the validate method in your markup. This will work (just tested): ``` <input type="text" name="startDate" dojoType="dijit.form.DateTextBox" constraints="{datePattern:'MM/dd/yyyy'}" value='<c:out value="${sessionScope.adminMessageForm.startDate}"/>' validate='return true;' /> ```
My only suggestion is to programmatically remove the dojoType on the server side or client side. It is not possible to keep the dojoType and not have it validate, unless you create your own type that has your logic in it.
Is there anyway to disable the client-side validation for dojo date text box?
[ "", "javascript", "dojo", "" ]
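The accepted answer overrides `validate` in markup; the same per-instance override works programmatically on any widget reference. Since the real dijit widget can't run outside a browser, here a plain object stands in for it (the field value and date pattern are just illustrative):

```javascript
// Plain-object stand-in for a dijit.form.DateTextBox, to show the
// per-instance method override pattern the accepted answer uses.
const startDateBox = {
  value: "asdf",
  validate() { return /^\d{2}\/\d{2}\/\d{4}$/.test(this.value); },
};

console.log(startDateBox.validate()); // false — "asdf" is not a valid date

// Per-instance override: only this widget stops validating.
startDateBox.validate = function () { return true; };
console.log(startDateBox.validate()); // true — validation disabled
```

Because the override is assigned on the instance rather than the prototype, other `DateTextBox` widgets on the page keep their normal validation.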
Given the key for some registry value (e.g. `HKEY_LOCAL_MACHINE\blah\blah\blah\foo`) how can I: 1. Safely determine that such a key exists. 2. Programmatically (i.e. with code) get its value. I have absolutely no intention of writing anything back to the registry (for the duration of my career if I can help it). So we can skip the lecture about every molecule in my body exploding at the speed of light if I write to the registry incorrectly. I prefer answers in C++, but mostly I just need to know what the special Windows API incantation to get at the value is.
**Here is some pseudo-code to retrieve the following:** 1. If a registry key exists 2. What the default value is for that registry key 3. What a string value is 4. What a DWORD value is **Example code:** Include the library dependency: Advapi32.lib ``` HKEY hKey; LONG lRes = RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Perl", 0, KEY_READ, &hKey); bool bExistsAndSuccess (lRes == ERROR_SUCCESS); bool bDoesNotExistsSpecifically (lRes == ERROR_FILE_NOT_FOUND); std::wstring strValueOfBinDir; std::wstring strKeyDefaultValue; GetStringRegKey(hKey, L"BinDir", strValueOfBinDir, L"bad"); GetStringRegKey(hKey, L"", strKeyDefaultValue, L"bad"); LONG GetDWORDRegKey(HKEY hKey, const std::wstring &strValueName, DWORD &nValue, DWORD nDefaultValue) { nValue = nDefaultValue; DWORD dwBufferSize(sizeof(DWORD)); DWORD nResult(0); LONG nError = ::RegQueryValueExW(hKey, strValueName.c_str(), 0, NULL, reinterpret_cast<LPBYTE>(&nResult), &dwBufferSize); if (ERROR_SUCCESS == nError) { nValue = nResult; } return nError; } LONG GetBoolRegKey(HKEY hKey, const std::wstring &strValueName, bool &bValue, bool bDefaultValue) { DWORD nDefValue((bDefaultValue) ? 1 : 0); DWORD nResult(nDefValue); LONG nError = GetDWORDRegKey(hKey, strValueName.c_str(), nResult, nDefValue); if (ERROR_SUCCESS == nError) { bValue = (nResult != 0) ? true : false; } return nError; } LONG GetStringRegKey(HKEY hKey, const std::wstring &strValueName, std::wstring &strValue, const std::wstring &strDefaultValue) { strValue = strDefaultValue; WCHAR szBuffer[512]; DWORD dwBufferSize = sizeof(szBuffer); ULONG nError; nError = RegQueryValueExW(hKey, strValueName.c_str(), 0, NULL, (LPBYTE)szBuffer, &dwBufferSize); if (ERROR_SUCCESS == nError) { strValue = szBuffer; } return nError; } ```
Since Windows >=Vista/Server 2008, [RegGetValue](https://msdn.microsoft.com/en-us/library/windows/desktop/ms724868(v=vs.85).aspx) is available, which [is a safer function](https://msdn.microsoft.com/en-us/magazine/mt808504.aspx) than [RegQueryValueEx](https://msdn.microsoft.com/en-us/library/ms724911%28VS.85%29.aspx). No need for `RegOpenKeyEx`, `RegCloseKey` or `NUL` termination checks of string values ([`REG_SZ`, `REG_MULTI_SZ`, `REG_EXPAND_SZ`](https://msdn.microsoft.com/en-us/library/ms724884(v=vs.85).aspx)). ``` #include <iostream> #include <string> #include <exception> #include <windows.h> /*! \brief Returns a value from HKLM as string. \exception std::runtime_error Replace with your error handling. */ std::wstring GetStringValueFromHKLM(const std::wstring& regSubKey, const std::wstring& regValue) { size_t bufferSize = 0xFFF; // If too small, will be resized down below. std::wstring valueBuf; // Contiguous buffer since C++11. valueBuf.resize(bufferSize); auto cbData = static_cast<DWORD>(bufferSize * sizeof(wchar_t)); auto rc = RegGetValueW( HKEY_LOCAL_MACHINE, regSubKey.c_str(), regValue.c_str(), RRF_RT_REG_SZ, nullptr, static_cast<void*>(valueBuf.data()), &cbData ); while (rc == ERROR_MORE_DATA) { // Get a buffer that is big enough. cbData /= sizeof(wchar_t); if (cbData > static_cast<DWORD>(bufferSize)) { bufferSize = static_cast<size_t>(cbData); } else { bufferSize *= 2; cbData = static_cast<DWORD>(bufferSize * sizeof(wchar_t)); } valueBuf.resize(bufferSize); rc = RegGetValueW( HKEY_LOCAL_MACHINE, regSubKey.c_str(), regValue.c_str(), RRF_RT_REG_SZ, nullptr, static_cast<void*>(valueBuf.data()), &cbData ); } if (rc == ERROR_SUCCESS) { cbData /= sizeof(wchar_t); valueBuf.resize(static_cast<size_t>(cbData - 1)); // remove end null character return valueBuf; } else { throw std::runtime_error("Windows system error code: " + std::to_string(rc)); } } int main() { std::wstring regSubKey; #ifdef _WIN64 // Manually switching between 32bit/64bit for the example. Use dwFlags instead. regSubKey = L"SOFTWARE\\WOW6432Node\\Company Name\\Application Name\\"; #else regSubKey = L"SOFTWARE\\Company Name\\Application Name\\"; #endif std::wstring regValue(L"MyValue"); std::wstring valueFromRegistry; try { valueFromRegistry = GetStringValueFromHKLM(regSubKey, regValue); } catch (std::exception& e) { std::cerr << e.what(); } std::wcout << valueFromRegistry; } ``` Its parameter `dwFlags` supports flags for type restriction, filling the value buffer with zeros on failure (`RRF_ZEROONFAILURE`) and 32/64bit registry access (`RRF_SUBKEY_WOW6464KEY`, `RRF_SUBKEY_WOW6432KEY`) for 64bit programs.
How to read a value from the Windows registry
[ "", "c++", "windows", "winapi", "registry", "" ]
A colleague pointed me the other day to [BCEL](http://jakarta.apache.org/bcel/) which, as best I can tell from his explanation and a quick read, is a way to modify the byte code at run time. My first thought was that it sounded dangerous, and my second thought was that it sounded cool. Then I gave it some more thought and I recalled the [codinghorror post on monkey-patching](https://blog.codinghorror.com/monkeypatching-for-humans/) and realized that this was basically the same thing. Has anyone ever used BCEL for anything practical? Am I right that this is basically run time monkey patching, or am I missing something?
It's a bit more low-level than classic monkey patching, and from what I read, the classes already loaded into the VM are not updated. It only supports saving the modified bytecode back to class files, not modifying classes at run time.
From BCEL's FAQ: > Q: Can I create or modify classes > dynamically with BCEL? > > A: BCEL contains useful classes in the > util package, namely ClassLoader and > JavaWrapper. Take a look at the > ProxyCreator example. But monkeypatching is... uhm... controversial, and you probably shouldn't use it if your language doesn't support it. If you have a good use case for it, may I suggest embedding Jython?
Is BCEL == monkeypatching for java?
[ "", "java", "bytecode", "monkeypatching", "bcel", "" ]
I've been using PHP for too long, but I'm new to JavaScript integration in some places. I'm trying to find the fastest way to pass database information into a page where it can be modified and displayed dynamically in JavaScript. Right now, I'm looking at building the JSON by hand with PHP echo statements because it's fast and effective, but I saw that I could use PHP's JSON library (PHP 5.2). **Has anybody tried the new JSON library, and is it better than my earlier method?**
Use the library. If you try to generate it manually, I predict with 99% certainty that the resulting text will be invalid in some way. Especially with more esoteric features like Unicode strings or exponential notation.
The json\_encode and json\_decode functions work perfectly. Just pass json\_encode an object or an array that you want to encode and it recursively encodes it to JSON. Make sure that you give it UTF-8 encoded data!
Loading JSON with PHP
[ "", "php", "json", "" ]
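The escaping pitfall behind the chosen answer is language-independent. Here is the same round-trip argument sketched with Python's json module (PHP's json\_encode/json\_decode behave analogously); the sample value is invented:

```python
import json

# A value containing a quote, a newline and a non-ASCII character:
# exactly the cases hand-concatenated JSON tends to get wrong.
record = {"name": 'He said "héllo"\nbye'}

text = json.dumps(record)   # the library handles all escaping
print(text)

# Lossless round trip; an echo-built string would need every one of
# these escapes written by hand to survive parsing on the JS side.
assert json.loads(text) == record
```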
I am currently writing a small calendar in ASP.Net C#. Currently to produce the rows of the weeks I do the following for loop: ``` var iWeeks = 6; for (int w = 0; w < iWeeks; w++) { ``` This works fine; however, some months will only have 5 weeks and in some rare cases, 4. How can I calculate the number of rows that will be required for a particular month? This is an example of what I am creating: ![enter image description here](https://i.stack.imgur.com/NkxN7.png) As you can see, for the above month only 5 rows are required. However, take this month (August 2008), which started on a Saturday and ends on a Monday in the 6th week/row. ***Image found on google***
Here is the method that does it: ``` public int GetWeekRows(int year, int month) { DateTime firstDayOfMonth = new DateTime(year, month, 1); DateTime lastDayOfMonth = new DateTime(year, month, 1).AddMonths(1).AddDays(-1); System.Globalization.Calendar calendar = System.Threading.Thread.CurrentThread.CurrentCulture.Calendar; int lastWeek = calendar.GetWeekOfYear(lastDayOfMonth, System.Globalization.CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday); int firstWeek = calendar.GetWeekOfYear(firstDayOfMonth, System.Globalization.CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday); return lastWeek - firstWeek + 1; } ``` You can customize the calendar week rule by modifying the System.Globalization.CalendarWeekRule.FirstFourDayWeek part. I hope the code is self explanatory.
Well, it depends on the culture you're using, but let's assume you can use Thread.CurrentThread.CurrentCulture, then the code to get the week of today would be: ``` CultureInfo culture = Thread.CurrentThread.CurrentCulture; Calendar cal = culture.Calendar; Int32 week = cal.GetWeekOfYear(DateTime.Today, culture.DateTimeFormat.CalendarWeekRule, culture.DateTimeFormat.FirstDayOfWeek); ```
Calculate DateTime Weeks into Rows
[ "", "c#", "asp.net", "" ]
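For comparison, the row count in the chosen answer can be reproduced with Python's standard calendar module; a small sketch (the Sunday-first grid is an assumption matching the screenshot, not something the C# answer specifies):

```python
import calendar

def week_rows(year, month, firstweekday=calendar.SUNDAY):
    # monthdayscalendar returns one list per displayed week row,
    # padding days outside the month with 0.
    cal = calendar.Calendar(firstweekday=firstweekday)
    return len(cal.monthdayscalendar(year, month))

print(week_rows(2008, 8))   # August 2008 needs 6 rows on a Sunday-first grid
print(week_rows(2009, 2))   # February 2009 starts on Sunday and fits in 4
```

Passing calendar.MONDAY instead gives the 5-row result a Monday-first grid produces for August 2008.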
I have a Person object with two constructors - one takes an int (personId), the other a string (logonName). I would like another constructor that takes a string (badgeNumber). I know this can't be done, but seems it might be a common situation. Is there a graceful way of handling this? I suppose this would apply to any overloaded method. Code: ``` public class Person { public Person() {} public Person(int personId) { this.Load(personId); } public Person(string logonName) { this.Load(logonName); } public Person(string badgeNumber) { //load logic here... } ``` ...etc.
You might consider using custom types. For example, create LogonName and BadgeNumber classes. Then your function declarations look like... ``` public Person(LogonName ln) { this.Load(ln.ToString()); } public Person(BadgeNumber bn) { //load logic here... } ``` Such a solution might give you a good place to keep the business logic that governs the format and usage of these strings.
You could perhaps use factory methods instead? ``` public static Person fromId(int id) { Person p = new Person(); p.Load(id); return p; } public static Person fromLogonName(string logonName) { Person p = new Person(); p.Load(logonName); return p; } public static Person fromBadgeNumber(string badgeNumber) { Person p = new Person(); // load logic return p; } private Person() {} ```
Constructors with the same argument type
[ "", "c#", ".net", "oop", "" ]
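The factory-method idea in the second answer carries over to any language where overloads can't be distinguished by meaning; a minimal Python sketch (the load bodies are invented placeholders, not the original's database code):

```python
class Person:
    """Sketch only: the 'load' logic below is a made-up placeholder."""

    def __init__(self, name):
        self.name = name

    # Named constructors select the load logic by method name,
    # so two string-keyed lookups no longer collide.
    @classmethod
    def from_id(cls, person_id):
        return cls(name=f"person #{person_id}")

    @classmethod
    def from_logon_name(cls, logon_name):
        return cls(name=logon_name)

    @classmethod
    def from_badge_number(cls, badge_number):
        return cls(name=f"badge {badge_number}")

print(Person.from_badge_number("B-1024").name)
```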
I have a method that where I want to redirect the user back to a login page located at the root of my web application. I'm using the following code: ``` Response.Redirect("~/Login.aspx?ReturnPath=" + Request.Url.ToString()); ``` This doesn't work though. My assumption was that ASP.NET would automatically resolve the URL into the correct path. Normally, I would just use ``` Response.Redirect("../Login.aspx?ReturnPath=" + Request.Url.ToString()); ``` but this code is on a master page, and can be executed from any folder level. How do I get around this issue?
> I think you need to drop the "~/" and replace it with just "/", I believe / is the root **STOP RIGHT THERE!** :-) unless you want to hardcode your web app so that it can only be installed at the root of a web site. "~/" ***is*** the correct thing to use, but the reason that your original code didn't work as expected is that `ResolveUrl` (which is used internally by `Redirect`) tries to first work out if the path you are passing it is an absolute URL (e.g. "\*\*<http://server/>\*\*foo/bar.htm" as opposed to "foo/bar.htm") - but unfortunately it does this by simply looking for a colon character ':' in the URL you give it. But in this case it finds a colon in the URL you give in the `ReturnPath` query string value, which fools it - therefore your '~/' doesn't get resolved. The fix is that you should be URL-encoding the `ReturnPath` value which escapes the problematic ':' along with any other special characters. ``` Response.Redirect("~/Login.aspx?ReturnPath=" + Server.UrlEncode(Request.Url.ToString())); ``` Additionally, I recommend that you (or anyone) never use `Uri.ToString` - because it gives a human-readable, more "friendly" version of the URL - not a necessarily correct one (it unescapes things). Instead use Uri.AbsoluteUri - like so: ``` Response.Redirect("~/Login.aspx?ReturnPath=" + Server.UrlEncode(Request.Url.AbsoluteUri)); ```
You can resolve the URL first with `Response.Redirect("~/Login.aspx");` and add the parameters after it has been resolved.
Response.Redirect using ~ Path
[ "", "c#", "asp.net", "response.redirect", "" ]
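The accepted fix is pure URL-encoding; the mechanics can be seen with Python's urllib.parse.quote standing in for Server.UrlEncode (the URL below is made up):

```python
from urllib.parse import quote

return_url = "http://server/app/page.aspx"   # hypothetical Request.Url

# Unencoded, the ':' inside the query value is what fools ResolveUrl.
raw = "~/Login.aspx?ReturnPath=" + return_url
encoded = "~/Login.aspx?ReturnPath=" + quote(return_url, safe="")

print(raw)
print(encoded)   # every reserved character, ':' included, is now escaped
```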
Is there any way to use inheritance in a database (specifically in SQL Server 2005)? Suppose I have a few fields like **CreatedOn**, **CreatedBy** which I want to add to all of my entities. I am looking for an alternative way instead of adding these fields to every table.
There is no such thing as inheritance between tables in SQL Server 2005, and as noted by the others, you can get as far as getting help adding the necessary columns to the tables when you create them, but it won't be inheritance as you know it. Think of it more like a template for your source code files. As GateKiller mentions, you can create a table containing the shared data and reference it with a foreign key, but you'll either have to have audit hooks, triggers, or do the update manually. Bottom line: Manual work.
PostgreSQL has this feature. Just add this to the end of your table definition: ``` INHERITS (tablename[, othertable...]) ``` The child table will have all the columns of its parent, and changes to the parent table will change the child. Also, everything in the child table will come up in queries to the parent table (by default). Unfortunately indices don't cross the parent/child border, which also means you can't make sure that certain columns are unique across both the parent and child. As far as I know, it's not a feature used very often.
Inheritance in database?
[ "", "sql", "database", "inheritance", "sql-server-2005", "" ]
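The first answer's "shared table plus foreign key" workaround can be sketched like this (SQLite via Python rather than SQL Server 2005, and every name below is invented for the demo):

```python
import sqlite3

# One shared audit table referenced by each entity table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE audit (
        audit_id   INTEGER PRIMARY KEY,
        created_on TEXT DEFAULT CURRENT_TIMESTAMP,
        created_by TEXT
    );
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY,
        name       TEXT,
        audit_id   INTEGER REFERENCES audit(audit_id)
    );
""")
audit_id = conn.execute(
    "INSERT INTO audit (created_by) VALUES ('alice')").lastrowid
conn.execute("INSERT INTO product (name, audit_id) VALUES (?, ?)",
             ("widget", audit_id))
row = conn.execute("""
    SELECT p.name, a.created_by
    FROM product p JOIN audit a ON a.audit_id = p.audit_id
""").fetchone()
print(row)
```

As the answer says, keeping the shared row in sync is still manual work (or a trigger).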
What's the best way to specify a proxy with username and password for an http connection in python?
Use this: ``` import requests proxies = {"http":"http://username:password@proxy_ip:proxy_port"} r = requests.get("http://www.example.com/", proxies=proxies) print(r.content) ``` I think it's much simpler than using `urllib`. I don't understand why people love using `urllib` so much.
This works for me: ``` import urllib2 proxy = urllib2.ProxyHandler({'http': 'http://username:password@proxyurl:proxyport'}) auth = urllib2.HTTPBasicAuthHandler() opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler) urllib2.install_opener(opener) conn = urllib2.urlopen('http://python.org') return_str = conn.read() ```
How to specify an authenticated proxy for a python http connection?
[ "", "python", "http", "proxy", "" ]
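For current Python 3, the urllib2 answer translates almost mechanically (the proxy address and credentials below are placeholders, and no request is actually sent):

```python
import urllib.request

# Python 3 spelling of the urllib2 recipe above.
proxy = urllib.request.ProxyHandler(
    {"http": "http://username:password@proxy.example.com:8080"})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)   # later urlopen() calls route via it

print(proxy.proxies["http"])
```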
Does anyone know this compiler feature? It seems GCC supports it. How does it work? What is the potential gain? In which cases is it good? Inner loops? (this question is specific, not about optimization in general, thanks)
It works by placing extra code to count the number of times each codepath is taken. When you compile a second time the compiler uses the knowledge gained about execution of your program that it could only guess at before. There are a couple of things PGO can work toward: * Deciding which functions should be inlined or not depending on how often they are called. * Deciding how to place hints about which branch of an "if" statement should be predicted, based on the percentage of calls going one way or the other. * Deciding how to optimize loops based on how many iterations get taken each time that loop is called. You never really know how much these things can help until you test it.
PGO gives about a 5% speed boost when compiling x264, the project I work on, and we have a built-in system for it (make fprofiled). It's a nice free speed boost in some cases, and probably helps more in applications that, unlike x264, are less made up of handwritten assembly.
profile-guided optimization (C)
[ "", "c++", "c", "optimization", "gcc", "pgo", "" ]
HTML textarea elements only wrap when they reach a space or tab character. This is fine, until the user types a looooooooooooooooooooooong enough word. I'm looking for a way to strictly enforce line breaks (e.g. even if it results in "loooooooooooo \n ooooooooooong"). ![alt text](https://i.stack.imgur.com/sUmHB.png) The best I've found is to add a zero-width unicode space after every letter, but this breaks copy and paste operations. Anyone know of a better way? Note: I'm referring to the "textarea" element here (i.e. the one that behaves similarly to a text input) - not just a plain old block of text.
The CSS settings `word-wrap:break-word` and `text-wrap:unrestricted` appear to be CSS 3 features. Good luck finding a way to do this on current implementations.
* quirksmode.org has an [overview of various methods](http://www.quirksmode.org/oddsandends/wbr.html). * There's a related SO question: ["In HTML, how to word-break on a dash?"](https://stackoverflow.com/questions/904/in-html-how-to-word-break-on-a-dash) * In browsers that support it, [`word-wrap: break-word`](http://www.w3.org/TR/css3-text/#word-wrap) might give the desired effect as well.
Most elegant way to force a TEXTAREA element to line-wrap, *regardless* of whitespace
[ "", "javascript", "html", "css", "text", "" ]
Using core jQuery, how do you remove all the options of a select box, then add one option and select it? My select box is the following. ``` <Select id="mySelect" size="9"> </Select> ``` EDIT: The following code was helpful with chaining. However, (in Internet Explorer) `.val('whatever')` did not select the option that was added. (I did use the same 'value' in both `.append` and `.val`.) ``` $('#mySelect').find('option').remove().end() .append('<option value="whatever">text</option>').val('whatever'); ``` EDIT: Trying to get it to mimic this code, I use the following code whenever the page/form is reset. This select box is populated by a set of radio buttons. `.focus()` was closer, but the option did not appear selected like it does with `.selected= "true"`. Nothing is wrong with my existing code - I am just trying to learn jQuery. ``` var mySelect = document.getElementById('mySelect'); mySelect.options.length = 0; mySelect.options[0] = new Option ("Foo (only choice)", "Foo"); mySelect.options[0].selected="true"; ``` EDIT: selected answer was close to what I needed. This worked for me: ``` $('#mySelect').children().remove().end() .append('<option selected value="whatever">text</option>') ; ``` But both answers led me to my final solution..
``` $('#mySelect') .find('option') .remove() .end() .append('<option value="whatever">text</option>') .val('whatever') ; ```
``` $('#mySelect') .empty() .append('<option selected="selected" value="whatever">text</option>') ; ```
How do you remove all the options of a select box and then add one option and select it with jQuery?
[ "", "javascript", "jquery", "html-select", "dynamic-html", "" ]
I have been sold on mod\_wsgi and apache rather than mod\_python. I have all the parts installed (django, apache, mod\_wsgi) but have run into a problem deploying. I am on osx 10.5 with apache 2.2 and django 1.0b2, mod\_wsgi-2.3 My application is called tred. Here are the relevant files: httpd-vhosts (included in httpd-conf) ``` NameVirtualHost tred:80 <VirtualHost tred:80> ServerName tred Alias /admin_media /usr/lib/python2.5/site-packages/django/contrib/admin/media <Directory /usr/lib/python2.5/site-packages/django/contrib/admin/media> Order allow,deny Allow from all </Directory> Alias /media /Users/dmg/Sites/tred/media <Directory /Users/dmg/Sites/tred/media> Order allow,deny Allow from all </Directory> Alias / /Users/dmg/Sites/tred/ <Directory /Users/dmg/Sites/tred/> Order allow,deny Allow from all </Directory> WSGIScriptAlias / /Users/dmg/Sites/tred/mod_wsgi-handler.wsgi WSGIDaemonProcess tred user=dmg group=staff processes=1 threads=10 WSGIProcessGroup tred </VirtualHost> ``` mod\_wsgi-handler.wsgi ``` import sys import os sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/..') os.environ['DJANGO_SETTINGS_MODULE'] = 'tred.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() ``` When I go to <http://tred> I get a directory listing rather than the rendered website. I think I have followed the tutorials correctly but it is obviously not right. What can I do to make this work?
What happens if you remove the `Alias /` directive?
Note that Alias and WSGIScriptAlias directives do not have the same precedence. Thus, they will not be processed in file order as written. Instead, all Alias directives get precedence over WSGIScriptAlias directives. Thus, it wouldn't have mattered if the Alias for '/' appeared after WSGIScriptAlias, it would still have taken precedence.
Setup django with WSGI and apache
[ "", "python", "django", "apache", "mod-wsgi", "" ]
Currently, I am writing up a bit of a product-based CMS as my first project. Here is my question. How can I add additional data (products) to my Product model? I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would I build both my view and my template? Please keep in mind that I don't really know all that much Python, and I am very new to Django. How can I do this all without using the existing django admin interface?
You will want to wire your URL to the Django [create\_object generic view](https://docs.djangoproject.com/en/1.4/ref/generic-views/#django-views-generic-create-update-create-object), and pass it either "model" (the model you want to create) or "form\_class" (a customized [ModelForm](https://docs.djangoproject.com/en/1.1/topics/forms/modelforms/#topics-forms-modelforms) class). There are a number of [other arguments](https://docs.djangoproject.com/en/1.4/ref/generic-views/#django-views-generic-create-update-create-object) you can also pass to override default behaviors. Sample URLconf for the simplest case: ``` from django.conf.urls.defaults import * from django.views.generic.create_update import create_object from my_products_app.models import Product urlpatterns = patterns('', url(r'^admin/products/add/$', create_object, {'model': Product})) ``` Your template will get the context variable "form", which you just need to wrap in a <form> tag and add a submit button. The simplest working template (by default should go in "my\_products\_app/product\_form.html"): ``` <form action="." method="POST"> {{ form }} <input type="submit" name="submit" value="add"> </form> ``` Note that your Product model must have a get\_absolute\_url method, or else you must pass in the post\_save\_redirect parameter to the view. Otherwise it won't know where to redirect to after save.
This topic is covered in [Django tutorials](https://code.djangoproject.com/wiki/Tutorials).
How do I add data to an existing model in Django?
[ "", "python", "django", "" ]
I'm trying out the following query: ``` SELECT A,B,C FROM table WHERE field LIKE 'query%' UNION SELECT A,B,C FROM table WHERE field LIKE '%query' UNION SELECT A,B,C FROM table WHERE field LIKE '%query%' GROUP BY B ORDER BY B ASC LIMIT 5 ``` That's three queries stuck together, kinda sorta. However, the result set that comes back reflects results from query #3 before the results from query #1 which is undesired. Is there any way to prioritize these so that results come as all for query #1, then all for query #2 then all for query #3? I don't want to do this in PHP just yet (not to mention having to control for results that showed up in the first query not to show in the second and so forth).
Maybe you should try including a fourth column, stating the table it came from, and then order and group by it: ``` SELECT A,B,C, "query 1" as origin FROM table WHERE field LIKE 'query%' UNION SELECT A,B,C, "query 2" as origin FROM table WHERE field LIKE '%query' UNION SELECT A,B,C, "query 3" as origin FROM table WHERE field LIKE '%query%' GROUP BY origin, B ORDER BY origin, B ASC LIMIT 5 ```
Add an additional column with hard-coded values that you will use to sort the overall resultset, like so: ``` SELECT A,B,C,1 as [order] FROM table WHERE field LIKE 'query%' UNION SELECT A,B,C,2 as [order] FROM table WHERE field LIKE '%query' UNION SELECT A,B,C,3 as [order] FROM table WHERE field LIKE '%query%' GROUP BY B ORDER BY [order] ASC, B ASC LIMIT 5 ```
How do I Concatenate entire result sets in MySQL?
[ "", "mysql", "sql", "union", "" ]
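The extra ordering column from both answers can be demonstrated with SQLite from Python (the question uses MySQL; the table name and sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("query one",), ("a query",), ("mid query mid",)])
rows = conn.execute("""
    SELECT field, 1 AS origin FROM t WHERE field LIKE 'query%'
    UNION
    SELECT field, 2 FROM t WHERE field LIKE '%query'
    UNION
    SELECT field, 3 FROM t WHERE field LIKE '%query%'
    ORDER BY origin, field
""").fetchall()
for field, origin in rows:
    print(origin, field)
```

Note that a row matching several patterns still appears once per origin value, which is exactly the duplicate-control problem the question mentions; additional filtering is needed if each row should appear only once.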
How would you dynamically subscribe to a C# event so that given a Object instance and a String name containing the name of the event, you subscribe to that event and do something (write to the console for example) when that event has been fired? It would seem using Reflection this isn't possible and I would like to avoid having to use Reflection.Emit if possible, as this currently (to me) seems like the only way of doing it. **/EDIT:** I do not know the signature of the delegate needed for the event, this is the core of the problem **/EDIT 2:** Although delegate contravariance seems like a good plan, I can not make the assumption necessary to use this solution
You can compile expression trees to use void methods without any arguments as event handlers for events of any type. To accommodate other event handler types, you have to map the event handler's parameters to the events somehow. ``` using System; using System.Linq; using System.Linq.Expressions; using System.Reflection; class ExampleEventArgs : EventArgs { public int IntArg {get; set;} } class EventRaiser { public event EventHandler SomethingHappened; public event EventHandler<ExampleEventArgs> SomethingHappenedWithArg; public void RaiseEvents() { if (SomethingHappened!=null) SomethingHappened(this, EventArgs.Empty); if (SomethingHappenedWithArg!=null) { SomethingHappenedWithArg(this, new ExampleEventArgs{IntArg = 5}); } } } class Handler { public void HandleEvent() { Console.WriteLine("Handler.HandleEvent() called.");} public void HandleEventWithArg(int arg) { Console.WriteLine("Arg: {0}",arg); } } static class EventProxy { //void delegates with no parameters static public Delegate Create(EventInfo evt, Action d) { var handlerType = evt.EventHandlerType; var eventParams = handlerType.GetMethod("Invoke").GetParameters(); //lambda: (object x0, EventArgs x1) => d() var parameters = eventParams.Select(p=>Expression.Parameter(p.ParameterType,"x")); var body = Expression.Call(Expression.Constant(d),d.GetType().GetMethod("Invoke")); var lambda = Expression.Lambda(body,parameters.ToArray()); return Delegate.CreateDelegate(handlerType, lambda.Compile(), "Invoke", false); } //void delegate with one parameter static public Delegate Create<T>(EventInfo evt, Action<T> d) { var handlerType = evt.EventHandlerType; var eventParams = handlerType.GetMethod("Invoke").GetParameters(); //lambda: (object x0, ExampleEventArgs x1) => d(x1.IntArg) var parameters = eventParams.Select(p=>Expression.Parameter(p.ParameterType,"x")).ToArray(); var arg = getArgExpression(parameters[1], typeof(T)); var body = Expression.Call(Expression.Constant(d),d.GetType().GetMethod("Invoke"), arg); var lambda = Expression.Lambda(body,parameters); return Delegate.CreateDelegate(handlerType, lambda.Compile(), "Invoke", false); } //returns an expression that represents an argument to be passed to the delegate static Expression getArgExpression(ParameterExpression eventArgs, Type handlerArgType) { if (eventArgs.Type==typeof(ExampleEventArgs) && handlerArgType==typeof(int)) { //"x1.IntArg" var memberInfo = eventArgs.Type.GetMember("IntArg")[0]; return Expression.MakeMemberAccess(eventArgs,memberInfo); } throw new NotSupportedException(eventArgs+"->"+handlerArgType); } } static class Test { public static void Main() { var raiser = new EventRaiser(); var handler = new Handler(); //void delegate with no parameters string eventName = "SomethingHappened"; var eventinfo = raiser.GetType().GetEvent(eventName); eventinfo.AddEventHandler(raiser,EventProxy.Create(eventinfo,handler.HandleEvent)); //void delegate with one parameter string eventName2 = "SomethingHappenedWithArg"; var eventInfo2 = raiser.GetType().GetEvent(eventName2); eventInfo2.AddEventHandler(raiser,EventProxy.Create<int>(eventInfo2,handler.HandleEventWithArg)); //or even just: eventinfo.AddEventHandler(raiser,EventProxy.Create(eventinfo,()=>Console.WriteLine("!"))); eventInfo2.AddEventHandler(raiser,EventProxy.Create<int>(eventInfo2,i=>Console.WriteLine(i+"!"))); raiser.RaiseEvents(); } } ```
It's not a completely general solution, but if all your events are of the form void Foo(object o, T args) , where T derives from EventArgs, then you can use delegate contravariance to get away with it. Like this (where the signature of KeyDown is not the same as that of Click) : ``` public Form1() { Button b = new Button(); TextBox tb = new TextBox(); this.Controls.Add(b); this.Controls.Add(tb); WireUp(b, "Click", "Clickbutton"); WireUp(tb, "KeyDown", "Clickbutton"); } void WireUp(object o, string eventname, string methodname) { EventInfo ei = o.GetType().GetEvent(eventname); MethodInfo mi = this.GetType().GetMethod(methodname, BindingFlags.Public | BindingFlags.Instance | BindingFlags.NonPublic); Delegate del = Delegate.CreateDelegate(ei.EventHandlerType, this, mi); ei.AddEventHandler(o, del); } void Clickbutton(object sender, System.EventArgs e) { MessageBox.Show("hello!"); } ```
C# Dynamic Event Subscription
[ "", "c#", "events", "reflection", "delegates", "" ]