I’ve noticed that after making a change to a JavaScript file, clearing the cache, and reloading, one function in particular runs in about 90ms; the next time I load the page, it runs in 40ms; the next time I run it, it runs in 20ms... then never gets faster. It kind of looks like IE is compiling my JavaScript and caching that compiled version somewhere, similar to how SQL Server processes queries. Is that what is happening? Does anybody know where I can find a clarification of how browsers process JavaScript?
You may want to check out [Eric Lippert's comment](http://blogs.msdn.com/ptorr/archive/2003/09/14/56184.aspx#56186) to Peter Torr's blog post [Compiled, interpreted, whatever](http://blogs.msdn.com/ptorr/archive/2003/09/14/56184.aspx):

> JScript Classic acts like a compiled language in the sense that before any JScript Classic program runs, we fully syntax check the code, generate a full parse tree, and generate a bytecode. We then run the bytecode through a bytecode interpreter. In that sense, JScript is every bit as "compiled" as Java. The difference is that JScript does not allow you to persist or examine our proprietary bytecode. Also, the bytecode is much higher-level than the JVM bytecode -- the JScript Classic bytecode language is little more than a linearization of the parse tree, whereas the JVM bytecode is clearly intended to operate on a low-level stack machine.

The post and the comment are from September 2003, but judging from Ralph Sommerer's [On JavaScript performance in IE8](http://blogs.msdn.com/sompost/archive/2008/09/26/javascript-performance-in-ie8.aspx) post, they haven't changed much in the underlying JScript engine:

> Unless the JavaScript engine used in IE (and elsewhere) employs some sort of compilation to native code, it will always lag behind its competitors with respect to performance.

From what I gather in their [Channel9 appearance](http://channel9.msdn.com/posts/janakiram/Whats-New-for-JScript-in-IE8/) they have made improvements in bytecode execution, but their main targets were JavaScript native objects (Array, String, ...) and JavaScript-DOM interaction.
IE8 is not open source, so one can only make hypotheses; however, open-source browsers (such as Chromium, Firefox, and WebKit) do work roughly as you say, as do many other interpreters in non-browser and not necessarily JavaScript settings: they compile new sources when first seen or reloaded, and cache or save the compiled version for faster execution in the future. So it seems very reasonable that IE's JavaScript engine works in much the same way, as you surmised.
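If you want to observe this warm-up effect yourself, a rough sketch like the following can help (illustrative only; the exact timings depend entirely on the engine and machine). It times the same function over repeated runs; in engines that cache or optimize hot code, the per-run cost typically drops and then levels off, much like the 90ms, 40ms, 20ms pattern described in the question:

```javascript
// Time the same function over several runs; as the engine caches and
// optimizes the hot code path, the per-run cost usually falls and then
// flattens out.
function work() {
    var sum = 0;
    for (var i = 0; i < 100000; i++) {
        sum += i % 7;
    }
    return sum;
}

for (var run = 1; run <= 5; run++) {
    var start = Date.now();
    for (var j = 0; j < 100; j++) {
        work();
    }
    console.log('run ' + run + ': ' + (Date.now() - start) + ' ms');
}
```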
Do browsers compile and cache javascript?
[ "", "javascript", "internet-explorer", "internet-explorer-8", "" ]
I am trying to write a C++ template function that will throw a runtime exception on integer overflow in casts between different integral types, with different widths and possible signed/unsigned mismatch. For these purposes I'm not concerned with casting from floating-point types to integral types, nor other object-to-object conversions. I'd like to do this without having to write lots of special-case code. This is what I currently have:

```
template< typename T, typename R >
void safe_cast( const T& source, R& result )
{
    // get the maximum safe value of type R
    R rMax = (R) ~0;
    if ( rMax < 0 )  // R is a signed type
    {
        // assume that we're on an 8-bit two's-complement machine
        rMax = ~( 0x80 << ( ( sizeof( R ) - 1 ) * 8 ) );
    }

    if ( ( source & rMax ) != source )
    {
        throw new IntegerOverflowException( source );
    }

    result = static_cast<R>( source );
}
```

Is this correct and efficient?

EDIT: For various reasons the STL isn't available, so I can't use std::numeric\_limits, and anything from Boost is right out.
Have you tried SafeInt? It's a cross-platform template that will do integer overflow checks for a variety of integer types. It's available on GitHub:

* <https://github.com/dcleblanc/SafeInt>
Is boost an option? If so, try [boost::numeric\_cast<>](http://www.boost.org/doc/libs/1_39_0/libs/numeric/conversion/doc/html/boost_numericconversion/improved_numeric_cast__.html). It appears to provide the characteristics you're looking for.
C++ Template for safe integer casts
[ "", "c++", "templates", "casting", "integer-overflow", "" ]
The documentation for [CharsetDecoder](http://java.sun.com/javase/6/docs/api/java/nio/charset/CharsetDecoder.html) reads:

> There are two general types of decoding errors. If the input byte sequence is not legal for this charset then the input is considered *malformed*. If the input byte sequence is legal but cannot be mapped to a valid Unicode character then an *unmappable character* has been encountered.

I understand the concept of malformed characters, but what does an unmappable character mean? I thought that Unicode contains all possible characters. How then could a legal byte sequence not be mappable to a Unicode character?
While Unicode can represent a great number of characters for a great number of languages, it is certainly not exhaustive. In other words, there are character sets with characters for which there is no mapping into Unicode.
Just a guess... I expect that such a value would exist in one of the empty blocks that have not yet been filled for the implementation. The error probably anticipates values that will be legal characters in the future, but don't exist at present. The set of characters encompassed by Unicode is a work in progress that may never be finished (see [proposed characters](http://www.unicode.org/faq/prop_new_characters.html) for characters currently under consideration).
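The distinction is easiest to see in code. The sketch below is my own illustration (not from the linked docs) and uses the *encoding* direction, where unmappable characters are common: the euro sign is a perfectly legal Unicode character, yet ISO-8859-1 has no byte assigned to it, so a coder configured with `CodingErrorAction.REPORT` raises an error instead of silently substituting a replacement:

```java
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.UnmappableCharacterException;

public class UnmappableDemo {
    public static void main(String[] args) throws CharacterCodingException {
        // '\u20ac' (the euro sign) is valid Unicode, but ISO-8859-1 simply
        // has no byte for it, so REPORT makes the encoder refuse to map it.
        CharsetEncoder encoder = Charset.forName("ISO-8859-1").newEncoder()
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            encoder.encode(CharBuffer.wrap("\u20ac"));
            System.out.println("encoded fine");
        } catch (UnmappableCharacterException e) {
            System.out.println("unmappable, input length " + e.getInputLength());
        }
    }
}
```

The decoding direction the question asks about is symmetric: a byte sequence can be structurally legal for the charset yet designate a character for which the charset-to-Unicode mapping has no entry.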
What does UnmappableCharacterException mean?
[ "", "java", "unicode", "character-encoding", "" ]
Of late, I'm becoming more health oriented when constructing my programs. I have observed that most of my programs take 2 or 3 minutes to execute, and when I check on the task scheduler, I see that they consume 100% of the CPU. Can I limit this usage programmatically in code? This will certainly enable me to run multiple programs at a given time. Thanks, Nidhi
That's not your concern... It's the job of the operating system to distribute processor time between running processes. If you'd like to give other processes first crack at getting their stuff done, then simply reduce the priority of your own process by modifying the [`Process.PriorityClass`](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.priorityclass.aspx) value for it.

### See also:

[Windows Equivalent of ‘nice’](https://stackoverflow.com/questions/4208/windows-equivalent-of-nice)
This thread is over four years old, and it still annoys me that the accepted answer criticizes the question rather than answering it. There are many valid reasons you would want to limit the CPU time taken by your program; I can list a few off the top of my head.

It might seem like a waste not to use all free CPU cycles available, but this mentality is flawed. Unlike older CPUs, most modern CPUs do not run at a fixed clock speed - many have power saving modes where they **drop the clock speed and cpu voltage when load is low**. CPUs also consume more power when performing calculations than they do running NOOPs. This is especially relevant to laptops that require fans to cool the CPU when it is under high load. **Running a task at 100% for a short time can use far more energy than running a task at 25% for four times as long.**

Imagine you are writing a background task that is designed to index files periodically in the background. Should the indexing task use as much of the CPU as it can at a lower priority, or throttle itself to 25% and take as long as it needs? Well, if it were to consume 100% of the CPU on a laptop, the CPU would heat up, the fans would kick in, the battery would drain fairly quickly, and the user would get annoyed. If the indexing service throttled itself, the laptop may be able to run with completely passive cooling at a very low cpu clock speed and voltage.

Incidentally, the Windows Indexing Service now throttles itself in newer versions of Windows, which it never did in older versions. For an example of a service that still doesn't throttle itself and frequently annoys people, see Windows Installer Module.
An example of how to throttle part of your application internally in C# (note: the sleep time must be the *remainder* of the period, `duration * (100/p - 1)`, not `duration / p`):

```
public void ThrottledLoop(Action action, int cpuPercentageLimit)
{
    Stopwatch stopwatch = new Stopwatch();

    while (true)
    {
        stopwatch.Reset();
        stopwatch.Start();

        long actionStart = stopwatch.ElapsedTicks;
        action.Invoke();
        long actionEnd = stopwatch.ElapsedTicks;
        long actionDuration = actionEnd - actionStart;

        // If the action used p% of each period, idle for the remaining
        // (100 - p)%: wait = duration * (100/p - 1).
        long relativeWaitTime = (long)(
            (100.0 / cpuPercentageLimit - 1.0) * actionDuration);

        Thread.Sleep((int)((relativeWaitTime / (double)Stopwatch.Frequency) * 1000));
    }
}
```
How can I programmatically limit my program's CPU usage to below 70%?
[ "", "c#", "performance", "cpu-usage", "system.diagnostics", "" ]
I want to create a button option that takes the entire datalist and converts it to a PDF file. Has anyone done this in ASP.NET? Please can you show an example or point me in the right direction.
Just found a snippet of code a guy posted in a forum, and thought I might share it with others.

Re: EXPORT DATAGRID TO PDF in C#/Asp.Net

```
//*************************************************
//
// Author:
//    Ryan Van Aken (vanakenr@msn.com)
//    (C) 2009 Ryan Van Aken
//
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so, subject to
// the following conditions:
//
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
//
//*************************************************

//SQL Connection Settings -----------
public string strConn = ConfigurationManager.ConnectionStrings["BLAH-Here"].ConnectionString;
//-----------------------------------

protected void Page_Load(object sender, EventArgs e)
{
    // Grab the same data as the datagrid [report view] on the reporting page,
    // then set the "ContentType" for the desired output format (e.g.
    // "application/vnd.ms-excel" will generate an .xls file).

    //---Retrieve the report from SQL, drop it into a DataSet, then Bind() it to a DataGrid
    // NB: concatenating request values into SQL like this is vulnerable to
    // SQL injection; use parameterized commands in real code.
    SqlConnection conn = new SqlConnection(strConn);
    conn.Open();
    SqlDataAdapter cmd1 = new SqlDataAdapter(
        "EXEC [dbo].[spStatReport] @CompanyID=" + Session["CompanyID"] +
        ", @StatReportID=" + Request.QueryString["ReportID"].ToString() +
        ", @StartDate='" + Request.QueryString["StartDate"].Replace("-", "/").ToString() +
        "', @EndDate='" + Request.QueryString["EndDate"].Replace("-", "/").ToString() + "';",
        conn);
    cmd1.SelectCommand.CommandType = CommandType.Text;
    DataSet dsReports = new DataSet("tblReporting");
    cmd1.Fill(dsReports);
    conn.Close();

    DataGrid dtaFinal = new DataGrid();
    dtaFinal.DataSource = dsReports.Tables[0];
    dtaFinal.DataBind();
    dtaFinal.HeaderStyle.ForeColor = System.Drawing.Color.White;
    dtaFinal.HeaderStyle.BackColor = System.Drawing.Color.DarkGray;
    dtaFinal.ItemStyle.BackColor = System.Drawing.Color.White;
    dtaFinal.AlternatingItemStyle.BackColor = System.Drawing.Color.AliceBlue;

    //---Create the file---------
    Response.Buffer = true;
    Response.ClearContent();
    Response.ClearHeaders();

    //---For PDF uncomment the following lines----------
    //Response.ContentType = "application/pdf";
    //Response.AddHeader("content-disposition", "attachment;filename=FileName.pdf");

    //---For MS Excel uncomment the following lines----------
    //Response.ContentType = "application/vnd.ms-excel";
    //Response.AddHeader("content-disposition", "attachment;filename=FileName.xls");

    //---For MS Word uncomment the following lines----------
    //Response.ContentType = "application/vnd.word";
    //Response.AddHeader("content-disposition", "attachment;filename=FileName.doc");

    //---For CSV uncomment the following lines----------
    //Response.ContentType = "text/csv";
    //Response.AddHeader("content-disposition", "attachment;filename=FileName.csv");

    //---For TXT uncomment the following lines----------
    //Response.ContentType = "text/plain";
    //Response.AddHeader("content-disposition", "attachment;filename=FileName.txt");

    EnableViewState = false;
    StringWriter sw = new StringWriter();
    HtmlTextWriter hw = new HtmlTextWriter(sw);

    //---Renders the DataGrid and then dumps it into the HtmlTextWriter control
    dtaFinal.RenderControl(hw);

    //---Utilize the Response object to write the StringWriter to the page
    Response.Write(sw.ToString());
    Response.Flush();
    Response.Close();
    Response.End();
}
```
You can try developing your solution using [PDFSharp](http://pdfsharp.com/PDFsharp/), a very simple PDF library for .NET.
Convert/Export C# datalist control/(html page) as a pdf
[ "", "c#", "asp.net", "export", "pdf-generation", "datalist", "" ]
I like to use Python for almost everything, and I always had it clear in my mind that if for some reason I were to find a bottleneck in my Python code (due to Python's limitations), I could always use a C script integrated into my code. But then I started to read a [guide](http://www.suttoncourtenay.org.uk/duncan/accu/integratingpython.html) on how to integrate Python, in which the author says:

> There are several reasons why one might wish to extend Python in C or C++, such as:
>
> * Calling functions in an existing library.
> * Adding a new builtin type to Python
> * Optimising inner loops in code
> * Exposing a C++ class library to Python
> * Embedding Python inside a C/C++ application

Nothing about performance. So I ask again, is it reasonable to integrate Python with C for performance?
In my experience it is rarely necessary to optimize using C. I prefer to identify bottlenecks and improve algorithms in those areas completely in Python. Using hash tables, caching, and generally re-organizing your data structures to suit future needs has amazing potential for speeding up your program. As your program develops you'll get a better sense of what kind of material can be precalculated, so don't be afraid to go back and redo your storage and algorithms. Additionally, look for chances to kill "two birds with one stone", such as sorting objects as you render them instead of doing huge sorts. When everything is worked to the best of your knowledge, I'd consider using an optimizer like [Psyco](http://psyco.sourceforge.net/). I've experienced literally 10x performance improvements just by using Psyco and adding one line to my program. If all else fails, use C in the proper places and you'll get what you want.
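To make the "improve the data structures and caching before reaching for C" advice concrete, here is a small sketch of my own (using `functools.lru_cache`, a modern convenience; in the Psyco era you would hand-roll a dict cache) showing how caching alone can change the complexity class of a function:

```python
import timeit
from functools import lru_cache


def fib_naive(n):
    # Exponential time: the same subproblems are recomputed over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)


@lru_cache(maxsize=None)
def fib_cached(n):
    # Linear time: each subproblem is computed once, then served from cache.
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)


if __name__ == "__main__":
    print("naive :", timeit.timeit(lambda: fib_naive(22), number=100))
    print("cached:", timeit.timeit(lambda: fib_cached(22), number=100))
```

The same principle applies to real bottlenecks: a dict lookup replacing a repeated linear scan often buys far more than rewriting the scan in C would.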
> \* Optimising inner loops in code

Isn't that about performance?
Is it reasonable to integrate python with c for performance?
[ "", "python", "c", "performance", "" ]
Hey, I'm programming a feedback form on a website for a client; however, there are over 100 inputs (all uniquely named). I was wondering if there is a loop I could run to get all of the variables, or do you have to call them individually like this:

```
$variable = $_REQUEST['variable'];
```

EDIT: I'm going with $\_POST as recommended by everyone here - thanks for the input! I'll just manually go through and write a line for each $\_POST I have.
You can loop over all variables in the $\_REQUEST array:

```
foreach ($_REQUEST as $key => $value) {
    // do stuff
}
```

However, this will include all sent parameters, not only the form inputs. Also, you should not use $\_REQUEST but $\_POST or $\_GET.
If you don't want to have hundreds of uniquely named variables and want arrays of data to turn up server side, there is a handy form trick you may want to try. Client side:

```
<form>
    <input name="foo[a]" type="text" />
    <input name="foo[b]" type="text" />
    <input name="bar[]" type="text" />
    <input name="bar[]" type="text" />
</form>
```

Server side:

```
<?php
$_POST['foo']['a'];
$_POST['foo']['b'];
$_POST['bar'][0];
$_POST['bar'][1];
?>
```
Is there a PHP function to call all variables from a form?
[ "", "php", "forms", "text", "input", "variables", "" ]
I have a button on my form that should only be enabled when an item is selected in a treeview (or the listview in a tabitem). When an item is selected, its value is stored in a string member variable. Can I bind the `IsEnabled` property of the button to the content of the member var? That is, if the member var is not empty, enable the button. Similarly, when the content of the member var changes (set or cleared), the button's state should change.
Since you're probably looking to bind the IsEnabled property of the button based on a string, try making a converter for it. I.e...

```
<StackPanel>
    <StackPanel.Resources>
        <local:SomeStringConverter x:Key="mystringtoboolconverter" />
    </StackPanel.Resources>
    <Button IsEnabled="{Binding ElementName=mytree, Path=SelectedItem.Header,
                        Converter={StaticResource mystringtoboolconverter}}" />
</StackPanel>
```

and the converter:

```
[ValueConversion(typeof(string), typeof(bool))]
class SomeStringConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        string myheader = (string)value;
        if (myheader == "something")
        {
            return true;
        }
        else
        {
            return false;
        }
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return null;
    }
}
```

EDIT: Since the OP wanted to bind to a variable, something like this needs to be done:

```
public class SomeClass : INotifyPropertyChanged
{
    private string _somestring;

    public string SomeString
    {
        get { return _somestring; }
        set
        {
            _somestring = value;
            OnPropertyChanged("SomeString");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (this.PropertyChanged != null)
        {
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```

Then, change the above binding expression to:

```
{Binding Path=SomeString, Converter={StaticResource mystringtoboolconverter}}
```

Note, you MUST implement INotifyPropertyChanged for your UI to be updated.
Do you have a ViewModel holding your string property set as the DataContext of the View where you try to do this Binding? Then the following will work:

```
// Example ViewModel
public class MyClass : INotifyPropertyChanged
{
    private string text;

    public string Text
    {
        get { return text; }
        set
        {
            text = value;
            UpdateProperty("Text");
            UpdateProperty("HasContent");
        }
    }

    public bool HasContent
    {
        get { return !string.IsNullOrEmpty(text); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void UpdateProperty(string name)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(name));
    }
}
```

Then you should have done something like this in the code behind of the view:

```
this.DataContext = new MyClass();
```

And a Xaml example:

```
<StackPanel>
    <TextBox Text="{Binding Text, UpdateSourceTrigger=PropertyChanged}" />
    <Button IsEnabled="{Binding HasContent}">
        Click Me!
    </Button>
</StackPanel>
```
WPF Data Binding : enable/disable a control based on content of var?
[ "", "c#", "wpf", "data-binding", "" ]
I've written two COM classes in C++, contained in a single MFC DLL. They're being loaded as plugins by a 3rd party application. How can I get the file name, and version number, of the DLL from within those classes?
```
CString GetCallingFilename(bool includePath)
{
    CString filename;
    GetModuleFileName(AfxGetInstanceHandle(), filename.GetBuffer(MAX_PATH), MAX_PATH);
    filename.ReleaseBuffer();

    if( !includePath )
    {
        int filenameStart = filename.ReverseFind('\\') + 1;
        if( filenameStart > 0 )
        {
            filename = filename.Mid(filenameStart);
        }
    }

    return filename;
}

CString GetCallingVersionNumber(const CString& filename)
{
    DWORD fileHandle, fileVersionInfoSize;
    UINT bufferLength;
    LPTSTR lpData;
    VS_FIXEDFILEINFO *pFileInfo;

    fileVersionInfoSize = GetFileVersionInfoSize(filename, &fileHandle);
    if( !fileVersionInfoSize )
    {
        return "";
    }

    lpData = new TCHAR[fileVersionInfoSize];
    if( !lpData )
    {
        return "";
    }

    if( !GetFileVersionInfo(filename, fileHandle, fileVersionInfoSize, lpData) )
    {
        delete [] lpData;
        return "";
    }

    if( VerQueryValue(lpData, "\\", (LPVOID*)&pFileInfo, (PUINT)&bufferLength) )
    {
        WORD majorVersion = HIWORD(pFileInfo->dwFileVersionMS);
        WORD minorVersion = LOWORD(pFileInfo->dwFileVersionMS);
        WORD buildNumber = HIWORD(pFileInfo->dwFileVersionLS);
        WORD revisionNumber = LOWORD(pFileInfo->dwFileVersionLS);

        CString fileVersion;
        fileVersion.Format("%d.%d.%d.%d", majorVersion, minorVersion, buildNumber, revisionNumber);

        delete [] lpData;
        return fileVersion;
    }

    delete [] lpData;
    return "";
}
```
```
TCHAR fileName[MAX_PATH + 1];
GetModuleFileName(hInstance, fileName, MAX_PATH);
```

Where `hInstance` is the one you get in the `DllMain` function. Don't use `GetModuleHandle(0)`, because that returns the `HINSTANCE` of the host application.
Retrieving DLL name, not calling application name
[ "", "c++", "com", "" ]
[SymPy](http://en.wikipedia.org/wiki/SymPy) is a great tool for doing units conversions in Python:

```
>>> from sympy.physics import units
>>> 12. * units.inch / units.m
0.304800000000000
```

You can easily roll your own:

```
>>> units.BTU = 1055.05585 * units.J
>>> units.BTU
1055.05585*m**2*kg/s**2
```

However, I cannot implement this into my application unless I can convert degrees C (absolute) to K to degrees F to degrees R, or any combo thereof. I thought maybe something like this would work:

```
units.degC = <<somefunc of units.K>>
```

But clearly that is the wrong path to go down. Any suggestions for cleanly implementing "offset"-type units conversions in SymPy?

Note: I'm open to trying other units conversion modules, but don't know of any besides [Unum](http://home.scarlet.be/be052320/Unum.html), and found it to be cumbersome.

Edit: OK, it is now clear that what I want to do is first determine whether the two quantities to be compared are in the same coordinate system (like time units referenced to different epochs or time zones, or dB versus straight amplitude), make the appropriate transformation, then make the conversion. Are there any general coordinate system management tools? That would be great.

I would make the assumption that °F and °C always refer to Δ°F and Δ°C within an expression but refer to absolute when standing alone. I was just wondering if there was a way to make `units.degF` a function and slap a `property()` decorator on it to deal with those two conditions. But for now, I'll set `units.C == units.K` and try to make it very clear in the documentation to use functions `convertCtoK(...)` and `convertFtoR(...)` when dealing with absolute units. (Just kidding. No I won't.)
I personally like [Quantities](http://packages.python.org/quantities/) thanks to its [NumPy](http://en.wikipedia.org/wiki/NumPy) integration, however it only does relative temperatures, not absolute.
The Unum documentation has a pretty good writeup on why this is hard: > Unum is unable to handle reliably conversions between °Celsius and Kelvin. The issue is referred as the 'false origin problem' : the 0°Celsius is defined as 273.15 K. This is really a special and annoying case, since in general the value 0 is unaffected by unit conversion, e.g. 0 [m] = 0 [miles] = ... . Here, the conversion Kelvin/°Celsius is characterized by a factor 1 and an offset of 273.15 K. The offset is not feasible in the current version of Unum. > > Moreover it will presumably never be integrated in a future version because there is also a conceptual problem : the offset should be applied if the quantity represents an absolute temperature, but it shouldn't if the quantity represents a difference of temperatures. For instance, a raise of temperature of 1° Celsius is equivalent to a raise of 1 K. It is impossible to guess what is in the user mind, whether it's an absolute or a relative temperature. The question of absolute vs relative quantities is unimportant for other units since the answer does not impact the conversion rule. Unum is unable to make the distinction between the two cases. It's pretty easy to conceptually see the problems with trying to represent absolute temperature conversion symbolically. With any normal relative unit, `(x unit) * 2 == (x * 2) unit`—unit math is commutative. With absolute temperatures, that breaks down—it's difficult to do anything more complex than straight temperature conversions with no other unit dimensions. You're probably best off keeping all calculations in Kelvin, and converting to and from other temperature units only at the entry and exit points of your code.
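A practical way to follow that advice is to confine the offsets to small boundary helpers. This sketch is my own illustration (the function names are not from SymPy or Unum): all internal math stays in kelvin, and the offset-carrying units appear only at the entry and exit points.

```python
# Offset conversions live only at the boundaries; everything else is kelvin.
ZERO_C_IN_K = 273.15


def c_to_k(deg_c):
    """Absolute Celsius -> kelvin (pure offset)."""
    return deg_c + ZERO_C_IN_K


def f_to_k(deg_f):
    """Absolute Fahrenheit -> kelvin (scale plus offset)."""
    return (deg_f + 459.67) * 5.0 / 9.0


def k_to_c(kelvin):
    return kelvin - ZERO_C_IN_K


def k_to_f(kelvin):
    return kelvin * 9.0 / 5.0 - 459.67


# Absolute temperatures are converted through kelvin...
print(k_to_f(c_to_k(100.0)))    # boiling point, ~212.0

# ...while temperature *differences* carry no offset at all: 1 degC == 1 K
# exactly, so a delta converts with the scale factor alone.
delta_c = 10.0
delta_f = delta_c * 9.0 / 5.0   # 18.0
```

The absolute-vs-difference split here is exactly the ambiguity the Unum authors describe: the code, not the unit object, decides which interpretation applies.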
Units conversion in Python
[ "", "python", "math", "symbolic-math", "sympy", "" ]
In the following sample, the writer of the derived class will be expected to call base.Add(). If it happens first, the base can do one sort of code; if it happens last, the base can do another kind of logic (see sample). It doesn't seem possible to have it both ways. An easy fix would be to stop calling the base method at all, because the base will never know if it's being called first, last, in the middle, or twice! What is the object-oriented way to deal with this? Should I just plain stop putting code into base methods because I will never know the pre- and post-conditions?

EDIT: The goal is to have a business object class that does CRUD operations. The repetitious code would be moved to the base class. For example, checking before adding a record that the business object's id is 0, and checking after saving that the business object's id is >0.

```
namespace StackOverFlowSample
{
    class BusinessObjectBase
    {
        private bool _isNew;
        private int _id;

        public virtual void Add(string newAccount)
        {
            // Code that happens when subclasses run this method with the
            // same signature

            // makes sense if base is called 1st
            if (_isNew && _id > 0)
                throw new InvalidOperationException("Invalid precondition state");

            // makes sense if base is called 2nd
            if (!_isNew && _id == 0)
                throw new InvalidOperationException("Invalid post condition state");
        }
    }

    class BusinessObject : BusinessObjectBase
    {
        public override void Add(string newAccount)
        {
            // doesn't make sense, because base will need to be called again.
            base.Add(newAccount); // pre validation, logging

            // Save newAccount to database

            // doesn't make sense, because base has already been called
            base.Add(newAccount); // post validation, logging
        }
    }
}
```
If you want a safe way to introduce pre- and post condition checks, you can make Add non-virtual, and instead have another method (AddInternal or something like that) that the derived classes can (or must?) override:

```
namespace StackOverFlowSample
{
    abstract class BusinessObjectBase
    {
        private bool _isNew;
        private int _id;

        protected abstract void AddInternal(string newAccount);

        public void Add(string newAccount)
        {
            if (_isNew && _id > 0)
                throw new InvalidOperationException("Invalid precondition state");

            AddInternal(newAccount);

            if (!_isNew && _id == 0)
                throw new InvalidOperationException("Invalid post condition state");
        }
    }

    class BusinessObject : BusinessObjectBase
    {
        protected override void AddInternal(string newAccount)
        {
            // Save newAccount to database
        }
    }
}
```
It's hard to follow the exact example in terms of what you want, but one solution is often to use the template pattern instead - don't ask child classes to call the base class, but provide a separate abstract method which can't do anything useful *unless* it calls a (non-virtual) method in the base class - or which just returns something which the template method can use to call the real method instead. I find inheritance gets easier to understand if everything is either abstract or sealed - i.e. you've either *got* to override it and you can't call the base method, or you can't override it to start with. There are exceptions to this, of course, but it's a good starting point. It's a sort of functional way of looking at things as well - you may find you could *actually* do without inheritance, and just take appropriate delegates to provide specialization instead...
When to call base.method() and what code should go in a base.method()?
[ "", "c#", "oop", "" ]
I would like to display "password" as text in the password area, and when focused the box should become empty and allow normal password (starred out) input from the user. Currently I use the following method. Initially I display a text field showing password, when focused it is removed and replaced with a password field. Here is the jQuery:

```
$(document).ready(function() {
    $("#password").focus(function() {
        $("#pwd").html('<input type="password" id="password" name="password" value="" />');
        $("#password").focus();
    });
});
```

This works on Firefox and Chrome but fails in IE 8 (and presumably other IE's) as the focus call fails for some reason (maybe the DOM isn't ready?). A demo is available on [jsbin](http://jsbin.com/apuza). Any ideas?
You could try the [watermarker](http://plugins.jquery.com/project/jquery-watermark) plugin. I don't know if it works in IE8.
After realizing my original solution wouldn't work, I spent a while trying to come up with a working solution just out of curiosity. I'd still recommend using the jQuery plugin [eKek0](https://stackoverflow.com/questions/1010046/how-can-you-swap-an-input-text-field-to-a-password-area-on-focus-in-jquery/1010241#1010241) recommended, since it degrades better, though.

The following example is fairly flexible. Rather than hard-coding the replacement `<input />` tags as strings, the replacement tags are derived from the original tags. This helps if the `<input />` tag attributes are altered, since you won't lose any styling or other attributes given to the tags. The following solution has been tested in Chrome, Firefox, IE8, and IE8 in IE7 compatibility mode.

```
$(document).ready(function() {
    var focusEvent = function() {
        if (this.type == 'text') {
            var html = $(this).clone().wrap('<span></span>').parent().html();
            html = html.replace(/type=("|')?text("|')?/, 'type="password"');
            var $newPwdBx = $(html);
            $(this).replaceWith($newPwdBx);
            $newPwdBx.removeClass('readOnly');
            $newPwdBx.val('');
            $newPwdBx.focus();
            $newPwdBx.blur(blurEvent);
        }
    };

    var blurEvent = function() {
        if (this.value == '') {
            var html = $(this).clone().wrap('<span></span>').parent().html();
            html = html.replace(/type=("|')?password("|')?/, 'type="text"');
            var $newTxtBx = $(html);
            $(this).replaceWith($newTxtBx);
            $newTxtBx.addClass('readOnly');
            $newTxtBx.val('password');
            $newTxtBx.focus(focusEvent);
        }
    };

    $("#password").focus(focusEvent);
});
```

This code replaces the `<input />` with the appropriate type on blur and focus. It also adds/removes a class for styling the "password" placeholder text so it is more usable, since the placeholder "password" text is not the default black color, which could potentially be misleading to a user. You can see it working here: <http://jsbin.com/azugu>
How can you swap an input text field to a password area on focus in JQuery?
[ "", "javascript", "jquery", "html", "" ]
I want to write a library which will be dynamically linked from other programs running on modern operating systems like Windows, Linux and OS/X (i.e. it will be deployed as a `.dll` or `.so` module). What is the most appropriate language in that case? Should I stick with plain C? Or is C++ also ok?
You can use either C or C++ for the implementation, but I would recommend to define the interface in pure C. It will be much easier to integrate.
The difficulty with creating a C++ library distributed in binary form is that your customers - the users of the library - are typically constrained to use the same C++ compiler as you created the library with. This can be problematic if you want to keep up to date and they don't, or if they want to keep up to date and you don't. If you deal in source, this is less of an issue, as long as your C++ is portable enough to allow it to be used by all the compilers your customers use. If the code may be used from C, I'd probably code to a C interface. Alternatively, provide two interfaces - the native C++ interface and a C interface. But that's more work than just a C interface. On the other hand, there may be benefits from a C++ interface (perhaps using STL iterators, etc) and that could sway your decision.
Choice of language for portable library
[ "c++", "c", "portability", "multiplatform" ]
Is there a way to set `error_reporting(E_ALL);` for a specific directory rather than including it in each file? I'd like to turn on error reporting for my beta.mysite.com.
[You can use a .htaccess file in Apache](https://www.php.net/configuration.changes). Just add this line: ``` php_value error_reporting 6143 ``` Or for old PHP versions: ``` php_value error_reporting 2047 ``` Note that you can't use the constants (like E\_ALL). From [the manual](http://php.net/manual/en/errorfunc.configuration.php#ini.error-reporting): > **Note: PHP Constants outside of PHP** > > Using PHP Constants outside of PHP, > like in httpd.conf, will have no > useful meaning so in such cases the > integer values are required. And since > error levels will be added over time, > the maximum value (for E\_ALL) will > likely change. So in place of E\_ALL > consider using a larger value to cover > all bit fields from now and well into > the future, a numeric value like > 2147483647.
[Use an .htaccess file to set the option.](https://www.php.net/configuration.changes) ``` <IfModule mod_php5.c> display_errors 1 </IfModule> ``` Now, naturally this only works if you are using apache as a module. If you want to add the configuration option when using CGI, your options going to be limited. A couple of ideas: * Including something in every script. * More exotic: Use a rewrite rule which pointed to a known script in the directory which did the usual `set_ini` style argument, and then included the intended script by checking the path. I'm bad with rewrite rules, but I know this could be done.
error reporting on specific folders
[ "php", "error-reporting" ]
I need to make a timestamp to put into MySQL. The user is submitting a number (of weeks) I need to add that many weeks to today's date. What I am trying to do is calculate an end date of an ad that the user is submitting. Any suggestions on how to do this? Thanks!
You can use [`strtotime`](http://www.php.net/strtotime) to add time to a date - it takes a second argument that is the current time if none is passed. You can then pass that created time to the [`date`](http://www.php.net/date) function to create your timestamp: ``` $timestamp = date('Y-m-d H:i:s', strtotime('+10 weeks')); ```
I think DATE\_ADD(CURDATE(), INTERVAL 2 WEEK) would add 2 weeks to the current date, for instance
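As a cross-check of the "add N weeks to today" arithmetic, here is the same calculation sketched in Python (an illustration only; the question itself is PHP/MySQL):

```python
from datetime import datetime, timedelta

weeks = 10  # the number of weeks submitted by the user
end_date = datetime.now() + timedelta(weeks=weeks)

# Same 'Y-m-d H:i:s' layout that a MySQL DATETIME column accepts.
timestamp = end_date.strftime('%Y-%m-%d %H:%M:%S')
print(timestamp)
```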
Calculating a date
[ "php", "mysql", "timestamp" ]
I was wondering what the best way is to use transactions with the Entity Framework. Say I have three repositories: ``` Repo1(ObjectContext context) Repo2(ObjectContext context) Repo3(ObjectContext context) ``` and a service object that takes the three repositories: ``` Service(Repo1 repo1, Repo2 repo2, Repo3 repo3) Service.CreateNewObject <- calls repo1, repo2, repo3 to do stuff. ``` So when I create the service I create three repositories first and pass them down; each repository takes an object context, so my code looks something like this: ``` MyObjectContext context = new MyObjectContext(); Repo1 repo = new Repo1(context); // etc ``` Now I have a controller class that is responsible for calling different services and components of my application, showing the right forms etc. Now what I want to be able to do is wrap everything that happens in one of the controller methods in a transaction so that if something goes wrong I can roll back. The controller takes a few different Service objects, but doesn't know anything about the object context. My questions are: 1. Should the context be passed in to the service layer also? 2. How do I implement a transaction in the controller so that anything that happens in the service layers isn't committed until everything has passed? Sorry if it's a bit hard to understand..
Why doesn't your controller know about the ObjectContext? This is where I would put it. Check out - <http://msdn.microsoft.com/en-us/magazine/dd882510.aspx> - here the Command is what will commit/rollback the UnitOfWork(ObjectContext). If you don't want to have your Controller know *exactly* about the EF (good design) then you want to abstract your ObjectContext into an interface similar to the approach in the above link.
How about using a custom TransactionScope, one that commits when all of your services have committed? ``` public class TransactionScope : Scope<IDbTransaction> { public TransactionScope() { InitialiseScope(ConnectionScope.CurrentKey); } protected override IDbTransaction CreateItem() { return ConnectionScope.Current.BeginTransaction(); } public void Commit() { if (CurrentScopeItem.UserCount == 1) { TransactionScope.Current.Commit(); } } } ``` So the transaction is only committed when the UserCount is 1, meaning the last service has committed. The scope classes are (shame we can't do attachements...): ``` public abstract class Scope<T> : IDisposable where T : IDisposable { private bool disposed = false; [ThreadStatic] private static Stack<ScopeItem<T>> stack = null; public static T Current { get { return stack.Peek().Item; } } internal static string CurrentKey { get { return stack.Peek().Key; } } protected internal ScopeItem<T> CurrentScopeItem { get { return stack.Peek(); } } protected void InitialiseScope(string key) { if (stack == null) { stack = new Stack<ScopeItem<T>>(); } // Only create a new item on the stack if this // is different to the current ambient item if (stack.Count == 0 || stack.Peek().Key != key) { stack.Push(new ScopeItem<T>(1, CreateItem(), key)); } else { stack.Peek().UserCount++; } } protected abstract T CreateItem(); public void Dispose() { Dispose(true); } protected virtual void Dispose(bool disposing) { if (!disposed) { if (disposing) { // If there are no users for the current item // in the stack, pop it if (stack.Peek().UserCount == 1) { stack.Pop().Item.Dispose(); } else { stack.Peek().UserCount--; } } // There are no unmanaged resources to release, but // if we add them, they need to be released here. 
} disposed = true; } } public class ScopeItem<T> where T : IDisposable { private int userCount; private T item; private string key; public ScopeItem(int userCount, T item, string key) { this.userCount = userCount; this.item = item; this.key = key; } public int UserCount { get { return this.userCount; } set { this.userCount = value; } } public T Item { get { return this.item; } set { this.item = value; } } public string Key { get { return this.key; } set { this.key = value; } } } public class ConnectionScope : Scope<IDbConnection> { private readonly string connectionString = ""; private readonly string providerName = ""; public ConnectionScope(string connectionString, string providerName) { this.connectionString = connectionString; this.providerName = providerName; InitialiseScope(string.Format("{0}:{1}", connectionString, providerName)); } public ConnectionScope(IConnectionDetailsProvider connectionDetails) : this(connectionDetails.ConnectionString, connectionDetails.ConnectionProvider) { } protected override IDbConnection CreateItem() { IDbConnection connection = DbProviderFactories.GetFactory(providerName).CreateConnection(); connection.ConnectionString = connectionString; connection.Open(); return connection; } } ```
Object Context, Repositories and Transactions
[ "c#", "entity-framework", "transactions" ]
I have a standard ASP.NET 2.0 web page with a Delete button on it. What I need and can't figure out how to pull off is when the user presses the delete button a confirm dialog pops up asking the user "are you sure?". If the user says yes then I want to disable the delete button and perform a postback that will run the server side code deleteButton\_Click. Here is the tag: ``` <asp:Button ID="deleteButton" Text="Delete" OnClick="deleteButton_Click" runat="server" /> ``` Here is the JavaScript (using jQuery) to handle the client side click: ``` var deleteButton = $(input.eq(0)); deleteButton.click( function() { var a = confirm( "Are you sure you want to delete this item?" ); if ( a == false ) { return false; } else { deleteButton.attr( "disabled", "disabled" ); __doPostBack( deleteButton.attr( "id" ), "" ); } } ); ``` The confirm dialog works as expected and the disabling works ok too. The form does postback fine but it does not run the deleteButton\_Click event handler. The \_\_doPostBack javascript code does exist on the page. I could add UseSubmitBehavior="false" to the deleteButton tag but then it would ignore the confirm dialog answer. So maybe I'm asking too much of ASP.NET here. Any ideas how to make this work? Thanks, Craig
Thanks for the feedback but it was a fairly simple solution. The javascript line should be: ``` __doPostBack( deleteButton.attr( "name" ), "" ); ``` instead of: ``` __doPostBack( deleteButton.attr( "id" ), "" ); ``` I didn't realize that the "name" attribute is the one that the \_\_doPostBack method was looking for.
``` btnSubmit.Attributes.Add("onclick", "if(confirm(\"Are you sure you want to delete this item?\" ){this.disabled=true;" + ClientScript.GetPostBackEventReference(btnSubmit, "").ToString() + "}else{return false;}"); ```
How can I confirm and then disable a button in asp.net/javascript
[ "asp.net", "javascript", "button", "submit", "confirm" ]
I am using a timetabling system with a SQL Server 2000 backend. I need to list events with tutors and rooms next to them; there can be more than one of each, so I need to turn the multiple rows of rooms and tutors into '+'-separated lists. I have used the code below in the past: ``` DECLARE @Tutors as varchar(8000) SELECT @Tutors = isnull(@Tutors + ' + ', '') + name FROM ( SELECT CT_EVENT_STAFF.event_id, CT_EVENT_STAFF.weeks, CT_STAFF.unique_name, CT_STAFF.name FROM celcat200809.dbo.CT_EVENT_STAFF AS CT_EVENT_STAFF LEFT OUTER JOIN celcat200809.dbo.CT_STAFF AS CT_STAFF ON CT_EVENT_STAFF.staff_id = CT_STAFF.staff_id WHERE event_id = @eventID ) As data_set print @Tutors ``` The event\_id identifies the unique event, so this only works when I know the exact ID; I can't run it for every ID. Is there a way to do this for each individual event\_id without cursors? I have seen a possible solution using a UDF. Unfortunately my second problem is that the timetabling system (CELCAT) creates a new database for each year (I know, don't ask), so I am going to have to make the SQL dynamic, i.e. next year's database would be celcat200910, and I believe dynamic SQL cannot be run in UDFs. Please remember this is SQL Server 2000
You can still use a view as goodgai suggested, but instead of having it redirect to one table, have it union select the tables together. Could break out the year/month into columns if that's not already done and you need it. ``` CREATE VIEW UNIFIED_CT_STAFF AS SELECT year = 2008, month = 9, unique_name, name FROM celcat200809.dbo.CT_STAFF UNION SELECT year = 2008, month = 10, unique_name, name FROM celcat200810.dbo.CT_STAFF ```
You could create a UDF to calculate the string, and then use it like: ``` select event_id, dbo.GetTutorText(event_id) from EventsTable ``` The UDF could be defined like: ``` if object_id('dbo.GetTutorText') is not null drop function dbo.GetTutorText go create function dbo.GetTutorText( @eventID int) returns varchar(8000) as begin DECLARE @Tutors as varchar(8000) SELECT @Tutors = isnull(@Tutors + ' + ', '') + name FROM ( SELECT CT_EVENT_STAFF.event_id, CT_EVENT_STAFF.weeks, CT_STAFF.unique_name, CT_STAFF.name FROM celcat200809.dbo.CT_EVENT_STAFF AS CT_EVENT_STAFF LEFT OUTER JOIN celcat200809.dbo.CT_STAFF AS CT_STAFF ON CT_EVENT_STAFF.staff_id = CT_STAFF.staff_id WHERE event_id = @eventID ) As data_set return @Tutors end go ```
Concatenate column values from rows
[ "sql", "sql-server", "sql-server-2000" ]
I just used the XmlWriter to create some XML to send back in an HTTP response. How would you create a JSON string? I assume you would just use a StringBuilder to build the JSON string and then format your response as JSON?
You could use the [JavaScriptSerializer class](http://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer.aspx), check [this article](http://weblogs.asp.net/scottgu/archive/2007/10/01/tip-trick-building-a-tojson-extension-method-using-net-3-5.aspx) to build an useful extension method. Code from article: ``` namespace ExtensionMethods { public static class JSONHelper { public static string ToJSON(this object obj) { JavaScriptSerializer serializer = new JavaScriptSerializer(); return serializer.Serialize(obj); } public static string ToJSON(this object obj, int recursionDepth) { JavaScriptSerializer serializer = new JavaScriptSerializer(); serializer.RecursionLimit = recursionDepth; return serializer.Serialize(obj); } } } ``` Usage: ``` using ExtensionMethods; ... List<Person> people = new List<Person>{ new Person{ID = 1, FirstName = "Scott", LastName = "Gurthie"}, new Person{ID = 2, FirstName = "Bill", LastName = "Gates"} }; string jsonString = people.ToJSON(); ```
Using [Newtonsoft.Json](http://james.newtonking.com/pages/json-net.aspx) makes it really easier: ``` Product product = new Product(); product.Name = "Apple"; product.Expiry = new DateTime(2008, 12, 28); product.Price = 3.99M; product.Sizes = new string[] { "Small", "Medium", "Large" }; string json = JsonConvert.SerializeObject(product); ``` Documentation: **[Serializing and Deserializing JSON](http://www.newtonsoft.com/json/help/html/SerializingJSON.htm)**
How to create JSON string in C#
[ "c#", "asp.net", "json" ]
I wonder if it's possible to keep methods for a Python class in a different file from the class definition, something like this: `main_module.py:` ``` class Instrument(Object): # Some import statement? def __init__(self): self.flag = True def direct_method(self,arg1): self.external_method(arg1, arg2) ``` `to_import_from.py:` ``` def external_method(self, arg1, arg2): if self.flag: #doing something #...many more methods ``` In my case, `to_import_from.py` is machine-generated, and contains many methods. I would rather not copy-paste these into main\_module.py or import them one by one, but have them all recognized as methods of the Instrument class, just as if they had been defined there: ``` >>> instr = Instrument() >>> instr.direct_method(arg1) >>> instr.external_method(arg1, arg2) ``` Thanks!
I don't think what you want is directly possible in Python. You could, however, try one of the following. 1. When generating `to_import_from.py`, add the non-generated stuff there too. This way, all methods are in the same class definition. 2. Have `to_import_from.py` contain a base class definition which the Instrument class inherits. In other words, in `to_import_from.py`: ``` class InstrumentBase(object): def external_method(self, arg1, arg2): if self.flag: ... ``` and then in `main_module.py`: ``` import to_import_from class Instrument(to_import_from.InstrumentBase): def __init__(self): ... ```
People seem to be overthinking this. Methods are just function valued local variables in class construction scope. So the following works fine: ``` class Instrument(Object): # load external methods from to_import_from import * def __init__(self): self.flag = True def direct_method(self,arg1): self.external_method(arg1, arg2) ```
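A third route, sketched below with hypothetical names: since methods are just functions stored on the class, the generated functions can also be attached to the class after it is defined, without touching the class body at all.

```python
# Stand-in for a function that would be generated in to_import_from.py.
def external_method(self, arg1, arg2):
    return (arg1, arg2) if self.flag else None

class Instrument(object):
    def __init__(self):
        self.flag = True

# Attach the generated function to the class after the fact;
# it then behaves exactly like a normally defined method.
Instrument.external_method = external_method

instr = Instrument()
print(instr.external_method(1, 2))  # -> (1, 2)
```

With many generated functions, the same attachment can be done in a loop over the imported module's namespace.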
Importing methods for a Python class
[ "python", "import", "methods" ]
I have the following code for population a ListView from a background thread (DoWork calls the PopulateThread method): ``` delegate void PopulateThreadCallBack(DoWorkEventArgs e); private void PopulateThread(DoWorkEventArgs e) { if (this.InvokeRequired) { PopulateThreadCallBack d = new PopulateThreadCallBack(this.PopulateThread); this.Invoke(d, new object[] { e }); } else { // Ensure there is some data if (this.DataCollection == null) { return; } this.Hide(); // Filter the collection based on the filters List<ServiceCallEntity> resultCollection = this.ApplyFilter(); // Get the current Ids List<Guid> previousIdList = this.GetUniqueIdList(listView); List<Guid> usedIdList = new List<Guid>(); foreach (ServiceCallEntity record in resultCollection) { if (e.Cancel) { this.Show(); return; } else { // Get the top level entities UserEntity userEntity = IvdSession.Instance.Collection.GetEngineerEntity(record.UserId); AssetEntity assetEntity = IvdSession.Instance.Collection.GetAssetEntity(record.AssetId); SiteEntity siteEntity = IvdSession.Instance.Collection.GetSiteEntity(record.SiteId); FaultEntity faultEntity = IvdSession.Instance.Collection.GetFaultEntity(record.FaultId); if (siteEntity == null || userEntity == null || faultEntity == null) { continue; } else { // Get the linked entities RegionEntity regionEntity = IvdSession.Instance.Collection.GetRegionEntity(siteEntity.RegionId); StatusEntity statusEntity = IvdSession.Instance.Collection.GetStatusEntity(record.ServiceCallStatus.StatusId); ListViewItem item = new ListViewItem(siteEntity.SiteName); item.SubItems.Add(siteEntity.Address); item.Tag = record; item.SubItems.Add(regionEntity.Description); // Handle if an Asset is involved if (record.AssetId > 0) item.SubItems.Add(assetEntity.AssetDisplay); else item.SubItems.Add("N/A"); item.SubItems.Add(faultEntity.Description); item.SubItems.Add(userEntity.UserDisplay); item.SubItems.Add("TODO: Claimed By"); item.SubItems.Add(record.DateTimeStamp.ToString()); 
IvdColourHelper.SetListViewItemColour(item, false); this.PopulateItem(item, ref usedIdList); } } } // Clean up the grid this.CleanListView(previousIdList, usedIdList); // Only autosize when allowed and when there are some items in the ListView if (this.AllowAutoSize && listView.Items.Count > 0) { rsListView.AutoSizeColumns(listView); this.AllowAutoSize = false; } this.Show(); } } ``` Unfortunately, this causes the UI to freeze whilst in the foreach... is there any way to update/populate the ListView without it freezing the main UI?
A) You probably don't need to use this.Invoke and instead use this.BeginInvoke. Invoke blocks the current thread. B) You don't need to define your own delegates you can use MethodInvoker ``` if(this.InvokeRequired) { this.BeginInvoke(new MethodInvoker(() => PopulateThread(e))); return; } ``` It's much cleaner :)
You are using Control.Invoke to execute just about everything, meaning this code isn't multithreaded at all. The proper way (involving a BackgroundWorker) would be to use the ProgressChanged event to add elements. It is already synchronized. But since you're hiding the control (or is it the Form ?) during this process you might as well build a List and on completion add it to the ListView. That piece of code shouldn't take long. Or some sort of combination, adding small lists in an update event. And I wonder about the wisdom of Hide/Show, I expect this to just make the UI flicker. Leave them out or replace with SuspendLayout/ResumeLayout.
Using BackgroundWorker to update the UI without freezes...?
[ "c#", "multithreading", "user-interface", "delegates", "backgroundworker" ]
I have a `PRODUCTS` table, and each product can have multiple attributes so I have an `ATTRIBUTES` table, and another table called `ATTRIBPRODUCTS` which sits in the middle. The attributes are grouped into classes (type, brand, material, colour, etc), so people might want a product of a particular type, from a certain brand. ``` PRODUCTS product_id product_name ATTRIBUTES attribute_id attribute_name attribute_class ATTRIBPRODUCTS attribute_id product_id ``` When someone is looking for a product they can select one or many of the attributes. The problem I'm having is returning a single product that has multiple attributes. This should be really simple I know but SQL really isn't my thing and past a certain point I get a bit lost in the logic. The problem is I'm trying to check each attribute class separately so I want to end up with something like: ``` SELECT DISTINCT products.product_id FROM attribproducts INNER JOIN products ON attribproducts.product_id = products.product_id WHERE (attribproducts.attribute_id IN (9,10,11) AND attribproducts.attribute_id IN (60,61)) ``` I've used IN to separate the blocks of attributes of different classes, so I end up with the products which are of certain types, but also of certain brands. From the results I've had it seems to be that AND between the IN statements that's causing the problem. Can anyone help a little? I don't have the luxury of completely refactoring the database unfortunately, there is a lot more to it than this bit, so any suggestions how to work with what I have will be gratefully received.
Take a look at the answers to the question [SQL: Many-To-Many table AND query](https://stackoverflow.com/questions/1054299/sql-many-to-many-table-and-query). It's the exact same problem. Cletus gave there 2 possible solutions, none of which very trivial (but then again, there simply is no trivial solution).
``` SELECT DISTINCT p.product_id FROM products p INNER JOIN attribproducts ptype on p.product_id = ptype.product_id INNER JOIN attribproducts pbrand on p.product_id = pbrand.product_id WHERE ptype.attribute_id IN (9,10,11) AND pbrand.attribute_id IN (60,61) ```
What is the best way to implement this SQL query?
[ "sql", "sql-server" ]
I have a jquery script which I need to run only once everything else on the page, including some other javascripts (over which I have no control) have finished doing their thing. I though perhaps there was an alternative to $(document).ready but I haven't been able to find it.
It turns out that, because of a peculiar mixture of JavaScript frameworks, I needed to initiate the script using an event listener provided by one of the other frameworks.
You can have `$(document).ready()` multiple times in a page. The code gets run in the sequence in which it appears. You can use the `$(window).load()` event for your code since this happens after the page is fully loaded and all the code in the various `$(document).ready()` handlers have finished running. ``` $(window).load(function(){ //your code here }); ```
Delaying a jquery script until everything else has loaded
[ "javascript", "jquery", "dom" ]
Say I have defined a button with rounded corners. ``` <Style x:Key="RoundButton" TargetType="Button"> <!-- bla bla --> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Border CornerRadius="0,5,5,0" /> <!-- bla bla --> </ControlTemplate> </Setter.Value> </Setter> </Style> ``` Is it possible for the user of this button to specify the CornerRadius? Can I use a TemplateBinding? But where should I bind to? (to Tag?)
In order to use a `TemplateBinding`, there must be a property on the templated control (`Button`, in this case). `Button` does not have a `CornerRadius` or equivalent property, so your options are: * Hard-code the value in the template * Hijack another property (such as `Tag`) to store this information. This is quicker, but lacks type safety, is harder to maintain, and prevents other uses of that property. * Subclass `Button` and add the property you need, then provide a template for that subclass. This takes a little longer but yields a much nicer experience for consumers of your control.
In addition to Kent's suggestions, you could also create an attached property to define the CornerRadius on the button, and bind to that property in the template
Passing parameters to a template
[ "c#", ".net", "wpf", "xaml", "controltemplate" ]
> **Possible Duplicate:** > [python dict.add\_by\_value(dict\_2) ?](https://stackoverflow.com/questions/877295/python-dict-add-by-valuedict-2) My input is two dictionaries that have string keys and integer values. I want to add the two dictionaries so that the result has all the keys of the input dictionaries, and the values are the sum of the input dictionaries' values. For clarity, if a key appears in only one of the inputs, that key/value will appear in the result, whereas if the key appears in both dictionaries, the sum of values will appear in the result. For example, say my input is: ``` a = dict() a['cat'] = 1 a['fish'] = 10 a['aardvark'] = 1000 b = dict() b['cat'] = 2 b['dog'] = 200 b['aardvark'] = 2000 ``` I would like the result to be: ``` {'cat': 3, 'fish': 10, 'dog': 200, 'aardvark': 3000} ``` Knowing Python there must be a one-liner to get this done (it doesn't really have to be one line...). Any thoughts?
How about that: ``` dict( [ (n, a.get(n, 0)+b.get(n, 0)) for n in set(a)|set(b) ] ) ``` Or without creating an intermediate list (generator is enough): ``` dict( (n, a.get(n, 0)+b.get(n, 0)) for n in set(a)|set(b) ) ``` --- Post Scriptum: As a commentator addressed correctly, there is a way to implement that easier with the new (from Py2.7) collections.Counter class. As much I remember, this version was not available when I wrote the answer: ``` from collections import Counter dict(Counter(a)+Counter(b)) ```
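For what it's worth, the `Counter` version can be checked directly against the example data from the question:

```python
from collections import Counter

a = {'cat': 1, 'fish': 10, 'aardvark': 1000}
b = {'cat': 2, 'dog': 200, 'aardvark': 2000}

# Counter addition sums values per key; keys missing from one side count as 0.
merged = dict(Counter(a) + Counter(b))
print(merged)
```

One caveat of `Counter` addition: keys whose summed value is zero or negative are dropped. That doesn't matter here, since all the counts are positive.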
result in `a`: ``` for elem in b: a[elem] = a.get(elem, 0) + b[elem] ``` result in `c`: ``` c = dict(a) for elem in b: c[elem] = a.get(elem, 0) + b[elem] ```
"Adding" Dictionaries in Python?
[ "python", "dictionary" ]
I have a dictionary where each key has a list of variable length, eg: ``` d = { 'a': [1, 3, 2], 'b': [6], 'c': [0, 0] } ``` Is there a clean way to get a random dictionary key, weighted by the length of its value? `random.choice(d.keys())` will weight the keys equally, but in the case above I want `'a'` to be returned roughly half the time.
This would work: ``` random.choice([k for k in d for x in d[k]]) ```
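On Python 3.6 and later, `random.choices` accepts explicit weights, which avoids materialising the flattened list; a sketch:

```python
import random

d = {'a': [1, 3, 2], 'b': [6], 'c': [0, 0]}

# Weight each key by the length of its value list: a=3, b=1, c=2.
keys = list(d)
weights = [len(d[k]) for k in keys]
picked = random.choices(keys, weights=weights, k=1)[0]
print(picked)
```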
Do you always know the total number of values in the dictionary? If so, this might be easy to do with the following algorithm, which can be used whenever you want to make a probabilistic selection of some items from an ordered list: 1. Iterate over your list of keys. 2. Generate a uniformly distributed random value between 0 and 1 (aka "roll the dice"). 3. Assuming that this key has N\_VALS values associated with it and there are TOTAL\_VALS total values in the entire dictionary, accept this key with a probability N\_VALS / N\_REMAINING, where N\_REMAINING is the number of items left in the list. This algorithm has the advantage of not having to generate any new lists, which is important if your dictionary is large. Your program is only paying for the loop over K keys to calculate the total, another loop over the keys which will on average end halfway through, and whatever it costs to generate a random number between 0 and 1. Generating such a random number is a very common application in programming, so most languages have a fast implementation of such a function. In Python the [random number generator](http://docs.python.org/library/random.html) is a C implementation of the [Mersenne Twister algorithm](http://en.wikipedia.org/wiki/Mersenne_Twister), which should be very fast. Additionally, the documentation claims that this implementation is thread-safe. Here's the code. 
I'm sure that you can clean it up if you'd like to use more Pythonic features: ``` #!/usr/bin/python import random def select_weighted( d ): # calculate total total = 0 for key in d: total = total + len(d[key]) accept_prob = float( 1.0 / total ) # pick a weighted value from d n_seen = 0 for key in d: current_key = key for val in d[key]: dice_roll = random.random() accept_prob = float( 1.0 / ( total - n_seen ) ) n_seen = n_seen + 1 if dice_roll <= accept_prob: return current_key dict = { 'a': [1, 3, 2], 'b': [6], 'c': [0, 0] } counts = {} for key in dict: counts[key] = 0 for s in range(1,100000): k = select_weighted(dict) counts[k] = counts[k] + 1 print counts ``` After running this 100 times, I get select keys this number of times: ``` {'a': 49801, 'c': 33548, 'b': 16650} ``` Those are fairly close to your expected values of: ``` {'a': 0.5, 'c': 0.33333333333333331, 'b': 0.16666666666666666} ``` Edit: Miles pointed out a serious error in my original implementation, which has since been corrected. Sorry about that!
Random Python dictionary key, weighted by values
[ "python", "random", "dictionary" ]
I want to leverage the Scala's Actor Framework while I develop the user interface in the familiar Swing way. Is it possible to have a mixed Java - Scala project in Eclipse, NetBeans or any other IDE?
IntelliJ does a good job of supporting mixed Java / Scala projects. At the moment the Scala support in IntelliJ seems better to me than the Scala IDE for Eclipse. (I have been a long-time Eclipse user, recently trying IntelliJ on the recommendation of some other Scala coders).
The "official" Scala plugin for Eclipse allows you to add the "scala nature" to any project, alongside the java nature. This allows you to mix and match however you'd wish. I was playing with scala for the first time last night, using the Eclipse IDE, and it works very well. --- Also, I just ran across this article: <http://www.codecommit.com/blog/scala/joint-compilation-of-scala-and-java-sources> Seems that scalac knows enough about java to be able to resolve dependencies on java source, without actually compiling them. Very smart.
Simultaneous Java and Scala development within the same project
[ "java", "eclipse", "netbeans", "scala" ]
So - working with C# and Windows Forms, I have a bunch of "Task" classes. Every class has a `List<Task>`, so that they can have any number of children (and the children can obviously have more children...) These tasks are stored in a "Project" class (in a `List<Task>`, obviously), which is then serialized to XML. I have a function to recurse through the tasks and put them all in a TreeView. It works just like it should. But the thing is - if I select a task from the TreeView, how will I find it from the `Project`? I thought of some kind of foreign key (which would be the TreeNode's key?), but since I'm using XML for serialization, that's not possible. So, what should I do?
I would put a reference to the "Task" object in the Tag member of each TreeNode. It then becomes very easy to cast the Tag to a Task and use the Task when handling any Tree event.
You have to give each Task a unique key (store them in a Dictionary), then set that int value to the TreeNode's tag.
Associating Windows Forms TreeView items to actual data
[ "c#", "xml", "winforms", "treeview", "xml-serialization" ]
I want the 'Alt' key to be pressed by code. It is like firing a key-press event for the 'Alt' key from code, not hitting it manually. The need is this: I have set shortcut keys for a menu, but they (the single underline on the key letter) are not visible to the user unless he presses 'Alt'. So I need to make 'Alt' be pressed by default. Is there a way to 'press' or 'fire up' keyboard keys using C# code?
Check out the [System.Windows.Forms.SendKeys](http://msdn.microsoft.com/en-us/library/system.windows.forms.sendkeys.aspx) class. You can use the static `Send` method to send keystrokes to the active window. If you're trying to send keystrokes to another window, you'll need to use the Windows API to activate the other window first.
If you have any control over the operating system on which the program is being deployed, apparently you can force the underlined shortcut letter to always be displayed by going to Control Panel -> Display -> Appearance -> Effects -> Hide underlined letters for keyboard navigation. (<http://www.chinhdo.com/20080902/underlined-letters-windows/>)
Is there a way to press or fire up keys on the keyboard using C# code?
[ "", "c#", "" ]
I want to check whether my stateful bean is passivated/activated and the corresponding callbacks are called properly. For that I want to configure the containers (GlassFish and/or JBoss) to limit the number of instances of the bean. Is it possible? If yes, how?
Thanks for pointing me in the right direction. For JBoss, I found the annotation org.jboss.ejb3.annotation.CacheConfig with maxSize and idleTimeoutSeconds as parameters. Here's the documentation link for anyone searching... <http://www.jboss.org/file-access/default/members/jbossas/freezone/docs/Administration_And_Configuration_Guide/5/html/clustering-session-sfsb30.html> Though I don't yet know how to configure this with XML without using the annotations.
In JBoss, the configuration file you want to look at is conf/standardjboss.xml It looks intimidating (and it is), but look for a container-configuration element called "Standard Stateful SessionBean". Towards the end of this element you'll see the container-pool-conf section, which defines the maximum number of instances for this configuration. This file can be modified, or you can define your own configurations, although I'm not too clear how to go about that. I suspect it goes inside the META-INF/jboss.xml file. Once you've defined a configuration as you want it, you need to configure the EJB itself to use it. Under EJB 2, you'd use the META-INF/jboss.xml, but in the wonderful descriptor-free world of EJB3, I'm not sure. So the answer likely lies in an appropriately configured META-INF/jboss.xml file. Sorry I can't be more exact, but at least now you know where to look.
How is it possible to limit the number of instances of an EJB?
[ "", "java", "jboss", "glassfish", "ejb-3.0", "stateful", "" ]
I've seen queries for 2005/2008 to find what process caused a delete but have not seen anything for 2000. Is there anything?
For deletes, you can set up a DML trigger (see [CREATE TRIGGER](http://msdn.microsoft.com/en-us/library/aa258254(SQL.80).aspx) in BOL for details). Likely the most relevant information that you can obtain will come from [sysprocesses](http://msdn.microsoft.com/en-us/library/aa260456(SQL.80).aspx), which you can query using your spid (@@SPID). Capture the info you're looking for, and log it out to a table you create for logging (along with the key values required to know which record(s) were deleted).
If you're talking about a current statement or process.... execute: ``` sp_who2 ```
How to find what sql process caused a delete in SQL 2000?
[ "", "sql", "sql-server", "" ]
I am debugging a (native) multi-threaded C++ application under Visual Studio 2008. On seemingly random occasions, I get a "Windows has triggered a break point..." error with a note that this might be due to a corruption in the heap. These errors won't always crash the application right away, although it is likely to crash shortly after. The big problem with these errors is that they pop up only after the corruption has actually taken place, which makes them very hard to track and debug, especially on a multi-threaded application. * What sort of things can cause these errors? * How do I debug them? Tips, tools, methods, enlightenments... are welcome.
[Application Verifier](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/application-verifier) combined with [Debugging Tools for Windows](http://msdn.microsoft.com/en-us/library/windows/hardware/ff551063.aspx) is an amazing setup. You can get both as a part of the [Windows Driver Kit or the lighter Windows SDK](http://msdn.microsoft.com/en-us/windows/hardware/hh852365). (Found out about Application Verifier when researching an [earlier question about a heap corruption issue](https://stackoverflow.com/questions/811951/mt-and-md-builds-crashing-but-only-when-debugger-isnt-attached-how-to-debug).) I've used BoundsChecker and Insure++ (mentioned in other answers) in the past too, although I was surprised how much functionality was in Application Verifier. Electric Fence (aka "efence"), [dmalloc](http://dmalloc.com/), [valgrind](http://valgrind.org/), and so forth are all worth mentioning, but most of these are much easier to get running under \*nix than Windows. Valgrind is ridiculously flexible: I've debugged large server software with many heap issues using it. When all else fails, you can provide your own global operator new/delete and malloc/calloc/realloc overloads -- how to do so will vary a bit depending on compiler and platform -- and this will be a bit of an investment -- but it may pay off over the long run. 
The desirable feature list should look familiar from dmalloc and electricfence, and the surprisingly excellent book [Writing Solid Code](https://writingsolidcode.com/): * **sentry values**: allow a little more space before and after each alloc, respecting maximum alignment requirement; fill with magic numbers (helps catch buffer overflows and underflows, and the occasional "wild" pointer) * **alloc fill**: fill new allocations with a magic non-0 value -- Visual C++ will already do this for you in Debug builds (helps catch use of uninitialized vars) * **free fill**: fill in freed memory with a magic non-0 value, designed to trigger a segfault if it's dereferenced in most cases (helps catch dangling pointers) * **delayed free**: don't return freed memory to the heap for a while, keep it free filled but not available (helps catch more dangling pointers, catches proximate double-frees) * **tracking**: being able to record where an allocation was made can sometimes be useful Note that in our local homebrew system (for an embedded target) we keep the tracking separate from most of the other stuff, because the run-time overhead is much higher. --- If you're interested in more reasons to overload these allocation functions/operators, take a look at [my answer to "Any reason to overload global operator new and delete?"](https://stackoverflow.com/a/1215807/80074); shameless self-promotion aside, it lists other techniques that are helpful in tracking heap corruption errors, as well as other applicable tools. --- Because I keep finding my own answer here when searching for alloc/free/fence values MS uses, here's [another answer that covers Microsoft dbgheap fill values](https://stackoverflow.com/a/370362/80074).
You can detect a lot of heap corruption problems by enabling Page Heap for your application . To do this you need to use gflags.exe that comes as a part of [Debugging Tools For Windows](https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/) Run Gflags.exe and in the Image file options for your executable, check "Enable Page Heap" option. Now restart your exe and attach to a debugger. With Page Heap enabled, the application will break into debugger whenever any heap corruption occurs.
How to debug heap corruption errors?
[ "", "c++", "windows", "debugging", "heap-corruption", "" ]
In my code I default to using ArrayList for all Lists, HashMap for all maps, HashSet for all sets. From a practical standpoint how much am I losing in flexibility, scalability, readability and performance by choosing the wrong implementation? When does it make sense to spend time to decide to use one rather than another? I certainly see a very clear cut case for why someone would use a LinkedList instead of an ArrayList given certain circumstances. When does someone feel that it is critical they use a HashMap rather than a TreeMap or a HashTable? What about Sets? Questions: 1. What is the cost of choosing poorly? 2. Does anyone have an disaster stories about choosing the wrong implementation and the datacenter catching fire? 3. Any good rules of thumb? 4. Are there any obscure collections implementations you can't live without? I've read through: * <https://docs.oracle.com/javase/1.5.0/docs/api/java/util/TreeMap.html> * <https://docs.oracle.com/javase/1.5.0/docs/api/java/util/HashMap.html> * [Java: ArrayList for List, HashMap for Map, and HashSet for Set?](https://stackoverflow.com/questions/896139/java-arraylist-for-list-hashmap-for-map-and-hashset-for-set) etc... I found [this](https://stackoverflow.com/questions/40471/java-hashmap-vs-hashtable) question to be related from a theoretical point of view, but I'm more interested in a real world, down in the trenches answer.
This is a very general question, but I'll throw in a couple of thoughts. If you are programming oriented to interfaces, then flexibility won't take a great hit. For example ``` void foo(List<E> list); ``` The cost of choosing poorly could be seen in performance penalties. For example, choosing a LinkedList when direct access (as in ArrayList) is what you are looking for. Sets have a similar issue. If you want to keep sorted collections with no duplicates, a SortedSet would be a wiser choice over a HashSet. In the latter, you'd have to sort the entire Set manually (that is, a call to Collections.sort()) **`<EDIT>`** As for [maps](http://java.sun.com/j2se/1.5.0/docs/api/java/util/SortedMap.html), there are a lot of different implementations. Each one has a different purpose. For example, there's [SortedMap](http://java.sun.com/j2se/1.5.0/docs/api/java/util/SortedMap.html), analogous to SortedSet. Then there's [WeakHashMap](http://java.sun.com/j2se/1.5.0/docs/api/java/util/WeakHashMap.html), which doesn't work like a HashMap, in the sense that keys can be removed by the garbage collector. As you can imagine, the choice between a HashMap and a WeakHashMap is not trivial. As always, it depends on what you want to implement with them. **`</EDIT>`** Regarding the story, in my current project we replaced a HashSet with a SortedSet because performance was being affected. The data center didn't catch fire, though. My two cents.
So long as you follow the good OO practice of *depending on an abstract type*, what does it matter? If for example you find you've used the wrong `Map` you simply change the implementation you are using and because all of your dependencies are on `Map` everything works as before just with the different performance characteristics.
Java Collections Implementations (e.g. HashMaps vs HashSet vs HashTable ...), what is the cost of choosing the wrong one?
[ "", "java", "collections", "hashmap", "" ]
I'm building a rather specialized screen saver application for some kiosks running Windows XP. Users tend to leave the kiosks without returning the browser to the homepage, so the screen saver does the following: 1. Launches via the standard screen saver mechanism 2. Notifies the user that there has been no recent activity, and that the browser will close in X seconds. 3. If X seconds passes without user activity, the screen saver kills all current browser instances (via Process.GetProcessesByName) and starts a new instance of the browser that points to the configured web site (via Process.Start). 4. The screen then "blanks out" until a user moves the mouse or presses a key - at this point the screen saver application exits. When this runs on Windows Vista or 2008, everything works as expected. However, on Windows XP (which is what the kiosks are running), when the screen saver application exits, the browser process is killed. If I add a Thread.Sleep just before the screen saver exits, I can see and interact with the browser up until the point at which the screen saver exits. To make matters more confusing, Windows XP does NOT exhibit this behavior when I run the screen saver by clicking the "Preview" button in the settings area - that is, it behaves as expected. The exact same code is run in this case. I tested this under the .NET 2.0 framework, and later installed .NET 2.0 SP1. On the Windows 2008 workstation, I have 3.5 SP1. Is there some difference between these versions of .NET with respect to dependencies on launched processes? Is there some flag I can set to make sure that the launched browser process is not "attached" to the screen saver application?
There's some [code at CodeProject](http://www.codeproject.com/KB/cs/csdesktopswitching.aspx) that wraps the Windows Desktop API. It includes a function to open a new process in a different desktop that you might at least be able to look at if not use directly. (Basically there's a parameter to the Windows API CreateProcess function that allows you to specify which desktop to start the process on.) If that doesn't work you might also try out [my answer to your other question](https://stackoverflow.com/questions/1035213/process-start-and-the-process-tree/1035491#1035491) (about using SendMessage inter-process to get another process besides the screen saver process to actually do the launching). It's not the best answer to the process tree issue, but it might actually work if the process tree is not the problem.
Try setting Process.StartInfo.UseShellExecute to False (it is True by default) before calling Process.Start(). That way, CreateProcess() is used internally instead of ShellExecute().
Windows XP Screen Saver - Launched Process Dies with Screen Saver
[ "", "c#", ".net", "windows-xp", "" ]
I want to replace whitespace with underscore in a string to create nice URLs. So that for example: ``` "This should be connected" ``` Should become ``` "This_should_be_connected" ``` I am using Python with Django. Can this be solved using regular expressions?
You don't need regular expressions. Python has a built-in string method that does what you need: ``` mystring.replace(" ", "_") ```
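Applied to the question's example string (the variable name here is mine):

```python
mystring = "This should be connected"
result = mystring.replace(" ", "_")
print(result)  # This_should_be_connected
```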
Replacing spaces is fine, but I might suggest going a little further to handle other URL-hostile characters like question marks, apostrophes, exclamation points, etc. Also note that the general consensus among SEO experts is that [dashes are preferred to underscores in URLs.](http://www.google.com/search?q=dashes+underscores+seo) ``` import re def urlify(s): # Remove all non-word characters (everything except numbers and letters) s = re.sub(r"[^\w\s]", '', s) # Replace all runs of whitespace with a single dash s = re.sub(r"\s+", '-', s) return s # Prints: I-cant-get-no-satisfaction print(urlify("I can't get no satisfaction!")) ```
How to replace whitespaces with underscore?
[ "", "python", "string", "" ]
With Perl's `Getopt::Long` you can easily define command-line options that take a variable number of arguments: ``` foo.pl --files a.txt --verbose foo.pl --files a.txt b.txt c.txt --verbose ``` Is there a way to do this directly with Python's `optparse` module? As far as I can tell, the `nargs` option attribute can be used to specify a fixed number of option arguments, and I have not seen other alternatives in the documentation.
I believe `optparse` does not support what you require (not directly -- as you noticed, you can do it if you're willing to do all the extra work of a callback!-). You could also do it most simply with the third-party extension [argparse](http://code.google.com/p/argparse/), which does support variable numbers of arguments (and also adds several other handy bits of functionality). [This URL](http://argparse.googlecode.com/svn/trunk/doc/add_argument.html#add_argument) documents `argparse`'s `add_argument` -- passing `nargs='*'` lets the option take zero or more arguments, `'+'` lets it take one or more arguments, etc.
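A minimal sketch of the `argparse` approach this answer describes (the option names are made up for illustration; `argparse` later became part of the standard library):

```python
import argparse

parser = argparse.ArgumentParser()
# nargs='+' consumes one or more arguments, stopping at the next option flag
parser.add_argument('--files', nargs='+')
parser.add_argument('--verbose', action='store_true')

opts = parser.parse_args(['--files', 'a.txt', 'b.txt', 'c.txt', '--verbose'])
print(opts.files)    # ['a.txt', 'b.txt', 'c.txt']
print(opts.verbose)  # True
```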
This took me a little while to figure out, but you can use the callback action for your options to get this done. Check out how I grab an arbitrary number of args to the "--file" flag in this example. ``` from optparse import OptionParser def cb(option, opt_str, value, parser): args = [] for arg in parser.rargs: if arg.startswith("-"): break args.append(arg) # consume the arguments we grabbed so optparse doesn't parse them again del parser.rargs[:len(args)] if getattr(parser.values, option.dest): args.extend(getattr(parser.values, option.dest)) setattr(parser.values, option.dest, args) parser = OptionParser() parser.add_option("-q", "--quiet", action="store_false", dest="verbose", help="be vewwy quiet (I'm hunting wabbits)") parser.add_option("-f", "--filename", action="callback", callback=cb, dest="file") (options, args) = parser.parse_args() print options.file print args ``` Only side effect is that you get your args in a list instead of a tuple. But that could be easily fixed; for my particular use case a list is desirable.
With Python's optparse module, how do you create an option that takes a variable number of arguments?
[ "", "python", "optparse", "" ]
*I had to take my working example [here.](http://www.yvoschaap.com/videowall/) For some reason, it does not work as easily as the initial example.* **New Example** Suppose I want to see M5s every time the page loads. So how can I fire the same query for M5 every time the page load? I copied the critical part here: ``` <body> <div id="search"> <form onSubmit="makeRequest(1); return false;" style="margin: 2px; padding: 2px; font-size: 1.2em;"> <input id="searchinput" type="text" name="tags" size="20" value=""> <input id="searchbutton" type="button" onClick="makeRequest(1);" value="Create VideoWall"><br /> ... </form> </div> ``` **Response to the idea in MiffTheFox's and Tom's reply** So I added the command before the form above: ``` <body onload="document.getElementById('myform').submit();"> ``` However, the wall stays black. It should be full of M5s. **Emerged problem to the initial Question:** Why does it not work? Why does the wall stay black? **makeRequest asked by Tom** ``` function makeRequest(page){ startrequest = 0; for(i =1; i < 4; i++){ clearList('ul'+i); var tags = encodeURI(document.getElementById('searchinput').value); if(i == 1 || i == 2){ quantity = 45; } if(i == 3){ quantity = 36; } insertVideos('ul'+i,'search',tags,quantity,startrequest); startrequest = startrequest + quantity; } } ``` Please, see the url at the top and press CTRL+U to see the code.
Well, there's an onload attribute on the body element ``` <body onload="doSubmit()"> ... </body> <script> function doSubmit(){ var form = document.getElementById("myForm"); if (form != null) form.submit(); } </script> ``` Also, you could add JavaScript at the end of your HTML page. This is not as portable as the first option ``` <html> <body> <form id="myForm" ...> ... </form> <script> //this executes when the page finishes loading var form = document.getElementById("myForm"); if (form != null) form.submit(); </script> </body> </html> ```
First add an ID to the form, then add an onload handler that submits it. ``` <body onload="document.getElementById('myForm').submit();"> <form id="myForm" name="input" action="form_action.asp" method="get"> ... ```
Javascript/HTML > Form/Input > automatic firing every time a page loads
[ "", "javascript", "html", "" ]
I downloaded a zip of source files containing a C# project with multiple entry points / Main methods. Since I wanted to tinker around, I created another one like this in a new type/class ``` class MyExperiments { static void Main(String[] args) { // do something } } ``` then I switched to project properties. Simply switch the startup object to MyExperiments eh? To my surprise, the dropdown didn't have it. I rebuilt, made the method public, tried a whole lot of stuff.. but to no avail. Finally I edited the .csproj manually in notepad and then it worked. More tinkering around, I removed the parameters to make it ``` static void Main() ``` and now VS Project properties could 'see' the startup object. So now I could select it using the dropdown. I then added the String[] back and everything still worked. Seems a bit weird to me (because the most common form is a Main method with parameters for command line args from the C/C++ times). [MSDN](http://msdn.microsoft.com/en-us/library/aa302052%28VS.71%29.aspx) says the dropdown will contain valid startup objects if they exist in your project.
Good thing you copy-pasted it, it is the capital 'S' in `Main(String[] args)`. Apparently VS uses some text matching, and it's case sensitive. As it probably should be.
lol - it looks like a bug in the IDE: ``` static void Main(String[] args) {} ``` doesn't show, but ``` static void Main(string[] args) {} ``` does!
Why is only a parameter-less Main method considered as a "valid startup object" for a C# project?
[ "", "c#", "projects-and-solutions", "" ]
Does anyone know how I can make a sleep in JavaScript before the next line is read by the system? Example: ``` 1 var chkResult = Validation(); 2 //sleep here for 10 sec before the next line is read 3 4 document.getElementById('abc').innerHTML = chkResult; ``` For this example, how can I make the JavaScript sleep/wait at line 2 for 10 sec before it continues to read line 4? I have tried setTimeout('', 10000); but it seems it's still not working for me...
[I Have the Hat](https://stackoverflow.com/users/4296/i-have-the-hat) has given the right hint. Use the [`setTimeout` method](https://developer.mozilla.org/en/DOM/window.setTimeout) to execute your forth line code after 10 seconds: ``` var chkResult = Validation(); var timeout = window.setTimeout(function() { document.getElementById('abc').innerHTML = chkResult; }, 10000); ``` Storing the timeout ID in a variable can be handy if you want to [clear a timeout](https://developer.mozilla.org/en/DOM/window.clearTimeout).
Try ``` setTimeout(function() { return true; }, 10000); ``` The first argument expects a function. This is from memory; I haven't tested it. **Edit:** What Gumbo said... late here... not sure what I was thinking.
how to make a sleep in javascript?
[ "", "javascript", "jquery", "ajax", "" ]
If my table has a huge number of columns (over 80) should I split it into several tables with a 1-to-1 relationship or just keep it as it is? Why? My main concern is performance. PS - my table is already in 3rd normal form. PS2 - I am using MS Sql Server 2008. PS3 - I do not need to access all table data at once, but rather have 3 different categories of data within that table, which I access separately. It is something like: member preferences, member account, member profile.
80 columns really isn't that many... I wouldn't worry about it from a performance standpoint. Having a single table (if you're typically using all of the data in your standard operations) will probably outperform multiple tables with 1-1 relationships, especially if you're indexing appropriately. I would worry about this (potentially) from a maintenance standpoint, though. The more columns of data in a single table, the less understandable the role of that table in your grand scheme becomes. Also, if you're typically only using a small subset of the data, and all 80 columns are not always required, splitting into 2+ tables might help performance.
Re the performance question - it depends. The larger a row is, the fewer rows can be read from disk in one read. If you have a lot of rows and you want to be able to read the core information from the table very quickly, then it may be worth splitting it into two tables - one with small rows containing only the core info that can be read quickly, and an extra table containing all the info you rarely use that you can look up when needed.
Table with a lot of columns
[ "", "sql", "sql-server-2008", "asp.net-3.5", "" ]
Given a path, say, /home/shree/path/def, I would want to determine if def is a directory or a file. Is there a way of achieving this in C or C++ code?
The following code uses the `stat()` function and the `S_ISDIR` ('is a directory') and `S_ISREG` ('is a regular file') macros to get information on the file. The rest is just error checking and enough to make a complete compilable program. ``` #include <stdio.h> #include <errno.h> #include <sys/stat.h> int main (int argc, char *argv[]) { int status; struct stat st_buf; // Ensure argument passed. if (argc != 2) { printf ("Usage: progName <fileSpec>\n"); printf (" where <fileSpec> is the file to check.\n"); return 1; } // Get the status of the file system object. status = stat (argv[1], &st_buf); if (status != 0) { printf ("Error, errno = %d\n", errno); return 1; } // Tell us what it is then exit. if (S_ISREG (st_buf.st_mode)) { printf ("%s is a regular file.\n", argv[1]); } if (S_ISDIR (st_buf.st_mode)) { printf ("%s is a directory.\n", argv[1]); } return 0; } ``` Sample runs are shown here: ``` pax> vi progName.c ; gcc -o progName progName.c ; ./progName Usage: progName where is the file to check. pax> ./progName /home /home is a directory. pax> ./progName .profile .profile is a regular file. pax> ./progName /no_such_file Error, errno = 2 ```
Use the stat(2) system call. You can use the S\_ISREG or S\_ISDIR macro on the st\_mode field to see if the given path is a file or a directory. The man page tells you about all the other fields.
Differentiate between a unix directory and file in C and C++
[ "", "c++", "c", "file", "unix", "directory", "" ]
I'm parsing wikipedia infoboxes and I noticed that some infoboxes have image fields - these fields hold names of image files stashed on wikipedia somewhere. However, they just contain the name of the file as is, as opposed to the actual link. I checked the links of the images on real live infoboxes and the links do not seem to be from one source; the sources vary. How can I hyperlink to an image on wikipedia, considering I only have the name of the image from an infobox entry?
Have you tried `http://en.wikipedia.org/wiki/File:filename.jpg` ? Even if the files are located on Wikimedia Commons, the above URL should still work. Edit: Are you trying to hotlink the image? If so, Wikipedia prohibits hotlinking. <http://commons.wikimedia.org/wiki/Commons:Reusing_content_outside_Wikimedia#Hotlinking> **Update 10-Jan-2019:** Hotlinking is [now permitted](https://commons.wikimedia.org/wiki/Commons:Reusing_content_outside_Wikimedia#Downloading): > **Hotlinking or InstantCommons**: It is possible to use files directly on > Commons within another website, by setting up a MediaWiki wiki with > InstantCommons, ...
According to [What are the strangely named components in Wikipedia file paths](http://commons.wikimedia.org/wiki/FAQ#What_are_the_strangely_named_components_in_file_paths.3F), you need to run MD5 to find out the URL. Now Wikipedia allows hotlinking, so: If you have a UTF-8 encoded `$name`, you need to do the following: ``` $filename = str_replace(' ', '_', $name); $digest = md5($filename); $folder = $digest[0] . '/' . $digest[0] . $digest[1] . '/' . urlencode($filename); $url = 'http://upload.wikimedia.org/wikipedia/commons/' . $folder; ``` The same can be used for thumbnails.
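For anyone doing this outside PHP, here is a sketch of the same path computation in Python (the function name `commons_url` is mine):

```python
import hashlib
import urllib.parse

def commons_url(name):
    # Commons shards files into folders named after the first hex digits
    # of the MD5 of the underscore-normalized file name.
    filename = name.replace(' ', '_')
    digest = hashlib.md5(filename.encode('utf-8')).hexdigest()
    return 'https://upload.wikimedia.org/wikipedia/commons/%s/%s/%s' % (
        digest[0], digest[:2], urllib.parse.quote(filename))

print(commons_url('My File.jpg'))
```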
How do I get link to an image on wikipedia from the infobox?
[ "", "php", "wikipedia", "imagesource", "wikimedia-commons", "" ]
I have two divs inside of a container. One on the left, one on the right, side by side. How am I able to make each one be of equal height, even though they have different content. For example, the right div has a lot of content, and is double the height of the left div, how do I make the left div stretch to the same height of the right div? Is there some JavaScript (jQuery) code to accomplish this?
You *could* use jQuery, but there are better ways to do this. This sort of question comes up a lot and there are generally 3 answers... ### [1. Use CSS](http://www.alistapart.com/articles/holygrail) This is the 'best' way to do it, as it is the most semantically pure approach (without resorting to JS, which has its own problems). The best way is to use the `display: table-cell` and related values. You could also try using [the faux background technique](http://www.alistapart.com/articles/fauxcolumns/) (which you can do with CSS3 gradients). ### [2. Use Tables](http://www.sadtrombone.com/) This seems to work great, but at the expense of having an unsemantic layout. You'll also cause a stir with purists. I have all but avoided using tables, and you should too. ### [3. Use jQuery / JavaScript](http://www.filamentgroup.com/lab/setting_equal_heights_with_jquery/) This benefits in having the most semantic markup, except with JS disabled, you will not get the effect you desire.
Here's a way to do it with pure CSS, however, as you'll notice in the example (which works in IE 7 and Firefox), borders can be difficult - but they aren't impossible, so it all depends what you want to do. This example assumes a rather common CSS structure of body > wrapper > content container > column 1 and column 2. The key is the bottom margin and its canceling padding. ``` <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Equal Height Columns</title> <style type="text/css"> <!-- * { padding: 0; margin: 0; } #wrapper { margin: 10px auto; width: 600px; } #wrapper #main_container { width: 590px; padding: 10px 0px 10px 10px; background: #CCC; overflow: hidden; border-bottom: 10px solid #CCC; } #wrapper #main_container div { float: left; width: 263px; background: #999; padding: 10px; margin-right: 10px; border: 1px solid #000; margin-bottom: -1000px; padding-bottom: 1000px; } #wrapper #main_container #right_column { background: #FFF; } --> </style> </head> <body> <div id="wrapper"> <div id="main_container"> <div id="left_column"> <p>I have two divs inside of a container. One on the left, one on the right, side by side. How am I able to make each one be of equal height, even though they have different content.</p> </div><!-- LEFT COLUMN --> <div id="right_column"> <p>I have two divs inside of a container. One on the left, one on the right, side by side. 
How am I able to make each one be of equal height, even though they have different content.</p> <p>&nbsp;</p> <p>For example, the right div has a lot of content, and is double the height of the left div, how do I make the left div stretch to the same height of the right div?</p> <p>&nbsp;</p> <p>Is there some JavaScript (jQuery) code to accomplish this?</p> </div><!-- RIGHT COLUMN --> </div><!-- MAIN CONTAINER --> </div><!-- WRAPPER --> </body> </html> ``` **This is what it looks like:** ![enter image description here](https://i.stack.imgur.com/qdEzm.png)
How do I achieve equal height divs (positioned side by side) with HTML / CSS ?
[ "", "javascript", "jquery", "css", "xhtml", "" ]
I wanted to write a server that a client could connect to and receive periodic updates without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until after the asyncore.loop has timed out (default is 30s). The two ways I have tried to work around this are 1) reduce timeout to a low value or 2) query connections for when they will next update and generate an adequate timeout value. However if you refer to 'Select Law' in 'man 2 select\_tut', it states, "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here: ```
#!/usr/bin/python
import time
import socket
import asyncore

# in seconds
UPDATE_PERIOD = 4.0

class Channel(asyncore.dispatcher):
    def __init__(self, sock, sck_map):
        asyncore.dispatcher.__init__(self, sock=sock, map=sck_map)
        self.last_update = 0.0  # should update immediately
        self.send_buf = ''
        self.recv_buf = ''

    def writable(self):
        return len(self.send_buf) > 0

    def handle_write(self):
        nbytes = self.send(self.send_buf)
        self.send_buf = self.send_buf[nbytes:]

    def handle_read(self):
        print 'read'
        print 'recv:', self.recv(4096)

    def handle_close(self):
        print 'close'
        self.close()

    # added for variable timeout
    def update(self):
        if time.time() >= self.next_update():
            self.send_buf += 'hello %f\n'%(time.time())
            self.last_update = time.time()

    def next_update(self):
        return self.last_update + UPDATE_PERIOD

class Server(asyncore.dispatcher):
    def __init__(self, port, sck_map):
        asyncore.dispatcher.__init__(self, map=sck_map)
        self.port = port
        self.sck_map = sck_map
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind( ("", port))
        self.listen(16)
        print "listening on port", self.port

    def handle_accept(self):
        (conn, addr) = self.accept()
        Channel(sock=conn, sck_map=self.sck_map)

    # added for variable timeout
    def update(self):
        pass

    def next_update(self):
        return None

sck_map = {}
server = Server(9090, sck_map)

while True:
    next_update = time.time() + 30.0
    for c in sck_map.values():
        c.update()  # <-- fill write buffers
        n = c.next_update()
        #print 'n:',n
        if n is not None:
            next_update = min(next_update, n)
    _timeout = max(0.1, next_update - time.time())
    asyncore.loop(timeout=_timeout, count=1, map=sck_map)
```
The "select law" doesn't apply to your case, as you have not only client-triggered (pure server) activities, but also time-triggered activities - this is precisely what the select timeout is for. What the law should really say is "if you specify a timeout, make sure you actually have to do something useful when the timeout arrives". The law is meant to protect against busy-waiting; your code does not busy-wait. I would not set \_timeout to the maximum of 0.1 and the next update time, but to the maximum of 0.0 and the next timeout. IOW, if an update period has expired while you were doing updates, you should do that specific update right away. Instead of asking each channel every time whether it wants to update, you could store all channels in a priority queue (sorted by next-update time), and then only run update for the earliest channels (until you find one whose update time has not arrived). You can use the heapq module for that. You can also save a few system calls by not having each channel ask for the current time, but only poll the current time once, and pass it to .update.
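The heapq suggestion at the end can be sketched concretely. This is a minimal standalone illustration, not the question's actual asyncore code; the `Channel` stand-in and its periods are invented:

```python
import heapq
import time
import itertools

class Channel(object):
    """Hypothetical stand-in for an asyncore channel with a periodic update."""
    def __init__(self, name, period):
        self.name = name
        self.period = period
        self.updates = 0

    def update(self, now):
        # fill write buffers here; return the time of the next update
        self.updates += 1
        return now + self.period

counter = itertools.count()  # tie-breaker so equal times never compare Channels

def schedule(heap, when, chan):
    heapq.heappush(heap, (when, next(counter), chan))

def run_due_updates(heap, now):
    """Run update() only for channels whose time has arrived, reschedule them,
    and return the timeout until the earliest pending update."""
    while heap and heap[0][0] <= now:
        _, _, chan = heapq.heappop(heap)
        schedule(heap, chan.update(now), chan)
    return max(0.0, heap[0][0] - now) if heap else None

heap = []
a, b = Channel("a", 4.0), Channel("b", 1.0)
now = time.time()
schedule(heap, now, a)  # both due immediately
schedule(heap, now, b)

timeout = run_due_updates(heap, now)  # both update once; next due is b, in ~1.0s
```

The returned `timeout` is exactly what you would pass to `asyncore.loop(timeout=..., count=1)`, and channels that are not due are never asked anything.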
Maybe you can do this with `sched.scheduler`, like this (n.b. not tested): ``` import sched, asyncore, time # Create a scheduler with a delay function that calls asyncore.loop scheduler = sched.scheduler(time.time, lambda t: _poll_loop(t, time.time()) ) # Add the update timeouts with scheduler.enter # ... def _poll_loop(timeout, start_time): asyncore.loop(timeout, count=1) finish_time = time.time() timeleft = finish_time - start_time if timeleft > timeout: # there was a message and the timeout delay is not finished _poll_loop(timeleft, finish_time) # so wait some more polling the socket def main_loop(): while True: if scheduler.empty(): asyncore.loop(30.0, count=1) # just default timeout, use what suits you # add other work that might create scheduled events here else: scheduler.run() ```
Python's asyncore to periodically send data using a variable timeout. Is there a better way?
[ "", "python", "sockets", "asynchronous", "" ]
I have 3 kinds of objects: Agency, BusinessUnit and Client (each with their own respective table) In terms of hierarchy, Agencies own BusinessUnits, and BusinessUnits own Clients. I have 3 C# POCO Objects to represent them (I usually select new {} into them, rather than use the LINQ generated classes): ``` public class Agency { public IEnumerable<BusinessUnit> BusinessUnits { get; set; } } public class BusinessUnit { public IEnumerable<Client> Clients { get; set; } } public class Client { public int NumberOfAccounts { get; set; } public Decimal AmountOfPlacement { get; set; } public Decimal AvgBalance { get; set; } public Double NeuPlacementScore { get; set; } } ``` You can see that Agencies contain a list of BusinessUnits, and BusinessUnits contain a list of Clients. I also have a mapping table called BAC\_Map in the database which says which owns which, and it looks something like this: ![alt text](https://i.stack.imgur.com/ECr2K.jpg) How can I construct a query, so I can query for and return a list of Agencies? Meaning that, I want each Agency to have its list of BusinessUnit objects set, and I want the list of BusinessObjects to have its list of Clients set. I can do basic LINQ queries, but this is a little over my head concerning the Map table and the multiple? queries. How could I construct a method like GetAllAgencies() which would query, for not only all agencies, but populate its BusinessUnits that Agency owns, and the Clients those BusinessUnits own? --- Edit: Any tips or info is appreciated. Do I need to do joins? Does this need to be multiple queries to return an Agency list, with its submembers populated?
If you drop all four tables (Agency, BusinessUnit, Client, Map) on the linq to sql designer, and draw relationships from Map to the other three, there will be some useful properties on Map.

```
//construct a query to fetch the row/column shaped results.
var query = from m in db.map
  //where m.... ?
  let a = m.Agency
  let b = m.BusinessUnit
  let c = m.Client
  // where something about a or b or c ?
  select new
  {
    AgencyID = a.AgencyID,
    AgencyName = a.Name,
    BusinessUnitID = b.BusinessUnitID,
    ClientID = c.ClientID,
    NumberOfAccounts = c.NumberOfAccounts,
    Score = c.Score
  };

//hit the database
var rawRecords = query.ToList();

//shape the results further into a hierarchy.
List<Agency> results = rawRecords
  .GroupBy(x => x.AgencyID)
  .Select(g => new Agency()
  {
    Name = g.First().AgencyName,
    BusinessUnits = g
      .GroupBy(y => y.BusinessUnitID)
      .Select(g2 => new BusinessUnit()
      {
        Clients = g2
          .Select(z => new Client()
          {
            NumberOfAccounts = z.NumberOfAccounts,
            Score = z.Score
          })
      })
  })
  .ToList();
```

If appropriate filters are supplied (see the commented-out `where` clauses), then only the needed portions of the tables will be pulled into memory. This is standard SQL joining at work here.
I created your tables in a SQL Server database, and tried to recreate your scenario in LinqPad. I ended up with the following LINQ statements, which basically result in the same structure of your POCO classes: ``` var map = from bac in BAC_Maps join a in Agencies on bac.Agency_ID equals a.Agency_ID join b in BusinessUnits on bac.Business_Unit_ID equals b.Business_Unit_ID join c in Clients on bac.Client_ID equals c.Client_ID select new { AgencyID = a.Agency_ID, BusinessUnitID = b.Business_Unit_ID, Client = c }; var results = from m in map.ToList() group m by m.AgencyID into g select new { BusinessUnits = from m2 in g group m2 by m2.BusinessUnitID into g2 select new { Clients = from m3 in g2 select m3.Client } }; results.Dump(); ``` Note that I called map.ToList() in the second query. This actually resulted in a single, efficient query. My initial attempt did not include .ToList(), and resulted in nine separate queries to produce the same results. The query generated by the .ToList() version is as follows: ``` SELECT [t1].[Agency_ID] AS [AgencyID], [t2].[Business_Unit_ID] AS [BusinessUnitID], [t3].[Client_ID], [t3].[NumberOfAccounts], [t3].[AmountOfPlacement], [t3].[AvgBalance], [t3].[NeuPlacementScore] FROM [BAC_Map] AS [t0] INNER JOIN [Agencies] AS [t1] ON [t0].[Agency_ID] = [t1].[Agency_ID] INNER JOIN [BusinessUnits] AS [t2] ON [t0].[Business_Unit_ID] = [t2].[Business_Unit_ID] INNER JOIN [Clients] AS [t3] ON [t0].[Client_ID] = [t3].[Client_ID] ``` Here is a screenshot of the results: [alt text http://img411.imageshack.us/img411/5003/agencybusinessunitclien.png](http://img411.imageshack.us/img411/5003/agencybusinessunitclien.png)
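The final shaping step in both answers — turning flat join rows into a hierarchy — is the same in any language; here it is sketched in Python with `itertools.groupby` (the row data is invented for illustration, and `groupby` assumes the rows are already sorted by agency then business unit, which a join ordered that way provides):

```python
from itertools import groupby
from operator import itemgetter

# flat rows as they come back from the three-way join (hypothetical data)
rows = [
    {"agency": 1, "bu": 10, "client": 100},
    {"agency": 1, "bu": 10, "client": 101},
    {"agency": 1, "bu": 11, "client": 102},
    {"agency": 2, "bu": 12, "client": 103},
]

def nest(rows):
    """Group flat join rows into Agency -> BusinessUnit -> Client."""
    result = []
    for agency_id, by_agency in groupby(rows, key=itemgetter("agency")):
        units = []
        for bu_id, by_bu in groupby(list(by_agency), key=itemgetter("bu")):
            units.append({"id": bu_id,
                          "clients": [r["client"] for r in by_bu]})
        result.append({"id": agency_id, "business_units": units})
    return result

tree = nest(rows)
```

One database round-trip, then the grouping happens in memory — the same pattern as the nested `GroupBy`/`Select` calls above.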
How can I query this hierarchical data using LINQ?
[ "", "c#", "linq", "linq-to-sql", "" ]
We are running Nginx+FastCgi as the backend for our Drupal site. Everything seems to work like fine, except for this one url. http:///sites/all/modules/tinymce/tinymce/jscripts/tiny\_mce/plugins/smimage/index.php (We use TinyMCE module in Drupal, and the url above is invoked when user tries to upload an image) When we were using Apache, everything was working fine. However, nginx treats that above url Binary and tries to Download it. (We've verified that the file pointed out by the url is a valid PHP file) Any idea what could be wrong here? I think it's something to do with the NGINX configuration, but not entirely sure what that is. Any help is greatly appreciated. Config: Here's the snippet from the nginx configuration file: ``` root /var/www/; index index.php; if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; } error_page 404 index.php; location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ { deny all; } location ~* ^.+\.(jpg|jpeg|gif|png|ico)$ { access_log off; expires 7d; } location ~* ^.+\.(css|js)$ { access_log off; expires 7d; } location ~ .php$ { include /etc/nginx/fcgi.conf; fastcgi_pass 127.0.0.1:8888; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; } location ~ /\.ht { deny all; } ```
Make it

```
location ~ \.php$ {
```

instead of

```
location ~ .php$ {
```
If that didn't do it, also be sure that you have fastcgi running on port 8888. You can check that with:

```
netstat -la | grep :8888
```

You're okay if you get a response like:

```
tcp        0      0 localhost:8888    *:*    LISTEN
```

Or try the following and look for which port it is listening on:

```
netstat -la | grep LISTEN
```
Nginx - Treats PHP as binary
[ "", "php", "drupal", "binary", "download", "nginx", "" ]
I have some existing projects that were built upon a deprecated PHP framework, and I'm hoping to move them over to Ruby on Rails with minimal effort. My main problem right now is the format that the JSON is coming back in. My frontend code (all ExtJS) is expecting JSON in the format: ``` { "result": [ [id: 1, name: "mike"], [id: 2, name: "john"], [id: 3, name: "gary"] ] } ``` But the default return from Ruby on Rails is as follows: ``` { "result": [ {"record" : {id: 1, name: "mike"}}, {"record" : {id: 2, name: "john"}}, {"record" : {id: 3, name: "gary"}} ] } ``` My controller is basically doing nothing but: ``` @records = Record.find(:all) respond_to do |format| format.json { render :text => @records.to_json} end ``` As you can see, it's adding in an additional key to every record, which my frontend ExtJS code is not capable of parsing as-is. Is there any way to stop this from occuring? Thanks for any help you can offer, Mike Trpcic
Basically:

```
ActiveRecord::Base.include_root_in_json = false
```

or

```
YourClass.include_root_in_json = false
```

as described here: <http://apidock.com/rails/ActiveRecord/Serialization/to_json>
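For illustration, what `include_root_in_json` toggles is just the extra one-key wrapper object around each record; stripping it by hand is equivalent to this (a Python sketch of the data transformation, not Rails code):

```python
def strip_root(records):
    """Turn [{"record": {...}}, ...] into [{...}, ...].

    Each wrapper dict has exactly one key (the model name), so we just
    take its single value.
    """
    return [next(iter(r.values())) for r in records]

wrapped = [{"record": {"id": 1, "name": "mike"}},
           {"record": {"id": 2, "name": "john"}}]
flat = strip_root(wrapped)
```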
This question can now be closed, but I feel it's relevant that I post the solution for anyone in the future who runs into the same situation. You can use the following plugin: [Ext Scaffold Generator](http://agilewebdevelopment.com/plugins/ext_scaffold_generator). Even if you don't wish to use the scaffold functionality, it adds an additional `to_ext_json` method that outputs JSON that is readable by ExtJS by default. Thanks to anyone who looked into the matter and tried to help me.
How to read JSON from Ruby on Rails with ExtJS
[ "", "javascript", "ruby-on-rails", "ruby", "json", "extjs", "" ]
I'm intersecting some sets of numbers, and doing this by storing a count of each time I see a number in a map. I'm finding the performance to be very slow. Details:

- One of the sets has 150,000 numbers in it
- The intersection of that set and another set takes about 300ms the first time, and about 5000ms the second time
- I haven't done any profiling yet, but every time I break the debugger while doing the intersection it's in malloc.c!

So, how can I improve this performance? Switch to a different data structure? Somehow improve the memory allocation performance of map?

Update:

1. Is there any way to ask std::map or boost::unordered\_map to pre-allocate some space?
2. Or, are there any tips for using these efficiently?

Update2: See [Fast C++ container like the C# HashSet<T> and Dictionary<K,V>?](https://stackoverflow.com/questions/1056366/fast-c-container-like-the-c-hashsett-and-dictionaryk-v)

Update3: I benchmarked set\_intersection and got horrible results:

```
(set_intersection) Found 313 values in the intersection, in 11345ms
(set_intersection) Found 309 values in the intersection, in 12332ms
```

Code:

```
int runIntersectionTestAlgo()
{
    set<int> set1;
    set<int> set2;
    set<int> intersection;

    // Create 100,000 values for set1
    for ( int i = 0; i < 100000; i++ )
    {
        int value = 1000000000 + i;
        set1.insert(value);
    }

    // Create 1,000 values for set2
    for ( int i = 0; i < 1000; i++ )
    {
        int random = rand() % 200000 + 1;
        random *= 10;
        int value = 1000000000 + random;
        set2.insert(value);
    }

    set_intersection(set1.begin(), set1.end(),
                     set2.begin(), set2.end(),
                     inserter(intersection, intersection.end()));
    return intersection.size();
}
```
I figured something out: if I attach the debugger to either RELEASE or DEBUG builds (e.g. hit F5 in the IDE), then I get horrible times.
You should definitely be using preallocated vectors, which are way faster. The problem with doing set intersection with STL sets is that each time you move to the next element you're chasing a dynamically allocated pointer, which could easily not be in your CPU caches. With a vector the next element will often be in your cache because it's physically close to the previous element.

The trick with vectors is that if you don't preallocate the memory for a task like this, it'll perform EVEN WORSE, because it'll go on reallocating memory as it resizes itself during your initialization step. Try something like this instead - it'll be WAY faster.

```
int runIntersectionTestAlgo()
{
    vector<int> vector1;
    vector1.reserve(100000);
    vector<int> vector2;
    vector2.reserve(1000);

    // Create 100,000 values for vector1
    for ( int i = 0; i < 100000; i++ )
    {
        int value = 1000000000 + i;
        vector1.push_back(value);
    }
    sort(vector1.begin(), vector1.end());

    // Create 1,000 values for vector2
    for ( int i = 0; i < 1000; i++ )
    {
        int random = rand() % 200000 + 1;
        random *= 10;
        int value = 1000000000 + random;
        vector2.push_back(value);
    }
    sort(vector2.begin(), vector2.end());

    // Reserve at most 1,000 spots for the intersection
    vector<int> intersection;
    intersection.reserve(min(vector1.size(), vector2.size()));

    set_intersection(vector1.begin(), vector1.end(),
                     vector2.begin(), vector2.end(),
                     back_inserter(intersection));
    return intersection.size();
}
```
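For reference, `set_intersection` over two sorted ranges is a single linear merge — no lookups, no allocation per element. The same algorithm sketched in Python:

```python
def sorted_intersection(a, b):
    """Linear-time intersection of two sorted lists
    (the merge that std::set_intersection performs)."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif b[j] < a[i]:
            j += 1
        else:
            out.append(a[i])
            i += 1
            j += 1
    return out

print(sorted_intersection([1, 3, 5, 7, 9], [3, 4, 5, 9, 10]))  # -> [3, 5, 9]
```

Each input element is visited at most once, which is why contiguous sorted arrays (good cache locality, one pass) beat node-based sets here.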
std::map and performance, intersecting sets
[ "", "c++", "stl", "dictionary", "intersection", "" ]
I'm working on a package which includes rescaling of images in PHP. For image rescaling, PHP has the GD and ImageMagick libraries. Are those likely to be part of a given client's PHP install already? Is there a "default" install of PHP, for that matter? By default, are either GD or ImageMagick included? If neither is installed, should I have some sort of horrible fallback position of reading, rescaling, and saving GIFs, JPGs, and PNGs, or is it simple to add GD or ImageMagick to an existing install?
I can't remember when I've last seen a LAMP hosting provider without GD. The ImageMagick extension is not that widespread. If they run their server themselves, they really should be able to activate one of them, or both. In your place I'd build the full functionality with GD and a (probably reduced) ImageMagick fallback. If they got neither, *show them the basic concepts of the wheel and/or fire, they should be grateful*.

PS: I encountered providers that deactivated certain GD functions (computationally too intensive on a shared hosting environment), like imagerotate and imagefilter.
GD is the easiest library to include in PHP: it is "only" an extension, mainly included in PHP packages, so the only operation needed is to activate the extension. Regarding ImageMagick, it is more complicated, since the PHP ImageMagick library is only an interface for the ImageMagick software. Using ImageMagick requires both the software and the PHP library to be installed.
What are the odds of GD or ImageMagick already being part of a client's PHP install?
[ "", "php", "installation", "gd", "imagemagick", "" ]
I'm writing arbitrary byte arrays (mock virus signatures of 32 bytes) into arbitrary files, and I need code to overwrite a specific file given an offset into the file. *My specific question is: is there source code/libraries that I can use to perform this particular task?* I've had this problem with Python file manipulation as well. I'm looking for a set of functions that can kill a line, cut/copy/paste, etc. My assumptions are that these are extremely common tasks, and I couldn't find it in the Java API nor my google searches. Sorry for not RTFM well; I haven't come across any information, and I've been looking for a while now.
Maybe you are looking for something like the [RandomAccessFile](http://java.sun.com/j2se/1.5.0/docs/api/java/io/RandomAccessFile.html) class in the standard Java JDK. It supports reads and writes at some offset, as well as byte arrays.
Java's [`RandomAccessFile`](http://java.sun.com/j2se/1.5.0/docs/api/java/io/RandomAccessFile.html) is exactly what you want. It includes methods like `seek(long)` that allow you to move wherever you need in the file. It also allows for reading and writing at the same time.
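Since the question mentions hitting the same wall in Python: overwriting bytes at a given offset there is the same seek-then-write pattern that `RandomAccessFile` uses. A small sketch on a throwaway temp file:

```python
import os
import tempfile

def overwrite_at(path, offset, data):
    """Overwrite len(data) bytes in place at the given offset."""
    with open(path, "r+b") as f:  # r+b: read/write without truncating
        f.seek(offset)
        f.write(data)

# demo on a disposable file
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"0123456789")

overwrite_at(path, 3, b"XYZ")   # e.g. a 3-byte "signature" at offset 3
with open(path, "rb") as f:
    content = f.read()          # b"012XYZ6789"
os.remove(path)
```

The key detail in both languages is opening the file in a mode that neither truncates nor appends (`"r+b"` here, `"rw"` for `RandomAccessFile`), so only the targeted bytes change.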
Java: Where can I find advanced file manipulation source/libraries?
[ "", "java", "file-io", "" ]
Problem: I would like to share code between multiple assemblies. This shared code will need to work with LINQ to SQL-mapped classes. I've encountered the same issue found [here](https://stackoverflow.com/questions/156113/linqtosql-and-abstract-base-classes), but I've also found a work-around that I find troubling (I'm not going so far as to say "bug"). **All the following code can be downloaded in [this solution](http://www.filehosting.org/file/details/40226/TestLinq2Sql.zip).** Given this table:

```
create table Users
(
    Id int identity(1,1) not null constraint PK_Users primary key
    , Name nvarchar(40) not null
    , Email nvarchar(100) not null
)
```

and this DBML mapping:

```
<Table Name="dbo.Users" Member="Users">
  <Type Name="User">
    <Column Name="Id" Modifier="Override" Type="System.Int32" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
    <Column Name="Name" Modifier="Override" Type="System.String" DbType="NVarChar(40) NOT NULL" CanBeNull="false" />
    <Column Name="Email" Modifier="Override" Type="System.String" DbType="NVarChar(100) NOT NULL" CanBeNull="false" />
  </Type>
</Table>
```

I've created the following base classes in one assembly "Shared":

```
namespace TestLinq2Sql.Shared
{
    public abstract class UserBase
    {
        public abstract int Id { get; set; }
        public abstract string Name { get; set; }
        public abstract string Email { get; set; }
    }

    public abstract class UserBase<TUser> : UserBase where TUser : UserBase
    {
        public static TUser FindByName_Broken(DataContext db, string name)
        {
            return db.GetTable<TUser>().FirstOrDefault(u => u.Name == name);
        }

        public static TUser FindByName_Works(DataContext db, string name)
        {
            return db.GetTable<TUser>().FirstOrDefault(u => u.Name == name && 1 == 1);
        }

        public static TUser FindByNameEmail_Works(DataContext db, string name, string email)
        {
            return db.GetTable<TUser>().FirstOrDefault(u => u.Name == name || u.Email == email);
        }
    }
}
```

These classes are referenced in another assembly "Main", like so:

```
namespace TestLinq2Sql
{
    partial class User : TestLinq2Sql.Shared.UserBase<User> { }
}
```

The DBML file is located in the "Main" assembly, as well. When calling `User.FindByName_Broken(db, "test")`, an exception is thrown:

> System.InvalidOperationException: Class member UserBase.Name is unmapped.

However, the other two base static methods work. Furthermore, the SQL generated by calling `User.FindByName_Works(db, "test")` is what we were hoping for in the broken call:

```
SELECT TOP (1) [t0].[Id], [t0].[Name], [t0].[Email]
FROM [dbo].[Users] AS [t0]
WHERE [t0].[Name] = @p0
-- @p0: Input NVarChar (Size = 4; Prec = 0; Scale = 0) [test]
```

While I am willing to use this `1 == 1` "hack" for single predicate queries, is there a better way of sharing LINQ to SQL-aware code in a base/shared/core assembly?
I have encountered this problem many times in the past, because we have a similar architecture in a framework that we use in our company. You may have noticed that if you use the declarative style LINQ queries you'll not encounter this problem. For example, the following code will work:

```
return (from i in db.GetTable<TUser>() where i.Name == "Something" select i).FirstOrDefault();
```

However, since we are using dynamic filter expressions we couldn't use this method. The alternative solution is to use something like this:

```
return db.GetTable<TUser>().Select(i => i).Where(i => i.Name == "Something").SingleOrDefault();
```

This solution solved our problem, since we can inject a `.Select(i => i)` at the beginning of almost all expressions. This will cause the LINQ engine not to look at the base class for the mappings and will force it to look at the actual entity class and find the mappings.

Hope it helps
Try including `OfType` before the `Where` clause:

```
return _dbContext.GetTable<T>().OfType<T>().Where(expression).ToList();
```
LINQ to SQL - mapping exception when using abstract base classes
[ "", "c#", "linq-to-sql", "abstract-class", "" ]
I've retemplated the DataGridRow in the Microsoft WPF DataGrid to the below, the problem I'm having is if the user clicks on the border elements of the template the Row(s) don't get selected. Is there a way to make the click on the border cause a row selection. ``` <Grid x:Name="LayoutRoot" Margin="0,0,0,-1"> <Border x:Name="DGR_Border" BorderBrush="Transparent" Background="Transparent" BorderThickness="1" CornerRadius="5" SnapsToDevicePixels="True"> <Border x:Name="DGR_InnerBorder" BorderBrush="Transparent" Background="Transparent" BorderThickness="1" CornerRadius="5" SnapsToDevicePixels="True"> <toolkit:SelectiveScrollingGrid Name="DGR_SelectiveScrollingGrid"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="*"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <toolkit:DataGridCellsPresenter Grid.Column="1" Name="DGR_CellsPresenter" ItemsPanel="{TemplateBinding ItemsPanel}" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"/> <toolkit:DataGridDetailsPresenter Grid.Column="1" Grid.Row="1" Visibility="{TemplateBinding DetailsVisibility}" toolkit:SelectiveScrollingGrid.SelectiveScrollingOrientation="{Binding RelativeSource={RelativeSource AncestorType={x:Type Controls:DataGrid}}, Path=AreRowDetailsFrozen, Converter={x:Static Controls:DataGrid.RowDetailsScrollingConverter}, ConverterParameter={x:Static Controls:SelectiveScrollingOrientation.Vertical}}" /> <toolkit:DataGridRowHeader Grid.RowSpan="2" toolkit:SelectiveScrollingGrid.SelectiveScrollingOrientation="Vertical" Visibility="{Binding RelativeSource={RelativeSource AncestorType={x:Type Controls:DataGrid}}, Path=HeadersVisibility, Converter={x:Static Controls:DataGrid.HeadersVisibilityConverter}, ConverterParameter={x:Static Controls:DataGridHeadersVisibility.Row}}"/> </toolkit:SelectiveScrollingGrid> </Border> </Border> </Grid> ```
Invoking an internal method seems somewhat dangerous. What if implementation details change? There have been lots of changes in previous versions. I think it may be more prudent to simply add an event handler like this to your rows: ``` protected void DataGridRow_MouseDown(object sender, MouseButtonEventArgs e) { // GetVisualChild<T> helper method, simple to implement DataGridCellsPresenter presenter = GetVisualChild<DataGridCellsPresenter>(rowContainer); // try to get the first cell in a row DataGridCell cell = (DataGridCell)presenter.ItemContainerGenerator.ContainerFromIndex(0); if (cell != null) { RoutedEventArgs newEventArgs = new RoutedEventArgs(MouseLeftButtonDownEvent); //if the DataGridSelectionUnit is set to FullRow this will have the desired effect cell.RaiseEvent(newEventArgs); } } ``` This would have the same effect as clicking the cell itself, and would use only the public members of DataGrid elements.
The toolkit DataGridRow has no defined OnMouseDown override or similar method. The selection of a full row is handled through this method:

```
internal void HandleSelectionForCellInput(DataGridCell cell, bool startDragging, bool allowsExtendSelect, bool allowsMinimalSelect)
{
    DataGridSelectionUnit selectionUnit = SelectionUnit;

    // If the mode is None, then no selection will occur
    if (selectionUnit == DataGridSelectionUnit.FullRow)
    {
        // In FullRow mode, items are selected
        MakeFullRowSelection(cell.RowDataItem, allowsExtendSelect, allowsMinimalSelect);
    }
    else
    {
        // In the other modes, cells can be individually selected
        MakeCellSelection(new DataGridCellInfo(cell), allowsExtendSelect, allowsMinimalSelect);
    }

    if (startDragging)
    {
        BeginDragging();
    }
}
```

This method is called when input occurs on a **cell**. The MakeFullRowSelection method only selects all cells in a row, not the row itself. So, when you click on a DataGridRow (and not a DataGridCell), no mouse down or selection handling occurs. To accomplish what you desire, you should add some sort of mouse down handler to your rows' mouse down events where you would set the IsSelected property. Of course, note that you should specify the selected property for each cell in the row individually, since row.IsSelected does not imply or set that.
Border in ControlTemplate causing odd selection behavior with DataGrid
[ "", "c#", "wpf", "xaml", "datagrid", "" ]
I have a databound DataGridView. The data source is a typed data set with a table containing two `DateTime` columns (`BeginTimeStamp` and `EndTimeStamp`). I read and write the data to an SQL Server 2005 database using the typed data set's `Update` command. The user *must* enter a date into each of the two columns, which I enforce using the `CellValidating` and `RowValidating` events. However, I also need to make sure that the following two rules apply: 1. The time value for the `BeginDate` column must always be 00:00:00 2. The time value for the `EndDate` column must always be 23:59:59 (or 11:59:59 pm if you like) As I do not want the user to enter the 23:59:59 all the time, I'd like to somehow change the user's inputs according to 1. and 2. in my code. Where and how would I do that? **EDIT** Sorry in case I was unclear. The user may enter *any* date part, however, the time part is fixed at midnight for the `BeginTimeStamp` and 23:59:59 for the `EndTimeStamp`. Example: The user enters 2009/01/01 01:00:00pm as `BeginTimeStamp`. My application should change this to 2009/01/01 00:00:00. The user enters 2009/01/31 01:00:00pm as `EndTimeStamp`. My application should change this to 2009/01/31 23:59:59.
I'd just display the DateTime as a Date and add the time behind the scenes. This could be when the user enters the data or equally it could be when you write the data to the database. If you choose the former then look at the DataGridView.CellEndEdit event. See Noam's answer for the code to set the time appropriately.
You can add the following lines to your `CellValidating` method, after your other validations ``` DateTime newValue = oldValue.Date; ``` and ``` DateTime newValue = oldValue.Date.AddDays(1).AddSeconds(-1); ```
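The same day-boundary arithmetic can be sketched in Python for clarity; the end-of-day value is computed as start-of-next-day minus one second, mirroring the `AddDays(1).AddSeconds(-1)` trick above:

```python
from datetime import datetime, timedelta

def begin_of_day(dt):
    """Force the time part to 00:00:00, keeping the date the user entered."""
    return dt.replace(hour=0, minute=0, second=0, microsecond=0)

def end_of_day(dt):
    """Force the time part to 23:59:59: start of the next day minus one second."""
    return begin_of_day(dt) + timedelta(days=1) - timedelta(seconds=1)

# user enters 2009/01/31 01:00:00 pm as the EndTimeStamp
entered = datetime(2009, 1, 31, 13, 0, 0)
```

Computing the end boundary this way (rather than hard-coding 23:59:59) keeps the logic correct even if you later decide to store finer-grained end-of-day values.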
C#: How to adjust the value the user entered in DataGridView?
[ "", "c#", "data-binding", "datagridview", "" ]
I'm having trouble understanding how to use ORM generated objects. We're using LLBLGen for mapping our database model to objects. These objects we encapsulate in another layer which represents our business model (I think). Maybe this bit of code will explain this better.

```
public class Book // The class as used in our application
{
    private BookEntity book;      // LLBLGen entity
    private BookType bookType;    // BookType is another class that wraps an entity

    public Book(int Id)
    {
        book = new BookEntity(Id);
    }

    public BookType BookType
    {
        get { return this.bookType; }
        set
        {
            this.bookType = value;
            this.book.BookType = new BookTypeEntity(value.ID);
            this.book.Save();
        }
    }

    public int CountPages() { } // Example business method
}
```

Exposing the entity's fields like properties feels awkward, since I'm mapping all over again. With list-types it's even much worse, since I have to write an "Add" and "Remove" method plus a property that exposes List. In the above example, in the BookType setter I need access to the BookTypeEntity object; I can get this object by instantiating a new one using the ID of the BookType object. This also doesn't feel good. I'm wondering if I shouldn't just extend the BookEntity object and add my business logic there? Or maybe use partials? In the LLBLGen examples they use the entity objects directly, but this looks very messy to me. I want to have objects which can also have methods for my business logic (like CountPages) in the code above.
Dunno if it's possible in LLBLGen, but what I generally do when working with ORMs is to create an interface to the persisted class, in your example IBook. I expose this interface via a public getter from the wrapping class. This way, if need be, you can extend your IBook the way you want if you need to add some custom behaviour to its fields.

Generally speaking, I think there are 3 ways of "mapping" your ORM-entities to your domain:

1. The way you've posted. Basically, remap everything again
2. The way I posted, expose the ORM-entity as an interface
3. Expose the ORM-entity directly

I don't like #1, cause I don't like to have 2 mappings in my application. DRY, KISS and YAGNI are all violated by this. I don't like #3, cause it would make whatever consumer-layer of your domain-layer talk directly to the ORM layer. So, I go with #2, as it seems to be the lesser of 3 evils ;)

[Update] Small code snippet :)

ORM-generated class in the data-layer:

```
public class Book : IBook
{
    public string ISBN { get; private set; }
}
```

IBook is found in the business-logic layer, along with a book wrapper:

```
public interface IBook
{
    string ISBN { get; }
}

public class BookWrapper // Or whatever you want to call it :)
{
    // Create a new book in the constructor
    public BookWrapper()
    {
        BookData = new Data.MyORM.CreateNewBook();
    }

    // Expose the IBook, so we don't have to cascade the ISBN calls to it
    public IBook BookData { get; protected set; }

    // Also add whichever business logic operations we need here
    public Author LookUpAuthor()
    {
        if (BookData == null)
            throw new SystemException("Noes, this bookwrapper doesn't have an IBook :(");

        // Contact some webservice to find the author of the book, based on the ISBN
    }
}
```

I don't know if this is a recognizable design-pattern, but it's what I use, and so far it has worked quite well :)
I've never used LLBLGen for mapping, but most of the ORM tools I've worked with generate partial classes. I then add any custom code/logic I'd like to add to those objects in a separate file (so they don't get over-written if the partial classes are re-generated). Seems to work pretty well.

If you don't get partial classes from your ORM, I'd create a Facade which wraps your Data Object with your Business Logic... that way the two are separated and you can re-gen one without overwriting the other.

**UPDATE**

Partial classes support implementing an interface in one declaration of a partial class and not the other. If you want to implement an interface, you can implement it in your custom code partial file. Straight from [MSDN](http://msdn.microsoft.com/en-us/library/wa80x488.aspx):

```
partial class Earth : Planet, IRotate { }
partial class Earth : IRevolve { }
```

is equivalent to

```
class Earth : Planet, IRotate, IRevolve { }
```
Should one extend or encapsulate ORM objects?
[ "", "c#", ".net", "oop", "orm", "llblgenpro", "" ]
I'm working with the Passive View pattern. The user clicks a new account button. The view delegates responsibility to the presenter using parameterless method calls. The problem is there are multiple account types, so the user needs to pick which one they want to create. How do I resolve this?

1. Create a new form from the view, get the needed information and expose it as a property so the presenter can retrieve it. (This ignores the notion that the view shouldn't have any logic in it)
2. Create and use the new form from the presenter. (This ties the presenter directly to a form, ignoring the entire point of MVP)
3. Create the new form somewhere else and pass it in as a constructor argument to the presenter... or view.
4. Forget it and add a new button for each account type. (There are a number of account types and this will clutter the UI, but so be it.)
5. I'm going about this the wrong way and need to rethink my design. (If this is the case, a nudge in the right direction would be appreciated.)
My solution for this was different than I expected. I changed the button the user clicked to a DropDownMenuButton. Then I passed a string list of account types to the view which populates the drop down menu. I also created an event handler for the drop down menu item click event, which updates a public property with the name of the menu item then delegates everything else to the presenter. The presenter just has to get the menu item name from the exposed property and then lookup the account type in a private dictionary of account types using the account type name as the key.
I'd probably create another presenter-view pair for getting the account type. Then either * your presenter calls the other presenter directly to display the new form or * your presenter asks its model for the right account type. The model knows that it should ask somewhere else and invokes the "account type presenter" or even the "account type model". I think I'd go with the first option unless your presenter grows unwieldy.
My presenter needs to prompt the user for more information. How do I wire it up?
[ "", "c#", "mvp", "passive-view", "" ]
I have coded my own tiny static DAL class for .NET Compact Framework and local SQL Server that does just what I need: GetTable(SQL,params), Exec(SQL,params), ExecScalar(SQL,Params) etc. I wanted to do everything by SQL so that's how I rolled; if there are better ways to handle SQL connections in C# -except ORM- please let me know. Here's how I give public access to the SqlConnection object under my DAL class:

```
public static SqlConnection Conn()
{
    if (ConnStr == "")
        Defaults();
    try
    {
        if (conn == null)
            conn = new SqlConnection(ConnStr);
        if (conn.State != ConnectionState.Open)
        {
            conn.Open();
            Execute("SET DATEFORMAT DMY", null);
        }
        return conn;
    }
    catch
    {
        System.Windows.Forms.MessageBox.Show("Unable to connect to DB");
    }
    return null;
}
```

The application I'm coding is being used on mobile Windows CE 5.0. The problem is, when the user gets disconnected from the wireless network then reconnects, the `if (conn.State != ConnectionState.Open)` line doesn't get executed and the application gives a general database exception, nothing else. How do I fix this? Edit: Turns out the exception is raised by functions such as GetTable(). I can handle it using try-catch in each function, but unfortunately in Compact Framework every SQL error is reported as a "SqlException". How do I distinguish connection problems from query problems?
I have fixed it by using new SqlConnection for each query instead of sharing one static SqlConnection for everything.
You could catch the SqlException and check the [ErrorCode](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.externalexception.errorcode.aspx) property to see if it was an error related to a loss of connection (i.e. error message would be something like "cannot connect to server" and that has a specific error code).
How can I reconnect to SQLServer when disconnected?
[ "", "c#", "sql-server", "compact-framework", "" ]
I'm still a novice when it comes to polymorphism so I hope I can phrase my question correctly. Let's say I have tables in my Database - Dog, Cat and Mouse. I want to be able to call something like: ``` $animal = new Animal("dog", 12); ``` with the animal class being something like: ``` class Animal { protected $id; protected $legs; protected $height; function __construct($type, $id) { if ($type == "dog") { return new Dog($id); } } } class Dog extends Animal { function __construct($id) { $this->id = $id; $this->legs = 4; $this->height = 1; } } ``` This doesn't work but I want to be able to call a new Animal and pass in the specific animal and have it be returned. How can I design this? (I'm using PHP).
What you're looking for is actually a design pattern called the Factory Pattern. You can read up on it here: [Factory Pattern](http://www.dofactory.com/Patterns/PatternFactory.aspx) And some longer articles for PHP: [Design Patterns in PHP](http://www.devshed.com/c/a/PHP/Design-Patterns-in-PHP-Factory-Method-and-Abstract-Factory/) [The Basics of Using Factory Pattern in PHP](http://www.devshed.com/c/a/PHP/The-Basics-of-Using-the-Factory-Pattern-in-PHP-5/)
What you want is a "Factory Pattern". Rather than create new `Animal`s directly, call a function which chooses the type of animal to create. In Java I'd probably make that a static method of the class, and in Python I'd store all the Animal classes in a dictionary linked to the key, so I could look up the key and then pass the arguments along to the constructor. For PHP, I found an article on [Using the Factory Pattern in PHP](http://www.devshed.com/c/a/PHP/The-Basics-of-Using-the-Factory-Pattern-in-PHP-5/).
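For comparison, the Factory pattern both answers recommend can be sketched in Python (hypothetical class names, mirroring the question's `Dog` example): a lookup table maps a type key to the concrete class, so callers never construct subclasses directly.

```python
class Animal:
    """Base class; subclasses fill in the species-specific attributes."""
    def __init__(self, animal_id):
        self.id = animal_id

class Dog(Animal):
    legs, height = 4, 1

class Cat(Animal):
    legs, height = 4, 0.5

# The factory: pick the concrete class by key, then delegate construction.
ANIMAL_TYPES = {"dog": Dog, "cat": Cat}

def make_animal(kind, animal_id):
    try:
        cls = ANIMAL_TYPES[kind]
    except KeyError:
        raise ValueError("unknown animal type: %r" % kind)
    return cls(animal_id)
```

Calling `make_animal("dog", 12)` returns a `Dog`, which is also an `Animal` -- the effect the questioner wanted from `new Animal("dog", 12)`, without a constructor trying to return a different object.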
OO Design: Returning a child class from a parent
[ "", "php", "oop", "polymorphism", "" ]
I have a datalogging application (c#/.net) that logs data to a SQLite database. This database is written to constantly while the application is running. It is also possible for the database to be archived and a new database created once the size of the SQLite database reaches a predefined size. I'm writing a web application for reporting on the data. My web setup is c#/.Net with a SQL Server. Clients will be able to see their own data gathered online from their instance of my application. For test purposes, to upload the data to test with I've written a rough and dirty application which basically reads from the SQLite DB and then injects the data into the SQL Server using SQL - I run the application once to populate the SQL Server DB online. My application is written in c# and is modular so I could add a process that periodically checks the SQLite DB then transfers new data in batches to my SQL Server. My question is: if I wanted to continually synchronise the client-side SQLite database(s) with my server as the application is datalogging, what would be the best way of going about this? Is there any technology/strategy I should be looking into employing here? Any recommended techniques?
You could take a look at the Sync Framework. How complex is the schema that you're looking to sync up, and is it only one-way or does data need to come back down? As a simple solution I'd look at exporting data in some delimited format and then using bcp/BULK INSERT to pull it in to your central server.
Several options come to mind. You can add a timestamp to each table that you want to copy from and then select rows written after the last update. This is fast and will work if you archive the database and start with an empty one. You can also journal your updates for each table into an XML string that describes the changes and store that into a new table that is treated as a queue.
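The watermark idea in the first suggestion can be sketched with Python's built-in `sqlite3` module (hypothetical `log` table; the real schema and the transport to SQL Server are up to you): remember the highest rowid already copied, and each sync pass selects only rows above it.

```python
import sqlite3

def fetch_new_rows(conn, last_rowid):
    """Return rows written since the last sync, plus the new watermark."""
    cur = conn.execute(
        "SELECT rowid, reading FROM log WHERE rowid > ? ORDER BY rowid",
        (last_rowid,),
    )
    rows = cur.fetchall()
    new_watermark = rows[-1][0] if rows else last_rowid
    return rows, new_watermark
```

Each batch returned here would then be bulk-inserted on the server side; persisting the watermark between runs makes the process restartable.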
Best way to synchronise client database with server database
[ "", "c#", "sql-server", "sqlite", "synchronization", "" ]
I'm hoping I can get this question answered here... I have a client who wants to add a PayPal "Add to Cart" button inside a Lightbox window. I was able to achieve this by placing the PayPal code inside the image title tag. Not sure if this was the best way, but it worked. However, here is my problem... Now when you mouse over the image thumbnail the image title tooltip popup displays the PayPal code. Not good... Is there a way to disable this feature? Can I tell browsers to not display this popup? Here is the link (I only applied the code to the first two photos): <http://lancemissionart.com/index.php?Page=gallery&title=fof&gallery=msd> Thanks!
I would modify lightbox to use a different attribute instead of title, like paypalForm. Modification on line 398 of lightbox.js. ``` 398: imageArray.push(new Array(anchor.getAttribute('href'), anchor.getAttribute('paypalForm'))); ``` The other option would be to only set the title attribute `onMouseDown` and setting it back to empty string when the lightbox is closed.
You should change from using Lightbox to Thickbox (looks like you might be experimenting with it in your code). Lightbox is for images only (even though you've made it work, it's pretty ugly putting all that in the title tag, and pretty wrong); Thickbox will let you include your form. <http://jquery.com/demo/thickbox/>
Hide Image Title Tool Tip Popup on Mouse Rollover or Hover
[ "", "javascript", "html", "browser", "" ]
I'm looking for an elegant way of determining which element has the highest occurrence ([mode](http://en.wikipedia.org/wiki/Mode_%28statistics%29)) in a JavaScript array. For example, in ``` ['pear', 'apple', 'orange', 'apple'] ``` the `'apple'` element is the most frequent one.
This is just the mode. Here's a ~~quick, non-optimized~~ solution. It should be O(n). ``` function mode(array) { if(array.length == 0) return null; var modeMap = {}; var maxEl = array[0], maxCount = 1; for(var i = 0; i < array.length; i++) { var el = array[i]; if(modeMap[el] == null) modeMap[el] = 1; else modeMap[el]++; if(modeMap[el] > maxCount) { maxEl = el; maxCount = modeMap[el]; } } return maxEl; } ```
There have been some developments in javascript since 2009 - I thought I'd add another option. I'm less concerned with efficiency until it's actually a problem so my definition of *"elegant"* code (as stipulated by the OP) favours readability - which is of course subjective... ``` function mode(arr){ return arr.sort((a,b) => arr.filter(v => v===a).length - arr.filter(v => v===b).length ).pop(); } mode(['pear', 'apple', 'orange', 'apple']); // apple ``` In this particular example, should two or more elements of the set have equal occurrences then the one that appears latest in the array will be returned. It's also worth pointing out that it will modify your original array - which can be prevented if you wish with an [`Array.slice`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice) call beforehand. --- **Edit:** updated the example with some *ES6* [fat arrows](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/Arrow_functions) because *2015* happened and I think they look pretty... If you are concerned with backwards compatibility you can find this in the [revision history](https://stackoverflow.com/posts/20762713/revisions).
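For comparison, the standard library makes the same computation a few lines in Python via `collections.Counter`; note that with a tie, `most_common` makes no particular promise about which of the tied elements comes back.

```python
from collections import Counter

def mode(items):
    """Return the most frequent element, or None for an empty sequence."""
    if not items:
        return None
    return Counter(items).most_common(1)[0][0]
```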
Get the element with the highest occurrence in an array
[ "", "javascript", "mode", "" ]
This question was bugging me for quite a while (as evidenced by [my previous question](https://stackoverflow.com/questions/933214/what-are-the-used-unused-features-of-python-3)): why exactly is `print(x)` better (which is defined as being more pythonic) than `print x`? For those who don't know, the `print` statement was changed into a function in Python 3.0. The formal documentation is in [PEP 3105](http://www.python.org/dev/peps/pep-3105/) and motivation is in [Guido van Rossum's email](http://mail.python.org/pipermail/python-dev/2005-September/056154.html). To those points I would like to make a counterpoint:

1. There are other operators, such as `import`, which we write as a statement, though their functionality is actually duplicated with a function `__import__`
2. To beginners, the operator `print` does not belong to the general application logic. To them it's the mysterious operator which is a culmination of their program. They expect it to look differently.
3. All the beginner books which were describing basic Python 2.x are now *guaranteed to be broken from the first example*. Certainly, languages sometimes change, but those changes are usually less visible to novices.
4. It's not immediately obvious to me that a functionality of `print` can be duplicated on an application level. For example, sometimes I would like to redirect print from a console as a modal OS dialog.
5. While people say it's hard to rewrite all `print` statements to a function, they have forced every Python 2.x developer to do exactly that for all their projects. Good, it's not hard with automatic converter.
6. Everyone who enjoys having an ability to manipulate function `print` would be just as well-served if `print` was a statement wrapping function `__print__`.

So, can we please have a canonical answer to this question on the pages of Stack Overflow?
Looks to me like yours is a debate, not a question -- are you really going to accept an answer that shows how deeply and badly wrong you were in your assertions?! On to your debating points: > There are other operators, such as > import which we write as a statement, > though their functionality is actually > duplicated with a function `__import__` Absolutely wrong: function `__import__` (like **every** other function -- and operator, for that matter) binds **no** names in the scope of "caller" (code containing it) -- any "thingie" that binds names in the "caller's scope" **must** be a statement (just like assignment, `def`, and `call`). Your "point" appears to totally miss the extremely deep and crucial distinction that Python draws between statements and expressions -- one may reasonably *dislike* this distinction, but *ignoring* it is, most obviously, simply wrong. Python statements are things the Python compiler must be specifically aware of -- they may alter the binding of names, may alter control flow, and/or may need to be entirely removed from the generated bytecode in certain conditions (the latter applies to `assert`). `print` was the **only** exception to this assertion in Python 2; by removing it from the roster of statements, Python 3 removes an exception, makes the general assertion "just hold", and therefore is a more regular language. **Special cases are not special enough to break the rules** has long been a Pythonic tenet (do `import this` at an interactive interpreter's `>>>` prompt to see "the Zen of Python" displayed), and this change to the language removes a violation of this tenet that had to remain for many years due to an early, erroneous design decision. > To beginners, the operator print does > not belong to the general application > logic. To them it's the mysterious > operator which is a culmination of > their program. They expect it to look > differently. 
Curing beginners of their misconceptions as early as feasible is a very good thing.

> All the beginner books which were describing basic Python 2.x are now guaranteed to be broken from the first example. Certainly, languages sometimes change, but changes are usually less visible to novices.

Languages rarely change in deep and backwards-incompatible ways (Python does it about once a decade) and few language features are "highly visible to novices", so the total number of observations is small -- yet even within that tiny compass we can easily find counter-examples, where a feature highly visible to beginners was just so badly designed that removing it was well worth the disruption. For example, modern dialects of Basic, such as Microsoft's Visual Basic, don't use explicit user-entered line numbers, a "feature" that was both terrible and highly visible to absolutely everybody since it was mandatory in early dialects of Basic. Modern variants of Lisp (from Scheme onwards) don't use dynamic scoping, a misfeature that was sadly highly visible (usually manifesting as hard-to-understand bugs in their code) to beginners, basically as soon as they started writing functions in Lisp 1.5 (I once was a beginner in that and can testify to how badly it bit me).

> It's not immediately obvious to me that a functionality of print can be duplicated on an application level. For example, sometimes I would like to redirect print from a console as a modal OS dialog.

Not sure I follow this "point". Just change `sys.stdout` to your favorite pseudo-file object and redirect to your heart's contents -- you have the *option* of monkey patching the built-in function `print` (which you never had in Python 2), but nobody's twisting your arm and forcing you to do so.

> While people say it's hard to rewrite all print statements to a function, they have forced every Python 2.x developer to do exactly that for all their projects.
> Good, it's not hard with automatic converter.

The `2to3` tool does indeed take care of all such easy surface incompatibilities -- no sweat (and it needs to be run anyway to take care of quite a few more besides `print`, so people do use it extensively). So, what's your "point" here?

> Everyone who enjoys having an ability to manipulate function print would be just as well-served if print was a statement wrapping function `__print__`.

Such an arrangement would not, per se, remove an unnecessary keyword (and most especially, an unjustified **irregularity**, as I explained above: a statement that has **no** good reason to **be** a statement because there is absolutely no need for the compiler to be specially aware of it in any way, shape, or form!). It's far from clear to me that having such an underlying function would add any real value, but if you have real use cases you can certainly propose the case in the Python Ideas mailing list -- such an underlying function, if proven to be precious indeed, could be retrofitted to be used by the `print` statement in Python 2.7 as well as by the `print` function in Python 3.2. However, consider a typical case in which one might want to monkey-patch the built-in `print`: adding keyword arguments to allow fancy tweaks. How would the `__print__` function you're apparently proposing ever get those KW arguments from a `__print__` statement? Some funkier syntax yet than the horrors of `>> myfile` and the trailing comma...?! With `print` as a function, keyword arguments follow just the perfectly normal and ordinary rules that apply to **every** function and function call -- bliss!

So, in summary, it's more Pythonic for `print` to be a function because it removes anomalies, special cases, and any need for weird exceptional syntax -- simplicity, regularity, and uniformity are Python's trademark.
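The keyword-argument point is easy to demonstrate in Python 3 -- a small sketch showing `sep`, `end`, and `file` behaving like arguments to any ordinary function:

```python
import io

buf = io.StringIO()
# Keyword arguments follow the normal call rules -- no special syntax:
print("a", "b", "c", sep="-", end="!\n", file=buf)

# Because print is a plain function, it can be aliased or passed around,
# which the 2.x statement could not be:
emit = print
emit("done", file=buf)
```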
Here's the reason I hate the print statement in 2.x. ``` >>> something() <something instance at 0xdeadbeef> >>> print something() <something instance at 0xdeadbeef> ``` worthless object has no useful `__str__`, Fine, I can deal, look at it some more. ``` >>> dir(something()) ['foo', 'bar', 'baz', 'wonderful'] >>> help(something().foo) "foo(self, callable)" ``` hmm.. so does that callable take arguments? ``` >>> something().foo(print) something().foo(print) ^ SyntaxError: invalid syntax >>> something().foo(lambda *args: print(*args)) something().foo(lambda *args: print(*args)) ^ SyntaxError: invalid syntax ``` So... I have to either define a function to use ``` >>> def myPrint(*args): print *args def myPrint(*args): print *args ^ SyntaxError: invalid syntax >>> def myPrint(*args): print args ... >>> myPrint(1) (1,) ``` Shudder, or use `sys.stdout.write`, which is almost as cludgy, since it has very different behavior from `print`. It also *looks* different, which means I'll almost never remember that it exists. Using `print` statements in a short, one-off type facility and then improving it to use logging or something better is just inelegant. If print worked like those things, and especially could be used with high order functions, then it would be better than just the thing you use when you don't use *real* logging or *real* debuggers.
Why print statement is not pythonic?
[ "", "python", "python-3.x", "" ]
From my initial readings on unit testing (I'm a beginner) it is wise to put all of your setups and tests in a separate project from the code being tested. This seems ideal to me, as well. However, I've recently begun reading The Art of Unit Testing, trying to discover how to break dependencies on things such as database calls. The methods offered involve changing the production code itself, such as adding specific interfaces and "stub" methods to it. This seems to defeat some of the good things about keeping tests and production code separate. Is there any recommended dependency-breaking technique that doesn't involve changing production code?
There is no way to break dependencies without making some sort of change. What is important is that the changes you make don't change the behavior of production code in production, and that you aren't introducing worse dependencies.
By definition, the dependencies need to be broken in the production code to make it more testable, i.e., to make the production code more testable you need to change the code to make it less coupled to actual implementations. This will allow you to substitute mock objects for the real objects in the class under test in your tests. This removes the dependency on other production classes that the class under test depends on. If you've written loosely-coupled production code -- code that relies on interfaces rather than implementations, that uses factories and dependency injection to create objects rather than direct instantiation -- then you may only need to make small changes or none at all to your production code. If not, then you will need to make those types of changes. This isn't a bad thing, however, as it will improve your design by reducing coupling between classes. The cost of this will be a few extra (small) classes and/or interfaces that make the isolation possible. If you use TDD (Test Driven Development/Design), the types of construction that you use in your production code will change to make it more naturally testable. This is one of the ways that TDD works to improve design as well as incorporate testing into your code. Note that you shouldn't need to introduce coupling or dependencies in your production code to your test code. Your test code will obviously be dependent on production and you may need to refactor the dependencies in production to make it more testable, but if your production code "knows" anything about how it's being tested, you've probably done something wrong. You've probably introduced artificial interfaces when you should be using dependency injection.
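As a language-neutral sketch (in Python, with hypothetical names), this is the kind of seam the answers describe: the class under test depends on an injected gateway object rather than a concrete database class, so a test can pass a stub without the production code knowing anything about the test.

```python
class OrderService:
    """Production class: depends only on something with fetch_total()."""
    def __init__(self, gateway):
        self._gateway = gateway

    def total_with_tax(self, order_id, rate=0.25):
        return self._gateway.fetch_total(order_id) * (1 + rate)

class StubGateway:
    """Test double: returns a canned value, no database involved."""
    def fetch_total(self, order_id):
        return 100.0
```

In production the real database gateway is injected; in a test, `OrderService(StubGateway())` exercises the business logic in isolation.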
How to break dependencies without modifying production code?
[ "", "asp.net", "sql", "vb.net", "unit-testing", "" ]
I am trying to write a Python-based Web Bot that can read and interpret an HTML page, then execute an onClick function and receive the resulting new HTML page. I can already read the HTML page and I can determine the functions to be called by the onClick command, but I have no idea how to execute those functions or how to receive the resulting HTML code. Any ideas?
The only tool in Python for Javascript, that I am aware of is [python-spidermonkey](http://code.google.com/p/python-spidermonkey/). I have never used it though. With Jython you could (ab-)use [HttpUnit](http://httpunit.sourceforge.net/index.html). **Edit**: forgot that you can use [Scrapy](http://scrapy.org/). It supports Javascript through Spidermonkey, and you can even use Firefox for crawling the web. **Edit 2**: Recently, I find myself using browser automation more and more for such tasks thanks to some excellent libraries. [QtWebKit](http://qt-project.org/doc/qt-5.0/qtwebkit/qtwebkit-module.html) offers full access to a WebKit browser, which can be used in Python thanks to language bindings ([PySide](http://qt-project.org/wiki/PySide) or [PyQt](http://www.riverbankcomputing.co.uk/software/pyqt/intro)). There seem to be similar libraries and bindings for Gtk+ which I haven't tried. [Selenium WebDriver API](http://seleniumhq.org/docs/03_webdriver.jsp) also works great and has an active community.
Well obviously python won't interpret the JS for you (though there may be modules out there that can). I suppose you need to convert the JS instructions to equivalent transformations in Python. I suppose ElementTree or BeautifulSoup would be good starting points to interpret the HTML structure.
Python Web-based Bot
[ "", "python", "html", "bots", "" ]
> **Possible Duplicate:** > [Is JavaScript’s math broken?](https://stackoverflow.com/questions/588004/is-javascripts-math-broken) This seems really stupid, but when running this, it will display 7.300000000000001 instead of 7.3 > ``` > <script language="javascript"> > function buttonClicked() { > var mySum = 1.1 + 1.1 + 1.1 + 0 + 4; > alert(mySum); > } > </script> > ``` What would cause this? I have the javascript on a pretty basic aspx page. We are actually using javascript to add up values from form inputs, this was just an example with numbers that were breaking it, there are many others.. Any idea what could be doing this?!?
It has to do with how decimal values are converted to binary floating point numbers. 1/10 turns into a repeating decimal in binary, so the number is not perfectly represented, and repeated operations can expose the error. JavaScript uses IEEE-754 floating point numbers, for the record. Some other languages have the same problem. How to deal with it? In your case, maybe [toPrecision()](http://www.w3schools.com/jsref/jsref_toprecision.asp).
Floating points are stored as the numeric portion (mantissa) and the exponent (how many places to move the decimal point). The mantissa portion of floating point numbers is stored as a sum of fractions, calculated by adding a series of fractions in the order:

```
1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, ... etc
```

The binary representation is stored as 0s and 1s which indicate yes/no. For example, 001010 would be 0 \* 1/2 + 0 \* 1/4 + 1 \* 1/8 + 0 \* 1/16 + 1 \* 1/32 + 0 \* 1/64. This is a rough example of why floating points cannot be exact. As you add precision (float -> double -> long double) you get more precision, up to a limit. The underlying binary data stored is split into two pieces - one for the mantissa and one for the exponent. It's an [IEEE standard](http://en.wikipedia.org/wiki/IEEE_754) that has been adopted because of the speed at which calculations can be performed (and probably other factors on top). Check this link for more information: <https://en.wikibooks.org/wiki/A-level_Computing/AQA/Paper_2/Fundamentals_of_data_representation/Floating_point_numbers>
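JavaScript and Python both use IEEE-754 doubles, so the effect is easy to reproduce outside the browser -- a short sketch of the question's sum and one way to clean it up for display:

```python
total = 1.1 + 1.1 + 1.1 + 0 + 4
# 1.1 is a repeating fraction in binary, so the error accumulates:
exact = (total == 7.3)        # False
display = round(total, 1)     # rounds back to 7.3 for presentation
```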
Weird Javascript Behaviour: Floating Point Addition giving the wrong answer
[ "", "javascript", "floating-point", "" ]
For example we can construct such an array like this:

```
new ElementType[0];
```

I've seen such a construct, but I don't understand why it might be useful.
An example. Say, you have a function ``` public String[] getFileNames(String criteria) { ``` to get some filenames. Imagine that you don't find any filenames satisfying criteria. What do you return? You have 2 choices - *either return null*, or *0-sized array*. The variant with *0-sized array* is better, because your caller doesn't need to check for **NULL** and can process the array in a consistent way - say, in a loop (which would be empty in this case). There's a chapter on this in [Effective Java, Item 27](http://books.google.com/books?id=ZZOiqZQIbRMC&lpg=PP1&dq=effective%2Bjava&pg=PA134)
It's easier to work with than `null` in many cases, where `null` is the obvious alternative. Suppose you want to return an `Iterable<String>` containing (say) a list of relevant filenames... but there aren't any for some reason. You *could* return `null` to indicate that, but then the caller has to special-case that. Instead, if you return an empty collection, the caller can still use an enhanced for loop: ``` for (String file : getFiles()) ``` So why use an empty array instead of an empty `ArrayList` or something similar? Arrays are a fixed size, so an empty array is effectively immutable. That means you can keep a single value and return it to whoever you like, knowing they can't possibly do anything with it. That can be very useful in some situations.
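The same convention carries over to other languages; a Python sketch (hypothetical `get_files`) shows why returning an empty sequence instead of `None` lets every caller iterate unconditionally:

```python
def get_files(criteria):
    """Return matching file names; an empty list when nothing matches."""
    matches = []  # imagine a real directory search here
    return matches  # never None

# Callers need no special case -- the loop body simply never runs:
shouted = [name.upper() for name in get_files("*.log")]
```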
Use of array of zero length
[ "", "java", "arrays", "" ]
I want you to give reasons why someone should not use the highest possible error reporting level in PHP? **Ways to set highest level:** PHP < 5.4: ``` error_reporting(E_ALL | E_STRICT); ``` PHP >= 5.4: ``` error_reporting(E_ALL); ``` PHP *all versions* (as recommended for [config files](http://docs.php.net/manual/en/errorfunc.configuration.php#ini.error-reporting)): ``` error_reporting(2147483647); ``` PHP *all versions* (my config, -1 will contain all errors and is easy to remember) ``` error_reporting(-1); ``` **My experiences:** * there is no reason for low reporting levels * never used the [error control operator](http://docs.php.net/manual/en/language.operators.errorcontrol.php) * use to convert all errors to exceptions via [set\_error\_handler](http://docs.php.net/manual/en/function.set-error-handler.php) and customized exception class to overwrite file and line
I personally prefer to code at the highest level of error reporting, and fix all warnings generated by my code. However, I can envisage a couple of reasons why you might want to work at a lower level:

1. You may be working with legacy code which emits a lot of warnings. If the code works correctly this is not an issue, but the "noise" may be distracting and prevent you from seeing the real problems. In this situation it may be desirable to lower the error reporting level.
2. In a production environment you may want to log only errors. This has two benefits: it means your error logs contain only critical issues which need attention, and it will save disk space (and reduce disk i/o).

Off topic aside: In a production environment you should run "display\_errors = Off" and "log\_errors = On" to prevent users from seeing PHP errors (which may contain sensitive information e.g. database connection properties), and collect a log of errors as they occur. So your production error\_reporting level and related settings may be different to what you'd prefer to run in development.
I think there is no good reason, except maybe what Jim says in his first point: running legacy code that cannot or won't be changed. You should most certainly run it at the highest level during development and wipe out every warning and notice unless you have a great reason not to. If you have a great reason not to fix a notice during development, you should document it and use the error control operator to avoid cluttering the logs.
Why not to use highest error reporting level in PHP?
[ "", "php", "error-reporting", "" ]
I have a stored procedure that does some parameter validation and should fail and stop execution if the parameter is not valid. My first approach for error checking looked like this: ``` create proc spBaz ( @fooInt int = 0, @fooString varchar(10) = null, @barInt int = 0, @barString varchar(10) = null ) as begin if (@fooInt = 0 and (@fooString is null or @fooString = '')) raiserror('invalid parameter: foo', 18, 0) if (@barInt = 0 and (@barString is null or @barString = '')) raiserror('invalid parameter: bar', 18, 0) print 'validation succeeded' -- do some work end ``` This didn't do the trick since severity 18 doesn't stop the execution and 'validation succeeded' is printed together with the error messages. I know I could simply add a return after every raiserror but this looks kind of ugly to me: ``` if (@fooInt = 0 and (@fooString is null or @fooString = '')) begin raiserror('invalid parameter: foo', 18, 0) return end ... print 'validation succeeded' -- do some work ``` Since errors with severity 11 and higher are caught within a try/catch block another approach I tested was to encapsulate my error checking inside such a try/catch block. The problem was that the error was swallowed and not sent to the client at all. So I did some research and found a way to [rethrow](http://sqlblog.com/blogs/roman_rehak/archive/2007/12/01/how-to-rethrow-errors-in-t-sql.aspx) the error: ``` begin try if (@fooInt = 0 and (@fooString is null or @fooString = '')) raiserror('invalid parameter: foo', 18, 0) ... end try begin catch exec usp_RethrowError return end catch print 'validation succeeded' -- do some work ``` I'm still not happy with this approach so I'm asking you: How does your parameter validation look like? Is there some kind of "best practice" to do this kind of checking?
I don't think that there is a single "right" way to do this. My own preference would be similar to your second example, but with a separate validation step for each parameter and more explicit error messages. As you say, it's a bit cumbersome and ugly, but the intent of the code is obvious to anyone reading it, and it gets the job done. ``` IF (ISNULL(@fooInt, 0) = 0) BEGIN RAISERROR('Invalid parameter: @fooInt cannot be NULL or zero', 18, 0) RETURN END IF (ISNULL(@fooString, '') = '') BEGIN RAISERROR('Invalid parameter: @fooString cannot be NULL or empty', 18, 0) RETURN END ```
We normally avoid raiserror() and return a value that indicates an error, for example a negative number:

```
if <errorcondition>
    return -1
```

Or pass the result in two out parameters:

```
create procedure dbo.TestProc
    ....
    @result int output,
    @errormessage varchar(256) output
as
    set @result = -99
    set @errormessage = null
    ....
    if <errorcondition>
    begin
        set @result = -1
        set @errormessage = 'Condition failed'
        return @result
    end
```
The "right" way to do stored procedure parameter validation
[ "", "sql", "sql-server", "stored-procedures", "error-handling", "" ]
I'm using C# to continuously search for multiple string "keywords" within large strings, which are >= 4kb. This code is constantly looping, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bottleneck is the keyword matching method. I've found a few possibilities, and all of them give similar efficiency.

1) <http://tomasp.net/articles/ahocorasick.aspx> - I do not have enough keywords for this to be the most efficient algorithm.

2) Regex. Using an instance-level, compiled regex. - Provides more functionality than I require, and not quite enough efficiency.

3) String.IndexOf. - I would need to do a "smart" version of this for it to provide enough efficiency. Looping through each keyword and calling IndexOf doesn't cut it.

Does anyone know of any algorithms or methods that I can use to attain my goal?
Are you always looking for the same keywords? Try [Boyer-Moore](http://en.wikipedia.org/wiki/Boyer-Moore_string_search_algorithm). It requires some pre-processing for the keywords, but gains speed afterwards.
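Boyer-Moore needs a bit of machinery, but its simplified Horspool variant is short enough to sketch. The following is a hedged illustration in Java (the question is C#, but the structure translates directly), assuming an 8-bit `& 0xFF` reduction is acceptable for the shift table:

```java
import java.util.Arrays;

class Horspool {
    // Returns the index of the first occurrence of pattern in text, or -1.
    static int indexOf(String text, String pattern) {
        int m = pattern.length(), n = text.length();
        if (m == 0) return 0;
        // Bad-character shift table: how far the window may jump when the
        // character under the last position of the window mismatches.
        int[] shift = new int[256];
        Arrays.fill(shift, m);
        for (int i = 0; i < m - 1; i++) {
            shift[pattern.charAt(i) & 0xFF] = m - 1 - i;
        }
        int pos = 0;
        while (pos <= n - m) {
            int j = m - 1;
            while (j >= 0 && text.charAt(pos + j) == pattern.charAt(j)) {
                j--;
            }
            if (j < 0) return pos;  // full match at pos
            pos += shift[text.charAt(pos + m - 1) & 0xFF];
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(Horspool.indexOf("the quick brown fox", "brown")); // 10
    }
}
```

The pre-processing is just the `shift` table; the payoff is that a mismatch lets the search window jump up to `pattern.length()` characters at a time, which is exactly where the speedup over a naive `IndexOf` loop comes from.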
I haven't tried it, but have you looked at [Rabin-Karp](http://en.wikipedia.org/wiki/Rabin-Karp_string_search_algorithm)? Apparently it has a bad worst-case complexity, but is usually quite good. What do your keywords look like? In particular, are they always delimited by spaces (or something similar)? If so, you could basically look through the string once looking for "words" and then either create a map from a word to the list of indexes of that word, or perhaps only do so for keywords you're interested in. If you could give more details of the exact situation (such as the keywords, delimiters and what you need the result of your search to be) that would help.
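The word-to-indexes idea from the last paragraph can be sketched as follows (Java rather than the asker's C#, and assuming a single space as the only delimiter, which the answer notes is not a given):

```java
import java.util.*;

class WordIndex {
    // Builds a map from each word to the list of character offsets at which
    // it starts in the text, in one pass. Keyword lookups then become O(1).
    static Map<String, List<Integer>> index(String text) {
        Map<String, List<Integer>> map = new HashMap<>();
        int i = 0, n = text.length();
        while (i < n) {
            while (i < n && text.charAt(i) == ' ') i++;   // skip delimiters
            int start = i;
            while (i < n && text.charAt(i) != ' ') i++;   // consume one word
            if (i > start) {
                map.computeIfAbsent(text.substring(start, i),
                                    k -> new ArrayList<>()).add(start);
            }
        }
        return map;
    }

    public static void main(String[] args) {
        System.out.println(WordIndex.index("foo bar foo baz").get("foo")); // [0, 8]
    }
}
```

With the map built, each of the multiple keywords costs one hash lookup instead of one scan of the 4 kb string, which is the main win when the keyword set is checked repeatedly against the same text.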
C#: Efficiently search a large string for occurrences of other strings
[ "", "c#", "string", "" ]
I'm working in an embedded environment (Arduino/AVR ATMega328) and want to implement the Factory Method pattern in C++. However, the compiler I'm using (avr-gcc) doesn't support the `new` keyword. Is there a way of implementing this pattern without using `new`?
Since the AVR compiler is based on the gcc compiler, it is very likely to support the new keyword. What exactly is the error you're getting? I'm guessing it's a link/compile error along the lines of an undefined function, namely, operator new.

There is a difference between the new operator and operator new: the first is used to create objects, and the latter is used to allocate memory for objects. The new operator calls operator new for the type of object being created, then initialises the object's v-table and calls the object's constructors.

[Reading this FAQ](http://www.nongnu.org/avr-libc/user-manual/FAQ.html), it says that operator new is not defined in the standard libraries. This is easy to fix; just define one:

```
void *operator new (size_t size)
{
  return some allocated memory big enough to hold size bytes
}
```

and you'll need to define a delete as well:

```
void operator delete (void *memory)
{
  free the memory
}
```

The only thing to add is the memory management, the allocation and freeing of blocks of memory. This can be done trivially, being careful not to clobber any existing allocated memory (the code, static / global data, the stack). You should have two symbols defined, one for the start of free memory and one for the end of the free memory. You can dynamically allocate and free any chunk of memory in this region. You will need to manage this memory yourself.
The big picture of the Factory Method is object creation, which means heap memory consumption. On an embedded system, you are constrained by RAM and need to make all your design decisions with your memory limits in mind. The ATmega328 only has 2 KB RAM. I would recommend against using dynamically allocated memory in such a tight space. Without knowing your problem in more detail, I would recommend statically declaring a handful of instances of the class and re-use those instances in some fashion. This means you need to know when and why your objects are created and--JUST AS IMPORTANT--when and why they end; then you need to figure out how many you need to have active at one time and how many it is possible to have active at one time. !!Dean
Can I implement the Factory Method pattern in C++ without using new?
[ "", "c++", "embedded", "avr", "avr-gcc", "factory-method", "" ]
I am trying to parse an XML file that I have (using JavaScript DOM), and then I want to insert this data into a MySQL table (using PHP). I know how to parse the XML data, but I am unable to figure out how to use PHP and JavaScript together and insert the data into a MySQL table. Kindly help me with this.

Best, Zeeshan
Why don't you just use [PHP DOM](http://au.php.net/book.dom) to parse the XML ? e.g: ``` $doc = new DOMDocument(); $doc->load('book.xml'); echo $doc->saveXML(); ``` After loaded your document `$doc` you can use the PHP DOM functions.
The JavaScript needs to send the data to the server. There are several ways this can be achieved.

1. Generate a form and submit it
2. Use [XMLHttpRequest](http://www.jibbering.com/2002/4/httprequest.html) directly
3. Use a library like [YUI](http://developer.yahoo.com/yui/) or [jQuery](http://jquery.com/)

Once there, you get the data (from $\_POST for example) and [insert it into MySQL as normal](https://stackoverflow.com/questions/60174/best-way-to-stop-sql-injection-in-php).
parse XML and insert data in MySQL table
[ "", "javascript", "mysql", "xml", "" ]
I have a large block of text (200+ characters in a String) and need to insert new lines at the next space after 30 characters, to preserve words. Here is what I have now (NOT working): ``` String rawInfo = front.getItemInfo(name); String info = ""; int begin = 0; for(int l=30;(l+30)<rawInfo.length();l+=30) { while(rawInfo.charAt(l)!=' ') l++; info += rawInfo.substring(begin, l) + "\n"; begin = l+1; if((l+30)>=(rawInfo.length())) info += rawInfo.substring(begin, rawInfo.length()); } ``` Thanks for any help
As suggested by kdgregory, using a [`StringBuilder`](http://java.sun.com/javase/6/docs/api/java/lang/StringBuilder.html) would probably be an easier way to work with string manipulation.

Since I wasn't quite sure whether the word break should fall before or after 30 characters, I opted to break at the first space after 30 characters, as the implementation is probably easier.

The approach is to find the instance of `" "` which occurs at least 30 characters after the current character by using [`StringBuilder.indexOf`](http://java.sun.com/javase/6/docs/api/java/lang/StringBuilder.html#indexOf(java.lang.String,%20int)). When a space occurs, a `\n` is inserted by [`StringBuilder.insert`](http://java.sun.com/javase/6/docs/api/java/lang/StringBuilder.html#insert(int,%20char)). (We'll assume that a newline is `\n` here; the actual line separator used in the current environment can be retrieved by `System.getProperty("line.separator");`.)

Here's the example:

```
String s = "A very long string containing " +
        "many many words and characters. " +
        "Newlines will be entered at spaces.";

StringBuilder sb = new StringBuilder(s);

int i = 0;
while ((i = sb.indexOf(" ", i + 30)) != -1) {
    sb.replace(i, i + 1, "\n");
}

System.out.println(sb.toString());
```

Result:

```
A very long string containing many
many words and characters. Newlines
will be entered at spaces.
```

I should add that the code above hasn't been tested out, except for the example `String` that I've shown in the code. It wouldn't be too surprising if it didn't work under certain circumstances.

**Edit**

The loop in the sample code has been replaced by a `while` loop rather than a `for` loop, which wasn't very appropriate in this example.
Also, the [`StringBuilder.insert`](http://java.sun.com/javase/6/docs/api/java/lang/StringBuilder.html#insert(int,%20java.lang.String)) method was replaced by the [`StringBuilder.replace`](http://java.sun.com/javase/6/docs/api/java/lang/StringBuilder.html#replace(int,%20int,%20java.lang.String)) method, as Kevin Stich mentioned in the comments that the `replace` method was used rather than the `insert` to get the desired behavior.
Here's a test-driven solution. ``` import junit.framework.TestCase; public class InsertLinebreaksTest extends TestCase { public void testEmptyString() throws Exception { assertEquals("", insertLinebreaks("", 5)); } public void testShortString() throws Exception { assertEquals("abc def", insertLinebreaks("abc def", 5)); } public void testLongString() throws Exception { assertEquals("abc\ndef\nghi", insertLinebreaks("abc def ghi", 1)); assertEquals("abc\ndef\nghi", insertLinebreaks("abc def ghi", 2)); assertEquals("abc\ndef\nghi", insertLinebreaks("abc def ghi", 3)); assertEquals("abc def\nghi", insertLinebreaks("abc def ghi", 4)); assertEquals("abc def\nghi", insertLinebreaks("abc def ghi", 5)); assertEquals("abc def\nghi", insertLinebreaks("abc def ghi", 6)); assertEquals("abc def\nghi", insertLinebreaks("abc def ghi", 7)); assertEquals("abc def ghi", insertLinebreaks("abc def ghi", 8)); } public static String insertLinebreaks(String s, int charsPerLine) { char[] chars = s.toCharArray(); int lastLinebreak = 0; boolean wantLinebreak = false; StringBuffer sb = new StringBuffer(); for (int i = 0; i < chars.length; i++) { if (wantLinebreak && chars[i] == ' ') { sb.append('\n'); lastLinebreak = i; wantLinebreak = false; } else { sb.append(chars[i]); } if (i - lastLinebreak + 1 == charsPerLine) wantLinebreak = true; } return sb.toString(); } } ```
JAVA - Inserting a new line at the next space after 30 characters
[ "", "java", "string", "newline", "" ]
consider this scenario: * I have loaded a Parent entity through hibernate * Parent contains a collection of Children which is large and lazy loaded * The hibernate session is closed after this initial load while the user views the Parent data * The user may choose to view the contents of the lazy Children collection * I now wish to load that collection What are the ways / best way of loading this collection? * Assume session-in-view is not an option as the fetching of the Children collection would only happen after the user has viewed the Parent and decided to view the Children. * This is a service which will be accessed remotely by web and desktop based client. Thanks.
I'm making some assumptions about what the user is looking at, but it seems like you only want to retrieve the children if the user has already viewed the parent and really wants to see the children. Why not try opening a new session and fetching the children by their parent? Something along the lines of ... ``` criteria = session.createCriteria(Child.class); criteria.add(Restrictions.eq("parent", parent)); List<Child> children = criteria.list(); ```
The lazy collection can be loaded by using Hibernate.initialize(parent.getCollection()) except that the parent object needs to be attached to an active session. This solution takes the parent Entity and the name of the lazy-loaded field and returns the Entity with the collection fully loaded. Unfortunately, as the parent needs to be reattached to the newly opened session, I can't use a reference to the lazy collection as this would reference the detached version of the Entity; hence the fieldName and the reflection. For the same reason, this has to return the attached parent Entity. So in the OP scenario, this call can be made when the user chooses to view the lazy collection: ``` Parent parentWithChildren = dao.initialize(parent,"lazyCollectionName"); ``` The Method: ``` public Entity initialize(Entity detachedParent,String fieldName) { // ...open a hibernate session... // reattaches parent to session Entity reattachedParent = (Entity) session.merge(detachedParent); // get the field from the entity and initialize it Field fieldToInitialize = detachedParent.getClass().getDeclaredField(fieldName); fieldToInitialize.setAccessible(true); Object objectToInitialize = fieldToInitialize.get(reattachedParent); Hibernate.initialize(objectToInitialize); return reattachedParent; } ```
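Stripped of the Hibernate parts, the reflection trick this answer relies on looks like the sketch below; `Parent` and its field are made-up stand-ins for the real entity, and the checked reflection exceptions are wrapped so callers don't have to declare them:

```java
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.List;

class ReflectDemo {
    static class Parent {
        private List<String> children = Arrays.asList("a", "b");
    }

    // Reads a (possibly private) field by name, as the answer's initialize()
    // method does on the re-attached entity before Hibernate.initialize().
    static Object readField(Object target, String fieldName) {
        try {
            Field f = target.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);   // bypass the private modifier
            return f.get(target);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(ReflectDemo.readField(new ReflectDemo.Parent(), "children")); // [a, b]
    }
}
```

Note that the reflection is only needed because the method works from a field *name*; if the caller instead passed the collection getter, `Hibernate.initialize(reattachedParent.getChildren())` would avoid the reflection entirely.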
How can I access lazy-loaded fields after the session has closed, using hibernate?
[ "", "java", "hibernate", "lazy-loading", "" ]
I need to pass an array of "ids" to a stored procedure, to delete all rows from the table EXCEPT the rows that match ids in the array. What is the simplest way to do this?
Use a stored procedure:

**EDIT:** A complement for serializing a List (or anything else):

```
List<int> testList = new List<int>();
testList.Add(1);
testList.Add(2);
testList.Add(3);

XmlSerializer xs = new XmlSerializer(typeof(List<int>));
MemoryStream ms = new MemoryStream();
xs.Serialize(ms, testList);
string resultXML = UTF8Encoding.UTF8.GetString(ms.ToArray());
```

The result (ready to use with the XML parameter):

```
<?xml version="1.0"?>
<ArrayOfInt xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <int>1</int>
  <int>2</int>
  <int>3</int>
</ArrayOfInt>
```

---

**ORIGINAL POST:**

Passing XML as a parameter:

```
<ids>
    <id>1</id>
    <id>2</id>
</ids>
```

---

```
CREATE PROCEDURE [dbo].[DeleteAllData]
(
    @XMLDoc XML
)
AS
BEGIN
    DECLARE @handle INT

    EXEC sp_xml_preparedocument @handle OUTPUT, @XMLDoc

    DELETE FROM YOURTABLE
    WHERE YOUR_ID_COLUMN NOT IN (
        SELECT * FROM OPENXML (@handle, '/ids/id')
        WITH (id INT '.')
    )

    EXEC sp_xml_removedocument @handle
END
```

---
If you are using Sql Server 2008 or better, you can use something called a Table-Valued Parameter (TVP) instead of serializing & deserializing your list data every time you want to pass it to a stored procedure. Let's start by creating a simple schema to serve as our playground: ``` CREATE DATABASE [TestbedDb] GO USE [TestbedDb] GO /* First, setup the sample program's account & credentials*/ CREATE LOGIN [testbedUser] WITH PASSWORD=N'µ×? ?S[°¿Q­¥½q?_Ĭ¼Ð)3õļ%dv', DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english], CHECK_EXPIRATION=OFF, CHECK_POLICY=ON GO CREATE USER [testbedUser] FOR LOGIN [testbedUser] WITH DEFAULT_SCHEMA=[dbo] GO EXEC sp_addrolemember N'db_owner', N'testbedUser' GO /* Now setup the schema */ CREATE TABLE dbo.Table1 ( t1Id INT NOT NULL PRIMARY KEY ); GO INSERT INTO dbo.Table1 (t1Id) VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10); GO ``` With our schema and sample data in place, we are now ready to create our TVP stored procedure: ``` CREATE TYPE T1Ids AS Table ( t1Id INT ); GO CREATE PROCEDURE dbo.FindMatchingRowsInTable1( @Table1Ids AS T1Ids READONLY ) AS BEGIN SET NOCOUNT ON; SELECT Table1.t1Id FROM dbo.Table1 AS Table1 JOIN @Table1Ids AS paramTable1Ids ON Table1.t1Id = paramTable1Ids.t1Id; END GO ``` With both our schema and API in place, we can call the TVP stored procedure from our program like so: ``` // Curry the TVP data DataTable t1Ids = new DataTable( ); t1Ids.Columns.Add( "t1Id", typeof( int ) ); int[] listOfIdsToFind = new[] {1, 5, 9}; foreach ( int id in listOfIdsToFind ) { t1Ids.Rows.Add( id ); } // Prepare the connection details SqlConnection testbedConnection = new SqlConnection( @"Data Source=.\SQLExpress;Initial Catalog=TestbedDb;Persist Security Info=True;User ID=testbedUser;Password=letmein12;Connect Timeout=5" ); try { testbedConnection.Open( ); // Prepare a call to the stored procedure SqlCommand findMatchingRowsInTable1 = new SqlCommand( "dbo.FindMatchingRowsInTable1", testbedConnection ); 
findMatchingRowsInTable1.CommandType = CommandType.StoredProcedure; // Curry up the TVP parameter SqlParameter sqlParameter = new SqlParameter( "Table1Ids", t1Ids ); findMatchingRowsInTable1.Parameters.Add( sqlParameter ); // Execute the stored procedure SqlDataReader sqlDataReader = findMatchingRowsInTable1.ExecuteReader( ); while ( sqlDataReader.Read( ) ) { Console.WriteLine( "Matching t1ID: {0}", sqlDataReader[ "t1Id" ] ); } } catch ( Exception e ) { Console.WriteLine( e.ToString( ) ); } /* Output: * Matching t1ID: 1 * Matching t1ID: 5 * Matching t1ID: 9 */ ``` There is probably a less painful way to do this using a more abstract API, such as Entity Framework. However, I do not have the time to see for myself at this time.
Passing an array of parameters to a stored procedure
[ "", "sql", "sql-server-2005", "stored-procedures", "" ]
Let's say we have the following *file* and [folder] structure in a project with a main namespace of `MyNamespace`:

* [Entities]
  + *Article.cs*
  + *Category.cs*
* [Interfaces]
  + *IReviewable.cs*
  + *ISearchable.cs*
* *Enumerations.cs*

According to ReSharper's suggestions, the namespace of the classes `Article` and `Category` should be **MyNamespace.Entities**, the namespace of `IReviewable` and `ISearchable` should be **MyNamespace.Interfaces**, and the namespace for the `Enumerations` class should be simply **MyNamespace**. This is because ReSharper's suggestions are based on the folder structure: the suggested namespace depends on where the file is located within that structure.

---

What do you think of the above namespaces? Do you think that it is correct to base the namespaces of classes (interfaces etc...) solely on their folder location? Or do you think that namespace declaration shouldn't depend solely on the folder structure?

Personally, I would put all the above files under the single **MyNamespace**, since they are all kind-of related to one another.
I think ReSharper's suggestions are fine. I think it's a mistake to group classes, etc., by what they are instead of what they do. An analogy is grouping documents in subfolders Word, Excel, etc. instead of by Project or some other functional grouping.
Note that ReSharper adds a "Namespace Provider" property to each folder in your project. You can set that property to false for a folder that you want to use for organization, but to not contribute to namespaces.
Properly assigning Namespaces (with ReSharper's suggestions)
[ "", "c#", "visual-studio", "namespaces", "resharper", "" ]
If I have 50,000-100,000 product skus with accompanying information, including specifications and descriptions, that needs to be updated on a regular basis (at least once a day), is XML the best way to go as a data interchange format? The application is written in PHP, and I'm thinking SimpleXML to PHP's native MySQL calls (as opposed to using application hooks to dump data into the appropriate location in the DB). The server will be Linux-based, and I will have full root access. I know this is a rather generic question, which is why I made it community wiki -- I'm looking for an overall approach that is considered best practice. If it matters the application is Magento.
You have to define the parameters of "best" for your given scenario. XML is verbose, which means two things

* You can supply a lot of detail about the data, including metadata
* Filesize is going to be big

The other advantage you gain with XML is more advanced parsing/selection "out-of-the-box" with tools like XPath. But there are many other formats you could choose, each with their own advantages and disadvantages

* [Database Dump](http://en.wikipedia.org/wiki/Database_dump)
* [Data Interchange Format](http://en.wikipedia.org/wiki/Data_Interchange_Format)
* [CSV](http://en.wikipedia.org/wiki/Comma-separated_values)
* [JSON](http://en.wikipedia.org/wiki/Json)
* [Serialized PHP](http://php.net/manual/en/function.serialize.php)

And several others. My point is, that you need to figure out what's important to your system (speed? character-set support? human-readability?) and choose a format that's going to be compatible for both sides.
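The verbosity point is easy to quantify by writing the same record in two of the listed formats. The field names and values below are invented for illustration, not taken from any particular product schema (and Java stands in for the asker's PHP):

```java
class SizeDemo {
    public static void main(String[] args) {
        // One product record, once with self-describing tags, once positional.
        String xml = "<product><sku>12345</sku><name>Widget</name><price>9.99</price></product>";
        String csv = "12345,Widget,9.99";
        // The XML repeats every field name (twice) per record; the CSV relies
        // on an agreed column order. Same data, several times the size.
        System.out.println(xml.length() + " vs " + csv.length());
    }
}
```

Multiplied across 50,000-100,000 SKUs per day, that per-record overhead is what turns into real transfer and parse time, which is why "best" depends on whether you need the metadata.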
The only real down side to XML is that it is very verbose. XML files are generally very large compared to other formats. The upside is that it is relatively easy to read (for people) and parse (for software). With only 100K records (without knowing the size of each record) I think I would go with XML.
Is XML the best data interchange format when importing large amounts of data as a scripted routine in PHP?
[ "", "php", "xml", "import", "magento", "" ]
I found this function in the header file of an abstract class:

```
virtual ostream & print( ostream & out ) const;
```

Can anyone tell me what kind of function this is and how to declare it in a derived class? From what I can tell, it looks like it returns a reference to an ostream. If I implement it in my .cc file with an empty body, I get a compiler error:

`error: expected constructor, destructor, or type conversion before ‘&’ token`

Can someone show me a simple implementation of how to use it?
Some implementation:

```
ostream& ClassA::print( ostream& out) const
{
    out << myMember1 << myMember2;
    return out;
}
```

Returning the same ostream allows combinations like

```
a.print( myStream) << someOtherVariables;
```

However, it is still strange to use it this way.

Regarding the error: ostream is part of the std namespace, and not part of the global namespace or the namespace the class you're referring to is part of.
You probably forgot to include `iostream` which makes `ostream` visible. You also need to change this into `std::ostream`, because C++ standard library names are within the namespace `std`. > Do **not** write `using namespace std;` in a header-file, ever! It's ok to place it into the implementation file, if you want, or if you write up an example for a friend. Because any file that includes that header will have all of the standard library visible as global names, which is a huge mess and smells a lot. It suddenly increases the chance for name-clashes with other global names or other `using`'ed names - i would avoid using directives at all (see [Using me](http://www.ddj.com/cpp/184401782) by Herb Sutter). So change the code into this one ``` #include <iostream> // let ScaryDream be the interface class HereBeDragons : public ScaryDream { ... // mentioning virtual in the derived class again is not // strictly necessary, but is a good thing to do (documentary) virtual std::ostream & print( std::ostream & out ) const; ... }; ``` And in the implementation file (".cpp") ``` #include "HereBeDragons.h" // if you want, you could add "using namespace std;" here std::ostream & HereBeDragons::print( std::ostream & out ) const { return out << "flying animals" << std::endl; } ```
How to implement 'virtual ostream & print( ostream & out ) const;'
[ "", "c++", "" ]
Which is a better choice on a development box if you primarily develop Asp.Net applications and SSRS reports. I have never had to use the Express editions, so I don't really know the pros or cons. The cons I have listed for Standard+ editions are: 1. toll it takes on system resources 2. pain to attach database for projects 3. pain to detach unused databases 4. $$$ Pros: 1. You have everything you need 2. Management Studio features 3. Easy move to production
Are you talking about for your dev machine, or for production? If it's your dev machine I would just pony up the ~$50USD for the [developer sku](https://rads.stackoverflow.com/amzn/click/com/B001B8EZR4), the only caveat is to make sure you don't make use of enterprise features unless you will have enterprise in prod.
I don't have experience with the 2008 versions as yet, but I've used both the 2005 and 2000 equivalent (MSDE) on live production projects.

The codebase for both of these is essentially the same as the full-blown product, but with restrictions on usage and the absence of some tools - the latter of which can generally be worked around with 3rd-party replacements.

If the number of concurrent users is low, and the database is unlikely to grow that large, then generally the express versions are fine. It's a little more hassle to manage than having the full edition to hand, but the cost saving is significant. "Low" and "that large" are of course elastic, but for example we have a real estate application that runs in several offices with half a dozen users and a couple of tables with a million rows, and performance and management are perfectly fine.
SQL Server 2008 : Standard or SQL Express
[ "", "sql", "sql-server-2008", "" ]
I have a table which contains a sequence number.

Table structure:

```
SequenceGenerator
Year int
Month int
NextNumber int
```

Year + Month make the primary key. The sequence is reset every month. I'm using SubSonic to generate the DAL. To get the next sequence number I've written a class which returns the next number to requestors:

```
private static readonly object _lock = new Object();
private static readonly string FormatString = "{0}{1}{2}{3}";
private static readonly string NumberFormat = "000000";

public static object GetNextNumber(string prefix)
{
    lock (_lock)
    {
        int yr = DateTime.Now.Year;
        int month = DateTime.Now.Month;

        SequenceGeneratorCollection col = new SequenceGeneratorCollection()
            .Where(SequenceGenerator.Columns.Year, Comparison.Equals, yr)
            .Where(SequenceGenerator.Columns.Month, Comparison.Equals, month)
            .Load();

        if (col == null || col.Count == 0)
        {
            SequenceGenerator tr = new SequenceGenerator();
            tr.Year = yr;
            tr.Month = month;
            tr.NextNumber = 1;
            tr.Save();
            return string.Format(FormatString, prefix, yr, month, tr.NextNumber.ToString(NumberFormat));
        }

        SequenceGenerator t = col[0];
        t.NextNumber += 1;
        t.Save();
        return string.Format(FormatString, prefix, yr, month, t.NextNumber.ToString(NumberFormat));
    }
}
```
This lock won't work when you have more than one client locking different \_lock objects. You should use the database's locking mechanisms for this.
This locking is really risky, you should use database level transaction if you want to ensure the data remains coherent. The lock(\_lock) does not protect you against having two app domains talking to the DB at the same time.
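The failure mode in both answers is the same: a non-atomic read-modify-write. Across processes, the fix is to let the database increment in a single statement inside a transaction (e.g. an `UPDATE ... SET NextNumber = NextNumber + 1` that also returns the new value) rather than load-increment-save. The same idea inside one process looks like this Java sketch with an atomic counter (illustrative only, not the poster's C# stack):

```java
import java.util.concurrent.atomic.AtomicInteger;

class AtomicDemo {
    // Each thread performs perThread increments; the total is exact because
    // incrementAndGet is a single indivisible read-modify-write, with no
    // window between reading the old value and storing the new one.
    static int run(int threads, int perThread) {
        AtomicInteger next = new AtomicInteger(0);
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    next.incrementAndGet();
                }
            });
            workers[t].start();
        }
        try {
            for (Thread w : workers) w.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
        return next.get();
    }

    public static void main(String[] args) {
        System.out.println(AtomicDemo.run(4, 10_000)); // 40000, every time
    }
}
```

The in-memory `lock (_lock)` in the question plays the role of this atomic primitive, but only within one process; a second web server or app domain has its own `_lock`, which is why the increment must ultimately be atomic at the database.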
Is this locking Okay?
[ "", "c#", "synchronization", "" ]
I have the following servlet code:

```
public void doPost(HttpServletRequest request, HttpServletResponse response){
    Backup bup = new Backup();
    bup.doBackup();
    response.setContentType("text/html");
    PrintWriter out;
    try {
        out = response.getWriter();
        out.println("You backed up your data - well hopefully");
        out.flush();
        out.close();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
```

And the following stack trace:

> javax.servlet.ServletException: Class view.BackupServlet is not a Servlet
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
> java.lang.Thread.run(Thread.java:619)
>
> root cause
>
> java.lang.ClassCastException: view.BackupServlet cannot be cast to javax.servlet.Servlet
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
> java.lang.Thread.run(Thread.java:619)

and the web.xml is

```
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
  <display-name>Backup</display-name>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
  <servlet>
    <description>BackupServlet</description>
    <display-name>BackupServlet</display-name>
    <servlet-name>BackupServlet</servlet-name>
    <servlet-class>view.BackupServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>BackupServlet</servlet-name>
    <url-pattern>/BackupServlet</url-pattern>
  </servlet-mapping>
</web-app>
```
You haven't shown your class declaration - my guess is that your class doesn't extend `Servlet` or `HttpServlet`. If it *does*, then check how many different servlet.jar files you have in your deployment - it could be that it's being loaded by two different classloaders.
Your class containing the method `doPost` must implement `javax.servlet.Servlet`, preferably by extending `javax.servlet.http.HttpServlet`:

```
public class BackupServlet extends HttpServlet {

    public void doPost(HttpServletRequest request,
                       HttpServletResponse response) {
        . . .
    }
}
```
My class is not a servlet error
[ "", "java", "servlets", "" ]
How do I convert from 8-bit bytes to 7-bit bytes (Base 256 to Base 128)? I am looking to do something like this:

```
public string BytesToString(byte[] input)
{

}

public byte[] StringToBytes(string input)
{

}
```

I know base64 is available, but it expands a byte array too much.
Base64 encodes 6 bits per character, producing a string which can be reliably transmitted with very little effort (modulo being careful about URLs). There is no 7-bit alphabet with the same properties - many, *many* systems will fail if they're given control characters, for example. Are you *absolutely sure* that you're not going to need to go through any such systems (including for storage)? It the extra tiny bit of space saving really enough to justify having to worry about whether something's going to change "\n" to "\r\n" or vice versa, or drop character 0? (For a storage example, 2100 bytes = 2800 chars in base64, or 2400 chars in base128. Not a massive difference IMO.) I'd strongly urge you to see whether you can find the extra storage space - it's likely to save a lot of headaches later.
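The size arithmetic in the storage example can be checked directly: Base64 emits 4 output characters per 3 input bytes, while a hypothetical base-128 coding would emit 8 characters per 7 bytes. A quick demonstration using Java's standard `java.util.Base64` (the question is C#, where `Convert.ToBase64String` produces the same expansion):

```java
import java.util.Base64;

class Base64Size {
    public static void main(String[] args) {
        byte[] data = new byte[2100];   // 2100 bytes, as in the answer's example
        String b64 = Base64.getEncoder().encodeToString(data);
        // 2100 is divisible by 3, so there is no '=' padding:
        System.out.println(b64.length());   // 2800 chars = 2100 / 3 * 4
        System.out.println(2100 / 7 * 8);   // 2400 chars for a base-128 scheme
    }
}
```

So the base-128 scheme saves about 14% over Base64, which is the "tiny bit of space" the answer weighs against the robustness of sticking to the 64-character alphabet.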
Is [UTF-7](http://en.wikipedia.org/wiki/UTF-7) what you're looking for?
How do I convert from 8 bit byte to 7 bit byte (Base 256 to Base 128)
[ "", "c#", "" ]
Looking through some java code and this just does not seem right. To me, it looks like every time you call projects, you will get a new hashmap, so that this statement is always false ``` projects.get(soapFileName) != null ``` Seems like it should have a backing field ``` public static HashMap<String,WsdlProject> projects = new HashMap<String,WsdlProject>(); public Object[] argumentsFromCallSoapui(CallT call, Vector<String> soapuiFiles, HashMap theDPLs,int messageSize) { try { for (String soapFileName:soapuiFiles){ System.out.println("Trying "+soapFileName); WsdlProject project ; if (projects.get(soapFileName) != null){ project = projects.get(soapFileName); } else { project = new WsdlProject(soapFileName); projects.put(soapFileName,project); } } } ... } ```
Nope. In Java that static variable only gets initialized once. So, this line will only get called once. ``` public static HashMap<String,WsdlProject> projects = new HashMap<String,WsdlProject> (); ```
The *projects* variable will be initialized once, when the class first loads. Generally, static maps of this sort are a bad idea: they often turn into memory leaks, as you hold entries long past their useful life. In this particular case, I'd also worry about thread safety. If you have multiple threads calling this method (which is likely in code dealing with web services), you'll need to synchronize access to the map or you could corrupt it. And, in a general stylistic note, it's a good idea to define variables using the least restrictive class: in this case, the interface *Map*, rather than the concrete class *HashMap*.
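Both points, the one-time static initialization and the thread-safety concern, can be shown in a few lines. In this sketch the `String` values stand in for the original `WsdlProject` instances, and `ConcurrentHashMap.computeIfAbsent` replaces the get/null-check/put sequence with one atomic step:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ProjectCache {
    static int initCount = 0;
    // Field initializers and the static block below run exactly once,
    // when the class is first loaded -- not on every method call.
    static final Map<String, String> projects = new ConcurrentHashMap<>();
    static { initCount++; }

    // Atomic check-then-create: safe under concurrent callers, unlike the
    // original get / null-check / put sequence on a plain HashMap.
    static String get(String name) {
        return projects.computeIfAbsent(name, n -> "project:" + n);
    }

    public static void main(String[] args) {
        System.out.println(ProjectCache.get("a.wsdl"));
        System.out.println(ProjectCache.get("a.wsdl")); // same cached value
        System.out.println(ProjectCache.initCount);     // 1
    }
}
```

This keeps the caching behavior of the original code while removing the race; the memory-leak concern stands either way, since entries are never evicted from a static map.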
Static method in Java
[ "", "java", "static-methods", "" ]
I'm looking for some "inference rules" (similar to set operation rules or logic rules) which I can use to reduce a SQL query in complexity or size. Does something like that exist? Any papers, any tools? Any equivalences that you have found on your own?

It's somewhat similar to query optimization, but not in terms of performance.

To state it differently: given a (complex) query with JOINs, SUBSELECTs, and UNIONs, is it possible (or not) to reduce it to a simpler, equivalent SQL statement that produces the same result by applying some transformation rules?

So, I'm looking for equivalent transformations of SQL statements, like the fact that most SUBSELECTs can be rewritten as a JOIN.
> To state it differently: given a (complex) query with JOINs, SUBSELECTs, and UNIONs, is it possible (or not) to reduce it to a simpler, equivalent SQL statement that produces the same result by applying some transformation rules?

---

*This answer was written in 2009. Some of the query optimization tricks described here are obsolete by now, others can be made more efficient, yet others still apply. The statements about feature support by different database systems apply to versions that existed at the time of this writing.*

---

That's exactly what optimizers do for a living (not that I'm saying they always do this well). Since SQL is a set-based language, there is usually more than one way to transform one query into another.

Like this query:

```
SELECT  *
FROM    mytable
WHERE   col1 > @value1
        OR col2 < @value2
```

can be transformed into this one (provided that `mytable` has a primary key):

```
SELECT  *
FROM    mytable
WHERE   col1 > @value1
UNION
SELECT  *
FROM    mytable
WHERE   col2 < @value2
```

or this one:

```
SELECT  mo.*
FROM    (
        SELECT  id
        FROM    mytable
        WHERE   col1 > @value1
        UNION
        SELECT  id
        FROM    mytable
        WHERE   col2 < @value2
        ) mi
JOIN    mytable mo
ON      mo.id = mi.id
```

, which looks uglier but can yield better execution plans.

One of the most common things to do is replacing this query:

```
SELECT  *
FROM    mytable
WHERE   col IN
        (
        SELECT  othercol
        FROM    othertable
        )
```

with this one:

```
SELECT  *
FROM    mytable mo
WHERE   EXISTS
        (
        SELECT  NULL
        FROM    othertable o
        WHERE   o.othercol = mo.col
        )
```

In some RDBMS's (like PostgreSQL 8.4), `DISTINCT` and `GROUP BY` use different execution plans, so sometimes it's better to replace the one with the other:

```
SELECT  mo.grouper,
        (
        SELECT  SUM(col)
        FROM    mytable mi
        WHERE   mi.grouper = mo.grouper
        )
FROM    (
        SELECT  DISTINCT grouper
        FROM    mytable
        ) mo
```

vs.

```
SELECT  grouper, SUM(col)
FROM    mytable
GROUP BY
        grouper
```

In PostgreSQL, `DISTINCT` sorts and `GROUP BY` hashes.
MySQL 5.6 lacks `FULL OUTER JOIN`, so it can be rewritten as follows:

```
SELECT  t1.col1, t2.col2
FROM    table1 t1
FULL OUTER JOIN
        table2 t2
ON      t1.id = t2.id
```

vs.

```
SELECT  t1.col1, t2.col2
FROM    table1 t1
LEFT JOIN
        table2 t2
ON      t1.id = t2.id
UNION ALL
SELECT  NULL, t2.col2
FROM    table1 t1
RIGHT JOIN
        table2 t2
ON      t1.id = t2.id
WHERE   t1.id IS NULL
```

, but see this article in my blog on how to do this more efficiently in MySQL:

* [**Emulating `FULL OUTER JOIN` in MySQL**](http://explainextended.com/2009/04/06/emulating-full-outer-join-in-mysql/)

This hierarchical query in Oracle 11g:

```
SELECT  DISTINCT(animal_id) AS animal_id
FROM    animal
START WITH
        animal_id = :id
CONNECT BY
        PRIOR animal_id IN (father, mother)
ORDER BY
        animal_id
```

can be transformed to this:

```
SELECT  DISTINCT(animal_id) AS animal_id
FROM    (
        SELECT  0 AS gender, animal_id, father AS parent
        FROM    animal
        UNION ALL
        SELECT  1, animal_id, mother
        FROM    animal
        )
START WITH
        animal_id = :id
CONNECT BY
        parent = PRIOR animal_id
ORDER BY
        animal_id
```

, the latter one being more efficient.
See this article in my blog for the execution plan details: * [**Genealogy query on both parents**](http://explainextended.com/2009/05/24/genealogy-query-on-both-parents/) To find all ranges that overlap the given range, you can use the following query: ``` SELECT * FROM ranges WHERE end_date >= @start AND start_date <= @end ``` , but in SQL Server this more complex query yields same results faster: ``` SELECT * FROM ranges WHERE (start_date > @start AND start_date <= @end) OR (@start BETWEEN start_date AND end_date) ``` , and believe it or not, I have an article in my blog on this too: * [**Overlapping ranges: SQL Server**](http://explainextended.com/2009/06/30/overlapping-ranges-sql-server/) SQL Server 2008 also lacks an efficient way to do cumulative aggregates, so this query: ``` SELECT mi.id, SUM(mo.value) AS running_sum FROM mytable mi JOIN mytable mo ON mo.id <= mi.id GROUP BY mi.id ``` can be more efficiently rewritten using, Lord help me, cursors (you heard me right: "cursors", "more efficiently" and "SQL Server" in one sentence). 
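The two overlap predicates above - the simple `end_date >= @start AND start_date <= @end` form and the rewritten `OR` form - are logically equivalent as long as each range is well-formed (start <= end). A brute-force check of that equivalence in Python (illustrative only, outside SQL Server):

```python
def overlaps_simple(s, e, qs, qe):
    # end_date >= @start AND start_date <= @end
    return e >= qs and s <= qe

def overlaps_rewritten(s, e, qs, qe):
    # (start_date > @start AND start_date <= @end)
    #   OR (@start BETWEEN start_date AND end_date)
    return (s > qs and s <= qe) or (s <= qs <= e)

vals = range(0, 6)
for s in vals:
    for e in vals:
        if e < s:
            continue  # only well-formed stored ranges
        for qs in vals:
            for qe in vals:
                if qe < qs:
                    continue  # only well-formed query ranges
                assert overlaps_simple(s, e, qs, qe) == overlaps_rewritten(s, e, qs, qe)
print("predicates agree on all well-formed ranges")
```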
See this article in my blog on how to do it: * [**Flattening timespans: SQL Server**](http://explainextended.com/2009/06/11/flattening-timespans-sql-server/) There is a certain kind of query, commonly met in financial applications, that pulls effective exchange rate for a currency, like this one in Oracle 11g: ``` SELECT TO_CHAR(SUM(xac_amount * rte_rate), 'FM999G999G999G999G999G999D999999') FROM t_transaction x JOIN t_rate r ON (rte_currency, rte_date) IN ( SELECT xac_currency, MAX(rte_date) FROM t_rate WHERE rte_currency = xac_currency AND rte_date <= xac_date ) ``` This query can be heavily rewritten to use an equality condition which allows a `HASH JOIN` instead of `NESTED LOOPS`: ``` WITH v_rate AS ( SELECT cur_id AS eff_currency, dte_date AS eff_date, rte_rate AS eff_rate FROM ( SELECT cur_id, dte_date, ( SELECT MAX(rte_date) FROM t_rate ri WHERE rte_currency = cur_id AND rte_date <= dte_date ) AS rte_effdate FROM ( SELECT ( SELECT MAX(rte_date) FROM t_rate ) - level + 1 AS dte_date FROM dual CONNECT BY level <= ( SELECT MAX(rte_date) - MIN(rte_date) FROM t_rate ) ) v_date, ( SELECT 1 AS cur_id FROM dual UNION ALL SELECT 2 AS cur_id FROM dual ) v_currency ) v_eff LEFT JOIN t_rate ON rte_currency = cur_id AND rte_date = rte_effdate ) SELECT TO_CHAR(SUM(xac_amount * eff_rate), 'FM999G999G999G999G999G999D999999') FROM ( SELECT xac_currency, TRUNC(xac_date) AS xac_date, SUM(xac_amount) AS xac_amount, COUNT(*) AS cnt FROM t_transaction x GROUP BY xac_currency, TRUNC(xac_date) ) JOIN v_rate ON eff_currency = xac_currency AND eff_date = xac_date ``` Despite being bulky as hell, the latter query is six times as fast. The main idea here is replacing `<=` with `=`, which requires building an in-memory calendar table to join with. * [**Converting currencies**](http://explainextended.com/2009/05/27/converting-currencies/)
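The `IN` → `EXISTS` rewrite shown near the top of this answer is easy to sanity-check against an in-memory SQLite database (a sketch with made-up data; SQLite is used here only to demonstrate that the two forms select the same rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (col INTEGER);
    CREATE TABLE othertable (othercol INTEGER);
    INSERT INTO mytable VALUES (1), (2), (3), (4);
    INSERT INTO othertable VALUES (2), (4), (4), (7);
""")

in_rows = conn.execute(
    "SELECT col FROM mytable WHERE col IN (SELECT othercol FROM othertable) ORDER BY col"
).fetchall()

exists_rows = conn.execute(
    """SELECT col FROM mytable mo
       WHERE EXISTS (SELECT NULL FROM othertable o WHERE o.othercol = mo.col)
       ORDER BY col"""
).fetchall()

print(in_rows, exists_rows)  # same rows either way
```

One caveat worth remembering: the positive forms agree, but `NOT IN` and `NOT EXISTS` can disagree when the subquery column contains NULLs.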
Here's a few from working with Oracle 8 & 9 (of course, sometimes doing the opposite might make the query simpler or faster): Parentheses can be removed if they are not used to override operator precedence. A simple example is when all the boolean operators in your `where` clause are the same: `where ((a or b) or c)` is equivalent to `where a or b or c`. A sub-query can often (if not always) be *merged with the main query* to simplify it. In my experience, this often improves performance considerably: ``` select foo.a, bar.a from foomatic foo, bartastic bar where foo.id = bar.id and bar.id = ( select ban.id from bantabulous ban where ban.bandana = 42 ) ; ``` is equivalent to ``` select foo.a, bar.a from foomatic foo, bartastic bar, bantabulous ban where foo.id = bar.id and bar.id = ban.id and ban.bandana = 42 ; ``` Using *ANSI joins* separates a lot of "code monkey" logic from the really interesting parts of the where clause: The previous query is equivalent to ``` select foo.a, bar.a from foomatic foo join bartastic bar on bar.id = foo.id join bantabulous ban on ban.id = bar.id where ban.bandana = 42 ; ``` If you want to check for the existence of a row, don't use *count(\*)*, instead use either `rownum = 1` or put the query in a `where exists` clause to fetch only one row instead of all.
General rules for simplifying SQL statements
[ "", "sql", "logic", "complexity-theory", "reduction", "" ]
In PHP, is there any way to clear/remove all previously echoed or printed items?

For example:

```
<?php
echo 'a';
print 'b';

// some statement that removes all printed/echoed items
echo 'c';

// the final output should be equal to 'c', not 'abc'
?>
```

My script uses the include function. The included files are not supposed to echo anything. Just in case someone (e.g. a hacker) tries, I need a way to remove that output.
``` <?php ob_start(); echo 'a'; print 'b'; // some statement that removes all printed/echoed items ob_end_clean(); echo 'c'; // the final output is equal to 'c', not 'abc' ?> ``` [Output buffering functions](http://php.net/manual/en/ref.outcontrol.php) The output buffering functions are also useful in hackery to coerce functions that only print to return strings, ie. ``` <?php ob_start(); var_dump($myVar); $data = ob_get_clean(); // do whatever with $data ?> ```
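For comparison only (this is not a PHP mechanism), the same capture-and-discard idea can be sketched in Python, where `contextlib.redirect_stdout` plays roughly the role of `ob_start()`/`ob_end_clean()`:

```python
import io
from contextlib import redirect_stdout

buffer = io.StringIO()
with redirect_stdout(buffer):   # roughly ob_start(): output goes to the buffer
    print("a", end="")
    print("b", end="")

captured = buffer.getvalue()    # roughly ob_get_clean(): grab the buffered text
print("c", end="")              # only this reaches the real output stream
```

The captured text can then be inspected, logged, or simply thrown away, just like the buffered output in the PHP example.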
While @monoxide is right, it's better to find more intuitive ways of doing the same thing, e.g.:

```
<?php
$val_to_print = $a;

if( $need_to_change==true )
    $val_to_print = $b;

// when you are sure you won't have to change again...
echo $val_to_print;
?>
```

Cheers, jrh
How to clear previously echoed items in PHP
[ "", "php", "output", "echo", "output-buffering", "" ]
GCC seems to allow "and" / "or" to be used instead of "&&" / "||" in C++ code; however, as I expected, many compilers (notably MSVC 7) do not support this. The fact that GCC allows this has caused some annoyances for us in that we have different developers working on the same code base on multiple platforms, and occasionally these "errors" slip in as people are switching back and forth between Python and C++ development. Ideally, we would all remember to use the appropriate syntax, but for those situations where we occasionally mess up, it would be really nice if GCC didn't let it slide. Anybody have any ideas on approaches to this? If "and" and "or" are simply #defines then I could #undef them when using GCC, but I worry that it is more likely built into the compiler at a more fundamental level. Thanks.
They are part of the C++ standard, see for instance [this StackOverflow answer](https://stackoverflow.com/questions/555505/c-alternative-tokens/555517#555517) (which quotes the relevant parts of the standard). Another answer in the same question mentions how to do the opposite: make them work in MSVC. To disable them in GCC, use [`-fno-operator-names`](http://gcc.gnu.org/onlinedocs/gcc-4.4.0/gcc/C_002b_002b-Dialect-Options.html). Note that, by doing so, you are in fact switching to a non-standard dialect of C++, and there is a risk that you end up writing code which might not compile correctly on standard-compliant compilers (for instance, if you declare a variable with a name that would normally be reserved).
The words are standard in C++ without the inclusion of any header. The words are standard in C if you include the header `<iso646.h>`. MSVC is doing you no service by not supporting the standards. You could, however, use tools to enforce the non-use of the keywords. And it can be a coding guideline, and you can quickly train your team not to make silly portability mistakes. It isn't that hard to avoid the problem.
Is it possible to turn off support for "and" / "or" boolean operator usage in gcc?
[ "", "c++", "gcc", "compiler-construction", "syntax", "boolean", "" ]
I have an XSL transformation which outputs HTML. In the `head` element I have a CSS file reference. ``` <link rel="stylesheet" type="text/css" href="css/styles.css"/> ``` I would like to create a standalone HTML result without external references and thus I would like to include external CSS references. To prevent code duplication, I do not want to hard code the styles into the XSLT template, so I am looking for some XSLT command to copy the file contents of the CSS file. I know `xsl:include` or `xsl:import` won't work, since they expect XSLT files. Neither does ``` <xsl:copy-of select="document('css/styles.css')"/> ``` as it expects something XML compliant. I also have some JavaScript function declarations which I would like to copy as well. Is this possible with pure XSLT, or will I have to do some pre-processing of the XSLT file (or post-processing of the HTML file)?
XSLT 2.0 provides the `unparsed-text()` function to read non-XML documents via URL. In XSLT 1.0, if you don't need to be too strict about the CSS, you can use the following to make the CSS file XML-compatible. And, fortunately, the browsers tolerate the HTML comments.

CSS

```
<!--/*--><root><![CDATA[<!--*/-->

body { margin: 0; }
div > p { background-color: yellow; }

<!--/*-->]]></root><!--*/-->
```

XSLT

```
<style type="text/css">
  <xsl:value-of select="document('test.css')" disable-output-escaping="yes" />
</style>
```
Use a processing instruction to wrap the CSS content: ``` <?xml version="1.0" encoding="utf-8"?> <root> <?wrapper html <html> <link rel="stylesheet" type="text/css" href="css/styles.css"/> </html> ?> </root> ``` Then tweak the existing `xsl:copy-of` select statement to render it: ``` <xsl:copy-of select="document('css/styles.css')//processing-instruction()"/> ```
How to copy external CSS and JavaScript in XSLT
[ "", "javascript", "css", "debugging", "xslt", "include", "" ]
I've been brushing up on my C++ as of late, and I have a quick question regarding the deletion of new'd memory. As you can see below, I have a simple class that holds a list of FileData \*. I created an array to hold the FileData objects to be pushed into the list. When ReportData is destructed I loop through the list and delete each element. My question is, how can I delete the array when I'm done using reportData, so that I do not have any memory leaks?

## Report.h

```
class REPORTAPI ReportData {
public:
    ReportData() { }

    virtual ~ReportData() {
        printf("Starting ReportData Delete\n");
        for (list<FileData*>::iterator i = ReportFileData.begin(), e = ReportFileData.end(); i != e; ) {
            list<FileData*>::iterator tmp(i++);
            delete *tmp;
            ReportFileData.erase(tmp);
        }
        for (list<SupressionData*>::iterator i = ReportSupressionData.begin(), e = ReportSupressionData.end(); i != e; ) {
            list<SupressionData*>::iterator tmp(i++);
            delete *tmp;
            ReportSupressionData.erase(tmp);
        }
        ReportFileData.clear();
        ReportSupressionData.clear();
        printf("Finished ReportData Delete\n");
    }

    list<FileData *> ReportFileData;
    list<SupressionData *> ReportSupressionData;
};

extern "C" __declspec(dllexport) FileData* __stdcall createFileData(string fileName, long recordCount, long addPageCount) {
    return new FileData(fileName, recordCount, addPageCount);
}
```

## Main.cpp

```
ReportData *reportData = createrd();
if (reportData != NULL) {
    CreateFileDataFunc createfd (reinterpret_cast<CreateFileDataFunc>(GetProcAddress (dll, "createFileData")));

    const int num_files = 5;
    FileData *fileData[num_files];
    char buff[256] = {'\0'};

    for (int i = 0; i < num_files; i++) {
        sprintf(buff, "test: %d", i);
        fileData[i] = createfd(buff, 1, 1);
        reportData->ReportFileData.push_back(fileData[i]);
    }

    delete reportData;
    reportData = NULL;

    delete [] fileData; // this is throwing an access violation error:
    //EAccessViolation: 'Access violation at address 326025AF. Write of address 00000008'.
}
```

---

I removed the delete operations from the `ReportData` dtor and I'm now looping and deleting:

```
for(int i = 0; i < num_files; i++) {
    delete fileData[i];
}
```

This is easier to understand than having to rely on a separate object's dtor to clean up memory.
You don't. fileData is an automatic (stack) variable. You didn't allocate it with new, so you don't delete it. [Edit: also I'm not sure, but I think you could face problems deleting those FileData objects from main.cpp, considering that they were allocated in some dll. Does the dll provide a deleter function?]
Your array is not dynamically allocated, so you don't need to delete it. Each element, however, is pointing to a dynamically allocated object (from your comment): > createfd is a function pointer that returns a new instance of FileData though What you need to do is loop over the elements of the array, and free each of them. ``` for(int i = 0; i < num_files; i++) { delete fileData[i]; } ```
How to delete an array of pointers
[ "", "c++", "memory", "" ]
I need to log in to a site, go to a particular page (e.g. my local router page) and click a button to perform an operation (e.g. Connect). Since I do it almost every day, I thought of automating it through a small C# application. I don't have any idea how to do it. Any pointers?
Why code `C#` for one click? Try [AutoIt](http://www.autoitscript.com/autoit3/). Here is a [starter tutorial](http://www.technibble.com/automation-with-autoit-tutorial-part-2/). This will help you to quickly automate clicking on the default buttons of an application. Some more tricks from AutoIt and you will be able to do almost anything you can tell someone over the phone to do on the GUI. AutoIt is a useful tool to keep handy if you are working with GUI testing or were dreaming of scripting a lot of routine GUI activity.
You can also use the [Selenium IDE](http://seleniumhq.org/projects/ide/), which is a FireFox plugin that allows you to record macro like scripts for playback in the browser. It is designed for automated testing of web pages, but you can export the script in C#, which can in turn be run from a console app. If you plan to run it as a C# app, you will also need to look at [Selenium RC](http://seleniumhq.org/projects/remote-control/). Happy scripting :)
Automate login to a site and click a button
[ "", "c#", "" ]
I've seen quite a few recommendations for not seeding pseudo-random number generators more than once per execution, but never accompanied by a thorough explanation. Of course, it is easy to see why the following (C/C++) example is not a good idea:

```
int get_rand() {
    srand(time(NULL));
    return rand();
}
```

since calling `get_rand` several times per second produces repeated results.

*But wouldn't the following example still be an acceptable solution?*

**MyRand.h**

```
#ifndef MY_RAND_H
#define MY_RAND_H

class MyRand {
public:
    MyRand();
    int get_rand() const;
private:
    static unsigned int seed_base;
};

#endif
```

**MyRand.cpp**

```
#include <ctime>
#include <cstdlib>
#include "MyRand.h"

unsigned int MyRand::seed_base = static_cast<unsigned int>(time(NULL));

MyRand::MyRand() {
    srand(seed_base++);
}

int MyRand::get_rand() const {
    return rand();
}
```

**main.cpp**

```
#include <iostream>
#include "MyRand.h"

int main(int argc, char *argv[]) {
    for (int i = 0; i < 100; i++) {
        MyRand r;
        std::cout << r.get_rand() << " ";
    }
}
```

i.e. even though `MyRand`'s constructor is called several times in rapid succession, each call to `srand` has a different parameter. Obviously, this is not thread-safe, but then again neither is `rand`.
Each time you call a pseudo-random number generator function, the generator takes some internal state and produces a pseudo-random number and a new internal state. The algorithm for transforming the internal state is carefully chosen so the output appears random. When you seed the random number generator, you're basically setting this internal state. If you reset the internal state to some predictable value, you'll lose the appearance of randomness. For example, a popular, simple RNG is a linear congruential generator. Numbers are generated like this: ``` X[n+1] = (a X[n] + c) mod m ``` In this case, X[n+1] is both the result and the new internal state. If you seed the generator every time as you suggest above, you'll get a sequence that looks like this: ``` {(ab + c) mod m, (a(b+1) + c) mod m, (a(b+2) + c) mod m, ...} ``` where b is your `seed_base`. This doesn't look random at all.
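That non-randomness is easy to demonstrate. The sketch below (Python, using the classic textbook constants a = 1103515245, c = 12345, m = 2^31 purely for illustration) seeds a linear congruential generator with consecutive seeds, as the questioner's `seed_base++` scheme does; the first outputs form a plain arithmetic progression mod m:

```python
M, A, C = 2**31, 1103515245, 12345  # illustrative LCG constants

def first_output(seed):
    # One step of X[n+1] = (a*X[n] + c) mod m -- what a freshly
    # seeded generator would return first.
    return (A * seed + C) % M

base = 1_000_000
outputs = [first_output(base + i) for i in range(5)]

# Consecutive seeds shift the first output by exactly A mod M:
# pure linear structure, nothing random about it.
diffs = [(outputs[i + 1] - outputs[i]) % M for i in range(4)]
print(outputs)
print(diffs)  # every difference equals A % M
```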
If your seed is predictable, which it is here since you're just incrementing it, the output from rand() will also be predictable. It really depends on why you want to generate random numbers, and how "random" the output needs to be for your purposes. Your example may avoid duplicates in rapid succession, and that may be good enough for you. After all, what matters is whether it meets your needs. On almost every platform there is a better way to generate random numbers than rand().
Issues with seeding a pseudo-random number generator more than once?
[ "", "c++", "random", "seed", "prng", "srand", "" ]
I am creating a custom .NET hardware framework that will be used by other programmers to control some hardware. They will add a reference to our DLL to get to our hardware framework. I am in need of a shared class that will be accessed from multiple applications (processes). The singleton pattern seems to be what I need, but it only works for multiple threads inside a single process. I could be completely wrong, but here is an example of the C# code I currently have. I can't help but feel that the design is incorrect. I wish I could share more specific information but I can't.

* I must stress that I will have no control over the customer application. The solution must be contained inside the framework (DLL) itself.

The Framework: (Shared DLL)

```
public class Resources
{
    static readonly Resources m_instance = new Resources();
    public string Data;

    private Resources()
    {
        Data = DateTime.Now.ToString();
    }

    public static Resources Instance
    {
        get { return m_instance; }
    }
}
```

The Test Application: (eventually customer app)

```
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Press enter to capture the resource!");
        Console.ReadLine();

        var resources = Resources.Instance;
        Console.WriteLine("\r\n{0}: {1}\r\n", Thread.CurrentThread.ManagedThreadId, resources.Data);

        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += WorkerDoWork;
        worker.RunWorkerAsync();

        while (worker.IsBusy)
        {
            Thread.Sleep(100);
        }

        Console.WriteLine("Press enter to close the process!");
        Console.ReadLine();
    }

    static void WorkerDoWork(object sender, DoWorkEventArgs e)
    {
        var resources = Resources.Instance;
        Console.WriteLine("\r\n{0}: {1}\r\n", Thread.CurrentThread.ManagedThreadId, resources.Data);
    }
}
```

The first launched application gives an output of:

> Press enter to capture the resource!
>
> 1: 6/24/2009 8:27:34 AM
>
> 3: 6/24/2009 8:27:34 AM
>
> Press enter to close the process!

The second application gives an output of:

> Press enter to capture the resource!
> > 9: 6/24/2009 8:27:35 AM > > 10: 6/24/2009 8:27:35 AM > > Press enter to close the process! **Conclusion**: I would like to see both applications return the same string of the time of the first instantiation of the class. As you can see the singleton works for the multiple thread inside the process but not cross processes. Maybe this can't be done for I can't seem to find any solution.
You cannot use a singleton to sync across applications. Each runs in its own application space, and as a matter of security cannot access memory/objects/etc. from the other without a method of communication (like remoting). To sync the two, they would have to remote into a third program.
Yes it is possible to share a singleton amongst several processes. However you will need to take advantage of a technology which supports interprocess communication in order to achieve this result. The most popular technologies which allow you to share out your object fairly directly are Remoting and WCF. Giving an example of sharing a singleton with either of these is beyond the scope of an SO answer. But there are many tutorials on the web for each of these. Googling either technology plus singleton should put you on the right path.
Can a Singleton Class inside a DLL be shared across processes?
[ "", "c#", "dll", "frameworks", "singleton", "shared-libraries", "" ]
I have been scratching my head over this for days and I still cannot understand how to implement this interface. Here is my code: ``` namespace ConsoleApplication32 { public static class ScanAndSerialize { public static void Serialize() { List<string> dirs = FileHelper.GetFilesRecursive("s:\\"); List<string> dirFiles = new List<string>(); foreach (string p in dirs) { string path = p; string lastAccessTime = File.GetLastAccessTime(path).ToString(); bool DirFile = File.Exists(path); DateTime lastWriteTime = File.GetLastWriteTime(p); //dirFiles.Add(p + " , " + lastAccessTime.ToString() + " , " + DirFile.ToString() + " , " + lastWriteTime.ToString()); dirFiles.Add(p); dirFiles.Add(lastAccessTime); dirFiles.Add(DirFile.ToString()); dirFiles.Add(lastWriteTime.ToString()); dirFiles.Add(Environment.NewLine); } XmlSerializer SerializeObj = new XmlSerializer(dirFiles.GetType()); string sDay = DateTime.Now.ToString("MMdd"); string fileName = string.Format(@"s:\project\{0}_file.xml", sDay); TextWriter WriteFileStream = new StreamWriter(fileName); SerializeObj.Serialize(WriteFileStream, dirFiles); WriteFileStream.Close(); } static class FileHelper { public static List<string> GetFilesRecursive(string b) { // 1. // Store results in the file results list. List<string> result = new List<string>(); // 2. // Store a stack of our directories. Stack<string> stack = new Stack<string>(); // 3. // Add initial directory. stack.Push(b); // 4. // Continue while there are directories to process while (stack.Count > 0) { // A. // Get top directory string dir = stack.Pop(); try { // B // Add all files at this directory to the result List. result.AddRange(Directory.GetFiles(dir, "*.*")); // C // Add all directories at this directory. 
foreach (string dn in Directory.GetDirectories(dir)) { stack.Push(dn); } } catch { // D // Could not open the directory } } return result; } } public class MyInterface: IValidationRowSet { public int RowNumber { get; set; } public string RowAsString { get; set; } public IValidationRowSet MatchedRow { get; set; } public string FriendlyNameLabel { get; set; } public string KeyFieldLabel { get; set; } IList<string> lst = new List<string>(); public string SourceWorksheetName { get; set; } public string SourceRangeName { get; set; } //public string SourceRangeName { get; set; } public bool bReported { get; set; } public int FieldCount { get { return lst.Count; } } public string FieldData(int id) { if (id <= lst.Count) return lst[id]; else return null; } public string ValidationMessage { get; set; } } ``` Here is an explanation of the interface (still scratching my head over this one) ``` namespace Validation { /// <summary> /// Implement this interface if you want the engine to callback when it finds exception /// messages. You will pass a reference to you class to the validation engine, and /// it will call "PostValidationMessage" for each exception example, including the message, /// the entire row set of data (vr), and the id of the field that created the exception. /// </summary> public interface IValidationReporter { /// <param name="sMsg"></param> /// <param name="vr"></param> /// <param name="id"></param> void PostValidationMessage(string sMsg, IValidationRowSet vr, int id); } /// <summary> /// Implement this interface in order to use the validation engine. /// The validation engine takes 2 IList<IValidationRowSet> objects and compares them. /// A class that implements this interface will contain an entire row of data that you'll /// want to compare. 
/// </summary> public interface IValidationRowSet { /// <summary> /// should return an int of the number of fields in this row /// </summary> int FieldCount { get; } /// <summary> /// should return an int of the row number that this row is in the set /// usually set when the data is assembled /// </summary> int RowNumber { get; set; } /// <summary> /// this is a function that should return the field data for this row at zero-indexed location "id" /// ex: if the row contains this data: smith|fred|2126782524|fred@smith.com| /// a call on this method of FieldData(2) will return the phone number 2126782524 /// </summary> /// <param name="id"></param> /// <returns></returns> string FieldData(int id); /// <summary> /// this will be modified by the validation process /// </summary> string ValidationMessage { get; set; } /// <summary> /// this will be modified by the validation process /// </summary> IValidationRowSet MatchedRow { get; set; } /// <summary> /// returns a string that uniquely identifies this row /// ex: if the row contains this data: smith|fred|2126782524|fred@smith.com| /// so for this example, the unique identifier could be the email address fred@smith.com /// </summary> string KeyFieldLabel { get; set; } /// <summary> /// returns a string with the "friendly" name of this row /// ex: if the row contains this data: smith|fred|2126782524|fred@smith.com| /// so for this example, FriendlyNameLabel could be the name, such as "Fred Smith" /// </summary> string FriendlyNameLabel { get; set; } /// <summary> /// returns all fields in the row as pipe delimited /// ex: 1,234.23|Fred Smith|Fred@smith.com| /// </summary> string RowAsString { get; set; } /// <summary> /// if this is an excel file comparison, this should return the name /// of the worksheet from whence this data came /// </summary> string SourceWorksheetName { get; set; } /// <summary> /// if this is an excel file comparison, this should return the name /// of the worksheet range from whence this data 
came /// </summary> string SourceRangeName { get; set; } /// <summary> /// this will be modified by the validation process /// </summary> bool bReported { get; set; } } } ``` I have read NUMEROUS articles/books/forum postings about Interfaces. This concept feels like a black hole to me...and i'm on a project where i have to implement this. Anybody have ANY idea how the heck you implement this? By the way--i'm a COMPLETE newbie programmer...less than 2 months experience...therefore please do not chastise me for my green-ness please. Thanks in advance.
Consider an interface to be a prototype or a template for a puzzle to be filled in - think of it as the blank spaces and outlines showing where to put the pieces. You will have to derive the interface into concrete classes - the finished puzzle picture. Here is an example:

```
interface IFoo
{
    bool DoFoo(int number);
}

class Foo : IFoo
{
    public bool DoFoo(int number)
    {
        return (number++ >= 0);
    }
}

class Foo2 : IFoo
{
    public bool DoFoo(int number)
    {
        return (number-- >= 0);
    }
}
```

Now that I have that, I can do stuff like this.

```
IFoo foo;

if (!value)
    foo = new Foo();
else
    foo = new Foo2();

bool value2 = foo.DoFoo(27);
```

Notice, I cannot do this with interfaces:

```
// WRONG
Foo2 foo2 = new Foo();
```

So that basically sums up what an interface does and how it works. Your job now is to write the concrete implementations of the interface.
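If it helps to see the same shape outside C#, here is the `IFoo`/`Foo`/`Foo2` pattern sketched in Python with the `abc` module (the method bodies are simplified for illustration, not a literal translation of the C# above):

```python
from abc import ABC, abstractmethod

class IFoo(ABC):
    # The "blank puzzle template": declares what must exist, implements nothing.
    @abstractmethod
    def do_foo(self, number: int) -> bool:
        ...

class Foo(IFoo):
    def do_foo(self, number: int) -> bool:
        return number + 1 >= 0

class Foo2(IFoo):
    def do_foo(self, number: int) -> bool:
        return number - 1 >= 0

def pick(use_second: bool) -> IFoo:
    # Callers only know about the interface; either concrete class will do.
    return Foo2() if use_second else Foo()

print(pick(False).do_foo(27), pick(True).do_foo(-1))
```

The caller works entirely through `IFoo`, which is the whole point: the validation engine in the question only ever sees `IValidationRowSet`, never your concrete class.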
As a new developer myself, I find the best way to think of an interface as a template that includes all the public information about a class (attributes, methods etc.) as a list of what needs to be created for the class implementing it to do the job that is needed for those methods etc. Now there should be a slew of people who will explain this much better than me, and correct my mistakes and hopefully explain interfaces differently to the articles/books/postings in a way that you (and hopefully I) can understand :)
Implementing Interface in code
[ "", "c#", "interface", "" ]
I am following "The Art and Science of Java" book and it shows how to calculate a leap year. The book uses ACM Java Task Force's library.

Here is the code the book uses:

```
import acm.program.*;

public class LeapYear extends ConsoleProgram {
    public void run() {
        println("This program calculates leap year.");

        int year = readInt("Enter the year: ");
        boolean isLeapYear = ((year % 4 == 0) && (year % 100 != 0) || (year % 400 == 0));

        if (isLeapYear) {
            println(year + " is a leap year.");
        } else
            println(year + " is not a leap year.");
    }
}
```

Now, this is how I calculated the leap year.

```
import acm.program.*;

public class LeapYear extends ConsoleProgram {
    public void run() {
        println("This program calculates leap year.");

        int year = readInt("Enter the year: ");

        if ((year % 4 == 0) && year % 100 != 0) {
            println(year + " is a leap year.");
        } else if ((year % 4 == 0) && (year % 100 == 0) && (year % 400 == 0)) {
            println(year + " is a leap year.");
        } else {
            println(year + " is not a leap year.");
        }
    }
}
```

Is there anything wrong with my code, or should I use the one provided by the book?

EDIT: Both of the above code samples work fine; what I want to ask is which is the better way to calculate a leap year.
The correct implementation is: ``` public static boolean isLeapYear(int year) { Calendar cal = Calendar.getInstance(); cal.set(Calendar.YEAR, year); return cal.getActualMaximum(Calendar.DAY_OF_YEAR) > 365; } ``` But if you are going to reinvent this wheel then: ``` public static boolean isLeapYear(int year) { if (year % 4 != 0) { return false; } else if (year % 400 == 0) { return true; } else if (year % 100 == 0) { return false; } else { return true; } } ```
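Both the book's one-liner and the step-by-step ladder above encode the same Gregorian rule, and since the rule is language-neutral their equivalence is easy to brute-force-check. A small illustrative sketch in Python (not Java, purely for demonstration):

```python
def is_leap_one_liner(year):
    # The book's single boolean expression ("and" binds tighter than "or").
    return (year % 4 == 0) and (year % 100 != 0) or (year % 400 == 0)

def is_leap_ladder(year):
    # The step-by-step ladder, in the same shape as the answer above.
    if year % 4 != 0:
        return False
    elif year % 400 == 0:
        return True
    elif year % 100 == 0:
        return False
    else:
        return True

# The two agree on every year in a wide range:
assert all(is_leap_one_liner(y) == is_leap_ladder(y) for y in range(1583, 4000))
```

So the choice between them is purely about readability, not correctness.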
# `java.time.Year::isLeap` I'd like to add the new [`java.time`](https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/time/package-summary.html) way of doing this with the [`Year`](https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/time/Year.html) class and [`isLeap`](https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/time/Year.html#isLeap()) method: ``` java.time.Year.of(year).isLeap(); ```
Java Code for calculating Leap Year
[ "", "java", "leap-year", "acm-java-libraries", "" ]
I'm working with a client that starts almost all of their WHERE clauses in Oracle with `1=1`. Forgive my ignorance, but isn't this a no-op? Are there any negative consequences of this usage? Here's a scrubbed example: ``` SELECT gpz.zname ,gpp.pname FROM table1 gpp INNER JOIN table2 gpz ON gpz.p_id = gpp.p_id WHERE 1=1 AND gpp.active = 1 AND gpz.active = 1 ```
It's done to simplify dynamic SQL generation. Basically each condition can be added as `AND <condition>` without treating the first condition as special (it's preceded by `WHERE`, not `AND`) or even worrying about whether there should be a `WHERE` clause at all. So just write it off as ease of use or, arguably, laziness.
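The convenience is easiest to see in the generating code. Here is a hypothetical Python sketch (the function name and condition strings are made up for illustration) of the kind of generator the answer describes - with `1=1` in place, every filter is appended the same way, and the empty-filter case needs no special handling:

```python
def build_query(filters):
    # With "1=1" anchoring the WHERE clause, every condition is appended
    # uniformly -- no special case for the first one, and zero filters
    # still yields valid SQL.
    sql = ("SELECT gpz.zname, gpp.pname FROM table1 gpp "
           "INNER JOIN table2 gpz ON gpz.p_id = gpp.p_id WHERE 1=1")
    for cond in filters:
        sql += " AND " + cond
    return sql
```

For example, `build_query([])` is still a runnable statement, and `build_query(["gpp.active = 1", "gpz.active = 1"])` reproduces the query in the question.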
If they are building the query dynamically, you should check whether they're using bind variables. Building the query from literals requires extra parsing, potentially limiting scalability, and also can greatly increase the risk of SQL Injection attacks. ``` where 1 = 1 and my_id = :b1; ``` (and then defining the value of the bind variable) is generally much better than: ``` where 1 = 1 and my_id = 123456; ```
In Oracle, is starting the SQL Query's WHERE clause with 1=1 useful?
[ "", "sql", "oracle", "" ]
Is there any cross-platform way to check that my Python script is executed with admin rights? Unfortunately, `os.getuid()` is UNIX-only and is not available under Windows.
``` import ctypes, os try: is_admin = os.getuid() == 0 except AttributeError: is_admin = ctypes.windll.shell32.IsUserAnAdmin() != 0 print(is_admin) ```
Here's a utility function I created from the accepted answer: ``` import os import ctypes class AdminStateUnknownError(Exception): """Cannot determine whether the user is an admin.""" pass def is_user_admin(): # type: () -> bool """Return True if user has admin privileges. Raises: AdminStateUnknownError if user privileges cannot be determined. """ try: return os.getuid() == 0 except AttributeError: pass try: return ctypes.windll.shell32.IsUserAnAdmin() == 1 except AttributeError: raise AdminStateUnknownError ```
Cross-platform way to check admin rights in a Python script under Windows?
[ "", "python", "privileges", "admin-rights", "" ]
If the value of an element in a set changes the ordering may be no longer correct. As illustrated in this little program: ``` #include <algorithm> #include <iostream> #include <set> #include <string> struct Comp { bool operator()(const std::string * lhs, const std::string * rhs) { return *lhs < *rhs; } }; int main() { typedef std::set<std::string*, Comp> MySet; MySet mySet; std::string * a = new std::string("a"); mySet.insert(a); std::string * c = new std::string("c"); mySet.insert(c); std::string * b = new std::string("b"); mySet.insert(b); for (MySet::iterator it = mySet.begin(); it != mySet.end(); ++it) { std::cout << *(*it) << std::endl; } // Ouput has correct order: // a // b // c *b = "z"; std::cout << std::endl; std::string * d = new std::string("d"); mySet.insert(d); for (MySet::iterator it = mySet.begin(); it != mySet.end(); ++it) { std::cout << *(*it) << std::endl; } // Output no longer ordered correctly: // a // d // z // c return 0; } ``` How can I tell the set to 'refresh' its internal sorting?
Very similar subject here (though not quite a duplicate, because you're storing pointers to mutable objects with a custom comparison): [what happens when you modify an element of an std::set?](https://stackoverflow.com/questions/908949/what-happens-when-you-modify-an-element-of-an-stdset) Basically, don't do what you're trying to do. Instead, when you want to modify an object that a `set` holds a pointer to, remove the pointer first, then modify the object, then re-insert the pointer.
Simply, you can't. If you place an item into a set, you should not change the item in a way that changes its ordering. If you need to change an item in this way then you need to remove it from the set (`set::erase`), and reinsert a new item (`set::insert`) with the new value.
How to tell std::set to 'refresh' its ordering?
[ "", "c++", "stl", "" ]
It's right there, in the package that it should be indexing. Still, when I call ``` JAXBContext jc = JAXBContext.newInstance("my.package.name"); ``` I get a JAXBException saying that > "my.package.name" doesnt contain ObjectFactory.class or jaxb.index although it does contain both. What does work, but isn't quite what I want, is ``` JAXBContext jc = JAXBContext.newInstance(my.package.name.SomeClass.class); ``` Various other people have asked this question on quite a few mailing lists and forums, but it seemingly never gets answered. I'm running this on OpenJDK 6, so I got the source packages and stepped my debugger into the library. It starts by looking for jaxb.properties, then for system properties, and, failing to find either, it tries to create the default context using com.sun.xml.internal.bind.v2.ContextFactory. In there, the Exception gets thrown (inside `ContextFactory.createContext(String, ClassLoader, Map)`), but I can't see what's going on because the source isn't here. **ETA**: Judging from the source code for ContextFactory, which I found [here](http://www.java2s.com/Open-Source/Java-Document/6.0-JDK-Modules-com.sun/xml/com/sun/xml/internal/bind/v2/ContextFactory.java.htm#createContextStringClassLoaderMapStringObject), this is probably the piece of code that fails to work as intended: ``` /** * Look for jaxb.index file in the specified package and load it's contents * * @param pkg package name to search in * @param classLoader ClassLoader to search in * @return a List of Class objects to load, null if there weren't any * @throws IOException if there is an error reading the index file * @throws JAXBException if there are any errors in the index file */ private static List<Class> loadIndexedClasses(String pkg, ClassLoader classLoader) throws IOException, JAXBException { final String resource = pkg.replace('.', '/') + "/jaxb.index"; final InputStream resourceAsStream = classLoader.getResourceAsStream(resource); if (resourceAsStream == null) { return null; } ```
From my [previous](https://stackoverflow.com/questions/1034132/can-i-put-annotations-for-multiple-processors-into-a-java-class) [experience](https://stackoverflow.com/questions/1037356/how-do-i-debug-silent-failures-in-java-applications), I'm guessing that this has to do with the class loading mechanisms of the OSGi container that this is running in. Unfortunately, I am still a little out of my depth here.
OK, this took quite some digging, but the answer is not that surprising and not even that complicated: **JAXB can't find jaxb.index, because by default, `newInstance(String)` uses the current thread's class loader (as returned by `Thread.getContextClassLoader()`). This doesn't work inside Felix, because the OSGi bundles and the framework's threads have separate class loaders.** The solution is to get a suitable class loader from somewhere and use `newInstance(String, ClassLoader)`. I got a suitable class loader from one of the classes in the package that contains `jaxb.index`, a sensible choice for flexibility reasons probably is `ObjectFactory`: ``` ClassLoader cl = my.package.name.ObjectFactory.class.getClassLoader(); JAXBContext jc = JAXBContext.newInstance("my.package.name", cl); ``` Maybe you could also get at the class loader that the `Bundle` instance is using, but I couldn't figure out how, and the above solution seems safe to me.
I faced a similar issue with the project I am working on. After reading <http://jaxb.java.net/faq/index.html#classloader> I realized that JAXBContext is not able to find the package containing jaxb.index. I will try to make this as clear as possible. We have ``` Bundle A -- com.a A.java aMethod() { B.bMethod("com.c.C"); } MANIFEST.MF Import-Package: com.b, com.c Bundle B -- com.b B.java bMethod(String className) { Class clazz = Class.forName(className); } Export-Package: com.b Bundle C -- com.c C.java c() { System.out.println("hello i am C"); } Export-Package: com.c ``` To relate this to **JAXB**: **class B is JAXBContext and bMethod is newInstance()**. If you are familiar with OSGi package restrictions then it must be very clear now that **Bundle B** is not importing package **com.c**, i.e. **class C** is **not visible** to **class B**, hence it cannot instantiate C. The solution would be to pass a **ClassLoader** to bMethod. This ClassLoader should come from a **bundle that is importing com.c**. In this case we can pass **A.class.getClassLoader()**, since **bundle A is importing com.c**. Hope this was helpful.
Why can't JAXB find my jaxb.index when running inside Apache Felix?
[ "", "java", "jaxb", "osgi", "apache-felix", "" ]
Whenever I do a commit cycle in svn, I examine the diff when writing my comments. I thought it would be really nice to show the actual function that I made the modifications in when showing the diff. I checked out [this page](http://www.xenomai.org/index.php/Teaching_-p_to_svn_diff), which mentioned that the -p option will show the C function that the change is in. When I tried using the -p option with some C++ code, however, it usually returns the access specifier (private, public, protected, etc), which isn't terribly handy. I did notice that there is a -F option for diff that does the same as -p, but takes a user-specified regex. **I was wondering: is there a simple regex to match a C++ function?** It seems like that would be all that is necessary to get this to work. I'd spend some time looking at this myself, but work is in crunch-mode and this seemed like something that a lot of people would find useful, so I figured I'd post it here. EDIT: I'm not looking for something that's a slam-dunk catch-all regex, but something that would simply find the nearest function definition above the area diff would show. The fact that it would be nowhere near perfect, and somewhat buggy is okay with me. Just as long as it works right maybe like 60% of the time would be a significant productivity improvement IMHO.
Is there a simple regex to match a C++ function? No. Is there a (complex) regex to match one? Maybe - it could be possible to write one. But I would say regular expressions are neither easily up to such a task (given you want some kind of exact match) nor the right tool for it. Just think about a [case](https://stackoverflow.com/questions/476173/regex-to-pull-out-c-function-prototype-declarations/476350#476350) like this one - how would you handle this stuff? ``` void (*function(int, void (*)(int)))(int); func1(int), func2(double); double func3(int); ``` The only real solution is to use a parser built with yacc/lex, which for your use case of course does nothing. So either hack together some incomplete regex which fits most function signatures in your code
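For the "incomplete regex that fits most signatures" route, something along these lines can serve as a starting point. This is a hypothetical sketch, tested here with Python's `re` purely for convenience - the exact regex syntax accepted by GNU `diff -F` (via `svn diff --diff-cmd`) may differ, and the pattern knowingly misses function-pointer returns and multi-line declarations like the ones above:

```python
import re

# Rough heuristic: a line starting at column 0 with type-ish tokens, then an
# identifier (possibly Class::qualified) immediately before an argument list,
# and no ';' afterwards (to skip calls and one-line prototypes).
FUNC_RE = re.compile(r'^[A-Za-z_][\w:<>,\s\*&]*\b[A-Za-z_]\w*\s*\([^;]*$')
```

It accepts common definition headers such as `void MyClass::doWork(int count)` while rejecting indented statements and calls that end in a semicolon - wrong maybe 40% of the time, which is the bargain the question is explicitly willing to make.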
If you're going to be applying this only to **your** commits I would recommend making a habit of adding a commit comment to the function, e.g: ``` void something () { ... some thing = 1; ... } ``` to ``` void something () // last change by me: a better value for thing { ... some thing = 2; ... } ``` will display for you the function *and* your comment with the edits. As a bonus, other people will be able to understand what you're doing.
Getting svn diff to show C++ function during commit
[ "", "c++", "regex", "linux", "svn", "diff", "" ]
How do you find current database's transaction level on SQL Server?
Run this: ``` SELECT CASE transaction_isolation_level WHEN 0 THEN 'Unspecified' WHEN 1 THEN 'ReadUncommitted' WHEN 2 THEN 'ReadCommitted' WHEN 3 THEN 'Repeatable' WHEN 4 THEN 'Serializable' WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL FROM sys.dm_exec_sessions where session_id = @@SPID ``` [learn.microsoft.com reference for the constant values](https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql).
just run `DBCC useroptions` and you'll get something like this: ``` Set Option Value --------------------------- -------------- textsize 2147483647 language us_english dateformat mdy datefirst 7 lock_timeout -1 quoted_identifier SET arithabort SET ansi_null_dflt_on SET ansi_warnings SET ansi_padding SET ansi_nulls SET concat_null_yields_null SET isolation level read committed ```
How to find current transaction level?
[ "", "sql", "sql-server", "sql-server-2008", "sql-server-2005", "isolation-level", "" ]
I have a web application project called Site, and in it is a folder called **SITE** (I know, not the best naming convention, but that's another story). When I check the designer-generated code it starts like this: ``` namespace Site.@__SITE__ { ``` Why is the at sign added? I can remove it and the project compiles and runs fine. Also, with or without the at sign, the actual namespace is `Site.__SITE__`. Thanks!
The @ symbol is a way of allowing names which conflict with C# keywords to be used in a C# application. All names in C# which begin with \_\_ are reserved for future use. As a result whenever the CodeDom emits code where a name is prefixed with \_\_, it will emit a @ sign in order to guarantee the name will be legal since it may conflict with a future C# keyword. The code you are seeing is likely following the same rules or just emitted as part of the CodeDom
The @ prefix is used when the identifier clashes with a built-in keyword. Presumably double-underscore identifiers are reserved for future use, so the designer is playing safe.
Why does Visual Studio add an at sign (@) to my namespace?
[ "", "c#", ".net", "asp.net", "visual-studio", "namespaces", "" ]
I'm using the following code: ``` SELECT * FROM table WHERE Col IN (123,123,222,....) ``` However, if I put more than ~3000 numbers in the `IN` clause, SQL throws an error. Does anyone know if there's a size limit or anything similar?!!
Depending on the database engine you are using, there can be limits on the length of an instruction. SQL Server has a very large limit: <http://msdn.microsoft.com/en-us/library/ms143432.aspx> Oracle, on the other hand, has a very easy-to-reach limit. So, for large IN clauses, it's better to create a temp table, insert the values and do a JOIN. It also tends to be faster.
There is a limit, but you can split your values into separate blocks of in() ``` Select * From table Where Col IN (123,123,222,....) or Col IN (456,878,888,....) ```
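The block-splitting shown above is easy to automate. Here is a hypothetical sketch in Python (Oracle caps a single IN list at 1,000 expressions, which is where the default below comes from; on SQL Server the practical bound is the overall statement size rather than a fixed element count):

```python
def chunked_in_clauses(column, values, chunk_size=1000):
    # Break one huge "Col IN (...)" into several OR-joined groups,
    # each below the per-list limit.
    chunks = [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]
    return " OR ".join(
        "{} IN ({})".format(column, ",".join(str(v) for v in chunk))
        for chunk in chunks
    )
```

The result can then be wrapped in `SELECT * FROM table WHERE (...)`; with numeric values this sidesteps quoting, but for anything user-supplied, bind variables or a temp-table JOIN remain the safer options.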
Limit on the WHERE col IN (...) condition
[ "", "sql", "sql-server", "t-sql", "" ]
``` ^([a-zA-Z0-9!@#$%^&*|()_\-+=\[\]{}:;\"',<.>?\/~`]{4,})$ ``` Would this regular expression work for these rules? * Must be atleast 4 characters * Characters can be a mix of alphabet (capitalized/non-capitalized), numeric, and the following characters: ! @ # $ % ^ & \* ( ) \_ - + = | [ { } ] ; : ' " , < . > ? / It's intended to be a password validator. The language is PHP.
Yes? Honestly, what are you asking for? Why don't you test it? If, however, you want suggestions on improving it, some questions: 1. What is this regex checking for? 2. Why do you have such a large set of allowed characters? 3. Why don't you use `/\w/` instead of `/0-9a-zA-Z_/`? 4. Why do you have the whole thing in `()`s? You don't need to capture the whole thing, since you already have the whole thing, and they aren't needed to group anything. What I would do is check the length separately, and then check against a regex to see if it has any *bad* characters. Your list of good characters seems to be sufficiently large that it might just be easier to do it that way. But it may depend on what you're doing it for. EDIT: Now that I know this is PHP-centric, `/\w/` is safe because PHP uses the PCRE library, which is not exactly Perl, and in PCRE, `\w` will *not* match Unicode word characters. Thus, why not check for length and ensure there are no invalid characters: ``` if (strlen($string) >= 4 && preg_match('/[\s~\\\\]/', $string) == 0) { # valid password } ``` Alternatively, use the little-used POSIX character class `[[:graph:]]`. It should work pretty much the same in PHP as it does in Perl. `[[:graph:]]` matches any alphanumeric or punctuation character, which sounds like what you want, and `[[:^graph:]]` should match the opposite. To test if all characters match graph: ``` preg_match('/^[[:graph:]]+$/', $string) == 1 ``` To test if any characters don't match graph: ``` preg_match('/[[:^graph:]]/', $string) == 0 ```
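The length-check-plus-bad-character strategy ports easily outside PHP as well. A hypothetical Python equivalent, where `[\x21-\x7e]` is the printable, non-space ASCII range - the same set of characters `[[:graph:]]` denotes:

```python
import re

def is_valid_password(candidate):
    # At least 4 characters, every one of them a printable non-space
    # ASCII character (letters, digits, punctuation).
    return len(candidate) >= 4 and re.fullmatch(r'[\x21-\x7e]+', candidate) is not None
```

Checking the length separately and rejecting anything outside one broad class is both shorter and easier to audit than enumerating every allowed symbol by hand.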
You forgot the comma (`,`) and full stop (`.`) and added the tilde (`~`) and grave accent (`` ` ``), which were not part of your specification. Additionally, only a few characters inside a character set declaration have to be escaped: ``` ^([a-zA-Z0-9!@#$%^&*()|_\-+=[\]{}:;"',<.>?/~`]{4,})$ ``` And that as a PHP string declaration for `preg_match`: ``` '/^([a-zA-Z0-9!@#$%^&*()|_\\-+=[\\]{}:;"\',<.>?\\/~`]{4,})$/' ```
Would this regular expression work?
[ "", "php", "regex", "validation", "passwords", "" ]