I'm running django 1.1rc. All of my code works correctly using django's built in development server; however, when I move it into production using Apache's mod\_python, I get the following error on all of my views: ``` Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin ``` What might I look for that's causing this error? **Update:** What's strange is that I can access the views account/login and also the admin site just fine. I tried removing the @login\_required decorator on all of my views and it generates the same type of exception. **Update2:** So it seems like there is a problem with any view in my custom package: booster. The django.contrib works fine. I'm serving the app at <http://server_name/booster>. However, the built-in auth login view redirects to <http://server_name/accounts/login>. Does this give a clue to what may be wrong? **Traceback:** ``` Environment: Request Method: GET Request URL: http://lghbb/booster/hospitalists/ Django Version: 1.1 rc 1 Python Version: 2.5.4 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.admin', 'booster.core', 'booster.hospitalists'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware') Template error: In template c:\booster\templates\hospitalists\my_patients.html, error at line 23 Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. 
13 : <th scope="col">Name</th> 14 : <th scope="col">DOB</th> 15 : <th scope="col">IC</th> 16 : <th scope="col">Type</th> 17 : <th scope="col">LOS</th> 18 : <th scope="col">PCP</th> 19 : <th scope="col">Service</th> 20 : </tr> 21 : </thead> 22 : <tbody> 23 : {% for patient in patients %} 24 : <tr class="{{ patient.gender }} select"> 25 : <td>{{ patient.bed }}</td> 26 : <td>{{ patient.mr }}</td> 27 : <td>{{ patient.acct }}</td> 28 : <td><a href="{% url hospitalists.views.patient patient.id %}">{{ patient }}</a></td> 29 : <td>{{ patient.dob }}</td> 30 : <td class="{% if patient.infections.count %}infection{% endif %}"> 31 : {% for infection in patient.infections.all %} 32 : {{ infection.short_name }} &nbsp; 33 : {% endfor %} Traceback: File "C:\Python25\Lib\site-packages\django\core\handlers\base.py" in get_response 92. response = callback(request, *callback_args, **callback_kwargs) File "C:\Python25\Lib\site-packages\django\contrib\auth\decorators.py" in __call__ 78. return self.view_func(request, *args, **kwargs) File "c:/booster\hospitalists\views.py" in index 50. return render_to_response('hospitalists/my_patients.html', RequestContext(request, {'patients': patients, 'user' : request.user})) File "C:\Python25\Lib\site-packages\django\shortcuts\__init__.py" in render_to_response 20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs) File "C:\Python25\Lib\site-packages\django\template\loader.py" in render_to_string 108. return t.render(context_instance) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 178. return self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 71. result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\loader_tags.py" in render 97. 
return compiled_parent.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 178. return self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 71. result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\loader_tags.py" in render 24. result = self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 81. raise wrapped Exception Type: TemplateSyntaxError at /hospitalists/ Exception Value: Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. Original Traceback (most recent call last): File "C:\Python25\Lib\site-packages\django\template\debug.py", line 71, in render_node result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\defaulttags.py", line 155, in render nodelist.append(node.render(context)) File "C:\Python25\Lib\site-packages\django\template\defaulttags.py", line 382, in render raise e NoReverseMatch: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. ``` Thanks for your help, Pete
I had a problem with my Apache configuration. I changed this: ``` SetEnv DJANGO_SETTINGS_MODULE settings ``` to this: ``` SetEnv DJANGO_SETTINGS_MODULE booster.settings ``` To solve the default auth login problem, I added the `LOGIN_URL` setting.
Having googled on this a bit, it sounds like you may need to delete any .pyc files on the server and let it recompile them the first time they're accessed.
django @login_required decorator error
[ "", "python", "django", "" ]
I have a template function where the template parameter is an integer. In my program I need to call the function with a small integer that is determined at run time. By hand I can make a table, for example: ``` void (*f_table[3])(void) = {f<0>,f<1>,f<2>}; ``` and call my function with ``` f_table[i](); ``` Now, the question is if there is some automatic way to build this table to arbitrary order. The best I can come up with is to use a macro ``` #define TEMPLATE_TAB(n) {n<0>,n<1>,n<2>} ``` which at least avoids repeating the function name over and over (my real functions have longer names than "f"). However, the maximum allowed order is still hard coded. Ideally the table size should only be determined by a single parameter in the code. Would it be possible to solve this problem using templates?
You can create a template that initializes a lookup table using recursion; then you can call the i-th function by looking it up in the table. Note that the declarations must come in dependency order, and the base case of the recursion must have the same signature as the primary template:

```
#include <iostream>

// the template function to be dispatched
template< int i > void function() { std::cout << i; }

// dispatch functionality: a table and a function
typedef void (*fpointer)();
fpointer ftable[100];
void dispatch( int i ) { return (ftable[i])(); }

// recursive template function to fill up the dispatch table
template< int i > bool dispatch_init( fpointer* pTable ) { pTable[ i ] = &function< i >; return dispatch_init< i - 1 >( pTable ); }

// edge case of the recursion
template<> bool dispatch_init< -1 >( fpointer* ) { return true; }

// fill the table before main() runs (99 is the last valid index)
const bool initialized = dispatch_init< 99 >( ftable );

int main() { dispatch( 10 ); }
```
It can be done by 'recursive' dispatching: a template function can check whether its runtime argument matches its template argument, and call the target function with the template argument.

```
#include <iostream>

template< int i > int tdispatch() { return i; }

// metaprogramming to generate a runtime dispatcher of
// the required size:
template< int i > int r_dispatch( int ai ) { if( ai == i ) { return tdispatch< i >(); } else { return r_dispatch< i - 1 >( ai ); } }

template<> int r_dispatch< -1 >( int ) { return -1; }

// non-metaprogramming wrapper
int dispatch( int i ) { return r_dispatch< 100 >( i ); }

int main() { std::cout << dispatch( 10 ); return 0; }
```
Building a call table to template functions in C++
[ "", "c++", "templates", "" ]
I have a server application Foo that listens at a specific port and a client application Bar which connects to Foo (both are .NET apps). Everything works fine. So far, so good. But what happens to Bar when the connection slows down, or when it takes a long time until Foo responds? I have to test it. My question is: how can I simulate such a slowdown? Generally that's not a big problem (there are some free tools out there), but Foo and Bar are both running on production machines (yes, they are developed on production machines. I know, that's very bad, but believe me, that's not my decision). So I can't just use a tool that limits the whole bandwidth of the network adapters. Is there a tool out there where I can limit the bandwidth or delay the connection of a specific port? Is it possible to achieve this in .NET/C# so I can write proper unit/integration tests?
This question is based on a pre-existing assumption - that typical usage of your application will be over a slow link. Is this a valid assumption? Maybe you should ask the following questions: 1. Is this a TCP connection, intended to run over an unusually slow medium, such as dialup? 2. Can you quantify the minimum acceptable throughput in order for the application to be a success? 3. Is this connection of the highly-interactive variety (in which case latency becomes an issue, not just bandwidth)? Yes, I'm questioning the assumption that's implicit in your question. Assuming that you've answered the above questions, and you're therefore pretty satisfied about the metrics and success criteria for your application, and you still think that you need some kind of stress test to prove things out, then there are a couple of ways to go. 1. Simulate a "slow connection" by using a tool. I know that the Linux traffic control stuff is pretty advanced and can simulate just about anything (see the [LARTC](http://lartc.org/))--if you really want to get flexible then set up a Linux virtual machine as a router and set your PC's default route to it. There are probably a myriad less-functional tools for Windows that can do similar types of things. 2. Write a custom proxy application that accepts a TCP connection, and does a "pass through", with custom `Thread.Sleep`'s according to some profile that you choose. That would do a reasonable job of simulating a flaky TCP connection, but is somewhat unscientific (the TCP back-off algorithms are a little hairy and difficult to accurately simulate).
In my job I sometimes have to test transfer code over slow/unreliable links. The best free way to do this that I've found is to use the dummynet module within FreeBSD. I set up a few VMs, with a freebsd box between them acting as a transparent bridge. I use dummynet to munge the traffic going across the bridge to simulate whatever latency and packet loss I want. I did a write-up about it on my blog a while back, titled [Simulating Slow WAN Links with Dummynet and VMWare ESX](http://apocryph.org/2009/05/15/simulating-slow-wan-links-with-dummynet-and-vmware-esx/). It should also be doable with VMWare Workstation or another virtualization product as long as you can control how the network interfaces operate.
Delay incoming network connection
[ "", "c#", ".net", "testing", "networking", "" ]
I have code like: ``` var t = SomeInstanceOfSomeClass.GetType(); ((t)SomeOtherObjectIWantToCast).someMethodInSomeClass(...); ``` That won't work: the compiler returns an error about the `(t)`, saying "Type or namespace expected". How can you do this? I'm sure it's actually really obvious....
C# 4.0 allows this with the [`dynamic`](http://keithhill.spaces.live.com/Blog/cns!5A8D2641E0963A97!6676.entry) type. That said, you almost surely don't want to do that *unless* you're doing COM interop or writing a runtime for a dynamic language. (Jon do you have further use cases?)
I've answered a duplicate question [here](https://stackoverflow.com/questions/972636/casting-a-variable-using-a-type-variable). However, if you just need to call a method on an instance of an arbitrary object in C# 3.0 and below, you can use reflection: ``` obj.GetType().GetMethod("someMethodInSomeClass").Invoke(obj, null); ```
Using get type and then casting to that type in C#
[ "", "c#", "reflection", "casting", "" ]
Given two fields of type datetime in MySQL which look like `2009-07-26 18:42:21`: after retrieving these values in PHP, how can I compare the two timestamps to figure out how many seconds have elapsed between them? I tried simply subtracting them, but that didn't work.
``` $ts1 = strtotime('2009-07-26 18:42:21'); $ts2 = strtotime('2009-07-26 18:42:20'); $elapsedSeconds = $ts1 - $ts2; // = 1 ```
Try this PHP code to compare the dates:

```
<?php
$date_difference_in_seconds = abs(strtotime($date_1) - strtotime($date_2));
?>
```

The dates that MySQL stores are in a different format from the one PHP uses. PHP stores dates as seconds elapsed since the Unix epoch, which you can read more about at <http://en.wikipedia.org/wiki/Unix_time>. This one line of code simply converts the two dates to PHP format, subtracts one from the other, and takes the absolute value so as to avoid negative results. If you don't want to use the date in the format that MySQL returns at all, but are just after subtracting them, use a straight SQL query instead to conserve memory:

```
SELECT TIME_TO_SEC(TIMEDIFF(date_1, date_2)) AS date_difference_in_seconds FROM table;
```
How to compare elapsed time between datetime fields?
[ "", "php", "mysql", "" ]
Some webpages have a "turning" triangle control that can collapse submenus. I want the same sort of behavior, but for sections in a form. Say I had a form that had lender, name, address and city inputs. Some of my site's users are going to need a second set of these fields. I would like to conceal the extra fields for the majority of the users. The ones that need the extra fields should be able to access them with one click. How would I do that?
Ah, I think you mean you want to have collapsible sections on your form. In short: 1. Put the content you want to collapse in its own DIV, with the CSS property of "display:none" at first 2. Wrap a link (A tag) around the triangle image (or text like "Hide/Show") that runs the JavaScript to toggle the display property. 3. If you want the triangle to "turn" when the section is expanded/shown, you can have the JavaScript swap out the image at the same time. Here's a better explanation: [How to Create a Collapsible DIV with Javascript and CSS](http://www.harrymaugans.com/2007/03/05/how-to-create-a-collapsible-div-with-javascript-and-css/) [**Update 2013-01-27** the article is no longer available on the Web, you can refer to [the source of this HTML page](https://github.com/toddfoster/lectionary/blob/master/lectionary.html) for an applied example inspired by this article] Or if you Google search with words like "CSS collapsing sections" or such you will find many other tutorials, including super-fancy ones (e.g. <http://xtractpro.com/articles/Animated-Collapsible-Panel.aspx>).
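The three steps above can be sketched in plain JavaScript (a minimal sketch; the function name, the triangle characters, and the element shapes are illustrative, not from the linked article):

```javascript
// Sketch of the steps above: toggle a collapsible section's display
// style and "turn" the triangle indicator to match its state.
function toggleSection(section, indicator) {
  // step 1: the section starts with display:none; flip it on each click
  var hidden = section.style.display === "none";
  section.style.display = hidden ? "block" : "none";
  // step 3: swap the indicator between "expanded" (▼) and "collapsed" (▶)
  indicator.textContent = hidden ? "\u25BC" : "\u25B6";
}
```

For step 2, you would wire this up from the link wrapped around the triangle, e.g. `<a href="#" onclick="toggleSection(document.getElementById('extraFields'), this); return false;">&#9654; More</a>`.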
Your absolute most basic way of hiding/showing an element using JavaScript is by setting the visibility property of an element. Given your example imagine you had the following form defined on your page: ``` <form id="form1"> <fieldset id="lenderInfo"> <legend>Primary Lender</legend> <label for="lenderName">Name</label> <input id="lenderName" type="text" /> <br /> <label for="lenderAddress">Address</label> <input id="lenderAddress" type="text" /> </fieldset> <a href="#" onclick="showElement('secondaryLenderInfo');">Add Lender</a> <fieldset id="secondaryLenderInfo" style="visibility:hidden;"> <legend>Secondary Lender</legend> <label for="secondaryLenderName">Name</label> <input id="secondaryLenderName" type="text" /> <br /> <label for="secondaryLenderAddress">Address</label> <input id="secondaryLenderAddress" type="text" /> </fieldset> </form> ``` There are two things to note here: 1. The second group of input fields are initially hidden using a little inline css. 2. The "Add Lender" link is calling a JavaScript method which will do all the work for you. When you click that link it will dynamically set the visibility style of that element causing it to show up on the screen. `showElement()` takes an *element id* as a parameter and looks like this: ``` function showElement(strElem) { var oElem = document.getElementById(strElem); oElem.style.visibility = "visible"; return false; } ``` Almost every JavaScript approach is going to be doing this under the hood, but I would recommend using a framework that hides the implementation details away from you. Take a look at [JQuery](http://jquery.com/), and [JQuery UI](http://jqueryui.com/) in order to get a much more polished transition when hiding/showing your elements.
Little folding triangles: how can I create collapsible sections on a webpage?
[ "", "javascript", "html", "css", "menu", "folding", "" ]
What is the easiest way in Python to replace a character in a string? For example: ``` text = "abcdefg"; text[1] = "Z"; ^ ```
Don't modify strings. Work with them as lists; turn them into strings only when needed.

```
>>> s = list("Hello zorld")
>>> s
['H', 'e', 'l', 'l', 'o', ' ', 'z', 'o', 'r', 'l', 'd']
>>> s[6] = 'W'
>>> s
['H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd']
>>> "".join(s)
'Hello World'
```

Python strings are immutable (i.e. they can't be modified). There are [a lot](https://web.archive.org/web/20201031092707/http://effbot.org/pyfaq/why-are-python-strings-immutable.htm) of reasons for this. Use lists until you have no choice, only then turn them into strings.
## Fastest method? There are three ways. For the speed seekers I recommend 'Method 2' **Method 1**: Given by [scvalex's answer](https://stackoverflow.com/a/1228597/2571620): ``` text = 'abcdefg' new = list(text) new[6] = 'W' ''.join(new) ``` Which is pretty slow compared to 'Method 2': ``` timeit.timeit("text = 'abcdefg'; s = list(text); s[6] = 'W'; ''.join(s)", number=1000000) 1.0411581993103027 ``` **Method 2 (FAST METHOD)**: Given by [Jochen Ritzel's answer](https://stackoverflow.com/a/1228332/2571620): ``` text = 'abcdefg' text = text[:1] + 'Z' + text[2:] ``` Which is much faster: ``` timeit.timeit("text = 'abcdefg'; text = text[:1] + 'Z' + text[2:]", number=1000000) 0.34651994705200195 ``` **Method 3:** Byte array: ``` timeit.timeit("text = 'abcdefg'; s = bytearray(text); s[1] = 'Z'; str(s)", number=1000000) 1.0387420654296875 ```
Changing a character in a string
[ "", "python", "string", "" ]
I'm astonished there isn't even a filter property attached to DataGridView, and it's getting on my nerves. I can find examples for filtering a DataGridView that was bound programmatically, but I cannot find any example of how to filter a DataGridView that was generated by Visual Studio. So please, can someone tell me how to filter this stuff? Thanks.
Put a filter on the BindingSource : ``` bindingSource.Filter = "Age < 21"; ```
You place the filter on the DataSource that is driving your DataGridView - for example, I have this code on a DataGridView that allows for user filtering and is called on a postback: ``` VisitsDataSource.FilterExpression = "1 = 2"; GridView1.DataBind(); ```
How to filter C# Winform datagridview that was created with Visual Studio
[ "", "c#", "winforms", "datagridview", "" ]
So I have an ASP.NET MVC app that references a number of javascript files in various places (in the site master and additional references in several views as well). I'd like to know if there is an automated way for compressing and minimizing such references into a single .js file where possible. Such that this ... ``` <script src="<%= ResolveUrl("~") %>Content/ExtJS/Ext.ux.grid.GridSummary/Ext.ux.grid.GridSummary.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.rating/ext.ux.ratingplugin.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext-starslider/ext-starslider.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.dollarfield.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.combobox.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ExtJS/ext.ux.datepickerplus/ext.ux.datepickerplus-min.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ExtJS/SessionProvider.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ExtJS/TabCloseMenu.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/ActivityForm.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/UserForm.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/SwappedGrid.js" type="text/javascript"></script> <script src="<%= ResolveUrl("~") %>Content/ActivityViewer/Tree.js" type="text/javascript"></script> ``` ... could be reduced to something like this ... ``` <script src="<%= ResolveUrl("~") %>Content/MyViewPage-min.js" type="text/javascript"></script> ``` Thanks
I personally think that keeping the files separate during development is invaluable, and that production is where something like this counts. So I modified my deployment script to do the above. I have a section that reads:

```
<Target Name="BeforeDeploy">
  <ReadLinesFromFile File="%(JsFile.Identity)">
    <Output TaskParameter="Lines" ItemName="JsLines"/>
  </ReadLinesFromFile>
  <WriteLinesToFile File="Scripts\all.js" Lines="@(JsLines)" Overwrite="true"/>
  <Exec Command="java -jar tools\yuicompressor-2.4.2.jar Scripts\all.js -o Scripts\all-min.js"></Exec>
</Target>
```

And in my master page file I use:

```
<% if (HttpContext.Current.IsDebuggingEnabled) { %>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/jquery-1.3.2.js")%>"></script>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/jquery-ui-1.7.2.min.js")%>"></script>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/jquery.form.js")%>"></script>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/jquery.metadata.js")%>"></script>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/jquery.validate.js")%>"></script>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/additional-methods.js")%>"></script>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/form-interaction.js")%>"></script>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/morevalidation.js")%>"></script>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/showdown.js")%>"></script>
<% } else { %>
  <script type="text/javascript" src="<%=Url.UrlLoadScript("~/Scripts/all-min.js")%>"></script>
<% } %>
```

The build script takes all the files in the section and combines them together. Then I use YUI's minifier to get a minified version of the JavaScript. Because this is served by IIS, I would rather turn on compression in IIS to get gzip compression.
\*\*\*\* Added \*\*\*\* My deployment script is an MSBuild script. I also use the excellent MSBuild community tasks (<http://msbuildtasks.tigris.org/>) to help deploy an application. I'm not going to post my entire script file here, but here are some relevant lines that should demonstrate most of what it does. The following section will run the ASP.NET compiler to copy the application over to the destination drive. (In a previous step I just exec `net use` commands to map a network share drive.)

```
<Target Name="Precompile" DependsOnTargets="build;remoteconnect;GetTime">
  <MakeDir Directories="%(WebApplication.SharePath)\$(buildDate)" />
  <Message Text="Precompiling Website to %(WebApplication.SharePath)\$(buildDate)" />
  <AspNetCompiler VirtualPath="/%(WebApplication.VirtualDirectoryPath)"
                  PhysicalPath="%(WebApplication.PhysicalPath)"
                  TargetPath="%(WebApplication.SharePath)\$(buildDate)"
                  Force="true"
                  Updateable="true"
                  Debug="$(Debug)" />
  <Message Text="copying the correct configuration files over" />
  <Exec Command="xcopy $(ConfigurationPath) %(WebApplication.SharePath)\$(buildDate) /S /E /Y" />
</Target>
```

After all of the solution projects are copied over I run this:

```
<Target Name="_deploy">
  <Message Text="Removing Old Virtual Directory" />
  <WebDirectoryDelete VirtualDirectoryName="%(WebApplication.VirtualDirectoryPath)"
                      ServerName="$(IISServer)"
                      ContinueOnError="true"
                      Username="$(username)"
                      HostHeaderName="$(HostHeader)" />
  <Message Text="Creating New Virtual Directory" />
  <WebDirectoryCreate VirtualDirectoryName="%(WebApplication.VirtualDirectoryPath)"
                      VirtualDirectoryPhysicalPath="%(WebApplication.IISPath)\$(buildDate)"
                      ServerName="$(IISServer)"
                      EnableDefaultDoc="true"
                      DefaultDoc="%(WebApplication.DefaultDocument)"
                      Username="$(username)"
                      HostHeaderName="$(HostHeader)" />
</Target>
```

That should be enough to get you started on automating deployment. I put all this stuff in a separate file called Aspnetdeploy.msbuild.
I just run `msbuild /t:Target` whenever I need to deploy to an environment.
Actually there is a much easier way using [Web Deployment Projects](http://www.microsoft.com/downloads/details.aspx?FamilyId=0AA30AE8-C73B-4BDD-BB1B-FE697256C459&displaylang=en) (WDP). The WDP will manage the complexities of the **aspnet\_compiler** and **aspnet\_merge** tools. You can customize the process via a UI inside of Visual Studio. As for compressing the js files, you can leave all of your js files in place and just compress them during the build process. So in the WDP you would declare something like this:

```
<Project>
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

  <!-- Extend the build process -->
  <PropertyGroup>
    <BuildDependsOn>
      $(BuildDependsOn);
      CompressJavascript
    </BuildDependsOn>
  </PropertyGroup>

  <Target Name="CompressJavascript">
    <ItemGroup>
      <_JSFilesToCompress Include="$(OutputPath)Scripts\**\*.js" />
    </ItemGroup>
    <Message Text="Compressing Javascript files" Importance="high" />
    <JSCompress Files="@(_JSFilesToCompress)" />
  </Target>
</Project>
```

This uses the JSCompress MSBuild task from the [MSBuild Community Tasks](http://msbuildtasks.tigris.org/), which I think is based on JSMin. The idea is: leave all of your js files as they are *(i.e. debuggable/human-readable)*. When you build your WDP, it will first copy the js files to the **OutputPath** and then the **CompressJavascript** target is called to minimize the js files. This doesn't modify your original source files, just the ones in the output folder of the WDP project. Then you deploy the files in the WDP's output path, which includes the pre-compiled site. I covered this exact scenario in my book *(link below my name)*. You can also let the WDP handle creating the virtual directory; just check a checkbox and fill in the name of the virtual directory.
For some links on MSBuild: * [Inside MSBuild](http://msdn.microsoft.com/en-us/magazine/cc163589.aspx) * [7 Steps To MSBuild](http://brennan.offwhite.net/blog/2006/11/30/7-steps-to-msbuild/) * [MSDN MSBuild Docs](http://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx) Sayed Ibrahim Hashimi My Book: [Inside the Microsoft Build Engine : Using MSBuild and Team Foundation Build](https://rads.stackoverflow.com/amzn/click/com/0735626286)
How can I automatically compress and minimize JavaScript files in an ASP.NET MVC app?
[ "", "javascript", "asp.net-mvc", "compression", "extjs", "minimize", "" ]
I have a large 95% C, 5% C++ Win32 code base that I am trying to grok. What modern tools are available for generating call-graph diagrams for C or C++ projects?
Have you tried SourceInsight's call graph feature? * <http://www.sourceinsight.com/docs35/ae1144092.htm>
Have you tried [doxygen](http://www.doxygen.nl/) and [codeviz](http://www.skynet.ie/~mel/projects/codeviz/) ? Doxygen is normally used as a documentation tool, but it can generate call graphs for you with the [CALL\_GRAPH/CALLER\_GRAPH](http://www.doxygen.nl/manual/diagrams.html) options turned on. Wikipedia lists a bunch of other [options](http://en.wikipedia.org/wiki/Call_graph) that you can try.
C/C++ call-graph utility for Windows platform
[ "", "c++", "c", "winapi", "utility", "call-graph", "" ]
I'd like to test if a regex will match part of a string at a specific index (and only starting at that specific index). For example, given the string "one two 3 4 five", I'd like to know that, at index 8, the regular expression [0-9]+ will match "3". `Regex.IsMatch` and `Regex.Match` both take a starting index; however, they both will search the entire rest of the string for a match if necessary.

```
string text = "one two 3 4 five";
Regex num = new Regex("[0-9]+");
// unfortunately num.IsMatch(text, 0) also finds a match and returns true
Console.WriteLine("{0} {1}", num.IsMatch(text, 8), num.IsMatch(text, 0));
```

Obviously, I could check if the resulting match starts at the index I am interested in, but I will be doing this a large number of times on large strings, so I don't want to waste time searching for matches later on in the string. Also, I won't know in advance what regular expressions I will actually be testing against the string. I don't want to:

1. split the string on some boundary like whitespace, because in my situation I won't know in advance what a suitable boundary would be
2. have to modify the input string in any way (like getting the substring at index 8 and then using ^ in the regex)
3. search the rest of the string for a match or do anything else that wouldn't be performant for a large number of tests against a large string.

I would like to parse a potentially large user-supplied body of text using an arbitrary user-supplied grammar. The grammar will be defined in a BNF- or PEG-like syntax, and the terminals will either be string literals or regular expressions. Thus I will need to check if the next part of the string matches any of the potential terminals as driven by the grammar.
How about using `Regex.IsMatch(string, int)` with a regular expression starting with `\G` (which anchors the match at the position where the search starts)? That appears to work:

```
using System;
using System.Text.RegularExpressions;

class Test
{
    static void Main()
    {
        string text = "one two 3 4 five";
        Regex num = new Regex(@"\G[0-9]+");
        Console.WriteLine("{0} {1}",
                          num.IsMatch(text, 8),  // True
                          num.IsMatch(text, 0)); // False
    }
}
```
If you only want to search a substring of the text, grab that substring before running the regex: ``` myRegex.Match(myString.Substring(8)); ```
c# regular expression match at specific index in string?
[ "", "c#", "regex", "" ]
As a C# developer I'm used to the following style of exception handling:

```
try
{
    throw new SomeException("hahahaha!");
}
catch (Exception ex)
{
    Log(ex.ToString());
}

Output
------
SomeNamespace.SomeException: hahahaha!
   at ConsoleApplication1.Main() in ConsoleApplication1\Program.cs:line 27
```

It's really simple, and yet tells me everything I need to know about what the exception was and where it was. How do I achieve the equivalent thing in JavaScript, where the exception object itself might just be a string? I really want to be able to know the exact line of code where the exception happened; however, the following code doesn't log anything useful at all:

```
try {
    var WshShell = new ActiveXObject("WScript.Shell");
    return WshShell.RegRead("HKEY_LOCAL_MACHINE\\Some\\Invalid\\Location");
} catch (ex) {
    Log("Caught exception: " + ex);
}

Output
------
Caught exception: [object Error]
```

**EDIT** (again): Just to clarify, this is for an internal application that makes heavy use of JavaScript. I'm after a way of extracting useful information from JavaScript errors that may be caught in the production system - I already have a logging mechanism, just want a way of getting a sensible string to log.
You can use it in almost the same manner, i.e.: ``` try { throw new Error("hahahaha!"); } catch (e) { alert(e.message) } ``` But if you want to get the line number and file name where the error was thrown, I suppose there is no cross-browser solution. `message` and `name` are the only standard properties of the `Error` object. In Mozilla you also have the `lineNumber` and `fileName` properties.
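When the thrown value might be anything (an `Error`, a host object, or a plain string), a small helper can normalize it into a loggable string. This is a sketch, not part of any library (the `formatError` name is made up here); it prefers the non-standard but widely available `stack` property when present:

```javascript
// Turn whatever was thrown into a loggable string:
// prefer the (non-standard) stack trace, then name/message,
// and finally plain string conversion for non-Error values.
function formatError(e) {
  if (e instanceof Error) {
    return e.stack || (e.name + ": " + e.message);
  }
  return String(e);
}

try {
  throw new Error("hahahaha!");
} catch (e) {
  console.log(formatError(e)); // name, message and (usually) a stack trace
}

console.log(formatError("plain string throw")); // plain string throw
```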
You don't specify if you are working in the browser or the server. If it's the former, there is a [console.error](https://developer.mozilla.org/en-US/docs/Web/API/Console.error) method and an [e.stack](https://stackoverflow.com/questions/591857/how-can-i-get-a-javascript-stack-trace-when-i-throw-an-exception) property: ``` try { // do some crazy stuff } catch (e) { console.error(e, e.stack); } ``` Please keep in mind that `console.error` works in Firefox and Chrome, but it's not standard. A quick example that will downgrade to `console.log` and log `e` if there is no `e.stack`: ``` try { // do some crazy stuff } catch (e) { (console.error || console.log).call(console, e.stack || e); } ```
How to log exceptions in JavaScript
[ "javascript", "exception" ]
If javascript modifies DOM in page A, user navigates to page B and then hits back button to get back to the page A. All modifications to DOM of page A are lost and user is presented with version that was originally retrieved from the server. It works that way on stackoverflow, reddit and many other popular websites. (try to add test comment to this question, then navigate to different page and hit back button to come back - your comment will be "gone") This makes sense, yet some websites (apple.com, basecamphq.com etc) are somehow forcing browser to serve user the latest state of the page. (go to <http://www.apple.com/ca/search/?q=ipod>, click on say Downloads link at the top and then click back button - all DOM updates will be preserved) where is the inconsistency coming from?
One answer: Among other things, **unload events cause the back/forward cache to be invalidated**. Some browsers store the current state of the entire web page in the so-called "bfcache" or "page cache". This allows them to re-render the page very quickly when navigating via the back and forward buttons, and preserves the state of the DOM and all JavaScript variables. However, when a page contains onunload events, those events could potentially put the page into a non-functional state, and so the page is not stored in the bfcache and must be reloaded (but may be loaded from the standard cache) and re-rendered from scratch, including running all onload handlers. When returning to a page via the bfcache, the DOM is kept in its previous state, without needing to fire onload handlers (because the page is already loaded). Note that the behavior of the bfcache is different from the standard browser cache with regards to Cache-Control and other HTTP headers. In many cases, browsers will cache a page in the bfcache even if it would not otherwise store it in the standard cache. ~~jQuery automatically attaches an unload event to the window, so unfortunately using jQuery will disqualify your page from being stored in the bfcache for DOM preservation and quick back/forward~~. [Update: this has been fixed in jQuery 1.4 so that it only applies to IE] * [Information about the Firefox bfcache](https://developer.mozilla.org/En/Using_Firefox_1.5_caching) * [Information about the Safari Page Cache](http://webkit.org/blog/427/webkit-page-cache-i-the-basics/) and [possible future changes to how unload events work](http://webkit.org/blog/516/webkit-page-cache-ii-the-unload-event/) * [Opera uses fast history navigation](http://www.opera.com/support/kb/view/827/) * Chrome doesn't have a page cache ([[1]](http://code.google.com/p/chromium/issues/detail?id=2879), [[2]](http://code.google.com/p/chromium/issues/detail?id=48657)) * Pages for playing with DOM manipulations and the bfcache: + [This page will be stored in the regular cache](http://www.twmagic.com/misc/cache.html) + [This page will not, but will still be bfcached](http://www.twmagic.com/misc/cache-nocache.html)
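One way to observe which behavior a page is getting: pages restored from the bfcache fire a `pageshow` event whose `persisted` flag is `true`, while fresh loads fire it with `false`. A sketch (the `classifyPageShow` helper is made up here, and the browser wiring is a no-op outside a browser):

```javascript
// Classify how the page was (re)displayed, based on the `persisted`
// flag of a pageshow event: true means restored from the bfcache,
// false means a normal (re)load.
function classifyPageShow(event) {
  return event.persisted ? "bfcache-restore" : "normal-load";
}

// Browser-only wiring:
if (typeof window !== "undefined") {
  window.addEventListener("pageshow", function (e) {
    console.log(classifyPageShow(e));
  });
}

console.log(classifyPageShow({ persisted: true }));  // bfcache-restore
console.log(classifyPageShow({ persisted: false })); // normal-load
```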
I've been trying to get Chrome to behave like Safari does, and the only way I've found that works is to set `Cache-Control: no-store` in the headers. This forces the browser to re-fetch the page from the server when the user presses the back button. Not ideal, but better than being shown an out-of-date page.
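For reference, a hypothetical server-side way to emit that header (an nginx snippet; the `/search` location is just an example, and any server that can set response headers works the same way):

```nginx
# Tell browsers never to store this response, so pressing the back
# button triggers a re-fetch instead of showing a stale page.
location /search {
    add_header Cache-Control "no-store";
}
```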
Ajax, back button and DOM updates
[ "javascript", "ajax", "firefox" ]
Simple question: I want to open a URL using the Default Browser, so I just do `Process.Start(url)`. However, I noticed that this returns an IDisposable object. So now I wonder if I have to dispose it? Or, for that matter, if my Application is in any way responsible for this process? The intended functionality is simply "Fire and forget", I do not want to have my application as a parent of the new process and it does not need to interact with it. I've seen some similar but unrelated questions on SO that seem to say that simply calling Process.Start on a URL is fine, but I do not want to run into hard-to-debug memory leaks or resource exhaustion issues caused by my program keeping references to long-dead browser processes.
Couldn't you just wrap it in a `using` statement to ensure `Dispose` is called, if you are required to dispose of it? This would still allow a sort of "fire and forget" but not leave memory/resources in a bad state. Probably overkill, but there is a really good article on CodeProject about the IDisposable interface: <http://www.codeproject.com/KB/dotnet/idisposable.aspx>
No, you do not. ``` void Main() { Process result = Process.Start("http://www.google.com"); if (result == null) { Console.WriteLine("It returned null"); } } ``` Prints ``` It returned null ``` From [Process.Start Method (String)](http://msdn.microsoft.com/en-us/library/53ezey2s.aspx) on MSDN (.NET Framework 4): > If the address of the executable file to start is a URL, the process > is not started and null is returned. (In general, though, the `using` statement is the right way to work with IDisposable objects. Except for [WCF clients](https://stackoverflow.com/questions/573872/what-is-the-best-workaround-for-the-wcf-client-using-block-issue).)
Do I have to Dispose Process.Start(url)?
[ "c#", ".net" ]
I've noticed that some web developers put IDs on script tags. For example: ``` <script id="scripty" src="something.js" type="text/javascript"></script> ``` I know that according to the W3C this is perfectly legal markup, but what are the benefits of doing this?
The one use I've seen of this is if you want to provide a widget for customers and you instruct them to place the `<script>` tag wherever they want the widget to show up. If you give the `<script>` element an ID then you can reference that inside of it to place the code in the right place. That's not to say that it is the *only* way of achieving that, of course, but I've seen it done and [suggested it in the past](https://stackoverflow.com/questions/758676/how-can-i-append-a-new-element-in-place-in-javascript).
I've seen it used for microtemplating, where you can put a template in a script tag and then reference it through the ID. [Here's a great post on JavaScript microtemplating by John Resig](http://ejohn.org/blog/javascript-micro-templating/) - note that this is not the ONLY way of achieving this, only John's version of it.
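To make the idea concrete, here is the concept in miniature (this is NOT Resig's implementation; the `{name}` placeholder syntax and the `renderTemplate` name are invented for illustration). The template would live in a `<script>` tag with a non-executable type, be looked up by its ID, and have its placeholders substituted:

```javascript
// Substitute {key} placeholders in a template string with values
// from a data object; unknown keys are left untouched.
function renderTemplate(templateText, data) {
  return templateText.replace(/\{(\w+)\}/g, function (match, key) {
    return key in data ? String(data[key]) : match;
  });
}

// In a browser you would fetch the template by its id, e.g.:
//   var text = document.getElementById("user_tmpl").innerHTML;
var text = "<li>{name} ({email})</li>";
console.log(renderTemplate(text, { name: "Ann", email: "ann@example.com" }));
// <li>Ann (ann@example.com)</li>
```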
Why would I put an ID on a script tag?
[ "javascript", "html" ]
I want users of my LayoutManager class to be able to write this: ``` LayoutManager layoutManager = new LayoutManager(); layoutManager.AddMainContentView("customers", "intro", new View { Title = "Customers Intro", Content = "This is customers intro." }); ``` **But what syntax do I need to fill this dictionary in a dictionary in AddMainContentView() below?** ``` public class LayoutManager { private Dictionary<string, Dictionary<string, object>> _mainContentViews = new Dictionary<string, Dictionary<string, object>>(); public Dictionary<string, Dictionary<string, object>> MainContentViews { get { return _mainContentViews; } set { _mainContentViews = value; } } public void AddMainContentView(string moduleKey, string viewKey, object view) { //_mainContentViews.Add(moduleKey, new Dictionary<string, object>(viewKey, view)); //_mainContentViews.Add(moduleKey, viewKey, view); _mainContentViews.Add(moduleKey, ???); } ... } ```
``` public void AddMainContentView(string moduleKey, string viewKey, object view) { Dictionary<string, object> viewDict = null; if (!MainContentViews.TryGetValue(moduleKey, out viewDict)) { viewDict = new Dictionary<string, object>(); MainContentViews.Add(moduleKey, viewDict); } if (viewDict.ContainsKey(viewKey)) { viewDict[viewKey] = view; } else { viewDict.Add(viewKey, view); } } ```
The ??? could be filled with: ``` new Dictionary<string, object> { {viewKey, view} } ```
How do I add an item to a Dictionary<string, Dictionary<string, object>>?
[ "c#", "collections", "dictionary" ]
> **Possible Duplicate:** > [Querying if a Windows Service is disabled (without using the Registry)?](https://stackoverflow.com/questions/10384284/querying-if-a-windows-service-is-disabled-without-using-the-registry) I need to check if 'Event Log' services in running or not. How to do that?
Use OpenSCManager(), then OpenService(), then ControlService().
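A minimal sketch of those calls (Windows-only, error handling trimmed; note it reads the state with `QueryServiceStatus`, a common alternative to interrogating the service via `ControlService`):

```cpp
#include <windows.h>
#include <iostream>

// Returns true if the named service is currently running.
// Sketch only: real code should check each handle and call.
bool IsServiceRunning(const wchar_t* name)
{
    SC_HANDLE scm = OpenSCManagerW(nullptr, nullptr, SC_MANAGER_CONNECT);
    if (!scm) return false;

    SC_HANDLE svc = OpenServiceW(scm, name, SERVICE_QUERY_STATUS);
    bool running = false;
    if (svc)
    {
        SERVICE_STATUS status = {};
        if (QueryServiceStatus(svc, &status))
            running = (status.dwCurrentState == SERVICE_RUNNING);
        CloseServiceHandle(svc);
    }
    CloseServiceHandle(scm);
    return running;
}

int main()
{
    std::wcout << (IsServiceRunning(L"EventLog") ? L"running" : L"not running") << L"\n";
}
```

Remember to close every handle you open, as above; for a pure status check, `SC_MANAGER_CONNECT` and `SERVICE_QUERY_STATUS` are the only access rights needed.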
Some example code using the Win32 API can be found [here](http://msdn.microsoft.com/en-us/library/ms686335(VS.85).aspx).
Check if service is running?
[ "c++", "winapi" ]
How do I process a date that is input by the user and store it in the database in a date field? The date is entered by the user using a jQuery datepicker in a text field. Here are the specific needs I have: 1. How do you convert a mm/dd/yy string (the jQuery datepicker can produce any format) to a format storable in the database as a date field? 2. How do you get the date and then change it to something like "Wednesday, Aug 11, 2009"? I'm using C# for the backend, but I should be able to understand VB code as well. Thanks!
Use [`DateTime.Parse(string)`](http://msdn.microsoft.com/en-us/library/1k1skd40.aspx) to parse the string to a `DateTime`, which can then be saved as a `DateTime` in the database. You may want to use TryParse, as suggested in other answers, to ensure that the supplied string can be parsed to a `DateTime`. To return a DateTime in a certain format, you can call .ToString() on the DateTime and [supply a format](http://www.geekzilla.co.uk/View00FF7904-B510-468C-A2C8-F859AA20581F.htm) or use one of the pre-defined formats, e.g. [`ToShortDateString()`](http://msdn.microsoft.com/en-us/library/system.datetime.toshortdatestring.aspx)
``` string dateString = "08/11/09"; DateTime yourDate; if(DateTime.TryParse(dateString, out yourDate)) { // do something with yourDate string output = yourDate.ToString("D"); // sets output to: Tuesday, August 11, 2009 } else { // invalid date entered } ``` Here is a list of `DateTime` format strings: <http://msdn.microsoft.com/en-us/library/az4se3k1.aspx>
Working with dates in asp.net
[ "c#", "asp.net", "date-format", "datefield" ]
I've created a new Class Library in C# and want to use it in one of my other C# projects - how do I do this?
Add a reference to it in your project and a using clause at the top of the CS file where you want to use it. Adding a reference: 1. In Visual Studio, click Project, and then Add Reference. 2. Click the Browse tab and locate the DLL you want to add a reference to. NOTE: Apparently using Browse is bad form if the DLL you want to use is in the same solution. Instead, right-click the Project and then click Add Reference, then select the appropriate project from the Projects tab: [![enter image description here](https://i.stack.imgur.com/ZCt4W.png)](https://i.stack.imgur.com/ZCt4W.png) 3. Click OK. Adding a using clause: Add "using [namespace];" to the CS file where you want to reference your library. So, if the library you want to reference has a namespace called MyLibrary, add the following to the CS file: ``` using MyLibrary; ```
In the Solution Explorer window, right click the project you want to use your class library from and click the 'Add Reference' menu item. Then if the class library is in the same solution file, go to the projects tab and select it; if it's not in the same tab, you can go to the Browse tab and find it that way. Then you can use anything in that assembly.
How do I use a C# Class Library in a project?
[ "c#", "class" ]
I was wondering how I would return the result of the left-most condition in an `OR` clause used in a `LEFT JOIN` if both evaluate to `true`. The solutions I've come upon thus far both involve using a `CASE` statement in the `SELECT`; this does mean I'd abandon the `OR` clause. The other solution involved using a `CASE` statement in an `ORDER BY`. Are there any other solutions that would cut down on the use of `CASE` statements? Reason I ask is because as of now there are only two `LEFT JOIN`s, but over time more will be added. ``` SELECT item.id, item.part_number, lang.data AS name, lang2.data AS description FROM item LEFT JOIN language lang ON item.id = lang.item AND (lang.language = 'fr' OR lang.language = 'en') LEFT JOIN language lang2 ON item.id = lang2.item AND (lang2.language = 'fr' OR lang2.language = 'en') WHERE item.part_number = '34KM003KL' ```
Seems you want a French description if it exists, otherwise fall back to English. ``` SELECT item.id, COALESCE( ( SELECT lang.data FROM language l WHERE l.item = i.id AND l.language = 'fr' ), ( SELECT lang.data FROM language l WHERE l.item = i.id AND l.language = 'en' ) ) AS description FROM item i ``` Alternatively: ``` SELECT item.id, COALESCE(lfr.data, len.data) FROM item i LEFT JOIN language lfr ON lfr.item = i.id AND lfr.language = 'fr' LEFT JOIN language len ON len.item = i.id AND len.language = 'en' ``` The first query is more efficient if the probability of finding a French description is high (it will not evaluate the second subquery if the first one succeeds). In `SQL Server`, `Oracle` and `PostgreSQL` this one will probably be more efficient if you have lots of French descriptions: ``` SELECT item.id, COALESCE( lfr.data, ( SELECT lang.data FROM language l WHERE l.item = i.id AND l.language = 'en' ) ) AS description FROM item i LEFT JOIN language lfr ON lfr.item = i.id AND lfr.language = 'fr' ``` This query will use an efficient method (`HASH JOIN` or `MERGE JOIN`) to join the French descriptions, and will fall back to the English one only if necessary. For `MySQL`, the `1st` and the `3rd` queries make no difference. In all systems, create a composite index on `language (item, language)`.
You could change the multiple Ors to use an In clause instead... ``` SELECT i.id, i.part_number, L1.data name, L2.data description FROM item i LEFT JOIN language L1 ON i.id = L1.item AND L1.language In ('fr', 'en') LEFT JOIN language L2 ON i.id = L2.item AND L2.language In ('fr', 'en') WHERE i.part_number = '34KM003KL' ```
LEFT JOIN using OR
[ "sql", "join", "default-value" ]
I want to run my fabric script locally, which will, in turn, log into my server, switch user to deploy, activate the project's .virtualenv, which will change dir to the project and issue a git pull. ``` def git_pull(): sudo('su deploy') # here i need to switch to the virtualenv run('git pull') ``` I typically use the workon command from virtualenvwrapper, which sources the activate file, and the postactivate file will put me in the project folder. In this case, it seems that because fabric runs from within shell, control is given over to fabric, so I can't use bash's source built-in to '$source ~/.virtualenv/myvenv/bin/activate' Anybody have an example and explanation of how they have done this?
Right now, you can do what I do, which is kludgy but works perfectly well\* (this usage assumes you're using virtualenvwrapper -- which you should be -- but you can easily substitute in the rather longer 'source' call you mentioned, if not): ``` def task(): workon = 'workon myvenv && ' run(workon + 'git pull') run(workon + 'do other stuff, etc') ``` Since version 1.0, Fabric has a [`prefix` context manager](http://docs.fabfile.org/en/1.11/api/core/context_managers.html?highlight=prefix#fabric.context_managers.prefix) which uses this technique so you can for example: ``` def task(): with prefix('workon myvenv'): run('git pull') run('do other stuff, etc') ``` --- \* There are bound to be cases where using the `command1 && command2` approach may blow up on you, such as when `command1` fails (`command2` will never run) or if `command1` isn't properly escaped and contains special shell characters, and so forth.
As an update to bitprophet's forecast: With Fabric 1.0 you can make use of [prefix()](http://docs.fabfile.org/en/1.13/api/core/context_managers.html#fabric.context_managers.prefix) and your own context managers. ``` from __future__ import with_statement from fabric.api import * from contextlib import contextmanager as _contextmanager env.hosts = ['servername'] env.user = 'deploy' env.keyfile = ['$HOME/.ssh/deploy_rsa'] env.directory = '/path/to/virtualenvs/project' env.activate = 'source /path/to/virtualenvs/project/bin/activate' @_contextmanager def virtualenv(): with cd(env.directory): with prefix(env.activate): yield def deploy(): with virtualenv(): run('pip freeze') ```
Activate a virtualenv via fabric as deploy user
[ "python", "virtualenv", "fabric", "automated-deploy" ]
I have a `std::string` which could be a string or could be a value (such as `0`). What is the best or easiest way to convert the `std::string` to `int` with the ability to fail? I want a C++ version of C#'s `Int32.TryParse`.
Use [boost::lexical\_cast](http://www.boost.org/doc/libs/1_39_0/libs/conversion/lexical_cast.htm). If the cast cannot be done, it will [throw an exception](http://www.boost.org/doc/libs/1_39_0/libs/conversion/lexical_cast.htm#bad_lexical_cast). ``` #include <boost/lexical_cast.hpp> #include <iostream> #include <string> int main(void) { std::string s; std::cin >> s; try { int i = boost::lexical_cast<int>(s); /* ... */ } catch(...) { /* ... */ } } ``` --- Without boost: ``` #include <iostream> #include <sstream> #include <string> int main(void) { std::string s; std::cin >> s; try { std::stringstream ss(s); int i; if ((ss >> i).fail() || !(ss >> std::ws).eof()) { throw std::bad_cast(); } /* ... */ } catch(...) { /* ... */ } } ``` --- Faking boost: ``` #include <iostream> #include <sstream> #include <string> template <typename T> T lexical_cast(const std::string& s) { std::stringstream ss(s); T result; if ((ss >> result).fail() || !(ss >> std::ws).eof()) { throw std::bad_cast(); } return result; } int main(void) { std::string s; std::cin >> s; try { int i = lexical_cast<int>(s); /* ... */ } catch(...) { /* ... */ } } ``` --- If you want no-throw versions of these functions, you'll have to catch the appropriate exceptions (I don't think `boost::lexical_cast` provides a no-throw version), something like this: ``` #include <iostream> #include <sstream> #include <string> template <typename T> T lexical_cast(const std::string& s) { std::stringstream ss(s); T result; if ((ss >> result).fail() || !(ss >> std::ws).eof()) { throw std::bad_cast(); } return result; } template <typename T> bool lexical_cast(const std::string& s, T& t) { try { // code-reuse! you could wrap // boost::lexical_cast up like // this as well t = lexical_cast<T>(s); return true; } catch (const std::bad_cast& e) { return false; } } int main(void) { std::string s; std::cin >> s; int i; if (!lexical_cast(s, i)) { std::cout << "Bad cast." << std::endl; } } ```
The other answers that use streams will succeed even if the string contains invalid characters after a valid number, e.g. "123abc". I'm not familiar with boost, so can't comment on its behavior. If you want to know if the string contains a number and only a number, you have to use strtol: ``` #include <cstdlib> #include <iostream> #include <string> int main(void) { std::string s; std::cin >> s; char *end; long i = strtol( s.c_str(), &end, 10 ); if ( *end == '\0' ) { // Success } else { // Failure } } ``` strtol stores a pointer to the character that ended the parse in `end`, so you can easily check if the entire string was parsed. Note that strtol returns a long, not an int, but depending on your compiler these are probably the same. There is no strtoi function in the standard library, only atoi, which doesn't report the parse-ending character.
Convert string to int with bool/fail in C++
[ "c++", "parsing", "c++-faq" ]
Having used Java for some years already, we know what we are gaining by moving to Grails. The question is, what are we losing? Performance? Appreciate your input / ideas. Thanks.
Groovy compiles to JVM bytecode just like Java. With Grails you end up with a .war file to run in your container, just like Java. Groovy has slower run-time performance than Java in most areas since it is a dynamic language. You can have Java code in your Grails app in addition to Groovy code.
I think the biggest issue is not the technical, but the manpower/skill issue. A quick (non-scientific) job search on a job portal reveals 5 jobs mentioning Grails, and 15 *pages* for Java. Obviously this doesn't cater for candidates wanting to learn Grails etc., but when you're replacing staff and looking for people to maintain it, I suspect either you'll have difficulty finding people, or you will have to spend time getting them up to speed (I know it compiles to bytecode, I know it has Java-like idioms but there's still that time to factor in).
What is the tradeoff in replacing Java with Grails?
[ "java", "grails" ]
I’m using JavaScript to pull a value out from a hidden field and display it in a textbox. The value in the hidden field is encoded. For example, ``` <input id='hiddenId' type='hidden' value='chalk &amp; cheese' /> ``` gets pulled into ``` <input type='text' value='chalk &amp; cheese' /> ``` via some jQuery to get the value from the hidden field (it’s at this point that I lose the encoding): ``` $('#hiddenId').attr('value') ``` The problem is that when I read `chalk &amp; cheese` from the hidden field, JavaScript seems to lose the encoding. I do not want the value to be `chalk & cheese`. I want the literal `amp;` to be retained. Is there a JavaScript library or a jQuery method that will HTML-encode a string?
**EDIT:** This answer was posted long ago, and the `htmlDecode` function introduced an XSS vulnerability. It has been modified, changing the temporary element from a `div` to a `textarea`, reducing the XSS chance. But nowadays, I would encourage you to use the DOMParser API as suggested in [other answer](https://stackoverflow.com/a/34064434/5445). --- I use these functions: ``` function htmlEncode(value){ // Create a in-memory element, set its inner text (which is automatically encoded) // Then grab the encoded contents back out. The element never exists on the DOM. return $('<textarea/>').text(value).html(); } function htmlDecode(value){ return $('<textarea/>').html(value).text(); } ``` Basically a textarea element is created in memory, but it is never appended to the document. On the `htmlEncode` function I set the `innerText` of the element, and retrieve the encoded `innerHTML`; on the `htmlDecode` function I set the `innerHTML` value of the element and the `innerText` is retrieved. Check a running example [here](http://jsbin.com/ejuru).
The jQuery trick doesn't encode quote marks, and in IE it will strip your whitespace. Based on the **escape** templatetag in Django, which I guess is heavily used/tested already, I made this function which does what's needed. It's arguably simpler (and possibly faster) than any of the workarounds for the whitespace-stripping issue - and it encodes quote marks, which is essential if you're going to use the result inside an attribute value for example. ``` function htmlEscape(str) { return str .replace(/&/g, '&amp;') .replace(/"/g, '&quot;') .replace(/'/g, '&#39;') .replace(/</g, '&lt;') .replace(/>/g, '&gt;'); } // I needed the opposite function today, so adding here too: function htmlUnescape(str){ return str .replace(/&quot;/g, '"') .replace(/&#39;/g, "'") .replace(/&lt;/g, '<') .replace(/&gt;/g, '>') .replace(/&amp;/g, '&'); } ``` **Update 2013-06-17:** In the search for the fastest escaping I have found this implementation of a `replaceAll` method: <http://dumpsite.com/forum/index.php?topic=4.msg29#msg29> (also referenced here: [Fastest method to replace all instances of a character in a string](https://stackoverflow.com/questions/2116558/fastest-method-to-replace-all-instances-of-a-character-in-a-string/6714233#6714233)) Some performance results here: <http://jsperf.com/htmlencoderegex/25> It gives an identical result string to the built-in `replace` chains above. I'd be very happy if someone could explain why it's faster!? **Update 2015-03-04:** I just noticed that AngularJS is using exactly the method above: <https://github.com/angular/angular.js/blob/v1.3.14/src/ngSanitize/sanitize.js#L435> They add a couple of refinements - they appear to be handling an [obscure Unicode issue](http://en.wikipedia.org/wiki/UTF-8#Invalid_code_points) as well as converting all non-alphanumeric characters to entities. I was under the impression the latter was not necessary as long as you have a UTF-8 charset specified for your document. I will note that (4 years later) Django still does not do either of these things, so I'm not sure how important they are: <https://github.com/django/django/blob/1.8b1/django/utils/html.py#L44> **Update 2016-04-06:** You may also wish to escape the forward slash `/`. This is not required for correct HTML encoding; however, it is [recommended by OWASP](https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet#RULE_.231_-_HTML_Escape_Before_Inserting_Untrusted_Data_into_HTML_Element_Content) as an anti-XSS safety measure. (thanks to @JNF for suggesting this in comments) ``` .replace(/\//g, '&#x2F;'); ```
HTML-encoding lost when attribute read from input field
[ "javascript", "jquery", "html", "escaping", "html-escape-characters" ]
I define the dependencies for compiling, testing and running my programs in the pom.xml files. But Eclipse still has a separately configured build path, so whenever I change either, I have to manually update the other. I guess this is avoidable? How?
Use either [m2eclipse](http://m2eclipse.codehaus.org/) or [IAM](http://www.eclipse.org/iam/) (formerly [Q4E](http://code.google.com/p/q4e/)). Both provide (amongst other features) a means to recalculate the Maven dependencies whenever a clean build is performed and presents the dependencies to Eclipse as a classpath container. See this [comparison of Eclipse Maven integrations](http://docs.codehaus.org/display/MAVENUSER/Eclipse+Integration) for details. I would personally go for m2eclipse at the moment, particularly if you do development with AspectJ. There is an optional plugin for m2eclipse that exposes the aspectLibraries from the aspectj-maven-plugin to Eclipse that avoids a whole class of integration issues. To enable m2eclipse on an existing project, right-click on it in the **Package Explorer** view, then select **Maven**->**Enable Dependency Management**, this will add the Maven builder to the .project file, and the classpath container to the .classpath file. There is also an eclipse:eclipse goal, but I found this more trouble than it is worth as it creates very basic .project and .classpath files (though it is useful for initial project setup), so if you have any complications to your configuration you'll have to reapply them each time. To be fair this was an older version and it might be better at handling the edge cases now.
Running `mvn eclipse:eclipse` will build the Eclipse files from your Maven project, but you have to run it every time you change the pom.xml. Installing an Eclipse Maven plugin like m2eclipse can keep it up to date.
How can I make Maven set up (and keep up to date) the build path for my Eclipse project?
[ "java", "eclipse", "maven-2" ]
To run them together there are a few options available, but I have chosen to use different profiles for JUnit and TestNG. But now the problem is with excluding and including test cases. Since if we add the TestNG dependency to the main project in Maven it will skip all JUnit tests, I have decided to put it in a separate profile. So I am excluding TestNG tests in the default (main) profile from compiling, using the following entry in pom.xml: ``` <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.0.2</version> <configuration> <source>1.5</source> <target>1.5</target> <testExcludes> <testExclude>**/tests/**.*</testExclude> <testExclude>**/tests/utils/**.*</testExclude> </testExcludes> </configuration> </plugin> ``` and the same for the surefire plugin. So it works fine with the main profile and executes only JUnit4 tests. But when I use the testNG profile it won't execute the TestNG tests; it won't even compile them. I am using the following profile to execute them. ``` <profile> <id>testNG</id> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.5</source> <target>1.5</target> <testIncludes> <testInclude>**/tests/**.java</testInclude> <testInclude>**/tests/utils/**.*</testInclude> </testIncludes> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <skip>false</skip> <includes> <include>**/**.class</include> <include>**/tests/utils/**.class</include> </includes> </configuration> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>org.testng</groupId> <artifactId>testng</artifactId> <version>5.8</version> <scope>test</scope> <classifier>jdk15</classifier> </dependency> </dependencies> </profile> ``` Anybody have any idea why it is not including and compiling them?
The configuration for the compiler plugin excludes the TestNG types. The configuration from the profile is merged with the default configuration, so your effective compiler configuration is: ``` <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.5</source> <target>1.5</target> <testIncludes> <testInclude>**/tests/**.java</testInclude> <testInclude>**/tests/utils/**.*</testInclude> </testIncludes> <testExcludes> <testExclude>**/tests/**.*</testExclude> <testExclude>**/tests/utils/**.*</testExclude> </testExcludes> </configuration> </plugin> ``` This means that your TestNG types aren't ever compiled and therefore aren't run. If you specify the <excludes> section in the testNG profile it will override the default excludes and your TestNG types will be compiled and run. I can't remember if it will work with an empty excludes tag (i.e. <excludes/>), you may have to specify something like this to ensure the default configuration is overridden. ``` <testExcludes> <testExclude>dummy</testExclude> </testExcludes> ```
Simplest solution is like this ``` <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>${surefire.version}</version> <dependencies> <dependency> <groupId>org.apache.maven.surefire</groupId> <artifactId>surefire-junit47</artifactId> <version>${surefire.version}</version> </dependency> <dependency> <groupId>org.apache.maven.surefire</groupId> <artifactId>surefire-testng</artifactId> <version>${surefire.version}</version> </dependency> </dependencies> </plugin> ``` More info about this: [Mixing TestNG and JUnit tests in one Maven module – 2013 edition](http://solidsoft.wordpress.com/2013/03/12/mixing-testng-and-junit-tests-in-one-maven-module-2013-edition/)
Junit4 and TestNG in one project with Maven
[ "java", "maven-2", "junit4", "testng" ]
In Java we can use the **instanceof** keyword to check an is-a relationship. But is it possible to check a has-a relationship too?
You can, if you write your own method to do it: ``` public class Human { private Human parent; .. public boolean hasParent() { return parent!=null; } } ```
Do you mean you want to check if an object has a property of a particular type? There's no built-in way to do that - you'd have to use reflection. An alternative is to define an interface which has the relevant property - then check whether the object implements that interface using `instanceof`. Why do you want to do this though? Is it just speculation, or do you have a specific problem in mind? If it's the latter, please elaborate: there may well be a better way of approaching the task.
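As a language-neutral illustration of the reflection idea above, here is a small Python sketch; the `Car`/`Engine` classes and the `has_a` helper are made up for this example, and `hasattr`/`isinstance` play the role Java reflection would:

```python
class Engine:
    pass

class Car:
    def __init__(self):
        self.engine = Engine()  # Car "has-a" Engine

def has_a(obj, attr_name, attr_type):
    """Return True if obj has an attribute with the given name and type."""
    return hasattr(obj, attr_name) and isinstance(getattr(obj, attr_name), attr_type)

car = Car()
print(has_a(car, "engine", Engine))  # True: Car has-a Engine
print(has_a(car, "wheels", Engine))  # False: no such attribute
```

As the answer suggests, though, defining an interface that exposes the relevant property and checking for it is usually cleaner than inspecting members at runtime.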
Is it possible in Java to check a has-a relationship?
[ "java", "oop" ]
Vim 7.0.237 is driving me nuts with `indentexpr=HtmlIndentGet(v:lnum)`. When I edit JavaScript in a `<script>` tag indented to match the surrounding html and press enter, it moves the previous line to column 0. When I autoindent the whole file the script moves back to the right. Where is vim's non-annoying JavaScript-in-HTML/XHTML indent?
Have you tried [this plugin](http://www.vim.org/scripts/script.php?script_id=1830)?
[Here](https://stackoverflow.com/questions/620247/how-do-i-fix-incorrect-inline-javascript-indentation-in-vim) is a similar question with an accepted answer linking to two vim plugins: 1. [html improved indentation : A better indentation for HTML and embedded javascript](http://www.vim.org/scripts/script.php?script_id=1830), [mentioned by Manni](https://stackoverflow.com/questions/1201509/how-do-i-make-vim-indent-javascript-in-html/1201590#1201590). 2. [OOP javascript indentation : This indentation script for OOP javascript (especially for EXTJS)](http://www.vim.org/scripts/script.php?script_id=1936). One of them solved my JavaScript indentation problems.
How do I make vim indent JavaScript in HTML?
[ "javascript", "vim", "indentation" ]
So, you have a page: ``` <html><head> <script type="text/javascript" src="jquery.1.3.2.js"></script> <script type="text/javascript"> $(function() { var onajax = function(e) { alert($(e.target).text()); }; var onclick = function(e) { $(e.target).load('foobar'); }; $('#a,#b').ajaxStart(onajax).click(onclick); }); </script></head><body> <div id="a">foo</div> <div id="b">bar</div> </body></html> ``` Would you expect one alert or two when you clicked on 'foo'? I would expect just one, but I get two. Why does one event have multiple targets? This sure seems to violate the principle of least surprise. Am I missing something? Is there a way to distinguish, via the event object, which div the load() call was made upon? That would sure be helpful... EDIT: To clarify, the click stuff is just for the demo. Having a non-generic ajaxStart handler is my goal. I want div#a to do one thing when it is the subject of an ajax event and div#b to do something different. So, fundamentally, I want to be able to tell which div load() was called upon when I catch an ajax event. I'm beginning to think it's not possible. Perhaps I should take this up with jquery-dev...
OK, I went ahead and looked at the jQuery ajax and event code. jQuery only ever triggers ajax events globally (without a target element): ``` jQuery.event.trigger("ajaxStart"); ``` No other information goes along. :( So, when the trigger method gets such a call, it looks through jQuery.cache, finds all elements that have a handler bound for that event type, and calls jQuery.event.trigger again, but this time with that element as the target. So, it's exactly as it appears in the demo. When one actual ajax event occurs, jQuery triggers a non-bubbling event for every element to which a handler for that event is bound. So, I suppose I have to lobby them to send more info along with that "ajaxStart" trigger when a load() call happens. Update: Ariel committed support for this recently. It should be in jQuery 1.4 (or whatever they decide to call the next version).
When you set an ajaxStart handler, it's going to go off for both divs: since each div reacts to the ajaxStart event, every time ajax starts they will both fire. You should do something separate in each click handler and keep your ajaxStart handler generic.
jquery ajax events called on every element in the page when only targeted at one element
[ "javascript", "jquery", "ajax", "events" ]
Our application is a client/server setup, where the client is a standalone Java application that always runs in Windows, and the server is written in C and can run on either a Windows or a Unix machine. Additionally, we use Perl for doing various reports. Generally, the way the reports work is that we generate either a text file or an XML file on the server in Perl and then send that to the client. The client then uses FOP or similar to convert the XML into a PDF. Whether it is the text file or the eventual PDF, the user selects a printer via the Java client and the copied-over file then prints to the selected printer. One of our "reports" is used for creating barcodes. This one is different in that it uses Perl to fetch/format some data from the database and then sends that to a C application that creates some raw print data. This data is then sent directly to the printer (via a simple pipe in Unix or a custom application in Windows). The problem is that this in no way respects the printer selected by the user in the Java client. Also, we are unable to show a preview in said client. Ideally, I'd like to be able to convert the raw print data into a PS/PDF or similar on the server (or even on the client) and then send THAT to the printer from the client. This would allow me to show a preview as well as actually print to the selected printer. If I can't generate a preview, even just copying over the raw data in a file to the Java client and then sending that to the printer would probably be "good enough." I've been unable to find anything that is quite what I'm trying to accomplish, so any help would of course be appreciated. Edit: The RAW data is in PCL format. I managed to reconcile the source with a PCL language reference guide.
I figured out a way to generate the barcodes using XSL-FO directly. This is the "correct" answer based on our architecture and trying to do anything else would have been just a dirty hack.
Have you had a look at [iText](http://www.lowagie.com/iText/)?
Need to either convert RAW print data to ps/pdf or print it from Java
[ "java", "perl", "printing" ]
I'm trying everything I can to get phpDocumentor to allow me to use the DocBook tutorial format to supplement the documentation it creates: 1. I am using Eclipse 2. I've installed phpDocumentor via PEAR on an OSX machine 3. I can run it and auto-generate docs from my PHP classes 4. It won't format Tutorials - I can't find a solution. I've tried moving the .pkg example file all over the file structure, in subfolders using a similar name to the package that is being referenced within the code... I'm really at a loss - if someone could explain WHERE they place the .pkg and other DocBook files in relation to the code they are documenting, and how they trigger phpdoc to format it, I would appreciate it. I'm using this at the moment: ``` phpdoc -o HTML:Smarty:HandS -d "/path/to/code/classes/", "/path/to/code/docs/tutorials/" -t /path/to/output ```
I didn't expect to be answering my own question, but after 2 days of mind-bending pain and a weekend to experiment, it seems this is the problem: The tutorial and my examples should work, but ***there seems to be a minor flaw in the way phpdoc interprets the switch values***. Here is what I've been using: ``` phpdoc -o HTML:Smarty:HandS -d "/path/to/code/classes/", "/path/to/code/docs/tutorials/" -t /path/to/output ``` However, if you use the following: ``` phpdoc -o HTML:Smarty:HandS -d /path/to/code/classes/, /path/to/code/docs/tutorials/ -t /path/to/output ``` it will correctly format your tutorials and extended docs; all I did was **drop the double quotes** surrounding the directory paths. Single quotes don't work at all, as phpdoc itself wraps the directories in double quotes if there are no spaces... This does seem like a bug with phpdoc, and the same behaviour occurred with the web-based interface, so it's an internal issue. My original attempt should have worked but didn't. I will contact the developers and bring it to their attention. Problem solved.
Have you read [this](http://manual.phpdoc.org/HTMLframesConverter/phphtmllib/phpDocumentor/tutorial_tutorials.pkg.html#howto.location)? It suggests the following path scheme: *tutorials/package/package.pkg*, where package is the name of your package. Did you do it this way?
how to create phpdoc Tutorial / Extended pages to supplement commented code
[ "php", "documentation", "phpdoc" ]
Hi, I want to display a certain part (a div, for example) of my Wicket template only under a certain condition (for example, only if I have the data to fill it). The problem is: if I only add the panel (which fills the div) when I have the data, an exception is thrown every time I call the page without the data (because the referenced wicket-id is not added to the component tree). *The only solution which came to my mind was to add an empty panel if there is no data. This is not an ideal solution, because I get some unneeded code in the Java code and many empty divs in my rendered HTML.* **So is there a better solution to include several parts of a Wicket template only under a condition?**
Although this is an old question, here is one more solution: [wicket:enclosure](http://www.systemmobile.com/?page_id=253) (and [this](http://www.google.com/search?q=wicket+conditional+markup)). **Update**: Now I needed this functionality myself (for [jetwick](https://github.com/karussell/Jetwick)). I'm using two WebMarkupContainers, one for the loggedIn state and one for the loggedOut state, and set the right visibility: ``` if (loggedIn()) { WebMarkupContainer loggedInContainer = new WebMarkupContainer("loggedIn"); //## do something with the user User user = getUserSomeWhere(); loggedInContainer.add(new UserSearchLink("userSearchLink")); add(loggedInContainer); add(new WebMarkupContainer("loggedOut").setVisible(false)); } else { add(new WebMarkupContainer("loggedIn").setVisible(false)); WebMarkupContainer loggedOutContainer = new WebMarkupContainer("loggedOut"); loggedOutContainer.add(new LoginLink() {...}); add(loggedOutContainer); } ``` The advantage of this for me is that I prevent a NullPointerException in the line marked //##, and the enclosure feature of Wicket would look more ugly to me in this case, I think.
As @miaubiz said, you can call setVisible(false), or you can override the isVisible() method if the visibility is conditional on some other state (fields filled, for example).
Wicket: Conditional display in Template
[ "java", "templates", "wicket" ]
I'm creating a small app that must be able to receive URLs. If the app's window is open, I should be able to drag a link from a browser and drop it into the app - and the app will save the URL to a database. I'm creating this in Python/GTK, but I am a bit confused about the drag and drop functionality in it. So, how do I do it? Some sample code to implement drag/drop (my app uses a bit of this code)... ``` import pygtk pygtk.require('2.0') import gtk # function to print out the mime type of the drop item def drop_cb(wid, context, x, y, time): l.set_text('\n'.join([str(t) for t in context.targets])) # What should I put here to get the URL of the link? context.finish(True, False, time) return True # Create a GTK window and Label, and hook up # drag n drop signal handlers to the window w = gtk.Window() w.set_size_request(200, 150) w.drag_dest_set(0, [], 0) w.connect('drag_drop', drop_cb) w.connect('destroy', lambda w: gtk.main_quit()) l = gtk.Label() w.add(l) w.show_all() # Start the program gtk.main() ```
You must fetch the data yourself. Here's a simple working example that will set a label to the url dropped: ``` #!/usr/local/env python import pygtk pygtk.require('2.0') import gtk def motion_cb(wid, context, x, y, time): l.set_text('\n'.join([str(t) for t in context.targets])) context.drag_status(gtk.gdk.ACTION_COPY, time) # Returning True which means "I accept this data". return True def drop_cb(wid, context, x, y, time): # Some data was dropped, get the data wid.drag_get_data(context, context.targets[-1], time) return True def got_data_cb(wid, context, x, y, data, info, time): # Got data. l.set_text(data.get_text()) context.finish(True, False, time) w = gtk.Window() w.set_size_request(200, 150) w.drag_dest_set(0, [], 0) w.connect('drag_motion', motion_cb) w.connect('drag_drop', drop_cb) w.connect('drag_data_received', got_data_cb) w.connect('destroy', lambda w: gtk.main_quit()) l = gtk.Label() w.add(l) w.show_all() gtk.main() ```
To be sure you get only the data of one file or directory when DnD'ing a list of files from your file explorer, you could use something like: ``` data.get_text().split(None,1)[0] ``` The code for the "got\_data\_cb" method would then look like this: ``` def got_data_cb(wid, context, x, y, data, info, time): # Got data. l.set_text(data.get_text().split(None,1)[0]) context.finish(True, False, time) ``` This splits the data on any whitespace and returns the first item.
Python GTK Drag and Drop - Get URL
[ "python", "drag-and-drop", "gtk", "gdk" ]
I have to execute an `.exe` which is available on some drive. How can I do this using C++? I am doing it like this: ``` #include <stdio.h> #include <conio.h> #include <windows.h> void main() { STARTUPINFO si; PROCESS_INFORMATION pi; ZeroMemory( &si, sizeof(si) ); si.cb = sizeof(si); ZeroMemory( &pi, sizeof(pi) ); if(!CreateProcess(L"c:\\DOTNET.exe",NULL,NULL, NULL,FALSE, 0,NULL,NULL,&si,&pi ) ) { printf( "CreateProcess failed (%d).\n", GetLastError() ); } else { printf("Prcess Creation Success"); } WaitForSingleObject( pi.hProcess, INFINITE ); CloseHandle( pi.hProcess ); CloseHandle( pi.hThread ); getch(); } ``` But every time, it shows this error: ``` process creation failed with error code 2 (i.e can not find the path specified) ``` But I placed DOTNET.exe at `c:\DOTNET.exe`. What is wrong in this code?
I've just tested your code and it's working here with : ``` if(!CreateProcess(L"C:\\Program Files\\Mozilla Firefox\\firefox.exe",NULL,NULL, NULL,FALSE, 0,NULL,NULL,&si,&pi ) ) ``` A C++/Win32 solution for your C/Win32 code :) ``` void ExecuteAndWait (wstring toto) { STARTUPINFO si = { sizeof(si) }; PROCESS_INFORMATION pi; vector<TCHAR> V( toto.length() + 1); for (int i=0;i< (int) toto.length();i++) V[i] = toto[i]; CreateProcess(NULL, &V[0],0, 0, FALSE, 0, 0, 0, &si, &pi); WaitForSingleObject(pi.hProcess, INFINITE); CloseHandle(pi.hProcess); CloseHandle(pi.hThread); } ```
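For comparison only (not part of the Win32 question above): the same create-then-wait flow collapses to a couple of lines in Python's subprocess module; the child command here is just a placeholder.

```python
import subprocess
import sys

# Launch a child process and block until it exits, like CreateProcess(...)
# followed by WaitForSingleObject(pi.hProcess, INFINITE) in the Win32 code.
result = subprocess.run(
    [sys.executable, "-c", "print('child ran')"],
    capture_output=True,
    text=True,
)

print("exit code:", result.returncode)         # 0 on success
print("child output:", result.stdout.strip())  # child ran
```

Unlike the Win32 version, there are no handles to close; `subprocess.run` cleans up the process object for you.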
Are you building a Unicode executable? Try \_T("C:\\DOTNET.exe") instead. Or does your DOTNET.exe have some dependent DLLs which are not being found?
How to create a process in C++ to execute an exe?
[ "c++", "visual-c++" ]
I would like to know how you can `echo` the IP address of the user so that you can use it in your login cookie. My code: ``` <?php echo "$_SERVER['REMOTE_ADDR']"; ?> ``` I run it and I get this in Firefox: ``` Parse error: syntax error, unexpected T_ENCAPSED_AND_WHITESPACE, expecting T_STRING or T_VARIABLE or T_NUM_STRING in /var/www/file7.php on line 2 ``` **How can you get the IP address in PHP?**
No need for `"` ``` echo $_SERVER['REMOTE_ADDR']; ```
``` <?php echo $_SERVER['REMOTE_ADDR']; ?> ``` You can also do this: ``` <?php echo "{$_SERVER['REMOTE_ADDR']}"; ?> ```
How to print out an associative array item?
[ "php" ]
I have this string: ``` [a [a b] [c e f] d] ``` and I want a list like this ``` lst[0] = "a" lst[1] = "a b" lst[2] = "c e f" lst[3] = "d" ``` My current implementation that I don't think is elegant/pythonic is two recursive functions (one splitting with '[' and the other with ']' ) but I am sure it can be done using list comprehensions or regular expressions (but I can't figure out a sane way to do it). Any ideas?
Actually, this really isn't a recursive data structure; note that `a` and `d` are in separate lists. You're just splitting the string over the bracket characters and getting rid of some white space. I'm sure somebody can find something cleaner, but if you want a one-liner, something like the following should get you close: ``` import re parse_str = '[a [a b] [c e f] d]' lst = [s.strip() for s in re.split('[\[\]]', parse_str) if s.strip()] >>>lst ['a', 'a b', 'c e f', 'd'] ```
If it's a recursive data structure, then recursion is good to traverse it. *However*, parsing the string to create the structure does not need to be recursive. One alternative way I would do it is iterative: ``` origString = "[a [a b] [c [x z] d e] f]".split(" ") stack = [] for element in origString: if element[0] == "[": newLevel = [ element[1:] ] stack.append(newLevel) elif element[-1] == "]": stack[-1].append(element[0:-1]) finished = stack.pop() if len(stack) != 0: stack[-1].append(finished) else: root = finished else: stack[-1].append(element) print root ``` Of course, this can probably be improved, and it will create lists of lists of lists of ... of strings, which isn't exactly what your example wanted. However, it does handle arbitrary depth of the tree.
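Since the question mentions recursion, a third option is a small recursive-descent parser that preserves the nesting (unlike the flat regex split); the `parse` helper below is my own sketch, not from either answer:

```python
def parse(tokens):
    """Recursively build nested lists from a token stream (consumes tokens)."""
    result = []
    while tokens:
        tok = tokens.pop(0)
        if tok == '[':
            result.append(parse(tokens))   # descend into a sub-list
        elif tok == ']':
            return result                  # close the current level
        else:
            result.append(tok)
    return result

# Pad brackets with spaces so split() tokenizes them cleanly.
s = "[a [a b] [c e f] d]"
tokens = s.replace('[', ' [ ').replace(']', ' ] ').split()
tree = parse(tokens)[0]
print(tree)  # ['a', ['a', 'b'], ['c', 'e', 'f'], 'd']
```

The trailing `[0]` strips the outermost bracket pair, so the result corresponds to the string's single enclosing list.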
Tokenizing blocks of code in Python
[ "python", "regex", "list-comprehension", "tokenize" ]
Guys, I am having an issue with Telerik's RadPanelBar control. I have the Q1 2009 version of the controls. I have the following ASP.NET code: ``` <telerik:RadPanelBar Width="297px" ID="RadPanelBar1" runat="server" Skin="Web20" AllowCollapseAllItems="True" ExpandMode="SingleExpandedItem" PersistStateInCookie="True"> <Items> <telerik:RadPanelItem runat="server" Text="Standard Reports" Expanded="True"> <ItemTemplate> ... Standard HTML Template code here ... </ItemTemplate> </telerik:RadPanelItem> <telerik:RadPanelItem runat="server" Expanded="false" Text="NonStandard Reports"> <ItemTemplate> <asp:Label runat="server" Text="test"></asp:Label> </ItemTemplate> </telerik:RadPanelItem> </Items> </telerik:RadPanelBar> ``` Everything works fine, except I cannot expand or collapse the headers. My cursor changes to a hand when I hover over the headers, but nothing happens when I click on a header. Can someone help me out? Thanks
If you set the ItemTemplate of top-level items, you will define the content of the item, not the collapsible area. To solve the problem, define a child item and set its ItemTemplate property instead: ``` <telerik:RadPanelBar runat="server"> <Items> <telerik:RadPanelItem Text="Standard Reports"> <Items> <telerik:RadPanelItem> <ItemTemplate> ... Standard HTML Template code here ... </ItemTemplate> </telerik:RadPanelItem> </Items> </telerik:RadPanelItem> </Items> </telerik:RadPanelBar> ``` I hope this helps!
Do you have a telerik:RadScriptManager on the page?
RadPanelBar collapse/expand issue
[ "c#", "asp.net", "vb.net", "telerik", "rad-controls" ]
OK, I am creating an admin interface for my custom blog at the URL /admin. Is it possible to use the same includes (including autoload) as the root directory? If possible, I would also like to automatically correct the links in the navigation so that index.php in / becomes ../index.php when accessed from /admin. Thanks, Nico
The best practice for this is to define an absolute-path constant (ABS_PATH below) that contains the directory that everything is located under. After that, you can simply copy and paste everything, because it defines the 'full' path, which doesn't change from directory to directory. Example: ``` define("ABS_PATH", $_SERVER['DOCUMENT_ROOT']); or define("ABS_PATH", dirname(__FILE__)); // This defines the path as the directory the file is in. ``` Then at any point you can simply do this to include a file: ``` include(ABS_PATH . "/path/to/file"); ```
Easiest way would be to use absolute paths / URLs. For the URLs, define a constant/variable somewhere that points to the root of your application, like: ``` define('ROOT_URL', 'http://www.example.com'); ``` or ``` $root_url = 'http://www.example.com'; ``` And use it in every link, like: ``` <a href="{$root_url}/my-page.php">blah</a> ``` This way, you're always OK (and the day you install your project on another server, or in a subdirectory, you only have one constant/variable to modify, and everything still works). For includes / requires, always use absolute paths too; one solution is to use [`dirname`](http://php.net/dirname), like this: ``` include dirname(__FILE__) . '/my_file.php'; include dirname(__FILE__) . '/../my-other-file.php'; ``` `__FILE__` is the current file, where you are writing this line; `dirname` gets the path (the full path) to the directory containing that file. With that, you never have to worry about the relative paths of your files.
Making php includes work in a sub-directory
[ "php", "directory", "include" ]
In the application I am developing I am facing a situation; I want to know if there is a design pattern for it. It's as follows: 1. The user is presented on a web interface with different algorithms for a process 2. The user's selection is stored in the database. 3. Now, the application should perform computations differently depending on the algorithm selected. What is a good strategy to implement this? Right now what we are doing is: 1. Have a reference DB table which has all the algorithm types and the corresponding class names in code (e.g. for the quick sort algorithm we store QuickSort). This has to be manually updated every time a new algorithm comes along 2. In the code, get the algorithm type and use reflection to instantiate the appropriate algorithm type. In C# we use code similar to the below: System.Reflection.Assembly types = System.Reflection.Assembly.LoadFile(System.Reflection.Assembly.GetExecutingAssembly().Location); foreach (Type t in types.GetTypes()) if (t.Name==classname) Activator.CreateInstance(t); //classname comes from the list of all the class types that is loaded from the reference table in the DB. My gut feeling is there should be a simpler/better way to do this, as it seems a very standard problem. I know the [strategy pattern](http://en.wikipedia.org/wiki/Strategy_pattern), but what I want is to simplify and possibly remove manual tasks.
You can use Interface + Reflection to avoid storing algorithm names in database. Create an Interface IMySortingAlgorithms as, ``` public interface IMySortingAlgorithms { string Name { get; } string[] Sort(string[] input); } ``` Now, write a Factory that uses reflection to get the sorting algorithm. ``` public static class MyAlgoFactory { private static Dictionary<string, IMySortingAlgorithms> m_dict; /// <summary> /// For all the assmeblies in the current application domain, /// Get me the object of all the Types that implement IMySortingAlgorithms /// </summary> static MyAlgoFactory() { var type = typeof(IMySortingAlgorithms); m_dict = AppDomain.CurrentDomain.GetAssemblies(). SelectMany(s => s.GetTypes()). Where(p => {return type.IsAssignableFrom(p) && p != type;}). Select(t=> Activator.CreateInstance(t) as IMySortingAlgorithms). ToDictionary(i=> i.Name); } public static IMySortingAlgorithms GetSortingAlgo(string name) { return m_dict[name]; } } ``` All your sorting algorithms can now implement this interface. ``` public class MySortingAlgo1 : IMySortingAlgorithms { #region IMySortingAlgorithms Members public string Name { get { return "MySortingAlgo1"; } } public string[] Sort(string[] input) { throw new NotImplementedException(); } #endregion } ``` This way you need not add the class names to database whenever you create a new class for sorting. Following is the non-Linq version of MyAlgoFactory ``` /// <summary> /// For all the assmeblies in the current application domain, /// Get me the object of all the Types that implement IMySortingAlgorithms /// </summary> static MyAlgoFactory() { m_dict = new Dictionary<string, IMySortingAlgorithms>(); var type = typeof(IMySortingAlgorithms); foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies()) { foreach (Type p in asm.GetTypes()) { if (type.IsAssignableFrom(p) && p != type) { IMySortingAlgorithms algo = Activator.CreateInstance(p) as IMySortingAlgorithms; m_dict[algo.Name] = algo; } } } } ```
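The same discover-by-reflection idea carries over to other languages; here is a minimal Python sketch (the class names and the `get_algorithm` factory are mine) that uses `__subclasses__()` in place of assembly scanning:

```python
class SortingAlgorithm:
    """Base class; concrete strategies are discovered via __subclasses__()."""
    name = None

    def sort(self, items):
        raise NotImplementedError

class QuickSort(SortingAlgorithm):
    name = "QuickSort"
    def sort(self, items):
        return sorted(items)  # stand-in for a real quicksort implementation

class ReverseSort(SortingAlgorithm):
    name = "ReverseSort"
    def sort(self, items):
        return sorted(items, reverse=True)

def get_algorithm(name):
    # Build the registry by inspection; no database table of class names needed.
    registry = {cls.name: cls() for cls in SortingAlgorithm.__subclasses__()}
    return registry[name]

print(get_algorithm("QuickSort").sort([3, 1, 2]))    # [1, 2, 3]
print(get_algorithm("ReverseSort").sort([3, 1, 2]))  # [3, 2, 1]
```

As in the C# version, adding a new algorithm is just a matter of defining a new subclass; nothing else has to be updated manually.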
Yeah, you're right, what you want is the [Strategy](http://en.wikipedia.org/wiki/Strategy_pattern) pattern. What you really want to do, though, is define an interface which each of your algorithms uses that allows you to specify the parameters for your algorithm and which allows you to invoke each of them simply through the interface, instead of the ugly reflection process you describe in the question.
Different algorithm for different inputs
[ "c#", "design-patterns", "strategy-pattern" ]
I think that I understand the difference between Release and Debug build modes. The main differences are that in Debug mode the executable produced isn't optimized (as optimization could make debugging harder) and the debug symbols are included. While building PCRE, one of the external dependencies for WinMerge, I noticed a build mode that I hadn't seen before: RelWithDebInfo. The difference between Debug and RelWithDebInfo is mentioned here: <http://www.cmake.org/pipermail/cmake/2001-October/002479.html>. Excerpt: "RelwithDebInfo is quite similar to Release mode. It produces fully optimized code, but also builds the program database, and inserts debug line information to give a debugger a good chance at guessing where in the code you are at any time." This sounds like a really good idea; however, it's not necessarily obvious how to set it up. This link describes how to enable this for VC++: <http://www.cygnus-software.com/papers/release_debugging.html> Am I missing something, or does it not make sense to compile all release code as RelWithDebInfo?
> Am I missing something, or does it not > make sense to compile all release code > as RelWithDebInfo? It depends on how much you trust your customer with the debugging information. Additional Info: gcc encodes the debugging information into the object code. Here is the pdb equivalent for gcc: [How to generate gcc debug symbol outside the build target?](https://stackoverflow.com/questions/866721/how-to-generate-gcc-debug-symbol-outside-the-build-target) Note, that cmake doesn't appear to support this approach out of the box.
As far as I'm concerned, shipping code to customers without having corresponding debug symbols stored in-house is a recipe for hair-loss when it comes to debugging production problems. Debugging Release builds with debug symbols is rarely any different from debugging Debug builds, so I'd recommend always doing this. That said, I don't know if there are any drawbacks. It'd be interesting to hear, if so.
CMake build mode RelWithDebInfo
[ "c++", "visual-studio", "cmake" ]
I just enabled error reporting, and wow, what a shocker: I have probably hundreds if not thousands of notices like this ``` Notice: Undefined index: action in C:\webserver\htdocs\header.inc.php on line 18 ``` I understand that they occur because I am reading a variable (or array index) without setting it first. But if a page reports this for, say, 50 variables, is there an easier way to code that page properly and fix them all? And I don't mean to just hide them; I think it would be best to fix them. Here is an example of the line from the notice above: ``` if ($_GET['p'] == "account.edit.topfriends" || $_GET['action'] == "newmember" || $_GET['p'] == "account.profile.name") { //some more code here } ```
I usually like to use [ternary](http://www.php.net/ternary) statements at the top of my scripts to initialise values. ``` $_GET['p'] = (isset($_GET['p']) ? $_GET['p'] : 'default'); ``` Sure you could probably use a more generic approach but that method can prove troublesome as different variables can have different default values.
As rezzif mentioned, what you need to do is check with an isset() call. If you're using arrays a lot and don't want to go back and add a bunch of isset() calls, you can always use a function. Something like: ``` function get_index($array, $index) { return isset($array[$index]) ? $array[$index] : null; } ``` Then you could change your if-statement to something like: ``` if (get_index($_GET, 'p') == "account.edit.topfriends" || get_index($_GET, 'action') == "newmember" || get_index($_GET, 'p') == "account.profile.name") { //some more code here } ``` If all the checks being done are against `$_GET`, you could always nix the first parameter of the function and hardcode $\_GET in it; my example assumes you're doing this against several different arrays. This solution isn't necessarily the most elegant, but it should get the job done.
How to get rid of hundreds of PHP undefined index notices?
[ "php" ]
I have a loop that looks like this: ``` for (int i = 0; i < dim * dim; i++) ``` Is the condition in a for loop re-evaluated on every loop? If so, would it be more efficient to do something like this?: ``` int dimSquare = dim * dim; for (int i = 0; i < dimSquare; i++) ```
In general, if you would for example change the value of "dim" inside your loop, it would be re-evaluated every time. But since that is not the case in your example, a decent compiler would optimize your code and you wouldn't see any difference in performance.
Yes, semantically it will be evaluated on every loop. In *some cases*, compilers *may* be able to remove the condition from the loop automatically - but not always. In particular: ``` void foo(const struct rect *r) { for (int i = 0; i < r->width * r->height; i++) { quux(); } } ``` The compiler will not be able to move the multiplication out in this case, as for all it knows `quux()` modifies `r`. In general, usually only local variables are eligible for lifting expressions out of a loop (assuming you never take their address!). While under some conditions structure members may be eligible as well, there are so many things that may cause the compiler to assume everything in memory has changed - writing to just about any pointer, or calling virtually any function, for example. So if you're using any non-locals there, it's best to assume the optimization won't occur. That said, in general, I'd only recommend proactively moving potentially expensive code out of the condition if it either: * Doesn't hurt readability to do so * Obviously will take a *very* long time (eg, network accesses) * Or shows up as a hotspot on profiling.
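The per-iteration evaluation is easy to demonstrate in an interpreted language, where no optimizer hoists the condition for you. A Python sketch (the `limit()` function is a stand-in for `dim * dim`):

```python
calls = 0

def limit():
    """Stand-in for a condition expression such as dim * dim."""
    global calls
    calls += 1
    return 5

# The condition is re-evaluated on every test, like `i < dim * dim` in C++.
i = 0
while i < limit():
    i += 1
checked_each_time = calls
print(checked_each_time)  # 6: five passing tests plus the final failing one

# Hoisted version: the bound is computed once, before the loop.
calls = 0
bound = limit()
i = 0
while i < bound:
    i += 1
hoisted = calls
print(hoisted)  # 1
```

In C++ an optimizing compiler can often perform this hoisting automatically for local variables, as discussed above, but not when the condition reads through memory that a called function might modify.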
Is the condition of a loop re-evaluated each iteration?
[ "c++", "for-loop", "optimization", "compiler-optimization", "loop-invariant" ]
I have a form with a custom control on it. Assume the custom control has the focus. If I show a message box from that form and close it by pressing Enter on either the OK or Cancel button, the custom control then gets a keyboard event (OnKeyUp) for the Enter key. This doesn't happen if the space key is used to "press" either the OK or Cancel button. It's like the MessageBox doesn't consume the Enter key for some reason. I tried this with the Form's KeyPreview property turned on, but there was no difference. Does anyone know how to stop that Enter message after it is used to press the MessageBox button?
Thank you for the answers. Yes, I could ignore it, but most of the time I need that event. The solution turned out to be using OnKeyDown or OnKeyPress instead of OnKeyUp, as those two events ARE consumed by the message box.
Can't you ignore it in code? This is VB syntax: ``` Private Sub frmEdit_KeyUp(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs) Handles Me.KeyUp If e.KeyCode = Keys.Enter Then e.Handled = True End If End Sub ```
Enter Key Not Consumed by MessageBox, C#, Windows Forms
[ "c#", "winforms" ]
I am using the Numeric Library Bindings for Boost uBLAS to solve a simple linear system. The following works fine, except it is limited to handling matrices A(m x m) for relatively small m. In practice I have a much larger matrix with dimension m = 10^6 (up to 10^7). Is there an existing C++ approach for solving Ax=b that uses memory efficiently? ``` #include<boost/numeric/ublas/matrix.hpp> #include<boost/numeric/ublas/io.hpp> #include<boost/numeric/bindings/traits/ublas_matrix.hpp> #include<boost/numeric/bindings/lapack/gesv.hpp> #include <boost/numeric/bindings/traits/ublas_vector2.hpp> // compileable with this command //g++ -I/home/foolb/.boost/include/boost-1_38 -I/home/foolb/.boostnumbind/include/boost-numeric-bindings solve_Axb_byhand.cc -o solve_Axb_byhand -llapack namespace ublas = boost::numeric::ublas; namespace lapack= boost::numeric::bindings::lapack; int main() { ublas::matrix<float,ublas::column_major> A(3,3); ublas::vector<float> b(3); for(unsigned i=0;i < A.size1();i++) for(unsigned j =0;j < A.size2();j++) { std::cout << "enter element "<<i << j << std::endl; std::cin >> A(i,j); } std::cout << A << std::endl; b(0) = 21; b(1) = 1; b(2) = 17; lapack::gesv(A,b); std::cout << b << std::endl; return 0; } ```
Short answer: Don't use Boost's `LAPACK` bindings, these were designed for dense matrices, not sparse matrices, use `UMFPACK` instead. Long answer: `UMFPACK` is one of the best libraries for solving Ax=b when A is large and sparse. * <http://www.cise.ufl.edu/research/sparse/umfpack/> * <http://www.cise.ufl.edu/research/sparse/umfpack/UMFPACK/Doc/QuickStart.pdf> Below is sample code (based on `umfpack_simple.c`) that generates a simple `A` and `b` and solves `Ax = b`. ``` #include <stdlib.h> #include <stdio.h> #include "umfpack.h" int *Ap; int *Ai; double *Ax; double *b; double *x; /* Generates a sparse matrix problem: A is n x n tridiagonal matrix A(i,i-1) = -1; A(i,i) = 3; A(i,i+1) = -1; */ void generate_sparse_matrix_problem(int n){ int i; /* row index */ int nz; /* nonzero index */ int nnz = 2 + 3*(n-2) + 2; /* number of nonzeros*/ int *Ti; /* row indices */ int *Tj; /* col indices */ double *Tx; /* values */ /* Allocate memory for triplet form */ Ti = malloc(sizeof(int)*nnz); Tj = malloc(sizeof(int)*nnz); Tx = malloc(sizeof(double)*nnz); /* Allocate memory for compressed sparse column form */ Ap = malloc(sizeof(int)*(n+1)); Ai = malloc(sizeof(int)*nnz); Ax = malloc(sizeof(double)*nnz); /* Allocate memory for rhs and solution vector */ x = malloc(sizeof(double)*n); b = malloc(sizeof(double)*n); /* Construct the matrix A*/ nz = 0; for (i = 0; i < n; i++){ if (i > 0){ Ti[nz] = i; Tj[nz] = i-1; Tx[nz] = -1; nz++; } Ti[nz] = i; Tj[nz] = i; Tx[nz] = 3; nz++; if (i < n-1){ Ti[nz] = i; Tj[nz] = i+1; Tx[nz] = -1; nz++; } b[i] = 0; } b[0] = 21; b[1] = 1; b[2] = 17; /* Convert Triplet to Compressed Sparse Column format */ (void) umfpack_di_triplet_to_col(n,n,nnz,Ti,Tj,Tx,Ap,Ai,Ax,NULL); /* free triplet format */ free(Ti); free(Tj); free(Tx); } int main (void) { double *null = (double *) NULL ; int i, n; void *Symbolic, *Numeric ; n = 500000; generate_sparse_matrix_problem(n); (void) umfpack_di_symbolic (n, n, Ap, Ai, Ax, &Symbolic, null, null); (void) umfpack_di_numeric 
(Ap, Ai, Ax, Symbolic, &Numeric, null, null); umfpack_di_free_symbolic (&Symbolic); (void) umfpack_di_solve (UMFPACK_A, Ap, Ai, Ax, x, b, Numeric, null, null); umfpack_di_free_numeric (&Numeric); for (i = 0 ; i < 10 ; i++) printf ("x [%d] = %g\n", i, x [i]); free(b); free(x); free(Ax); free(Ai); free(Ap); return (0); } ``` The function `generate_sparse_matrix_problem` creates the matrix `A` and the right-hand side `b`. The matrix is first constructed in triplet form. The vectors Ti, Tj, and Tx fully describe A. Triplet form is easy to create but efficient sparse matrix methods require Compressed Sparse Column format. Conversion is performed with `umfpack_di_triplet_to_col`. A symbolic factorization is performed with `umfpack_di_symbolic`. A sparse LU decomposition of `A` is performed with `umfpack_di_numeric`. The lower and upper triangular solves are performed with `umfpack_di_solve`. With `n` as 500,000, on my machine, the entire program takes about a second to run. Valgrind reports that 369,239,649 bytes (just a little over 352 MB) were allocated. Note this [page](http://www.boost.org/doc/libs/1_39_0/libs/numeric/ublas/doc/matrix_sparse.htm) discusses Boost's support for sparse matrices in Triplet (Coordinate) and Compressed format. If you like, you can write routines to convert these boost objects to the simple arrays `UMFPACK` requires as input.
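The triplet-to-compressed-column conversion that `umfpack_di_triplet_to_col` performs is worth seeing spelled out. Below is a small, hypothetical C++ version of the same idea (count entries per column, prefix-sum into column pointers, then scatter each entry into place). Note that UMFPACK's own routine additionally sums duplicate entries and sorts row indices, which this sketch does not attempt.

```cpp
#include <cstddef>
#include <vector>

// Compressed Sparse Column storage: col_ptr has n_cols + 1 entries, and
// column j's nonzeros live at indices [col_ptr[j], col_ptr[j+1]).
struct Csc {
    std::vector<int> col_ptr;
    std::vector<int> row_ind;
    std::vector<double> val;
};

// Convert triplet (COO) form to CSC. Assumes column indices are in range
// and that there are no duplicate (row, col) entries.
Csc triplet_to_csc(int n_cols, const std::vector<int>& Ti,
                   const std::vector<int>& Tj, const std::vector<double>& Tx) {
    Csc m;
    m.col_ptr.assign(n_cols + 1, 0);
    for (std::size_t k = 0; k < Tj.size(); ++k)  // count entries per column
        ++m.col_ptr[Tj[k] + 1];
    for (int j = 0; j < n_cols; ++j)             // prefix sum -> column starts
        m.col_ptr[j + 1] += m.col_ptr[j];
    m.row_ind.resize(Ti.size());
    m.val.resize(Tx.size());
    std::vector<int> next(m.col_ptr.begin(), m.col_ptr.end() - 1);
    for (std::size_t k = 0; k < Ti.size(); ++k) {
        int dest = next[Tj[k]]++;                // next free slot in column
        m.row_ind[dest] = Ti[k];
        m.val[dest] = Tx[k];
    }
    return m;
}

// Self-check helper: 2x2 matrix with entries (0,0)=1, (1,0)=2, (1,1)=3.
Csc csc_example() {
    return triplet_to_csc(2, {0, 1, 1}, {0, 0, 1}, {1.0, 2.0, 3.0});
}
```

For real problems you would still hand the resulting arrays to UMFPACK; this is only to make the data layout concrete.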
I assume that your matrix is dense. If it is sparse, you can find numerous specialised algorithms as already mentioned by [DeusAduro](https://stackoverflow.com/questions/1242190/c-memory-efficient-solution-for-axb-linear-algebra-system/1242210#1242210) and [duffymo](https://stackoverflow.com/questions/1242190/c-memory-efficient-solution-for-axb-linear-algebra-system/1242217#1242217). If you don't have a (large enough) cluster at your disposal, you want to look at out-of-core algorithms. [ScaLAPACK](http://www.netlib.org/scalapack/) has a few out-of-core solvers as part of its [prototype package](http://www.netlib.org/scalapack/prototype/), see the documentation [here](http://www.netlib.org/scalapack/prototype/readme.outofcore) and [Google](http://www.google.com/search?q=out-of-core+extension+to+ScaLAPACK) for more details. Searching the web for "out-of-core LU / (matrix) solvers / packages" will give you links to a wealth of further algorithms and tools. I am not an expert on those. For this problem, most people would use a cluster, however. The package you will find on almost any cluster is ScaLAPACK, again. In addition, there are usually numerous other packages on the typical cluster, so you can pick and choose what suits your problem (examples [here](http://www.tacc.utexas.edu/resources/software/) and [here](http://www.netlib.org/utk/people/JackDongarra/la-sw.html)). Before you start coding, you probably want to quickly check how long it will take to solve your problem. A typical solver takes about O(3\*N^3) flops (N is dimension of matrix). If N = 100000, you are hence looking at 3000000 Gflops. Assuming that your in-memory solver does 10 Gflops/s per core, you are looking at 3 1/2 days on a single core. As the algorithms scale well, increasing the number of cores should reduce the time close to linearly. On top of that comes the I/O.
C++ Memory Efficient Solution for Ax=b Linear Algebra System
[ "", "c++", "boost", "linear-algebra", "lapack", "umfpack", "" ]
I have a problem... I don't understand template metaprogramming. The problem is, that I’ve read a lot about it, but it still doesn’t make much sense to me. **Fact nr.1**: Template Metaprogramming is **faster** ``` template <int N> struct Factorial { enum { value = N * Factorial<N - 1>::value }; }; template <> struct Factorial<0> { enum { value = 1 }; }; // Factorial<4>::value == 24 // Factorial<0>::value == 1 void foo() { int x = Factorial<4>::value; // == 24 int y = Factorial<0>::value; // == 1 } ``` So this metaprogram is faster ... because of the constant literal. **BUT**: Where in *the real world* do we have constant literals? Most programs I use react on user input. **FACT nr. 2** : Template metaprogramming can accomplish **better maintainability.** Yeah, the factorial example may be maintainable, but when it comes to complex functions, I and most other C++ programmers can't read them. Also, the debugging options are very poor (or at least I don't know how to debug). When does template metaprogramming make sense?
Just as factorial is not a realistic example of recursion in non-functional languages, neither is it a realistic example of template metaprogramming. It's just the standard example people reach for when they want to show you recursion. In writing templates for realistic purposes, such as in everyday libraries, often the template has to adapt what it does depending on the type parameters it is instantiated with. This can get quite complex, as the template effectively chooses what code to generate, conditionally. This is what template metaprogramming is; if the template has to loop (via recursion) and choose between alternatives, it is effectively like a small program that executes during compilation to generate the right code. Here's a really nice tutorial from the boost documentation pages (actually excerpted from a [brilliant book](http://www.informit.com/store/product.aspx?isbn=0321227255), well worth reading). <http://www.boost.org/doc/libs/1_39_0/libs/mpl/doc/tutorial/representing-dimensions.html>
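As a small concrete illustration of "the template chooses what code to generate", here is a C++03-style trait plus tag dispatch. Nothing here comes from a particular library; the names (`IsPointer`, `Bool2Type`, `describe`) are made up, but the mechanism is the one the answer describes: overload resolution on a compile-time boolean selects which branch gets compiled in.

```cpp
// A trait: a compile-time boolean computed from the type.
template <typename T> struct IsPointer     { static const bool value = false; };
template <typename T> struct IsPointer<T*> { static const bool value = true;  };

// Map a compile-time bool to a distinct type so that overload resolution
// can pick an implementation -- the classic "tag dispatch" idiom.
template <bool B> struct Bool2Type {};

template <typename T> int describe(T, Bool2Type<true>)  { return 1; } // pointer branch
template <typename T> int describe(T, Bool2Type<false>) { return 0; } // value branch

// Public entry point: which branch is instantiated is decided entirely at
// compile time from IsPointer<T>::value; the other branch costs nothing.
template <typename T>
int describe(T x) {
    return describe(x, Bool2Type<IsPointer<T>::value>());
}
```

Real libraries use exactly this shape to, say, pick `memcpy` for trivially copyable types and element-wise copying otherwise.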
I use template metaprogramming for SSE swizzling operators to optimize shuffles during compile time. SSE swizzles ('shuffles') can only be masked with a byte literal (immediate value), so we created a 'mask merger' template class that merges masks during compile time when multiple shuffles occur: ``` template <unsigned target, unsigned mask> struct _mask_merger { enum { ROW0 = ((target >> (((mask >> 0) & 3) << 1)) & 3) << 0, ROW1 = ((target >> (((mask >> 2) & 3) << 1)) & 3) << 2, ROW2 = ((target >> (((mask >> 4) & 3) << 1)) & 3) << 4, ROW3 = ((target >> (((mask >> 6) & 3) << 1)) & 3) << 6, MASK = ROW0 | ROW1 | ROW2 | ROW3, }; }; ``` This works and produces remarkable code with no generated-code overhead and little extra compile time.
Template Metaprogramming - I still don't get it :(
[ "", "c++", "templates", "metaprogramming", "" ]
I put "username" and "password" to a form of mine. The action starts up a handler.php. The user sees then only a white page (handler.page) if he does not reload his browser at handler.php. If he does, the handler puts him to back to index.php. I would like to put the user automatically back to the homepage after being at handler.php where he gets a login -cookie. I have the following in my *handler.php* ``` $email = $_POST['email']; $username = $_POST['username']; $passhash_md5 = $_POST['passhash_md5']; // COOKIE setting /* $cookie may look like this variables $username = "username"$ $passhash_md5 = "password"$ $email ="email"$ $_SERVER['REMOTE_ADDR']=11.44.23.94$ before md5:$ "usernamepasshash_md5email11.44.23.94"$ after md5:$ "a08d367f31feb0eb6fb51123b4cd3cb7"$ */ $login_cookie = md5( $username . $password . $email . $_SERVER['REMOTE_ADDR'] ); setcookie ("login", $login_cookie); if (isset($_COOKIE['login']) ) { $sql2 = "SELECT * from users"; $raw_user_list = pg_query($dbconn, $sql2); $user_list = pg_fetch_all($raw_user_list); // to process each user in the user-list that has a password foreach ($user_list as $user => $passhash_md5) { //match the user list with the cookie$ if ( $login_cookie == $_COOKIE['login'] ) { header("Location: index.php"); die("logged in"); } } header("Location: index.php"); die("wrong username/password"); } ?> ``` I have a form which uses the POST -method and the action is handler.php. **My form** ``` <form method="post" action="handler.php"> <p>Username: <input name="username" type="text" size="40" /> </p> <p>Email: <input name="email" type="text" size="230" /> </p> <p>Password: <input name="password" type="password" size="230" /> </p> <input type="submit" value="OK" /> </form> ``` The handler page is not being called by AJAX. 
I run the handler page unsuccessfully with this HEAD: ``` <head> <meta http-equiv="refresh" content="5; URL=index.php"> </head> ``` However, I cannot include the HEAD, because PHP does not allow any output before you use header() commands. **How can you send the user automatically to index.php if the login is successful?**
This should be your basic setup. First, the user comes to a login page and puts in their username/password. We'll call this login.php. It then sends the stuff to handler.php HTML ``` <form method="POST" action="handler.php"> <input type="text" name="login[user]"> <input type="password" name="login[password]"> </form> ``` Then, the handler script receives the POST data, processes it, and if the password hashes match, sets a cookie and redirects back to the index page. Login Script ``` // Check for a Login Form if (isset($_POST['login']) ) { // Get the Data $sql2 = "SELECT * from users"; $raw_user_list = pg_query($dbconn, $sql2); $user_list = pg_fetch_all($raw_user_list); // Go through each User foreach ($user_list as $user => $passhash_md5) { // Check if the passwords match if ( $passhash_md5 == md5($_POST['login']['password'] )) { // YOU NEED TO CREATE A COOKIE HERE header("Location: index.php"); die("logged in"); } } header("Location: index.php"); die("wrong username/password"); } ``` Then, on every page you want to check for login, you redirect someone away if they don't have a login cookie set. You could expand this to check for a correct login cookie. Every Page ``` // Check for a Cookie if(!$_COOKIE['login']) { header('Location: login.php'); die("User Required"); } ``` I'm not too certain what you were trying to do there, but this is the basic setup for how to create a basic login form. --- If you are trying to check if the combination passed into the form is the same as the cookie, try this: ``` // Set the Variables $email = $_POST['email']; $username = $_POST['username']; $passhash_md5 = $_POST['passhash_md5']; // COOKIE setting /* $cookie may look like this variables $username = "username"$ $passhash_md5 = "password"$ $email ="email"$ $_SERVER['REMOTE_ADDR']=11.44.23.94$ before md5:$ "usernamepasshash_md5email11.44.23.94"$ after md5:$ "a08d367f31feb0eb6fb51123b4cd3cb7"$ */ // Set what the cookie should look like $login_cookie = md5( $username . $password . 
$email . $_SERVER['REMOTE_ADDR'] ); // Check For the Cookie if (isset($_COOKIE['login']) ) { // Check if the Login Form is the same as the cookie if ( $login_cookie == $_COOKIE['login'] ) { header("Location: index.php"); die("logged in"); } header("Location: index.php"); die("wrong username/password"); } ``` I took out the database part because you aren't using the database part in any of the code, so it doesn't matter. It looks like you aren't trying to log someone in, but rather check that the cookie they have set to their machine contains the same string that they passed in on the form. --- Ok, final edition, hopefully ``` // Set the Variables $email = $_POST['email']; $username = $_POST['username']; $password = $_POST['password']; // COOKIE setting /* $cookie may look like this variables $username = "username"$ $passhash_md5 = "password"$ $email ="email"$ $_SERVER['REMOTE_ADDR']=11.44.23.94$ before md5:$ "usernamepasshash_md5email11.44.23.94"$ after md5:$ "a08d367f31feb0eb6fb51123b4cd3cb7"$ */ // Set what the cookie should look like $login_cookie = md5( $username . $password . $email . $_SERVER['REMOTE_ADDR'] ); // Check For the Cookie if (isset($_COOKIE['login']) ) { // Check if the Login Form is the same as the cookie if ( $login_cookie == $_COOKIE['login'] ) { header("Location: index.php"); die("logged in"); } header("Location: index.php"); die("wrong username/password"); } // If no cookie, try logging them in else { $sql2 = sprintf("SELECT * from users WHERE passhash_md5='%s'", pg_escape_string($login_cookie)); $raw_user_list = pg_query($dbconn, $sql2); if ($user = pg_fetch_row($raw_user_list)) { setcookie('login', $login_cookie); header("Location: index.php"); die("logged in"); } else { header("Location: index.php"); die("wrong username/password"); } } ``` Sprintf and Where clause provided by [Rezzif](https://stackoverflow.com/users/116407/rezzif)
As a side note, are you really going through your entire users table to see if the person has a valid login? You should really be using a WHERE clause! ``` $sql2 = sprintf("SELECT * from users WHERE UserName = '%s' AND UserPass = '%s'", pg_escape_string($_COOKIE['login']), pg_escape_string($passhash_md5)); $raw_user_list = pg_query($dbconn, $sql2); if ($user = pg_fetch_row($raw_user_list)) { //Login valid } else { //Login invalid } ``` Not familiar with pg, but I hope that helps.
To redirect an user back to index.php after a handler
[ "", "php", "cookies", "authentication", "redirect", "" ]
I need to read in data from excel files in my c# winforms app. Any recommendations on good components for this? I've used syncfusion some years ago and that seemed to do the trick. There'll be a bunch of header lines I need to skip (so a straight ADO approach won't work easily) and then a table of data with standard columns but variable number of rows. I'll be pumping the data into SQL Server db once it's read, but probably need to do validation etc on it before that. thanks!
Ultimately we opted for [Syncfusion's XLSIO](http://www.syncfusion.com/products/reporting-edition/xlsio) which works well. Thanks for the other suggestions too.
[SpreadsheetGear for .NET](http://www.spreadsheetgear.com/) will do it. You can see ASP.NET samples with C# and VB source [here](http://www.spreadsheetgear.com/support/samples/) and download the free trial [here](https://www.spreadsheetgear.com/downloads/register.aspx) if you want to try it out. Disclaimer: I own SpreadsheetGear LLC
Suggestions for reading data from excel in .net c#
[ "", "c#", ".net", "excel", "import-from-excel", "" ]
I have a string of the format: `string:num` where num is any number but string is a known string that I need to match on. I'd like to have this in an if statement as: ``` if( it matches 'string:' followed by a number) { //do something } ```
You want ... ``` if (stringYouHave.match(/^string:([0-9]+)$/)) { // do something } ``` This includes: 1. `^` beginning of the string 2. `string:` the literal "string:" you mentioned 3. `(.....)` This subexpression, which you can refer to later if you need to know *which* number is in the string (though in this particular case, you could also just replace `'string:'` with `''`) 4. `[0-9]` *a* character between `0` and `9` (i.e., a digit) 5. `+` Must have at least one "of those" (i.e., digits mentioned above), but can have any number 6. `$` end of the string
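For comparison, the same pattern works essentially unchanged outside JavaScript. Here is a hypothetical C++ `std::regex` version (the function name is made up); note that `std::regex_match` already requires the whole string to match, so the `^` and `$` anchors are redundant there but are kept to mirror the JS pattern:

```cpp
#include <regex>
#include <string>

// True when s is exactly "string:" followed by one or more digits.
bool matches_string_num(const std::string& s) {
    static const std::regex re("^string:([0-9]+)$");
    return std::regex_match(s, re);
}
```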
``` if (it.match(/^string:\d+$/)) { ... } ```
Javascript regular expression match on string followed by number?
[ "", "javascript", "jquery", "regex", "" ]
Different compilers seem to have different ideas about TR1. G++ only seems to accept includes of the type: ``` #include <tr1/unordered_map> #include <tr1/memory> ... ``` While Microsoft's compiler only accepts: ``` #include <unordered_map> #include <memory> ... ``` As far as I understand TR1, the Microsoft way is the correct one. Is there a way to get G++ to accept the second version? How does one in general handle TR1 in a portable way?
Install boost on your machine. Add the following directory to your search path. <Boost Install Directory>/boost/tr1/tr1 see here [boost tr1](http://www.boost.org/doc/libs/1_39_0/doc/html/boost_tr1/usage.html) for details Now when you include <memory> you get the tr1 version of memory that has std::tr1::shared\_ptr and then it includes the platform specific version of <memory> to get all the normal goodies.
``` #ifdef _WIN32 #include <unordered_map> #include <memory> #else #include <tr1/unordered_map> #include <tr1/memory> #endif ```
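The `#ifdef` above can be folded into a single shim header so the rest of the code spells one name. This is only a sketch: the `__cplusplus` feature check is an assumption (a C++11 compiler ships the plain `<unordered_map>` header), and the alias name `hash_map` is purely illustrative.

```cpp
// Compatibility shim: prefer the standard header, fall back to GCC's
// tr1/ layout on pre-C++11 toolchains.
#if __cplusplus >= 201103L
#  include <unordered_map>
   template <class K, class V> using hash_map = std::unordered_map<K, V>;
#else
#  include <tr1/unordered_map>
#  define hash_map std::tr1::unordered_map
#endif

// Smoke-test helper: insert one entry and read it back through the alias.
inline int hash_map_smoke() {
    hash_map<int, int> m;
    m[1] = 2;
    return m[1];
}
```

Client code then includes this one header and uses `hash_map<K, V>` everywhere, keeping the platform difference in a single place.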
How does one include TR1?
[ "", "c++", "include", "c++11", "portability", "tr1", "" ]
I want to create an archive page template for Wordpress that will look like this: August 2009 * Post 4 * Post 3 * Post 2 * Post 1 July 2009 * Post 2 * Post 1 So, basically, I want all the posts from the blog, ordered descending by date and grouped by month. Can someone provide me the PHP code for this? Thanks! PS: Wordpress version is 2.8.2
This is a function I created a while back. It basically does what you want to do, but it's not a template. Maybe you can adapt it. ``` <?php /** * Displays a condensed list of the posts grouped by month/year. * * @param $order The order of the posts. Either 'DESC' or 'ASC', case sensitive. * @param $date_prefix Whether to prefix the posts with the month/date. * @param $display Whether to display the results or return it as a String. */ function condensed_post_list($order='DESC', $date_prefix=true, $display=true){ global $wpdb; if( !in_array($order, array('DESC','ASC' ) ) ) $order = 'DESC'; $query = "SELECT ID, post_title, post_date FROM $wpdb->posts ". "WHERE post_type='post' AND post_status = 'publish' ". "ORDER BY post_date $order"; $results = $wpdb->get_results( $query ); ob_start(); $current_month = ''; foreach( $results as $result ) { if( $current_month != mysql2date('F Y', $result->post_date)) { if( $current_month ) echo '</ul>'; $current_month = mysql2date('F Y', $result->post_date ); echo '<h2>'.$current_month.'</h2>'; echo '<ul>'; } echo '<li>'; echo ($date_prefix ? mysql2date('M j: ', $result->post_date) : ''); echo '<a href="'.get_permalink($result->ID).'">'; echo $result->post_title.'</a></li>'; } if( $current_month ) echo '</ul>'; if( $display ) { ob_end_flush(); } else { return ob_get_clean(); } } ?> ```
I used the above function, but replaced the SQL query with: ``` $results = query_posts('post_type=post&post_status=publish&cat=3'); ``` This allowed me to use the excellent function @scompt.com wrote, but limit it to a single blog category.
Archive Template in Wordpress
[ "", "php", "wordpress", "templates", "" ]
I'm working on some C++ code for an embedded system. The I/O interface the code uses requires that the size of each message (in bytes) is a power of two. Right now, the code does something like this (in several places): ``` #pragma pack(1) struct Message { struct internal_ { unsigned long member1; unsigned long member2; unsigned long member3; /* more members */ } internal; char pad[64-sizeof(internal_)]; }; #pragma pack() ``` I'm trying to compile the code on a 64-bit Fedora for the first time, where `long` is 64-bits. In this case, `sizeof(internal_)` is greater than 64, the array size expression underflows, and the compiler complains that the array is too large. Ideally, I'd like to be able to write a macro that will take the size of the structure and evaluate at compile time the required size of the padding array in order to round the size of the structure out to a power of two. I've looked at the [Bit Twiddling Hacks](http://graphics.stanford.edu/~seander/bithacks.html) page, but I don't know if any of the techniques there can really be implemented in a macro to be evaluated at compile time. Any other solutions to this problem? Or should I perpetuate the problem and just change the magical 64 to a magical 128?
Use a template metaprogram. (Edited in response to comment). ``` #include <iostream> #include <ostream> using namespace std; template <int N> struct P { enum { val = P<N/2>::val * 2 }; }; template <> struct P<0> { enum { val = 1 }; }; template <class T> struct PadSize { enum { val = P<sizeof (T) - 1>::val - sizeof (T) }; }; template <class T, int N> struct PossiblyPadded { T payload; char pad[N]; }; template <class T> struct PossiblyPadded<T, 0> { T payload; }; template <class T> struct Holder : public PossiblyPadded<T, PadSize<T>::val> { }; int main() { typedef char Arr[6]; Holder<Arr> holder; cout << sizeof holder.payload << endl; // Next line fails to compile if sizeof (Arr) is a power of 2 // but holder.payload always exists cout << sizeof holder.pad << endl; } ```
Why not use a union? ``` union Message { struct internal_ { unsigned long member1; /* more members */ } internal; char pad[64]; }; ``` or better yet use an anonymous struct ``` union Message { struct { unsigned long member1; /* more members */ }; char pad[64]; }; ``` So you can access members like this: `msg.member1;` (given a `Message msg;`). Edit: obviously this doesn't solve your greater-than-64 problem, but it provides a cleaner way of padding.
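Both answers still need the "round up to a power of two" computation itself. For reference, it can also be written as a small compile-time function; this sketch uses C++11 `constexpr`, which postdates the question, but shows the same idea without template specializations:

```cpp
#include <cstddef>

// Round n up to the next power of two: double p until it reaches or
// passes n. Evaluates at compile time when given a constant argument.
constexpr std::size_t next_pow2(std::size_t n, std::size_t p = 1) {
    return p >= n ? p : next_pow2(n, p * 2);
}

// It can then size a pad array directly, e.g.:
//   char pad[next_pow2(sizeof(internal_)) - sizeof(internal_)];
// (the zero-pad case still needs a specialization, as in the template
// answer, since zero-length arrays are not legal C++)
```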
Pad a C++ structure to a power of two
[ "", "c++", "struct", "bit-manipulation", "padding", "" ]
I'm using what seems to be a common trick for creating a join view: ``` // a Customer has many Orders; show them together in one view: function(doc) { if (doc.Type == "customer") { emit([doc._id, 0], doc); } else if (doc.Type == "order") { emit([doc.customer_id, 1], doc); } } ``` I know I can use the following query to get a single `customer` and all related `Order`s: ``` ?startkey=["some_customer_id"]&endkey=["some_customer_id", 2] ``` But now I've tied my query *very* closely to my view code. Is there a value I can put where I put my "`2`" to more clearly say, "I want *everything* tied to this Customer"? I think I've seen ``` ?startkey=["some_customer_id"]&endkey=["some_customer_id", {}] ``` But I'm not sure that `{}` is *certain* to sort *after* everything else. Credit to [cmlenz](http://www.cmlenz.net/archives/2007/10/couchdb-joins) for the join method. Further clarification from the [CouchDB wiki page on collation](http://wiki.apache.org/couchdb/View_collation): > The query `startkey=["foo"]&endkey=["foo",{}]` will match most array keys with "foo" in the first element, such as `["foo","bar"]` and `["foo",["bar","baz"]]`. However it will not match `["foo",{"an":"object"}]` So `{}` is *late* in the sort order, but definitely not *last*.
Rather than trying to find the *greatest* possible value for the *second* element in your array key, I would suggest instead trying to find the *least* possible value greater than the *first*: `?startkey=["some_customer_id"]&endkey=["some_customer_id\u0000"]&inclusive_end=false`.
I have two thoughts. **Use timestamps** Instead of using simple 0 and 1 for their collation behavior, use a timestamp that the record was created (assuming they are part of the records) a la `[doc._id, doc.created_at]`. Then you could query your view with a startkey of some sufficiently early date (epoch would probably work), and an endkey of "now", eg `date +%s`. That key range should always include everything, and it has the added benefit of collating by date, which is probably what you want anyways. **or, just don't worry about it** You could just index by the customer\_id and nothing more. This would have the nice advantage of being able to query using just `key=<customer_id>`. Sure, the records won't be collated when they come back, but is that an issue for your application? Unless you are expecting tons of records back, it would likely be trivial to simply pluck the customer record out of the list once you have the data retrieved by your application. For example in ruby: `customer_records = records.delete_if { |record| record.type == "customer" }` Anyways, the timestamps is probably the more attractive answer for your case.
What is the maximum value for a compound CouchDB key?
[ "", "javascript", "couchdb", "mapreduce", "" ]
I have a script that currently has a step where I trigger a voice broadcast after a customer signs up. It's a nusoap call to callfire. The latency there is rather high, and it's added about 2 seconds to my sub-second sign up process. As a result, I have people hitting the sign up button more than once. Is there a way to tell the app to NOT wait for the results and just move on? It's possible to cheat by putting all of the nusoap code in a separate file, then open a socket to that file, but I'm looking for a cleaner way. Or is there a way to fire off a function and not wait for the results? I don't think there is.
You can move the code to a CLI script. Run the CLI script from the web server PHP thread. The CLI process then forks, and the parent exits. The web thread can continue and the CLI child process can sign up the user. In case of failure with the SOAP call, I suggest you store the data somewhere and remove it or mark it as finished once it is successful. You can have a cron job routinely check to see if any calls failed, retry them, and then notify someone (admin, user).
If you have some action which just has to be "soon" rather than "right away", then add it to a queue of some sort (e.g. a database table). Next, have a cron job come along every minute or so and perform tasks that are in this queue.
Fire off nusoap from php, don't wait for results
[ "", "php", "nusoap", "" ]
I need a CircularBuffer IDictionary. Can anyone point me to a good open source implementation? So an IDictionary that has a maximum capacity, say configured to 100 items, where when item 101 is added the original first item is popped/removed from the dictionary, thus ensuring the item count never exceeds 100. Thanks
To keep O(1) insertion (with removal of the oldest item past 100) and O(1) lookups, you'll need a class that implements IDictionary *and* keeps an internal ordered list. If memory is more a concern, a BST implementation like `SortedList` could be more appropriate. Anyway, your class will contain both a `T[]` and a `Dictionary<T,K>` (or `SortedList<T,K>`). Do your own circular buffer indexing (easy), and keep both collections current in the add, remove, etc. methods. You'll have: * O(1) enqueue (to back) * O(n) insertion that violates order of adding (since you have to keep the array up to date); you'll likely never need this anyway * O(1) dequeue (from front) * O(1) or O(log n) keyed lookup Make it generic and implement `IDictionary<T,K>` and `IDictionary` since there's no reason not to and you'll get an edge in performance. **One major consideration**: what do you do about duplicate keys? I'm assuming you can't actually keep the duplicates, so: * Throw an exception (if there are never duplicate keys, so it's simply an error to insert something twice) * Move to back: check the `Count` of the dictionary, then insert the key using the `this[key]` indexer. if the size increases, then check if the list already has the maximum capacity, remove the front item from the list and dictionary and add the new item to the back. If the dictionary did not increase in size, move the item from its existing place in the list to the back of the list. * Overwrite without moving: The same as the previous item, but you don't have to mess with the list. Finally, note that the internal list keeps keys, not values. This is to ensure O(1) dequeue when the list capacity is exceeded.
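Translated out of C# terms, the design this answer describes (a hash map for O(1) keyed lookup plus an ordered structure for FIFO eviction) can be sketched as follows. The class and method names are illustrative only, and this sketch takes the "overwrite without moving" option for duplicate keys:

```cpp
#include <cstddef>
#include <deque>
#include <unordered_map>

// A keyed store with a fixed capacity: adding a new key beyond capacity
// evicts the oldest-inserted key first.
template <class K, class V>
class BoundedDict {
public:
    explicit BoundedDict(std::size_t cap) : cap_(cap) {}

    void insert(const K& key, const V& value) {
        if (map_.count(key) == 0) {          // new key: may need eviction
            if (order_.size() == cap_) {
                map_.erase(order_.front());  // drop the oldest entry
                order_.pop_front();
            }
            order_.push_back(key);
        }
        map_[key] = value;                   // overwrite keeps position
    }

    bool contains(const K& key) const { return map_.count(key) != 0; }
    std::size_t size() const { return map_.size(); }

private:
    std::size_t cap_;
    std::deque<K> order_;                    // insertion order, oldest first
    std::unordered_map<K, V> map_;
};

// Self-check: capacity 2, three inserts, the first key is evicted.
inline bool bounded_dict_smoke() {
    BoundedDict<int, int> d(2);
    d.insert(1, 10);
    d.insert(2, 20);
    d.insert(3, 30);
    return !d.contains(1) && d.contains(2) && d.contains(3) && d.size() == 2;
}
```

A `std::deque` rather than a hand-rolled circular array keeps the sketch short; for a fixed capacity known up front, the circular array the answer suggests avoids deque's allocation overhead.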
Found two after five minutes of googling: * Free to use, looks like a fully implemented circular buffer: <http://www.codeproject.com/KB/recipes/circularbuffer.aspx> Proof of freeness: <http://www.codeproject.com/KB/recipes/circularbuffer.aspx?msg=402807#xx402807xx> * No clear licencing, probably normal copyright, threadsafe, nonboundaried circular buffer: <http://blog.umut.tezduyar.com/2008/06/c-circular-buffer-not-thread-safe.html>
CircularBuffer IDictionary in C#?
[ "", "c#", "dictionary", "buffer", "circular-buffer", "" ]
I have a couple of functions I created a while ago for reading and writing std::strings to a FILE\* opened for reading in binary mode. They have worked fine before (and WriteString() still works) but ReadString() keeps giving me memory corruption errors at run-time. The strings are stored by writing their size as an unsigned int before the string data as char. ``` bool WriteString(std::string t_str, FILE* t_fp) { // Does the file stream exist and is it valid? If not, return false. if (t_fp == NULL) return false; // Create char pointer from string. char* text = const_cast<char*>(t_str.c_str()); // Find the length of the string. unsigned int size = t_str.size(); // Write the string's size to the file. fwrite(&size, sizeof(unsigned int), 1, t_fp); // Followed by the string itself. fwrite(text, 1, size, t_fp); // Everything worked, so return true. return true; } std::string ReadString(FILE* t_fp) { // Does the file stream exist and is it valid? If not, return false. if (t_fp == NULL) return false; // Create new string object to store the retrieved text and to return to the calling function. std::string str; // Create a char pointer for temporary storage. char* text = new char; // UInt for storing the string's size. unsigned int size; // Read the size of the string from the file and store it in size. fread(&size, sizeof(unsigned int), 1, t_fp); // Read [size] number of characters from the string and store them in text. fread(text, 1, size, t_fp); // Store the contents of text in str. str = text; // Resize str to match the size else we get extra cruft (line endings methinks). str.resize(size); // Finally, return the string to the calling function. return str; } ``` Can anyone see any problems with this code or have any alternative suggestions?
Biggest major problem that jumped out at me: ``` // Create a char pointer for temporary storage. char* text = new char; // ... // Read [size] number of characters from the string and store them in text. fread(text, 1, size, t_fp); ``` This creates text as a pointer to a **single** character, and then you try to read an arbitrary number of characters (potentially many more than one) into it. In order for this to work right, you would have to create text as an *array* of characters after you figured out what the size was, like this: ``` // UInt for storing the string's size. unsigned int size; // Read the size of the string from the file and store it in size. fread(&size, sizeof(unsigned int), 1, t_fp); // Create a char pointer for temporary storage. char* text = new char[size]; // Read [size] number of characters from the string and store them in text. fread(text, 1, size, t_fp); ``` Second, you don't free the memory that you allocated to text. You need to do that: ``` // Free the temporary storage delete[] text; ``` Finally, is there a good reason why you are choosing to use C file I/O in C++? Using C++-style iostreams would have alleviated all of this and made your code much, much shorter and more readable.
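Following this answer's closing suggestion to use iostreams, here is one way the pair of functions could look. It is a sketch, not the original code: it assumes a fixed 32-bit length prefix (rather than `unsigned int`, whose size varies) and hypothetical function names.

```cpp
#include <cstdint>
#include <istream>
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

// Write a length-prefixed string: 32-bit size, then the raw bytes.
void write_string(std::ostream& out, const std::string& s) {
    const std::uint32_t size = static_cast<std::uint32_t>(s.size());
    out.write(reinterpret_cast<const char*>(&size), sizeof size);
    out.write(s.data(), size);
}

// Read it back: the size first, then exactly that many bytes.
std::string read_string(std::istream& in) {
    std::uint32_t size = 0;
    in.read(reinterpret_cast<char*>(&size), sizeof size);
    std::vector<char> buf(size);
    if (size != 0)
        in.read(&buf[0], size);
    return std::string(buf.begin(), buf.end());
}

// Round-trip helper used for a quick self-check.
std::string round_trip(const std::string& s) {
    std::stringstream ss;
    write_string(ss, s);
    return read_string(ss);
}
```

The `std::vector<char>` buffer sidesteps both bugs from the question (single-char allocation and the leak), since its storage is sized from the prefix and freed automatically.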
The problem is: ``` char* text = new char; ``` you're allocating a single character. Do the allocation after you know `size`, and allocate all the `size` characters you need (e.g. with a `new char[size]`). (To avoid a leak, `delete[]` it later after copying it, of course).
Reading std::string from binary file
[ "c++" ]
Basically, we have a module that we offer to our users who want to include a feed from elsewhere on their pages. It works great, no sweat. The problem is that whenever users mistype the feed link, we have to manually remove the module from existence, because Zend Feed crashes and burns the entire page just like any fatal error. Normally, one would expect that a code block such as.. ``` try { // Test piece straight off the Zend tutorial $slashdotRss = Zend_Feed::import('http://rss.slashdot.org/Slashdot/slashdot'); } catch (Zend_Feed_Exception $e) { // feed import failed echo "Exception caught importing feed: {$e->getMessage()}\n"; exit; } ``` .. would BEHAVE if I were to enter 'httn://rss.grrllarrrlll.aarrg/Slashdot/slashdot' and say something along the lines of "404" or "What the shit". No. It dies. It crashes and dies. It crashes and burns and dies, completely ignoring all that happy try/catch methodology right there. So basically, do we have to write our own feed fetcher, or is there any simple remedy to Zend's slip? Added log: ``` exception 'Zend_Http_Client_Adapter_Exception' with message 'Unable to Connect to tcp://www.barglllrragglll:80. Error #10946: ' in /library/Zend/Http/Client/Adapter/Socket.php:148 #0 /library/Zend/Http/Client.php(827): Zend_Http_Client_Adapter_Socket->connect('www.barglllrragglll...', 80, false) #1 /library/Zend/Feed.php(284): Zend_Http_Client->request() ...... Trace etc .... ```
Just out of curiosity, did you try catching other kinds of exceptions? i.e., not only `Zend_Feed_Exception`? Maybe, if there is some kind of 404 error during the "fetching" phase, it throws another exception? (Because of relying on another component, like `Zend_Http_Client`?) Also, did you check your `error_reporting` level, to be sure errors would be reported? Maybe in some log file somewhere, if `display_errors` is `Off`? As a side note, and not really an answer to your question, but `Zend_Feed` has some drawbacks *(like returning different kinds of data depending on the feed's format -- RSS vs ATOM, for instance)*. Starting with Zend Framework 1.9 *(right now, it's only available as a preview or alpha version, so don't use it in production!)*, there will be a `Zend_Feed_Reader` component, which should be more useful when consuming both RSS and ATOM feeds. For more information, see * [Zend Framework 1.9.0 Preview Release Now Available](http://devzone.zend.com/article/4846-Zend-Framework-1.9.0-Preview-Release-Now-Available) * [Zend\_Feed\_Reader: Approved for Combat!](http://blog.astrumfutura.com/archives/371-Zend_Feed_Reader-Approved-for-Combat!.html) * [Zend\_Feed\_Reader Component Proposal](http://framework.zend.com/wiki/pages/viewpage.action?pageId=6324361) --- **Edit after you added the log** For `Zend_Feed`, there is no problem with the feed **itself**, so it doesn't throw a `Zend_Feed`-related exception. The problem you have here is another one, like a wrong URL: it fails while fetching the data, not while analysing it; that explains why the exception is not `Zend_Feed`-related, but `Zend_Http_Client`-related. 
You might want to add some other exception-handling code; something like this: ``` try { // Test piece straight off the Zend tutorial $slashdotRss = Zend_Feed::import('http://rss.slashdot.org/Slashdot/slashdot'); } catch (Zend_Feed_Exception $e) { // feed import failed echo "Exception caught importing feed: {$e->getMessage()}\n"; exit; } catch (Zend_Http_Client_Exception $e) { echo "There is something wrong with the URL you provided for the feed"; exit; } catch (Exception $e) { echo "There is something wrong, we don't know what..."; exit; } ``` This way: * If the feed is not valid, you can tell the user * If there is an HTTP-related problem, you can tell the user too * If there is another problem you didn't think about, it still doesn't crash
How about something along the lines of `$file = file_get_contents("http://rss.grrrrrrrl..."); $rss = Zend_Feed::importString($file);`? (Note that `file()` returns an array of lines, while `Zend_Feed::importString()` expects a single string, so `file_get_contents()` is the call you want.)
Why am I not getting the exception I expect in Zend Feed when a feed is misentered?
[ "php", "zend-framework", "zend-feed" ]
I'm writing a small Python app for distribution. I need to include simple XML validation (it's a debugging tool), but I want to avoid any dependencies on compiled C libraries such as lxml or pyxml as those will make the resulting app much harder to distribute. I can't find anything that seems to fit the bill - for DTDs, Relax NG or XML Schema. Any suggestions?
Do you mean something like [MiniXsv](http://www.familieleuthe.de/MiniXsv.html)? I have never used it, but from the website, we can read that > minixsv is a lightweight XML schema > validator package written in pure > Python (at least Python 2.4 is > required). so, it should work for you. I believe that [ElementTree](http://effbot.org/zone/element-index.htm) could also be used for that goal, but I am not 100% sure.
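As a sketch of the ElementTree route mentioned above: the standard library's `xml.etree.ElementTree` only checks well-formedness, not schema validity, but if that turns out to be enough for a debugging tool, no third-party dependency is needed at all (the function name here is my own, for illustration):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if xml_text parses as well-formed XML (no schema check)."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False
```

This obviously cannot replace DTD / Relax NG / XML Schema validation, but it catches the most common class of broken documents with zero dependencies.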
Why don't you try invoking an online XML validator and parsing the results? I couldn't find any free REST or SOAP based services but it would be easy enough to use a normal HTML form based one such as [this one](http://www.stg.brown.edu/service/xmlvalid/) or [this one](http://www.validome.org/xml/). You just need to construct the correct request and parse the results ([httplib](http://docs.python.org/library/httplib.html) may be of help here if you don't want to use a third party library such as [mechanize](http://wwwsearch.sourceforge.net/mechanize/) to ease the pain).
Validating XML in Python without non-python dependencies
[ "python", "xml", "validation", "schema", "dtd" ]
I'm performing a large number of INSERTS to a SQLite database. I'm using just one thread. I batch the writes to improve performance and have a bit of security in case of a crash. Basically I cache up a bunch of data in memory and then when I deem appropriate, I loop over all of that data and perform the INSERTS. The code for this is shown below: ``` public void Commit() { using (SQLiteConnection conn = new SQLiteConnection(this.connString)) { conn.Open(); using (SQLiteTransaction trans = conn.BeginTransaction()) { using (SQLiteCommand command = conn.CreateCommand()) { command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)"; command.Parameters.Add(this.col1Param); command.Parameters.Add(this.col2Param); foreach (Data o in this.dataTemp) { this.col1Param.Value = o.Col1Prop; this. col2Param.Value = o.Col2Prop; command.ExecuteNonQuery(); } } this.TryHandleCommit(trans); } conn.Close(); } } ``` I now employ the following gimmick to get the thing to eventually work: ``` private void TryHandleCommit(SQLiteTransaction trans) { try { trans.Commit(); } catch (Exception e) { Console.WriteLine("Trying again..."); this.TryHandleCommit(trans); } } ``` I create my DB like so: ``` public DataBase(String path) { //build connection string SQLiteConnectionStringBuilder connString = new SQLiteConnectionStringBuilder(); connString.DataSource = path; connString.Version = 3; connString.DefaultTimeout = 5; connString.JournalMode = SQLiteJournalModeEnum.Persist; connString.UseUTF16Encoding = true; using (connection = new SQLiteConnection(connString.ToString())) { //check for existence of db FileInfo f = new FileInfo(path); if (!f.Exists) //build new blank db { SQLiteConnection.CreateFile(path); connection.Open(); using (SQLiteTransaction trans = connection.BeginTransaction()) { using (SQLiteCommand command = connection.CreateCommand()) { command.CommandText = DataBase.CREATE_MATCHES; command.ExecuteNonQuery(); command.CommandText = DataBase.CREATE_STRING_DATA; 
command.ExecuteNonQuery(); //TODO add logging } trans.Commit(); } connection.Close(); } } } ``` I then export the connection string and use it to obtain new connections in different parts of the program. At seemingly random intervals, though at far too great a rate to ignore or otherwise work around this problem, I get unhandled SQLiteException: Database file is locked. This occurs when I attempt to commit the transaction. No errors seem to occur prior to then. This does not ***always*** happen. Sometimes the whole thing runs without a hitch. * No reads are being performed on these files before the commits finish. * I have the very latest SQLite binary. * I'm compiling for .NET 2.0. * I'm using VS 2008. * The db is a local file. * All of this activity is encapsulated within one thread / process. * Virus protection is off (though I think that was only relevant if you were connecting over a network?). * As per Scotsman's post I have implemented the following changes: * Journal Mode set to Persist * DB files stored in C:\Docs + Settings\ApplicationData via `System.Windows.Forms.Application.AppData` windows call * No inner exception * Witnessed on two distinct machines (albeit very similar hardware and software) * Have been running Process Monitor - no extraneous processes are attaching themselves to the DB files - the problem is definitely in my code... ***Does anyone have any idea what's going on here?*** I know I just dropped a whole mess of code, but I've been trying to figure this out for way too long. My thanks to anyone who makes it to the end of this question! brian UPDATES: Thanks for the suggestions so far! I've implemented many of the suggested changes. I feel that we are getting closer to the answer...however... The code above technically works; however, it is non-deterministic! It is not guaranteed to do anything aside from spin in neutral forever. In practice it seems to work somewhere between the 1st and 10th iteration. 
If I batch my commits at a reasonable interval, damage will be mitigated, but I really do not want to leave things in this state... More suggestions welcome!
Run [Sysinternals Process Monitor](http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) and filter on the filename while running your program, to rule out whether any other process does anything to it and to see what exactly your program is doing to the file. Long shot, but might give a clue.
It looks like you failed to link the command with the transaction you've created. Instead of: ``` using (SQLiteCommand command = conn.CreateCommand()) ``` You should use: ``` using (SQLiteCommand command = new SQLiteCommand("<INSERT statement here>", conn, trans)) ``` Or you can set its Transaction property after its construction. While we are at it - your handling of failures is incorrect: The command's ExecuteNonQuery method can also fail and you are not really protected. You should change the code to something like: ``` public void Commit() { using (SQLiteConnection conn = new SQLiteConnection(this.connString)) { conn.Open(); SQLiteTransaction trans = conn.BeginTransaction(); try { using (SQLiteCommand command = conn.CreateCommand()) { command.Transaction = trans; // Now the command is linked to the transaction and don't try to create a new one (which is probably why your database gets locked) command.CommandText = "INSERT OR IGNORE INTO [MY_TABLE] (col1, col2) VALUES (?,?)"; command.Parameters.Add(this.col1Param); command.Parameters.Add(this.col2Param); foreach (Data o in this.dataTemp) { this.col1Param.Value = o.Col1Prop; this. col2Param.Value = o.Col2Prop; command.ExecuteNonQuery(); } } trans.Commit(); } catch (SQLiteException ex) { // You need to rollback in case something wrong happened in command.ExecuteNonQuery() ... trans.Rollback(); throw; } } } ``` Another thing is that you don't need to cache anything in memory. You can depend on SQLite journaling mechanism for storing incomplete transaction state.
Database file is inexplicably locked during SQLite commit
[ "c#", "sqlite" ]
Is there some clean static method to just dump (append) a string to a file?
No, the closest I am aware of is: ``` FileWriter writer = new FileWriter(fname, true); writer.append(yourString); writer.close(); ``` It's not clean or static, but neither is it the most painful code. :)
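If you want it reusable, the snippet above is easy to wrap in a small static helper of your own (the class and method names below are invented for illustration; this is not a standard API):

```java
import java.io.FileWriter;
import java.io.IOException;

class FileAppender {
    // Opens the file in append mode, writes the text, and always closes the writer.
    static void appendAllText(String path, String text) throws IOException {
        FileWriter writer = new FileWriter(path, true); // true = append
        try {
            writer.write(text);
        } finally {
            writer.close();
        }
    }
}
```

Calling `FileAppender.appendAllText(path, s)` twice appends both strings, which is the closest you get to .NET's one-liner.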
Yes, see the [documentation for FileWriter](http://java.sun.com/j2se/1.4.2/docs/api/java/io/FileWriter.html#FileWriter%28java.io.File,%20boolean%29). Set the second argument to true to append to a file.
Is there a File.AppendAllText() in Java?
[ "java", "file" ]
I have a class hierarchy where I know that a given class (B) will always be derived into a second one (D). In B's constructor, is it safe to statically cast the `this` pointer into a D\* if I'm sure that nobody will ever try to use it before the entire construction is finished? In my case, I want to pass a reference to the object to yet another class (A). ``` struct A { D & d_; A(D & d) : d_(d) {} }; struct D; //forward declaration struct B { A a; B() : a(static_cast<D&>(*this)) {} }; struct D : public B {}; ``` Is this code safe?
[@AProgrammer's answer](https://stackoverflow.com/questions/1193134/is-downcasting-this-during-construction-safe/1193245#1193245) made me realized that the `static_cast` could be easily avoided by passing the `this` pointer from the derived class to the base class. Consequently, the question boils down to the validity of the `this` pointer into the member-initializer-list. I found the following note in the C++ Standard [12.6.2.7]: > [*Note:* because the *mem-initializer* are evaluated in the scope of the constructor, the `this` pointer can be used in the *expression-list* of a *mem-initializer* to refer to the object being initialized. ] Therefore, using `this` in the member-initializer-list is perfectly valid, so I think the code presented is safe (as long as no members of D are accessed).
No, it is not. Constructors for D's data members haven't run yet. Since D's members aren't constructed, D isn't fully constructed yet, so technically, a reference to D should be invalid. I expect that to be no problem on most implementations, but still. I'd like to suggest a better mechanism, but I guess "better" depends a lot on actual details.
Is downcasting this during construction safe?
[ "c++", "inheritance", "static-cast" ]
PHP object overloading is explained [here](http://ca.php.net/manual/en/language.oop5.overloading.php). Basically it allows you to define some custom actions when an inaccessible object property or method is accessed. What are some practical uses for this feature?
Usually, those methods are useful when you are communicating with a 3rd party API or when the method/member structure is unclear. Let's say you are writing a generic XML-RPC wrapper. Since you don't know the methods available to you before you download the WSDL file, it makes sense to use overloading. Then, instead of writing the following: ``` $xmlrpc->call_method('DoSomething', array($arg1, $arg2)); ``` You can use: ``` $xmlrpc->DoSomething($arg1, $arg2); ``` which is a more natural syntax. --- You can also use member overloading in the same way as method overloading for variable objects. Just one thing you want to watch for: limit its use only to variable-structure objects or use it only for syntactical shortcuts to getters and setters. It makes sense to keep getters and setters in your class to separate business logic into multiple methods, but there is nothing wrong in using it as a shortcut: ``` class ShortcutDemo { function &__get($name) { // Usually you want to make sure the method // exists using method_exists, but for sake // of simplicity of this demo, I will omit // that logic. (A dynamic call is used here // because call_user_method is deprecated.) $getter = 'get' . $name; return $this->$getter(); } function __set($name, &$value) { $setter = 'set' . $name; return $this->$setter($value); } private $_Name; function &getName() { return $this->_Name; } function setName(&$value) { $this->_Name = $value; } } ``` That way you can continue using your getters and setters to validate and set your data, and still use the syntactic shortcuts as such: ``` $shortcut->Name = 'Hello'; ```
Another use that Andrew didn't mention (or hadn't mentioned at the time of writing) is getting rid of getters and setters. Instead of having to declare each setter and getter like this: ``` $obj->setName("Chacha"); $obj->setRep(10000000000000000000000); ``` You can instead just do ``` $obj->Name = "chacha"; $obj->Rep = 100000000000000000; ``` The second method is more natural. Magic methods reinforce the object-oriented idea that how you implement a job should not matter to the outside world. Through magic methods, you can store your variables however you want, and simply let other classes set them in a natural way. Example: I could store all my user's account preferences in a single array, which would make it really easy to iterate through and push everything up to the session. If I didn't use a magic method for this, I would either have to write a bunch of setters and getters, which means writing more code, or allow direct access to the array, which reveals the implementation, so I can't go and change it later. Instead, using magic methods, I just have callers set the variable regularly, and I deal with it internally.
What advantage is provided by object overloading in PHP?
[ "php", "object" ]
According to my understanding, Tomcat is allowed to raise as many copies of a servlet as it wishes in order to service requests. This forces my servlets to have no heavyweight state, and instead store all state in the (singleton) servlet context. Is it possible to configure Tomcat to treat my servlets as singletons, and always raise exactly one servlet instance regardless of load?
According to Servlet Specification, (v2.4, section 2.2 "Number of Instances"): > **SRV.2.2 Number of Instances** > > For a servlet not hosted in a distributed environment (the default), the **servlet > container must use only one instance per servlet declaration.** However, for a servlet > implementing the `SingleThreadModel` interface, the servlet container may > instantiate multiple instances to handle a heavy request load and serialize requests > to a particular instance. Answer to your question is simple: don't implement `SingleThreadModel`, and don't declare your servlet multiple times in `web.xml` descriptor file.
The other main reason why you don't want to store state in your Servlet is that it introduces synchronization issues, serialization issues if you want to run in a clustered environment, etc. All of these are simple to avoid if you don't store state internally in the servlet, but rather where it belongs in the Session (or Request) objects. So really ... why do you want to store state? It's a "recommended practice" for a reason.
Configuring Tomcat to only raise one Servlet per application
[ "java", "tomcat", "servlets" ]
I'm looking at some codes which makes heavy uses of templates. It compiles fine on GCC, but not on VS (tested on 2003 - 2010 beta 1), where it fails during syntax analysis. Unfortunately I don't know enough of the code structure to be able reduce the problem and reproduce it in only a few lines, so I can only guess at the cause. I'm hoping someone here can point me in the right direction. We have ``` template< class UInt, typename IntT, bool is_signed = std::numeric_limits<IntT>::is_signed > struct uii_ops_impl; // .... template<class UInt> struct uii_ops_impl< UInt, typename make_signed<typename UInt::digit_type>::type, true > { typedef UInt unbounded_int_type; typedef typename make_signed< typename unbounded_int_type::digit_type >::type integral_type; // ... static void add(unbounded_int_type& lhs, integral_type rhs); // ... }; template<class UInt> void uii_ops_impl< UInt, typename make_signed<typename UInt::digit_type>::type, true >::add(unbounded_int_type& lhs, integral_type rhs) { // .... } ``` When compiled on VS, the first error message (among many) it returns is > : error C2065: '`unbounded_int_type`' : undeclared identifier I mean, *point at the typedef* huh? :-S **EDIT:** It seems there's something to do with ``` typename make_signed<typename UInt::digit_type>::type ``` being used as a template parameter. Throughout the rest of the codes, similar typedefs being used in the member function parameter compiles fine. The only difference I can see so far is that none of the other cases have the above line as a template parameter. `make_signed` is from Boost.TypeTraits. **EDIT:** Okay, maybe that's not it, because the exact same thing is done in another file where it compiled fine. Hmm... **Bounty EDIT:** Okay, I think it's obvious at this point the problem is not actually where the compiler is complaining about. Only the two member functions definition at that particular point fail. 
It turns out that explicitly qualifying the parameter still **doesn't** compile. The only immediate solution is to define the function inline. That passes syntax analysis. However, when trying to instantiate the template, VS now fails because `std::allocator<void>` doesn't have a `size_type` member. It turns out VS has a specialization of `std::allocator<T>` for T=void that does not declare a `size_type`. I thought `size_type` was a required member of all allocators? So the question now is, what could possibly foul up VS so much during syntax analysis that it complains about completely unrelated non-problems as errors, and how do you debug such code? p.s. For those that have too much time to spare, the code I'm trying to make work in VS is Kevin Sopp's [mp\_math](http://svn.boost.org/svn/boost/sandbox/mp_math/) in Boost's sandbox, which is based on [libtommath](http://math.libtomcrypt.com/).
I think this can be caused by a few circumstances * `unbounded_int_type` is a `non-dependent` type (defined at `14.6.2.1`) * Its declaration appears in the class template. Because it's non-dependent, its name has to be resolved to a declaration at the time the member function is defined. I suspect that Visual C++ is not able to do this lookup, and errors out instead. As someone else mentions, you can explicitly qualify the type-names in the member function definition. The types are then dependent, and this will trigger the compiler's mechanism to delay name lookup until instantiation. ``` template<class UInt> void uii_ops_impl< UInt, typename make_signed<typename UInt::digit_type>::type, true >::add(typename /* repeat-that-beast uii_ops...true> here */ ::unbounded_int_type& lhs, typename /* and here too */::integral_type rhs) { // .... } ```
Here's something funny - this guy * [Why is the use of typedef in this template necessary?](https://stackoverflow.com/questions/1215055/why-is-the-use-of-typedef-in-this-template-necessary) ran into a bug with MSVC that's very similar to what you're seeing - except that *using* a typedef worked around the problem for him. I still don't know what to make of the problems he ran into (or that you're running into). As you say, the small snippet you posted doesn't repro the error (given a simple `make_signed<>` template that lets `make_signed<>::type` compilable).
Template's member typedef use in parameter undeclared identifier in VS but not GCC
[ "c++", "visual-studio", "templates", "typedef" ]
I've seen this done in TextMate and I was wondering if there's a way to do it in IDEA. Say I have the following code: ``` leaseLabel = "Lease"; leaseLabelPlural = "Leases"; portfolioLabel = "Portfolio"; portfolioLabelPlural = "Portfolios"; buildingLabel = "Building"; ``` What is the best way to append '+ "foo"' to every line? Column mode won't work since the lines are not correctly aligned on the right side... unless there is an easy way to right justify the text :P
Since IntelliJ IDEA 13.1 there is a possibility to edit multiple lines. ## Windows `Alt` + `Shift` + Mouse click ## macOS `Option` + `Shift` + Mouse click for selection. More about this new improvement in the IntelliJ blog post [here](http://blog.jetbrains.com/idea/2014/03/intellij-idea-13-1-rc-introduces-sublime-text-style-multiple-selections/). Very useful feature.
I use *Column Selection Mode* (`Cmd`+`Shift`+`8` on Mac) which allows to create multiple cursors via `Shift`+`Up` or `Shift`+`Down` then edit all the lines together. Starting from IntelliJ IDEA 14 there is also *Clone Caret Above / Below*: * Windows: `Ctrl`, `Ctrl`+`Up`/`Down` * MacOS: `Option`,`Option` + `Up`/`Down` (hold the second press of the modifier key, then press the arrow key)
IntelliJ IDEA way of editing multiple lines
[ "java", "android-studio", "intellij-idea", "ide", "text-editor" ]
Why is it recommended not to have data members in a virtual base class? What about function members? If I have a task common to all derived classes, is it OK for the virtual base class to do the task, or should the derived classes inherit from two classes - from a virtual interface and a plain base that does the task? Thanks.
As a practice you should only use virtual inheritance to define interfaces as they are usually used with multiple inheritance to ensure that only one version of the class is present in the derived class. And pure interfaces are the safest form of multiple inheritance. Of course if you know what you are doing you can use multiple inheritance as you like, but it can result in brittle code if you are not careful. The biggest drawback with virtual inheritance is when the virtual base's constructor takes parameters. If you have to pass parameters to the constructor of a virtual base class you force all derived classes to explicitly call the constructor (they cannot rely on a base class calling the constructor). The only reason I can see for the advice you quote is that data members in your virtual base class might require constructor parameters. **Edit** I did some homework after Martin's comment, thanks Martin. The first line is not quite true: > As a practice you should only use > virtual inheritance to define > interfaces as they are usually used > with multiple inheritance to ensure > that only one version of the class is > present in the derived class. Virtual inheritance makes no difference if the base class is a pure interface (except for slightly different compiler errors, in vc8, if all the methods are not implemented). It only makes a real difference if the base class has data; in this case you end up with a diamond rather than a U shape ``` Non virtual virtual A A A | | / \ B C B C \ / \ / D D ``` In the virtual case B and C share the same copy of A. However I still agree with everything else about pure interfaces being the safest form of multiple inheritance, even if they don't require virtual inheritance. And the fact that constructor parameters and virtual inheritance are a pain.
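The shared-copy behaviour in the virtual diamond can be demonstrated with a short sketch (names are illustrative): writing A's data member through the B path and reading it back through the C path only gives the same value because there is a single shared A subobject.

```cpp
#include <cassert>

struct A { int value; };
struct B : virtual A {};
struct C : virtual A {};
struct D : B, C {};  // with virtual bases, D contains exactly one A

// Write through the B path, read back through the C path.
// With non-virtual inheritance these would be two different A copies
// (and the unqualified member access in D would be ambiguous).
int roundTrip(int v) {
    D d;
    static_cast<B&>(d).value = v;
    return static_cast<C&>(d).value;
}
```

If you drop the `virtual` keyword from B and C, this no longer compiles the same way: D would hold two A subobjects, one per branch of the diamond.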
The core advice is to have a default-constructor in the virtual base. If you don't, then every *most-derived class* (ie. any subclass) must call the virtual base ctor explicitly, and that leads to angry colleagues knocking on your office door... ``` class VirtualBase { public: explicit VirtualBase( int i ) : m_i( i ) {} virtual ~VirtualBase() {} private: int m_i; }; class Derived : public virtual VirtualBase { public: Derived() : VirtualBase( 0 ) {} // ok, this is to be expected }; class DerivedDerived : public Derived { // no VirtualBase visible public: DerivedDerived() : Derived() {} // ok? no: error: need to explicitly // call VirtualBase::VirtualBase!! DerivedDerived() : VirtualBase( 0 ), Derived() {} // ok }; ```
Virtual base class data members
[ "c++", "inheritance", "virtual" ]
I'm looking for a class where I can override a method to do the work, and return the results like an iterator. Something like this: ``` ParallelWorkIterator<Result> itr = new ParallelWorkIterator<Result>(trials,threads) { public Result work() { //do work here for a single trial... return answer; } }; while (itr.hasNext()) { Result result = itr.next(); //process result... } ``` This is mainly going to be used for things like monte carlo simulations, but I don't want to have to deal with setting up thread pools and managing returning threads every time. I rolled my own class that *hopefully* accomplishes this, but I'm not confident enough in it and thought I'd check if something like this already existed. Edit: To be clear, I want it to keep running in the background and queuing results after each work method returns until all trials have been completed. So the next method may wait to return until there is a result in the queue.
Have a look at the [ExecutorCompletionService](http://java.sun.com/javase/6/docs/api/java/util/concurrent/ExecutorCompletionService.html). It does everything you want. ``` void solve(Executor e, Collection<Callable<Result>> solvers) throws InterruptedException, ExecutionException { //This class will hold and execute your tasks CompletionService<Result> ecs = new ExecutorCompletionService<Result>(e); //Submit (start) all the tasks asynchronously for (Callable<Result> s : solvers) ecs.submit(s); //Retrieve completed task results and use them int n = solvers.size(); for (int i = 0; i < n; ++i) { Result r = ecs.take().get(); if (r != null) use(r); } } ``` The benefit of using a CompletionService is that it always returns the first completed result. This ensures you're not waiting for tasks to complete and it lets the uncompleted tasks run in the background.
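A self-contained sketch of that pattern (the class and method names here are mine, not from the question): submit every task up front, then drain results in completion order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class CompletionDemo {
    // Runs every task on a fixed pool and returns results in the order
    // they complete -- each take() yields whichever task finished first,
    // while the remaining tasks keep running in the background.
    static <R> List<R> runAll(List<Callable<R>> tasks, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            CompletionService<R> ecs = new ExecutorCompletionService<R>(pool);
            for (Callable<R> task : tasks) {
                ecs.submit(task);
            }
            List<R> results = new ArrayList<R>();
            for (int i = 0; i < tasks.size(); i++) {
                results.add(ecs.take().get()); // blocks only until *some* task is done
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

Wrapping the `take()` loop behind a `hasNext()`/`next()` pair would give exactly the `ParallelWorkIterator` interface from the question.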
I would recommend looking at Java [Executors](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Executors.html). You submit a number of tasks and get a [Future](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Future.html) object back for each one. Your work is processed in the background, and you iterate through the Future objects (like you do in the above). Each Future returns a result as it become available (by calling `get()` - this blocks until the result has been generated in a separate thread)
Java parallel work iterator?
[ "java", "multithreading", "concurrency", "iterator" ]
Can anyone point me to a good introductory materials on X509 certificates with examples in C#.
You can begin here [X509Certificate MSDN Resource](http://msdn.microsoft.com/en-us/library/system.security.cryptography.x509certificates.x509certificate.aspx) The System.Security.Cryptography.X509Certificates namespace contains the common language runtime implementation of the Authenticode X.509 v.3 certificate. This certificate is signed with a private key that uniquely and positively identifies the holder of the certificate.
I found this [C# example](http://blogs.msdn.com/alejacma/archive/2008/09/05/how-to-create-a-certificate-request-with-certenroll-and-net-c.aspx) pretty helpful in figuring out how to generate new certificates. Also this blog post covering the [more general concepts](http://www.reliablesoftware.com/DasBlog/PermaLink,guid,c953fbc9-34f2-48d4-ba33-55d621a48d7f.aspx) Other than that you'll need the platform SDK installed, which does also include several other examples (most in C++, but a couple in C# and VB too).
X509 guide/tutorial in C#
[ "c#", ".net", "security", "x509certificate" ]
I am having trouble writing a typeof statement which would be using a variable from a config file the code is like this ``` Type t = new typeof ("My.other.class" + configValue[0]); ``` configValue being the dynamic value I get from the app.config file. The error I get is "type expected" it is fine if I type out the class directly so I am assuming my formatting is incorrect. How should this be written?
The `typeof` keyword is for compile-time use. Use the [`Type.GetType(string)`](http://msdn.microsoft.com/en-us/library/system.type.gettype.aspx) API instead.
Is this what you're looking for? ``` Type t = Type.GetType("spork"); object myspork = Activator.CreateInstance(t); ```
Dynamic Typeof in C#
[ "c#", "typeof" ]
I have a cron job that ideally I want to run hourly (it's on the FB accelerators and deals with realtime social stuff, so once an hour doesn't cut it) 1) Can I have a cron job that contains "run(), sleep(60), run()..." that lasts for 60 minutes? 2) Frequency of running aside, it currently outputs: ``` ...web/public/mysqltest.php: ?php: cannot open ...web/public/mysqltest.php: require_once: not found ...web/public/mysqltest.php: include_once: not found ...web/public/mysqltest.php: include_once: not found ...web/public/mysqltest.php: syntax error at line 5: `(' unexpected ``` (the rest of the path is there). How can I start debugging that message? It's odd to me that my server can't open a PHP file. When run in a browser it does its job admirably. The first 6 lines of the script are: ``` <?php require_once 'facebook.php'; include_once 'lib.php'; include_once 'config.php'; $facebook = new Facebook($api_key, $secret); get_db_conn(); ``` Nothing super fancy as far as I can tell. I can confirm that the include/required files are indeed in the same directory. Do they need a certain level of access permissions? Any and all help would be appreciated.
**1** There is no limit on cron script execution time. Personally, I have a cron PHP script that runs for more than 45 minutes per execution. Be aware that in this case you may need to change php.ini, because it *will* contain a PHP script execution time limit. **2** You should put the following ``` #!/usr/bin/php ``` at the start of your PHP script. This way you enable its execution from the shell like a usual script. After this is done and tested, put it in cron.
Since it's being invoked directly by cron, it's being interpreted by the shell rather than by PHP. The beginning of your cron job should include the path to your PHP parser, e.g.: ``` /usr/bin/php /home/path_to/php_file/yourfile.php ``` You'll need to find your own server's path to PHP, of course.
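Putting the interpreter fix together with the hourly scheduling from the question, a crontab entry might look something like this (the paths here are illustrative guesses, not the poster's actual ones):

```
# m h dom mon dow  command
0 * * * * /usr/bin/php /home/user/web/public/mysqltest.php >> /tmp/mysqltest.log 2>&1
```

Redirecting output to a log file also makes the next round of debugging easier than digging through cron's emails.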
Cron error messages; is it my script causing the errors?
[ "", "php", "cron", "" ]
I have a problem with PHP file uploads: **some songs upload fine, but some songs don't work**. The problem is that PHP doesn't see my upload field. If I try: ``` if (isset($_FILES['song'])) { //lala } else { echo 'no song'; } ``` I receive the "no song" echo. Here is a Firebug screenshot: <http://screencast.com/t/prCixoAn> I have changed the upload file size in php.ini to 30M, and I also set the MAX\_FILE\_SIZE input. Any solutions?
Check the `post_max_size` option in php.ini. It must be larger than the value of `upload_max_filesize`.
I don't know if you are doing this already, but you also want to call `set_time_limit($amountOfTime)` so the script doesn't time out.
Some uploads fail in PHP
[ "", "php", "" ]
I'm trying to send push notifications to an iPhone using Python. I've exported my **certificate and private key** into a p12 file from keychain access and then converted it into pem file using the following command: ``` openssl pkcs12 -in cred.p12 -out cert.pem -nodes -clcerts ``` I'm using [APNSWrapper](http://code.google.com/p/apns-python-wrapper/wiki/APNSWrapperOverview) in Python for the connection. I run the following code: ``` deviceToken = 'Qun\xaa\xd ... c0\x9c\xf6\xca' # create wrapper wrapper = APNSNotificationWrapper('/path/to/cert/cert.pem', True) # create message message = APNSNotification() message.token(deviceToken) message.badge(5) # add message to tuple and send it to APNS server wrapper.append(message) wrapper.notify() ``` And then I get the error message: ``` ssl.SSLError: (1, '_ssl.c:485: error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown') ``` Can anyone help me out on this?
I recently did this using Django - <http://leecutsco.de/2009/07/14/push-on-the-iphone/> It may be useful: it uses no extra libraries other than those included with Python already, and it wouldn't take much to extract the send\_message() method out.
Have you considered the [Twisted](http://twistedmatrix.com/trac/) package? The below code is taken from [here](http://blog.nuclearbunny.org/2009/05/11/connecting-to-apple-push-notification-services-using-python-twisted/): ``` import struct from OpenSSL import SSL from twisted.internet import reactor from twisted.internet.protocol import ClientFactory, Protocol from twisted.internet.ssl import ClientContextFactory APNS_SERVER_HOSTNAME = "<insert the push hostname from your iPhone developer portal>" APNS_SERVER_PORT = 2195 APNS_SSL_CERTIFICATE_FILE = "<your ssl certificate.pem>" APNS_SSL_PRIVATE_KEY_FILE = "<your ssl private key.pem>" class APNSClientContextFactory(ClientContextFactory): def __init__(self): self.ctx = SSL.Context(SSL.SSLv3_METHOD) self.ctx.use_certificate_file(APNS_SSL_CERTIFICATE_FILE) self.ctx.use_privatekey_file(APNS_SSL_PRIVATE_KEY_FILE) def getContext(self): return self.ctx class APNSProtocol(Protocol): def sendMessage(self, deviceToken, payload): # notification messages are binary messages in network order # using the following format: # <1 byte command> <2 bytes token length><token> <2 bytes payload length><payload> fmt = "!BH32sH%ds" % len(payload) command = 0 msg = struct.pack(fmt, command, 32, deviceToken, len(payload), payload) self.transport.write(msg) class APNSClientFactory(ClientFactory): def buildProtocol(self, addr): print "Connected to APNS Server %s:%u" % (addr.host, addr.port) return APNSProtocol() def clientConnectionLost(self, connector, reason): print "Lost connection. Reason: %s" % reason def clientConnectionFailed(self, connector, reason): print "Connection failed. Reason: %s" % reason if __name__ == '__main__': reactor.connectSSL(APNS_SERVER_HOSTNAME, APNS_SERVER_PORT, APNSClientFactory(), APNSClientContextFactory()) reactor.run() ```
Connecting to APNS for iPhone Using Python
[ "", "iphone", "python", "ssl", "push-notification", "" ]
Consider: ``` template <typename T> class Base { public: static const bool ZEROFILL = true; static const bool NO_ZEROFILL = false; }; template <typename T> class Derived : public Base<T> { public: Derived( bool initZero = NO_ZEROFILL ); // NO_ZEROFILL is not visible ~Derived(); }; ``` I am not able to compile this with GCC g++ 3.4.4 (cygwin). Prior to converting these to class templates, they were non-generic and the derived class was able to see the base class's static members. Is this loss of visibility a requirement of the C++ spec, or is there a syntax change that I need to employ? I understand that each instantiation of `Base<T>` will have its own static members `ZEROFILL` and `NO_ZEROFILL`, and that `Base<float>::ZEROFILL` and `Base<double>::ZEROFILL` are different variables, but I don't really care; the constant is there for readability of the code. I wanted to use a static constant because that is safer in terms of name conflicts than a macro or a global.
That's two-phase lookup for you. `Base<T>::NO_ZEROFILL` (all caps identifiers are boo, except for macros, BTW) is an identifier that depends on `T`. Since, when the compiler first parses the template, there's no actual type substituted for `T` yet, the compiler doesn't "know" what `Base<T>` is. So it cannot know any identifiers you assume to be defined in it (there might be a specialization for some `T`s that the compiler only sees later) and you cannot omit the base class qualification from identifiers defined in the base class. That's why you have to write `Base<T>::NO_ZEROFILL` (or `this->NO_ZEROFILL`). That tells the compiler that `NO_ZEROFILL` is something in the base class, which depends on `T`, and that it can only verify it later, when the template is instantiated. It will therefore accept it without trying to verify the code. That code can only be verified later, when the template is instantiated by supplying an actual parameter for `T`.
The problem you have encountered is due to name lookup rules for dependent base classes. 14.6/8 has: > When looking for the declaration of a name used in a template definition, the usual lookup rules (3.4.1, > 3.4.2) are used for nondependent names. The lookup of names dependent on the template parameters is > postponed until the actual template argument is known (14.6.2). (This is not really "2-phase lookup" - see below for an explanation of that.) The point about 14.6/8 is that as far as the compiler is concerned `NO_ZEROFILL` in your example is an identifier and is not dependent on the template parameter. It is therefore looked up as per the normal rules in 3.4.1 and 3.4.2. This normal lookup doesn't search inside `Base<T>` and so NO\_ZEROFILL is simply an undeclared identifier. 14.6.2/3 has: > In the definition of a class template or a member of a class template, if a base class of the class template > depends on a template-parameter, the base class scope is not examined during unqualified name lookup > either at the point of definition of the class template or member or during an instantiation of the class template > or member. When you qualify `NO_ZEROFILL` with `Base<T>::` in essence you are changing it from being a non dependent name to a dependent one and when you do that you delay its lookup until the template is instantiated. **Side note: What is 2-phase lookup:** ``` void bar (int); template <typename T> void foo (T const & t) { bar (t); } namespace NS { struct A {}; void bar (A const &); } int main () { NS::A a; foo (a); } ``` The above example is compiled as follows. The compiler parses the function body of `foo` and see that there is a call to `bar` which has a dependent argument (ie. one that is dependent on the template parameter). At this point the compiler looks up bar as per 3.4.1 and this is the "phase 1 lookup". The lookup will find the function `void bar (int)` and that is stored with the dependent call until later. 
When the template is then instantiated (as a result of the call from `main`), the compiler then performs an additional lookup in the scope of the argument, this is the "phase 2 lookup". This case that results in finding `void NS::bar(A const &)`. The compiler has two overloads for `bar` and it selects between them, in the above case calling `void NS::bar(A const &)`.
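As a concrete sketch of the qualified-name fix for the original snippet (the `zeroed` member is added here only so there is something observable; it is not in the poster's code):

```cpp
template <typename T>
class Base {
public:
    static const bool ZEROFILL = true;
    static const bool NO_ZEROFILL = false;
};

template <typename T>
class Derived : public Base<T> {
public:
    // Base<T>::NO_ZEROFILL is a dependent name, so its lookup is deferred
    // until instantiation; a bare NO_ZEROFILL here would not compile.
    Derived(bool initZero = Base<T>::NO_ZEROFILL) : zeroed(initZero) {}
    bool zeroed;
};
```

Writing `this->NO_ZEROFILL` inside member function bodies works for the same reason, though `this` is not available in a default argument.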
Why doesn't a derived template class have access to a base template class' identifiers?
[ "", "c++", "templates", "derived-class", "c++-faq", "" ]
I am trying to create a GUI interface in VB to track... oh, nevermind. Basically, I want to create a GUI in Python to display data, but I am finding that matplotlib is not suiting my needs. I would like to be able to highlight certain datapoints, have more freedom in the text drawn to the screen, have animations on data movement, and have dropdown menus for data points. From what I have seen, I do not believe that matplotlib can do these things. What utility can I look into to better suit my needs?
I haven't used it myself but [Chaco](http://code.enthought.com/projects/chaco/) seems to fit some of your needs. It is more interactive than matplotlib and can be used to make quite interactive applications. > Chaco is a Python plotting application toolkit that facilitates writing plotting applications at all levels of complexity, from simple scripts with hard-coded data to large plotting programs with complex data interrelationships and a multitude of interactive tools. While Chaco generates attractive static plots for publication and presentation, it also works well for interactive data visualization and exploration. [![](https://i.stack.imgur.com/bTZ2S.png)](https://i.stack.imgur.com/bTZ2S.png) (source: [enthought.com](http://code.enthought.com/projects/chaco/docs/html/_images/vanderwaals.png))
QGraphicsScene/View from PyQt4 is a fantastic piece of code, although your description makes me think that some upfront work will be necessary to make things fit. ...don't trust me, I'm biased ;) Get the library [here](http://www.riverbankcomputing.co.uk/software/pyqt/download) and check the demos.
Python Graphing Utility for GUI with Animations
[ "", "python", "" ]
I'm losing my mind on this one. My curl command works: curl <http://testuser:testpwd@qabox3:8501/xmlcontroller> But, when I try what looks like an equivalent http connection in Java it gives "connection refused". What am I missing? I've tried a dozen flavors of trying to make this connection today and am out of ideas. ``` URL url = new URL( "http://qabox3:8051/xmlcontroller" ); HttpURLConnection conn = (HttpURLConnection)url.openConnection(); conn.setRequestMethod( "GET" ); String encoding = new sun.misc.BASE64Encoder().encode( "testuser:testpwd".getBytes() ); conn.setRequestProperty("Authorization", "Basic " + encoding ); InputStream content = conn.getInputStream(); // <--- fails here every time. BufferedReader in = new BufferedReader( new InputStreamReader( content ) ); String line; while ((line = in.readLine()) != null) { System.out.println( line ); } ``` Moreover, I can use Java's Runtime.exec() to exec the curl command and that still works...so I'm clearly doing something wrong in the HttpURLConnection stuff. Here's the stack I'm seeing (now using HttpClient, but basically same stack with the Java libs). 
``` Exception in thread "main" java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:520) at java.net.Socket.connect(Socket.java:470) at java.net.Socket.<init>(Socket.java:367) at java.net.Socket.<init>(Socket.java:240) at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80) at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:122) at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707) at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387) at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171) at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397) at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323) at MyClass.sendRequest(iQ411RequestHandlerProxy.java:277) ``` Just for fun, here's the curl verbose output. Nothing special in the header... ``` > GET /xmlcontroller HTTP/1.1 > Authorization: Basic cWFfc3VwZXI6cWFfc3VwZXI= > User-Agent: curl/7.16.3 (i686-pc-cygwin) libcurl/7.16.3 OpenSSL/0.9.8k zlib/1.2.3 libssh2/0.15-CVS > Host: qabox3:8501 > Accept: */* > ```
You seem to use two different port numbers: 8501 and 8051? Could that be the problem or was that a typo in posting the question? > curl <http://testuser:testpwd@qabox3:8501/xmlcontroller> > > URL url = new URL( "<http://qabox3:8051/xmlcontroller>" ); Regards, Ian
Could this be an HTTP proxy issue? Could curl be using your HTTP proxy and your Java app be trying to connect directly?
Java HttpConnection refused, but curl equivalent works
[ "", "java", "http", "authentication", "" ]
I'm attempting to write a simple B+tree implementation (very early stages). I've got a virtual class with a few functions. Needless to say, I'm very new to these strategies and am running into all sorts of problems. I'm attempting to create a root node within the BTree class. The root node will be a BBranch, which should inherit from BNode? I'm getting these errors ``` btree.cpp: In constructor 'BTree::BTree()': btree.cpp:25: error: cannot declare variable 'root' to be of abstract type 'BBranch' btree.cpp:12: note: because the following virtual functions are pure within 'BBranch': btree.cpp:9: note: virtual void BNode::del(int) btree.cpp: In member function 'void BTree::ins(int)': btree.cpp:44: error: 'root' was not declared in this scope ``` The code is this ``` using namespace std; class BNode { public: int key [10]; int pointer [11]; virtual void ins( int num ) =0; virtual void del( int num ) =0; }; class BBranch: public BNode { public: void ins( int num ); }; class BLeaf: public BNode { public: void ins( int num ); }; class BTree { public: BTree() { BBranch root; }; void ins( int num ); }; // Insert into branch node void BBranch::ins( int num ){ // stuff for inserting specifically into branches }; // Insert for node void BTree::ins( int num ){ root.ins( num ); }; int main(void){ return 0; } ``` Thank you for any information you can give me.
The compiler seems to be pretty clear about what's wrong. You can't declare a `BBranch` because there's still a pure virtual function in that class. You defined `ins`, but `del` is still undefined. Define that in `BBranch` (and `BLeaf`) and you should be fine. You can't declare instances of abstract classes, which are classes that have pure virtual functions. Furthermore, you have declared `root` *in the constructor*. You meant for it to be a *member variable*, which means it needs to be declared *beside* the constructor, not *inside*. ``` class BTree { public: BTree() { }; BBranch root; void ins( int num ); }; ```
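A minimal sketch of the fix. The bodies and the `last` member are placeholders added only so the behavior can be checked; they are not real B-tree logic:

```cpp
class BNode {
public:
    virtual ~BNode() {}
    virtual void ins(int num) = 0;
    virtual void del(int num) = 0;
};

// Concrete only because BOTH pure virtual functions are overridden.
class BLeaf : public BNode {
public:
    BLeaf() : last(0) {}
    void ins(int num) { last = num; }                 // placeholder body
    void del(int num) { if (last == num) last = 0; }  // placeholder body
    int last;
};
```

With `del` defined, `BLeaf leaf;` compiles; leave it out and you get the same "abstract type" error as in the question.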
If you create an abstract base class like you have done with BNode and you want to create a concrete derived class, you must implement *all* pure virtual functions. Both BBranch and BLeaf miss an implementation of 'del' so they remain abstract.
C++ pure virtual class question
[ "", "c++", "inheritance", "polymorphism", "" ]
When I insert data with the following code, I see the exception below. What should I do? Code: ``` Movie_List_DBDataContext Movie_list1 = new Movie_List_DBDataContext(); Actor act = new Actor(); act.Actor_Name = Acttxt.Text; Movie_list1.Actors.InsertOnSubmit(act); Movie_list1.SubmitChanges(); ``` Exception: > Violation of PRIMARY KEY constraint 'PK\_Actors'. Cannot insert duplicate key in object 'dbo.Actors'. My table has two columns, ID and Name, and ID is the primary key.
In your `.dbml` designer, make sure that the `ID` field is marked as "Auto Generated Value". You can find this in the properties view of the field. Normally, this is initialized according to the table in the database, so if the ID is set as an auto-generated value in the database, the designer will automatically set "Auto Generated Value" to true. You can also mark the desired field as an "Auto Generated Value" in code: find the `ID` property in the generated code-behind and set the value in its `Column` attribute: `IsDbGenerated=true`
Have you set "ID" in the Actor-table as an identity-field? If you haven't, a quick search on google or stack overflow will show you how. Also make sure that the autogenerate property is set in your designer-file for the ID column.
Error while inserting data with LINQ to SQL
[ "", "c#", "linq", "" ]
In the project properties, in the "Java Compiler" section, there are some settings for the "JDK Compliance". I wanted to set the source compatibility to 6 and the .class files' compatibility to 5. This is apparently not allowed: I get the message "Classfile compatibility must be greater or equal than source compatibility". Is this a limitation of Eclipse or a fundamental issue of Java version compatibility? Are there workarounds? I would like to use Java 6 in development and for most targets, but I want to produce Java 5 compatible artifacts for some targets such as Macs. I'm fine if this means having two builds, as long as it's the same source.
See also [this thread](http://dev.eclipse.org/newslists/news.eclipse.tools.jdt/msg20212.html) pointing to a similar (older) issue: ["Java theory and practice: Using Java 5 language features in earlier JDKs"](http://www.ibm.com/developerworks/java/library/j-jtp02277.html). Unless you are using some kind of external tool to transform Java 6 bytecode into Java 5-compatible bytecode, you cannot set the class compliance to 1.5 and the source compliance to 6.0.
As Eclipse says, you can't generate java5 class files from java6 source, it's just not an option. If you need to generate java5 builds, you need to use java5 source level. Do you really need to use Java6 source level? If there are things in the Java6 API that you need to use, then generating builds for Java5 won't work when you try and run those builds. If you don't need to use Java6 API, then you don't need to use Java6 source compatibility either.
How can I have Java 5 output from Java 6 source in Eclipse?
[ "", "java", "compatibility", "" ]
I have a relatively complicated generic type (say `Map<Long,Map<Integer,String>>`) which I use internally in a class. (There is no external visibility; it's just an implementation detail.) I would like to hide this in a typedef, but Java has no such facility. Yesterday I rediscovered the following idiom and was disappointed to learn that it's [considered an anti-pattern](http://www.ibm.com/developerworks/java/library/j-jtp02216/index.html) . ``` class MyClass { /* "Pseudo typedef" */ private static class FooBarMap extends HashMap<Long,Map<Integer,String>> { }; FooBarMap[] maps; public FooBarMap getMapForType(int type) { // Actual code might be more complicated than this return maps[type]; } public String getDescription(int type, long fooId, int barId) { FooBarMap map = getMapForType(type); return map.get(fooId).get(barId); } /* rest of code */ } ``` Can there ever be any justification for this when the type is hidden and isn't forming part of a library API (which on my reading are Goetz's main objections to using it)?
**The real problem** is that this idiom creates high coupling between your *pseudo typedef* and your client code. However, **since you are using `FooBarMap` privately, there are no real coupling problems** (they are implementation details). **NB** A modern Java IDE definitely helps in dealing with complicated generic types.
IMO, the problem with Java anti-patterns is that they encourage black-and-white thinking. In reality, most anti-patterns are nuanced. For example, the linked article explains how pseudo-typedefs leads to APIs whose type signatures are too restrictive, too tied to particular implementation decisions, viral, and so on. But this is all in the context of public APIs. If you keep pseudo-typedefs out of public APIs (i.e. restrict them to a class, or maybe a module), they probably do no real harm and they *may* make your code more readable. My point is that you need to *understand* the anti-patterns and make *your own reasoned judgement* about when and where to avoid them. Simply taking the position that "I will *never* do X because it is an anti-pattern" means that *sometimes* you will rule out pragmatically acceptable, or even good solutions.
Is there ever justification for the "pseudo-typedef antipattern"?
[ "", "java", "typedef", "anti-patterns", "" ]
I'm trying to do another exercise from Deitel's book. The program calculates the monthly interest and prints the new balances for each of the savers. As the exercise is part of the chapter related to dynamic memory, I'm using the "new" and "delete" operators. For some reason, I get these two errors: > LNK2019: unresolved external symbol WinMain@16 referenced in function \_\_\_tmainCRTStartup > > fatal error LNK1120: 1 unresolved externals Here is the class header file. ``` //SavingsAccount.h //Header file for class SavingsAccount class SavingsAccount { public: static double annualInterestRate; SavingsAccount(double amount=0);//default constructor initializes //to 0 if no argument double getBalance() const;//returns current balance double calculateMonthlyInterest(); static void modifyInterestRate(double interestRate); ~SavingsAccount();//destructor private: double *savingsBalance; }; ``` > Cpp file with member function definitions ``` //SavingsAccount class definition #include "SavingsAccount.h" double SavingsAccount::annualInterestRate=0;//define and initialize static data //member at file scope SavingsAccount::SavingsAccount(double amount) :savingsBalance(new double(amount))//initialize savingsBalance to point to new object {//empty body }//end of constructor double SavingsAccount::getBalance()const { return *savingsBalance; } double SavingsAccount::calculateMonthlyInterest() { double monthlyInterest=((*savingsBalance)*annualInterestRate)/12; *savingsBalance=*savingsBalance+monthlyInterest; return monthlyInterest; } void SavingsAccount::modifyInterestRate(double interestRate) { annualInterestRate=interestRate; } SavingsAccount::~SavingsAccount() { delete savingsBalance; }//end of destructor ``` > And finally the driver program: ``` #include <iostream> #include "SavingsAccount.h" using namespace std; int main() { SavingsAccount saver1(2000.0); SavingsAccount saver2(3000.0); SavingsAccount::modifyInterestRate(0.03);//set interest rate to 3% cout<<"Saver1 monthly interest: "<<saver1.calculateMonthlyInterest()<<endl; cout<<"Saver2 monthly interest: "<<saver2.calculateMonthlyInterest()<<endl; cout<<"Saver1 balance: "<<saver1.getBalance()<<endl; cout<<"Saver2 balance: "<<saver2.getBalance()<<endl; return 0; } ``` I have spent an hour trying to figure this out with no success.
Go to "Linker settings -> System". Change the field "Subsystem" from "Windows" to "Console".
It looks like you are writing a standard console application (you have `int main()`), but the linker is expecting to find a Windows entry point, `WinMain`. In your project's property pages, in the Linker section, under System/SubSystem, do you have "Windows (/SUBSYSTEM:WINDOWS)" selected? If so, try changing it to "Console (/SUBSYSTEM:CONSOLE)".
C++ LNK1120 and LNK2019 errors: "unresolved external symbol WinMain@16"
[ "", "c++", "visual-c++", "linker-errors", "lnk2019", "" ]
I want to create a link that calls a javascript function, and I want to pass the text of the link into the function. I am trying to create a dialog that displays the name on the original link. Would jquery be helpful here?
Not sure exactly if this is what you are looking for but: ``` <a href="#" id="mylink">Some Text here</a> $('#mylink').click(function(){ myfunc($(this).text()); return false; }); ```
jQuery UI has a [dialog function](http://jqueryui.com/demos/dialog/#default) which would make it easy. I'd create a hidden div: ``` <!-- Temporary elements --> <!-- ui-dialog --> <div id="dialog" title=" "> </div> ``` And in $(document).ready add: ``` jQuery('#dialog').dialog({ autoOpen: false, modal: true, width: 625, position: 'center' }); /* end #dialog */ ``` Then, in the click event of the link, set the title and text as: ``` jQuery('.ui-dialog-title').text(/* yourtext */); jQuery('.ui-dialog-content').html(/* link name or whatever */); jQuery('#dialog').dialog('open'); return false; ``` Those classes are automatically added by the dialog. edit: forgot to mention, you'll want to open the dialog in the same click event and return false so the original link href doesn't execute.
jquery - pass text on the page into a dialog
[ "", "javascript", "jquery", "html", "" ]
I assigned a timeout to my window.resize handler so that I wouldn't call my sizable amount resize code every time the resize event fires. My code looks like this: ``` <script> function init() { var hasTimedOut = false; var resizeHandler = function() { // do stuff return false; }; window.onresize = function() { if (hasTimedOut !== false) { clearTimeout(hasTimedOut); } hasTimedOut = setTimeout(resizeHandler, 100); // 100 milliseconds }; } </script> <body onload="init();"> ... etc... ``` In IE7 (and possibly other versions) it appears that when you do this the resize event will constantly fire. More accurately, it will fire after every timeout duration -- 100 milliseconds in this case. Any ideas why or how to stop this behavior? I'd really rather not call my resize code for every resize event that fires in a single resizing, but this is worse.
In your //do stuff code, do you manipulate any of the top,left,width,height,border,margin or padding properties? You may unintentionally be triggering recursion which unintentionally triggers recursion which unintentionally triggers recursion...
[How to fix the resize event in IE](http://noteslog.com/post/how-to-fix-the-resize-event-in-ie/) Also, see scunliffe's answer ("In your //do stuff code, do you manipulate any of the top, left, width, height, border, margin or padding properties?").
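The usual fix behind that link is to bail out when the window dimensions have not actually changed, which breaks the feedback loop even if IE refires the event. A sketch of the idea, simulated here without a browser (in real code the size-reading function would use something like `document.documentElement.clientWidth`/`clientHeight`):

```javascript
// Only run the expensive resize work when the size actually changed.
function makeResizeFilter(doWork, getSize) {
  let last = getSize();
  return function () {
    const now = getSize();
    if (now.w === last.w && now.h === last.h) return; // spurious refire
    last = now;
    doWork();
  };
}

// Simulated window size instead of a real browser window.
let size = { w: 800, h: 600 };
let workCalls = 0;
const onResize = makeResizeFilter(function () { workCalls += 1; },
                                  function () { return { w: size.w, h: size.h }; });

size = { w: 900, h: 600 };
onResize();   // a real change: work runs
onResize();   // IE-style spurious refire: filtered out
```

This can be layered under the question's existing timeout logic; the filter guarantees the handler body runs at most once per actual size change.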
Why is the resize event constantly firing when I assign it a handler with a timeout?
[ "", "javascript", "internet-explorer-7", "timeout", "onresize", "" ]
I would like to pass a nested JavaScript object to my ASP.NET MVC Action Method. Here is the code(simplified): ``` $.getJSON('some/url', { index: pageIndex, pageSize: pageSize, filter:{one:'one',two:'two',three:'three'} }, someCallBack(msg) ); ``` I'm using jQuery and implemented my plugin which lazily fetches paginated data from the server. It works all charm but now I need to pass a JavaScript `Filter` object with a variable number of properties-filters. On the server-side, I get an object array where the first item is a string, containing the `[Object object]` literal. Obviously, my nested object (`filter`) is not being expanded and transformed into an object (`hash`) on the server-side. Is this possible at all?? I don't want to hard code my filters, since the plugin is meant to be universally applied. Thank you very much.
You can use `System.Web.Script.Serialization.JavaScriptSerializer` to send/receive JSON serialized data: ``` JavaScriptSerializer js = new JavaScriptSerializer(); Filter f = js.Deserialize<Filter>(json_str); ``` More details [here](http://blogs.msdn.com/rakkimk/archive/2009/01/30/asp-net-json-serialization-and-deserialization.aspx). To encode the JSON data to send to the server, use a JSON serialization library for JavaScript like [json2.js](https://github.com/douglascrockford/JSON-js/blob/master/json2.js). Then send the query like this: ``` var filter = {field1: 'value1', field2: 'value2'} $.ajax({ type: "POST", url: '/server/path', data: { filter: JSON.stringify(filter) }, dataType: "json", success: function(data) { // do something } }); ```
[JSON](http://en.wikipedia.org/wiki/Json) would be perfect for this. Basically, you'll want to convert your object to it's JSON representation and then send that across the wire. Once it's available on the server, you can process it however you like. Crockford has a [great article](http://www.json.org/js.html) on what JSON is, how to understand the notation, and he provides a tool to convert your objects to JSON notation.
How to pass a nested JavaScript object to ASP.NET MVC Action Method?
[ "", "javascript", "jquery", "asp.net-mvc", "" ]
I have developed an application in Windows Forms, but now I have been told to put it in a tabbed page. The problem I'm facing is how to insert forms into a tab control, and if this is not possible, what I can do instead. I need to navigate between the pages (or forms), and the data should persist while navigating. Is there any way to insert forms in a tab control? Even if it is possible, the form shouldn't look like a form; rather, it should look like a page. Please help me. Thanks in advance, sri.
``` form1.TopLevel = false; form1.FormBorderStyle = FormBorderStyle.None;//this is not compulsory this.tabPage1.Controls.Add(form1); form1.Show(); ```
Try making a UserControl with all the logic and forms from your first application, and use it in your second application by dropping it on the main form. Making the UserControl shouldn't be that hard; you could copy-paste the controls from your old app. Not sure if this is of any help, but I hope it is.
Regarding Tabcontrol and forms
[ "", "c#", ".net", "winforms", "" ]
Let us have a path ``` C:\Program Files\TestFolder ``` I got this path programmatically and stored it in a variable, `dirpath` (for example). Now I have the concatenated string ``` dirpath=getInstallationpath()+"\\test.dll /codebase /tlb"; ``` so dirpath becomes ``` C:\Program Files\TestFolder\test.dll /codebase /tlb ``` But my problem is that I have to make the path enclosed in double quotes: ``` "C:\Program Files\TestFolder\test.dll" ``` When I directly pass dirpath as the command line for regasm in CreateProcess(), it only accepts C:\Program because of the white space. So I tried lots of stunts like ``` dirpath="\ "+getInstallationPath()+" \test.dll /codebase /tlb " ``` but they did not work... So please help me in this regard... Thanks in advance...
I can see two issues with that line. First of all, you need to escape the backslash preceding test.dll. Secondly, wrapping the path in quotation marks requires that you also escape the quotation marks. After these changes, it should look like this: ``` dirpath="\""+getInstallationPath()+"\\test.dll\" /codebase /tlb " ``` **Edit:** Fixed the assignment per Martin's request. Forgot a closing quotation mark for the first string!
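A self-contained sketch of the result, with a hard-coded stand-in for `getInstallationPath()` since the real one isn't shown:

```cpp
#include <string>

// Hypothetical stand-in for the poster's getInstallationPath().
std::string getInstallationPath() {
    return "C:\\Program Files\\TestFolder";
}

// The path itself is wrapped in escaped quotes; the /codebase and /tlb
// switches stay outside the quotes so regasm still sees them as switches.
std::string buildRegasmArgs() {
    return "\"" + getInstallationPath() + "\\test.dll\" /codebase /tlb";
}
```

The resulting string is `"C:\Program Files\TestFolder\test.dll" /codebase /tlb`, which survives the whitespace in `Program Files` when passed to CreateProcess().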
I believe you forgot the second \" after test.dll
How to enclose the path stored in a variable in quotes?
[ "", "c++", "visual-c++", "" ]
Whenever the value is null for this query ``` SELECT ISNULL(someDateTime,'') FROM someTable ``` the result is ``` someDateTime ------------ 1900-01-01 00:00:00.000 ``` I want it to be "No", so if I run this: ``` SELECT ISNULL(someDateTime,'No') FROM someTable ``` then there's this error: > Conversion failed when converting datetime from character string. How to do it? Thanks in advance!
The result of the expression will need to be a single type. If you want a character string (and you do, since 'No' is not a DateTime), you'll need to convert the datetime to such a string: ``` SELECT ISNULL(cast(someDatetime as varchar(20)), 'No') FROM someTable ``` As others have suggested, though, code like this smells bad, and you may want to pass the null to a client component and do the conversion there.
`isnull()` is trying to convert the second argument to the datatype of the field you specify in the first argument. If you are going to be returning a string you need to cast the `DateTime` field to a string type so that `isnull()` can work properly - see [Michael Petrotta's](https://stackoverflow.com/questions/1201541/converting-mssql-null-date-time-fields/1201574#1201574) answer for a way to accomplish this.
Converting SQL Server null date/time fields
[ "", "sql", "sql-server", "" ]
I am using SQL Server 2005. I have three tables - Users, Groups, and GroupUsers. GroupUsers contains the two PKs for a many-to-many relationship. I have a view to get all the user information for a group as follows: ``` SELECT * FROM GroupUsers JOIN Users ON GroupUsers.UserID = Users.UserId ``` I want to create the inverse of this view - I want a list of all of the users NOT attached to a specific group. The following query would accomplish this: ``` SELECT * FROM Users WHERE UserID NOT IN (SELECT UserID FROM GroupUsers WHERE GroupID=@GroupID) ``` However I don't want to have to specify the group, I want to know how to turn this into a view that joins the GroupID and then the UsersID and all the user info, but only for non-attached users. I'm not sure how to do this, maybe something with the EXCEPT operator? UPDATE: I think this is my solution, unless someone comes up with something better: ``` SELECT G.GroupId, U.* FROM Groups G CROSS JOIN Users U WHERE U.UserId NOT IN ( SELECT UserId FROM GroupUsers WHERE GroupId=G.GroupId ) ```
If I understand it correctly, you will have to take the cartesian product of Users and Groups and subtract the rows derived from GroupUsers. That will give you the records of users that do not have the group attached. I apologize if I didn't understand the question correctly. EDIT: The cartesian product will give you users × groups. You will have to subtract GroupUsers from it. I am sorry, I do not have the SQL ready for it and can't try it out at this point.
You can use a `left outer join` to grab all of the users, then, blow away any user where there's a group attached. The following query will give you just the list of users where there's no group to be had: ``` select u.* from users u left outer join groupusers g on u.userid = g.userid where g.userid is null ``` If you want to find all users not in a particular group: ``` select u.* from users u left outer join groupusers g on u.userid = g.userid and g.groupid = @GroupID where g.userid is null ``` This will *only* exclude the users in that particular group. Every other user will be returned. This is because the `groupid` condition was done in the `join` clause, which limits the rows joined, not returned, which is what the `where` clause does.
Get the inverse of a join?
[ "", "sql", "sql-server", "sql-server-2005", "t-sql", "join", "" ]
I am trying to switch from using a JFileChooser to a FileDialog when my app is being run on a Mac, so that it will use the OS X file chooser. So far I have the following code: ``` FileDialog fd = new FileDialog(this); fd.setDirectory(_projectsBaseDir.getPath()); fd.setLocation(50,50); fd.setFile(?); fd.setVisible(true); File selectedFile = new File(fd.getFile()); ``` What would I put in place of the ? so that my file chooser would allow any directory as input? (The method that follows already checks that the directory is the right kind of directory; I just want the FileDialog to accept any directory.)
Assuming you're determined to use the FileDialog instead of the portable JFileChooser, you need to set the system property so that FileDialogs created are for directories. The property in question is [`apple.awt.fileDialogForDirectories`](http://developer.apple.com/documentation/Java/Reference/Java_PropertiesRef/Articles/JavaSystemProperties.html). So simply do the following: ``` System.setProperty("apple.awt.fileDialogForDirectories", "true"); FileDialog fd = new FileDialog(this); fd.setDirectory(_projectsBaseDir.getPath()); fd.setLocation(50,50); fd.setVisible(true); File selectedFile = new File(fd.getFile()); System.setProperty("apple.awt.fileDialogForDirectories", "false"); ``` It should be noted that this isn't portable, however, since you're looking to replace the portable JFileDialog, I assume that's not an issue.
> I am trying to switch from using a JFileChooser to a FileDialog when my app is being run on a mac so that it will use the OSx file chooser I would suggest that you try to stay in the Swing world and shy away from the heavier-weight world of AWT. There are ways to work around issues with the Swing L&F on Macs, if that is what your problem is. Take a look at [this post to an earlier question](https://stackoverflow.com/questions/937533/jfilechooser-os-x/937547#937547), which links to a site that shows how to get the correct Mac icons in the file chooser. Excuse me for not exactly answering your question. If there are other reasons why you would prefer to stay with `FileDialog`, I will gladly remove this post.
How can I make a java FileDialog accept directories as its FileType in OS X?
[ "", "java", "macos", "look-and-feel", "filedialog", "" ]
Is it possible to express 1 or 0 as a bit when used as a field value in a select statement? e.g. In this case statement (which is part of a select statement) ICourseBased is of type int. ``` case when FC.CourseId is not null then 1 else 0 end as IsCoursedBased ``` To get it to be a bit type I have to cast both values. ``` case when FC.CourseId is not null then cast(1 as bit) else cast(0 as bit) end as IsCoursedBased ``` Is there a shorthand way of expressing the values as bit type without having to cast every time? (I'm using MS SQL Server 2005)
``` cast ( case when FC.CourseId is not null then 1 else 0 end as bit) ``` The CAST spec is "CAST (expression AS type)". The CASE is an *expression* in this context. If you have multiple such expressions, I'd declare bit vars @true and @false and use them. Or use UDFs if you really wanted... ``` DECLARE @True bit, @False bit; SELECT @True = 1, @False = 0; --can be combined with declare in SQL 2008 SELECT case when FC.CourseId is not null then @True ELSE @False END AS ... ```
You might add the second snippet as the field definition for IsCoursedBased in a view. ``` CREATE VIEW MyView AS SELECT case when FC.CourseId is not null then cast(1 as bit) else cast(0 as bit) end as IsCoursedBased ... SELECT IsCoursedBased FROM MyView ```
Imply bit with constant 1 or 0 in SQL Server
[ "", "sql", "sql-server", "t-sql", "bit", "" ]
In informal conversations with our customer service department, they have expressed dissatisfaction with our web-based CSA (customer service application). In a call center, calls per hour are critical, and lots of time is wasted mousing around, clicking buttons, selecting values in dropdown lists, etc. What the director of customer service has wistfully asked for is a return to the good old days of keyboard-driven applications with very little visual detail, just what's necessary to present data to the CSR and process the call. I can't help but be reminded of the greenscreen apps we all used to use (and the more seasoned among us used to make). Not only would such an application be more productive, but healthier for the reps to use, as they must be risking injury doing data entry through a web app all day. I'd like to keep the convenience of browser-based deployment and preserve our existing investment in the Microsoft stack, but how can I deliver this keyboard-driven [ultra-simple greenscreen concept](http://www.coboloncogs.org/HOME.HTM) to the web? Good answers will link to libraries, other web applications with a similar style, and best practices for organizing and prioritizing keyboard shortcut data (not how to add them, but how to store and maintain the shortcuts and automatically resolve conflicts, etc.). **EDIT: accepted answers will not be mini-lectures on how to do UI on the web. I do not want any links, buttons or anything to click on whatsoever.** **EDIT2: this application has 500 users, spread out in call centers around North America. I cannot retrain them all to use the TAB key**
As I had to use some of those apps over time, I will give my feedback as a user, FWIW, and maybe it helps you to help your users :-) Sorry it's a bit long, but the topic is rather close to my heart - as I had to prototype the "improved" interface for such a system myself (which, according to our calculations, saves *very* nontrivial amounts of money and avoids user dissatisfaction) and then lead the team that implemented it. There is one common issue that I noticed with quite a few CRMs: there are 20+ fields on the screen, of which typically one uses 4-5 for performing 90% of operations. But one needs to click through the unnecessary fields anyway. I might be wrong with this assumption, of course (as in my case there was a wide variety of users with different functions using the system). But do try to sit down with the users and see how they are using the application, and see if you can optimize something UI-wise - or, if it's really a matter of not knowing how to use "TAB" (and they *really* need to use each and every one of those 20 fields each time) - you will be able to coach a few of them and check whether this is sufficient for them - and then roll out the training for the entire organization. Ensure you have intuitive hotkey support, and that if a list contains 2000 items, the users do not have to scroll it manually to find the right one, but rather can use FF's feature to select the item by typing the start of its text. You might learn a lot by looking at the usage patterns of the application and then optimizing the UI accordingly. If you have multiple organizational functions that use the system - then the "ideal UI" for each of them might be different, so the question of which to implement, and if, becomes a business decision.
There are also some other little details that matter for the users - sometimes what you'd thought would be the main input field for them in reality is not - and they have an empty textarea eating up half of the screen, while they have to enter the really important data into a small text field somewhere in the corner. Or that in their screen resolution they need the horizontal scrolling (or, scrolling at all). Again, sitting down with the users and observing should reveal this. One more issue: "Too fast developer hardware" phenomenon: A lot of the web developers tend to use large displays with high resolution, showing the output of a very powerful PCs. When the result is shown on the CSR's laptop screen at 1024x768 of a year-old laptop, the layout looks quite different from what was anticipated, as well as the rendering performance. Tune, tune, tune. And, finally - if your organization is geographically disperse, *always* test with the longest-latency/smallest bandwidth link equivalent. These issues are not seen when doing the testing locally, but add a lot of annoyance when using the system over the WAN. In short - try to use the worst-case scenario when doing any testing/development of your application - then this will become annoying to you and you will optimize its use - so then the users that are in better situation will jump in joy over the apps performance. If you are in for the "green screen app" - then maybe for the power users provide a single long text input field where they could type all the information in the CLI-type fashion and just hit "submit" **or** the ENTER key (though this design decision is not something to be taken lightly as it is a lot of work). But everyone needs to realize that "green-screen" applications have a rather steep learning curve - this is another factor to consider from the business point of view, along with the attrition rate, etc. 
Ask the boss how long the typical agent stays in the same position, and how productivity would be affected if they needed a 3-month term to come up to full speed. :) There's a balance that is not decided by the programmers alone, nor by management alone; it requires a joint effort. And finally, a side note in case you have "power users": you might want to take a look at [conkeror](http://conkeror.org/) as a browser - though fairly slow in itself, it looks quite flexible in what it can offer from a keyboard-only control perspective.
I make web-based CSR apps. What your manager is forgetting is that the application is now MUCH more complex. We are asking more from our reps than we did 15 years ago. We collect more information and record more data than before. Instead of a "greenscreen" application, you should focus on making the web application behave better. For example, don't use a dropdown for year when it can be an input field. Make sure the tab order is correct and sane; you can even put little numbers next to each field grouping to indicate tab order. Assign different screens/tabs to F keys and denote them on the screen. You should be able to use your web app without a mouse at all, with no loss of productivity, if done correctly. Leverage AJAX so a round trip to the server doesn't change the focus of the cursor. On a CSR app you often have several defaults; assign each default a button and allow the CSR to push one button to get the default they want. This will reduce the amount of clicking and mousing around. **Also very important:** you need to sit with the CSRs and watch them for a while to get a feel for how they use the app. If you haven't done this, you are probably overlooking simple changes that would greatly enhance their productivity.
How can I make a "greenscreen" web app?
[ "", "c#", "user-interface", "hotkeys", "" ]
I have developed a windows service which reads data from a database, the database is populated via a ASP.net MVC application. I have a requirement to make the service re-load the data in memory by issuing a select query to the database. This re-load will be triggered by the web app. I have thought of a few ways to accomplish this e.g. Remoting, MSMQ, or simply making the service listen on a socket for the reload command. I am just looking for suggestions as to what would be the best approach to this.
How reliable does the notification have to be? If a notification is lost (let's say the communication pipe has a hiccup in a router and drops the socket), will the world come to an end, or is it business as usual? If the service is down, do notifications from the web site need to be queued up for when it starts up, or can they be safely dropped? The more reliable you need it to be, the more you have to go toward a queued solution (MSMQ). If reliability is not an issue, then you can choose from the myriad of non-queued solutions (remoting, TCP, UDP broadcast, HTTP call, etc.). Do you care at all about security? Do you fear an attacker may ping your 'refresh' to death, causing at least a DoS if not worse? Do you want to authenticate the web site making the 'refresh' call? Do you need privacy for the notifications (i.e. encryption)? UDP is more difficult to secure (no session). Does the solution have to allow for easy deployment, configuration and management in the field (i.e. is it a standalone, packaged product), or is it a one-time deployment that can be fixed 'just-in-time' if something changes? Without knowing the details of all these factors, it is difficult to say 'use X'. At least one thing is sure: remoting is sort of obsolete by now. My recommendation would be to use WCF, because of the ease of changing bindings on the fly, so you can test various configurations (TCP, net pipe, HTTP) without any code change. BTW, have you considered using [Query Notifications](http://msdn.microsoft.com/en-us/library/ms130764.aspx) to detect data changes, instead of active notifications from the web site? I reckon this is a shot in the dark, but equivalent active cache support exists on many databases.
Simply host a WCF service inside the Windows Service. You can use `netTcpBinding` for the binding, which will use binary over TCP/IP. This will be much simpler than sockets, yet easier to develop and maintain.
Communication between two separate applications
[ "", "c#", "sockets", "remoting", "msmq", "rpc", "" ]
What's the difference between "abstract" and "generic" code in the case of Java? Do they both mean the same thing?
Abstract and Generics are completely different things in Java syntax and semantics. abstract is a keyword, and indicates that a class does not contain a complete implementation, so cannot be instantiated. Example: ``` abstract class MyClass { public abstract void myMethod(); } ``` MyClass contains a method definition 'public abstract void myMethod()', but does not specify an implementation - an implementation must be provided by a subclass, usually referred to as a concrete subclass, so an abstract class defines an interface, perhaps with some implementation detail. The use of generics indicates that aspects of a class can be parameterized. I find the easiest to understand example is in the Java Collections API. For example `List<String>` can be read as 'List of objects of type String'. `List<Integer>` is the same List interface, but only accepts objects of type Integer. In the Collections API, it provides type-safety for collections that otherwise require boilerplate to check types and cast appropriately.
> **Abstract** - thought of apart from concrete realities, specific objects, or actual instances.

In Java you find the word abstract in class and method definitions. It implies that the class cannot be instantiated (it can only be used as a super-class), or that a method must be overridden by a sub-class. An example of this is an Animal class: an Animal is too ambiguous to create an instance of, but animals share common attributes/functionality which should be defined in the Animal class. ``` public abstract class Animal{ protected int weight; public void feed(int calories){ weight += calories; } public abstract void move(); // All animals move, but they do not all move the same way // So, put the move implementation in the sub-class } public class Cow extends Animal{ public void move(){ // Some code to make the cow move ... } } public class Application{ public static void main(String [] args){ //Animal animal = new Animal() <- this is not allowed Animal cow = new Cow(); // This is ok. } } ``` > **Generic** - of, applicable to, or referring to all the members of a > genus, class, group, or kind; general. The term generic, or generics, is used in Java when explicitly declaring what type of objects a container object will hold. Take an ArrayList for example: we can put any object we want into a raw ArrayList, but this can easily lead to bugs (you might accidentally put a String into an ArrayList that should hold only ints). Generics were created in Java so that we can explicitly tell the compiler that we only want Integers in our ArrayList (with generics the compiler will throw an error when you try to put a String into your Integer ArrayList). ``` public class Application{ public static void main(String [] args){ ArrayList noGenerics = new ArrayList(); ArrayList<Integer> generics = new ArrayList<Integer>(); noGenerics.add(1); noGenerics.add("Hello"); // not what we want, but no error is thrown. generics.add(1); generics.add("Hello"); // compile-time error in this case int total = 0; for (Object obj : noGenerics) total += (Integer) obj; // We will get a runtime ClassCastException in the for loop when we // come to the String; this runtime error is avoided with generics. } } ```
Difference between Abstract and Generic code
[ "", "java", "generics", "abstract", "" ]
I've been noticing static classes getting a lot of bad rep on SO in regards to being used to store global information. (And global variables being scorned upon in general.) I'd just like to know what a good alternative is for my example below... I'm developing a WPF app, and many views of the data retrieved from my db are filtered based on the ID of the currently logged-in user. Similarly, certain points in my app should only be accessible to users who are deemed 'admins'. I'm currently storing a **loggedInUserId** and an **isAdmin** bool in a static class. Various parts of my app need this info and I'm wondering why it's not ideal in this case, and what the alternatives are. It seems very convenient to get up and running. The only thing I can think of as an alternative is to use an IoC container to inject a Singleton instance into classes which need this global information; the classes could then talk to this through its interface. However, is this overkill / leading me into analysis paralysis? Thanks in advance for any insight. --- **Update** So I'm leaning towards dependency injection via IoC, as it would lend itself better to testability: I could swap out the service that provides "global" info with a mock if needed. I suppose what remains is whether or not the injected object should be a singleton or static. :-) Will prob pick Mark's answer although waiting to see if there's any more discussion. I don't think there's a right way as such. I'm just interested to see some discussion which would enlighten me, as there seems to be a lot of "this is bad" "that is bad" statements on some similar questions without any constructive alternatives. --- **Update #2** So I picked Robert's answer seeing as it is a great alternative (I suppose alternative is a weird word, probably the One True Way seeing as it is built into the framework). It's not forcing me to create a static class/singleton (although it is thread static).
The only thing that still makes me curious is how this would have been tackled if the "global" data I had to store had nothing to do with User Authentication.
Forget Singletons and static data. That pattern of access is going to fail you at some point. Create your own custom IPrincipal and replace Thread.CurrentPrincipal with it at a point where login is appropriate. You typically keep the reference to the current IIdentity. In your routine where the user logs on, e.g. once you have verified their credentials, attach your custom principal to the Thread. ``` IIdentity currentIdentity = System.Threading.Thread.CurrentPrincipal.Identity; System.Threading.Thread.CurrentPrincipal = new MyAppUser(1234,false,currentIdentity); ``` in ASP.NET you would also set the `HttpContext.Current.User` at the same time ``` public class MyAppUser : IPrincipal { private IIdentity _identity; public int UserId { get; private set; } public bool IsAdmin { get; private set; } // perhaps use IsInRole public MyAppUser(int userId, bool isAdmin, IIdentity iIdentity) { if( iIdentity == null ) throw new ArgumentNullException("iIdentity"); UserId = userId; IsAdmin = isAdmin; _identity = iIdentity; } #region IPrincipal Members public System.Security.Principal.IIdentity Identity { get { return _identity; } } // typically this stores a list of roles, // but this conforms with the OP question public bool IsInRole(string role) { if( "Admin".Equals(role) ) return IsAdmin; throw new ArgumentException("Role " + role + " is not supported"); } #endregion } ``` This is the preferred way to do it, and it's in the framework for a reason. This way you can get at the user in a standard way. We also do things like add properties if the user is anonymous (unknown) to support scenarios of mixed anonymous/logged-in authentication. Additionally: * you can still use DI (Dependency Injection) by injecting the membership service that retrieves / checks credentials. * you can use the Repository pattern to also gain access to the current MyAppUser (although arguably it's just making the cast to MyAppUser for you, there can be benefits to this)
There are many other answers here on SO that explains why statics (including Singleton) is bad for you, so I will not go into details (although I wholeheartedly second those sentiments). As a general rule, DI is the way to go. You can then inject a service that can tell you anything you need to know about the environment. However, since you are dealing with user information, Thread.CurrentPrincipal may be a viable alternative (although it *is* Thread Static). For convenience, you can [wrap a strongly typed User class around it](http://blogs.msdn.com/ploeh/archive/2007/08/20/UserContext.aspx).
C# : So if a static class is bad practice for storing global state info, what's a good alternative that offers the same convenience?
[ "", "c#", ".net", "class", "static", "global-variables", "" ]
I have a class that is generated by some tool, therefore I can't change it. The generated class is very simple (no interface, no virtual methods): ``` class GeneratedFoo { public void Write(string p) { /* do something */ } } ``` In the C# project, we want to provide a way so that we can plug in a different implementation of MyFoo. So I'm thinking to make MyFoo derived from GeneratedFoo ``` class MyFoo : GeneratedFoo { public new void Write(string p) { /* do different things */ } } ``` Then I have a CreateFoo method that will either return an instance of GeneratedFoo or MyFoo class. However it always calls the method in GeneratedFoo. ``` GeneratedFoo foo = CreateFoo(); // if this returns MyFoo, foo.Write("1"); // it still calls GeneratedFoo.Write ``` This is expected since it is not a virtual method. But I'm wondering if there is a way (a hack maybe) to make it call the derived method. Thanks, Ian
Without being able to make the method virtual, no. A non-virtual method is statically linked at compile time and can't be changed.
Adam gave you an answer (correct one). Now it's time for hack you were asking for :) ``` class BaseFoo { public void Write() { Console.WriteLine("Hello simple world"); } } class DerFoo : BaseFoo { public void Write() { Console.WriteLine("Hello complicated world"); } } public static void Main() { BaseFoo bs = new DerFoo(); bs.Write(); bs.GetType().GetMethod("Write").Invoke(bs, null); } ``` Prints out: ``` Hello simple world Hello complicated world ```
How to force call a C# derived method
[ "", "c#", "class", "inheritance", "" ]
I am programming under C++, MFC, Windows. I want to delete a folder into the recycle bin. How can I do this? ``` CString filePath = directorytoBeDeletePath; TCHAR ToBuf[MAX_PATH + 10]; TCHAR FromBuf[MAX_PATH + 10]; ZeroMemory(ToBuf, sizeof(ToBuf)); ZeroMemory(FromBuf, sizeof(FromBuf)); lstrcpy(FromBuf, filePath); SHFILEOPSTRUCT FileOp; FileOp.hwnd = NULL; FileOp.wFunc=FO_DELETE; FileOp.pFrom=FromBuf; FileOp.pTo = NULL; FileOp.fFlags=FOF_ALLOWUNDO|FOF_NOCONFIRMATION; FileOp.hNameMappings=NULL; int bRet=SHFileOperation(&FileOp); ``` Is anything wrong with the code above? It always fails. I found the problem: filePath should be "c:\abc", not "c:\abc\"
The return value from SHFileOperation is an int, and should specify the error code. What do you get?
I know it is not the right way, but if you can't find a solution you can try this: download nircmd.exe (or another exe that can empty the recycle bin), then call it with system("nircmd.exe emptybin")
How to delete folder into recycle bin
[ "", "c++", "windows", "directory", "recycle-bin", "" ]
I'm currently working with jQuery and using the datepicker. I have two input fields: Start Date and End Date. When each is clicked, the datepicker pops out. For the "End Date" field, I would like the jQuery datepicker to highlight the same day. For example, if a user picks 8/12/09 as their start date, then for the End Date they cannot pick anything BEFORE 8/12/09. Here's what I currently have: ``` $("#datepicker").datepicker({minDate: +5, maxDate: '+1M +10D', changeMonth: true}); var startDate = $("#datepicker").val(); var the_date = new Date(startDate); $("#datepicker2").datepicker({ minDate: the_date }); ``` And, unfortunately, this does not work. I have found that if I "hard-code" the values in, like: ``` $("#datepicker2").datepicker({ minDate: new Date("08/12/2009") }); ``` it will work fine. Any ideas on how to pass the "start date" to the "end date" picker? Thanks!
This can't work. You need a callback function from the datepicker for the "picking" event; before that event fires, your startDate will be empty. Try this: ``` $("#datepicker").datepicker({ minDate: +5, maxDate: '+1M +10D', changeMonth: true, onSelect: function(dateText, inst){ var the_date = new Date(dateText); $("#datepicker2").datepicker('option', 'minDate', the_date); } }); $("#datepicker2").datepicker(); // add options if you want ```
A start date / end date datepicker example: <http://rowsandcolumns.blogspot.com/2010/07/jquery-start-dateend-date-datepicker.html>
jQuery DatePicker - 2 Fields
[ "", "php", "jquery", "jquery-ui", "datepicker", "" ]
This doesn't seem to be working: ``` <select id="mySel" onchange="alert('foo')"> <option value="a">a</option> <option value="b">b</option> </select> <script> dojo.byId('mySel').value = 'b'; // select changes, but nothing is alerted </script> ``` (I'm using dojo, but that doesn't really matter.)
The 'onchange' name is a little misleading unless you understand that a change *event* and a value being changed aren't the same thing. A change event occurs when the user changes the value in the browser. I believe you can fire the event manually by calling `dojo.byId('mySel').onchange()` after you programmatically change the value, though. (You might need to actually define a function that calls `alert`, though. I haven't done this myself.)
For anyone looking to trigger the `change` event using javascript. ``` var evObj = document.createEvent("HTMLEvents"); evObj.initEvent("change", true, true); var elem = document.getElementById('some-id'); elem.dispatchEvent(evObj); ```
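One detail worth noting: assigning to `.value` never dispatches `change` by itself; the event has to be dispatched explicitly, as the snippets above do. Modern engines also expose the standard `Event` constructor, which is simpler than `createEvent`/`initEvent`. A DOM-free sketch of the dispatch mechanics using a bare `EventTarget` (available in browsers and Node 15+; the target and listener here are illustrative, not from the original code):

```javascript
// EventTarget/Event are the same primitives DOM elements use for dispatch.
const target = new EventTarget();
let fired = 0;
target.addEventListener("change", () => { fired++; });

// Programmatically "changing a value" dispatches nothing by itself;
// the change event must be fired explicitly:
target.dispatchEvent(new Event("change"));
console.log(fired); // 1
```

On a real `<select>`, the modern equivalent of the answer above is `selectEl.dispatchEvent(new Event("change"))` after setting `selectEl.value`.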
How do I programmatically change a select so that it fires its onchange behavior?
[ "", "javascript", "select", "onchange", "" ]
I have some legacy Web Services written in C# ASP.NET. There is a specific object in a library used by the Web Service that I need to inspect. Setting a breakpoint in the web service doesn't do anything. This is made harder by the fact that the code is so horrendous that the entry point to the Web Service callout is not obvious.
Here are some reasons your breakpoint might not be working: 1. You're attaching the debugger to the wrong process. 2. The PDB for your assembly does not match the assembly (Modules window shows PDB load status). Perhaps the PDB is old.
You might try putting a line in where you want it to stop using System.Diagnostics.Debugger.Break() or System.Diagnostics.Debugger.Launch().
Debugging C# ASP.NET Web Services
[ "", "c#", "asp.net", "web-services", "debugging", "" ]
I am sending a large string from Delphi 5 to a C# web service, and I'm having lots of trouble with Pound (£) signs. I URLEncode the string from the Delphi side (which seems to convert them to '%A3'). When it reaches the C# web services it appears as '�'. I have tried changing the encoding of the string on the C# side by using a StreamReader (shown below), but the best I can get it to do is to change to to a question mark (?). ``` MemoryStream mr = new MemoryStream(System.Text.Encoding.Default.GetBytes(myString)); StreamReader sr = new StreamReader(mr, System.Text.Encoding.Default); string s = sr.ReadToEnd(); ``` How can I get the £ signs to be interpreted correctly? Please help! (Further info requested) The web service signature is: ``` [WebMethod] public string ReadMyString(string PostedString) ``` The Delphi 5 code uses third party components/code that we've been using successfully for years, but this is the first time we've tried talking directly to C#. An outline of the code is shown below: ``` tmp_Str := URLEncode(myBigString); tmp_Str := WinInetPostData(myURL, tmp_Str); ``` Between these two lines I have confirmed that the £ signs have been correctly converted to '%A3'.
Based on what you wrote in [your own answer](https://stackoverflow.com/questions/1233027/sending-pound-signs-from-delphi-to-c-web-service/1234091#1234091), it looks like the problem is in how the client side is encoding the string, not in how the server is interpreting it (although the server needs to cooperate no matter what encoding you use). You're evidently expecting it to be encoded as UTF-8 (that's the default for `StreamReader` if you don't specify anything else), but I wouldn't be surprised if the NetMasters library you're using doesn't even know about UTF-8 or any other form of Unicode. Delphi 5 can handle Unicode just fine via its `WideString` type, but it lacks a lot of support utility functions. If you want to keep your code with NetMasters, then the minimal change for you is to introduce a Unicode-enabled library, such as the *JclUnicode* unit from the free [JCL](http://jcl.delphi-jedi.org/). There you can find a `Utf8Encode` function that will receive a `WideString` and return an `AnsiString`, which is then suitable for passing to your existing URL-encoding function. Better would be to get rid of the NM code altogether. The free [Indy library](http://indyproject.org/) has functions for UTF-8-encoding and URL-encoding, as well as all your other Internet-related tasks. If you're not using Unicode on the client side, then there's no reason to expect "£" to ever be encoded as the two-byte sequence `c2 a3`. That's the UTF-8-encoded form of U+00a3, the code point for the pound character. If you're *not* using Unicode on the client, then you'll have to find out what code page you *are* using. Then, specify that encoding on the server when you create the new `StreamReader`.
Got it! The URLEncode function in Delphi (which uses a third-party component called NMURL) is encoding £ as '%A3', when it should in fact be '%C2%A3'. I did a manual replace on the Delphi side to correct it and then it requires no manipulation at all on the C# side. Thanks for all your suggestions. That'll teach me to put my faith in old components!
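The byte-level difference is easy to verify. This Python 3 sketch (not part of the original exchange; it just illustrates the encodings involved) shows why '%A3' is the Latin-1 percent-encoding of £, while a UTF-8 consumer expects '%C2%A3':

```python
from urllib.parse import quote, unquote

# '£' is U+00A3; its percent-encoded form depends on the byte encoding chosen.
print(quote('£', encoding='latin-1'))  # %A3    -- what the NMURL component produced
print(quote('£', encoding='utf-8'))    # %C2%A3 -- what a UTF-8 consumer expects

# Decoding %A3 as UTF-8 fails (the lone byte 0xA3 is not valid UTF-8),
# which is why the server side displayed '�' or '?'.
print(unquote('%A3', encoding='utf-8', errors='replace'))  # the U+FFFD replacement char
print(unquote('%C2%A3', encoding='utf-8'))                 # £
```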
Sending pound signs from Delphi to C# web service
[ "", "c#", "web-services", "delphi", "encoding", "asmx", "" ]
I am looking for a way to generate a graph with multiple sets of data on the X-axis, each of which is divided into multiple sets of multiple sets. I basically want to take [this graph](http://gdgraph.com/samples/sample1A.html) and place similar graphs side by side with it. I am trying to build a graph of the duration (Y-axis) of the same jobs (0-3) with different configurations (0-1) on multiple servers (each group containing the same 8 jobs). Hopefully the following diagram illustrates what I am trying to accomplish (smaller groupings are separated by pipes, larger groupings by double pipes):

```
|| 0 1 | 0 1 | 0 1 | 0 1 || 0 1 | 0 1 | 0 1 | 0 1 || 0 1 | 0 1 | 0 1 | 0 1 ||
||  0  |  1  |  2  |  3  ||  0  |  1  |  2  |  3  ||  0  |  1  |  2  |  3  ||
||       Server 1        ||       Server 2        ||       Server 3        ||
```

Is this possible with either the GD::Graph Perl module or the matplotlib Python module? I can't find examples or documentation on this subject for either.
Here's some Python code that will produce what you're looking for. (The example uses 3 configurations rather than 2 to make sure the code is fairly general.)

```
import matplotlib.pyplot as plt
import random

nconfigs, njobs, nservers = 3, 4, 4
width = .9/(nconfigs*njobs)
job_colors = [(0,0,1), (0,1,0), (1,0,0), (1,0,1)]

def dim(color, fraction=.5):
    return tuple([fraction*channel for channel in color])

plt.figure()
x = 0
for iserver in range(nservers):
    for ijob in range(njobs):
        for iconfig in range(nconfigs):
            color = dim(job_colors[ijob], (iconfig+2.)/(nconfigs+1))
            plt.bar(x, 1.+random.random(), width, color=color)
            x += width
    x += .1
plt.show()
```

This code is probably fairly transparent. The odd term `(iconfig+2.)/(nconfigs+1)` is just to dim the colors for the different configurations, but keep them bright enough so the colors can be distinguished. The output looks like: [![alt text](https://i.stack.imgur.com/JDet9.png)](https://i.stack.imgur.com/JDet9.png)
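For labelling the server groups (which the question's diagram asks for), the x-coordinate bookkeeping in that loop can be pulled into a helper. This is a sketch, not part of the original answer, and `bar_positions` is a name invented here; it mirrors the same nested-loop layout and returns the centre of each server's group of bars:

```python
def bar_positions(nconfigs, njobs, nservers, gap=0.1):
    """Return the x position of every bar, plus the centre of each
    server group -- suitable for use as x-tick positions."""
    width = 0.9 / (nconfigs * njobs)
    xs, centres = [], []
    x = 0.0
    for _ in range(nservers):
        start = x
        for _ in range(njobs * nconfigs):
            xs.append(x)
            x += width
        # midpoint between the first and last bar of this server's group
        centres.append((start + x - width) / 2)
        x += gap
    return xs, centres

xs, centres = bar_positions(3, 4, 4)
print(len(xs), len(centres))  # 48 4
```

With matplotlib this would pair with something like `plt.xticks(centres, ['Server %d' % (i+1) for i in range(nservers)])` — again an assumption about how you want the labels, not code from the original answer.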
Recently, I saw a graph that I think does what you want using [protovis](http://vis.stanford.edu/protovis/ex/antibiotics-burtin.html) I have no experience with the program, but the graph was enlightening and I think would give you want you want.
Generating a graph with multiple (sets of multiple sets of multiple) X-axis data sets
[ "", "python", "perl", "matplotlib", "graphing", "" ]
I came across an interesting *bug* with LINQ to SQL. Take a look at the code below, which is loosely translated from a LINQ to SQL query in a search engine I'm writing. The goal of the query is to find any groups which have the IDs "Joe", "Jeff", "Jim" in consecutive order. Pay careful attention to the variables named localKeyword and localSequence. If you delete the declarations of these *seemingly* useless local variables and replace their uses with the variables they are proxying, you will find the query no longer works. I'm still a beginner with LINQ to SQL, but it looks like it is capturing all the locals by reference, so the query only sees the values of the local variables at the moment the query is evaluated. My LINQ to SQL query ended up looking like:

```
SELECT *
FROM INDEX ONE, INDEX TWO, INDEX THREE
WHERE ONE.ID = 'Jim'
  and TWO.ID = 'Jim' and TWO.SEQUENCE = ONE.SEQUENCE + 2
  and THREE.ID = 'Jim' and THREE.SEQUENCE = ONE.SEQUENCE + 2
  and ONE.GROUP = TWO.GROUP and ONE.GROUP = THREE.GROUP
```

The query is of course paraphrased. What exactly is happening; is this a bug? I am asking to perhaps better understand why this is happening. You should find the code compiles in Visual Studio 2008.
```
using System;
using System.Collections.Generic;
using System.Text;
using System.Linq;

namespace BreakLINQ
{
    class Program
    {
        public struct DataForTest
        {
            private int _sequence;
            private string _ID;
            private string _group;

            public int Sequence
            {
                get { return _sequence; }
                set { _sequence = value; }
            }

            public string ID
            {
                get { return _ID; }
                set { _ID = value; }
            }

            public string Group
            {
                get { return _group; }
                set { _group = value; }
            }
        }

        static void Main(string[] args)
        {
            List<DataForTest> elements = new List<DataForTest>
            {
                new DataForTest() { Sequence = 0, ID = "John", Group = "Bored" },
                new DataForTest() { Sequence = 1, ID = "Joe",  Group = "Bored" },
                new DataForTest() { Sequence = 2, ID = "Jeff", Group = "Bored" },
                new DataForTest() { Sequence = 3, ID = "Jim",  Group = "Bored" },
                new DataForTest() { Sequence = 1, ID = "Jim",  Group = "Happy" },
                new DataForTest() { Sequence = 2, ID = "Jack", Group = "Happy" },
                new DataForTest() { Sequence = 3, ID = "Joe",  Group = "Happy" },
                new DataForTest() { Sequence = 1, ID = "John", Group = "Sad" },
                new DataForTest() { Sequence = 2, ID = "Jeff", Group = "Sad" },
                new DataForTest() { Sequence = 3, ID = "Jack", Group = "Sad" }
            };

            string[] order = new string[] { "Joe", "Jeff", "Jim" };
            int sequenceID = 0;

            var query = from item in elements select item;

            foreach (string keyword in order)
            {
                if (sequenceID == 0)
                {
                    string localKeyword = keyword;
                    query = from item in query
                            where item.ID == localKeyword
                            select item;
                }
                else
                {
                    string localKeyword = keyword;
                    int localSequence = sequenceID;
                    query = from item in query
                            where (from secondItem in elements
                                   where secondItem.Sequence == item.Sequence + localSequence
                                         && secondItem.ID == localKeyword
                                   select secondItem.Group).Contains(item.Group)
                            select item;
                }
                sequenceID++;
            }
        }
    }
}
```

The value of the query after the code completes should have the value {"Joe", "Bored", 1}.
The reason this fails without the 'proxying' variables is that the variables are *captured* by the expressions in the LINQ query. Without the proxies, each iteration of the loop references the same two variables (keyword, sequenceID), and when the query is finally evaluated and executed, the value substituted for each of these references is identical; namely, whatever value is present in those variables when the loop terminates (which is when you want us to evaluate 'query'). The query behaves as expected *with* the proxies because the captured variables are uniquely declared per iteration of the loop; subsequent iterations do not modify the captured variables, because they are no longer in scope. The proxy variables are not useless at all. Furthermore, this behavior is by design; let me see if I can find a good reference link...
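The same capture rule is easy to demonstrate outside of C#. Here's a Python sketch of the identical pitfall (not from the original thread), where a per-iteration default argument plays the role of the localKeyword/localSequence proxies:

```python
# Without a proxy: each lambda captures the variable i itself, so every
# closure sees i's final value once the loop has finished.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2]

# With a proxy: the default argument is evaluated once per iteration, giving
# each closure its own binding -- the same fix as localKeyword in the C# code.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])  # [0, 1, 2]
```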
See also [On lambdas, capture, and mutability](http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!689.entry)
LINQ to SQL and object lifetimes, references vs. values
[ "", "c#", "linq", "linq-to-sql", "" ]