| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Currently I'm building my Java Web Application on Google AppEngine (GAE), but due to a lot of limitations they have I'm afraid that I'm going to have to switch from GAE to my own server which is running Glassfish (or I can setup any other server if needed). Also I'm planning to run Oracle or MySql databases. What do I need to do in order to switch from GAE to my server? Do I need to rewrite my code? Should I continue using Datanucleus or switch to something else? Anything else? | We won't be able to give very good advice without knowing how you wrote your app. Did you create a data access layer that separates your datastore access from your business logic? Or do you pass app engine specific objects all over the place? Are you using the gae user service? or memcache?
The first thing you should do is look at all your import statements. Anytime you see a com.google.something in there, you know you need to change that. You didn't give much detail about how you wrote your app, but if you are asking about datanucleus you probably were using JDO or JPA, which means you may be able to reuse most of your data layer. You might have a bunch of Key fields which you'll have to change, and maybe a few gae specific annotations. You'll probably have to double check how you handle transactions, as that is likely to be different in a SQL database, which don't use entity groups like GAE does. | Why not follow the info given in the original Google campfire ? There was a presentation by IBM on how to run an AppEngine app using DB2. They simply dropped the datanucleus-rdbms jar in the CLASSPATH, changed the connection URL etc, and ran it. Like in this PDF
<http://download.boulder.ibm.com/ibmdl/pub/software/dw/wes/hipods/GAE_Java_Interoperability.pdf>
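The jar-swap described above boils down to repointing the persistence configuration at the new datastore. As a hedged illustration only (standard JPA property names; the exact file from the IBM presentation may differ), a `persistence.xml` fragment for MySQL might look like:

```xml
<persistence-unit name="myapp">
  <properties>
    <!-- Replace the GAE datastore connection with a plain JDBC one -->
    <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
    <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/myapp"/>
    <property name="javax.persistence.jdbc.user" value="user"/>
    <property name="javax.persistence.jdbc.password" value="secret"/>
  </properties>
</persistence-unit>
```

With DataNucleus on both sides, the entity classes and most JDO/JPA code can stay as they are; it is the Key fields, GAE-specific annotations, and transaction boundaries that need review.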
--Andy (DataNucleus) | Switch from Google AppEngine to another server | [
"",
"java",
"google-app-engine",
"porting",
"code-migration",
""
] |
Has anyone got any experience with Web Service Extensions? I spent time trying to make a web service extension from the MS examples.
I have an .net 3.5 web service client, built by adding a reference to the WSDL, via the VS IDE "Project > Add Service Reference". This built my web service client, and all works OK.
I need to intercept the request and response body for my web service client. I have found lots of references to Web Service Extensions, but am having an attack of the tired, and just can't get my extensions to fire.
I've used the MS example from here "How to implement a SOAP extension" ( <http://msdn.microsoft.com/en-us/library/7w06t139.aspx>) , which builds a logger for the request / response streams.
The related MS article "Soap Message Modification" (<http://msdn.microsoft.com/en-us/library/esw638yk(VS.85).aspx>) shows how to enable the SOAP extension for the web client:
> **Implementing the SOAP Extension**
>
> There are two ways to run a SOAP extension on either a client or server application. First, you can configure the application to run the extension. To configure your SOAP extension to run for all Web methods on all Web services, especially a vroot, edit the `<soapExtensionTypes>` Element section within the Web.config file. The following code shows that the type attribute value must be on one line and include the fully qualified name of the extension, plus the version, culture, and public key token of the signed assembly.
>
> <configuration>
> <system.web>
> <webServices>
> <soapExtensionTypes>
> <add type="Contoso.MySoapExtension, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
> priority="1" group="0"/>
> </soapExtensionTypes>
> </webServices>
> </system.web>
> </configuration>
I've compiled the traceextension into its own class library, and referenced it in the web.config of the web service project like so:
> <add type="TraceExtension, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ef8757fac167b8d8" priority="1" group="High"/>
No joy. Nothing is logged, no breakpoints are hit.
I then removed the referenced class, and dropped the source code into the web service project.
I tried to add a reference to it like so (my namespace is ServiceTest001):
> <add type="ServiceTest001.TraceExtension" group="High" priority="1" />
I used the following thread as a guide to enabling my extension: "getting-raw-soap-data-from-a-web-reference-client-running-in-asp-net" (<http://stackoverflow.com/questions/300674/getting-raw-soap-data-from-a-web-reference-client-running-in-asp-net>).
Still no joy. I then copied the code from the above thread, and still cannot get the extension to fire when I make a SOAP request.
Can anyone point me to a functioning downloadable web service extension demo project, so I can disassemble it and work out what I'm missing? | John is right, you can intercept the messages on the client using a custom client behavior that implements IClientMessageInspector. See [How To: Inspect or Modify Messages on the Client](http://msdn.microsoft.com/en-us/library/ms733786.aspx) on MSDN.
The only thing 'tricky' about it is that if you plan on modifying the message *body* then you will need to create a copy of the original message first. See [Using the Message Class](http://msdn.microsoft.com/en-us/library/ms734675.aspx) for the gooey details. | Chances are you want to get some rest.
You don't ever want to use WSE. WSE is obsolete.
You don't want to be using ASMX Web Services - Microsoft now considers them to be "legacy" technology, and will not be fixing bugs. BTW, WSE is based on ASMX, so what's that make it?
---
You only want to work with Windows Communication Foundation. The WCF Development Center on MSDN is at <http://msdn.microsoft.com/wcf/>.
Have fun, and stay away from the nasty, ancient, obsolete stuff. | Intercept messages in a WCF Client | [
"",
"c#",
"wcf",
"soap",
"wcf-client",
""
] |
I need to make a font underline and blue for a hyperlink in one of my JButtons, but it seems the font class has no obvious way to do this. I can't use attributedtext because I'm not going to be displaying this with Graphics class. Is there anyway I can accomplish this? I just need the title of my JButton to be blue and underlined. | I ended up solving the problem of not being able to underline text by surrounding my string with `<html><u>...</u></html>` tags. | I am too late to reply. But anyways, I am going to post it here. Maybe it would be helpful to someone.
```
import java.awt.Color;
import java.awt.font.TextAttribute;
import java.util.HashMap;
import javax.swing.JButton;

JButton button = new JButton("Label");
HashMap<TextAttribute, Object> textAttrMap = new HashMap<TextAttribute, Object>();
textAttrMap.put(TextAttribute.UNDERLINE, TextAttribute.UNDERLINE_ON);
textAttrMap.put(TextAttribute.FOREGROUND, Color.BLUE);
button.setFont(button.getFont().deriveFont(textAttrMap));
```
Ref: <http://docs.oracle.com/javase/tutorial/2d/text/examples/AttributedText.java> | Changing the color of a font and making it underlined | [
"",
"java",
"colors",
"jbutton",
""
] |
Given a GMarker JS variable, how do I obtain the HTML DOM element that represents it? I need this so I can insert a `<div>` of my own into the map with the correct z-index.
Thanks. | It looks like the Google Maps API doesn't provide a method to return a marker's DOM element.
Do you just want to create your own custom marker? If so, you can create a marker class which extends GOverlay. [MarkerLight](http://gmaps-samples.googlecode.com/svn/trunk/manymarkers/markerlight.js) is a great example of how to accomplish this (and here is the [example page](http://gmaps-samples.googlecode.com/svn/trunk/manymarkers/randommarkers.html)).
If all you need is a custom icon, [here](http://code.google.com/apis/maps/documentation/overlays.html#Custom_Icons) is how to do that. | Sorry to post on such an old question, but I've just come across this myself. The solution I used in Google Maps APIv3 was to copy the "Custom Marker" from [the Google Maps samples](http://gmaps-samples-v3.googlecode.com/svn/trunk/overlayview/custommarker.html "the Google Maps samples") and add a simple method `getDOMElement`, which returns the `div` generated in the Marker's construction.
```
CustomMarker.prototype.getDOMElement = function() {
return this.div_;
}
```
You can then use `marker.getDOMElement().style` to dynamically change the styling of your marker, and the `img` child element of `marker.getDOMElement()` is the icon used, so you can change that dynamically too. | How to obtain the HTML DOM element of a Google Maps marker? | [
"",
"javascript",
"dom",
"google-maps",
""
] |
How to receive HTTP cookies from a web server in PHP? | I'm assuming that you want to interact with cookies from a remote webserver using PHP, rather than simply manipulating cookies from a client browser.
In that case, take a look at the [Zend_Http](http://framework.zend.com/manual/en/zend.http.html) and related classes in the Zend Framework. These let you perform HTTP requests with cookie persistence and a whole lot more besides. | Check the [HTTPRequest](http://il.php.net/manual/en/class.httprequest.php) class and take a look at the [getCookies](http://il.php.net/manual/en/function.httprequest-getcookies.php) and [setCookies](http://il.php.net/manual/en/function.httprequest-setcookies.php) functions. You can also use the `$_COOKIE` superglobal, but it's much more low-level. | How to receive HTTP cookies from a web server in PHP? | [
"",
"php",
"cookies",
""
] |
I know that I need prepared statements because I make more than one call to my database during one script.
I would like to get concrete examples about the [following sentence](https://stackoverflow.com/questions/1245396/to-understand-phps-include-command/1245686#1245686)
> Look at typecasting, validating and sanitizing variables and using PDO with prepared statements.
I know what he mean by validating and sanitizing variables. However, I am not completely sure about prepared statements. How do we prepare statements? By filters, that is by sanitizing? Or by some PDO layer? What is the definition of the layer?
**What do prepared statements mean in the statement?** Please, use concrete examples. | > What do prepared statements mean in
> the statement?
From the [documentation](https://www.php.net/manual/en/function.pg-prepare.php):
This feature allows commands that will be used repeatedly to be parsed and planned just once, rather than each time they are executed.
See [pg\_prepare](https://www.php.net/manual/en/function.pg-prepare.php)
Example from the page linked above:
```
<?php
// Connect to a database named "mary"
$dbconn = pg_connect("dbname=mary");
// Prepare a query for execution
$result = pg_prepare($dbconn, "my_query", 'SELECT * FROM shops WHERE name = $1');
// Execute the prepared query. Note that it is not necessary to escape
// the string "Joe's Widgets" in any way
$result = pg_execute($dbconn, "my_query", array("Joe's Widgets"));
// Execute the same prepared query, this time with a different parameter
$result = pg_execute($dbconn, "my_query", array("Clothes Clothes Clothes"));
?>
```
The [MySQL documentation for Prepared Statements](http://dev.mysql.com/tech-resources/articles/4.1/prepared-statements.html) nicely answers the following questions:
* Why use prepared statements?
* When should you use prepared
statements? | It means it will help you prevent SQL injection attacks by eliminating the need to manually quote the parameters.
Instead of placing a variable into the sql you use a named or question mark marker for which real values will be substituted when the statement is executed.
Definition of [PDO](http://php.net/pdo) from the PHP manual:
'The PHP Data Objects (PDO) extension defines a lightweight, consistent interface for accessing databases in PHP.'
See the php manual on [PDO](http://php.net/pdo) and [PDO::prepare](http://php.net/manual/en/pdo.prepare.php).
An example of a prepared statement with named markers:
```
<?php
$pdo = new PDO('pgsql:dbname=example;user=me;password=pass;host=localhost;port=5432');
$sql = "SELECT username, password
FROM users
WHERE username = :username
AND password = :pass";
$sth = $pdo->prepare($sql);
$sth->execute(array(':username' => $_POST['username'], ':pass' => $_POST['password']));
$result = $sth->fetchAll();
```
An example of a prepared statement with question mark markers:
```
<?php
$pdo = new PDO('pgsql:dbname=example;user=me;password=pass;host=localhost;port=5432');
$sql = "SELECT username, password
FROM users
WHERE username = ?
AND password = ?";
$sth = $pdo->prepare($sql);
$sth->execute(array($_POST['username'], $_POST['password']));
$result = $sth->fetchAll();
``` | How to use prepared statements with Postgres | [
"",
"php",
"postgresql",
"pdo",
""
] |
As a result of my [previous](https://stackoverflow.com/questions/1054697/why-isnt-my-new-operator-called) [questions](https://stackoverflow.com/questions/1131064/transfer-a-boostptrlist-from-a-library-to-a-client) I asked myself: Is it usefull at all to setup a C++ interface for a plugin system? The following points are speaking against it:
* No common ABI between different compilers and their versions, no common layout of the objects in memory
* No direct class export. You have to export factories and destructors. Problems arises if your objects are held by other objects which only `delete` them, for example smart pointers.
* Different implementations of the STL, you can't pass a `std::list<T>` to the plugin
* Different versions of used libraries like Boost
If you restrain yourself to the remaining parts of the C++ language you nearly end up with the "C subset". Are there any points speaking for using C++? How do the Qt-Toolkit solve the mentioned problems?
Remark: I'm referring mostly to the Linux system. Nevertheless I'm interested in solutions on other platforms.
Additional question: What are the problems using a C interface? The memory layout of `struct`s? Which language parts of C should be avoided? | Although this is more about the "how" than the "why", you may be interested in the [(not yet)Boost.Extension](http://boost-extension.redshoelace.com/docs/boost/extension/index.html) library, as well as the [author's blog](http://boost-extension.blogspot.com/) on the topic.
For the "why" part, my 2 (Canadian) cents: It depends on the audience (the plugin writers) and on the richness of the interface between your application and its plugins:
* If the audience is large or heterogeneous, the limitations of a C++ plugin system (keeping the plugin side and the app side in synch with respect to compiler and library versions) gets impractical, and a C interface is more maintainable. If the audience is small, homogeneous, or under your control, these problems are not as significant.
* If the interface is rich (hand-waving on the precise meaning of "rich"), a C interface may get cumbersome to write, and the balance tilts on the C++ side.
However, the first criterion (the audience) is more important, and a C++ interface thus makes sense only if the audience is homogeneous and the interface significantly benefits from the expressiveness gains. | I once built the plugin interface for a system I developed in C++, and it was a big mistake. Feasible, but not practical at all. Today, I'd always make the interface purely in C, and as simple as I can. The benefits of these choices are really significant. And if your plugin writers want a C++ API, you can simply write a C++ wrapper that calls the C interface.
As an added bonus, if your plugin writers want an API in any other language, a C API will always be the easiest to create bindings for. | Why should I setup a plugin interface in c++ instead of c | [
"",
"c++",
"c",
"plugins",
""
] |
Please help me in separating the classes, headers and `main()` in the following program. I tried my best, but there is a problem.
```
#include "stdafx.h"
#include<iostream>
#include<string>
using namespace std;
class player
{
public:
string name;
string type;
void getdata()
{
cout<<"Enter the name of the Player : "<<endl;
cin>>name;
cout<<"Enter the Game he play : "<<endl;
cin>>type;
}
void display()
{
cout<<"The name of the Player is : "<<name<<endl;
cout<<"The game he will play is : "<<type<<endl;
}
};
int main()
{
player sachin;
sachin.getdata();
sachin.display();
system("pause");
return(0);
}
If you want to separate your classes, you should create two files: a .h and a .cpp.
In the header file you place your definitions and declarations, and in the CPP file you implement your methods.
Player.h
```
#ifndef __PLAYER_H_
#define __PLAYER_H_
#include <string>
class Player
{
public:
Player();
~Player();
// Methods
void GetData();
void Display();
private:
std::string Name;
std::string Type;
};
#endif
```
Player.cpp
```
#include "Player.h"
Player::Player(): Name(""),
Type("")
{
}
Player::~Player(){}
void Player::GetData()
{
std::cout << "Enter the name of the Player : " << std::endl;
std::cin >> name;
std::cout << "Enter the Game he play : " << std::endl;
std::cin >> type;
}
void Player::Display()
{
std::cout <<"The name of the Player is : " << name << std::endl;
std::cout <<"The game he will play is : " << type << std::endl;
}
```
Edit:
Class member variables should never be public; write a setter method if you need to modify a member variable. | Do you mean how you would separate that into header and cpp files?
If so in Player.h you'd do the following:
```
#ifndef __PLAYER_H_
#define __PLAYER_H_
#include<string>
using namespace std;
class player
{
protected: // could be private
string name;
string type;
public:
void getdata();
void display();
};
#endif
```
in player.cpp:
```
#include "stdafx.h"
#include "Player.h"
#include<iostream>
void player::getdata()
{
cout<<"Enter the name of the Player : "<<endl;
cin>>name;
cout<<"Enter the Game he play : "<<endl;
cin>>type;
}
void player::display()
{
cout<<"The name of the Player is : "<<name<<endl;
cout<<"The game he will play is : "<<type<<endl;
}
```
And then in main.cpp you'd do the following:
```
#include "stdafx.h"
#include "player.h"
int main()
{
player sachin;
sachin.getdata();
sachin.display();
system("pause");
return(0);
}
```
This would be the ideal way to split out everything into separate header and cpp files :) | How can I separate headers, classes and main functions in C++? | [
"",
"c++",
""
] |
I'm databinding a listbox to an object that contains two arrays of strings. Each listbox item is set to a data template made up of a textbox and a combo box. The first string array is bound to the list, and the second string array is bound to the combo box. Well, at least that's what I'm trying to achieve. The problem is that I can't figure out the binding syntax to set the second array to the combo box. Here's what I have:
The first thing is my class with my two string arrays. Pretty straightforward. Please note that the string array content is there for simplicity.
```
public class JobAssignments
{
public JobAssignments()
{
m_people.Add("John");
m_people.Add("Bill");
m_people.Add("Frank");
m_people.Add("Steve");
m_jobs.Add("Architect");
m_jobs.Add("Teacher");
m_jobs.Add("Carpenter");
m_jobs.Add("Plumber");
}
private List<string> m_people = new List<string>();
public List<string> People { get { return m_people; } set { m_people = value; } }
private List<string> m_jobs = new List<string>();
public List<string> Jobs { get { return m_jobs; } set { m_jobs = value; } }
};
```
In code, I set an instance of this class as the datacontext of this listbox:
```
<ListBox x:Name="listBox"
Grid.Row="0"
HorizontalContentAlignment="Stretch"
DataContext="{Binding}"
ItemsSource="{Binding People}"
ItemTemplate="{StaticResource JobAssignmentDataTemplate}">
</ListBox>
```
With a data template that looks like this:
```
<DataTemplate x:Key="JobAssignmentDataTemplate">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<TextBlock Grid.Column="0"
Text="{Binding}"/>
<ComboBox Grid.Column="2"
SelectedIndex="0"
ItemsSource="{Binding Jobs ???? }"/>
</Grid>
</DataTemplate>
```
What I usually get out of my experiments is a list box of People, but the combo box of each list item is empty.
I can get it to work if I use
```
ItemsSource="{Binding ElementName=listBox, Path=DataContext.Jobs }"/>
```
but I don't want to use ElementName as it hardcodes the source of the array to a specific listbox, which I'd like to avoid.
Trying something like
```
ItemsSource="{Binding RelativeSource={RelativeSource Self}, Path=Parent.Jobs}"/>
```
Doesn't seem to work either as it's looking for Jobs inside the Grid.
A little push in the right direction would help greatly.
Thanks! | When you are inside the DataTemplate, the DataContext is a single People item, so your JobAssignments instance is not there. Try the below:
```
ItemsSource="{Binding ElementName= listBox, Path=Jobs}"/>
```
But I don't recommend the above step; you should refactor your ViewModel logic instead. I guess 'Jobs' could be a static instance so that you can do an x:Static binding:
```
ItemsSource="{Binding Source={x:Static JobAssignments}, Path=Jobs}"
``` | I had roughly the same problem as you except I had a listbox within a tabcontrol. I was able to solve it by using getting the tab control that contained the listbox. Sample code as follows:
```
<TabControl ItemsSource="{Binding Groups, Mode=TwoWay}" SelectedItem="{Binding SelectedGroup, Mode=TwoWay}">
<TabControl.ContentTemplate>
<DataTemplate>
<ListBox ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType=TabControl, AncestorLevel=1}, Path=DataContext.ItemsInGroup, Mode=TwoWay}">
<ListBox.ItemTemplate>
<DataTemplate>
<TextBlock Text="{Binding ItemGroup}" />
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
</DataTemplate>
</TabControl.ContentTemplate>
</TabControl>
``` | ListBox DataTemplate Binding With Two Lists | [
"",
"c#",
"wpf",
"data-binding",
"binding",
""
] |
I am relatively new to programming, as you will soon see...
I have 2 events, which execute the same code. I currently have the following pseudo code for a datagridview:
```
private void dgv_CellEnter(object sender, DataGridViewCellEventArgs e)
{
string abc = "abc";
}
private void dgv_CellClick(object sender, DataGridViewCellEventArgs e)
{
string abc = "abc";
}
```
Is there a way to combine this into one event? Is there a better way to do this? | Why not just have one method and map it to two events?
```
private void dgv_CellEvent(object sender, DataGridViewCellEventArgs e)
{
string abc = "123";
}
// In the event mapping
dgv.CellEnter += dgv_CellEvent;
dgv.CellClick += dgv_CellEvent;
Well, the quick answer is yes. Put the guts of the methods in a method of their own and have each event handler call it. This gives you only one place to update code should it need to change.
There are a hundred different ways to do this, and this is probably the easiest.
So create something like this:
```
protected void MyNewMethod()
{
string abc = "123";
}
```
and then your other methods will just call it like this:
```
private void dgv_CellEnter(object sender, DataGridViewCellEventArgs e)
{
MyNewMethod();
}
private void dgv_CellClick(object sender, DataGridViewCellEventArgs e)
{
MyNewMethod();
}
```
# Option 2
Just call the same method from the markup. You really only need one of those methods, and both events in the markup can call the same one. | C# combine 2 events in one method | [
"",
"c#",
".net",
"winforms",
""
] |
Problem:
A relational database (Postgres) storing timeseries data of various measurement values. Each measurement value can have a specific "measurement type" (e.g. temperature, dissolved oxygen, etc) and can have specific "measurement units" (e.g. Fahrenheit/Celsius/Kelvin, percent/milligrams per liter, etc).
Question:
***Has anyone built a similar database such that dimensional integrity is conserved? Have any suggestions?***
I'm considering building a measurement_type and a measurement_unit table; both of these would have two columns, ID and text. Then I would create foreign keys to these tables in the measured_value table. Text worries me somewhat because there's the possibility of non-unique duplicates (e.g. 'ug/l' vs 'µg/l' for micrograms per liter).
The purpose of this would be so that I can both convert and verify units on queries, or via programming externally. Ideally, I would have the ability later to include strict dimensional analysis (e.g. linking µg/l to the value 'M/V' (mass divided by volume)).
*Is there a more elegant way to accomplish this?* | I produced a database sub-schema for handling units an aeon ago (okay, I exaggerate slightly; it was about 20 years ago, though). Fortunately, it only had to deal with simple mass, length, time dimensions - not temperature, or electric current, or luminosity, etc. Rather less simple was the currency side of the game - there were a myriad different ways of converting between one currency and another depending on date, currency, and period over which conversion rate was valid. That was handled separately from the physical units.
Fundamentally, I created a table 'measures' with an 'id' column, a name for the unit, an abbreviation, and a set of dimension exponents - one each for mass, length, time. This gets populated with names such as 'volume' (length = 3, mass = 0, time = 0), 'density' (length = -3, mass = 1, time = 0) - and the like.
There was a second table of units, which identified a measure and then the actual units used by a particular measurement. For example, there were barrels, and cubic metres, and all sorts of other units of relevance.
There was a third table that defined conversion factors between specific units. This consisted of two units and the multiplicative conversion factor that converted unit 1 to unit 2. The biggest problem here was the dynamic range of the conversion factors. If the conversion from U1 to U2 is 1.234E+10, then the inverse is a rather small number (8.103727714749e-11).
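The three tables described above might be sketched in Postgres DDL roughly as follows (table and column names are hypothetical reconstructions, not the original schema):

```sql
-- One row per dimensional 'measure' (volume, density, ...)
CREATE TABLE measure (
    id       serial PRIMARY KEY,
    name     text NOT NULL,
    len_exp  integer NOT NULL,   -- exponent of length
    mass_exp integer NOT NULL,   -- exponent of mass
    time_exp integer NOT NULL    -- exponent of time
);

-- Concrete units, each tied to exactly one measure
CREATE TABLE unit (
    id         serial PRIMARY KEY,
    measure_id integer NOT NULL REFERENCES measure(id),
    name       text NOT NULL,
    abbrev     text NOT NULL UNIQUE   -- guards against 'ug/l' vs 'µg/l' duplicates
);

-- Multiplicative conversion factors between units of the same measure;
-- that same-measure rule needs a trigger (or a denormalised measure_id)
-- since a plain CHECK cannot reference the other table.
CREATE TABLE unit_conversion (
    from_unit integer NOT NULL REFERENCES unit(id),
    to_unit   integer NOT NULL REFERENCES unit(id),
    factor    double precision NOT NULL,   -- multiply from -> to
    PRIMARY KEY (from_unit, to_unit)
);
```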
The comment from S.Lott about temperatures is interesting - we didn't have to deal with those. A stored procedure would have addressed that - though integrating one stored procedure into the system might have been tricky.
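As an illustration of why temperature is the awkward case: the conversion-factor table above holds a single multiplier, but Fahrenheit/Celsius needs an offset as well, i.e. an affine map rather than a pure scale. A small sketch (not part of the original schema):

```python
def convert(value, scale, offset=0.0):
    """Affine unit conversion: to_value = from_value * scale + offset.

    Purely multiplicative conversions (metres -> feet, litres -> barrels)
    use offset = 0; temperature scales need the offset term, which is why
    a factor-only conversion table cannot express them.
    """
    return value * scale + offset

# Fahrenheit -> Celsius: C = (F - 32) * 5/9, i.e. scale 5/9, offset -160/9
def f_to_c(f):
    return convert(f, 5.0 / 9.0, -160.0 / 9.0)
```

Storing a (factor, offset) pair per conversion row, with offset defaulting to zero, would fold temperature into the same table at the cost of slightly more complex chained conversions.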
The scheme I described allowed most conversions to be described once (including hypothetical units such as furlongs per fortnight, or less hypothetical but equally obscure ones - outside the USA - like acre-feet), and the conversions could be validated (for example, both units in the conversion factor table had to have the same measure). It could be extended to handle most of the other units - though the dimensionless units such as angles (or solid angles) present some interesting problems. There was supporting code that would handle arbitrary conversions - or generate an error when the conversion could not be supported. One reason for this system was that the various international affiliate companies would report their data in their locally convenient units, but the HQ system had to accept the original data and yet present the resulting aggregated data in units that suited the managers - where different managers each had their own idea (based on their national background and length of duty in the HQ) about the best units for their reports. | "Text worries me somewhat because there's the possibility for non-unique duplicates"
Right. So don't use text as a key. Use the ID as a key.
"Is there a more elegant way to accomplish this?"
Not really. It's hard. Temperature is its own problem because temperature is itself an average, and doesn't sum like distance does; plus F-to-C conversion is not a multiply (as it is with every other unit conversion). | dimensional and unit analysis in SQL database | [
"",
"sql",
"postgresql",
"dimensions",
"time-series",
""
] |
I'm adding validation so if a post is in a particular category, it needs certain Custom Fields to be set.
This should be easy hooking `wp_insert_post_data` and `admin_notices`, but there is a redirect that causes the `admin_notices` callback to disappear.
OK - So I created a hack that uses the Session to store my error message across the redirect:
```
function set_post_pending($data, $postarr) {
// If it's not valid...
$error = "You are missing some Custom Fields.";
$_SESSION['admin_notices'] = $error;
$data['post_status'] = 'pending';
return $data;
}
add_filter('wp_insert_post_data', 'set_post_pending',1,2);
function session_admin_notice() {
if($out = $_SESSION['admin_notices']) {
$_SESSION["admin_notices"] = "";
echo $out;
}
return false;
}
add_action('admin_notices', "session_admin_notice");
```
The problem with this solution is that *some*how the session is not available when calling `session_admin_notice`, which has an easy (but *crazy*) solution:
```
function fix_session_bs() {
    // TODO: Why do I have to do this?
    if (!session_id() && isset($_COOKIE["PHPSESSID"])) {
        session_id($_COOKIE["PHPSESSID"]);
        session_start();
    }
}
add_action('admin_init', 'fix_session_bs');
```
The question is: Why do I have to go through all this craziness to throw an error message?
**What am I doing wrong?** | Wordpress doesn't use sessions, and if `register_globals` is on it will clear the `$_SESSION` array.
Wordpress passes its messages along using a `message` integer in the URL; an array of messages is then defined in the relevant `edit-[type]-form.php` file in the `wp-admin` folder. I think you could probably append your own variable to the redirect and then get that in your `admin_notices` hook function. Take a look at the `edit-[type]-form.php` files to get an idea of how this might work. | You can simply do like WordPress do : using [transients](http://codex.wordpress.org/Transients_API "Transients API") like this :
```
function set_post_pending($data, $postarr) {
// If it's not valid...
$error = "You are missing some Custom Fields.";
set_transient( get_current_user_id().'missingfield', $error );
$data['post_status'] = 'pending';
return $data;
}
add_filter('wp_insert_post_data', 'set_post_pending',1,2);
function show_admin_notice() {
if($out = get_transient( get_current_user_id().'missingfield' ) ) {
delete_transient( get_current_user_id().'missingfield' );
echo "<div class=\"error\"><p>$out</p></div>";
}
// return false; // nothing to return here
}
add_action('admin_notices', "session_admin_notice");
```
*PS: avoid `$_SESSION` in WordPress.* | Wordpress: displaying an error message - hook admin_notices fails on wp_insert_post_data or publish_post | [
"",
"php",
"wordpress",
"plugins",
"session",
"hook",
""
] |
Trying to parameterize the value of TOP in my sql statement.
```
SELECT TOP @topparam * from table1
command.Parameters.Add("@topparam",SqlDbType.VarChar, 10).Value = somevalue.ToString();
```
This doesn't seem to work. Anyone have any suggestions?
Just to clarify, I don't want to use stored procedures. | In [SQL Server 2005 and above](https://web.archive.org/web/20150520123828/http://sqlserver2000.databases.aspfaq.com:80/how-do-i-use-a-variable-in-a-top-clause-in-sql-server.html), you can do this:
```
SELECT TOP (@topparam) * from table1
``` | You need to have at least SQL Server 2005. This code works fine in 2005/8 for example ...
```
DECLARE @iNum INT
SET @iNum = 10
SELECT TOP (@iNum) TableColumnID
FROM TableName
```
If you have SQL Server 2000, give this a try ...
```
CREATE PROCEDURE TopNRecords
@intTop INTEGER
AS
SET ROWCOUNT @intTop
SELECT * FROM SomeTable
SET ROWCOUNT 0
GO
``` | C# SQL Top as parameter | [
"",
"sql",
"sql-server",
"t-sql",
"ado.net",
""
] |
Say I have this dropdown:
```
<select id="theOptions1">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
</select>
```
I want it so that when the user selects 1, these are the options the user can choose in dropdown 2:
```
<select id="theOptions2">
<option value="a">a</option>
<option value="b">b</option>
<option value="c">c</option>
</select>
```
Or if the user selects 2:
```
<select id="theOptions2">
<option value="a">a</option>
<option value="b">b</option>
</select>
```
Or if the user selects 3:
```
<select id="theOptions2">
<option value="b">b</option>
<option value="c">c</option>
</select>
```
I tried the code posted here:
[jQuery disable SELECT options based on Radio selected (Need support for all browsers)](https://stackoverflow.com/questions/877328/jquery-disable-select-options-based-on-radio-selected-need-support-for-all-brows)
But it doesn't work for selects.
Please help!
Thank you!
UPDATE:
I really like the answer Paolo Bergantino had on:
[jQuery disable SELECT options based on Radio selected (Need support for all browsers)](https://stackoverflow.com/questions/877328/jquery-disable-select-options-based-on-radio-selected-need-support-for-all-brows)
Is there anyway to modify this to work with selects instead of radio buttons?
```
jQuery.fn.filterOn = function(radio, values) {
return this.each(function() {
var select = this;
var options = [];
$(select).find('option').each(function() {
options.push({value: $(this).val(), text: $(this).text()});
});
$(select).data('options', options);
$(radio).click(function() {
var options = $(select).empty().data('options');
var haystack = values[$(this).attr('id')];
$.each(options, function(i) {
var option = options[i];
if($.inArray(option.value, haystack) !== -1) {
$(select).append(
$('<option>').text(option.text).val(option.value)
);
}
});
});
});
};
``` | This works (tested in Safari 4.0.1, FF 3.0.13):
```
$(document).ready(function() {
//copy the second select, so we can easily reset it
var selectClone = $('#theOptions2').clone();
$('#theOptions1').change(function() {
var val = parseInt($(this).val(), 10);
//reset the second select on each change
$('#theOptions2').html(selectClone.html());
switch(val) {
//if 2 is selected remove C
case 2 : $('#theOptions2').find('option:contains(c)').remove();break;
//if 3 is selected remove A
case 3 : $('#theOptions2').find('option:contains(a)').remove();break;
}
});
});
```
And the beautiful UI:
```
<select id="theOptions1">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
</select>
<br />
<select id="theOptions2">
<option value="a">a</option>
<option value="b">b</option>
<option value="c">c</option>
</select>
``` | You can add classes to your `<option>`s to store which go with each value of `#theOptions1`:
```
<select id="theOptions2">
<option value="a" class="option-1 option-2">a</option>
<option value="b" class="option-1 option-2 option-3">b</option>
<option value="c" class="option-1 option-3">c</option>
</select>
```
then do this:
```
$(function() {
var allOptions = $('#theOptions2 option').clone();
$('#theOptions1').change(function() {
var val = $(this).val();
$('#theOptions2').html(allOptions.filter('.option-' + val));
});
});
``` | jQuery remove SELECT options based on another SELECT selected (Need support for all browsers) | [
"",
"javascript",
"jquery",
"select",
""
] |
I have a cron which generates the whole mail info and puts in a database table using `$mail_queue->put(.....)` with an option to delete emails after they're sent.
Here's where I need a little help: What is the best way to send the emails after I have the above info? Running the `$mail_queue->sendMailsInQueue()` right away, using other cron job(s) or something else?
The server limit by the way is 100 emails / minute. Currently the last csv diff for `Mail_Queue` is not applied (currently working with the support on that), so I can't use the "delay" option.
I had an idea to use the `$seconds_to_send` option, but it's calculated on the base of create\_time field, which isn't a big help, but it's also an option.
Any ideas and suggestions would be really appreciated. | Personally, I would go the cron way because it gives less opportunity for failure. Say your mail server stops responding or for some other reason becomes unavailable. Or what if your entire network goes offline for a few hours, but the servers are still generating emails. I say use the queue.
As for the delay thing, just have a service/cronjob pick up the queue every sixty seconds, pop 100 emails and send them, then quit. You might get a queue of emails to be sent but that's going to happen no matter what system you choose. The queue will empty during off-peak hours, anyways. | use two scripts. one for populating your mail\_queue table with the emails you need to send and the second script to send those emails in chunks of 90 mails at a go. set the second script to be activated about every 2 minutes or so.
you could also just upgrade your hosting plan ;-) | Best way to send 10,000+ emails with PEAR/Mail_Queue | [
"",
"php",
"email",
"pear",
"mailing-list",
"mail-queue",
""
] |
Understandably many of the tickets we file in Trac contain tracebacks. It would be excellent if these were nicely formatted and syntax highlighted.
I've conducted a cursory Google search for a Python traceback wiki processor and have not found any quick hits.
I'm happy to roll my own if anyone can recommend a traceback formatter (stand alone or embedded in an open source project) that outputs HTML/reStructuredText/etc. | [Pygments](http://pygments.org/languages/) has support for syntax-coloring Python tracebacks, and there's a [trac plugin](http://trac-hacks.org/wiki/TracPygmentsPlugin), but the wiki page claims Trac 0.11 supports Pygments natively. | I don't believe you need that patch. You could specify the shortcode mapping in the [trac.ini](http://trac.edgewall.org/wiki/TracIni#mimeviewer-section), but you can also (at least in trac 0.12) just use the mime type directly:
```
{{{
#!text/x-python-traceback
<traceback>
}}}
```
See more at <http://trac.edgewall.org/wiki/TracSyntaxColoring>. x-python-traceback isn't in the list there, but you'll get an error previewing if trac can't handle it and it WorkedForMe. | Is there a wiki processor for Trac to format and colour Python tracebacks? | [
"",
"python",
"syntax",
"trac",
""
] |
I have a windows console app (that accepts parameters) and runs a process.
I was wondering if there was any way to run this app from within a windows form button click event. I would like to pass an argument to it as well.
Thanks | Just use [System.Diagnostics.Process.Start](http://msdn.microsoft.com/en-us/library/h6ak8zt5.aspx) with the path to the console application, and the parameters as the second argument. | Assuming you have a form with a multiline textbox called txtOutput.....
```
private void RunCommandLine(string commandText)
{
try
{
Process proc = new Process();
proc.StartInfo.CreateNoWindow = true;
proc.StartInfo.UseShellExecute = false;
proc.StartInfo.RedirectStandardOutput = true;
proc.StartInfo.RedirectStandardError = true;
proc.StartInfo.FileName = "cmd.exe";
proc.StartInfo.Arguments = "/c " + commandText;
txtOutput.Text += "C:\\> " + commandText + "\r\n";
proc.Start();
txtOutput.Text += proc.StandardOutput.ReadToEnd().Replace("\n", "\r\n");
txtOutput.Text += proc.StandardError.ReadToEnd().Replace("\n", "\r\n");
proc.WaitForExit();
txtOutput.Refresh();
}
catch (Exception ex)
{
txtOutput.Text = ex.Message;
}
}
``` | Run a console application from a windows Form | [
"",
"c#",
".net",
"winforms",
"console-application",
""
] |
I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTS) but mostly queries for one or more keywords.
I have read "[Tagsystems: performance tests](http://www.pui.ch/phred/archives/2005/06/tagsystems-performance-tests.html)" which is MySQL based and it seems 2NF appears to be a good method for implementing this, however I was wondering if anyone had experience with doing this with SQL Server 2008 and very large datasets.
I am likely to initially have 1 million key fields which could have up to 50 keywords each.
Would a structure of
```
keyfield, keyword1, keyword2, ... , keyword50
```
be the best solution or two tables
```
keyid
keyfield
| 1
|
| M
keyid
keyword
```
Be a better idea if my queries are mostly going to be looking for results that have one or more keywords? | I would normalize a step further.
You should have a table of unique KeyWords with an integer primary key column. Then, another association table that has KeyField and KeyWordId.
```
KeyWords
----------
KeyWordId Int Identity(1,1)
KeyWord VarChar(200)
KeyFieldKeyWords
----------------
Keyfield Int
KeyWordId Int
```
With 1 million keyfields having 50 keywords each, that's 50 million rows. There will be a HUGE difference in performance if you have a table with 2 columns, each being an integer. | As long as you have correct indexes, 50M rows isn't that much. I would just store it as
```
CREATE TABLE mytable (
keyfield nvarchar(200),
keyword nvarchar(200),
CONSTRAINT PK_mytable PRIMARY KEY(keyfield, keyword)
)
```
and, of course index the keyword column. If you never need to get all keywords for a keyfield, you can avoid the extra index by just changing the order in the primary key
Edit: I should not post when I'm too tired. This is the way. | Best way to store large dataset in SQL Server? | [
"",
"sql",
"sql-server",
"database",
"sql-server-2008",
""
] |
I'm using a `std::deque` to store a fairly large number of objects. If I remove a bunch of those objects, it appears to me that its memory usage does not decrease, in a similar fashion to std::vector.
Is there a way to reduce it? I know that in a vector you have to use the 'swap trick', which I assume would work here too, but I'd rather avoid that since it would require copying all the elements left in the container (and thus requires that you have enough memory to store every object twice). I'm not intimately familiar with the implementation of deque, but my understanding of it is that it might be possible to achieve such a thing without lots of copies (whereas with a vector it's clearly not).
I'm using the VC++ (Dinkumware) STL, if that makes any difference. | There is no way to do this directly in a std::deque. However, it's easy to do by using a temporary (which is basically what happens in a std::vector when you shrink its capacity).
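A minimal sketch of that temporary-and-swap idiom (C++03-friendly; the helper name is made up):

```cpp
#include <deque>

// "Swap trick": build a right-sized temporary from the surviving
// elements and swap it with the original, so the original's spare
// storage is released when the temporary is destroyed.
template <typename T>
void shrink_deque(std::deque<T>& d) {
    std::deque<T>(d.begin(), d.end()).swap(d);
}
```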
Here is a [good article on std::deque](http://www.gotw.ca/gotw/054.htm), comparing it to std::vector. The very bottom shows a clean way to swap out and shrink a vector, which works the same with deque. | As added information to this:
In C++0x/C++11, deque (and several other containers) has a new function called "shrink\_to\_fit" which will remove the excess items and basically align capacity() == size() | How to release memory from std::deque? | [
"",
"c++",
"stl",
"deque",
""
] |
suppose we have the following PHP page "index.php":
```
<?
if (!isset($_GET['req'])) $_GET['req'] = "null";
echo $_SERVER['REQUEST_URI'] . "<br>" . $_GET['req'];
?>
```
and the following ".htaccess" file:
```
RewriteRule ^2.php$ index.php?req=%{REQUEST_URI}
RewriteRule ^1.php$ 2.php
```
Now, let's access "index.php". We get this:
```
/index.php
null
```
That's cool. Let's access "2.php". We get this:
```
/2.php
/2.php
```
That's cool too. But now let's have a look at "1.php":
```
/1.php
/2.php
```
So... we ask for "1.php", it silently redirects to "2.php" which silently redirects to "index.php?req=%{REQUEST\_URI}", but here the "%{REQUEST\_URI}" seems to be "2.php" (the page we're looking for **after** the first redirection) and the $\_SERVER['REQUEST\_URI'] is "1.php" (the original request).
Shouldn't these variables be equal? This gave me a lot of headaches today as I was trying to do a redirection based only on the **original** request. Is there any variable I can use in ".htaccess" that will tell me the original request even after a redirection?
Thanks in advance and I hope I've made myself clear. It's my first post here :) | Well I guess I solved the problem. I used the %{THE\_REQUEST} variable which basically contains something like this: "GET /123.php HTTP/1.1". It remains the same even after a redirection. Thanks everyone for your help! :) | I'm not sure whether it will meet your needs, but try looking at `REDIRECT_REQUEST_URI` first, then if it's not there, `REQUEST_URI`. You mention in your comment to Gumbo's answer that what you're truly looking for is the original URI; `REDIRECT_*` versions of server variables are how Apache tries to make that sort of thing available. | Apache's mod_rewrite and %{REQUEST_URI} problem | [
"",
"php",
"apache",
"mod-rewrite",
"request",
""
] |
I understand that there are two ways to access a PHP class - "::" and "->". Sometime one seems to work for me, while the other doesn't, and I don't understand why.
What are the benefits of each, and what is the right situation to use either? | Simply put, `::` is for *class-level* properties, and `->` is for *object-level* properties.
If the property belongs to the class, use `::`
If the property belongs to *an instance of the class*, use `->`
```
class Tester
{
public $foo;
    const BLAH = 0;
public static function bar(){}
}
$t = new Tester;
$t->foo;
Tester::bar();
Tester::BLAH;
``` | The "::" symbol is for accessing methods / properties of an object that have been declared with the static keyword, "->" is for accessing the methods / properties of an object that represent instance methods / properties. | PHP Classes: when to use :: vs. ->? | [
"",
"php",
"class",
""
] |
I have some javascript that manipulates html based on what the user has selected. For real browsers the methods I'm using leverage the "Range" object, obtained as such:
```
var sel = window.getSelection();
var range = sel.getRangeAt(0);
var content = range.toString();
```
The content variable contains all the selected text, which works fine. However I'm finding that I cannot detect the newlines in the resulting string. For example:
Selected text is:
abc
def
ghi
range.toString() evaluates to "abcdefghi".
Any search on special characters returns no instance of \n \f \r or even \s. If, however, I write the variable out to an editable control, the line feeds are present again.
Does anyone know what I'm missing?
It may be relevant that these selections and manipulations are on editable divs. The same behaviour is apparent in Chrome, FireFox and Opera. Surprisingly IE needs totally different code anyway, but there aren't any issues there, other than it just being IE.
Many thanks. | Editing my post:
Experimenting a bit, I find that `sel.toString()` returns new lines in contenteditable divs, while `range.toString()` returns newlines correctly in normal non-editable divs, but not in editable ones, as you reported.
Could not find any explanation for the behaviour though.
This is a useful link <http://www.quirksmode.org/dom/range_intro.html> | I found at least two other ways, so you may still use the range to find the position of the caret in Mozilla.
One way is to call
```
var documentFragment = rangeObj.cloneContents ();
```
which holds an array of childNodes, and any line breaks will show as a node of class "HTMLBRElement".
The other way is to make sure every "br" tag is followed by a newline character (0x0a)!
This won't hurt the HTML content in any visible way, but now all HTML breaks are translated to plain text line breaks as soon as range.toString() is being called!
I hope this helps - even if this topic is very old. (I'm a necromancer anyway already, hehe) :) | Problem detecting Newlines in JavaScript Range Object | [
"",
"javascript",
"newline",
"range",
"contenteditable",
""
] |
I'm looking for a change log / feature request program my client can use to see where I am at with the development. Trac looks sexy (<http://trac.edgewall.org/>) but too complex for what I need.
Very simply, the client needs to be able to order tasks and for us to be able to sort them and keep a record of everything that is going on.
We could update a static file or something, but that seems a bit err time consuming and not systematic enough.
What do you use? | I'd use Trac even for a small project since it allows you to disable most of the advanced features if you don't need them.
First, whole pages aren't shown at all if you remove a user's permission to view them. So if you e.g. don't like the "Roadmap" page, just get rid of it. There is a whole list of possible permissions in the [trac help](http://trac.edgewall.org/wiki/TracPermissions).
Second, Trac even allows removing ticket fields like priorities, resolutions, severities by just deleting all possible options for them in the administration panel.
Using these tricks, you can simplify your Trac a lot but still keep the useful stuff. | Basecamp is a pretty slick project management solution: <http://basecamphq.com>
I've used this or trac depending on the project. My 2 cents :) | How do you maintain a community change log? | [
"",
"php",
"bug-tracking",
"changelog",
""
] |
When using NHibernate, you define your entities with virtual methods, and NHibernate will create a proxy object that tracks changes to your object.
In Moq, the framework will magically create a derived type from an interface or a base class. e.g.
```
var mock = new Mock<IFoo>();
IFoo magicFoo = mock.Object;
```
This is really cool. How do these frameworks do it? Do they use reflection, generics, some kind of dynamic compilation, or something else?
*I realize these are both open source projects, and I could go spelunking through the code, but I'd like to have a concise answer here - possibly with alternatives.* | Moq uses Castle Dynamic Proxy; however, I just thought it would be worth adding that there are also a number of other frameworks that allow you to create proxy objects. As of NHibernate 2.1 it also allows you to use any one of the following:
* [Castle Dynamic Proxy](http://www.castleproject.org/projects/dynamicproxy)
* [LinFu Framework](http://www.codeproject.com/KB/cs/LinFuPart1.aspx)
* [Spring.NET](http://www.springframework.net/doc-latest/reference/html/aop.html#aop-proxyfactoryobject)
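For a feel of the mechanism, the JDK ships a small-scale version of the same idea for interfaces (a sketch in Java, not how the libraries above are implemented — they generate full subclasses, emitting real bytecode/IL):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface Foo {
    String greet(String name);
}

public class ProxyDemo {
    // Manufacture a Foo implementation at runtime; every call is routed
    // through the InvocationHandler, which is where mocking/interception
    // frameworks hang their logic.
    public static Foo magicFoo() {
        InvocationHandler handler = (proxy, method, args) ->
                "intercepted " + method.getName() + "(" + args[0] + ")";
        return (Foo) Proxy.newProxyInstance(
                Foo.class.getClassLoader(), new Class<?>[] { Foo.class }, handler);
    }
}
```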
Each of these projects has a brief explanation of how they achieve this, which is hopefully the kind of answer you're looking for. | They use a combination of reflection (to figure out what needs to be generated) and reflection-emit (to generate the derived class dynamically, emitting IL for the methods). .NET provides both of these APIs (reflection and reflection-emit). | How do Moq and NHibernate automatically create derived types? | [
"",
"c#",
".net",
"nhibernate",
"moq",
""
] |
I can't seem to find a super-global for this. Basically, if a PHP file is executed on, say, `http://www.example.com/services/page.php`, I would like to retrieve `http://example.com/services`. How do I achieve this? | You can abuse [`dirname`](http://us.php.net/manual/en/function.dirname.php):
```
<p>
<?php echo $_SERVER["HTTP_HOST"] . dirname($_SERVER["REQUEST_URI"]) ?>
</p>
``` | Take a look at [`$_SERVER['HTTP_HOST']` and `$_SERVER['REQUEST_URI']`](http://docs.php.net/manual/en/reserved.variables.server.php). `HTTP_HOST` would contain the host name the resource was requested from an `REQUEST_URI` URI path and query that was requested. | PHP: How do I get the URL a file is in? | [
"",
"php",
""
] |
I am trying to pass the value of a label into the PHP.
How should I do that?
My HTML looks like:
```
<form action='unsubscribe.php' method='get'>
<label for='zee@server.com'>zee@server.com</label>
<input type='submit' value='Unsubscribe me'>
</form>
```
How can I get the value of this label passed into my unsubscribe.php?
Best
Zeeshan | By using the `<input type="hidden" ... />` tag:
```
<form action='unsubscribe.php' method='get'>
<input type="hidden" name="email" value="zee@server.com" />
<input type='submit' value='Unsubscribe me'>
</form>
```
If you are going to use `<form method="get" ...>` you might as well just make a url:
```
<a href="http://www.example.org/unsubscribe.php?email=zee%40example.org">Unsubscribe</a>
```
Or with php (note the [urlencode](http://php.net/urlencode)):
```
print("<a href=\"http://www.example.org/unsubscribe.php?email=".urlencode("zee@example.org")."\">Unsubscribe</a>");
``` | You will need to create a hidden input, then use JavaScript to populate the hidden input prior to posting the form so that `unsubscribe.php` can retrieve it via `$_POST`. | How to pass value to a Html label into php? | [
"",
"php",
""
] |
In C++, is there any reason to not access static member variables through a class instance? I know Java frowns on this and was wondering if it matters in C++. Example:
```
class Foo {
static const int ZERO = 0;
static const int ONE = 1;
...
};
void bar(const Foo& inst) {
// is this ok?
int val1 = inst.ZERO;
// or should I prefer:
int val2 = Foo::ZERO
...
};
```
I have a bonus second question. If I declare a static double, I have to define it somewhere and that definition has to repeat the type. Why does the type have to be repeated?
For example:
```
In a header:
class Foo {
static const double d;
};
In a source file:
const double Foo::d = 42;
```
Why do I have to repeat the "const double" part in my cpp file? | For the first question, aside from the matter of style (it makes it obvious it's a class variable and has no associated object), Fred Larsen, in comments to the question, makes reference to a previous question. Read [Adam Rosenthal's answer](https://stackoverflow.com/questions/840522/given-a-pointer-to-a-c-object-what-is-the-preferred-way-to-call-a-static-membe/840549#840549) for a *very* good reason why you want to be careful with this. (I'd up-vote Fred if he'd posted it as answer, but I can't so credit where it's due. I did up-vote Adam.)
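To make the first point concrete — both spellings resolve to the same single variable, so the choice is purely about readability (a sketch; the names are made up):

```cpp
struct Counter {
    static int hits;            // declaration inside the class
    void ping() { ++hits; }     // instance method touching the static
};
int Counter::hits = 0;          // the one definition, type repeated

int via_class()              { return Counter::hits; }  // class access
int via_instance(Counter& c) { return c.hits; }         // instance access
```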
As to your second question:
> Why do I have to repeat the "const double" part in my cpp file?
You have to repeat the type primarily as an implementation detail: it's how the C++ compiler parses a declaration. This isn't strictly ideal for local variables either, and C++1x (formerly C++0x) makes use of the `auto` keyword to avoid needing to be repetitive for regular function variables.
So this:
```
vector<string> v;
vector<string>::iterator it = v.begin();
```
can become this:
```
vector<string> v;
auto it = v.begin();
```
There's no clear reason why this couldn't work with static as well, so in your case this:
```
const double Foo::d = 42;
```
could well become this.
```
static Foo::d = 42;
```
The key is to have *some* way of identifying this as a declaration.
Note I say no *clear* reason: C++'s grammar is a living legend: it is *extremely* hard to cover all of its edge cases. I don't *think* the above is ambiguous but it might be. If it isn't they could add that to the language. Tell them about it ... for C++2x :/. | I would prefer `Foo::ZERO` over `inst.ZERO` because it more clearly tells what is going on. However, in a method of class `Foo`, I would just use `ZERO`.
As for the bonus question, `const` just is part of the complete type. | C++: access const member vars through class or an instance? | [
"",
"c++",
"static",
"constants",
""
] |
I'm trying to pass a 2D array of char\* into a function. I am getting this error:
```
"cannot convert 'char* (*)[2]' to 'char***' for argument '1' to 'int foo(char***)'"
```
Code:
```
int foo(char*** hi)
{
...
}
int main()
{
char* bar[10][10];
return foo(bar);
}
``` | Your array is an array of `10` `char*` arrays, each storing `10` `char*` pointers.
This means that when passing it to a function whose parameter is not a reference, it is converted to a `pointer to an array of 10 char*`. The correct function parameter type is thus
```
int foo(char* (*hi)[10])
{
...
}
int main()
{
char* bar[10][10];
return foo(bar);
}
```
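If the dimensions might vary between call sites, a function template can deduce them instead of hard-coding `10` (a sketch, not part of the answer above):

```cpp
#include <cstddef>

// R and C are deduced from the argument, so any char* [R][C] works
// without spelling the bounds out at the call site.
template <std::size_t R, std::size_t C>
std::size_t cell_count(char* (&hi)[R][C]) {
    return R * C;
}
```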
Read further on this [Pet peeve](https://stackoverflow.com/questions/423823/whats-your-favorite-programmer-ignorance-pet-peeve/484900#484900) entry on Stackoverflow. | If the size of your array is not going to change, you're better off using references to the array in your function. It's safer and cleaner. For example:
```
int foo(char* (&hi)[10][10] )
{
int return_val = 0;
//do something
//hi[5][5] = 0;
return return_val;
}
int main()
{
char* bar[10][10];
return foo(bar);
}
``` | Error passing 2D char* array into a function | [
"",
"c++",
"arrays",
"casting",
"char",
""
] |
I need to know how to convert a date time of GMT time zone format into other formats like Eastern, Pacific, Mountain and India time zone formats in C#, asp.net 2.0 and dot net framework 2.0. | Really, you want .NET 3.5 for this...
(I know you asked about .NET 2.0, but this answer may be helpful for somebody searching for this topic in the future)
```
TimeZoneInfo mountain = TimeZoneInfo.FindSystemTimeZoneById(
"US Mountain Standard Time");
DateTime utc = DateTime.UtcNow;
DateTime local = TimeZoneInfo.ConvertTimeFromUtc(utc, mountain);
```
Before then... pain. You can maintain your own list of offsets, but then you have to worry about DST. | Using .NET 2.0 you're mostly stuffed, unfortunately. You'll need to use P/Invoke to create an instance of the [`TimeZone`](http://msdn.microsoft.com/en-us/library/system.timezone.aspx) class. From .NET 3.5 onwards it's a lot better - [`TimeZoneInfo`](http://msdn.microsoft.com/en-us/library/system.timezoneinfo.aspx) allows you to fetch non-local zones. I seem to remember that the latter has better support for historical time zone information too, instead of just a pair of rules for when DST changes.
You can get information using [GetTimeZoneInformation](http://www.pinvoke.net/default.aspx/kernel32.GetTimeZoneInformation) but that's relatively ugly. There may well be some way of using P/Invoke just to make the conversion for you... although it's still likely to be hairy.
How firm is the requirement to use .NET 2.0? You'd save yourself a *lot* of hassle using .NET 3.5... | Convert a datetime from GMT to other (Eastern, Mountain, Pacific, Indian,..) formats | [
"",
"c#",
"datetime",
""
] |
I have been given a tree like this:
<http://www.seqan.de/dddoc/html/streePreorder.png>
I can access each node with the next operator.
```
// postorder dfs
Iterator< Index<String<char> >, BottomUp<> >::Type myIterator(myIndex);
for (; !atEnd(myIterator); goNext(myIterator))
// do something with myIterator
```
but I want to use a recursive algorithm on the tree.
Is there a way I can make the recursive algorithm (excluding the biggest subtree on each node) iterative?
Or how can I access the elements non-recursively?
**Edit:
The actual problem:**
I have been given a recursive algorithm that works on trees. (recursive)
I also use a library where I can only access the items with an iterator. (non-standard, iterative)
recursive <-> iterative.
How can i solve this ? | If your iterator only supports forward (and possibly backward) traversal, but not following links on the tree or fast random access, you will have a very hard time adapting a tree algorithm to it. However, in the end any answer will depend on the interface presented by your custom iterators, which you have not provided.
For example, consider the easy algorithm of tree search. If the only operation given by your iterator is "start from the first element and move on one-by-one", you obviously cannot implement tree search efficiently. Nor can you implement binary search. So you must provide a list of exactly what operations are supported, and (critically) the complexity bounds for each. | You can convert *that* recursive function to an iterative function with the help of a stack.
```
//depth first traversal pseudo-code (use a queue instead of a stack for breadth first)
push root to a stack
while( stack isn't empty )
pop element off stack
push children
perform action on current node
```
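A concrete C++ rendering of that pseudo-code (an illustrative node type, not SeqAn's iterator API):

```cpp
#include <stack>
#include <vector>

struct Node { int value; std::vector<Node*> children; };

// Explicit stack stands in for the call stack of the recursive
// traversal; popping then visiting gives a pre-order walk.
std::vector<int> visit_iteratively(Node* root) {
    std::vector<int> out;
    std::stack<Node*> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        Node* n = pending.top();
        pending.pop();
        out.push_back(n->value);               // "perform action"
        // push children right-to-left so the leftmost is handled first
        for (auto it = n->children.rbegin(); it != n->children.rend(); ++it)
            pending.push(*it);
    }
    return out;
}
```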
Depending on how you want to traverse the nodes, the implementation will be different. All recursive functions can be transformed to iterative ones, but the general recipe depends on the specific problem. Using stacks/queues and transforming into a for loop are common methods that should solve most situations.
You should also look into tail recursion and how to identify it, as such calls translate nicely into a for loop; many compilers even do this for you.
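The tail-recursion point in a nutshell — the accumulator version below is tail-recursive, and translates mechanically into the loop next to it (illustrative):

```cpp
// Tail-recursive: the recursive call is the very last thing done,
// so no state needs to survive on the call stack...
long fact_rec(long n, long acc = 1) {
    return n <= 1 ? acc : fact_rec(n - 1, acc * n);
}

// ...which is why it converts mechanically into a loop.
long fact_loop(long n) {
    long acc = 1;
    for (; n > 1; --n) acc *= n;
    return acc;
}
```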
Some, more mathematically oriented recursive calls can be solved by [recurrence relations](http://mathworld.wolfram.com/RecurrenceRelation.html). The likelihood that you come across these which haven't been solved yet is unlikely, but it might interest you.
//edit, performance?
Really depends on your implementation and the size of the tree. If there is a lot of depth in your recursive call, then you will get a stack overflow, while an iterative version will perform fine. I would get a better grasp on recursion (how memory is used), and you should be able to decide which is better for your situation. [Here is an example of this type of analysis with the fibonacci numbers](http://www.ics.uci.edu/~eppstein/161/960109.html). | Is it possible to implement a recursive Algorithm with an Iterator? | [
"",
"c++",
"algorithm",
"computer-science",
""
] |
In the same way we have coding standards, can there be things such as architecture standards?
These would be high-level principles that we can pass to our less experienced programmers so when they start designing changes or small applications they know what we expect to see without the more experienced programmers having to correct them (note this isn't instead of review, just preventing obvious mistakes as early as possible).
I'm thinking potentially it could contain things such as "Always normalise your database unless there is a specific and clearly understood reason for not doing so", "Always separate out presentation, logic and persistence".
Do people think that this would be too high level to be of any practical use? Or that such generalisations are almost impossible given the variety of functionality we might be asked to code? Or might it be useful?
If you think they're useful and realistic what would you include?
(If it helps clarify I'm thinking about Java/.NET level programming for corporate style business systems). | The [SOLID principles](http://www.hanselman.com/blog/HanselminutesPodcast145SOLIDPrinciplesWithUncleBobRobertCMartin.aspx) would be a good place to start | We do have architecture guidelines. They are called [design patterns](http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29). These are basically formalized, architectural guidelines that you can follow as appropriate.
Unfortunately, part of being a true software architect is being able to decide when and how to apply certain patterns/guidelines/practices. That's not something that's necessarily automatable or printable directly - there is almost always more than one way to do something, and each has costs and benefits. The skill comes in understanding the entire range of these, and making a good decision up front. | Is it possible to have architecture guidelines? If so what should be in them? | [
"",
"java",
".net",
"architecture",
""
] |
I've been going through Sun Microsystem's Java Tutorial and got some questions while reading the following:
[I/O from the Command Line: The Console Object](http://java.sun.com/docs/books/tutorial/essential/io/cl.html)
```
"Second, readPassword returns a character array, not a String, so the password can be
overwritten, removing it from memory as soon as it is no longer needed."
```
My questions are:
1) With regard to other data types, such as value data types(int, float, boolean etc), and reference types(user-defined objects, etc), how are arrays and Strings in Java different?
2) Can you elaborate on the above statement about character array and String?
P.S:
Clarification to Q1: What I wanted to ask on Q1 was more about what arrays and Strings are as data-types in Java... With their object-like attributes, I get easily confused when someone claims that Strings and Arrays are not objects in a strict sense... | At a practical level, the main difference between `String` and `char[]` is that the instances of the former are immutable, and instances of the latter are mutable. And of course, the String API offers a wide range of useful string manipulation methods.
So let's talk about the linguistic similarities and differences.
First and foremost, (notwithstanding what you may have heard) strings and array instances in Java *are both objects*. According to the Java Language Specification:
> 4.3.1 Objects An object is a class instance or an array.
>
> The reference values (often just
> references) are pointers to these
> objects, and a special null reference,
> which refers to no object.
... where the class of a string is `java.lang.String`.
The linguistic distinction between arrays and other object types is that the type of an array is not a normal Java class. For example:
* Array types are declared with a different syntax to normal classes.
* Array instances are created with a different syntax to normal class instances.
* Array types cannot be named at the Java source code level.
* You cannot declare a subtype / subclass of an array type.
But all array types are (direct) subtypes of `java.lang.Object`, which means that you can (for example) assign an array to a variable of type `Object`, and invoke methods in the Object API. (And there are some interesting things that you can do with these methods to demonstrate the "object-ness" of an array ... but I digress)
So what about strings? As mentioned above, a "string" is a normal object which is an instance of the class `java.lang.String`. There is nothing unusual about this class from a linguistic perspective. It is declared as "final" so that you cannot declare subtypes, but that is not unusual.
The thing that makes `String` a bit special compared with other classes is that the Java language provides some linguistic constructs to support strings:
* There is a special `String` literal syntax for obtaining strings whose content can be determined at compile time.
* The '+' operator is overloaded to support `String` concatenation.
* From Java 7 onwards, the `switch` statement supports switching on `String` values.
* The Java Language Specification defines/assumes that the java.lang.String class has certain properties and methods; e.g. that strings are immutable, that there is a `concat` method, that string literals are "interned".
By the way, the answer that said that all string instances are held in a string pool is incorrect. Strings are only put in the pool when they are interned, and this only happens automatically for string literals and for strings whose values can be calculated at compile-time. (You can force a string instance to be interned by calling the `String.intern()` method, but this is a bit expensive, and not generally a good idea.) | A `String` stores its contents internally as an array of `chars`. You cannot manipulate this array directly (without reflection), since `Strings` are immutable.
The reason the password would be in a `char[]` is so that you can immediately overwrite it in memory. If it were in a `String`, you would have to wait for the next garbage collection, and you never know how long that's going to be; an attacker could potentially read it out of memory before then. | Java Data Types: String and Array | [
"",
"java",
"arrays",
"string",
""
] |
I've got a one line method that resolves a null string to string.Empty which I thought might be useful as an extension method - but I can't find a useful way of making it so.
The only way I could see it being useful is as a static method on the string class because obviously it can't be attributed to an instance as the instance is null and this causes a compiler error. [Edit: Compiler error was due to uninitialized variable, which I misinterpreted]
I thought about adding it to a helper class but that just adds unnecessary complexity from a discoverability standpoint.
So this question is in two parts I suppose:
1. Does the .NET framework have a built in way of resolving a null string to string.Empty that is common knowledge that I have missed somewhere along the way?
2. If it doesn't - does anyone know of a way to add this method as a static extension of the string class?
Cheers in advance
Edit:
Okay, I guess I should've been a little more clear - I'm already well aware of null coalescing and I was using this in a place where I've got a dozen or so strings being inspected for calculation of a hash code.
As you can imagine, 12 lines of code closely following each other all containing the null coalescing syntax is an eyesore, so I moved the null coalescing operation out to a method to make things easier on the eyes. Which is perfect; however, it would be a perfect extension to the string object:
```
int hashcode =
FirstValue.ResolveNull().GetHashCode() ^
SecondValue.ResolveNull().GetHashCode() ^
...
```
over a dozen lines is a lot easier to read than:
```
int hashcode =
(FirstValue ?? String.Empty).GetHashCode() ^
(SecondValue ?? String.Empty).GetHashCode() ^
...
```
I was running into compiler problems when I didn't explicitly declare my string values as null but relied on the implicit:
```
string s;
```
If however, you explicitly define:
```
string s = null;
```
You can quite easily call:
```
s.ResolveNull();
```
Thanks all for your input. | > The only way I could see it being useful is as a static method on the string class because obviously it can't be attributed to an instance as the instance is null and this would cause a runtime error.
C# 3.0 extension methods can be called on null receivers (since they are static in practice), but behave as instance methods. So just make it an extension method. | I don't think there's anything built in for this. My first thought, and what I do often, is to use the coalesce operator:
```
string s = null;
string x = s ?? string.Empty;
``` | Does .NET have a way of resolving a null string to String.Empty? | [
"",
"c#",
".net",
""
] |
[LuaSQL](http://www.keplerproject.org/luasql/), which seems to be the canonical library for most SQL database systems in Lua, doesn't seem to have any facilities for quoting/escaping values in queries. I'm writing an application that uses SQLite as a backend, and I'd love to use an interface like the one specified by [Python's DB-API](http://www.python.org/dev/peps/pep-0249/):
```
c.execute('select * from stocks where symbol=?', t)
```
but I'd even settle for something even dumber, like:
```
conn:execute("select * from stocks where symbol=" .. luasql.sqlite.quote(t))
```
Are there any other Lua libraries that support quoting for SQLite? ([LuaSQLite3](http://luasqlite.luaforge.net/lsqlite3.html) doesn't seem to.) Or am I missing something about LuaSQL? I'm worried about rolling my own solution (with regexes or something) and getting it wrong. Should I just write a wrapper for [sqlite3\_snprintf](http://www.sqlite.org/c3ref/mprintf.html)? | I haven't looked at LuaSQL in a while but last time I checked it didn't support it. I use Lua-Sqlite3.
```
require("sqlite3")
db = sqlite3.open_memory()
db:exec[[ CREATE TABLE tbl( first_name TEXT, last_name TEXT ); ]]
stmt = db:prepare[[ INSERT INTO tbl(first_name, last_name) VALUES(:first_name, :last_name) ]]
stmt:bind({first_name="hawkeye", last_name="pierce"}):exec()
stmt:bind({first_name="henry", last_name="blake"}):exec()
for r in db:rows("SELECT * FROM tbl") do
print(r.first_name,r.last_name)
end
``` | [LuaSQLite3](http://luasqlite.luaforge.net/lsqlite3.html) as well as any other low-level binding to SQLite offers prepared statements with variable parameters; these use methods to bind values to the statement parameters. Since SQLite does not interpret the binding values, there is simply no possibility of an SQL injection. This is by far the safest (and best performing) approach.
uroc shows an example of using the bind methods with prepared statements. | How to quote values for LuaSQL? | [
"",
"sql",
"sqlite",
"lua",
"sql-injection",
"escaping",
""
] |
How do you get a new line or line feed in a SQL query? | Pinal Dave explains this well in his blog.
<http://blog.sqlauthority.com/2009/07/01/sql-server-difference-between-line-feed-n-and-carriage-return-r-t-sql-new-line-char/>
```
DECLARE @NewLineChar AS CHAR(2) = CHAR(13) + CHAR(10)
PRINT ('SELECT FirstLine AS FL ' + @NewLineChar + 'SELECT SecondLine AS SL')
``` | ```
-- Access:
SELECT CHR(13) & CHR(10)
-- SQL Server:
SELECT CHAR(13) + CHAR(10)
``` | New line in Sql Query | [
"",
"sql",
""
] |
Consider the following code:
```
class Base(object):
@classmethod
def do(cls, a):
print cls, a
class Derived(Base):
@classmethod
def do(cls, a):
print 'In derived!'
# Base.do(cls, a) -- can't pass `cls`
Base.do(a)
if __name__ == '__main__':
d = Derived()
d.do('hello')
> $ python play.py
> In derived!
> <class '__main__.Base'> hello
```
From `Derived.do`, how do I call `Base.do`?
I would normally use `super` or even the base class name directly if this is a normal object method, but apparently I can't find a way to call the classmethod in the base class.
In the above example, `Base.do(a)` prints `Base` class instead of `Derived` class. | If you're using a new-style class (i.e. derives from `object` in Python 2, or always in Python 3), you can do it with `super()` like this:
```
super(Derived, cls).do(a)
```
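Embedded in the question's classes, a complete sketch of this (written so it runs under both Python 2 and 3, and returning values so the result is easy to see):

```python
class Base(object):
    @classmethod
    def do(cls, a):
        return '%s got %s' % (cls.__name__, a)

class Derived(Base):
    @classmethod
    def do(cls, a):
        # cls is still Derived inside Base's implementation,
        # unlike calling Base.do(a) directly, which rebinds cls to Base.
        return super(Derived, cls).do(a)

print(Derived.do('hello'))  # Derived got hello
```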
This is how you would invoke the code in the base class's version of the method (i.e. `print cls, a`), from the derived class, with `cls` being set to the derived class. | this has been a while, but I think I may have found an answer. When you decorate a method to become a classmethod the original unbound method is stored in a property named 'im\_func':
```
class Base(object):
@classmethod
def do(cls, a):
print cls, a
class Derived(Base):
@classmethod
def do(cls, a):
print 'In derived!'
# Base.do(cls, a) -- can't pass `cls`
Base.do.im_func(cls, a)
if __name__ == '__main__':
d = Derived()
d.do('hello')
``` | Calling a base class's classmethod in Python | [
"",
"python",
"class",
"overriding",
"class-method",
""
] |
What is the difference between these two statements:
```
var result = from c in context.CustomerEntities
join p in context.ProjectEntities on c.Pk equals p.CustomerPk
where p.Entered > DateTime.Now.AddDays(-15)
select c;
```
and
```
var result = from c in context.CustomerEntities
join p in context.ProjectEntities on c.Pk equals p.CustomerPk
where p.Entered > DateTime.Now.AddDays(-15)
select new { c.Company, c.Entered, c.pk };
```
Is there any performance-related issue in these statements? (For simplicity c contains only these 3 columns.)
Thanks. | > What is the difference between these two statements
The first returns a filtered sequence of the original/complete source object; the second still does the filter, but returns a sequence of an anonymous type with **just** those three properties.
> Is there any performance-related issue in these statements
Performance depends on the back-end. If this is LINQ-to-Objects, then with `new {...}` you are creating extra objects (anonymous types) per record, so there may be a very small overhead. However, if this is LINQ-to-SQL etc (a database back-end), then this can be a huge *benefit*. The query builder will check which columns are needed, and will only fetch the three in your anon-type; if you have (for example) a BLOB (or just long `varchar`) in your data that you don't need, this can be a **huge** benefit.
Additional notes: you can't include anonymous types in the signature of a method, so you might find you need to declare your own DTO type for this purpose:
```
return new CustomerDto { Company = c.Company, Entered = c.Entered, PK = c.pk};
...
public class CustomerDto { ... }
``` | The main difference is that the first example returns references to existing instances while the second example creates new instances of an anonymous type. I would be more concerned with this issue than any possible performance issues. | LINQ: Difference between 'Select c' and 'Select new (c...' | [
"",
"c#",
"asp.net",
"linq",
"linq-to-sql",
""
] |
Does anybody know some websites that offer online tutoring for C#? I am particularly seeking one-on-one tutoring. | On books for beginners I'd recommend the [galileo-openbooks](http://openbook.galileocomputing.de/csharp/) | Check out [Inner Workings](http://www.innerworkings.com/solutions/developer).
They offer self-paced .NET training to developers as an add-in to Visual Studio. The code is checked against provided solutions, so you can gain insight into different areas of the framework. Whilst it's not quite an *online* tutor, the training provided is a good start. | Seeking a C# online tutor | [
"",
"c#",
""
] |
I often need to implement an interface by delegating the implementation to a member of my class. This task is quite tedious because, even though Visual Studio generates stubs for the interface methods, I still have to write the code to delegate the implementation. It doesn't require much thinking, so it could probably be automated by a code generation tool...
I'm probably not the first one to think of this, so there must be such a tool already, but I couldn't find anything on Google... Any idea ?
---
EDIT : it seems that [ReSharper can do it](http://www.jetbrains.com/resharper/features/code_generation.html#Generate_Delegating_Members), but it's pretty expensive... is there a free alternative with the same feature ? | I've been using Resharper for a few months now, and it has a great feature to do this.
For instance, write the following code:
```
class MyList<T> : IList<T>
{
private readonly IList<T> _list;
}
```
Place the caret on `_list`, press `Alt` + `Ins` (shortcut for Generate Code), and select "Delegating members". Select the members you need, and R# generates delegating members for them:
```
public void Add(T item)
{
_list.Add(item);
}
public void Clear()
{
_list.Clear();
}
public bool Contains(T item)
{
return _list.Contains(item);
}
public void CopyTo(T[] array, int arrayIndex)
{
_list.CopyTo(array, arrayIndex);
}
public bool Remove(T item)
{
return _list.Remove(item);
}
public int Count
{
get { return _list.Count; }
}
public bool IsReadOnly
{
get { return _list.IsReadOnly; }
}
public int IndexOf(T item)
{
return _list.IndexOf(item);
}
public void Insert(int index, T item)
{
_list.Insert(index, item);
}
public void RemoveAt(int index)
{
_list.RemoveAt(index);
}
public T this[int index]
{
get { return _list[index]; }
set { _list[index] = value; }
}
``` | You could try this : <http://www.codeproject.com/KB/codegen/decorators.aspx?display=PrintAll&fid=1532149&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=2837870> | Tool to generate interface implementation by delegation? | [
"",
"c#",
"interface",
"code-generation",
"delegation",
""
] |
I would like to modify a file inside my jar. Is it possible to do this without extracting and re-jarring, from within my application?
The files I want to modify are configuration files, mostly xml based.
The reason I am interested in not unjarring is that the application is wrapped with launch4j; if I unjar it I can't create the .exe file again.
From the Java Tutorials:
```
jar uf jar-file input-file(s)
```
"Any files already in the archive having the same pathname as a file being added will be overwritten."
See [Updating a JAR File](http://docs.oracle.com/javase/tutorial/deployment/jar/update.html).
Much better than making the whole jar all over again. Invoking this from within your program sounds possible too. Try [Running Command Line in Java](https://stackoverflow.com/questions/8496494/running-command-line-in-java) | You can use Vim:
`vim my.jar`
Vim is able to edit compressed text files, given you have `unzip` in your environment. | Modifying a file inside a jar | [
"",
"java",
"jar",
""
] |
I have a generic question about the javascript specification or implementation of function pointers (delegates?) which point to object methods.
Please, read the following code snippet. Here we have an object with a method using 'this' to access an object field. When we call this method as usual (`o.method();`), it returns the value of the specified field of the object. But when we create a pointer to this method (callback) and invoke it, it returns an undefined value, because 'this' inside the method's scope is now the global object.
```
var o = {
field : 'value',
method : function() {
return this.field;
}
};
o.method(); // returns 'value'
var callback = o.method;
callback(); // returns 'undefined' cause 'this' is global object
```
So, where is my 'this'? | "this" is late binding. that is, it gets bound to a thing just before the function is executed. What it is bound to depends on how you call your function.
if you call it like (function invocation):
```
myfunction();
```
"this" is bound to the global object
if you call it like (method invocation):
```
myobject.myfunction();
```
"this" gets bound to "myobject"
you can also call it like so (call invocation):
```
myfunction.call(myobject);
```
in which case "this" gets bound to myobject
there is also (constructor invocation):
```
new MyFunction();
```
in which "this" gets bound to a newly constructed blank object whose prototype is MyFunction.prototype.
this is how the creators of javascript talk about it, anyway (and I think it is discussed this way in the spec): different ways of invoking a function.
the new version of the ecmascript standard (ecmascript5) includes the Prototype library's "bind" method, which returns a new function with "this" prebound to something you specify. for instance:
```
mynewfunction = myfunction.bind(myobject);
mynewfunction();
```
the invocation of mynewfunction has "this" already bound to myobject. | You can specify the context of `this` for example when you call a method through the `.apply()` or `.call()` methods. In your case, like you said, the context has changed. If you really need to make `field` be `o.field`, you have to say so explicitly, for example by using a closure when defining your `method` method. | Where is my 'this'? Using objects method as a callback function | [
"",
"javascript",
"oop",
"pointers",
"callback",
""
] |
In SQL, I've got a table that maps token=>count; it's a dict where the default value is 0. If I want to increment a token, I can say:
```
insert into my_table (token, count)
values (my_token, 1)
on duplicate key update count = count + 1;
```
and if a token doesn't exist yet, it's inserted with count=1. Nifty.
Is there a similarly easy way to do the opposite in SQL, i.e., "given a token, decrement its count, and if count becomes 0 then just remove the record entirely"? I can think of longer ways to do it, but nothing as concise as the above. | My advice is to use transactions as follows (my SQL is a bit rusty but you should get the idea):
```
-- begin transaction
update my_table set count = count - 1 where id = '7'
delete from my_table where id = '7' and count = 0
commit
```
This will ensure the atomicity of the decrement-and-delete operation.
However, one other possibility you may want to consider - don't remove it at the point where it reaches zero. Since you say the default value is zero anyway, just leave the row in there with a value of 0.
Of course, your queries will need to change to adapt to that. If you have one that lists active tokens, it will change from:
```
select token from my_table
```
to:
```
select token from my_table where count > 0
```
The decrement SQL in that case must be careful not to push the token count down to -1, so it would become:
```
update my_table set count = count - 1 where id = '7' and count > 0
```
That simplifies your SQL at the decrement time. If you still want those rows with zero-count to disappear, you could have another process that runs periodically to sweep them all up:
```
delete from my_table where count = 0
```
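From application code, the transactional decrement-and-delete might be sketched like this (using Python's sqlite3 module, reusing the question's `token`/`count` columns; both statements are committed together):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE my_table (token TEXT PRIMARY KEY, "count" INTEGER NOT NULL)')
conn.execute("INSERT INTO my_table VALUES ('abc', 1)")
conn.commit()

def decrement(conn, token):
    # Both statements run inside one transaction, so the row is removed
    # atomically when its count reaches zero.
    conn.execute('UPDATE my_table SET "count" = "count" - 1 '
                 'WHERE token = ? AND "count" > 0', (token,))
    conn.execute('DELETE FROM my_table WHERE token = ? AND "count" = 0', (token,))
    conn.commit()

decrement(conn, 'abc')
print(conn.execute('SELECT COUNT(*) FROM my_table').fetchone()[0])  # 0
```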
That's just some alternatives to consider - if you really want them gone at the exact time their count reaches zero, then use the transaction method above. | You want triggers. (I take it this is MS SQL). | Decrement-or-delete in SQL | [
"",
"sql",
""
] |
I'm studying Smalltalk right now. It looks very similar to python (actually, the opposite, python is very similar to Smalltalk), so I was wondering, as a python enthusiast, if it's really worth it for me to study it.
Apart from message passing, what are other notable conceptual differences between Smalltalk and python which could allow me to see new programming horizons ? | In Python, the "basic" constructs such as `if/else`, short-circuiting boolean operators, and loops are part of the language itself. In Smalltalk, they are all just messages. In that sense, while both Python and Smalltalk agree that "everything is an object", Smalltalk goes further in that it also asserts that "everything is a message".
[EDIT] Some examples.
Conditional statement in Smalltalk:
```
((x > y) and: [x > z])
ifTrue: [ ... ]
ifFalse: [ ... ]
```
Note how `and:` is just a message on `Boolean` (itself produced as a result of passing message `>` to `x`), and the second argument of `and:` is not a plain expression, but a block, enabling lazy (i.e. short-circuiting) evaluation. This produces another `Boolean` object, which also supports the message `ifTrue:ifFalse:`, taking two more blocks (i.e. lambdas) as arguments, and running one or the other depending on the value of the Boolean. | As someone new to smalltalk, the two things that really strike me are the image-based system, and that reflection is everywhere. These two simple facts appear to give rise to everything else cool in the system:
* The image means that you do everything by manipulating objects, including writing and compiling code
* Reflection allows you to inspect the state of any object. Since classes are objects and their sources are objects, you can inspect and manipulate code
* You have access to the current execution context, so you can have a look at the stack, and from there, compiled code and the source of that code and so on
* The stack is an object, so you can save it away and then resume later. Bingo, continuations!
All of the above starts to come together in cool ways:
* The browser lets you explore the source of literally everything, including the VM in Squeak
* You can make changes that affect your live program, so there's no need to restart and navigate your way through to whatever you're working on
* Even better, when your program throws an exception you can debug the live code. You fix the bug, update the state if it's become inconsistent and then have your program continue.
* The browser will tell you if it thinks you've made a typo
* It's absurdly easy to browse up and down the class hierarchy, or find out what messages a object responds to, or which code sends a given message, or which objects can receive a given message
* You can inspect and manipulate the state of any object in the system
* You can make any two objects literally switch places with become:, which lets you do crazy stuff like stub out any object and then lazily pull it in from elsewhere if it's sent a message.
The image system and reflection has made all of these perfectly natural and normal things for a smalltalker for about thirty years. | Differences between Smalltalk and python? | [
"",
"python",
"smalltalk",
""
] |
I am working on a top-down view 2d game at the moment and I am learning a ton about sprites and sprite handling. My question is how to handle a set of sprites that can be rotated in as many as 32 directions.
At the moment a given object has its sprite sheet with all of the animations oriented with the object pointing at 0 degrees at all times. Now, since the object can rotate in as many as 32 directions, what is the best way to work with that original sprite sheet. My current best guess is to have the program basically dynamically create 32 more sprite sheets when the object is first loaded into the game, and then all subsequent instances of that type of object will share those sprite sheets.
Anyways, any advice in this regard would be helpful. Let me know if I need to rephrase the question, I know it's kind of an odd one. Thanks
Edit: I guess for more clarification. If I have, for instance, an object that has 2 animations of 5 frames apiece, that is a pretty easy sprite sheet to create and organize, it's a simple 2x5 grid (or 5x2 depending on how you lay it out). But the problem is that now those 2 animations have to be rotated in 32 directions. This means that in the end there will be 320 individual sprites. I am going to say that (and correct me if I'm wrong) since I'm concerned about performance and frame-rate, rotating the sprites on the fly every single frame is not an option. So, how should these 320 sprites that make up these 2 animations be organized? Would it be better to
* Think of it as 32 2x5 sprite sheets
* split the sprite sheet up into individual frames, and then have an array the 32 different directions per frame (so 10 arrays of 32 directional sprites)
* Other....?
* Doesn't matter?
Thanks | Typically you will sacrifice either processor time or memory, and you need to balance between the two. Unless you've got some great limit to your processor or you're computing a lot of expensive stuff, there's no real reason to put all that into the memory. Rotating a few sprites with a transform is cheap enough that it is definitely not worth it to store 32x as much information in memory - especially because that information is a bunch of images, and images use up a lot of memory, relatively speaking. | The 32 directions for the sprite translate into 32 rotations by 11.25 degrees.
You can reduce the number of precalculated images to 8 you only calculate the first 90 degrees (`11.25, 22.5, 33.75, 45.0, 56.25, 67.5, 78.75, 90.0`) and use the flip operations dynamically. Flips are much faster because they essentially only change the order an image is copied from the buffer.
For example, when you display an image that is rotated by 101.25 degrees, load the precalculated image of 67.5 degrees and flip it vertically.
I just realized that this only works if your graphic is symmetrical ;-)
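Setting the flip trick aside, the basic precalculation from the question can be sketched in Python (the `rotate` helper below is an illustrative stand-in for a real image rotation such as pygame's `pygame.transform.rotate`; the names are made up):

```python
def rotate(image, angle):
    # Stand-in for the graphics library call that would produce a
    # rotated copy of the sprite (e.g. pygame.transform.rotate).
    return (image, angle)

def build_rotation_cache(image, directions=32):
    # Precompute one rotated copy per direction, once per sprite frame;
    # every instance of the object type can then share this cache.
    step = 360.0 / directions  # 11.25 degrees for 32 directions
    return {d: rotate(image, d * step) for d in range(directions)}

cache = build_rotation_cache('tank_frame_0')
print(len(cache))  # 32
print(cache[9])    # ('tank_frame_0', 101.25)
```

Doing the rotation on demand instead avoids the memory cost at the price of CPU time each frame, which is exactly the trade-off being weighed here.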
When talking about a modern computer, you might not need to optimize anything. The memory used by precalculating the sprites is certainly negligible, and the CPU usage when rotating the image probably is too. When you are programming for an embedded device, however, it does matter. | What is the best way to handle rotating sprites for a top-down view game | [
"",
"python",
"pygame",
"sprite",
""
] |
PHP is my first programming language. I can't quite wrap my head around when to use static classes vs instantiated objects.
I realize that you can duplicate and clone objects. However in all of my time using php any object or function always ended up as a single return (array, string, int) value or void.
I understand concepts in books like a video game character class. *duplicate car object and make the new one red*, that all makes sense, but what doesn't is its application in php and web apps.
A simple example. A blog. What objects of a blog would be best implemented as static or instantiated objects? The DB class? Why not just instantiate the db object in the global scope? Why not make every object static instead? What about performance?
Is it all just style? Is there a proper way to do this stuff? | This is quite an interesting question -- and answers might get interesting too ^^
The simplest way to consider things might be :
* use an instanciated class where each object has data on its own (like a user has a name)
* use a static class when it's just a tool that works on other stuff (like, for instance, a syntax converter for BB code to HTML ; it doesn't have a life on its own)
*(Yeah, I admit, really really overly-simplified...)*
One thing about static methods/classes is that they don't facilitate unit testing (at least in PHP, but probably in other languages too).
Another thing about static data is that only one instance of it exists in your program : if you set MyClass::$myData to some value somewhere, it'll have this value, and only it, everywhere -- Speaking about the user, you would be able to have only one user -- which is not that great, is it ?
For a blog system, what could I say ? There's not much I would write as static, actually, I think ; maybe the DB-access class, but probably not, in the end ^^ | The main two reasons against using static methods are:
* code using static methods is hard to **test**
* code using static methods is hard to **extend**
Having a static method call inside some other method is actually worse than importing a global variable. In PHP, classes are global symbols, so every time you call a static method you rely on a global symbol (the class name). This is a case when global is evil. I had problems with this kind of approach with some component of Zend Framework. There are classes which use static method calls (factories) in order to build objects. It was impossible for me to supply another factory to that instance in order to get a customized object returned. The solution to this problem is to only use instances and instance methods and enforce singletons and the like in the beginning of the program.
[Miško Hevery](http://misko.hevery.com/), who works as an Agile Coach at Google, has an interesting theory, or rather advice, that we should separate the object creation time from the time we use the object. So the life cycle of a program is split in two: the first part (the `main()` method, let's say) takes care of all the object wiring in your application, and the second part does the actual work.
So instead of having:
```
class HttpClient
{
public function request()
{
return HttpResponse::build();
}
}
```
We should rather do:
```
class HttpClient
{
private $httpResponseFactory;
public function __construct($httpResponseFactory)
{
$this->httpResponseFactory = $httpResponseFactory;
}
public function request()
{
return $this->httpResponseFactory->build();
}
}
```
And then, in the index/main page, we'd do (this is the object wiring step, or the time to create the graph of instances to be used by the program):
```
$httpResponseFactory = new HttpResponseFactory;
$httpClient = new HttpClient($httpResponseFactory);
$httpResponse = $httpClient->request();
```
The main idea is to decouple the dependencies out of your classes. This way the code is much more extensible and, the most important part for me, testable. Why is it more important to be testable? Because I don't always write library code, so extensibility is not that important, but testability is important when I do refactoring. Anyways, testable code usually yields extensible code, so it's not really an either-or situation.
Miško Hevery also makes a clear distinction between singletons and Singletons (with or without a capital S). The difference is very simple. Singletons with a lower case "s" are enforced by the wiring in the index/main. You instantiate an object of a class which does **not** implement the Singleton pattern and take care that you only pass that instance to any other instance which needs it. On the other hand, Singleton, with a capital "S" is an implementation of the classical (anti-)pattern. Basically a global in disguise which does not have much use in the PHP world. I haven't seen one up to this point. If you want a single DB connection to be used by all your classes, it's better to do it like this:
```
$db = new DbConnection;
$users = new UserCollection($db);
$posts = new PostCollection($db);
$comments = new CommentsCollection($db);
```
By doing the above it's clear that we have a singleton and we also have a nice way to inject a mock or a stub in our tests. It's surprising how unit tests lead to a better design. But it makes lots of sense when you think that tests force you to think about the way you'd use that code.
```
/**
* An example of a test using PHPUnit. The point is to see how easy it is to
* pass the UserCollection constructor an alternative implementation of
* DbCollection.
*/
class UserCollectionTest extends PHPUnit_Framework_TestCase
{
public function testGetAllComments()
{
$mockedMethods = array('query');
$dbMock = $this->getMock('DbConnection', $mockedMethods);
$dbMock->expects($this->any())
->method('query')
->will($this->returnValue(array('John', 'George')));
$userCollection = new UserCollection($dbMock);
$allUsers = $userCollection->getAll();
$this->assertEquals(array('John', 'George'), $allUsers);
}
}
```
The only situation where I'd use (and I've used them to mimic the JavaScript prototype object in PHP 5.3) static members is when I know that the respective field will have the same value cross-instance. At that point you can use a static property and maybe a pair of static getter/setter methods. Anyway, don't forget to add the possibility of overriding the static member with an instance member. For example Zend Framework was using a static property in order to specify the name of the DB adapter class used in instances of `Zend_Db_Table`. It's been a while since I've used them so it may no longer be relevant, but that's how I remember it.
Static methods that don't deal with static properties should be functions. PHP has functions and we should use them. | When to use static vs instantiated classes | [
"",
"php",
"oop",
"class",
""
] |
I have a sqlite table containing records of variable length number prefixes. I want to be able to find the most complete prefix against another variable length number in the most efficient way:
eg. The table contains a column called prefix with the following numbers:
```
1. 1234
2. 12345
3. 123456
```
What would be an efficient sqlite query to find the second record as being the most complete match against 12345999.
Thanks. | A neat trick here is to reverse a LIKE clause -- rather than saying
```
WHERE prefix LIKE '...something...'
```
as you would often do, turn the *prefix* into the pattern by appending a % to the end and comparing it to your input as the fixed string. Order by length of prefix descending, and pick the top 1 result.
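The same idea in a quick runnable sketch using Python's `sqlite3` module (the table and column names here are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prefixes (prefix TEXT)")
conn.executemany("INSERT INTO prefixes VALUES (?)",
                 [("1234",), ("12345",), ("123456",)])

# Reverse the LIKE: each stored prefix becomes the pattern,
# and the input number is the fixed string being matched.
row = conn.execute(
    "SELECT prefix FROM prefixes "
    "WHERE ? LIKE (prefix || '%') "
    "ORDER BY length(prefix) DESC LIMIT 1",
    ("12345999",),
).fetchone()
print(row[0])  # → 12345
```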
I've never used Sqlite before, but just downloaded it and this works fine:
```
sqlite> CREATE TABLE whatever(prefix VARCHAR(100));
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('1234');
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('12345');
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('123456');
sqlite> SELECT * FROM whatever WHERE '12345999' LIKE (prefix || '%')
ORDER BY length(prefix) DESC LIMIT 1;
```
output:
```
12345
``` | Personally I use the following method; it will use indexes. The IN-list
('1','12','123','1234','12345','123459','1234599','12345999','123459999')
should be generated by the client.
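Building that prefix list client-side is a one-liner; a small Python sketch (the variable names are illustrative):

```python
number = "12345999"
# Every leading substring of the input number, shortest first.
prefixes = [number[:i] for i in range(1, len(number) + 1)]
print(prefixes)
# → ['1', '12', '123', '1234', '12345', '123459', '1234599', '12345999']
```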
```
SELECT * FROM whatever WHERE prefix in
('1','12','123','1234','12345','123459','1234599','12345999','123459999')
ORDER BY length(prefix) DESC LIMIT 1;
``` | Sqlite : Sql to finding the most complete prefix | [
"",
"sql",
"sqlite",
""
] |
I have an instance of `dict` with `int`s, `float`s, and `string`s as keys, but the problem is that when there is `a` as `int` and `b` as `float` with `float(a) == b`, their hash values are the same, and that's what I do NOT want, because I need unique hash values in these cases in order to get the corresponding values.
Example:
```
d = {1:'1', 1.0:'1.0', '1':1, '1.0':1.0}
d[1] == '1.0'
d[1.0] == '1.0'
d['1'] == 1
d['1.0'] == 1.0
```
What I need is:
```
d = {1:'1', 1.0:'1.0', '1':1, '1.0':1.0}
d[1] == '1'
d[1.0] == '1.0'
d['1'] == 1
d['1.0'] == 1.0
``` | Since `1 == 1.0`, it would horribly break the semantics of hashing (and therefore dicts and sets) if it were the case that `hash(1) != hash(1.0)`. More generally, it must ALWAYS be the case that `x == y` implies `hash(x) == hash(y)`, for ALL `x` and `y` (there is of course no condition requiring the reverse implication to hold).
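This invariant is easy to check directly:

```python
# x == y must imply hash(x) == hash(y), even across numeric types:
assert 1 == 1.0
assert hash(1) == hash(1.0)

# As a consequence, the dict from the question collapses the keys 1 and 1.0
# into a single entry (the later assignment wins):
d = {1: '1', 1.0: '1.0', '1': 1, '1.0': 1.0}
print(len(d))      # → 3
print(repr(d[1]))  # → '1.0'
```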
So your dict `d` has just three entries, as the second one you've written in the dict display overrides the first one. If you need to force equality to hold only between identical types (as opposed to numbers more generally), you need a wrapper such as:
```
class W(object):
def __init__(self, x):
self.x = x
self.t = type(x)
def __eq__(self, other):
t = type(other)
if t != type(self):
return False
return self.x == other.x and self.t == other.t
def __hash__(self):
return hash(self.x) ^ hash(self.t)
def __getattr__(self, name):
return getattr(self.x, name)
```
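With that wrapper (condensed here so the snippet runs standalone; the behaviour matches the class above), the four keys from the question stay distinct:

```python
class W(object):
    """Type-strict wrapper: values compare equal only if value AND type match."""
    def __init__(self, x):
        self.x = x
        self.t = type(x)
    def __eq__(self, other):
        return (type(other) == type(self)
                and self.x == other.x and self.t == other.t)
    def __hash__(self):
        return hash(self.x) ^ hash(self.t)

d = {W(1): '1', W(1.0): '1.0', W('1'): 1, W('1.0'): 1.0}
print(len(d))           # → 4
print(repr(d[W(1)]))    # → '1'
print(repr(d[W(1.0)]))  # → '1.0'
```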
Depending on your exact needs you may also want to override other methods (other comparison methods such as `__cmp__` or `__le__`, arithmetic ones, `__repr__`, etc etc). At any rate, this will allow you to build a dict similar to what you require, just use as keys `W(1)` instead of bare `1` and `W(1.0)` instead of bare `1.0` (you may not need to wrap non-numbers, although there's no harm if you choose to do so, and it may ease retrieval from your dict if all keys are equally wrapped). | Using a float as a dictionary key is 'unwise': it's impossible to guarantee that two floats will evaluate to the same value.
The best thing is to multiply the keys to a predetermined number of decimal places and use that integer as the key.
edit: Sorry it seems you don't want a dict with real number keys, you simply want to format an output based on the type of input? | Problem with hash function: hash(1) == hash(1.0) | [
"",
"python",
"hash",
"dictionary",
""
] |
What are the most common naming conventions in C# for classes, namespaces and methods? Is it common to have getter/setter style methods as in Java? | There are a few common patterns out there. One I frequently see and use in my own projects is:
```
namespace Company.Concept.SubConcept
{
public class MyClass
{
private MyType someData;
public MyType SomeData
{
get { return someData; }
set { someData = value; }
}
public void MyMethod()
{
}
}
}
``` | [Guidelines for Names](http://msdn.microsoft.com/en-us/library/ms229002.aspx) (from [Design Guidelines for Developing Class Libraries](http://msdn.microsoft.com/en-us/library/ms229042.aspx), in which you will also find sections about [properties](http://msdn.microsoft.com/en-us/library/ms229006.aspx) and [choosing between properties and methods](http://msdn.microsoft.com/en-us/library/ms229054.aspx). In that last article, you will find the following:
> Do use a property, rather than a
> method, if the value of the property
> is stored in the process memory and
> the property would just provide access
> to the value. | What are the most common naming conventions in C#? | [
"",
"c#",
"naming-conventions",
""
] |
I have many asynchronous operations in different classes.
When an error occurs, a special exception inherited from System.Exception is thrown.
In some form, I want to catch it and show a MessageBox.
The "Application.ThreadException" event cannot catch it.
In other words, I cannot catch all the exceptions that way when an exception is thrown.
How can I do this? | Subscribe to `AppDomain.CurrentDomain.UnhandledException`; it fires on any unhandled exception and exposes the exception object with details in the event args. | Basically you should handle exceptions on a per-thread basis, with Application.ThreadException as a last resort.
How to do that depends on what async pattern you are using, but for instance when using IAsyncResult the exception is 'stored' for you and thrown when you call EndInvoke(..., iar)
"",
"c#",
".net",
"winforms",
"exception",
""
] |
I've been tasked with creating some sort of service that will take any e-mail sent to an e-mail address and handle the contents of the e-mail (including binary attachments.)
I've got full access to the server (Windows Server 2008) that will run this service, and have decided to code the solution using the .NET framework (3.5).
Now I'm thinking about the different ways I can do this. Since I'm not very experienced in handling e-mails programmatically, the first solution that came into my head was to create an e-mail client that will periodically poll an existing e-mail server for incoming e-mail (using the POP3 protocol) and process them. But there are other ways to do it.
One could be to use IMAP to stay connected to the mail server and fetch e-mails as soon as they arrive (more responsive, but I believe the IMAP protocol is harder to implement.) The follow-up question here is: where can I find reliable e-mail libraries that support the POP3 or the IMAP protocol?
Another could be to somehow configure the e-mail server to directly pipe the e-mails sent to a specific address to my binary code (I've seen this done on Linux servers.) I have no idea how to go about this, though.
The last I can think of would be to create a dummy e-mail server on its own address that handles the e-mail directly as it arrives, but to me this seems like a bad idea.
Does anyone have experience in this area? I would imagine that having to write e-mail handlers for ticket support systems isn't that uncommon, all I need in addition to that is to handle the attachments.
I'd be grateful for any hints and tips. | As with a lot of things - it depends. Ask yourself the following questions:
1. What are your latency requirements--do you need to process incoming messages as quickly as possible, or can processing be batched? If it can be batched, then how often would you have to process the "inbox"?
2. What are your throughput requirements? How many messages are we talking about per minute here? How big are the messages? This would affect the decision you make about polling interval if using a batch scenario;
3. What sort of e-mail system are you integrating with? If it's Exchange, what programmatic interfaces are available to access a mailbox? Until the most recent version of Exchange, interestingly enough, there were issues with accessing a mailbox on an Exchange server (The client-side CDO COM components needed to be used which is not ideal---and there were security limitations).
By far the simplest approach is to poll a mailbox using POP3. However, if you need to respond *immediately* to an incoming message, then this isn't going to cut it.
As far as possible *avoid* writing your own SMTP service--it's been done a thousand times before and you're just creating unnecessary work for yourself and exposing yourself to security threats. If you *absolutely have to* respond immediately to messages, then rather set up an instance of [Sendmail](http://www.sendmail.org/) or [Postfix](http://www.postfix.org/) to spawn a process that you have written.
If you're going to go for the POP3 solution (it looks like you are), then have a read of related questions "[Free POP3 .NET library?](https://stackoverflow.com/questions/26606/free-pop3-net-library)" and "[Reading Email using POP3 in C#](https://stackoverflow.com/questions/44383/reading-email-using-pop3-in-c)". | I've used webdav in the past with c# to access an exchange server periodically and process emails.
This has worked quite well, and I'd probably use that method again if I need to do it. | What methods are there for having .NET code run and handle e-mails as they arrive? | [
"",
"c#",
".net",
"email",
""
] |
How to access DOM of a web page in QtWebKit?
I don't see any methods exposing DOM in QtWebKit... | Right now, as of Qt 4.4/4.5, I don't think there is any direct way, but it's coming. See <http://labs.trolltech.com/blogs/2009/04/07/qwebelement-sees-the-light-do-i-hear-a-booyakasha/> | Currently, you need to do DOM manipulation via JavaScript, injected via
QVariant QWebFrame::evaluateJavaScript(const QString & scriptSource); | How to access DOM of a web page in QtWebKit? | [
"",
"c++",
"qt",
"dom",
"webkit",
"qtwebkit",
""
] |
How can I use the configure and make tools to specify 64-bit libraries? I thought it was automatic, but I get a wrong ELF class error.
I'm trying to compile Xdebug for Ubuntu 64 for use with LAMPP (XAMPP for Linux).
```
./lampp start
Failed loading /opt/lampp/lib/php/extensions/xdebug.so: /opt/lampp/lib/php/extensions/xdebug.so: wrong ELF class: ELFCLASS64
```
The `./configure` looks OK to me, and the make works without errors, I've copied the configure in case its relevant:
```
/xdebug-2.0.3$ ./configure
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for a sed that does not truncate output... /bin/sed
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc and cc understand -c and -o together... yes
checking for system library directory... lib
checking if compiler supports -R... no
checking if compiler supports -Wl,-rpath,... yes
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for PHP prefix... /usr
checking for PHP includes... -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib
checking for PHP extension directory... /usr/lib/php5/20060613
checking for PHP installed headers prefix... /usr/include/php5
checking for re2c... no
configure: WARNING: You will need re2c 0.12.0 or later if you want to regenerate PHP parsers.
checking for gawk... no
checking for nawk... nawk
checking if nawk is broken... no
checking whether to enable eXtended debugging support... yes, shared
checking for gettimeofday... yes
checking for cos in -lm... yes
checking for a sed that does not truncate output... (cached) /bin/sed
checking for fgrep... /bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
checking the name lister (/usr/bin/nm -B) interface... BSD nm
checking whether ln -s works... yes
checking the maximum length of command line arguments... 1572864
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking for /usr/bin/ld option to reload object files... -r
checking for objdump... objdump
checking how to recognize dependent libraries... pass_all
checking for ar... ar
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -B output from gcc object... ok
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC -DPIC
checking if gcc PIC flag -fPIC -DPIC works... yes
checking if gcc static flag -static works... yes
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... GNU/Linux ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
configure: creating ./config.status
config.status: creating config.h
config.status: config.h is unchanged
config.status: executing libtool commands
``` | I'm not sure of the error, but why are you compiling xdebug? It's available in the repositories of Ubuntu as php5-xdebug (`sudo apt-get install php5-xdebug`). Though you don't mention your distribution, perhaps it's under a similar name?
But to answer your question, the error message looks like xdebug *is* 64 bit, but that's a problem (does lampp execute a 32-bit binary somewhere?) | Won't:
```
LD_LIBRARY_PATH=/lib64:$LD_LIBRARY_PATH ./lampp start
```
work? I don't use Ubuntu, so your path might vary. One thing to do is
```
ldd ./lampp
```
to see what the default executable points to. | How to use "make" to use 64 bit libs because of ELFCLASS64 error | [
"",
"c++",
"gcc",
"makefile",
"compilation",
""
] |
At the moment I execute a native process using the following:
```
java.lang.Process process = Runtime.getRuntime().exec(command);
int returnCode = process.waitFor();
```
Suppose instead of waiting for the program to return I wish to terminate if a certain amount of time has elapsed. How do I do this? | This is how the Plexus CommandlineUtils does it:
```
Process p;
p = cl.execute();
...
if ( timeoutInSeconds <= 0 )
{
returnValue = p.waitFor();
}
else
{
long now = System.currentTimeMillis();
long timeoutInMillis = 1000L * timeoutInSeconds;
long finish = now + timeoutInMillis;
while ( isAlive( p ) && ( System.currentTimeMillis() < finish ) )
{
Thread.sleep( 10 );
}
if ( isAlive( p ) )
{
        throw new InterruptedException( "Process timed out after " + timeoutInSeconds + " seconds" );
}
returnValue = p.exitValue();
}
public static boolean isAlive( Process p ) {
try
{
p.exitValue();
return false;
} catch (IllegalThreadStateException e) {
return true;
}
}
``` | All other responses are correct but it can be made more robust and efficient using FutureTask.
For example,
```
private static final ExecutorService THREAD_POOL
= Executors.newCachedThreadPool();
private static <T> T timedCall(Callable<T> c, long timeout, TimeUnit timeUnit)
throws InterruptedException, ExecutionException, TimeoutException
{
FutureTask<T> task = new FutureTask<T>(c);
THREAD_POOL.execute(task);
return task.get(timeout, timeUnit);
}
final java.lang.Process[] process = new Process[1];
try {
int returnCode = timedCall(new Callable<Integer>() {
public Integer call() throws Exception {
process[0] = Runtime.getRuntime().exec(command);
return process[0].waitFor();
}
}, timeout, TimeUnit.SECONDS);
} catch (TimeoutException e) {
process[0].destroy();
// Handle timeout here
}
```
If you do this repeatedly, the thread pool is more efficient because it caches the threads. | Java native process timeout | [
"",
"java",
"process",
""
] |
I work on a project where there is a huge number of objects being instanced by a few classes that stay in memory for the lifetime of the application. There are a lot of memory leaks being caused, with OutOfMemoryExceptions being thrown every now and again. It seems like after the instantiated objects go out of scope, they are not being garbage collected.
I have isolated the problem to being mostly about the event handlers that are attached to the long-living object that are never detached, thus causing the long-living object to still have a reference to the out of scope objects, which then will never be garbage collected.
The solution that has been proposed by my colleagues is as follows: Implement IDisposable on all classes, across the board and in the Dispose method, null all the references in your objects and detach from all event that you attached to.
I believe this is a really really bad idea. Firstly because it's 'overkill' since the problem can be mostly solved by fixing a few problem areas and secondly because the purpose of IDisposable is to release any *unmanaged* resources your objects control, not because you don't trust the garbage collector. So far my arguments have fallen on deaf ears. How can I convince them that this is futile? | By coincidence I just posted this comment elsewhere:
> An reference to an object being
> incorrectly retained is still a
> resource leak. This is why GC programs
> can still have leaks, usually due to
> the Observer pattern - the observer is
> on a list instead the observable and
> never gets taken off it. Ultimately, a
> `remove` is needed for every `add`, just
> as a `delete` is needed for every `new`.
> Exactly the same programming error,
> causing exactly the same problem. A
> "resource" is a really just a pair of
> functions that have to be called an
> equal number of times with
> corresponding arguments, and a
> "resource leak" is what happens when
> you fail to do that.
And you say:
> the purpose of `IDisposable` is to release any
> Unmanaged resources your objects
> controls
Now, the `+=` and `-=` operators on an event are effectively a pair of functions that you have to call an equal number of times with corresponding arguments (the event/handler pair being the corresponding arguments).
Therefore they constitute a *resource.* And as they are not dealt with (or "managed") by the GC for you, it can be helpful to think of them as just another kind of unmanaged resource. As Jon Skeet points out in the comments, unmanaged usually has a specific meaning, but in the context of `IDisposable` I think it's helpful to broaden it to include anything resource-like that has to be "torn down" after it has been "built up".
So event detaching is a very good candidate for handling with `IDisposable`.
Of course, you need to call `Dispose` somewhere, and you don't need to implement it on every single object (just those with event relationships that need management).
Also, bear in mind that if a pair of objects are connected by an event, and you "cast them adrift", by losing all references to them in all other objects, they don't keep each other alive. GC doesn't use reference counting. Once an object (or island of objects) is unreachable, it is up for being collected.
You only have to worry about objects enlisted as event handlers with events on objects that live a long time. e.g. a static event such as `AppDomain.UnhandledException`, or events on your application's main window. | Point them at [Joe Duffy's post about IDisposable/finalizers](http://web.archive.org/web/20130426072619/http://www.bluebytesoftware.com/blog/PermaLink.aspx?guid=88e62cdf-5919-4ac7-bc33-20c06ae539ae) - combined wisdom of many smart people.
I'm currently finding it hard to see a statement there saying "don't implement it when you don't need it" - but aside from anything else, showing them the complexity involved in implementing it properly may well help to dissuade them from it...
Unfortunately, if people won't listen, they won't listen. Try to get them to explain *why* they think they need `IDisposable`. Do they think the garbage collector doesn't work? Show them that it works. If you can convince them that it's doing no good (for most types) then surely they'll stop adding work for themselves...
As Brian says, implementing `IDisposable` isn't going to help with the event problem on its own - it needs to actually be *called* by something. Finalizers aren't going to help you in this case either. They really need to explicitly do *something* to remove the event handlers. | How do I convince my colleagues not to implement IDisposable on everything? | [
"",
"c#",
".net",
"memory-management",
"idisposable",
""
] |
I've been using JSLint to make me feel bad about my JavaScript. It is great, by the way. There is one check that I don't quite understand and I'd like your views, please.
From [jslint.com](http://www.jslint.com/lint.html#scope):
> In languages with block scope, it is usually recommended that variables be declared at the site of first use. But because JavaScript does not have block scope, it is wiser to declare all of a function's variables at the top of the function. **It is recommended that a single var statement be used per function.**
What is the last sentance in bold really saying? I think I should be declaring multiple variables like this?
```
var foo = 1, bar = 2;
```
And, is the "wise" part just a programming style to discourage errors down the line or is there more to it than that?
Thanks for your help. | The problem is that, whether you realise it or not, javascript invisibly moves all the var declarations to the top of the function scope.
so if you have a function like this
```
var i = 5;
function testvar () {
alert(i);
var i=3;
}
testvar();
```
the alert window will contain undefined. because internally, it's been changed into this:
```
var i = 5;
function testvar () {
var i;
alert(i);
i=3;
}
testvar();
```
this is called "hoisting".
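Given that hoisting happens anyway, the single `var` statement JSLint asks for simply makes the source match it (the function and variable names here are illustrative):

```javascript
function makeLabel(id) {
    // One var statement at the top, with multiple declarations --
    // exactly where the interpreter hoists them to anyway.
    var prefix = "item-",
        suffix = "",
        label;

    label = prefix + id + suffix;
    return label;
}

console.log(makeLabel(42)); // → item-42
```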
The reason crockford so strongly advocates var declarations go at the top, is that it makes the code visibly match what it's going to do, instead of allowing invisible and unexpected behavior to occur. | Basically in JavaScript blocks (`{ ... }`) do not introduce a new scope, there is only function-scope, so no scope is created on any other statement.
A variable introduced anywhere in a function is visible everywhere in the function.
For example:
```
function myFunction(){
var one = 1;
if (one){
var two = one + 1;
}
(function () {
var three = one + two;
})();
// at this point both variables *one* and *two* are accessible but
// the variable *three* was declared in the scope of a inner function
// and is not accessible at this point.
}
```
In languages with block scope, it is recommended to declare the variables at the point of first use, but since JavaScript does not have block scope, it is better to declare all of a function's variables at the top of the function.
Check this [article](http://wonko.com/post/try-to-use-one-var-statement-per-scope-in-javascript). | One var per function in JavaScript? | [
"",
"javascript",
"lint",
""
] |
I'm implementing a URL shortening feature in my application in order to provide my users shorter alternative URLs that can be used in Twitter. The point is to be independent from the shortening services that offer this same service and include it as a feature of my web app.
What's the best way to create an unique random sequence of characters of about 6 chars? I plan to use that as an index for the items in my database that will have the alternative URLs.
Edited:
This feature will be used in a job board website, where every new job ad will get a custom URL with the title plus the shorter one to be used in Twitter. That said, the total number of unique 6 char combinations will be more than enough for a long time. | Do you really need 'random', or would 'unique' be sufficient?
Unique is extremely simple - just insert the URL into a database, and convert the sequential id for that record to a base-n number which is represented by your chosen characterset.
For example, if you want to only use [A-Z] in your sequence, you convert the id of the record to a base 26 number, where A=1, B=2,... Z=26. The algorithm is a recursive div26/mod26, where the remainder gives the current character and the quotient is carried into the next step.
Then when retrieving URL, you perform the inverse function, which is to convert the base-26 number back to decimal. Perform SELECT URL WHERE ID = decimal, and you're done!
EDIT:
```
private string alphabet = "abcdefghijklmnopqrstuvwxyz";
// or whatever you want. Include more characters
// for more combinations and shorter URLs
public string Encode(int databaseId)
{
    // Standard base-N conversion: the remainder picks the current character,
    // the quotient is carried into the next round (ids assumed >= 1).
    string encodedValue = String.Empty;
    while (databaseId > 0)
    {
        int remainder;
        databaseId = Math.DivRem(databaseId, alphabet.Length, out remainder);
        encodedValue = alphabet[remainder] + encodedValue;
    }
    return encodedValue;
}

public int Decode(string code)
{
    int returnValue = 0;
    for (int thisPosition = 0; thisPosition < code.Length; thisPosition++)
    {
        char thisCharacter = code[thisPosition];
        returnValue += alphabet.IndexOf(thisCharacter) *
            (int)Math.Pow(alphabet.Length, code.Length - thisPosition - 1);
    }
    return returnValue;
}
``` | The simplest way to make unique sequences is to do this sequentially, ie: aaaaaa aaaaab aaaaac ... These aren't necessarily the prettiest, but will guarantee uniqueness for the first 12230590463 sequences (provided you used a-z and A-Z as unique characters). If you need more URLs than that, you'd need to add a seventh char.
They aren't random sequences, though. If you make random ones, just pick a random char of the 48, 6 times. You'll need to check your existing DB for "used" sequences, though, as you'll be more likely to get collisions. | How can I create an unique random sequence of characters in C#? | [
"",
"c#",
"string",
"random",
""
] |
Due to disk space considerations I'd like to only ever keep one version of any snapshot in my repository. Rather than keeping multiple versions with timestamp suffixes
*e.g. ecommerce-2.3-20090806.145007-1.ear*
How can I set this up? Is this a build setting or a repository (Artifactory) setting?
Thanks! | The simplest (and [recommended](http://wiki.jfrog.org/confluence/display/RTF/Usage)) way is to use non-unique snapshots. If you must use unique snapshots, you can do this in Artifactory by specifying the <maxUniqueSnapshots> property on the <localRepository> definition in artifactory.config.xml
For example:
```
<localRepository>
<key>snapshots</key>
<blackedOut>false</blackedOut>
<handleReleases>false</handleReleases>
<handleSnapshots>true</handleSnapshots>
<maxUniqueSnapshots>1</maxUniqueSnapshots>
<includesPattern>**/*</includesPattern>
<snapshotVersionBehavior>non-unique</snapshotVersionBehavior>
</localRepository>
```
For reference you can do this in [Nexus](http://nexus.sonatype.org/) (via the UI) by setting up a [scheduled service](http://www.sonatype.com/books/nexus-book/reference/config.html#config-sect-scheduled-services), it allows you to specify the minimum number to retain, the maximum period to retain them for, and whether to remove the snapshot if a release version is deployed. | **NOTE THAT THIS FEATURE/CAPABILITY HAS BEEN REMOVED IN MAVEN 3.0**
Just to add something to my own question:
Adding
```
<distributionManagement>
...
<snapshotRepository>
...
<uniqueVersion>false</uniqueVersion>
</snapshotRepository>
...
</distributionManagement>
```
to my parent *pom* also contributed to the solution of this.
See:
<http://i-proving.com/space/Jessamyn+Smith/blog/2008-06-16_1>
To alter the unique settings on the repository in Artifactory - log in as an admin - and select edit on the relevant repo - screenshot here:
<http://wiki.jfrog.org/confluence/display/RTF/Understanding+Repositories> | How to stop Maven/Artifactory from keeping Snapshots with timestamps | [
"",
"java",
"maven",
"maven-2",
"artifactory",
""
] |
I have a sql server nvarchar field with a "Default Value or Binding" of empty string. It also happens to be a not null field.
Does this mean that there is no default, or that it is a default of a string with no characters in it?
If I don't insert a value, will it insert with an empty string or fail with a "not null" error? | The default is a blank (empty) string.
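A runnable illustration of this behaviour (sqlite via Python is used here; `NOT NULL DEFAULT ''` behaves the same way for this case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, data TEXT NOT NULL DEFAULT '')")
conn.execute("INSERT INTO t (id) VALUES (1)")  # no value supplied for data

value = conn.execute("SELECT data FROM t WHERE id = 1").fetchone()[0]
print(repr(value))  # → '' (an empty string, not NULL, and no error)
```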
If you don't provide a value, the insert will be successful and the value will be blank, not null. | It's the same as (assuming `data` is the column in question):
```
create table #t (id int, data varchar(100) not null default(''))
```
So:
```
insert into #t (id) values (1)
insert into #t (id,data) values (2,default)
insert into #t (id,data) values (3, 'allowed')
select * from #t
```
will return
```
1
2
3 allowed
```
and ..
```
insert into #t (id,data) values (1, null)
-- will error
``` | SQL Server 2000 - Default Value for varchar fields | [
"",
"sql",
"sql-server",
"sql-server-2000",
""
] |
I believe that at one point, I saw a TinySVG implementation that worked in the browser, using the canvas element as the backend. I found a few sites that appeared to indicate it was at <http://fuchsia-design.com/CanvaSVG/>, however, that site appears to no longer exist. Is this project (or a similar one) still on the web anywhere? | After a good look around on Google, it looks like CanvaSVG was never much more than a project [hacked together in a couple of days](http://tech.groups.yahoo.com/group/svg-developers/message/50072) by [Antoine Quint](http://www.linkedin.com/in/antoinequint), who now seems to work for Apple.
I did see his site was cached by Google on 2 July, at least, so it hasn't been offline for too long yet, though there's no trace of CanvaSVG in the Google cache, only an outdated blog.
Archive.org does have a [snapshot](http://web.archive.org/web/20070928172356/http%3a//fuchsia-design.com/CanvaSVG/) of that page, though, which includes a download link for the code.
I also found a project [using CanvaSVG](http://code.google.com/p/ajaxanimator/source/browse/trunk/Animator/richdraw/CanvaSVG.js) on Google Code.
Both of those downloads are version 0.1.
I did come across a few other, similar projects, but none that actually converted SVG to `<canvas>`. [SVGCanvas](http://svgkit.sourceforge.net/SVGCanvas.html) goes the other way around, for example. | In an environment where you are able to use the canvas element (firefox etc), you already have built in support for rendering SVG using the img tag.
If you are looking for something a little more cross-browser; I would take a good look at dojo or more specifically [dojox.gfx](http://docs.dojocampus.org/dojox/gfx), which allows SVG rendering using canvas, VML (for IE) or silverlight. It allows you to do all kinds of other very clever things with transformation matrices and draw functions. | SVG Parser in Javascript | [
"",
"javascript",
"canvas",
"svg",
""
] |
I have a test.php script which contains this:
```
<?php echo 'test'; ?>
```
When I point to it via my browser, it works and prints out "test" as expected.
I have another script which I am trying to test; however, it is ignoring all my echos! I thought I would put an echo right at the top, as that would surely work, but when I reach this script via a POST request from a form, it does not even echo out what is on the very first line:
```
<?php echo 'START START START';
error_reporting(E_ALL);
```
I even point to it from my browser and still get no output.
What the hell is going on?
## Update
The error log shows this:
PHP Parse error: syntax error, unexpected T\_ECHO in /var/www/tester/convertor.php
But the echo is at the very top of the file. I am going to set display errors. | You have a parse error, so the script isn't even executed. So it's expected that it won't output anything.
The parse error can be for any echo in the script. It may be because of an "echo" statement after a line missing a semicolon, for example. | Things to try:
* Check your error log
* Check the headers (is it coming back 404?)
* Make sure you view-source: don't just look in the browser
* Delete everything except the echo. If it works, add things back a bit at a time until it breaks.
* Load your script in a hex editor and make sure there aren't any weird characters in there
* Are you including this file from another file? If so, check the other file too. Watch out for output buffering. | Unbelievable PHP script won't echo out string | [
"",
"php",
""
] |
I am assigning values in a C++ program out of the bounds like this:
```
#include <iostream>
using namespace std;
int main()
{
int array[2];
array[0] = 1;
array[1] = 2;
array[3] = 3;
array[4] = 4;
cout << array[3] << endl;
cout << array[4] << endl;
return 0;
}
```
The program prints `3` and `4`. It should not be possible. I am using g++ 4.3.3
Here is compile and run command
```
$ g++ -W -Wall errorRange.cpp -o errorRange
$ ./errorRange
3
4
```
Only when assigning `array[3000]=3000` does it give me a segmentation fault.
If gcc doesn't check for array bounds, how can I be sure if my program is correct, as it can lead to some serious issues later?
I replaced the above code with
```
vector<int> vint(2);
vint[0] = 0;
vint[1] = 1;
vint[2] = 2;
vint[5] = 5;
cout << vint[2] << endl;
cout << vint[5] << endl;
```
and this one also produces no error. | Welcome to every C/C++ programmer's bestest friend: **Undefined Behavior**.
There is a lot that is not specified by the language standard, for a variety of reasons. This is one of them.
In general, whenever you encounter undefined behavior, *anything* might happen. The application may crash, it may freeze, it may eject your CD-ROM drive or make demons come out of your nose. It may format your harddrive or email all your porn to your grandmother.
It may even, if you are really unlucky, *appear* to work correctly.
The language simply says what should happen if you access the elements *within* the bounds of an array. It is left undefined what happens if you go out of bounds. It might *seem* to work today, on your compiler, but it is not legal C or C++, and there is no guarantee that it'll still work the next time you run the program. Or that it hasn't overwritten essential data even now, and you just haven't encountered the problems, that it *is* going to cause — yet.
As for *why* there is no bounds checking, there are a couple aspects to the answer:
* An array is a leftover from C. C arrays are about as primitive as you can get. Just a sequence of elements with contiguous addresses. There is no bounds checking because it is simply exposing raw memory. Implementing a robust bounds-checking mechanism would have been almost impossible in C.
* In C++, bounds-checking is possible on class types. But an array is still the plain old C-compatible one. It is not a class. Further, C++ is also built on another rule which makes bounds-checking non-ideal. The C++ guiding principle is "you don't pay for what you don't use". If your code is correct, you don't need bounds-checking, and you shouldn't be forced to pay for the overhead of runtime bounds-checking.
* So C++ offers the `std::vector` class template, which allows both. `operator[]` is designed to be efficient. The language standard does not require that it performs bounds checking (although it does not forbid it either). A vector also has the `at()` member function which *is guaranteed* to perform bounds-checking. So in C++, you get the best of both worlds if you use a vector. You get array-like performance without bounds-checking, *and* you get the ability to use bounds-checked access when you want it. | Using g++, you can add the command line option: `-fstack-protector-all`.
On your example it resulted in the following:
```
> g++ -o t -fstack-protector-all t.cc
> ./t
3
4
/bin/bash: line 1: 15450 Segmentation fault ./t
```
It doesn't really help you find or solve the problem, but at least the segfault will let you know that *something* is wrong. | Accessing an array out of bounds gives no error, why? | [
"",
"c++",
"arrays",
""
] |
I have to document an MS Access database with many, many macros, queries, etc. I wish to use code to extract each SQL query to a file named the same as the query, e.g. if a query is named q\_warehouse\_issues then I wish to extract the SQL to a file named q\_warehouse\_issues.sql
I DO NOT WISH TO EXPORT THE QUERY RESULT SET, JUST THE SQL!
I know I can do this manually in Access, but I am tired of all the clicking and doing Save As, etc. | This should get you started:
```
Dim db As DAO.Database
Dim qdf As DAO.QueryDef
Set db = CurrentDB()
For Each qdf In db.QueryDefs
Debug.Print qdf.SQL
Next qdf
Set qdf = Nothing
Set db = Nothing
```
You can use the File System Object or the built-in VBA File I/O features to write the SQL out to a file. I assume you were asking more about how to get the SQL than you were about how to write out the file, but if you need that, say so in a comment and I'll edit the post (or someone will post their own answer with instructions for that). | Hope this helps.
```
Public Function query_print()
Dim db As Database
Dim qr As QueryDef
Set db = CurrentDb
For Each qr In db.QueryDefs
TextOut (qr.Name)
TextOut (qr.SQL)
TextOut (String(100, "-"))
Next
End Function
Public Sub TextOut(OutputString As String)
Dim fh As Long
fh = FreeFile
Open "c:\File.txt" For Append As fh
Print #fh, OutputString
Close fh
End Sub
``` | Export all MS Access SQL queries to text files | [
"",
"sql",
"vba",
"ms-access",
""
] |
I just learned about the `serialize()` and `unserialize()` functions. What are some uses for this? I know people serialize things to put into a database. Could you give me some example uses where it is helpful?
I also see serialized code in javascript, is this the same? Can a serialized string in javascript be unserialized with php `unserialize()`? | PHP serialize allows you to keep an array or object in a text form. When assigning arrays to things like $\_SESSION, it allows PHP to store it in a text file, and then recreate it later. Serialize is used like this for objects and variables. (Just make sure you have declared the class the object uses beforehand)
Wordpress on the other hand uses it for a very similar method, by storing the serialized arrays directly in a database. If you keep a database of keys => values, this could be very beneficial because of the flexibility of arrays, you can store anything in the value parameter.
And here's the link (courtesy of the first commenter): <https://www.php.net/serialize> | I often see serialized data stored in a database, and I really don't like this:
* it's really hard to work in SQL with that data: how do you write conditions on serialized data? Even harder: how do you update it? Do you really write a PHP script that fetches every line, unserializes it, modifies it, re-serializes it, and stores it back in the DB? :-(
* the day you want to migrate your data to another piece of software, it'll require more work to migrate the data *(and even more work if the new software is not written in PHP, btw)*
Still, I admit it is an easy way to store not-well-defined data... and I do sometimes use it for that...
Another use for serialization is to facilitate data exchange between two systems: sending objects through some kind of webservice, for instance, requires them to be serialized in some kind of way.
If the two systems are PHP, you could envisage using `serialize`/`unserialize`. But, here too, what if one of the systems is not PHP anymore? Using JSON or SOAP is probably a better choice: a bit harder at first, but probably a more long-term solution, as those formats are known in other languages too.
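To make the JSON point concrete on the JavaScript side of the question (a sketch; the PHP half is assumed to use `json_decode()` on the receiving end):

```javascript
// A structure serialized in JavaScript...
var data = { user: "bob", tags: ["a", "b"] };
var wire = JSON.stringify(data);

// ...can be read back by any JSON-aware language, unlike the output of
// PHP's serialize(), which only PHP's unserialize() understands.
var back = JSON.parse(wire);
console.log(back.user); // bob
```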
One thing I use PHP's `serialize` function is to store data in cache *(like APC's user's cache)*, in a PHP application : you cannot store objects as-is : you have to serialize them. As the cache is used only by one application, it is not necessary to use a format known by many languages ; so, serialize is OK... And, to store data in cache, you should use a really fast serialization function -- and serialize is pretty fast ^^ | What is PHP's serialize() function for? | [
"",
"php",
"serialization",
""
] |
I have some Python code that executes an external app which works fine when the app has a small amount of output, but hangs when there is a lot. My code looks like:
```
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
errcode = p.wait()
retval = p.stdout.read()
errmess = p.stderr.read()
if errcode:
log.error('cmd failed <%s>: %s' % (errcode,errmess))
```
There are comments in the docs that seem to indicate the potential issue. Under wait, there is:
> Warning: This will deadlock if the child process generates enough output to a `stdout` or `stderr` pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use `communicate()` to avoid that.
though under communicate, I see:
Note The data read is buffered in memory, so do not use this method if the data size is large or unlimited.
So it is unclear to me that I should use either of these if I have a large amount of data. They don't indicate what method I should use in that case.
I do need the return value from the exec, and I do parse and use both `stdout` and `stderr`.
So what is an equivalent method in Python to exec an external app that is going to have large output? | You're doing blocking reads to two files; the first needs to complete before the second starts. If the application writes a lot to `stderr`, and nothing to `stdout`, then your process will sit waiting for data on `stdout` that isn't coming, while the program you're running sits there waiting for the stuff it wrote to `stderr` to be read (which it never will be--since you're waiting for `stdout`).
There are a few ways you can fix this.
The simplest is to not intercept `stderr`; leave `stderr=None`. Errors will be output to `stderr` directly. You can't intercept them and display them as part of your own message. For commandline tools, this is often OK. For other apps, it can be a problem.
Another simple approach is to redirect `stderr` to `stdout`, so you only have one incoming file: set `stderr=STDOUT`. This means you can't distinguish regular output from error output. This may or may not be acceptable, depending on how the application writes output.
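A minimal sketch of that second approach (`run_merged` and its use of the question's `cmd` are illustrative; note the merged output is still collected in memory, so this suits large-but-bounded output):

```python
import subprocess

def run_merged(cmd):
    # stderr is folded into stdout, so there is only one pipe to drain
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    output, _ = p.communicate()  # reads everything, then reaps the process
    return p.returncode, output
```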
The complete and complicated way of handling this is `select` (<http://docs.python.org/library/select.html>). This lets you read in a non-blocking way: you get data whenever data appears on either `stdout` or `stderr`. I'd only recommend this if it's really necessary. This probably doesn't work in Windows. | Reading `stdout` and `stderr` independently with very large output (ie, lots of megabytes) using `select`:
```
import subprocess, select
proc = subprocess.Popen(cmd, bufsize=8192, shell=False, \
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
with open(outpath, "wb") as outf:
dataend = False
while (proc.returncode is None) or (not dataend):
proc.poll()
dataend = False
ready = select.select([proc.stdout, proc.stderr], [], [], 1.0)
if proc.stderr in ready[0]:
data = proc.stderr.read(1024)
if len(data) > 0:
handle_stderr_data(data)
if proc.stdout in ready[0]:
data = proc.stdout.read(1024)
if len(data) == 0: # Read of zero bytes means EOF
dataend = True
else:
outf.write(data)
``` | Using subprocess.Popen for Process with Large Output | [
"",
"python",
"subprocess",
""
] |
I'm having a hard time finding a solution to this and am pretty sure that regex supports it. I just can't recall the name of the concept in the world of regex.
I need to search and replace a string for a specific pattern, but the patterns can be different and the replacement needs to "remember" what it's replacing.
For example, say I have an arbitrary string: 134kshflskj9809hkj
and I want to surround the numbers with parentheses,
so the result would be: (134)kshflskj(9809)hkj
Finding numbers is simple enough, but how to surround them?
Can anyone provide a sample or point me in the right direction? | In various languages:
```
// C#:
string result = Regex.Replace(input, @"(\d+)", "($1)");
// JavaScript:
thestring.replace(/(\d+)/g, '($1)');
// Perl:
s/(\d+)/($1)/g;
// PHP:
$result = preg_replace("/(\d+)/", '($1)', $input);
```
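A Python version of the same substitution (a sketch; Python isn't among the examples above):

```python
import re

# Each run of digits is captured by the group and wrapped in parentheses.
result = re.sub(r"(\d+)", r"(\1)", "134kshflskj9809hkj")
print(result)  # (134)kshflskj(9809)hkj
```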
The parentheses around `(\d+)` make it a "group", specifically the first (and only, in this case) group, which can be [backreferenced](http://www.regular-expressions.info/brackets.html) in the replacement string. The `g` flag is required in some implementations to make it match multiple times in a single string. The replacement string is fairly similar, although some languages will use `\1` instead of `$1` and some will allow both. | Most regex replacement functions allow you to reference capture groups specified in the regex (a.k.a. backreferences) when defining your replacement string. For instance, using `preg_replace()` from PHP:
```
$var = "134kshflskj9809hkj";
$result = preg_replace('/(\d+)/', '(\1)', $var);
// $result now equals "(134)kshflskj(9809)hkj"
```
where `\1` means "the first capture group in the regex". | Regex search and replace where the replacement is a mod of the search term | [
"",
"c#",
"regex",
""
] |
I want my widget to look exactly like it does now, except to be smaller. It includes buttons, labels, text, images, etc. Is there any way to just say "scale this to be half the size", and have GTK do all the image processing, widget resizing, etc., necessary? If not, what's the easiest way to accomplish this? | Resolution independence has been worked on by some gtk devs, and here is an update with a very big patch to introduce it into GTK. The patch is however a year old now and it is still unclear how/when/if it is going to be included: (screenshots at the end)
<http://mail.gnome.org/archives/gtk-devel-list/2008-August/msg00044.html> | Changing the theme from the user interface is not something that I recommend, but you can do it if you require it. Using a custom gtkrc may help you to change the font and the way the buttons are drawn, mostly because of the xthickness and ythickness.
```
import gtk
file = "/path/to/the/gtkrc"
gtk.rc_parse(file)
gtk.rc_add_default_file(file)
gtk.rc_reparse_all()
```
And the custom gtkrc may look like this:
```
gtk_color_scheme = "fg_color:#ECE9E9;bg_color:#ECE9E9;base_color:#FFFFFF;text_color:#000000;selected_bg_color:#008DD7;selected_fg_color:#FFFFFF;tooltip_bg_color:#000000;tooltip_fg_color:#F5F5B5"
style "theme-fixes" {
fg[NORMAL] = @fg_color
fg[PRELIGHT] = @fg_color
fg[SELECTED] = @selected_fg_color
fg[ACTIVE] = @fg_color
fg[INSENSITIVE] = darker (@bg_color)
bg[NORMAL] = @bg_color
bg[PRELIGHT] = shade (1.02, @bg_color)
bg[SELECTED] = @selected_bg_color
bg[INSENSITIVE] = @bg_color
bg[ACTIVE] = shade (0.9, @bg_color)
base[NORMAL] = @base_color
base[PRELIGHT] = shade (0.95, @bg_color)
base[ACTIVE] = shade (0.9, @selected_bg_color)
base[SELECTED] = @selected_bg_color
base[INSENSITIVE] = @bg_color
text[NORMAL] = @text_color
text[PRELIGHT] = @text_color
text[ACTIVE] = @selected_fg_color
text[SELECTED] = @selected_fg_color
text[INSENSITIVE] = darker (@bg_color)
GtkTreeView::odd_row_color = shade (0.929458256, @base_color)
GtkTreeView::even_row_color = @base_color
GtkTreeView::horizontal-separator = 12
font_name = "Helvetica World 7"
}
class "*" style "theme-fixes"
``` | In GTK, is there an easy way to scale all widgets by an arbitrary amount? | [
"",
"python",
"user-interface",
"gtk",
"pygtk",
""
] |
Say I have a HTML form containing this select element:
```
<select name="mySelect" id="mySelect">
<option value="1" id="option1">1</option>
<option value="2" id="option2">2</option>
</select>
```
How can I use prototype to select one of the option elements?
The methods listed in [the API reference of Form.Element](http://www.prototypejs.org/api/form/element) don't seem to help with this.
edit: by "select" I mean the equivalent effect of the "selected" attribute on an option element. | ```
var options = $$('select#mySelect option');
var len = options.length;
for (var i = 0; i < len; i++) {
console.log('Option text = ' + options[i].text);
console.log('Option value = ' + options[i].value);
}
```
`options` is an array of all option elements in `#mySelect` dropdown. If you want to mark one or more of them as selected just use `selected` property
```
// replace 1 with index of an item you want to select
options[1].selected = true;
``` | To get the currently selected option, use:
```
$$('#mySelect option').find(function(ele){return !!ele.selected})
``` | How do I select an option using prototype | [
"",
"javascript",
"html",
"prototypejs",
""
] |
This one will need a bit of explanation...
I want to set up IIS automagically so our Integration tests (using Watin) can run on any environment. To that end I want to create a Setup and Teardown that will create and destroy a virtual directory respectively.
The solution I thought of was to use the [MSBuild Community Tasks](http://msbuildtasks.tigris.org/) to automate IIS in code with a mocked IBuildEngine.
However, when I try to create a virtual directory, I get the following error code: **0x80005008**.
Edit: I removed artifacts from earlier attempts, and now I get an **IndexOutOfRangeException**
I'm on Vista with IIS7.
This is the code I'm using to run the task:
```
var buildEngine = new MockedBuildEngine();
buildEngine.OnError += (o, e) => Console.WriteLine("ERROR" + e.Message);
buildEngine.OnMessage += (o, e) => Console.WriteLine("MESSAGE" + e.Message);
buildEngine.OnWarning += (o, e) => Console.WriteLine("WARNING: " + e.Message);
var createWebsite = new MSBuild.Community.Tasks.IIS.WebDirectoryCreate()
{
ServerName = "localhost",
VirtualDirectoryPhysicalPath = websiteFolder.FullName,
VirtualDirectoryName = "MedicatiebeheerFittests",
AccessRead = true,
AuthAnonymous = true,
AnonymousPasswordSync = false,
AuthNtlm = true,
EnableDefaultDoc = true,
BuildEngine = buildEngine
};
createWebsite.Execute();
```
Does somebody know what is going on here? Or does someone know a better way of doing this? | This is the code my application uses that references the System.DirectoryServices dll:
```
using System.Collections.Generic;
using System.DirectoryServices;
using System.IO;
using System.Linq;
/// <summary>
/// Creates the virtual directory.
/// </summary>
/// <param name="webSite">The web site.</param>
/// <param name="appName">Name of the app.</param>
/// <param name="path">The path.</param>
/// <returns></returns>
/// <exception cref="Exception"><c>Exception</c>.</exception>
public static bool CreateVirtualDirectory(string webSite, string appName, string path)
{
var schema = new DirectoryEntry("IIS://" + webSite + "/Schema/AppIsolated");
bool canCreate = !(schema.Properties["Syntax"].Value.ToString().ToUpper() == "BOOLEAN");
schema.Dispose();
if (canCreate)
{
bool pathCreated = false;
try
{
var admin = new DirectoryEntry("IIS://" + webSite + "/W3SVC/1/Root");
//make sure folder exists
if (!Directory.Exists(path))
{
Directory.CreateDirectory(path);
pathCreated = true;
}
//If the virtual directory already exists then delete it
IEnumerable<DirectoryEntry> matchingEntries = admin.Children.Cast<DirectoryEntry>().Where(v => v.Name == appName);
foreach (DirectoryEntry vd in matchingEntries)
{
admin.Invoke("Delete", new[] { vd.SchemaClassName, appName });
admin.CommitChanges();
break;
}
//Create and setup new virtual directory
DirectoryEntry vdir = admin.Children.Add(appName, "IIsWebVirtualDir");
vdir.Properties["Path"][0] = path;
vdir.Properties["AppFriendlyName"][0] = appName;
vdir.Properties["EnableDirBrowsing"][0] = false;
vdir.Properties["AccessRead"][0] = true;
vdir.Properties["AccessExecute"][0] = true;
vdir.Properties["AccessWrite"][0] = false;
vdir.Properties["AccessScript"][0] = true;
vdir.Properties["AuthNTLM"][0] = true;
vdir.Properties["EnableDefaultDoc"][0] = true;
vdir.Properties["DefaultDoc"][0] =
"default.aspx,default.asp,default.htm";
vdir.Properties["AspEnableParentPaths"][0] = true;
vdir.CommitChanges();
//the following are acceptable params
//INPROC = 0, OUTPROC = 1, POOLED = 2
vdir.Invoke("AppCreate", 1);
return true;
}
catch (Exception)
{
if (pathCreated)
Directory.Delete(path);
throw;
}
}
return false;
}
``` | What about using the same approach suggested by [Dylan](https://stackoverflow.com/users/3074/dylan), but directly in C#?
See: [Creating Sites and Virtual Directories Using System.DirectoryServices](http://msdn.microsoft.com/en-us/library/ms524896.aspx) for a starting point. | Creating Virtual directory in IIS with c# | [
"",
"c#",
"iis",
"msbuild",
""
] |
I have the these tables:
```
- Users
- id
- Photos
- id
- user_id
- Classifications
- id
- user_id
- photo_id
```
I would like to order Users by the total number of Photos + Classifications which they own.
I wrote this query:
```
SELECT users.id,
COUNT(photos.id) AS n_photo,
COUNT(classifications.id) AS n_classifications,
(COUNT(photos.id) + COUNT(classifications.id)) AS n_sum
FROM users
LEFT JOIN photos ON (photos.user_id = users.id)
LEFT JOIN classifications ON (classifications.user_id = users.id)
GROUP BY users.id
ORDER BY (COUNT(photos.id) + COUNT(classifications.id)) DESC
```
The problem is that this query does not work as I expect and returns high numbers while I have only a few photos and classifications in the db. It returns something like this:
```
id n_photo n_classifications n_sum
29 19241 19241 38482
16 16905 16905 33810
1 431 0 431
...
``` | You are missing distinct.
```
SELECT U.ID, COUNT(DISTINCT P.Id)+COUNT(DISTINCT C.Id) Count
FROM User U
LEFT JOIN Photos P ON P.User_Id=U.Id
LEFT JOIN Classifications C ON C.User_Id=U.Id
GROUP BY U.Id
ORDER BY COUNT(DISTINCT P.Id)+COUNT(DISTINCT C.ID)
``` | I could be misinterpreting your schema, but shouldn't this:
```
LEFT JOIN classifications ON (classifications.user_id = users.id)
```
Be this:
```
LEFT JOIN classifications ON (classifications.user_id = users.id)
AND (classifications.photo_id = photos.id)
```
? | Complex SQL query | [
"",
"sql",
""
] |
I read in a lot of books that C is a subset of C++.
Some books say that C is a subset of C++, *except for the little details*.
What are some cases when code will compile in C, but not C++? | If you compare `C89` with `C++` then here are a couple of things
### No tentative definitions in C++
```
int n;
int n; // ill-formed: n already defined
```
### int[] and int[N] not compatible (no compatible types in C++)
```
int a[1];
int (*ap)[] = &a; // ill-formed: a does not have type int[]
```
### No K&R function definition style
```
int b(a) int a; { } // ill-formed: grammar error
```
### Nested struct has class-scope in C++
```
struct A { struct B { int a; } b; int c; };
struct B b; // ill-formed: b has incomplete type (*not* A::B)
```
### No default int
```
auto a; // ill-formed: type-specifier missing
```
---
C99 adds a whole lot of other cases
### No special handling of declaration specifiers in array dimensions of parameters
```
// ill-formed: invalid syntax
void f(int p[static 100]) { }
```
### No variable length arrays
```
// ill-formed: n is not a constant expression
int n = 1;
int an[n];
```
### No flexible array member
```
// ill-formed: fam has incomplete type
struct A { int a; int fam[]; };
```
### No restrict qualifier for helping aliasing analysis
```
// ill-formed: two names for one parameter?
void copy(int *restrict src, int *restrict dst);
``` | In C, `sizeof('a')` is equal to `sizeof(int)`.
In C++, `sizeof('a')` is equal to `sizeof(char)`. | Where is C not a subset of C++? | [
"",
"c++",
"c",
""
] |
just wondering how I can check to see if a text variable is contained within another variable. like
```
var A = "J";
var B = "another J";
```
something like :contains but for variables.
thanks | Javascript itself has a function for this: indexOf.
```
alert("blaat".indexOf('a') != -1);
``` | Assuming you mean you want to find whether the contents of A are in B, just use the following:
```
var found = !(B.indexOf(A) == -1);
``` | How to check if a text variable is contained within another variable | [
"",
"javascript",
"jquery",
""
] |
I'm implementing an EJB-based system in JBoss.
One of my message driven beans will be responsible for sending emails. I want the email template to be stored externally (probably as XML) so that it can easily be changed without having to change the code/redeploy the bean, etc.
Where should this file be placed and how do I reference it? | The JBoss documentation specifies that the environment variable jboss.server.data.dir is the "location available for use by services that want to store content in the file system". See [here](http://docs.jboss.org/jbossas/guides/installguide/r1/en/html/dirs.html) for more details.
You can get the value of the variable by
```
System.getProperty("jboss.server.data.dir");
```
And, as shown in the link, the location of the server/[config]/data directory will be returned. Store the template file there when you deploy your app, and instruct your admins to modify it there. | This [question](https://stackoverflow.com/questions/883858/what-is-the-best-practice-for-reading-property-files-in-j2ee) and this [blog post](http://minddiary.com/2007/09/14/reading-properties-file-from-outside-of-a-jar-file/) refer to property files, but what is discussed also applies for other kind of files. One solution would be to place the XML file in a standard location and read it like this (code taken from the link above):
```
String path = System.getProperty("catalina.base")
+ System.getProperty("file.separator")
+ "YOUR_FILE.properties";
FileInputStream fis = new FileInputStream(path);
```
Instead of catalina.base you can use user.dir or even define your own environmental variable (mypath\_to\_xml\_file) and read it with System.getProperty.
Another solution would be to use JNDI to define the path to the file. | JBoss/EJB - location of custom configuration file | [
"",
"java",
"jboss",
"ejb-3.0",
"configuration-files",
""
] |
I have a java app that uses log4j.
Config:
```
log4j.rootLogger=info, file
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=${user.home}/logs/app.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d [%t] %c %p %m%n
```
So all the log statements are correctly appended to the file, but i am losing stdout and stderr. How do i redirect exception stack traces and sysouts to the daily rolled file ? | ```
// I set up a ConsoleAppender in Log4J to format Stdout/Stderr
log4j.rootLogger=DEBUG, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=[%t] %-5p %c - %m%n
// And I call this StdOutErrLog.tieSystemOutAndErrToLog() on startup
public class StdOutErrLog {
private static final Logger logger = Logger.getLogger(StdOutErrLog.class);
public static void tieSystemOutAndErrToLog() {
System.setOut(createLoggingProxy(System.out));
System.setErr(createLoggingProxy(System.err));
}
public static PrintStream createLoggingProxy(final PrintStream realPrintStream) {
return new PrintStream(realPrintStream) {
public void print(final String string) {
realPrintStream.print(string);
logger.info(string);
}
};
}
}
``` | I picked up the idea from Michael S., but like mentioned in one comment, it has some problems: it doesn't capture everything, and it prints some empty lines.
Also I wanted to separate `System.out` and `System.err`, so that `System.out` gets logged with log level `'INFO'` and `System.err` gets logged with `'ERROR'` (or `'WARN'` if you like).
So this is my solution:
First a class that extends `OutputStream` (it's easier to override all methods for `OutputStream` than for `PrintStream`). It logs with a specified log level and also copies everything to another `OutputStream`. And also it detects "empty" strings (containing whitespace only) and does not log them.
```
import java.io.IOException;
import java.io.OutputStream;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
public class LoggerStream extends OutputStream
{
private final Logger logger;
private final Level logLevel;
private final OutputStream outputStream;
public LoggerStream(Logger logger, Level logLevel, OutputStream outputStream)
{
super();
this.logger = logger;
this.logLevel = logLevel;
this.outputStream = outputStream;
}
@Override
public void write(byte[] b) throws IOException
{
outputStream.write(b);
String string = new String(b);
if (!string.trim().isEmpty())
logger.log(logLevel, string);
}
@Override
public void write(byte[] b, int off, int len) throws IOException
{
outputStream.write(b, off, len);
String string = new String(b, off, len);
if (!string.trim().isEmpty())
logger.log(logLevel, string);
}
@Override
public void write(int b) throws IOException
{
outputStream.write(b);
String string = String.valueOf((char) b);
if (!string.trim().isEmpty())
logger.log(logLevel, string);
}
}
```
And then a very simple utility class to set `out` and `err`:
```
import java.io.PrintStream;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
public class OutErrLogger
{
public static void setOutAndErrToLog()
{
setOutToLog();
setErrToLog();
}
public static void setOutToLog()
{
System.setOut(new PrintStream(new LoggerStream(Logger.getLogger("out"), Level.INFO, System.out)));
}
public static void setErrToLog()
{
System.setErr(new PrintStream(new LoggerStream(Logger.getLogger("err"), Level.ERROR, System.err)));
}
}
``` | log4j redirect stdout to DailyRollingFileAppender | [
"",
"java",
"file",
"redirect",
"log4j",
"stdout",
""
] |
```
cipher = new Dictionary<char, int>();
cipher.Add( 'a', 324 );
cipher.Add( 'b', 553 );
cipher.Add( 'c', 915 );
```
How to get the 2nd element? For example, I'd like something like:
```
KeyValuePair pair = cipher[1]
```
Where pair contains `( 'b', 553 )`
---
Based on the coop's suggestion using a List, things are working:
```
List<KeyValuePair<char, int>> cipher = new List<KeyValuePair<char, int>>();
cipher.Add( new KeyValuePair<char, int>( 'a', 324 ) );
cipher.Add( new KeyValuePair<char, int>( 'b', 553 ) );
cipher.Add( new KeyValuePair<char, int>( 'c', 915 ) );
KeyValuePair<char, int> pair = cipher[ 1 ];
```
Assuming that I'm correct that the items stay in the list in the order they are added, I believe that I can just use a `List` as opposed to a `SortedList` as suggested. | The problem is a Dictionary isn't sorted. What you want is a [SortedList](http://msdn.microsoft.com/en-us/library/ms132319.aspx), which allows you to get values by index as well as key, although you may need to specify your own comparer in the constructor to get the sorting you want. You can then access an ordered list of the Keys and Values, and use various combinations of the IndexOfKey/IndexOfValue methods as needed. | like this:
```
int n = 0;
int nthValue = cipher[cipher.Keys.ToList()[n]];
```
Note that you will also need a reference to LINQ at the top of your page...
```
using System.Linq;
``` | How do I get the nth element from a Dictionary? | [
"",
"c#",
"dictionary",
""
] |
Give me the best standard way of structuring a PHP project: where to store my CSS, PHP files, images, etc. How should I separate my folders, how many should there be, and what should each be named? | What I usually do is the following:
```
-Main php files
-private
|_private web zone files
-images
|_image files
-flash
|_flash files
-script
|_javascript files
-css
|_css files
```
and so on
hope I've helped you | There is no standard. PHP is a language, not a framework, and as with any language, you can organize your project however you see fit.
However, there are some great frameworks written in PHP that have a directory structure and provide tools, etc. For example, [Cake PHP](http://cakephp.org) and [Code Igniter](http://codeigniter.com/). | What is the standard of coding in PHP in the sense separating css, phpfles and database | [
"",
"php",
"mysql",
"css",
"standards",
""
] |
> **Possible Duplicate:**
> [500 - An error has occurred! DB function reports no errors when adding new article in Joomla!](https://stackoverflow.com/questions/1304809/500-an-error-has-occurred-db-function-reports-no-errors-when-adding-new-arti)
I have an article that I want to publish on my Joomla! site. Every time I click Apply or Save, I get the error `500 - An error has occurred! DB function reports no errors`. I have no idea why this error comes up; all I can think is that it's a server error.
I'm using TinyMCE to type articles together with Joomla! 1.5.11.
Updated: I turned on Maximum error reporting in Joomla! and in the article manager I tried to save the article and got these couple of errors. Please check screenshot
[](https://i.stack.imgur.com/VDSPB.png)
(source: [techportal.co.za](http://techportal.co.za/images/articles/error.png))
I tried adding
```
<?php
ini_set('error_reporting', E_ALL);
error_reporting(E_ALL);
ini_set('log_errors',TRUE);
ini_set('html_errors',TRUE);
ini_set('display_errors',true);
?>
```
at the top of the index.php pages for Joomla! but it does not show any errors. I checked the error logs on the server and also no errors come up.
I managed to publish the article via phpMyAdmin, but then something else happens. When I try to access the article from the front end by clicking its link, only a blank page comes up.
This is really weird, since the error log does not show any information. So I assume the error needs to be coming from Joomla!
This happens if I add a `print_r($_POST)` before `if (!$row->check()) {`:
```
Array
(
[title] => Test.
[state] => 0
[alias] => test
[frontpage] => 0
[sectionid] => 10
[catid] => 44
[details] => Array
(
[created_by] => 62
[created_by_alias] =>
[access] => 0
[created] => 2008-10-25 13:31:21
[publish_up] => 2008-10-25 13:31:21
[publish_down] => Never
)
[params] => Array
(
[show_title] =>
[link_titles] =>
[show_intro] =>
[show_section] =>
[link_section] =>
[show_category] =>
[link_category] =>
[show_vote] =>
[show_author] => 1
[show_create_date] => 0
[show_modify_date] => 0
[show_pdf_icon] =>
[show_print_icon] =>
[show_email_icon] =>
[language] =>
[keyref] =>
[readmore] =>
)
[meta] => Array
(
[description] => Test.
[keywords] => Test
[robots] =>
[author] => Test
)
[id] => 58
[cid] => Array
(
[0] => 58
)
[version] => 30
[mask] => 0
[option] => com_content
[task] => apply
[ac1e0853fb1b3f41730c0d52de89dab7] => 1
)
``` | The exception is being thrown here in `/administrator/components/com_content/controller.php` (around line 693):
```
if (!$row->check()) {
JError::raiseError( 500, $db->stderr() );
return false;
}
```
The `check()` function only returns false in two cases: either the `title` or the `introtext` is empty.
What I would do in your case is edit the `controller.php` file above and echo a `var_dump` of `$row` before the error is raised. It might be that no data is coming in from `$_POST`.
---
**Edit**: It looks like there's no body of your article being sent through in your `$_POST`. This is most likely related to the form that submits the data. On the page where you are trying to create the article, take a look at the HTML source. In my Joomla installation, the textarea is named `"text"`. Make sure it has that name, and that nothing else in that form is named `"text"`. | It's a fatal server error. We cannot solve your problem until we know what the error message is, so the first step is to look through the log files and turn on debug output to find it.
To enable all debug output add the following code to the top of your page:
```
<?php
ini_set('error_reporting', E_ALL);
error_reporting(E_ALL);
ini_set('log_errors',TRUE);
ini_set('html_errors',TRUE);
ini_set('display_errors',true);
?>
```
It looks like you are trying to add a content article in the admin.
In components/com_content/controller.php on line 693 we see an error is raised because the data you tried to save was not valid. Either you have installed an extension that is messing up Joomla, or the system has become unstable in some other way.
I recommend removing extensions; if that doesn't help, reinstall Joomla to fix your problem. | DB function reports no errors when adding new article in Joomla | [
"",
"php",
"mysql",
"joomla",
"joomla1.5",
""
] |
How can I convert a `std::string` to `LPCSTR`? Also, how can I convert a `std::string` to `LPWSTR`?
I am totally confused with these `LPCSTR` `LPSTR` `LPWSTR` and `LPCWSTR`.
Are `LPWSTR` and `LPCWSTR` the same? | `str.c_str()` gives you a `const char *`, which is an `LPCSTR` (Long Pointer to Constant STRing) -- means that it's a pointer to a `0` terminated string of characters. `W` means wide string (composed of `wchar_t` instead of `char`). | Call `c_str()` to get a `const char *` (`LPCSTR`) from a `std::string`.
It's all in the name:
`LPSTR` - (long) pointer to string - `char *`
`LPCSTR` - (long) pointer to constant string - `const char *`
`LPWSTR` - (long) pointer to Unicode (wide) string - `wchar_t *`
`LPCWSTR` - (long) pointer to constant Unicode (wide) string - `const wchar_t *`
`LPTSTR` - (long) pointer to TCHAR (Unicode if UNICODE is defined, ANSI if not) string - `TCHAR *`
`LPCTSTR` - (long) pointer to constant TCHAR string - `const TCHAR *`
You can ignore the L (long) part of the names -- it's a holdover from 16-bit Windows. | How to convert std::string to LPCSTR? | [
"",
"c++",
"windows",
"string",
""
] |
Whenever I try to call a stored procedure in MySQL that sends back a result set, it keeps telling me "can't return a result set in the given context".
I've googled it; some said it's a MySQL bug, some said you should change your mysqli driver, and so on.
Situation :
Using mysqli driver Client API library version 5.0.51a , PHP Version 5.2.4-2ubuntu5.6, Using Zend 1.9 RC 1 Mysqli adapter.
What should I do!? | Not sure this is the solution to your problem, but what about trying with a more recent version of PHP ?
PHP 5.2.4 is definitely quite old -- so, if it's a bug in PHP's mysqli driver, it might have been corrected since...
Actually, after a quick search, it seems a problem like the one you are witnessing has been introduced between PHP 5.2.3 and PHP 5.2.4 (and was still here in PHP 5.2.5).
See [bug #42548 : PROCEDURE xxx can't return a result set in the given context (works in 5.2.3!!)](http://bugs.php.net/bug.php?id=42548)
Are you able to test with something like PHP 5.2.9 or 5.2.10 ?
I know these are not provided by Ubuntu, even in the last Ubuntu stable version :-( You might have to compile from sources :-(
---
Yet another idea would be to try with the PDO_MySql adapter: maybe it would work with that one?
It might be possible to change Adapter without causing too much trouble / without taking hours to test ?
---
As you are working with Zend Framework 1.9, here's another post that might interest you, and might be easier to test : [stored procedure error after upgrade to 1.8](http://www.nabble.com/stored-procedure-error-after-upgrade-to-1.8-tt23872061.html#a23872061)
An easy solution to try that would be to go back to Zend Framework 1.7 ; would it be possible for you, just to test ?
Anyway... Good luck !
And, if you find the solution, don't forget to indicate what the problem was, and how you solved it ;-) | The answer is to upgrade your PHP; I've just upgraded mine to 5.3.0, and it works like candy!
"",
"php",
"zend-framework",
"mysqli",
"multi-select",
""
] |
I am using the following code to output a Crystal Report to an ASP.NET application:
```
Dim rptDocument As New ReportDocument
Dim rptPath As String = Server.MapPath("Reports/Employees.rpt")
rptDocument.Load(rptPath)
Me.CrystalReportViewer1.ReportSource = rptDocument
```
Everything is working fine. My question is, is there a way to render the report as a PDF file instead of rendering to a crystalreportviewer?
I am using Visual Studio 2008 and Crystal Reports for Visual Studio 2008. | Yes, you can use [ExportToHttpResponse](http://msdn.microsoft.com/en-us/library/ms227645(VS.80).aspx). Set the [ExportFormatType](http://msdn.microsoft.com/en-us/library/ms226443(VS.80).aspx) to PortableDocFormat. Check out this [tutorial](http://www.beansoftware.com/ASP.NET-Tutorials/Export-Crystal-Reports-To-PDF.aspx). | Well, if you want to view it as a PDF, you would:
1.) Export the report
2.) Load the report in a "viewer" control of some kind or load the report and let Acrobat Reader do all the work.
With the runtime licences that Crystal gives you for ASP.NET development (which are crap, BTW), using a PDF makes a lot of sense.
This is how I figured out how to export to PDF (there might be a better way):
```
Dim rptDocument As New ReportDocument
Dim rptPath As String = Server.MapPath("Reports/Employees.rpt")
Dim crExportOptions As ExportOptions
Dim crDiskFileDestinationOptions As New DiskFileDestinationOptions()
rptDocument.Load(rptPath)
crDiskFileDestinationOptions.DiskFileName = "**Path**"
crExportOptions = rptDocument.ExportOptions
crExportOptions.ExportDestinationType = ExportDestinationType.DiskFile
crExportOptions.ExportFormatType = ExportFormatType.PortableDocFormat
crExportOptions.ExportDestinationOptions = crDiskFileDestinationOptions
rptDocument.Export()
'Insert code to load the PDF you just created
``` | Crystal Reports in a PDF file? | [
"",
"c#",
"asp.net",
"vb.net",
"crystal-reports",
""
] |
I have a MySQL database called "bookfeather." It contains 56 tables. Each table has the following structure:
```
id site votes_up votes_down
```
The value for "site" is a book title. The value for "votes\_up" is an integer. Sometimes a unique value for "site" appears in more than one table.
For each unique value "site" in the entire database, I would like to sum "votes\_up" from all 56 tables. Then I would like to print the top 25 values for "site" ranked by total "votes\_up".
How can I do this in PHP?
Thanks in advance,
John | Here's a PHP code snippet that should get it done.
I have not tested it, so it might have some typos; make sure you replace DB_NAME with your database name.
```
$result = mysql_query("SHOW TABLES");
$tables = array();
while ($row = mysql_fetch_assoc($result)) {
$tables[] = '`'.$row["Tables_in_DB_NAME"].'`';
}
$subQuery = "SELECT site, votes_up FROM ".implode(" UNION ALL SELECT site, votes_up FROM ",$tables);
// Create one query that gets the data you need
$sqlStr = "SELECT site, sum(votes_up) sumVotesUp
FROM (
".$subQuery." ) subQuery
GROUP BY site ORDER BY sum(votes_up) DESC LIMIT 25";
$result = mysql_query($sqlStr);
$arr = array();
while ($row = mysql_fetch_assoc($result)) {
$arr[] = $row["site"]." - ".$row["sumVotesUp"];
}
print_r($arr);
``` | You can do something like this (warning: Extremely poor SQL ahead)
```
select site, sum(votes_up) votes_up
from (
select site, votes_up from table_1
UNION
select site, votes_up from table_2
UNION
...
UNION
select site, votes_up from table_56
) group by site order by sum(votes_up) desc limit 25
```
But, as Dav asked, does your data *have* to be like this? There are much more efficient ways of storing this kind of data.
Edit: You just mentioned in a comment that you expect there to be more than 56 tables in the future -- I would look into MySQL limits on how many tables you can UNION before going forward with this kind of SQL. | Summing a field from all tables in a database | [
"",
"php",
"mysql",
""
] |
I was wondering if the enum structure type has a limit on its members. I have this very large list of "variables" that I need to store inside an enum or as constants in a class but I finally decided to store them inside a class, however, I'm being a little bit curious about the limit of members of an enum (if any).
So, do enums have a limit on .Net? | Yes. The number of members *with distinct values* is limited by the underlying type of `enum` - by default this is `Int32`, so you can get that many different members (2^32; I find it hard to believe you will reach that limit), but you can explicitly specify the underlying type like this:
```
enum Foo : byte { /* can have at most 256 members with distinct values */ }
```
Of course, you can have as many members as you want if they all have the same value:
```
enum { A, B = A, C = A, ... }
```
In either case, there is probably some implementation-defined limit in the C# compiler, but I would expect it to be MIN(range-of-Int32, free-memory), rather than a hard limit. | Due to a limit in the PE file format, you probably can't exceed some 100,000,000 values. Maybe more, maybe less, but definitely not a problem.
"",
"c#",
".net",
""
] |
In GTK, how can I scale an image? Right now I load images with PIL and scale them beforehand, but is there a way to do it with GTK? | Load the image from a file using gtk.gdk.Pixbuf for that:
```
import gtk
pixbuf = gtk.gdk.pixbuf_new_from_file('/path/to/the/image.png')
```
then scale it:
```
pixbuf = pixbuf.scale_simple(width, height, gtk.gdk.INTERP_BILINEAR)
```
Then, if you want to use it in a gtk.Image, create the widget and set the image from the pixbuf.
```
image = gtk.Image()
image.set_from_pixbuf(pixbuf)
```
Or maybe in a direct way:
```
image = gtk.image_new_from_pixbuf(pixbuf)
``` | It might be more effective to simply scale them before loading. I especially think so since I use these functions to load in 96x96 thumbnails from sometimes very large JPEGs, still very fast.
```
gtk.gdk.pixbuf_new_from_file_at_scale(..)
gtk.gdk.pixbuf_new_from_file_at_size(..)
``` | Scale an image in GTK | [
"",
"python",
"user-interface",
"image",
"gtk",
"pygtk",
""
] |
Is there any way to generify these two classes?
```
class Tag1{
public Tag1 Parent{get;set;}
}
class Tag2{
public Tag2 Parent{get;set;}
}
```
So I will have:
```
class Tag1 : Tag{}
class Tag2 : Tag{}
```
It seems not, but it's possible that I missed something.
Thanks to Jon, I ended up with the following solution (the `new()` constraint is needed for `new T()`):
```
class Tag<T> where T : Tag<T>, new() {
    public T Parent { get; set; }
    public T AddNew() {
        return new T() { Parent = (T)this }; // This works
    }
}
``` | Okay, slightly messier than before, and the subclass needs to do a bit more work, but...
```
public abstract class SelfRef<T> where T : SelfRef<T>
{
private readonly Func<T> childFactory;
public T Parent { get; set; }
protected SelfRef(Func<T> childFactory)
{
this.childFactory = childFactory;
}
public T AddNew()
{
return childFactory();
}
}
public sealed class Ref1 : SelfRef<Ref1>
{
public Ref1()
: base(() => new Ref1 { Parent = this })
{
}
}
```
I think that meets the requirements. Note that you'll always have trouble implementing a hierarchy more than one level deep under `SelfRef` however you do it, so don't bother :)
You could potentially make the child factory take the new parent. Basically the type relationship you really want - that `T = (the same type deriving from SelfRef<T>)` - can't be expressed in .NET generics, which is why you have a slight mess. | Heh... my first thought was this:
```
public class ParentizedClass<T>
{
public T Parent {get; set;}
}
```
But then I thought... "no, that can't work."
I don't believe it's possible.
Hmm... working on it some more, you could use an interface:
```
public interface ParentizedClass<T>
{
public T Parent {get; set;}
}
```
And then have the class implement the interface using itself as the base class:
```
public class MyClass : ParentizedClass<MyClass>
{
public MyClass Parent {get; set;}
}
```
But I can't check that for validity at the moment. | Generify classes that reference themselves | [
"",
"c#",
"generics",
""
] |
When I run tests with `./manage.py test`, whatever I send to the standard output through `print` doesn't show. When tests fail, I see an "stdout" block per failed test, so I guess Django traps it (but doesn't show it when tests pass). | Checked `TEST_RUNNER` in `settings.py`, it's using a project-specific runner that calls out to Nose. *Nose has the `-s` option to stop it from capturing `stdout`*, but if I run:
`./manage.py test -s`
`manage.py` captures it first and throws a "no such option" error. The help for `manage.py` doesn't mention this, but I found that if I run:
`./manage.py test -- -s`
it ignores the `-s` and lets me capture it on the custom runner's side, passing it to Nose without a problem. | Yeah, this issue is caused by `NoseTestSuiteRunner`. Adding `-- -s` is tricky and not the best solution.
Try adding the following lines to `settings.py`:
```
NOSE_ARGS = ['--nocapture',
'--nologcapture',]
```
This solved my problems. | How do I see stdout when running Django tests? | [
"",
"python",
"django",
"debugging",
"testing",
"stdout",
""
] |
I have some commands which I am running using the subprocess module. I then want to loop over the lines of the output. The documentation says not to use `data_stream.stdout.read`, which I am not doing, but I may be doing something that calls it. I am looping over the output like this:
```
for line in data_stream.stdout:
#do stuff here
.
.
.
```
Can this cause deadlocks like reading from `data_stream.stdout`, or are the Popen modules set up for this kind of looping such that it uses the communicate code but handles all the calls to it for you? | You have to worry about deadlocks if you're **communicating** with your subprocess, i.e. if you're writing to stdin as well as reading from stdout. Because these pipes may be buffered, doing this kind of two-way communication is very much a no-no:
```
data_stream = Popen(mycmd, stdin=PIPE, stdout=PIPE)
data_stream.stdin.write("do something\n")
for line in data_stream:
... # BAD!
```
However, if you've not set up stdin (or stderr) when constructing `data_stream`, you should be fine.
```
data_stream = Popen(mycmd, stdout=PIPE)
for line in data_stream.stdout:
... # Fine
```
If you need two-way communication, use [communicate](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate). | The two answers have caught the gist of the issue pretty well: don't mix writing something to the subprocess, reading something from it, writing again, etc -- the pipe's buffering means you're at risk of a deadlock. If you can, write everything you need to write to the subprocess FIRST, close that pipe, and only THEN read everything the subprocess has to say; `communicate` is nice for the purpose, IF the amount of data is not too large to fit in memory (if it is, you can still achieve the same effect "manually").
If you need finer-grain interaction, look instead at [pexpect](http://pexpect.sourceforge.net/pexpect.html) or, if you're on Windows, [wexpect](http://code.google.com/p/wexpect/). | python subprocess module: looping over stdout of child process | [
"",
"python",
"subprocess",
""
] |
Suppose I have the following:
```
std::string some_string = "2009-06-27 17:44:59.027";
```
The question is: Give code that will replace all instances of "-" and ":" in `some_string` with a space, i.e. " "
I'm looking for a simple one liner (if at all possible)
Boost can be used. | You could use Boost regex to do it. Something like this:
```
e = boost::regex("[-:]");
some_string = regex_replace(some_string, e, " ");
``` | ```
replace_if( some_string.begin(), some_string.end(), boost::bind( ispunct<char>, _1, locale() ), ' ' );
```
One line and not n^2 running time or invoking a regex engine ;v) , although it is a little sad that you need boost for this. | C++: Looking for a concise solution to replace a set of characters in a std::string with a specific character | [
"",
"c++",
"string",
""
] |
If i have an object of type Photo and a Result set which is sorted in a particular order, is there a way for me to get the position of the current Photo object in the result set. and then get all objects that would follow it? | If you're sorting against an Id:
```
// gets the previous photo, or null if none:
var previousPhoto = db.Photos
.Where(p => p.Id < currentPhotoId)
.OrderByDescending(p => p.Id)
.FirstOrDefault();
// gets the next photo, or null if none:
var nextPhoto = db.Photos
.Where(p => p.Id > currentPhotoId)
.OrderBy(p => p.Id)
.FirstOrDefault();
```
If you have custom ordering, you'd need to replace the `OrderBy/OrderByDescending` expression with your custom ordering. You'd also need to use the same ordering criteria in `Where()` to get only those photos before or after the current photo. | You can do something like this (not terribly efficient):
```
var result =
photos.Select((p, i) => new { Index = i, Photo = p })
.SkipWhile(x => x.Photo != photo).Skip(1);
```
This will give you all photos following `photo` combined with their index in the original collection. | LINQ-To-SQL row number greater than | [
"",
"c#",
"linq-to-sql",
""
] |
Here is some code that I copied from the web:
```
/**
* A simple example that uses HttpClient to perform a GET using Basic
* Authentication. Can be run standalone without parameters.
*
* You need to have JSSE on your classpath for JDK prior to 1.4
*
* @author Michael Becke
*/
public class BasicAuthenticationExample {
/**
* Constructor for BasicAuthenticatonExample.
*/
public BasicAuthenticationExample() {
super();
}
public static void main(String[] args) throws Exception {
HttpClient client = new HttpClient();
client.getState().setCredentials(
new AuthScope("login.website.com", 443),
new UsernamePasswordCredentials("login id", "password")
);
GetMethod get = new GetMethod(
"https://url to the file");
get.setDoAuthentication( true );
try {
// execute the GET
int status = client.executeMethod( get );
// print the status and response
System.out.println(status + "\n" +
get.getResponseBodyAsString());
} finally {
// release any connection resources used by the method
get.releaseConnection();
}
}
}
```
now because of the line
```
int status = client.executeMethod( get );
```
I get the following error
```
Exception in thread "main" javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Unknown Source)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(Unknown Source)
at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Unknown Source)
at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Unknown Source)
at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverCertificate(Unknown Source)
at com.sun.net.ssl.internal.ssl.ClientHandshaker.processMessage(Unknown Source)
at com.sun.net.ssl.internal.ssl.Handshaker.processLoop(Unknown Source)
at com.sun.net.ssl.internal.ssl.Handshaker.process_record(Unknown Source)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(Unknown Source)
at com.sun.net.ssl.internal.ssl.AppOutputStream.write(Unknown Source)
at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
at java.io.BufferedOutputStream.flush(Unknown Source)
at org.apache.commons.httpclient.HttpConnection.flushRequestOutputStream(HttpConnection.java:828)
at org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2116)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at BasicAuthenticationExample.main(BasicAuthenticationExample.java:38)
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(Unknown Source)
at sun.security.validator.PKIXValidator.engineValidate(Unknown Source)
at sun.security.validator.Validator.validate(Unknown Source)
at com.sun.net.ssl.internal.ssl.X509TrustManagerImpl.validate(Unknown Source)
at com.sun.net.ssl.internal.ssl.X509TrustManagerImpl.checkServerTrusted(Unknown Source)
at com.sun.net.ssl.internal.ssl.X509TrustManagerImpl.checkServerTrusted(Unknown Source)
... 18 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(Unknown Source)
at java.security.cert.CertPathBuilder.build(Unknown Source)
... 24 more
```
Now the reason for this error is of course that the certificate sent by my server is not already in my list of trusted certificates.
My question is how to get the certificate into the trusted list.
Thanks
Pranabesh | This page seems to answer your question:
[Adding Certificates to your Java Keystore](http://wiki.unisa.edu.au/display/AAI/Add+Certificates+to+your+Java+Keystore)
It references Andres Sterbenz's InstallCert & [explanation blog post](http://blogs.oracle.com/andreas/entry/no_more_unable_to_find).
You might not want to actually add the certificate in your trusted store, if it's only for testing. So you can also [disable certificate validation](http://www.nakov.com/blog/2009/07/16/disable-certificate-validation-in-java-ssl-connections/). | You will need a keystore that contains the certificate to trust. You can use Java's [keytool](http://java.sun.com/javase/6/docs/technotes/tools/windows/keytool.html) command to do that. There are a few ways to do this:
* Add it to the default Java keystore (`.keystore` in your home directory or user directory on Windows)
* Set the system property `javax.net.ssl.trustStore` to the location of an alternate keystore (something like `java -Djavax.net.ssl.trustStore=/path/to/keystore`)
* Build your own [SSLSocketFactory](http://java.sun.com/j2se/1.5.0/docs/api/javax/net/ssl/SSLSocketFactory.html) implementation that can load a keystore any way you want (this is definitely the most complicated)
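The second option can also be set programmatically, as long as it runs before any SSL machinery is initialized; a minimal sketch (the keystore path and password are placeholders, not real values):

```java
public class TrustStoreSetup {
    public static void main(String[] args) {
        // Must be set before the first HTTPS connection is opened.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/keystore");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```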
A keystore is just a file that contains keys and/or certificates that can be used for various things, in this case a certificate that your application will need to trust. Keytool can be used as follows:
```
keytool -importcert -file /path/to/certificate/file -keystore /path/to/keystore
```
For other command-line options, just run
```
keytool -help
``` | Downloading a file over https over java | [
"",
"java",
"https",
"download",
"certificate",
"unsigned",
""
] |
I am exploring the `HashSet<T>` type, but I don't understand where it stands in collections.
Can one use it to replace a `List<T>`? I imagine the performance of a `HashSet<T>` to be better, but I couldn't see individual access to its elements.
Is it only for enumeration? | The important thing about `HashSet<T>` is right there in the name: it's a *set*. The only things you can do with a single set are to establish what its members are and to check whether an item is a member.
Asking if you can retrieve a single element (e.g. `set[45]`) is misunderstanding the concept of the set. There's no such thing as the 45th element of a set. Items in a set have no ordering. The sets {1, 2, 3} and {2, 3, 1} are identical in every respect because they have the same membership, and membership is all that matters.
It's somewhat dangerous to iterate over a `HashSet<T>` because doing so imposes an order on the items in the set. That order is not really a property of the set. You should not rely on it. If ordering of the items in a collection is important to you, that collection isn't a set.
Sets are quite limited, holding only unique members. On the other hand, they're really fast. | Here's a real example of where I use a `HashSet<string>`:
Part of my syntax highlighter for UnrealScript files is a new feature that [highlights Doxygen-style comments](http://wiki.pixelminegames.com/index.php?title=Image:Uc_doccommentassist.png). I need to be able to tell if a `@` or `\` command is valid to determine whether to show it in gray (valid) or red (invalid). I have a `HashSet<string>` of all the valid commands, so whenever I hit a `@xxx` token in the lexer, I use `validCommands.Contains(tokenText)` as my O(1) validity check. I really don't care about anything except *existence* of the command in the *set* of valid commands. Lets look at the alternatives I faced:
* `Dictionary<string, ?>`: What type do I use for the value? The value is meaningless since I'm just going to use `ContainsKey`. Note: Before .NET 3.0 this was the only choice for O(1) lookups - `HashSet<T>` was added for 3.0 and extended to implement `ISet<T>` for 4.0.
* `List<string>`: If I keep the list sorted, I can use `BinarySearch`, which is O(log n) (didn't see this fact mentioned above). However, since my list of valid commands is a fixed list that never changes, this will never be more appropriate than simply...
* `string[]`: Again, `Array.BinarySearch` gives O(log n) performance. If the list is short, this could be the best performing option. It always has less space overhead than `HashSet`, `Dictionary`, or `List`. Even with `BinarySearch`, it's not faster for large sets, but for small sets it'd be worth experimenting. Mine has several hundred items though, so I passed on this. | When should I use the HashSet<T> type? | [
"",
"c#",
".net",
"data-structures",
"hashset",
""
] |
I need to create a custom attribute that is applicable only to non-static class members.
How can I validate this constraint on project compilation or using code analysis tools? | There's [no such constraint](http://msdn.microsoft.com/en-us/library/system.attributetargets.aspx). | You could always write some post-build event that uses reflection to verify this... Granted, it may not be the most elegant of solutions....
To set this up, you would go into project properties, then the 'Build Events' tab. You would then enter the command line for the reflection based tool you'd write to implement this verification | Validation that Custom Attibute is assigned to non static class member | [
"",
"c#",
".net",
"attributes",
"code-analysis",
""
] |
I have a big file. It includes approximately 3,000-20,000 lines. How can I get the total count of lines in the file using Java? | ```
BufferedReader reader = new BufferedReader(new FileReader("file.txt"));
int lines = 0;
while (reader.readLine() != null) lines++;
reader.close();
```
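A self-contained variant of the same loop that runs as-is (it writes a tiny temporary file to count, then cleans up; uses java.nio.file, so Java 8+):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LineCount {
    public static void main(String[] args) throws IOException {
        // Small sample file standing in for the real input.
        Path tmp = Files.createTempFile("lines", ".txt");
        Files.write(tmp, "a\nb\nc\n".getBytes());

        int lines = 0;
        try (BufferedReader reader = Files.newBufferedReader(tmp)) {
            while (reader.readLine() != null) lines++;
        }
        System.out.println(lines);

        Files.delete(tmp);
    }
}
```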
**Update:** To answer the performance question raised here, I made a measurement. First thing: 20,000 lines are too few to make the program run for a noticeable time. I created a text file with 5 million lines. This solution (started with java without parameters like -server or -XX-options) needed around 11 seconds on my box. The same with `wc -l` (the UNIX command-line tool for counting lines): 11 seconds. The solution reading every single character and looking for '\n' needed 104 seconds, 9-10 times as much. | # `Files.lines`
**Java 8+** has a nice and short way using [NIO](https://en.wikipedia.org/wiki/Non-blocking_I/O_(Java)) using [`Files.lines`](https://docs.oracle.com/javase/8/docs/api/java/nio/file/Files.html#lines-java.nio.file.Path-). Note that [you have to close the stream](https://stackoverflow.com/questions/34072035/why-is-files-lines-and-similar-streams-not-automatically-closed) using try-with-resources:
```
long lineCount;
try (Stream<String> stream = Files.lines(path, StandardCharsets.UTF_8)) {
lineCount = stream.count();
}
```
If you don't specify the character encoding, the default one used is [UTF-8](https://en.wikipedia.org/wiki/UTF-8). You may [specify an alternate encoding](https://docs.oracle.com/javase/8/docs/api/java/nio/file/Files.html#lines-java.nio.file.Path-java.nio.charset.Charset-) to match your particular data file as shown in the example above. | How can I get the count of line in a file in an efficient way? | [
"",
"java",
"file",
""
] |
I need a python 3.1 deep update function for dictionaries (a function that will recursively update child dictionaries that are inside a parent dictionary).
But I think, in the future, my function could have to deal with objects that behave like dictionaries but aren't. And furthermore I want to avoid using **`isinstance`** and **`type`** (because they are considered bad, as I can read on almost every Pythonista's blog).
But duck typing is part of Python's philosophy, so how could I check if an object is dictionary-like?
Thanks!
Edit : Thank all for the answers. Just in case, the function I coded can be found at this place : <http://blog.cafeaumiel.com/public/python/dict_deep_update.py> | Check out the (new in 3.x) abstract base classes (ABC's) in the collections module:
<http://docs.python.org/3.1/library/collections.html>
I would consider checking with isinstance against Mapping like in the following:
```
>>> import collections
>>> isinstance({},collections.Mapping)
True
```
Then, if you make your own dict-like class, make collections.Mapping one of its bases.
The other route is trying and catching whatever exceptions for the dictionary operations, - but with the recursion you're talking about, I'd rather check against the base type first than handle figuring out what dict or subdict or other dict member was or was not there to throw an exception.
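Putting the two threads of this question together (the ABC check plus the deep update the asker mentions), here is a minimal sketch. Note that in modern Python the ABC lives in `collections.abc`; in 3.1 it was plain `collections.Mapping`:

```python
import collections.abc

def deep_update(d, u):
    """Recursively merge mapping u into dict d (a sketch, not the asker's exact code)."""
    for k, v in u.items():
        if isinstance(v, collections.abc.Mapping) and isinstance(d.get(k), collections.abc.Mapping):
            deep_update(d[k], v)  # both sides are dict-like: recurse
        else:
            d[k] = v              # otherwise overwrite
    return d

cfg = {"a": 1, "sub": {"x": 1, "y": 2}}
deep_update(cfg, {"sub": {"y": 20, "z": 30}})
assert cfg == {"a": 1, "sub": {"x": 1, "y": 20, "z": 30}}
```

Because the test is against `Mapping` rather than `dict`, any class inheriting from (or registered with) the ABC will recurse correctly, regardless of whether it subclasses `dict`.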
Editing to add: The benefit of checking against the Mapping ABC instead of against dict is that the same test will work for dict-like classes regardless of whether or not they subclass dict, so it's more flexible, just in case. | use `isinstance`, there is nothing wrong with it and it's routinely used in code requiring recursion.
If by dictionary-like you mean that the object's class inherits from `dict`, `isinstance` will also return `True`.
```
>>> class A(dict):
pass
>>> a = A()
>>> isinstance(a, dict)
True
``` | Python and dictionary like object | [
"",
"python",
"dictionary",
"duck-typing",
""
] |
How would you create an algorithm to solve the following puzzle, "Mastermind"?
Your opponent has chosen four different colours from a set of six (yellow, blue, green, red, orange, purple). You must guess which they have chosen, and in what order. After each guess, your opponent tells you how many (but not which) of the colours you guessed were the right colour in the right place ["blacks"] and how many (but not which) were the right colour but in the wrong place ["whites"]. The game ends when you guess correctly (4 blacks, 0 whites).
For example, if your opponent has chosen (blue, green, orange, red), and you guess (yellow, blue, green, red), you will get one "black" (for the red), and two whites (for the blue and green). You would get the same score for guessing (blue, orange, red, purple).
I'm interested in what algorithm you would choose, and (optionally) how you translate that into code (preferably Python). I'm interested in coded solutions that are:
1. Clear (easily understood)
2. Concise
3. Efficient (fast in making a guess)
4. Effective (least number of guesses to solve the puzzle)
5. Flexible (can easily answer questions about the algorithm, e.g. what is its worst case?)
6. General (can be easily adapted to other types of puzzle than Mastermind)
I'm happy with an algorithm that's very effective but not very efficient (provided it's not just poorly implemented!); however, a very efficient and effective algorithm implemented inflexibly and impenetrably is not of use.
I have my own (detailed) solution in Python which I have posted, but this is by no means the only or best approach, so please post more! I'm not expecting an essay ;) | Key tools: entropy, greediness, branch-and-bound; Python, generators, itertools, decorate-undecorate pattern
In answering this question, I wanted to build up a language of useful functions to explore the problem. I will go through these functions, describing them and their intent. Originally, these had extensive docs, with small embedded unit tests run using doctest; I can't praise this methodology highly enough as a brilliant way to implement test-driven development. However, it does not translate well to StackOverflow, so I will not present it this way.
Firstly, I will be needing several standard modules and `__future__` imports (I work with Python 2.6).
```
from __future__ import division # No need to cast to float when dividing
import collections, itertools, math
```
I will need a scoring function. Originally, this returned a tuple (blacks, whites), but I found output a little clearer if I used a namedtuple:
```
Pegs = collections.namedtuple('Pegs', 'black white')
def mastermindScore(g1,g2):
matching = len(set(g1) & set(g2))
blacks = sum(1 for v1, v2 in itertools.izip(g1,g2) if v1 == v2)
return Pegs(blacks, matching-blacks)
```
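To see it in action, here is the example from the question replayed through the same function (a Python 3 sketch; `itertools.izip` becomes the built-in `zip` there):

```python
import collections

Pegs = collections.namedtuple('Pegs', 'black white')

def mastermind_score(g1, g2):
    matching = len(set(g1) & set(g2))                      # right colour, any place
    blacks = sum(1 for v1, v2 in zip(g1, g2) if v1 == v2)  # right colour, right place
    return Pegs(blacks, matching - blacks)

# Secret (blue, green, orange, red), guess (yellow, blue, green, red):
print(mastermind_score(("yellow", "blue", "green", "red"),
                       ("blue", "green", "orange", "red")))
# Pegs(black=1, white=2) -- one black (red), two whites (blue, green)
```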
To make my solution general, I pass in anything specific to the Mastermind problem as keyword arguments. I have therefore made a function that creates these arguments once, and use the \*\*kwargs syntax to pass it around. This also allows me to easily add new attributes if I need them later. Note that I allow guesses to contain repeats, but constrain the opponent to pick distinct colours; to change this, I only need change G below. (If I wanted to allow repeats in the opponent's secret, I would need to change the scoring function as well.)
```
def mastermind(colours, holes):
return dict(
G = set(itertools.product(colours,repeat=holes)),
V = set(itertools.permutations(colours, holes)),
score = mastermindScore,
endstates = (Pegs(holes, 0),))
def mediumGame():
return mastermind(("Yellow", "Blue", "Green", "Red", "Orange", "Purple"), 4)
```
Sometimes I will need to *partition* a set based on the result of applying a function to each element in the set. For instance, the numbers 1..10 can be partitioned into even and odd numbers by the function n % 2 (odds give 1, evens give 0). The following function returns such a partition, implemented as a map from the result of the function call to the set of elements that gave that result (e.g. { 0: evens, 1: odds }).
```
def partition(S, func, *args, **kwargs):
partition = collections.defaultdict(set)
for v in S: partition[func(v, *args, **kwargs)].add(v)
return partition
```
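A Python 3 sketch of the same function, run on the evens/odds example just described:

```python
import collections

def partition(S, func, *args, **kwargs):
    result = collections.defaultdict(set)
    for v in S:
        result[func(v, *args, **kwargs)].add(v)
    return result

p = partition(range(1, 11), lambda n: n % 2)
assert p[0] == {2, 4, 6, 8, 10}  # evens gave 0
assert p[1] == {1, 3, 5, 7, 9}   # odds gave 1
```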
I decided to explore a solver that uses a *greedy entropic approach*. At each step, it calculates the information that could be obtained from each possible guess, and selects the most informative guess. As the numbers of possibilities grow, this will scale badly (quadratically), but let's give it a try! First, I need a method to calculate the entropy (information) of a set of probabilities. This is just -∑p log p. For convenience, however, I will allow input that are not normalized, i.e. do not add up to 1:
```
def entropy(P):
total = sum(P)
return -sum(p*math.log(p, 2) for p in (v/total for v in P if v))
```
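A quick sanity check of this formula (Python 3 sketch): four equally likely outcomes should carry exactly two bits, and a certain outcome none.

```python
import math

def entropy(P):
    total = sum(P)
    return -sum(p * math.log(p, 2) for p in (v / total for v in P if v))

assert abs(entropy([1, 1, 1, 1]) - 2.0) < 1e-9  # uniform over 4 outcomes: 2 bits
assert abs(entropy([5])) < 1e-9                 # no uncertainty: 0 bits
# Unnormalized input is fine: [2, 2] is the same distribution as [0.5, 0.5]
assert abs(entropy([2, 2]) - entropy([0.5, 0.5])) < 1e-9
```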
So how am I going to use this function? Well, for a given set of possibilities, V, and a given guess, g, the information we get from that guess can only come from the scoring function: more specifically, how that scoring function partitions our set of possibilities. We want to make a guess that distinguishes best among the remaining possibilities — divides them into the largest number of small sets — because that means we are much closer to the answer. This is exactly what the entropy function above is putting a number to: a large number of small sets will score higher than a small number of large sets. All we need to do is plumb it in.
```
def decisionEntropy(V, g, score):
return entropy(collections.Counter(score(gi, g) for gi in V).values())
```
Of course, at any given step what we will actually have is a set of remaining possibilities, V, and a set of possible guesses we could make, G, and we will need to pick the guess which maximizes the entropy. Additionally, if several guesses have the same entropy, prefer to pick one which could also be a valid solution; this guarantees the approach will terminate. I use the standard python decorate-undecorate pattern together with the built-in max method to do this:
```
def bestDecision(V, G, score):
return max((decisionEntropy(V, g, score), g in V, g) for g in G)[2]
```
Now all I need to do is repeatedly call this function until the right result is guessed. I went through a number of implementations of this algorithm until I found one that seemed right. Several of my functions will want to approach this in different ways: some enumerate all possible sequences of decisions (one per guess the opponent may have made), while others are only interested in a single path through the tree (if the opponent has already chosen a secret, and we are just trying to reach the solution). My solution is a "lazy tree", where each part of the tree is a generator that can be evaluated or not, allowing the user to avoid costly calculations they won't need. I also ended up using two more namedtuples, again for clarity of code.
```
Node = collections.namedtuple('Node', 'decision branches')
Branch = collections.namedtuple('Branch', 'result subtree')
def lazySolutionTree(G, V, score, endstates, **kwargs):
decision = bestDecision(V, G, score)
branches = (Branch(result, None if result in endstates else
lazySolutionTree(G, pV, score=score, endstates=endstates))
for (result, pV) in partition(V, score, decision).iteritems())
yield Node(decision, branches) # Lazy evaluation
```
The following function evaluates a single path through this tree, based on a supplied scoring function:
```
def solver(scorer, **kwargs):
lazyTree = lazySolutionTree(**kwargs)
steps = []
while lazyTree is not None:
t = lazyTree.next() # Evaluate node
result = scorer(t.decision)
steps.append((t.decision, result))
subtrees = [b.subtree for b in t.branches if b.result == result]
if len(subtrees) == 0:
raise Exception("No solution possible for given scores")
lazyTree = subtrees[0]
    assert result in kwargs["endstates"]  # endstates arrives via **kwargs
return steps
```
This can now be used to build an interactive game of Mastermind where the user scores the computer's guesses. Playing around with this reveals some interesting things. For example, the most informative first guess is of the form (yellow, yellow, blue, green), not (yellow, blue, green, red). Extra information is gained by using exactly half the available colours. This also holds for 6-colour 3-hole Mastermind — (yellow, blue, green) — and 8-colour 5-hole Mastermind — (yellow, yellow, blue, green, red).
But there are still many questions that are not easily answered with an interactive solver. For instance, what is the most number of steps needed by the greedy entropic approach? And how many inputs take this many steps? To make answering these questions easier, I first produce a simple function that turns the lazy tree of above into a set of paths through this tree, i.e. for each possible secret, a list of guesses and scores.
```
def allSolutions(**kwargs):
def solutions(lazyTree):
return ((((t.decision, b.result),) + solution
for t in lazyTree for b in t.branches
for solution in solutions(b.subtree))
if lazyTree else ((),))
return solutions(lazySolutionTree(**kwargs))
```
Finding the worst case is a simple matter of finding the longest solution:
```
def worstCaseSolution(**kwargs):
return max((len(s), s) for s in allSolutions(**kwargs)) [1]
```
It turns out that this solver will always complete in 5 steps or fewer. Five steps! I know that when I played Mastermind as a child, I often took longer than this. However, since creating this solver and playing around with it, I have greatly improved my technique, and 5 steps is indeed an achievable goal even when you don't have time to calculate the entropically ideal guess at each step ;)
How likely is it that the solver will take 5 steps? Will it ever finish in 1, or 2, steps? To find that out, I created another simple little function that calculates the solution length distribution:
```
def solutionLengthDistribution(**kwargs):
return collections.Counter(len(s) for s in allSolutions(**kwargs))
```
For the greedy entropic approach, with repeats allowed: 7 cases take 2 steps; 55 cases take 3 steps; 229 cases take 4 steps; and 69 cases take the maximum of 5 steps.
Of course, there's no guarantee that the greedy entropic approach minimizes the worst-case number of steps. The final part of my general-purpose language is an algorithm that decides whether or not there are *any* solutions for a given worst-case bound. This will tell us whether greedy entropic is ideal or not. To do this, I adopt a branch-and-bound strategy:
```
def solutionExists(maxsteps, G, V, score, **kwargs):
if len(V) == 1: return True
partitions = [partition(V, score, g).values() for g in G]
maxSize = max(len(P) for P in partitions) ** (maxsteps - 2)
partitions = (P for P in partitions if max(len(s) for s in P) <= maxSize)
return any(all(solutionExists(maxsteps-1,G,s,score) for l,s in
sorted((-len(s), s) for s in P)) for i,P in
sorted((-entropy(len(s) for s in P), P) for P in partitions))
```
This is definitely a complex function, so a bit more explanation is in order. The first step is to partition the remaining solutions based on their score after a guess, as before, but this time we don't know what guess we're going to make, so we store all partitions. Now we *could* just recurse into every one of these, effectively enumerating the entire universe of possible decision trees, but this would take a horrifically long time. Instead I observe that, if at this point there is no partition that divides the remaining solutions into more than n sets, then there can be no such partition at any future step either. If we have k steps left, that means we can distinguish between at most n^(k-1) solutions before we run out of guesses (on the last step, we must always guess correctly). Thus we can discard any partitions that contain a score mapped to more than this many solutions. This is the next two lines of code.
The final line of code does the recursion, using Python's any and all functions for clarity, and trying the highest-entropy decisions first to hopefully minimize runtime in the positive case. It also recurses into the largest part of the partition first, as this is the most likely to fail quickly if the decision was wrong. Once again, I use the standard decorate-undecorate pattern, this time to wrap Python's *sorted* function.
```
def lowerBoundOnWorstCaseSolution(**kwargs):
for steps in itertools.count(1):
if solutionExists(maxsteps=steps, **kwargs):
return steps
```
By calling solutionExists repeatedly with an increasing number of steps, we get a strict lower bound on the number of steps needed in the worst case for a Mastermind solution: 5 steps. The greedy entropic approach is indeed optimal.
Out of curiosity, I invented another guessing game, which I nicknamed "twoD". In this, you try to guess a pair of numbers; at each step, you get told if your answer is correct, if the numbers you guessed are no less than the corresponding ones in the secret, and if the numbers are no greater.
```
Comparison = collections.namedtuple('Comparison', 'less greater equal')
def twoDScorer(x, y):
return Comparison(all(r[0] <= r[1] for r in zip(x, y)),
all(r[0] >= r[1] for r in zip(x, y)),
x == y)
def twoD():
G = set(itertools.product(xrange(5), repeat=2))
return dict(G = G, V = G, score = twoDScorer,
                endstates = set([Comparison(True, True, True)]))
```
For this game, the greedy entropic approach has a worst case of five steps, but there is a better solution possible with a worst case of four steps, confirming my intuition that myopic greediness is only coincidentally ideal for Mastermind. More importantly, this has shown how flexible my language is: all the same methods work for this new guessing game as did for Mastermind, letting me explore other games with a minimum of extra coding.
What about performance? Obviously, being implemented in Python, this code is not going to be blazingly fast. I've also dropped some possible optimizations in favour of clear code.
One cheap optimization is to observe that, on the first move, most guesses are basically identical: (yellow, blue, green, red) is really no different from (blue, red, green, yellow), or (orange, yellow, red, purple). This greatly reduces the number of guesses we need consider on the first step — otherwise the most costly decision in the game.
However, because of the large runtime growth rate of this problem, I was not able to solve the 8-colour, 5-hole Mastermind problem, even with this optimization. Instead, I ported the algorithms to C++, keeping the general structure the same and employing bitwise operations to boost performance in the critical inner loops, for a speedup of many orders of magnitude. I leave this as an exercise to the reader :)
**Addendum, 2018:** It turns out the greedy entropic approach is not optimal for the 8-colour, 4-hole Mastermind problem either, with a worst-case length of 7 steps when an algorithm exists that takes at most 6 steps!
**The key to solving such a problem is the realization that the scoring function is symmetric.**
In other words if `score(myguess) == (1,2)` then I can use the same `score()` function to compare my previous guess with any other possibility and eliminate any that don't give exactly the same score.
Let me give an example: The hidden word (target) is "score" ... the current guess is "fools" --- the score is 1,1 (one letter, 'o', is "right on"; another letter, 's', is "elsewhere"). I can eliminate the word "guess" because the `score("guess") (against "fools") returns (1,0) (the final 's' matches, but nothing else does). So the word "guess" is not consistent with "fools" and a score against some unknown word that gave returned a score of (1,1).
So I now can walk through every five letter word (or combination of five colors/letters/digits etc) and eliminate anything that doesn't score 1,1 against "fools." Do that at each iteration and you'll very rapidly converge on the target. (For five letter words I was able to get within 6 tries every time ... and usually only 3 or 4). Of course there's only 6000 or so "words" and you're eliminating close to 95% for each guess.
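The elimination step described above is a one-liner once you have a scoring function. A self-contained Python sketch (the word list here is hypothetical, and this `score` allows repeated letters):

```python
def score(guess, target):
    exact = sum(1 for g, t in zip(guess, target) if g == t)
    # "elsewhere" matches: shared letters (counting repeats), minus the exact ones
    other = sum(min(guess.count(c), target.count(c)) for c in set(guess)) - exact
    return exact, other

candidates = ["score", "guess", "snore", "cores"]  # hypothetical remaining words
# We guessed "fools" and were told (1, 1); keep only the consistent words:
candidates = [w for w in candidates if score("fools", w) == (1, 1)]
assert "score" in candidates      # scores (1, 1) against "fools": consistent
assert "guess" not in candidates  # scores (1, 0): eliminated, as in the text
```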
Note: for the following discussion I'm talking about five letter "combinations" rather than four elements of six colors. The same algorithms apply; however, the problem is orders of magnitude smaller for the old "Master Mind" game ... there are only 1296 combinations (6\*\*4) of colored pegs in the classic "Master Mind" program, assuming duplicates are allowed. The line of reasoning that leads to the convergence involves some combinatorics: there are 20 non-winning possible scores for a five-element target (run `n = [(a,b) for a in range(5) for b in range(6) if a+b <= 5]` to see all of them if you're curious). We would, therefore, expect that any random valid selection would have a roughly 5% chance of matching our score ... the other 95% won't and therefore will be eliminated for each scored guess. This doesn't account for possible clustering in word patterns but the real world behavior is close enough for words and definitely even closer for "Master Mind" rules. However, with only 6 colors in 4 slots we only have 14 possible non-winning scores so our convergence isn't quite as fast.
For Jotto the two minor challenges are: generating a good word list (`awk 'length($0)==5' /usr/share/dict/words` or similar on a UNIX system) and what to do if the user has picked a word that's not in our dictionary (generate every letter combination, 'aaaaa' through 'zzzzz' --- which is 26 \*\* 5 ... or ~11.9 million). A trivial combination generator in Python takes about 1 minute to generate all those strings ... an optimized one should do far better. (I can also add a requirement that every "word" have at least one vowel ... but this constraint doesn't help much --- 5 vowels \* 5 possible locations for that and then multiplied by 26 \*\* 4 other combinations).
For Master Mind you use the same combination generator ... but with only 4 or 5 "letters" (colors). Every 6-color combination (15,625 of them) can be generated in under a second (using the same combination generator as I used above).
If I was writing this "Jotto" program today, in Python for example, I would "cheat" by having a thread generating all the letter combos in the background while I was still eliminated words from the dictionary (while my opponent was scoring me, guessing, etc). As I generated them I'd also eliminate against all guesses thus far. Thus I would, after I'd eliminated all known words, have a relatively small list of possibilities and against a human player I've "hidden" most of my computation lag by doing it in parallel to their input. (And, if I wrote a web server version of such a program I'd have my web engine talk to a local daemon to ask for sequences consistent with a set of scores. The daemon would keep a locally generated list of all letter combinations and would use a `select.select()` model to feed possibilities back to each of the running instances of the game --- each would feed my daemon word/score pairs which my daemon would apply as a filter on the possibilities it feeds back to that client).
(By comparison I wrote my version of "Jotto" about 20 years ago on an XT using Borland TurboPascal ... and it could do each elimination iteration --- starting with its compiled-in list of a few thousand words --- in well under a second. I built its word list by writing a simple letter combination generator (see below) ... saving the results to a moderately large file, then running my word processor's spell check on that with a macro to delete everything that was "mis-spelled" --- then I used another macro to wrap all the remaining lines in the correct punctuation to make them valid static assignments to my array, which was a #include file to my program. All that let me build a standalone game program that "knew" just about every valid English 5 letter word; the program was a .COM --- less than 50KB if I recall correctly).
For other reasons I've recently written a simple arbitrary combination generator in Python. It's about 35 lines of code and I've posted that to my "trite snippets" wiki on bitbucket.org ... it's not a "generator" in the Python sense ... but a class you can instantiate to an infinite sequence of "numeric" or "symbolic" combination of elements (essentially counting in any positive integer base).
You can find it at: [Trite Snippets: Arbitrary Sequence Combination Generator](http://bitbucket.org/jimd/trite-snippets/wiki/Arbitrary_Sequence_Combination_Generator)
For the exact match part of our `score()` function you can just use this:
```
def score(this, that):
'''Simple "Master Mind" scoring function'''
exact = len([x for x,y in zip(this, that) if x==y])
### Calculating "other" (white pegs) goes here:
### ...
###
return (exact,other)
```
I think this exemplifies some of the beauty of Python: `zip()` up the two sequences, keep the pairs that match, and take the length of the result.
Finding the matches in "other" locations is deceptively more complicated. If no repeats were allowed then you could simply use sets to find the intersections.
[In my earlier edit of this message, when I realized how I could use `zip()` for exact matches, I erroneously thought we could get away with `other = len([x for x,y in zip(sorted(x), sorted(y)) if x==y]) - exact` ... but it was late and I was tired. As I slept on it I realized that the method was flawed. **Bad, Jim! Don't post without *adequate* testing!** (Tested several cases that happened to work)].
In the past the approach I used was to sort both lists, then compare the heads of each: if the heads are equal, increment the count and pop new items from both lists; otherwise pop the lesser of the two heads and try again. Break as soon as either list is empty.
This does work; but it's fairly verbose. The best I can come up with using that approach is just over a dozen lines of code:
```
other = 0
x = sorted(this) ## Implicitly converts to a list!
y = sorted(that)
while len(x) and len(y):
if x[0] == y[0]:
other += 1
x.pop(0)
y.pop(0)
elif x[0] < y[0]:
x.pop(0)
else:
y.pop(0)
other -= exact
```
Using a dictionary I can trim that down to about nine:
```
other = 0
counters = dict()
for i in this:
counters[i] = counters.get(i,0) + 1
for i in that:
if counters.get(i,0) > 0:
other += 1
counters[i] -= 1
other -= exact
```
(Using the new "collections.Counter" class (Python3 and slated for Python 2.7?) I could presumably reduce this a little more; three lines here are initializing the counters collection).
It's important to decrement the "counter" when we find a match and it's vital to test for counter greater than zero in our test. If a given letter/symbol appears in "this" once and "that" twice then it must only be counted as a match once.
The first approach is definitely a bit trickier to write (one must be careful about the boundary conditions). Also in a couple of quick benchmarks (testing a million randomly generated pairs of letter patterns) the first approach takes about 70% longer than the one using dictionaries. (Generating the million pairs of strings using `random.shuffle()` took over twice as long as the slower of the scoring functions, on the other hand).
A formal analysis of the performance of these two functions would be complicated. The first method has two sorts, so that would be 2 \* O(n\*log(n)) ... and it iterates through at least one of the two strings and possibly has to iterate all the way to the end of the other string (best case O(n), worst case O(2n)) --- of course I'm mis-using big-O notation here, but this is just a rough estimate. The second case depends entirely on the performance characteristics of the dictionary. If we were using b-trees then the performance would be roughly O(n\*log(n)) for creation, and finding each element from the other string therein would be another O(n\*log(n)) operation. However, Python dictionaries are very efficient and these operations should be close to constant time (very few hash collisions). Thus we'd expect a performance of roughly O(2n) ... which of course simplifies to O(n). That roughly matches my benchmark results.
Glancing over the Wikipedia article on "[Master Mind](http://en.wikipedia.org/wiki/Mastermind_%28board_game%29)" I see that Donald Knuth used an approach which starts similarly to mine (and 10 years earlier) but he added one significant optimization. After gathering every remaining possibility he selects whichever one would eliminate the largest number of possibilities on the next round. I considered such an enhancement to my own program and rejected the idea for practical reasons. In his case he was searching for an optimal (mathematical) solution. In my case I was concerned about playability (on an XT, preferably using less than 64KB of RAM, though I could switch to .EXE format and use up to 640KB). I wanted to keep the response time down in the realm of one or two seconds (which was easy with my approach but which would be much more difficult with the further speculative scoring). (Remember I was working in Pascal, under MS-DOS ... no threads, though I did implement support for crude asynchronous polling of the UI which turned out to be unnecessary)
If I were writing such a thing today I'd add a thread to do the better selection, too. This would allow me to give the best guess I'd found within a certain time constraint, to guarantee that my player didn't have to wait too long for my guess. Naturally my selection/elimination would be running while waiting for my opponent's guesses. | How to solve the "Mastermind" guessing game? | [
"",
"python",
"algorithm",
""
] |
I'm working on a small java game which is a sort of clone of the 1995 game Theme Hospital.
I've recently been working on the GUI and the Level structure now I have the path finding and grid working.
My current way of "building stuff" consists of one room and one reception desk, and it works ok so far, but I need to implement a way to add different types of rooms and items.
Here is my current code on the button press for the creation of the object (<http://snipt.org/lUm>).
I'm sure there must be a better way to do this! My implementation is poor, I know, but I have only recently begun to understand how to use abstract classes and interfaces.
Actually setting the properties of the room or item, I currently have no issue with. I am thinking ahead, but I love how I get different opinions on the best way to do things here on Stack. Any ideas on how such a thing should be achieved? Any examples of this sort of thing? I would search for it, but I'm not really sure what to search for, as I don't know if what I'm doing has an exact name.
I'm sorry if I haven't been too clear exactly what I'm talking about. I will gladly answer any further questions on this, and or upload more sections of code as needed or requested.
Thanks in advance for your time and thoughts! | Check out the [factory pattern](http://en.wikipedia.org/wiki/Factory_method_pattern). The factory is a means of creating objects by providing parameters, and getting objects back that adhere to a common interface. The implementation, however, is determined by the factory. The calling code does not need to know this.
By encapsulating the object construction in another object (the factory), the mechanics of selecting the particular object to build are abstracted away from the calling code. You can easily provide further subclasses at a later date by only modifying the factory itself.
So in your example code, the `buildMe()` method is a form of factory. It takes some parameter specifying what's required to be built - a `Room` or a `ReceptionDesk`. Both these would implement the same interface (a `HospitalComponent`?) and the calling code would then place that component within the hospital. You can add further subclasses of HospitalComponent to the factory (an `OperatingTheatre`?) and the calling code doesn't have to change.
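To make the shape of that concrete, here is a minimal sketch of the factory idea (in Python for brevity, with hypothetical class names; the same structure translates directly to Java):

```python
class Room:
    def place(self):
        return "room placed"

class ReceptionDesk:
    def place(self):
        return "reception desk placed"

def build_component(kind):
    """The factory: callers name what they want and get back the common interface."""
    registry = {"room": Room, "reception_desk": ReceptionDesk}
    return registry[kind]()

# Calling code never mentions the concrete classes:
assert build_component("room").place() == "room placed"
assert build_component("reception_desk").place() == "reception desk placed"
```

Adding an OperatingTheatre later means one new class and one new registry entry; the calling code is untouched.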
It's worth investigating [design patterns](http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29) (in this scenario, check out this list of [creational patterns](http://en.wikipedia.org/wiki/Creational_pattern)) to understand different ways of using objects to solve common problems, and for how to communicate your solutions with other people. The [Gang-of-Four](http://en.wikipedia.org/wiki/Design_Patterns_%28book%29) book is the bible for this subject. | Polymorphism might be overkill for what you are doing.
Personally, I would just have a Room class, with a table of the values for each building (max size, cost per tile etc.). Then, when you build a new room, get the matching table entry, and create your Room object with the details from the table.
This may not be the best practice, and it probably goes against Java conventions (I came to Java from dynamic languages) but in terms of lines of code that need changing to make a new room, it's the lowest I've found. | How should i "build" things now I've implemented polymorphism? (Java, sim game) | [
"",
"java",
"switch-statement",
"simulator",
""
] |
Is it possible to set the DataFormatString property of a column or cell in an ASP.NET DataGridView at runtime? | There doesn't seem to be a way to set the DataFormatString property. I have ended up binding the datasource to the table and then going through all the cells and formatting them manually:
```
DataGridView.AutoGenerateColumns = true;
DataGridView.DataSource = dbconnection.getDataReader();
DataGridView.DataBind();
int result;
for (int i = 0; i < DataGridView.Rows.Count; i++)
{
foreach (TableCell c in DataGridView.Rows[i].Cells)
{
if (int.TryParse(c.Text, out result))
{
c.Text = String.Format("{0:n0}", result);
}
}
}
```
This method works perfectly for me. I'm not sure how it would scale up with a large dataset, although my guess is that it would be fine. | This should work.
```
BoundField priceField = grid.Columns[0] as BoundField;
priceField.DataFormatString = "{0:c}";
priceField.HtmlEncode = false;
grid.DataSource = list;
grid.DataBind();
```
Found via [Link](https://web.archive.org/web/20161118144429/http://geekswithblogs.net/michelotti/archive/2006/03/30/73896.aspx) | Set DataFormatString on DataGridView at Runtime? | [
"",
"c#",
"asp.net",
"datagridview",
"dataformat",
""
] |
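The accepted answer's loop reformats only the cells that parse as integers (`int.TryParse` plus the `{0:n0}` thousands-separator format). The same guard-and-format idea, sketched in Python for comparison (the sample data is made up):

```python
# Reformat only the values that parse as integers, mirroring the
# TryParse + "{0:n0}" pattern from the accepted answer.
values = ["1234567", "abc", "42"]
formatted = []
for v in values:
    try:
        formatted.append(f"{int(v):,}")  # thousands separators, like {0:n0}
    except ValueError:
        formatted.append(v)              # leave non-numeric cells untouched
print(formatted)  # → ['1,234,567', 'abc', '42']
```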
```
<?php
$sql = mysql_query("SELECT * FROM comments WHERE user = 1");
$i = 1;
while ($row = mysql_fetch_assoc($sql)) { ?>
    <p><?php echo $i . '. ' . $row['comment']; ?></p>
    <div class="border"></div>
<?php $i++;
} ?>
```
How can I avoid outputting `<div class="border"></div>` after the last comment? | ```
$sql = mysql_query("SELECT * FROM comments WHERE user = 1");
$number = mysql_num_rows($sql);
$i = 1;
while ($row = mysql_fetch_assoc($sql)) {
echo '<p>' . $i . $row['comment'] . '</p>';
if ($i < $number)
{
echo '<div class="border"></div>';
}
$i ++;
}
```
Using WebDevHobo's suggestion. | ```
$sql = mysql_query("SELECT * FROM comments WHERE user = 1");
$output = array ();
while ($row = mysql_fetch_assoc($sql)) {
$output[] = $row['comment'];
}
echo join('<div class="border"></div>', $output);
``` | PHP while loop find last row | [
"",
"php",
""
] |
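Both answers above solve the classic "separator between items, never after the last" problem; the second answer's `join` is the usual idiom for it. The same idea sketched in Python, with made-up comment data:

```python
comments = ["First!", "Nice post", "Thanks"]
# join() places the border div *between* items, so nothing trails the last one
html = '<div class="border"></div>'.join(
    f"<p>{i}. {c}</p>" for i, c in enumerate(comments, start=1)
)
print(html)
```

With three comments this emits exactly two border divs, and the output ends with the final `</p>` rather than a dangling separator.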
I have a float with a value of e.g. 57.400002. I use `sprintf_s` to display the value on my GUI.
```
sprintf_s(xPosition, 19, "%f", xPositionValue);
```
How can I format the float so it displays as 57.40? | `sprintf_s(xPosition, 19, "%.2f", xPositionValue);` | ```
sprintf_s(xPosition, 19, "%.2f", xPositionValue);
```
See <http://www.cplusplus.com/reference/clibrary/cstdio/printf/> for more documentation on format codes. | Printing the value of a float to 2 decimal places | [
"",
"c++",
"winapi",
"floating-point",
"format",
"printf",
""
] |
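The `%.2f` conversion behaves the same way across most printf-style formatters; for comparison, the equivalent in Python:

```python
x = 57.400002
print("%.2f" % x)   # printf-style rounding to two decimals → 57.40
print(f"{x:.2f}")   # the f-string equivalent → 57.40
```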
I didn't really pay as much attention to Python 3's development as I would have liked, and only just noticed some interesting new syntax changes. Specifically, from [this SO answer](https://stackoverflow.com/questions/1269795/unusual-speed-difference-between-python-and-c/1271927#1271927), function parameter annotations:
```
def digits(x:'nonnegative number') -> "yields number's digits":
# ...
```
Not knowing anything about this, I thought it could maybe be used for implementing static typing in Python!
After some searching, there seemed to be a lot of discussion regarding (entirely optional) static typing in Python, such as that mentioned in [PEP 3107](http://www.python.org/dev/peps/pep-3107/) and ["Adding Optional Static Typing to Python"](http://www.artima.com/weblogs/viewpost.jsp?thread=85551) (and [part 2](http://www.artima.com/weblogs/viewpost.jsp?thread=86641)).
...but I'm not clear how far this has progressed. Are there any implementations of static typing using the parameter annotation? Did any of the parameterised-type ideas make it into Python 3? | Thanks for reading my code!
Indeed, it's not hard to create a generic annotation enforcer in Python. Here's my take:
```
'''Very simple enforcer of type annotations.
This toy super-decorator can decorate all functions in a given module that have
annotations so that the type of input and output is enforced; an AssertionError is
raised on mismatch.
This module also has a test function func() which should fail and logging facility
log which defaults to print.
Since this is a test module, I cut corners by only checking *keyword* arguments.
'''
import sys
log = print
def func(x:'int' = 0) -> 'str':
'''An example function that fails type checking.'''
return x
# For simplicity, I only do keyword args.
def check_type(*args):
param, value, assert_type = args
log('Checking {0} = {1} of {2}.'.format(*args))
if not isinstance(value, assert_type):
raise AssertionError(
'Check failed - parameter {0} = {1} not {2}.'
.format(*args))
return value
def decorate_func(func):
def newf(*args, **kwargs):
for k, v in kwargs.items():
check_type(k, v, ann[k])
return check_type('<return_value>', func(*args, **kwargs), ann['return'])
ann = {k: eval(v) for k, v in func.__annotations__.items()}
newf.__doc__ = func.__doc__
newf.__type_checked = True
return newf
def decorate_module(module = '__main__'):
'''Enforces type from annotation for all functions in module.'''
d = sys.modules[module].__dict__
for k, f in d.items():
if getattr(f, '__annotations__', {}) and not getattr(f, '__type_checked', False):
log('Decorated {0!r}.'.format(f.__name__))
d[k] = decorate_func(f)
if __name__ == '__main__':
decorate_module()
# This will raise AssertionError.
func(x = 5)
```
Given this simplicity, it's strange at first sight that this thing is not mainstream. However, I believe there are good reasons why it's **not as useful as it might seem**. Generally, type checking helps because if you add an integer and a dictionary, chances are you made some obvious mistake (and if you meant something reasonable, it's still *better to be explicit than implicit*).
But in real life you often mix quantities of the same **computer type** as seen by the compiler but of clearly different **human type**; for example, the following snippet contains an obvious mistake:
```
height = 1.75 # Bob's height in meters.
length = len(sys.modules) # Number of modules imported by program.
area = height * length # What's that supposed to mean???
```
Any human should immediately see the mistake in the above line, provided they know the 'human type' of the variables `height` and `length`, even though to the computer it looks like a **perfectly legal** multiplication of `int` and `float`.
There's more that can be said about possible solutions to this problem, but enforcing 'computer types' is apparently a half-solution, so, at least in my opinion, it's *worse than no solution at all*. It's the same reason why **Systems Hungarian** is a terrible idea while **Apps Hungarian** is a great one. There's more in the very informative **[post by Joel Spolsky](http://www.joelonsoftware.com/articles/Wrong.html)**.
Now if somebody were to implement some kind of Pythonic third-party library that would automatically assign to real-world data its **human type** and then took care to transform that type like `width * height -> area` and enforce that check with function annotations, I think that would be type checking people could really use! | As mentioned in that PEP, static type checking is one of the possible applications that function annotations can be used for, but they're leaving it up to third-party libraries to decide how to do it. That is, there isn't going to be an official implementation in core Python.
As far as third-party implementations are concerned, there are some snippets (such as <http://code.activestate.com/recipes/572161/>), which seem to do the job pretty well.
EDIT:
As a note, I want to mention that checking behavior is preferable to checking type; therefore, I think static typechecking is not so great an idea. My answer above is aimed at answering the question, not because I would do typechecking myself in such a way. | Python 3 and static typing | [
"",
"python",
"python-3.x",
"static-typing",
""
] |
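The "human type" idea at the end of the accepted answer above can be sketched as a small wrapper that tags values with a unit and refuses nonsensical arithmetic. This is a toy illustration, not a real library; the `Quantity` class and its unit table are invented for the example:

```python
class Quantity:
    """A value tagged with a human-level unit; arithmetic checks the tags."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __mul__(self, other):
        if not isinstance(other, Quantity):
            return Quantity(self.value * other, self.unit)
        # Only a few unit combinations make sense; reject the rest.
        allowed = {("m", "m"): "m^2"}
        result_unit = allowed.get((self.unit, other.unit))
        if result_unit is None:
            raise TypeError(f"cannot multiply {self.unit} by {other.unit}")
        return Quantity(self.value * other.value, result_unit)

height = Quantity(1.75, "m")        # Bob's height in meters
length = Quantity(30, "modules")    # number of modules imported

area = height * Quantity(2.0, "m")  # fine: m * m -> m^2
try:
    height * length                 # the 'obvious mistake' from the answer
except TypeError as e:
    print(e)                        # → cannot multiply m by modules
```

Both multiplications are perfectly legal `float * int` to the interpreter; only the unit tags expose the second one as nonsense, which is exactly the point made about `height * length` above.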
Using jquery, I currently append html to a div on a click event. The following code allows me to fade in only the appended portion of the div:
```
var html = "..";
$('<div></div>').appendTo("#id").hide().append(html).fadeIn('slow');
```
This portion works perfectly. But how can I later remove (fade out) only the appended portion? I tried hacking this by storing the html *prior* to the appending, and then simply hiding everything and showing the stored html. But this does not work well when the same procedure is reused for several divs on the same page (and this seems like poor implementation). Is there a good way to do this?
Just to give an idea of why I need this: Think of a blog type page where for every article on the page there are several comments with only x amount showing by default: the click event fetches the remaining comments and displays them, and then toggling the button again removes the appended comments and sends it back to the original state. | I'd just set and clear the html with '.html()' ...
-- edit
To be clearer, have an area laid out specifically for the addition of these comments:
```
<div id='commentarea1'></div>
```
etc. | `empty()` is always an option:
`jQuery('#main').empty();` | Removing appended html using jQuery? | [
"",
"javascript",
"jquery",
"html",
""
] |
I'm wondering if you guys are aware of any articles that show how to make code fully Unicode? The reason I ask is that I'm dealing with the WinAPI right now, and it seems that everything is supposed to be Unicode, like L"blabla". Functions that I've encountered won't work properly when simply using a standard string, for example.
Thanks! | When one of my projects needs to be compiled with UNICODE on and off, I usually use the following definition to create an STL string that uses TCHAR instead of CHAR and wchar\_t:
```
#ifdef _UNICODE
typedef std::wstring tstring;
#else
typedef std::string tstring;
#endif
```
or the following may also work:
```
typedef std::basic_string<TCHAR> tstring;
```
In my whole project I will then define all strings as tstring and use the \_T() macro to create the strings correctly.
When you then call a WIN32 API just use the .c\_str() method on the string. | Regarding your last statement, if you're talking about std::string instead use std::wstring.
Toto has already answered the key question: just use `L""`, WCHAR/wchar\_t, and wstring everywhere that you'd normally use `""`, char, and string.
If you find yourself needing to convert between Unicode and ANSI, there lie the dragons: very evil dragons that will eat you alive if you don't understand code pages, UTF-8, and so on. But for most types of applications this is the 2% case, if that. The rest is easy as long as you stay all-Unicode. | tutorials about WinAPI unicode? | [
"",
"c++",
"winapi",
"unicode",
""
] |