Maybe you forgot to associate the tooltip with the control using [SetToolTip][1].
[1]: http://msdn.microsoft.com/en-us/library/system.windows.forms.tooltip.settooltip.aspx |
It depends on the size of the project. The throughput is low and the latency is high, so you're going to get hit every which way, but due to the latency you'll be hit harder if you have a lot of little files rather than a few large ones.
Have you considered simply carrying around a Git or other distributed repository and updating the machine repositories as you move around? Then you can compile locally and treat the drive as a roving server. Since only changes will be moved across, it should be faster, and your code will be 'backed up' in more places.
If you forget the drive, it breaks, or is lost/stolen, then you can still sit down at a PC and program with no code missing if you're at the last PC you used, or very little code missing (which will be updated later with a resync anyway).
And it's just a hop skip and a jump away from simply using the network to move the changes between the systems if you don't want to carry the drive around later.
-Adam |
Not only is `System.Collections.ObjectModel.Collection<T>` a good bet, but in the help docs there's [an example][1] of how to override its various protected methods in order to get notification. (Scroll down to Example 2.)
[1]: http://msdn.microsoft.com/en-us/library/ms132397.aspx |
It depends mostly on how much the repository is used. With one user checking in once a day and a branch/merge/etc. operation once a week, you probably don't need to run it more than once a year.
With several dozen developers working on several dozen projects each checking in 2-3 times a day, you might want to run it nightly.
It won't hurt to run it more frequently than needed, though.
What I'd do is run it now; then, a week from now, measure disk utilization, run it again, and measure disk utilization once more. If the size drops 5%, run it once a week. If it drops more, run it more frequently; if it drops less, run it less frequently.
-Adam |
If you are creating your menu items using the <a href="http://msdn.microsoft.com/en-us/library/system.windows.forms.menuitem.aspx" title="MenuItem Members">System.Windows.Forms.MenuItem</a> class, you won't have a "ToolTipText" property.
You should use the <a href="http://msdn.microsoft.com/en-us/library/system.windows.forms.toolstripmenuitem.aspx" title="ToolStripMenuItem class">System.Windows.Forms.ToolStripMenuItem</a> class, which is new as of .NET Framework 2.0 and DOES include the "ToolTipText" property. |
My guess here is that, because the data imported successfully, the field is actually a varchar or some other character field; importing into a numeric field would probably have failed. Here is a test case I ran using a pure MySQL solution.
1. The table is just a single column (alpha) that is a varchar.
mysql> desc t;
+-------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| alpha | varchar(15) | YES | | NULL | |
+-------+-------------+------+-----+---------+-------+
2. Add a record
mysql> insert into t values('"1,000,000"');
Query OK, 1 row affected (0.00 sec)
mysql> select * from t;
+-------------+
| alpha |
+-------------+
| "1,000,000" |
+-------------+
3. Update statement.
mysql> update t set alpha = replace( replace(alpha, ',', ''), '"', '' );
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> select * from t;
+---------+
| alpha |
+---------+
| 1000000 |
+---------+
So in the end the statement I used was:
UPDATE table
SET field_name = replace( replace(field_name, ',', ''), '"', '' );
I looked at the [MySQL Documentation](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_replace) and it didn't look like regular-expression find _and replace_ is supported. You could, though, like Eldila, use a regular expression for the find and then an alternative solution for the replace.
[@Eldila](http://stackoverflow.com/questions/7917/remove-quotes-and-commas-from-a-string-in-mysql#8135): I didn't quite understand your remark about what I said. Did you mean to say "Please _take_ joe seriously?" Or did you mean something else?
Also be careful with `s/"(\d+),(\d+)"/$1$2/`, because the number may have more than a single comma, for instance "1,000,000"; you're going to want a global replace (in Perl that is `s///g`). But even with a global replace, the replacement resumes where the last match ended (unless Perl is different), and would miss every other comma-separated group. A possible solution is to make the first `(\d+)` optional, like so: `s/(\d+)?,(\d+)/$1$2/g`; in this case I would need a second find-and-replace to strip the quotes.
Here are some Ruby examples of the regular expressions acting on just the string "1,000,000". Notice there are NO double quotes inside the string; this is just the number itself.
>> "1,000,000".sub( /(\d+),(\d+)/, '\1\2' )
# => "1000,000"
>> "1,000,000".gsub( /(\d+),(\d+)/, '\1\2' )
# => "1000,000"
>> "1,000,000".gsub( /(\d+)?,(\d+)/, '\1\2' )
# => "1000000"
>> "1,000,000".gsub( /[,"]/, '' )
# => "1000000"
>> "1,000,000".gsub( /[^0-9]/, '' )
# => "1000000" |
You might also try:
sys.stdout.write("\r%2d%%" % percent)
sys.stdout.flush()
Using a single carriage return at the beginning of your string rather than several backspaces. Your cursor will still blink, but it'll blink after the percent sign rather than under the first digit, and with one control character instead of three you may get less flicker. |
Separating concerns with LINQ to SQL and DTOs |
|c#|dto-mapping| |
I recently started a new WebForms project and decided to separate the business classes from any DBML references. My business layer classes instead access discrete data layer methods and are returned collections of DTOs. So the data layer might project DTOs like the following:
(from c in dataContext.Customers
where c.Active == true
select new DTO.Customer
{
CustomerID = c.CustomerID,
Name = c.CustomerName,
...
}).ToList()
Although building the DTO objects adds work, this feels like a better approach than a tight binding between the Business & Data layers, and it means I can test the Business layer without a database being present.
My question is: is this good practice? Is there a way of generating the DTOs (maybe via SqlMetal), and what other problems might I strike as the project progresses? |
|c#|linq|dto-mapping| |
You could use a service like www.unfuddle.com to set up a free SVN or Git repository.
We use Unfuddle and it's really great. There are free and paid versions (depending on your needs).
Or, you could of course set up a local copy. There are plenty of tutorials to be found via Google for that: http://www.google.com/search?rlz=1C1GGLS_enUS291&aq=f&sourceid=chrome&ie=UTF-8&q=set+up+svn |
Two concerns to keep in mind:
1. Your example involved placing a password on a command line. That cleartext representation of a secret could be a security vulnerability.
2. When enumerating processes, ask yourself which processes you really want to enumerate. All users, or just the current user? What if the current user is logged in twice (two desktops)?
|
As specified in [this thread][1], distributed VCSs (Git, Mercurial) are a better model than centralized ones: branches are easy to create and merge, there is no need to set up a special server or to have network access to work, among other advantages. And even if you work alone, a DVCS makes it very easy to bring other people into your projects if the need arises.
Anyhow, to answer your question directly: one way to set up SVN would be to have a repository per project and, depending on whether stored procedures, scripts and libraries are shared, create a directory in each project tree for scripts and stored procedures, or a full repository for the shared code.
[1]: http://stackoverflow.com/questions/49601/is-there-a-barebones-windows-version-control-system-thats-suitable-for-only-one#49722 |
Usefulness of SQL Server "with encryption" statement |
|sql-server| |
Recently a friend and I were talking about securing stored procedure code in a SQL server database.
From distant memory, I'm pretty certain that "with encryption" is incredibly easily broken in all versions of SQL Server, however he said it has been greatly improved in SQL 2005. As a result I have not seriously considered it as a security option in any systems I have ever worked on.
So in what scenarios could "with encryption" be used, and when should it be avoided at all costs? |
# Shuffle card deck evenly in C++
#include <algorithm>
class Deck {
// each card is 8-bit: 4-bit for suit, 4-bit for value
// suits and values are extracted using bit-magic
char cards[52];
public:
// ...
void shuffle() {
std::random_shuffle(cards, cards + 52);
}
// ...
};
Complexity: Linear in N. Exactly 51 swaps are performed. See http://www.sgi.com/tech/stl/random_shuffle.html
**Testing**:
// ...
int main() {
typedef std::map<std::pair<size_t, Deck::value_type>, size_t> Map;
Map freqs;
Deck d;
const size_t ntests = 100000;
// compute frequencies of events: card at position
for (size_t i = 0; i < ntests; ++i) {
d.shuffle();
size_t pos = 0;
for(Deck::const_iterator j = d.begin(); j != d.end(); ++j, ++pos)
++freqs[std::make_pair(pos, *j)];
}
// if Deck.shuffle() is correct then all frequencies must be similar
for (Map::const_iterator j = freqs.begin(); j != freqs.end(); ++j)
std::cout << "pos=" << j->first.first << " card=" << j->first.second
<< " freq=" << j->second << std::endl;
}
|
Since you mentioned fork() I assume you're on a Unix-like system, in which case <a href="http://en.wikipedia.org/wiki/POSIX_Threads">POSIX threads</a> (usually referred to as pthreads) are what you want to use.
Specifically, pthread_create() is the function you need to create a new thread. Its arguments are:
    int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                       void *(*start_routine)(void *), void *arg);
The first argument is a pointer through which the thread ID is returned. The second argument is the thread attributes, which can be NULL unless you want to start the thread with non-default attributes (such as a specific stack size or priority). The third argument is the function executed by the thread. The fourth argument is the single argument passed to the thread function when it is executed. |
I think the screencasts and short books from [Peepcode][1] are really good. They have screencasts to get you started, and more for when you get advanced.
[1]: http://peepcode.com/ |
Why do we need entity objects? |
|sql|database|orm|entities| |
Ok, I realize I might be downvoted into oblivion for this question, especially given my stance on the matter, but I really need to see some honest, thoughtful debate on the merits of the currently accepted ***enterprise application*** design paradigm.
I am not convinced that entity objects should exist.
By entity objects I mean the typical things we tend to build for our applications, like "Person", "Account", "Order", etc.
My current design philosophy is this:
- All database access must be accomplished via stored procedures.
- Whenever you need data, call a stored procedure and iterate over a SqlDataReader or the rows in a DataTable
(Note: I have also built enterprise applications with J2EE; Java folks, please substitute the equivalent for my .NET examples)
I am not anti-OO. I write lots of classes for different purposes, just not entities. I will admit that a large portion of the classes I write are static helper classes.
I am not building toys. I'm talking about large, high volume transactional applications deployed across multiple machines. Web applications, windows services, web services, b2b interaction, you name it.
I have used OR Mappers. I have written a few. I have used the J2EE stack, CSLA, and a few other equivalents. I have not only used them but actively developed and maintained these applications in production environments.
I have come to the battle-tested conclusion that entity objects are getting in our way, and our lives would be *so* much easier without them.
Consider this simple example: you get a support call about a certain page in your application that is not working correctly, maybe one of the fields is not being persisted like it should be. With my model, the developer assigned to find the problem opens *exactly 3 files*. An ASPX, an ASPX.CS and a SQL file with the stored procedure. The problem, which might be a missing parameter to the stored procedure call, takes minutes to solve. But with any entity model, you will invariably fire up the debugger, start stepping through code, and you may end up with 15-20 files open in Visual Studio. By the time you step down to the bottom of the stack, you forgot where you started. We can only keep so many things in our heads at one time. Software is incredibly complex without adding any unnecessary layers.
Development complexity and troubleshooting are just one side of my gripe.
Now let's talk about scalability.
Do developers realize that each and every time they write or modify any code that interacts with the database, they need to do a thorough analysis of the exact impact on the database? And not just on the development copy, I mean on a mimic of production, so you can see that the additional column you now require for your object just invalidated the current query plan, and a report that was running in 1 second will now take 2 minutes, just because you added a single column to the select list? And it turns out that the index you now require is so big that the DBA is going to have to modify the physical layout of your files?
If you let people get too far away from the physical data store with an abstraction, they will create havoc with an application that needs to scale.
I am not a zealot. I can be convinced if I am wrong, and maybe I am, since there is such a strong push towards Linq to Sql, ADO.NET EF, Hibernate, J2EE, etc. Please think through your responses, if I am missing something I really want to know what it is, and why I should change my thinking.
***[Edit]***
It looks like this question is suddenly active again, so now that we have the new comment feature I have commented directly on several answers. Thanks for the replies, I think this is a healthy discussion.
I probably should have been more clear that I am talking about enterprise applications. I really can't comment on, say, a game that's running on someone's desktop, or a mobile app.
One thing I have to put up here at the top in response to several similar answers: orthogonality and separation of concerns often get cited as reasons to go entity/ORM. Stored procedures, to me, are the best example of separation of concerns that I can think of. If you disallow all other access to the database, other than via stored procedures, you could in theory redesign your entire data model and not break any code, so long as you maintained the inputs and outputs of the stored procedures. They are a perfect example of programming by contract (just so long as you avoid "select *" and document the result sets).
Ask someone who's been in the industry for a long time and has worked with long-lived applications: how many application and UI layers have come and gone while a database has lived on? How hard is it to tune and refactor a database when there are 4 or 5 different persistence layers generating SQL to get at the data? You can't change anything! ORMs or any code that generates SQL ***lock your database in stone***. |
Visual Studio is built on top of [MSBuild][1] configuration files. You can consider *.proj and *.sln files as makefiles. They allow you to fully customize the build process.
[1]: http://msdn.microsoft.com/en-us/library/0k6kkbsd.aspx |
You can use NAnt to build the projects individually, thus replacing the solution, and have one coding solution and no build solutions.
One thing to keep in mind is that the solution and .csproj files from VS 2005 and up are MSBuild scripts. So if you get acquainted with MSBuild you may be able to leverage the existing files, to make VS easier and to make your deployment easier. |
This method uses the actual file modification date, to figure out which one is the latest file:
@echo off
for /F %%i in ('dir /B /O:-D *.txt') do (
call :open "%%i"
exit /B 0
)
:open
start "dummy" "%~1"
exit /B 0
This method, however, chooses the last file in alphabetic order (or the first one, in reverse-alphabetic order), so if the filenames are consistent - it will work:
@echo off
for /F %%i in ('dir /B *.txt^|sort /R') do (
call :open "%%i"
exit /B 0
)
:open
start "dummy" "%~1"
exit /B 0
You actually have to choose which method is better for you. |
In the VS menu you have Build -> Rebuild Solution |
My #1 way to do this: add whitespace to the top of the web.config file, after the XML declaration tag.
It forces the node to re-cache and recompile. We even have a page deep in the admin area called Flush.aspx that does it for us. |
While it's technically possible, it's not a very friendly solution within Visual Studio. It will be fighting you the entire time.
I recommend you take a look at [NAnt][1]. It's a very robust build system where you can do basically anything you need to.
Our NAnt script does this on every build:
1. Migrate the database to the latest version
2. Generate C# entities off of the database
3. Compile every project in our "master" solution
4. Run all unit tests
5. Run all integration tests
Additionally, our build server leverages this and adds 1 more task, which is generating Sandcastle documentation.
If you don't like XML, you might also take a look at [Rake][2] (ruby), [Bake/BooBuildSystem][3] (Boo), or [Psake][4] (PowerShell)
[1]: http://nant.sf.net
[2]: http://rake.rubyforge.org/
[3]: http://code.google.com/p/boo-build-system/
[4]: http://codebetter.com/blogs/james.kovacs/archive/2008/06/27/introducing-psake.aspx |
It's always best to just actually run a build after making .cs changes.
Where are you running it? Is this for debugging or production? |
Operating System Overheads while profiling? |
|profile| |
I am profiling C code in Microsoft VS 2005 on an Intel Core 2 Duo platform.
I measure the time (secs:millisecs) consumed by my function. But I have some doubts about the accuracy of this measurement, as the operating system will not continuously run my application, but will instead schedule other apps/services in between the execution of my code. (Although I have no major applications running while I do the profile run, Windows still has a lot of code of its own which it runs by preempting my app.) Because of all this, I believe the profiling number (time taken by my app to run) is not accurate.
So my question is: is there any way to find out the operating system overhead, i.e. scheduling overhead, on a typical Windows system (I run Windows XP)? E.g. if my application says it ran for 60 milliseconds, how much of those 60 msec were really used by my app, and how much time was it sitting idle due to being preempted by some other task scheduled by the OS?
or
At least, is there any ballpark number for such OS overhead, based on your experience doing something similar?
|
How do you determine the size of a file in C? |
|c|io| |
How can I figure out the size of a file, in bytes?
unsigned int fsize(char* file){
//what goes here?
} |
I have not yet developed using WPF, but I would assume that it's the same as most other applications in that you want to keep the UI code as light as possible. A number of design patterns may be used for this, such as the obvious [MVC](http://en.wikipedia.org/wiki/Model-view-controller) and [Façade](http://en.wikipedia.org/wiki/Facade_pattern). I personally always try to keep the objects travelling between the UI and BL layers as light as possible, keeping them to primitives if I can.
This then helps me focus on improving the UI layer without worrying about anything going on once I throw my (primitive) data back.
I hope I understood your question correctly, and sorry I cannot offer more contextual help with WPF. |
If you're fine with using POSIX (`stat` is not part of the standard C library):
    #include <sys/types.h>
    #include <sys/stat.h>
    off_t fsize(const char *file) {
        struct stat st;
        if (stat(file, &st) == 0) {
            return st.st_size;
        }
        return 0;
    } |
Matt's solution should work, except that it's C++ instead of C, and the initial tell shouldn't be necessary.
    #include <stdio.h>
    unsigned int fsize(char *file)
    {
        FILE *f = fopen(file, "rb");
        if (f == NULL)
            return 0;
        fseek(f, 0, SEEK_END);
        unsigned int size = (unsigned int)ftell(f);
        fclose(f);
        return size;
    }
Fixed your brace for you, too. ;) |
Here is how I was able to accomplish this. With help from [Chris Hays][1]
Size the image to be as big as you would want it on the report, change "Sizing" property to "Clip".
Dynamically set the image's left padding using an expression:
=CStr(Round((4.625-System.Drawing.Image.FromStream(System.Net.WebRequest.Create(Parameters!LogoURL.Value).GetResponse().GetResponseStream()).Width/96)/2,2)) & "in"
Dynamically set the image's top padding using an expression:
=CStr(Round((1.125-System.Drawing.Image.FromStream(System.Net.WebRequest.Create(Parameters!LogoURL.Value).GetResponse().GetResponseStream()).Height/96)/2,2)) & "in"
The modification I made to Chris's code was to swap in the dimensions of my image element on the report (my image was 4.625x1.125 - see the numbers above).
Most importantly, I also needed to get the **stream from a URL**, whereas Chris was grabbing it **from the database**. I used WebRequest.Create().GetResponse().GetResponseStream() to do so.
So far so good - I Hope that helps!
[1]: http://blogs.msdn.com/chrishays/archive/2004/10/27/CenteredImages.aspx |
It always comes down to the right tool for the job, so it really depends on what you are trying to accomplish.
However, as a general rule, you're right that CLR procs have greater overhead and will never match T-SQL's performance on set-based operations. My guideline is: do it all in T-SQL unless what you need becomes overly complicated in T-SQL. Then, try harder to get the T-SQL approach to work. :-)
CLR procs are great and do have their place, but their use should be the exception, not the rule. |
Yes, I have written a Windows Forms control that wraps DirectX 9.0 and provides direct pixel level manipulation of the video surface.
I actually wrote another post on StackOverflow asking if there are other better approaches, search on "Unsafe C# and pointers for 2d rendering, good or bad?"
While it is relatively high performance, it requires the unsafe compiler option as it uses pointers to access the memory efficiently. Hence the reason for this earlier post.
This is a high-level outline of the required steps:

 1. Download the DirectX SDK.
 2. Create a new C# WinForms project and reference the installed Microsoft DirectX assembly.
 3. Initialize a new DirectX Device object with the Presentation Parameters (windowed, back buffering, etc.) you require.
 4. Create the Device, taking care to record the surface "Pitch" and current display mode (bits per pixel).
 5. When you need to display something, Lock the back buffer surface and store the returned pointer to the start of surface memory.
 6. Using pointer arithmetic, calculate the actual pixel position in the data based on the surface pitch, bits per pixel and the actual x/y pixel coordinate.
 7. In my case, for simplicity I am sticking to 32bpp, meaning setting a pixel is as simple as: `*(surfacePointer + (y * pitch + x)) = Color.FromArgb(255, 0, 0).ToArgb();`
 8. When finished drawing, Unlock the back buffer surface and Present the surface.
 9. Repeat from step 5 as required.

Be aware that taking this approach you need to be very careful about checking the current display mode (pitch and bits per pixel) of the target surface. Also you will need to have a strategy in place to deal with window resizing or changes of screen format while your program is running.
You could try looking into WPF, using Visual Studio and/or Expression Blend. I'm not sure how sophisticated you're trying to get, but it should be able to handle a simple editor. Check out this [MSDN Article](http://msdn.microsoft.com/en-us/library/ms742562.aspx) for more info. |
AFAIK, ANSI C doesn't define threading, but there are various libraries available.
If you are running on Windows, link to msvcrt and use _beginthread or _beginthreadex.
If you are running on other platforms, check out the pthreads library (I'm sure there are others as well).
|
round(n,1)+epsilon |
You can't help the way it's stored, but at least formatting works correctly:
'%.1f' % round(n, 1) gives you '5.6' |
Have a look at the Ekiga project at [http://www.Ekiga.org][1].
This provides audio and/or video chat between users using the standard SIP (Session Initiation Protocol) over the Internet. Like most SIP clients, it can also be used to make calls to and receive calls from the telephone network, but this requires an account with a commercial service provider (there are many, and fees are quite reasonable compared to normal phone line accounts).
Ekiga uses the open source OPAL library to implement SIP communications (OPAL has support for several VoIP and video over IP standards - see [www.opalvoip.org][2] for more info).
[1]: http://www.Ekiga.net "Ekiga Project"
[2]: http://www.opalvoip.org/ |
How can I make an exact copy of an XML node's children with XSLT?
|xml|xslt|xhtml| |
My problem is that my XML document contains snippets of XHTML within it and while passing it through an XSLT I would like it to render those snippets without mangling them.
I've tried wrapping the snippet in a CDATA section but it doesn't work, since the less-than and greater-than characters are translated to `&lt;` and `&gt;` as opposed to being echoed directly.
What's the XSL required for doing this? |
[nProf][1] is a good tool if you're looking for something free. It's kind of finicky at points, and a little buggy, but if you're on a tight budget, it'll do the job.
[1]: http://nprof.sourceforge.net/Site/Description.html |
You can switch the data type to an integer:
>>> n = 5.59
>>> int(n * 10) / 10.0
5.5
>>> int(n * 10 + 0.5)
56
And then display the number by inserting the locale's decimal separator.
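As a sketch, the multiply-and-truncate trick above can be wrapped in a small helper (the function name is my own, not anything standard):

```python
def round_half_up(n, digits=1):
    """Round by scaling up, adding 0.5 and truncating back to an integer.

    Note: int() truncates toward zero, so this simple version only
    behaves as expected for n >= 0.
    """
    scale = 10 ** digits
    return int(n * scale + 0.5) / float(scale)

print(round_half_up(5.59))     # displays as 5.6
print(round_half_up(5.59, 0))  # displays as 6.0
```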
However, [Jimmy's answer](http://stackoverflow.com/questions/56820/round-in-python-doesnt-seem-to-be-rounding-properly#56833) is better. |
You can use the string format operator `%`, similar to sprintf.
mystring = "%.2f" % 5.5999
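A couple of variations as a quick sketch - the `%` operator rounds for display without changing the stored value:

```python
n = 5.5999
print("%.2f" % n)    # two decimal places: 5.60
print("%.1f" % n)    # one decimal place: 5.6
print("%10.2f" % n)  # right-aligned in a 10-character field
```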
|
[Leon Timmermans][1] got exactly what I was looking for. I would add one small improvement for those who come here later:
/(?<!\\)(?:\\\\)*\\q/
The additional `?:` at the beginning of the `(\\\\)` group makes it not saved into any match-data. I can't imagine a scenario where I'd want the text of that saved.
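For anyone wanting to try this outside Perl, the same pattern works in Python's `re` module; a quick sketch of the behaviour (the test strings are my own):

```python
import re

# Match \q only when it is preceded by an even number of backslashes,
# i.e. the backslash before q is not itself escaped.
pattern = re.compile(r'(?<!\\)(?:\\\\)*\\q')

assert pattern.search(r'foo \q bar')            # \q unescaped: match
assert pattern.search(r'foo \\\q bar')          # \\ pair, then \q: match
assert pattern.search(r'foo \\q bar') is None   # \\ then a plain q: no match
```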
[1]: http://stackoverflow.com/users/4727/leon-timmermans |
You get '5.6' if you do `str(round(n, 1))` instead of just `round(n, 1)`. |
Floating point math is vulnerable to slight, but annoying, precision inaccuracies. If you can work with integer or fixed point, you will be guaranteed precision. |
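In Python, the standard `decimal` module is one way to get that guaranteed precision; a minimal sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

# Decimal('5.59') is stored exactly, unlike the binary float 5.59,
# so rounding to one decimal place gives exactly 5.6
n = Decimal('5.59')
rounded = n.quantize(Decimal('0.1'), rounding=ROUND_HALF_UP)
print(rounded)  # 5.6
```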
|sql-server|encryption| |
ASP.NET 3.5 Without Microsoft SQL Server - What do I lose? |
|asp.net|mysql| |
I was just assigned to do a CMS using ASP.NET 3.5 and MySQL. I am kind of new to ASP.NET development (though quite proficient with C#) and **I am wondering what major ASP.NET and general .NET features I am losing when I don't have the option to use Microsoft SQL Server.**
I know already from quick Googling that I lose LINQ to SQL (and I was really looking forward to using this to build my model layer!), but I am not sure what other handy features I will lose. Since I've been relying on ASP.NET tutorials which assume that you use MS SQL Server, I feel a chunk of my ASP.NET knowledge just became invalid.
Thanks!
|
|asp.net|mysql|.net-3.5| |
Spartan Programming |
|language-agnostic|style|spartan| |
I really enjoyed [Jeff's post][1] on [Spartan Programming][2]. I agree that code like that is a joy to read. Unfortunately, I'm not so sure it would necessarily be a joy to work with.
For years I have read about and adhered to the "one-expression-per-line" practice. I have fought the good fight and held my ground when many programming books countered this advice with example code like:
while (bytes = read(...))
{
...
}
while (GetMessage(...))
{
...
}
Recently, I've advocated one expression per line for more practical reasons - debugging and production support. Getting a log file from production that claims a NullPointer exception at "line 65" which reads:
ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber());
is frustrating and entirely avoidable. Short of grabbing an expert with the code that can choose the "most likely" object that was null ... this is a real practical pain.
One expression per line also helps out quite a bit while stepping through code. I practice this with the assumption that most modern compilers can optimize away all the superfluous temp objects I've just created ...
I try to be neat - but cluttering my code with explicit objects sure feels laborious at times. It does not generally make the code easier to browse - but it really has come in handy when tracing things down in production or stepping through my or someone else's code.
What style do *you* advocate and can you rationalize it in a practical sense?
[1]: http://www.codinghorror.com/blog/archives/001148.html
[2]: http://ssdl-wiki.cs.technion.ac.il/wiki/index.php/Spartan_programming
|
The tooltip is set manually by:
testToolStripMenuItem2.ToolTipText = "My tooltip text";
The menu item can, for example, be part of this menu constellation: a menu strip with a menu item and a sub-menu item. (This plumbing code is generated automatically for you in the code-behind designer file if you use Visual Studio.)
MenuStrip menuStrip1;
ToolStripMenuItem testToolStripMenuItem; // Menu item on the menu bar
menuStrip1.Items.Add(testToolStripMenuItem);
ToolStripMenuItem testToolStripMenuItem2; // Sub-menu item
testToolStripMenuItem.DropDownItems.Add(testToolStripMenuItem2);
|
@Chris Karcher,
Maybe I misunderstood your problem, but why do you need to use the Tooltip class? You can assign your text to the [ToolTipText][1] property and it will be shown to the user.
[1]: http://msdn.microsoft.com/en-us/library/aa978609%28VS.71%29.aspx |
Namespaces are a logical grouping, while projects are a physical grouping.
Why is this important? Think about .NET 2.0, 3.0, and 3.5. .NET 3.0 is basically .NET 2.0 with some extra assemblies, and 3.5 adds a few more assemblies. So for instance, .NET 3.5 adds the `DataPager` control, which is a web control and should be grouped in `System.Web.UI.WebControls`. If namespaces and physical locations had to be identical, it couldn't be grouped there, because it lives in a different assembly.
So having namespaces as independent logical entities means you can have members of several different assemblies which are all logically grouped together because they're meant to be used in conjunction with each other.
(Also, there's nothing wrong with having your physical and logical layouts pretty similar.) |
One word of warning: Safari on Windows does not support XSLT. |
Took me a while but I finally [found it][1]. If you want to use scaffolding and still have this kind of control what you have to do is execute the command from within your app directory:
grails install-templates
then you'll find the directory: "src/templates/scaffolding" and in that folder you'll find the .gsp templates and also the controller template: Controller.groovy where you should find this piece of code:
def list = {
if(!params.max) params.max = 10
[ ${propertyName}List: ${className}.list( params ) ]
}
and hopefully that answers your questions.
[1]: http://www.ibm.com/developerworks/java/library/j-grails03118/#N1038A |
Try StarPrint's [VSNETcodePrint][1]
[1]: http://starprint2000.com/ |
Here you go... (hope no-one beat me to it...) (You'll need to save the file as lasttext.bat or something)
This will open up / run the most recent .txt file (the FOR loop leaves the last entry of the date-sorted listing in the variable):
dir *.txt /b /od > systext.bak
FOR /F %%i in (systext.bak) do set sysRunCommand=%%i
call %sysRunCommand%
del systext.bak
Probably XP only. BEHOLD The mighty power of DOS.
Note that this takes the latest file by date - NOT by filename.
If you want the last filename alphabetically, change /od to /on .
If you want to sort on something else, add a "sort" command to the second line.
I am a committer on the BIRT project, so I am biased. BIRT provides a very well thought out report object model (ROM) and appropriate API for the various design and deploy function that is needed. In addition, BIRT provides the best multi-language support and the ability to separate development from design through the use of CSS.
BIRT can be embedded into your application for no license cost through the REAPI or it can be purchased through a couple of commercial offerings. |
**Suggestion**
Try running it on multi-CPU systems.
I agree that a memory profiler is the easiest way to get the information you are looking for. In addition to the two previously mentioned, I recommend JetBrains [dotTrace][1], which is both a performance profiler and a memory profiler.
If you want to do it yourself, and are willing to get pretty deep into the guts of the CLR, you can use the [.NET Profiling API][2], which is an unmanaged API that (as Microsoft says): "enables a profiler to monitor a program's execution by the common language runtime (CLR)." It's not exactly intended for casual use, but it does have an enormous amount of functionality.
[1]: http://www.jetbrains.com/profiler/index.html
[2]: http://msdn.microsoft.com/en-us/library/ms404386.aspx |
In my humble opinion:
When using PHP for web development, most of your connections will only "live" for the life of the page executing. A persistent connection is going to cost you a lot of overhead, as you'll have to put it in the session or some such thing.
99% of the time a single non-persistent connection that dies at the end of the page execution will work just fine.
The other 1% of the time, you probably should not be using PHP for the app, and there is no perfect solution for you.
|
I've used [bqbackup.com][1] for 1-2 years no problem. You can do a sync using rsync nightly. Wanted to add that their prices are dirt cheap, and I now have close to 1TB with them.
[1]: http://bqbackup.com |
What client(s) should be targeted in implementing an iCalendar export for events?
|outlook|gmail|icalendar|recurrence| |
[http://en.wikipedia.org/wiki/ICalendar][1]
I'm working to implement an export feature for events. The link above lists tons of clients that support the ICalendar standard, but the "three big ones" I can see are Apple's iCal, Microsoft's Outlook, and Google's Gmail.
I'm starting to get the feeling that each of these clients implements different parts of the "standard", and I'm unsure of what pieces of information we should be trying to export from the application so that someone can put it on their calendar (especially around recurrence).
For example, from what I understand Outlook doesn't support hourly recurrence.
Could any of you provide guidance on the "happy medium" here from a feature implementation standpoint?
Secondary question, if we decide to cut features from the export (such as hourly recurrence) because it isn't supported in Outlook, should we support it in the application as well? (it is a general purpose event scheduling application, with no business specific use in mind...so we really are looking for the happy medium).
Thanks,
Ian
[1]: http://en.wikipedia.org/wiki/ICalendar |
Ctrl+Alt+E (or Debug\Exceptions)
From there you can select which exceptions break. |
Printing from a service is a bad idea. Network printers are connected "per-user". You can mark the service to run as a particular user, but I'd consider that a bad security practice. You might be able to connect to a local printer, but I'd still hesitate before going this route.
The best option is to have the service store the data and have a user-launched application do the printing by asking the service for the data - or use a common location where the data is stored, like a database.
If you need to have the data printed at regular intervals, set up a scheduled task through the Task Scheduler. Launching a process from a service would require knowing the user name and password, which again is bad security practice.
As for the printing itself, using a third-party tool to generate the report will be the easiest approach.
You could investigate DoubleCommand, it may do what you need.
There's an experimental version that allows for different properties for different keyboards. |
Object initialization in C# (.Net) |