Q: How do I connect & administer an SQL Server database remotely over the Internet? How do I connect to an SQL Server remotely and administer my database objects? I need to connect to my database located on the web hosting company's server. I have the server name, IP address, my database username & password. I have an installation of SQL Server 2000 in my machine.
Can I use SQL Server Enterprise Manager for this? A step by step guide would be very helpful.
A: Yes, you can use Enterprise Manager (or SQL Server Management Studio, even if it's an Express version) to connect to any SQL Server (of the same or lower version as the tool you're using) you have TCP/IP connectivity to. Just launch it, enter the DNS host name or IP address in the 'Server Name' box and hit Connect.
Two things may prevent this from working:
*
*Your SQL Server isn't set up for TCP/IP connectivity. This is the default setting from version 2005 onwards, and can be changed using the SQL Server Configuration Manager.
*There is a firewall between you and your SQL Server blocking TCP/IP traffic. This is an entirely sensible construction: you do NOT want your database server to be available from the general Internet, as this is a huge security risk. In fact, if your hosting company allows this kind of access by default, I'd be looking for a different provider...
Anyway, what seems to be needed in your scenario is: a) the hosting company enabling TCP/IP on your SQL Server instance, b) them providing you with secure access to the IP address that instance is running on.
This will involve some kind of VPN or SSH port-forwarding solution, or at least an IP filter where only 'trusted' IP addresses can access the SQL Server (which is not a great solution, but can be used if nothing else can be implemented). You'll have to discuss these requirements with your hosting company.
A: Yes, you can do this, but how will depend on your hosting setup.
As Brannon suggests, you will need to open port 1433 on the firewall. This would be a dangerous thing to do on its own, so you would combine it with a VPN. It can be quite tricky to set this up, though, and you could end up blocking yourself from access to the server and needing the hosting company's help to reset it.
Another option would be some of the web management tools such as:
http://www.microsoft.com/downloads/details.aspx?familyid=C039A798-C57A-419E-ACBC-2A332CB7F959&displaylang=en
http://www.mylittleadmin.com/en/welcome.aspx
A: It is possible, but a lot of things need to align for you to be able to connect.
Enterprise Manager and SQL Management Studio just use a standard client connection to SQL. The default TCP/IP port is 1433, but the host needs to allow that port through the firewall. If you are using a named instance, then I believe you also need to be able to connect to port 1434.
Most administrators do not allow direct access to the SQL Server from outside the firewall. In that case, if you can connect to the host over VPN then you should be able to connect directly to the server with Enterprise Manager or SQL Management Studio.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: for-loop mechanism efficiency tips As I am using for-loops on large multi-dim arrays, any saving on the for-loop mechanism itself is meaningful.
Accordingly, I am looking for any tips on how to reduce this overhead.
e.g.: counting down using uint instead of int, with != 0 as the stop condition instead of > 0, supposedly lets the CPU do less work (heard it once, not sure it is always true)
A: Try to make your loops contiguous in memory, this will optimize cache usage. That is, don't do this:
for (int i = 0; i < m; i++)
    for (int j = 0; j < n; j++)
        s += arr[j][i];  /* inner loop jumps a whole row per step: cache-unfriendly */
*
*If processing images, convert two loops to one loop on the pixels with a single index.
*Don't make loops that will run zero times, as the pipeline is optimized to assume a loop will continue rather than end.
A: Have you measured the overhead? Do you know how much time is spent processing the for loops vs. how much time is spent executing your application code? What is your goal?
A: Loop-unrolling can be one way. That is:
for (i=0; i<N; i++) {
a[i]=...;
}
transforms into:
for (i=0; i<N; i+=4) {
a[i]=...;
a[i+1]=...;
a[i+2]=...;
a[i+3]=...;
}
You will need special handling when N is not a multiple of 4 in the example above.
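The special handling the answer mentions is usually a scalar cleanup loop after the unrolled body. A sketch (the function name and the fill operation are illustrative, not from the original):

```c
#include <stddef.h>

/* Unroll by 4; a cleanup loop then handles the 0-3 elements
   left over when n is not a multiple of 4. */
void fill_unrolled(int *a, size_t n, int value) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {  /* unrolled main body */
        a[i]     = value;
        a[i + 1] = value;
        a[i + 2] = value;
        a[i + 3] = value;
    }
    for (; i < n; i++)            /* remainder cleanup */
        a[i] = value;
}
```

Note the main loop's condition is `i + 4 <= n` rather than `i < n`, so it never overruns the array.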
A: This isn't a language-agnostic question; it depends highly not only on the language, but also on the compiler. I believe most compilers will compile these two equivalently:
for (int i = 0; i < 10; i++) { /* ... */ }
int i = 0;
while (i < 10) {
// ...
i++;
}
In most languages/compilers, the for loop is just syntactic sugar for the latter while loop. Foreach is another question again, and how it's implemented is highly dependent on the language/compiler, but it's generally less efficient than a normal for/while loop. How much more so is, again, language- and compiler-dependent.
Your best bet would probably be to run some benchmarks with several different variations on a theme and see what comes out on top.
Edit: To that end, the suggestions here will probably save you more time rather than worrying about the loop itself.
A: BTW, unless you need post-increment, you should always use the pre-increment operator. It is only a minor difference, but it is more efficient.
Internally this is the difference:
*
*Post Increment
i++;
is the same as:
int postincrement( int &i )
{
    int itmp = i;
    i = i + 1;
    return itmp;
}
*Pre Increment
++i;
is the same as:
int preincrement( int &i )
{
    i = i + 1;
    return i;
}
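A small sketch of the observable difference (the function name is illustrative). For a plain int loop counter whose result is discarded, a compiler emits identical code either way; the habit mainly pays off with C++ iterators, where post-increment must make a copy:

```c
/* i++ yields the old value; ++i yields the new one. */
int post_then_pre(void) {
    int i = 5;
    int a = i++;        /* a = 5, i becomes 6 */
    int b = ++i;        /* i becomes 7, b = 7 */
    return a * 10 + b;  /* 57 */
}
```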
A: First, don't sweat the small stuff. Details like counting up versus counting down are usually completely irrelevant in running time. Humans are notoriously bad at spotting areas in code that need to be sped up. Use a profiler. Pay little or no attention to any part of the loop that is not repeated, unless the profiler says otherwise. Remember that what is written in an inner loop is not necessarily executed in an inner loop, as modern compilers are pretty smart about avoiding unnecessary repetition.
That being said, be very wary of unrolling loops on modern CPUs. The tighter they are, the better they will fit into cache. In a high-performance application I worked on last year, I improved performance significantly by using loops instead of straight-line code, and tightening them up as much as I could. (Yes, I profiled; the function in question took up 80% of the run time. I also benchmarked times over typical input, so I knew the changes helped.)
Moreover, there's no harm in developing habits that favor efficient code. In C++, you should get in the habit of using pre-increment (++i) rather than post-increment (i++) to increment loop variables. It usually doesn't matter, but can make a significant difference, it doesn't make code less readable or writable, and won't hurt.
A: I agree with @Greg. First thing you need to do is put some benchmarking in place. There will be little point optimising anything until you prove where all your processing time is being spent. "Premature optimisation is the root of all evil"!
A: One important suggestion: move as much calculation to the outer loop as possible. Not all compilers can do that automatically. For example, instead of:
for row = 0 to 999
    for col = 0 to 999
        cell[row*1000+col] = row * 7 + col
use:
for row = 0 to 999
    x = row * 1000
    y = row * 7
    for col = 0 to 999
        cell[x+col] = y + col
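A C rendering of the same hoisting idea (the function signature and the parameterized sizes are illustrative; the original hardcodes 1000): x and y are invariant in the inner loop, so they are computed once per row.

```c
/* Hoists row * cols and row * 7 out of the inner loop. */
void fill_cells(int *cell, int rows, int cols) {
    for (int row = 0; row < rows; row++) {
        int x = row * cols;  /* hoisted index base */
        int y = row * 7;     /* hoisted value base */
        for (int col = 0; col < cols; col++)
            cell[x + col] = y + col;
    }
}
```

Many optimizers perform this loop-invariant code motion themselves, but writing it out makes the intent explicit and helps the compilers that don't.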
A: As your loops will have O(n^d) complexity (d=dimension), what really counts is what you put INTO the loop, not the loop itself. Optimizing a few cycles away in the loop framework from millions of cycles of an inefficient algorithm inside the loop is just snake oil.
A: By the way, is it good to use short instead of int in a for-loop if Int16 capacity is guaranteed to be enough?
A: I think most compilers would probably do this anyway; stepping down to zero should be more efficient, as a check for zero is very fast for the processor. Again, though, any compiler worth its weight would do this with most loops anyway. You need to look at what the compiler is doing.
A: There is not enough information to answer your question accurately. What are you doing inside your loops? Does the calculation in one iteration depend on a value calculated in a previous iteration? If not, you can almost cut your time in half by simply using two threads, assuming you have at least a dual-core processor.
Another thing to look at is how you are accessing your data, if you are doing large array processing, to make sure that you access the data sequentially as it is stored in memory, avoiding flushing your L1/L2 cache on every iteration (seen this before on smaller L1 caches, the difference can be dramatic).
Again, I would look at what is inside the loop first, where most of the gains (>99%) will be, rather than the outer loop plumbing.
But then again, if your loop code is I/O bound, then any time spent on optimization is wasted.
A: There is some relevant information among the answers to another stackoverflow question, how cache memory works. I found the paper by Ulrich Drepper referred to in this answer especially useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I dictate the destination folder of a clickOnce application? How do I dictate the destination folder of a clickOnce application?
A: As a further to the above, this is a security feature. Allowing websites to install software to arbitrary locations on someone's harddrive somewhat automatically is a bad idea.
A: This is not possible with ClickOnce. ClickOnce applications are always installed in the Apps subdirectory of local application data.
A: A ClickOnce application installs directly into the user profile directory. There is no way you can install it to Program Files. To customize your application's install, use InstallAware Admin http://www.installaware.com/studio-admin-features.htm
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Can you recommend Performance Analysis tools for PHP? Can anyone recommend some useful performance analysis tools for PHP scripts? Anything that could help me find problematic or unusually slow blocks of code, details about execution time, etc. would be really helpful. I know there are tools out there, but I'm wondering what people recommend as being the most useful and well-designed.
A: Xdebug's profiler is good. But then I'm a KDE user... I think you could install KCacheGrind in other window managers, though.
A: Try webgrind. It presents Xdebug's cachegrind profiling output in an easy-to-read, browser-based format. I'm on a Mac and it has made profiling a breeze.
A: The Xdebug profiler is pretty good, but its cachegrind output can be a little difficult to interpret.
Zend Platform (expensive if you're not running the developers license) will alert you to issue code and bad use of resources.
A: I'm personally a fan of XHProf, one of Facebook's open source initiatives. This, along with the XDebug dumps, is crucial in determining performance bottlenecks. Plus, the UI (and particularly, the weighted image-based callgraph functionality) rocks.
I have used this across the Gawker Media network in the past (again, along with XDebug-style dumps), to help focus our performance-geared development efforts.
A: See SD's PHP Profiler. It measures frequency of execution across your entire application and provides a graphical hotspot display of highly-executed code. No changes to the PHP server are necessary to install it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How to test web-apps on mobile for free without wlan? I tried GNUBOX, which uses Bluetooth to connect to my computer and then to the Internet. It's very painful to set up (more so under Windows than under Linux, but still painful; it works one time in three).
I own a Nokia 6630, so there is no WLAN support. Is there any emulator? I'd need to know things like max width, max height, etc., and usability in general. Any hints?
A: This may sound silly but you could consider getting a mobile tariff with unlimited data. In most European countries these are now available and are not too expensive.
I don't believe you would get a solid experience from any emulator.
A: Don't know if you're limiting yourself to the 6630 or not... if not, Opera Mini has a free simulator.
If you find yourself needing to do more testing on multiple devices, there's always Device Anywhere...but it definitely does not meet your requirement for free.
A: Can you use a data cable and IP pass through?
Since the 6630 is a Symbian phone, you should be able to use GNUbox to handle the connection. See http://xan.dnsalias.org/gnubox/
A: Keynote's MITE just launched a free version for content testing; it includes the 6630 along with more than 1600 other profiles and 11,000 user agent strings. You can access it via LAN and get the protocol details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there any performance difference in using .resx file and satellite assembly? Which is the best way to go forward while building a localized aspx web application: .resx files or satellite assemblies? Are there any performance comparisons available anywhere on the web?
A: Well I don't know if the comparison is valid..
ResX is a storage format for storing resources in XML. It gets compiled to a binary form (.resources) with the resgen tool before it gets embedded (if so specified) into the assembly.
A satellite assembly is a diff/delta of your main assembly's resources and your localized resources. So if you have a Strings.resx with 100 strings in MainAssembly.dll, of which 10 change in the French Canadian culture, you should have a MainAssembly.resources.dll (satellite assembly) containing just those 10 strings in the fr-CA subdirectory of the DLL folder.
When you query for a string resource using a ResourceManager, it takes the current culture into account. If it is fr-CA, it will first look for the string in the satellite assembly in the fr-CA folder; if not found, it will fall back to the resources in the DLL itself and return that. The mechanism always searches in the following order:
- [fr-CA subfolder]\MyAssembly.resources.dll
- [fr subfolder]\MyAssembly.resources.dll
- DLL itself
For more details, check out http://www.dotneti18n.com/ or the Resources chapter of 'Programming WPF'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Deploy a Desktop Shortcut to a Device running Windows CE 4.2 (VS2005) I have an application written using VS2005 in C#, targeting the Compact Framework 2.0 SP2. As part of the solution, I have a CAB deploy project which deploys to the device without a problem. What I can't do is create a shortcut to my application on the device's desktop.
I have spent several hours reading various bits of documentation (why is the search at the MSDN site so bad?), followed the instructions but no joy.
What I've done is:
*
*Add the "Windows Folder" node to the File System
*Created a folder underneath that named "Desktop"
*Created a shortcut to the application's Primary Output and placed that in the "Desktop" folder
What am I missing?
A: A bit late but maybe this will help somebody like me who searched for this issue, I solved the problem like this:
I added a custom folder on the root node (File System on Local Machine) and called it %CE3%.
That is the shortcut for \Windows\Desktop.
I added my shortcut (right click create new shortcut) and gave it a name.
That's it, then I build!
When analysing the Shortcuts section in the generated INF, it looked good.
[Shortcuts]
"ShortCutName",0,"MyApp.exe","%CE3%"
And when I deployed, it worked perfectly!
I'm using VS2008 and deploying on Windows CE 5.0.
Here is a list of shortcuts: windows CE shortcuts
A: A Windows CE shortcut (CE of any version or flavor, including WinMo) uses an ASCII-text-based file. They are in the form:
{XX}#{PATH}
Where:
*
*XX = the character count of the file's contents, including the count digits and the # sign itself
*PATH = fully qualified path to the file to run
For example:
20#\Windows\calc.exe
The other option is to use the CEShortcuts section of the INF file used to generate your CAB.
In the [DefaultInstall] section of the INF, set the CEShortcuts to a section name of your choice (something like "Shortcuts"), then add that section with your shortcut descriptor. MSDN details it here.
MSDN also has an article on creating a deployment project to generate the cab (available here), but in all honesty, the project capabilities are limited and IMO the tool just generally sucks. To this day we still use direct calls to CABWIZ (which also sucks, but it's our only choice) with hand-written INF files.
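The count-prefixed format described above can be generated mechanically. A hedged C sketch, assuming this answer's counting convention (the count covers the whole file contents, including the count digits and the '#') and a two-digit count; the function name is illustrative:

```c
#include <stdio.h>
#include <string.h>

/* Writes "NN#<target>" into out, where NN counts every character
   in the file, including the two count digits and the '#' itself. */
int make_ce_shortcut(char *out, size_t outsz, const char *target) {
    size_t total = strlen(target) + 3;  /* "NN#" prefix adds 3 chars */
    if (total < 10 || total > 99)
        return -1;                      /* two-digit sketch only */
    return snprintf(out, outsz, "%zu#%s", total, target);
}
```

With the target `\Windows\calc.exe` (17 characters) this reproduces the `20#\Windows\calc.exe` example above.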
A: I had this same problem and found a simple solution, if anyone still needs this.
Instead of adding a windows special folder, just add a custom folder named Windows, then a folder within it named Desktop, and put the shortcut there.
This worked for me.
A: The simplest way is to go into the Application Folder in the CAB project (setup), right-click on your EXE (the application exe that you want to make a shortcut for), choose "Create Shortcut to", and move that file to any folder you want, such as the Start Menu Folder.
A: Mitch: create the LNK file as before, but give it a name like "shortcut.lnkx" (note the "x" on the end). You can then add it to the "Desktop" folder in your CAB project. Once the file is added, change the TargetName property to "shortcut.lnk" and compile. I think this will work.
A: Assuming that you use Windows Mobile (5.0 or 6.x), you could use this syntax to create a file as a shortcut (*.lnk):
SHORTCUT = XX#"\Program Path..."?\Icon File Path...,-Icon Number
Where:
XX = Count of characters to be included in arguments after the Program Path to process.
Program Path = Target exe file location.
Icon File Path = If exe file does not contain an icon image or you want to use another, this is the location of the file containing the icon image.
Icon number = Index of icon image within the file, it starts with 0.
Ex: 86#"\Storage Card\Logical Sky CEdit\cedit.exe"?\Storage Card\Logical Sky CEdit\cedit.exe,-101
I have tested it, and it works fine.
A: 1. Copy the file.
2. Go to the desktop (or wherever you want to create the shortcut).
3. Right-click on an empty space and click Paste Shortcut.
That's it.
A: Actually, this is pretty simple! (Using VS 2008 and a Smart Device CAB project.)
1- In the solution explorer on VS, Go to your CAB project and right-click on it.
2- Go to View -> File System
3- Here, on the left column, right-click and "Add Special Folder"
4- Select Start Menu Folder for the shortcut folder
5- Go to the Application Folder just above
6- On the right column, right click on the Primary output and select "Create shortcut to bla bla bla"
7- Then you just have to move it to the start menu folder on the left and rename the File :)
OPTIONAL: You can even add fonts to the device using "Fonts folder" in the "Add Special Folder" menu !
Cheers
A: I assume that you're working with a "Smart Device CAB Project"? I don't think this project creates shortcuts in the correct manner for Windows CE 4.2. I think you can download an SDK from Microsoft that after installation will show you something like "Windows CE CAB Project" as a project option.
I think you can do this manually, though. A Windows CE shortcut is a file with a *.lnk extension, so if you want a shortcut labeled "My Application", create a text file on your PC named "My Application.lnk". The file's contents should be:
46#\Program Files\My Application\MyApplication.exe
(the number before the # should be the full path length) or whatever full path your application has on the Windows CE device.
In your CAB project, continue with adding the "Windows" folder and then the "Desktop" folder as you were. In the Desktop folder, right-click and add the LNK file that you created. You may have to soft-reset the device in order to have the shortcut show up after installation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Multiple repositories, single setup If I use multiple repositories, all located under a single root folder, how can I set it up so that they will use a single master svnconf/passwd file for setup but still allow me to customize each if the need arises?
This is on Windows, but I guess the process would be similar on other systems.
Update: I am using svnserve as a service.
A: svnserve isn't intended for use in large deployments. If you need more fine-grained permissions, or integration, etc., use a web server (like Apache).
A: If you are using svnserve, then the conf/svnserve.conf file in each repository has a configuration item which names the password database file. This is the password-db item in the [general] section, just set them all to point to the same file.
If you're not using svnserve, then this probably doesn't apply.
A: There are different ways of doing that depending on what exactly you want to achieve and the setup you are using.
svnserve
If you use svnserve, there is a file in each repository which is called svnserve.conf where you can define the password file in use. You could put a password file in the server root and point all your repositories there.
It would look like this:
c:\svn\passwd
c:\svn\project1\conf\svnserve.conf
c:\svn\project2\conf\svnserve.conf
Then in both svnserve.conf files a section like
[general]
password-db = c:\svn\passwd
should do the trick and will still keep different ACLs (authz file) for each repository
Apache
That is a bit more complicated but
<Location /project1>
DAV svn
SVNPath C:/Repositories/project1
AuthType Basic
AuthName "Subversion Project1 repository"
AuthUserFile c:/etc/svn-auth-file
Require valid-user
AuthzSVNAccessFile c:/etc/svn-acl
</Location>
<Location /project2>
DAV svn
SVNPath C:/Repositories/project2
AuthType Basic
AuthName "Subversion Project2 repository"
AuthUserFile c:/etc/svn-auth-file
Require valid-user
AuthzSVNAccessFile c:/etc/svn-acl
</Location>
As long as you use the same authuserfile for each SVN enabled location, you will get your result.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Easy way to embed svn revision number in page in PHP? Notice the SVN revision id in the bottom right-hand corner of this page? I'm assuming that's dynamic.
I'd love to add that to some of my sites, just as a comment in the source to make sure code pushes are going through.
NOTE: You can also assume that the working directory of the site in question is an svn checkout of the repo in question.
Edit: I'm looking for the global revision number, not the revision number of the file I'm looking at.
A: You can use the svnversion CLI utility to get a more specific look at the revision, including the highest number. You could then use regular expressions to parse this.
Subversion has no concept of a global revision; rather, you'd have to recursively look through the working copy to find the highest revision number. svnversion does that for you.
A: The keyword substitution method isn't reliable because it provides the revision of the file rather than of the whole codebase you're deploying, which I presume is what you're after.
Typically I use ANT to deploy from Subversion, and in the build script I'd use the replace task to substitute a revision token in a layout template or common header file with the revision number of the codebase that I'm deploying - see below. Although if anyone has a better method, I'd love to hear it!
<svn username="${svn.username}" password="${svn.password}" javaHL="${svn.javahl}">
<status path="${dir.build}" revisionProperty="svn.status.revision" />
</svn>
<replace dir="${dir.build}" token="%revision%" value="${svn.status.revision}">
<include name="**/*.php" />
</replace>
A: Read up on Keyword substitution. See
http://svnbook.red-bean.com/en/1.4/svn.advanced.props.special.keywords.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Why does Stack Overflow bind user actions dynamically with JavaScript? Checking the HTML source of a question, I see for instance:
<a id="comments-link-xxxxx" class="comments-link">add comment</a><noscript> JavaScript is needed to access comments.</noscript>
And then in the javascript source:
// Setup our click events..
$().ready(function() {
$("a[id^='comments-link-']").click(function() { comments.show($(this).attr("id").substr("comments-link-".length)); });
});
It seems that all the user click events are bound this way.
The downsides of this approach are obvious for people browsing the site with no JavaScript, but what are the advantages of adding events dynamically with JavaScript over declaring them directly?
A: Attaching events via the events API instead of in the mark-up is the core of unobtrusive javascript. You are welcome to read this wikipedia article for a complete overview of why unobtrusive javascripting is important.
The same way that you separate styles from mark-up you want to separate scripts from mark-up, including events.
A: I see this as one of the fundamental principals of good software development:
The separation of presentation and logic.
HTML/CSS is a presentation language essentially. Javascript is for creating logic. It is a good practice to separate any logic from your presentation if possible.
A: *
*You don't have to type the same string over and over again in the HTML (which if nothing else would increase the number of typos to debug)
*You can hand over the HTML/CSS to a designer who need not have any javascript skills
*You have programmatic control over what callbacks are called and when
*It's more elegant because it fits the conceptual separation between layout and behaviour
*It's easier to modify and refactor
On the last point, imagine if you wanted to add a "show comments" icon somewhere else in the template. It'd be very easy to bind the same callback to the icon.
A: This way you can have a light-weight page where you can handle all your actions via javascript. Instead of having to use loads of different urls and actions embedded into the page, just write one javascript function that finds the link, and hooks it up, no matter where on the page you dump that 'comment' link.
This saves loads of repeating html :)
A: The only advantage I see is a reduction of the page size, and thus a lower bandwidth need.
Edit: As I'm being downvoted, let me explain my answer a bit more.
My point is that using a link as an empty anchor is just bad practice, nothing else! Of course separation of JavaScript logic from HTML is great. Of course it's easier to refactor and debug. But here, it goes against the main principle of unobtrusive JavaScript: graceful degradation!
A good solution would be to have two possible calls for the comments: one through a REAL link that points to a simple page showing the comments, and another which returns only the comments (in JSON or a similar format) with the purpose of being called through AJAX and injected directly into the main page.
Doing so, the AJAX method should also take care of cancelling the other call, to avoid the user being redirected to the simple page. That would be unobtrusive JavaScript. Here it's just JavaScript put on a misused anchor tag.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Looking for up-to-date eclipse plugin for C# I used to work with Eclipse for nearly all the languages I need. I've been asked to work on a tool developed in C#, so I would like to stay in the same familiar environment.
I've found the Improve plugin, but its last release is from 2004 and targets .NET 1.1, which is quite old. Is there a newer plugin for programming in C# within Eclipse, or am I forced to take a look at VS?
A: I fear that there is no good Eclipse plugin. Try http://www.monodevelop.com/Main_Page or http://www.icsharpcode.net/OpenSource/SD/. And the free Visual Studio 2008 Express editions are worth a look.
A: Emonic integrates mono into the eclipse framework, that may be of use.
A: I have found the 2 articles below helpful in trying to get C# formatting in Eclipse:
*
*C# Like format.xml
*Article explaning how to change your formatting
A: Emonic is worth a look, as Jasper suggested. I've installed it in the past myself, but over a year ago. Checking the change logs on the site, it does not appear that they have had any new releases since then. The worst thing about it is that it supplies neither a debugger nor any refactoring tools. I've found that if you're going to work with Microsoft products, it's best to eat the whole hog.
You will have a learning curve moving from Eclipse to Visual Studio, but it will probably save you time working out the nuances of a product trying to build .NET code.
Visual Studio is a very nice environment to work in, the express editions are free so my suggestion would be to take the opportunity and have a look at the VS dev environment.
A: MonoDevelop just released a Windows Beta, and it's looking very good. It's a cross platform C# IDE. It may be of use.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: Should I stick with the ReportViewer control or buy a third party tool? We need reports in our web application, and there is the free ReportViewer control from Microsoft (normally used to display reports from Reporting Services). I like the fact that the ReportViewer's report format (.RDL) is a documented XML format. But the functionality is somewhat limited when the ReportViewer is used without Reporting Services.
Is there a good replacement which is based on, or compatible with, the .RDL format?
The first reports are built into the application, but later the customer should be able to create his own reports.
The application is an ASP.NET web application.
A: This depends on your requirements; I am not clear what they are.
Is your application a web application or a desktop application?
If your application is a web application, then you can use any other reporting tool. I like i-net Clear Reports. There is also a free and fully functional GUI report designer that is easy to use. Your customers can create their own reports.
If you have a desktop application, then you are limited to the language of your application.
You should also think about the platforms. Reporting Services is limited to Windows, and a SQL Server is needed. Do all your customers have a SQL Server?
A: Sorry, I had forgotten to write that the application is an ASP.NET web application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Merge several native DLLs into one DLL I've got a lot of small DLLs which I would like to make into one big(er) DLL (as suggested here). I can do so by merging my projects but I would like a less intrusive way.
Can several DLLs be merged into one unit?
A quick search found this thread that claims this is not possible. Does anyone know otherwise?
Note that I'm talking about native C++ code not .NET so ILMerge is out.
A: I don't know about merging dlls, but I'm sure you can link the intermediate object files into one dll. This would only require changes in your build script.
A: As far as I know you cannot merge DLL files directly. But it should be possible with static libraries or object files. If it is possible for you to build static libraries of your projects you can merge them using the Library Manager by extracting object files from all libraries and packaging them into a new library.
A: Also, there was a product that made a .LIB out of .DLLs. You could then link your EXE against that .LIB and get rid of the .DLLs altogether. Perhaps you could then link a .DLL out of the .LIB - I'm not sure.
The product is here:
http://www.binary-soft.com/dll2lib/dll2lib.htm
I'm not sure if it still works, or whether it's still supported or even sold. It certainly appears pricey, but it used to have a (nag-enabled) free trial period.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Do you think a software company should impose a coding style on its developers? If you think it shouldn't, explain why.
If yes, how deep should the guidelines go, in your opinion? For example, should indentation of code be included?
A: You want everybody reading and writing code in a standard way. There are two ways you can achieve this:
*
*Clone a single developer several times and make sure they all go through the same training. Hopefully they should all be able to write the same codebase.
*Give your existing developers explicit instruction on what you require. Tabs or spaces for indentation. Where braces sit. How to comment. Version-control commit guidelines.
The more you leave undefined, the higher the probability one of the developers will clash on style.
A: The company should impose that some style should be followed. What style that is and how deep the guidelines are should be decided collectively by the developer community in the company.
I'd definitely lay down guidelines on braces, indentation, naming etc...
You write code for readability and maintainability. Always assume someone else is going to read your code.
There are tools that will automagically format your code, and you can mandate that everyone uses such a tool.
If you are on .NET, look at StyleCop, FxCop and ReSharper.
A:
Do you think a software company should impose a coding style on its developers?
Not in a top-down manner. Developers in a software company should agree on a common coding style.
If yes, how deep should the guidelines be in your opinion?
They should only describe the differences from well-known conventions, trying to keep the deviation minimal. This is easy for languages like Python or Java, somewhat blurry for C/C++, and almost impossible for Perl and Ruby.
For example, should indentation of code be included?
Yes, it makes code much more readable. Keep indentation consistent in terms of spaces vs tabs and (if you opt for spaces) number of space characters. Also, agree on a margin (e.g. 76 chars or 120 chars) for long lines.
A: Yes, but within reason.
All modern IDEs offer one-keystroke code pretty-print, so the "indentation" point is quite irrelevant, in my opinion.
What is more important is to establish best practices: for example, use as few "out" or "ref" parameters as possible... In this example, you get two advantages: it improves readability and also prevents a lot of mistakes (a lot of out parameters is a code smell and should probably be refactored).
Going beyond that is, in my honest opinion, a bit "anal" and unnecessarily annoying for the devs.
Good point by Hamish Smith:
Style is quite different from best
practices. It's a shame that 'coding
standards' tend to roll the two
together. If people could keep the
style part to a minimum and
concentrate on best practices that
would probably add more value.
A: I don't believe a dev team should have style guidelines they must follow as a general rule. There are exceptions, for example the use of <> vs. "" in #include statements, but these exceptions should come from necessity.
The most common reason I hear people use to explain why style guidelines are necessary is that code written in a common style is easier to maintain than code written in individual styles. I disagree. A professional programmer isn't going to be bogged down when they see this:
for( int n = 0; n < 42; ++n ) {
    // blah
}
...when they are used to seeing this:
for(int n = 0; n < 42; ++n )
{
    // blah
}
Moreover, I have found it's actually easier to maintain code in some cases if you can identify the programmer who wrote the original code by simply recognizing their style. Go ask them why they implemented the gizmo in such a convoluted way in 10 minutes instead of spending the better part of a day figuring out the very technical reason why they did something unexpected. True, the programmer should have commented the code to explain their reasoning, but in the real world programmers often don't.
Finally, if it takes Joe 10 minutes backspacing & moving his curly braces so that Bill can spend 3 fewer seconds looking at the code, did it really save any time to make Bill do something that doesn't come natural to him?
A: I think a team (rather than a company) need to agree on a set of guidelines for reasonably consistent style. It makes it more straightforward for maintenance.
How deep? As shallow as you can agree on. The shorter and clearer it is the more likely it is that all the team members can agree to it and will abide by it.
A: I believe having a consistent codebase is important. It increases the maintainability of your code. If everyone expects the same kind of code, they can easily read and understand it.
Besides it is not much of a hassle given today's IDEs and their autoformatting capabilities.
P.S:
I have this annoying habit of putting my braces on the next line :). No one else seems to like it
A: I think that programmers should be able to adapt to the style of other programmers. If a new programmer is unable to adapt, that usually means the new programmer is too stubborn to use the style of the company. It would be nice if we could all do our own thing; however, if we all code along some basic guideline, it makes debugging and maintenance easier. This is only true if the standard is well thought out and not too restrictive.
While I don't agree with everything, this book contains an excellent starting point for standards
A: The best solution would be for IDEs to regard such formatting as meta data. For example, the opening curly brace position (current line or next line), indentation and white space around operators should be configurable without changing the source file.
A: In my opinion, standards and style guides are highly necessary, because when your code base grows you will want it to be consistent.
As a side note, that is why I love Python: it already imposes quite a lot of rules on how to structure your applications and such. Compare that with Perl, Ruby or whatever, where you have extreme freedom (which isn't that good in this case).
A: There are plenty of good reasons for standards to define the way applications are developed and the way the code should look. For example, when everyone uses the same standard, an automatic style checker can be used as part of the project's CI.
Using the same standards improves code readability and helps to reduce the tension between team members about re-factoring the same code in different ways.
Therefore:
*
*All the code developed by the particular team should follow precisely the same standard.
*All the code developed for a particular project should follow precisely the same standard.
*It is desirable that teams belonging to the same company use the same standard.
In an outsourcing company an exception could be made for a team working for a customer if the customer wants to enforce a standard of their own. In this case the team adopts the customer's standard which could be incompatible with the one used by their company.
A: Like others have mentioned, I think it needs to be by engineering or by the team--the company (i.e. business units) should not be involved in that sort of decision.
But one other thing I'd add is any rules that are implemented should be enforced by tools and not by people. Worst case scenario, IMO, is some over-zealous grammar snob (yes, we exist; I know because we can smell our own) writes some documentation outlining a set of coding guidelines which absolutely nobody actually reads or follows. They become obsolete over time, and as new people are added to the team and old people leave, they simply become stale.
Then, some conflict arises, and someone is put in the uncomfortable position of having to confront someone else about coding style--this sort of confrontation should be done by tools and not by people. In short, this method of enforcement is the least desirable, in my opinion, because it is far too easy to ignore and simply begs programmers to argue about stupid things.
A better option (again, IMO) is to have warnings thrown at compile time (or something similar), so long as your build environment supports this. It's not hard to configure this in VS.NET, but I'm unaware of other development environments that have similar features.
A: Style guidelines are extremely important, whether they're for design or development, because they speed the communication and performance of people who work collaboratively (or even alone, sequentially, as when picking up the pieces of an old project). Not having a system of convention within a company is just asking people to be as unproductive as they can. Most projects require collaboration, and even those that don't can be vulnerable to our natural desire to exercise our programming chops and keep current. Our desire to learn gets in the way of our consistency - which is a good thing in and of itself, but can drive a new employee crazy trying to learn the systems they're jumping in on.
Like any other system that's meant for good and not evil, the real power of the guide lies in the hands of its people. The developers themselves will determine what the essential and useful parts are and then, hopefully, use them.
Like the law. Or the English language.
Style guides should be as deep as they want to be - if it comes up in the brainstorm session, it should be included. It's odd how you worded the question because at the end of the day there is no way to "impose" a style guide because it's only a GUIDE.
RTFM, then glean the good stuff and get on with it.
A: Yes, I think companies should. Developers may need to get used to the coding style, but in my opinion a good programmer should be able to work with any coding style. As Midhat said: it is important to have a consistent codebase.
I think this is also important for opensource projects, there is no supervisor to tell you how to write your code but many languages have specifications on how naming and organisation of your code should be. This helps a lot when integrating opensource components into your project.
A: Sure, guidelines are good, and unless it's badly-used Hungarian notation (ha!), it'll probably improve consistency and make reading other people's code easier. The guidelines should just be guidelines though, not strict rules enforced on programmers. You could tell me where to put my braces or not to use names like temp, but what you can't do is force me to have spaces around index values in array brackets (they tried once...)
A: Yes.
Coding standards are a common way of ensuring that code within a certain organization will follow the Principle of Least Surprise: consistency in standards starting from variable naming to indentation to curly brace use.
Coders having their own styles and their own standards will only produce a code-base that is inconsistent, confusing, and frustrating to read, especially on larger projects.
A: These are the coding standards for a company I used to work for. They're well defined, and, while it took me a while to get used to them, meant that the code was readable by all of us, and uniform all the way through.
I do think coding standards are important within a company, if none are set, there are going to be clashes between developers, and issues with readability.
Having the code uniform all the way through presents a better code to the end user (so it looks as if it's written by one person - which, from an End Users point of view, it should - that person being "the company" and it also helps with readability within the team...
A: A common coding style promotes consistency and makes it easy for different people to easily understand, maintain and expand the whole code base, not only their own pieces. It also makes it easier for new people to learn the code faster. Thus, any team should have a guidelines on how the code is expected to be written.
Important guidelines include (in no particular order):
*
*whitespace and indentation
*standard comments - file, class or method headers
*naming convention - classes, interfaces, variables, namespaces, files
*code annotations
*project organization - folder structures, binaries
*standard libraries - what templates, generics, containers and so on to use
*error handling - exceptions, hresults, error codes
*threading and synchronization
Also, be wary of programmers that can't or won't adapt to the style of the team, no matter how bright they might be. If they don't play by one of the team rules, they probably won't play by other team rules as well.
A: I would agree that consistency is key. You can't rely on IDE pretty-printing to save the day, because some of your developers may not like using an IDE, and because when you're trawling through a code base of thousands of source files, it's simply not feasible to pretty print all the files when you start working on them, and perform a roll-back afterwards so your VCS doesn't try to commit back all the changes (clogging the repository with needless updates that burden everyone).
I would suggest standardizing at least the following (in decreasing order of importance):
*
*Whitespace (it's easiest if you choose a style that conforms to the automatic pretty-printing of some shared tool)
*Naming (files and folders, classes, functions, variables, ...)
*Commenting (using a format that allows automatic documentation generation)
A: My opinion:
*
*Some basic rules are good as it helps everyone to read and maintain the code
*Too many rules are bad as it stops developers innovating with clearer ways of laying out code
*Individual style can be useful to determine the history of a code file. Diff/blame tools can be used but the hint is still useful
A: Modern IDEs let you define a formatting template. If there is a corporate standard, then develop a configuration file that defines all the formatting values you care about and make sure everyone runs the formatter before they check in their code. If you want to be even more rigorous about it you could add a commit hook for your version control system to indent the code before it is accepted.
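A minimal sketch of the "enforce it with tooling, not people" idea mentioned above: a tiny check that could sit behind a version-control commit hook and reject files whose indentation mixes tabs and spaces. This is a hypothetical illustration (the function name and the rule being checked are my own choices, not from any particular tool or VCS).

```python
def indentation_violations(source):
    """Return the 1-based line numbers whose leading whitespace
    mixes tabs and spaces -- one simple, mechanically checkable
    style rule a pre-commit hook could enforce."""
    bad = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Leading whitespace is everything before the first non-space/tab char.
        indent = line[:len(line) - len(line.lstrip(" \t"))]
        if " " in indent and "\t" in indent:
            bad.append(lineno)
    return bad

# A hook would run this over each file in the commit and fail on any hit.
sample = "def f():\n\t  return 1\n"
flagged = indentation_violations(sample)
```

In a real setup the hook would read the staged file list from the VCS and exit non-zero when `flagged` is non-empty, aborting the commit before any human has to argue about whitespace.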
A: Yes in terms of using a common naming standard as well as a common layout of classes and code behind files. Everything else is open.
A: Every company should. A consistent coding style ensures higher readability and maintainability of the codebase across your whole team.
The shop I work at does not have a unified coding standard, and I can say we (as a team) vastly suffer from that. When there is no will from the individuals (as in the case of some of my colleagues), the team leader has to bang his fist on the table and impose some form of standardised coding guidelines.
A: Every language has general standards that are used by the community. You should follow those as closely as possible so that your code can be maintained by other people used to the language, but there's no need to be dictatorial about it.
The creation of an official standard is wrong because a company coding standard is usually too rigid, and unable to flow with the general community using the language.
If you're having a problem with a team member really be out there in coding style, that's an excellent thing for the group to gently suggest is not a good idea at a code review.
A: Coding standards: YES. For reasons already covered in this thread.
Styling standards: NO. What is readable to you is bewildering junk to me, and vice versa. Good commenting and code factoring have a far greater benefit. Also, GNU indent.
A: I like Ilya's answer because it incorporates the importance of readability, and the use of continuous integration as the enforcement mechanism. Hibri mentioned FxCop, and I think its use in the build process as one of the criteria for determining whether a build passes or fails would be more flexible and effective than merely documenting a standard.
A: I entirely agree that coding standards should be applied, and that it should almost always be at the team level. However there are a couple of exceptions.
If the team is writing code that is to be used by other teams (and here I mean that other teams will have to look at the source, not just use it as a library) then there are benefits to making common standards across all the teams using it. Similarly if the policy of the company is to frequently move programmers from one team to another, or is in a position where one team frequently wants to reuse code from another team then it is probably best to impose standards across the company.
A: There are two types of conventions.
Type A conventions: "please do these, it is better"
and Type B: "please drive on the right hand side of the road", while it is okay to drive on the other side, as long as everyone does the same way.
There's no such thing as a separate team. All code in a good firm is connected somehow, and style should be consistent. It's easier to get yourself used to one new style than to twenty different styles.
Also, a new developer should be able to respect the practices of the existing codebase and follow them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Why does "abcd".StartsWith("") return true? Title is the entire question. Can someone give me a reason why this happens?
A: In C#, this is how the specification tells it to react:
To be equal, value must be an empty string (Empty), a reference to this same instance, or match the beginning of this instance.
A: The first N characters of the two strings are identical. N being the length of the second string, i.e. zero.
A:
Why does “abcd”.StartsWith(“”) return true?
THE REAL ANSWER:
It has to be that way otherwise you'd have the case where
"".startsWith("") == false
"".equals("") == true
but yet
"a".startsWith("a") == true
"a".equals("a") == true
and then we'd have Y2K all over again because all the bank software that depends on equal strings starting with themselves will get our accounts mixed up and suddenly Bill Gates will have my wealth and I'd have his, and damn it! Fate just isn't that kind to me.
A: I will try to elaborate on what Jon Skeet said.
Let's say x, y and z are strings and + operator is in fact concatenation, then:
If we can split z to write z = x + y that means that z starts with x.
Because every string z can be split to z = "" + z it follows that every string starts with "".
So, because ("" + "abcd") == "abcd" it follows that "abcd" starts with ""
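The concatenation argument above can be checked directly. Python's str.startswith has the same semantics as .NET's String.StartsWith on this point, so a quick illustration (not the original C# context, just a demonstration of the same identity):

```python
z = "abcd"

# Every split point i gives z = z[:i] + z[i:], and z starts with
# the prefix z[:i]; the i == 0 case is exactly the empty-string prefix.
for i in range(len(z) + 1):
    prefix, rest = z[:i], z[i:]
    assert prefix + rest == z      # z = x + y for this split
    assert z.startswith(prefix)    # ...so z starts with x

# In particular, z = "" + z, so z starts with "".
assert "" + z == z
assert z.startswith("")
```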
A: Just for the record, String.StartsWith() internally calls the method System.Globalization.CultureInfo.IsPrefix() which makes the following check explicitly:
if (prefix.Length == 0)
{
    return true;
}
A: I'll start with a related fact that is easier to understand.
The empty set is a subset of every set.
Why? The definition of subset states that A is a subset of B if every element of A is an element of B. Conversely, A is not a subset of B if there is an element of A that is not an element of B.
Now fix a set B. I'll establish that the empty set is a subset of B. I'll do this by showing that it is not the case that the empty set is not a subset of B. If the empty set were not a subset of B then I could find an element of the empty set that is not in B. But the empty set does not have any elements and thus I can not find an element that is not in B. Therefore, it is not the case that the empty set is not a subset of B. Thus, the empty set must be a subset of B.
Any string starts with the empty string.
First, we must agree on our definition of starts with. Let s and t be strings. We say that s starts with t if s.Length >= t.Length and the first t.Length characters of t match those of s. That is, s.Length >= t.Length and for every Int32 index such that 0 <= index < t.Length, s[index] == t[index] is true. Conversely, we would say that s does not start with t if the statement
s.Length < t.Length or s.Length >= t.Length and there is an Int32 index such that 0 <= index < t.Length and s[index] != t[index]
is true. In plain English, s is shorter than t, or, if not, there is a character in t not matching the character as the same position in s.
Now fix a string s. I'll establish that s starts with the empty string. I'll do this by showing that it is not the case that s does not start with the empty string. If s does not start with the empty string then s.Length < String.Empty.Length or s.Length >= String.Empty.Length and there is an Int32 index such that 0 <= index < String.Empty.Length. But s.Length >= 0 and String.Empty.Length is equal to zero so it is impossible for s.Length < String.Empty.Length to be true. Similarly, since String.Empty.Length is equal to zero, there is no Int32 index satisfying 0 <= index < String.Empty.Length. Therefore
s.Length < String.Empty.Length or s.Length >= String.Empty.Length and there is an Int32 index such that 0 <= index < String.Empty.Length
is false. Therefore, it is not the case that s does not start with the empty string. Thus, s must start with the empty string.
The following is an implementation of starts with coded as an extension to string.
public static bool DoStartsWith(this string s, string t) {
    if (s.Length >= t.Length) {
        for (int index = 0; index < t.Length; index++) {
            if (s[index] != t[index]) {
                return false;
            }
        }
        return true;
    }
    return false;
}
The above two bolded facts are examples of vacuously true statements. They are true by virtue of the fact that the statements defining them (subset and starts with) are universal quantifications over empty universes. There are no elements in the empty set, so there can not be any elements of the empty set not in some other fixed set. There are no characters in the empty string, so there can not be a character as some position in the empty string not matching the character in the same position in some other fixed string.
A: This method compares the value parameter to the substring at the beginning of this string that is the same length as value, and returns a value that indicates whether they are equal. To be equal, value must be an empty string (Empty), a reference to this same instance, or match the beginning of this instance.
.NET String.StartsWith
true if the character sequence represented by the argument is a prefix of the character sequence represented by this string; false otherwise. Note also that true will be returned if the argument is an empty string or is equal to this String object as determined by the equals(Object) method.
Java String.startsWith
A: Yes - because it does begin with the empty string. Indeed, the empty string logically occurs between every pair of characters.
Put it this way: what definition of "starts with" could you give that would preclude this? Here's a simple definition of "starts with" that doesn't:
"x starts with y if the first y.Length characters of x match those of y."
An alternative (equivalent) definition:
"x starts with y if x.Substring(0, y.Length).Equals(y)"
A: Let's just say "abcd".StartsWith("") returns false.
if so then what does the following expression eval to, true or false:
("abcd".Substring(0,0) == "")
it turns out that evals to true, so the string does start with the empty string ;-), or put in other words, the substring of "abcd" starting at position 0 and having 0 length equals the empty string "". Pretty logical imo.
A: Because a string does indeed begin with "nothing".
A: If you think of it in regular-expression terms, it makes sense.
Every string (not just "abcd", but also "" and "sdf\nff") returns true when evaluated against the regular expression 'starts with the empty string'.
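That regex intuition can be checked mechanically. As a hedged illustration (the question is about .NET, but the empty-pattern semantics are the same in Python's re module): an empty pattern matches zero characters, which succeeds at position 0 of any string.

```python
import re

# An empty pattern matches zero characters, which succeeds at the start
# of any string -- including the empty string itself.
for s in ["abcd", "", "sdf\nff"]:
    assert re.match("", s) is not None  # 'starts with empty string' always matches
    assert s.startswith("")             # and str.startswith agrees
```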
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "89"
} |
Q: Does WWF provide BPEL integration? I've found a CTP for such integration from Microsoft,
but it seems it never was officially released and supported.
Also - Do you know a list of WWF to BPEL activities mapping?
Thanks
A: We probably cannot expect real WF-BPEL integration from Microsoft. Even the CTP that you found is not an integration effort but rather an import/export tool. David Chappell's post (and its comments) indicates that we should not expect too much here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: System.Web.HttpContextWrapper2 error in sutekishop - MVC 3 I am currently trying to use the sutekishop .NET CMS product but am getting the error
"Could not load type 'System.Web.HttpContextWrapper2'..."
Is this an MVC assembly mismatch? I have uninstalled MVC 5 and installed 3 (required according to the setup) but am still getting the issue.
Any ideas?
Rhys
Full error:
Could not load type 'System.Web.HttpContextWrapper2' from assembly 'System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.
[TypeLoadException: Could not load type 'System.Web.HttpContextWrapper2' from assembly 'System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.]
System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext) +0
System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext) +36
System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +181
System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +75
A: Have you recently upgraded to .NET 3.5 SP1? We did, and it caused this exact error message on our MVC website (we were using MVC Preview 2).
After uninstalling .NET 3.5 SP1, the problem went away.
A: Thanks, but I have (possibly foolishly) attempted to port the whole thing to MVC 5. Being an ASP.NET MVC noob, this could get very interesting...
At this stage the project builds but the tests don't. I really wish there was some decent documentation, or at least a breaking-changes document, to go by. :(
A: Yup, I have upgraded. As this is not mission-critical stuff (it's for my GF's start-up business), I have decided I want the latest, so I WANT to port this to MVC 5; I am not interested in going backwards. I guess I will have to suffer if this is the path I take. Cheers anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Best way to sandbox Apache on Linux I have Apache running on a public-facing Debian server, and am a bit worried about the security of the installation. This is a machine that hosts several free-time hobby projects, so none of us who use the machine really have the time to constantly watch for upstream patches, stay aware of security issues, etc. But I would like to keep the bad guys out, or if they get in, keep them in a sandbox.
So what's the best, easy to set up, easy to maintain solution here? Is it easy to set up a user-mode Linux sandbox on Debian? Or maybe a chroot jail? I'd like to have easy access to files inside the sandbox from the outside. This is one of those times where it becomes very clear to me that I'm a programmer, not a sysadmin. Any help would be much appreciated!
A: I second what xardias says, but recommend OpenVZ instead.
It's similar to Linux-Vserver, so you might want to compare those two when going this route.
I've setup a webserver with a proxy http server (nginx), which then delegates traffic to different OpenVZ containers (based on hostname or requested path). Inside each container you can setup Apache or any other webserver (e.g. nginx, lighttpd, ..).
This way you don't have one Apache for everything, but could create a container for any subset of services (e.g. per project).
OpenVZ containers can quite easily be updated all together (e.g. "for i in $(vzlist -H -o ctid); do vzctl exec $i apt-get upgrade; done" -- note that vzctl exec needs the container ID)
The files of the different containers are stored in the hardware node and therefore you can quite easily access them by SFTPing into the hardware node.
Apart from that you could add a public IP address to some of your containers, install SSH there and then access them directly from the container.
I've even heard from SSH proxies, so the extra public IP address might be unnecessary even in that case.
A: You could always set it up inside a virtual machine and keep an image of it, so you can re-roll it if need be. That way the server is abstracted from your actual computer, and any virus' or so forth are contained inside the virtual machine. As I said before, if you keep an image as a backup you can restore to your previous state quite easy.
A: To make sure it is said: chroot jails are rarely a good idea. Despite the intention, it is very easy to break out of them; in fact, I have seen it done by users accidentally!
A: No offense, but if you don't have time to watch for security patches, and stay aware of security issues, you should be concerned, no matter what your setup. On the other hand, the mere fact that you're thinking about these issues sets you apart from the other 99.9% of owners of such machines. You're on the right path!
A: I find it astonishing that nobody mentioned mod_chroot and suEXEC, which are the basic things you should start with, and, most likely the only things you need.
A: Chroot jails can be really insecure when you are running a complete sandbox environment. Attackers have complete access to kernel functionality and for example may mount drives to access the "host" system.
I would suggest that you use linux-vserver. You can see linux-vserver as an improved chroot jail with a complete debian installation inside. It is really fast since it is running within one single kernel, and all code is executed natively.
I personally use linux-vserver for separation of all my services, and there are only barely noticeable performance differences.
Have a look at the linux-vserver wiki for installation instructions.
regards, Dennis
A: You should use SELinux. I don't know how well it's supported on Debian; if it's not, just install CentOS 5.2 with SELinux enabled in a VM. It shouldn't be too much work, and it's much safer than any amateur chrooting, which is not as secure as most people believe.
SELinux has a reputation for being difficult to admin, but if you're just running a webserver, that shouldn't be an issue. You might just have to run a few "setsebool" commands to let httpd connect to the DB, but that's about it.
A: While all of the above are good suggestions, I also suggest adding an iptables rule to disallow unexpected outgoing network connections. Since the first thing most automated web exploits do is download the rest of their payload, preventing the network connection can slow the attacker down.
Some rules similar to these can be used (Beware, your webserver may need access to other protocols):
iptables --append OUTPUT -m owner --uid-owner apache -m state --state ESTABLISHED,RELATED --jump ACCEPT
iptables --append OUTPUT -m owner --uid-owner apache --protocol udp --destination-port 53 --jump ACCEPT
iptables --append OUTPUT -m owner --uid-owner apache --jump REJECT
A: If using Debian, debootstrap is your friend coupled with QEMU, Xen, OpenVZ, Lguest or a plethora of others.
A: Make a virtual machine. Try something like VMware or QEMU.
A: What problem are you really trying to solve? If you care about what's on that server, you need to prevent intruders from getting into it. If you care about what intruders would do with your server, you need to restrict the capabilities of the server itself.
Neither of these problems can be solved with virtualization without severely crippling the server itself. I think the real answer to your problem is this:
*
*run an OS that provides you with an easy mechanism for OS updates.
*use the vendor-supplied software.
*backup everything often.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: A universal reflections API? Some time back I was working on an algorithm that processed code and required a reflections API. We were interested in its implementation for multiple languages, but the reflections API for one language would not work for any other language. So is there anything like a "universal reflections API" that would work for all languages, or maybe for a few mainstream languages (.NET, Java, Ruby, Python)?
If there isn't any, is it possible to build such a thing that can process classes from different languages?
How would you go about having a unified way to process OO code from multiple languages?
A: I don't believe there is universal Reflection API. Any Reflection API depends on the metadata that the compiler generates for the language constructs and these can vary quite a lot from language to language, even though there is a common subset across multiple languages.
A: In .NET there is CodeDOM, which provides a way to generate a universal syntax tree and then serialize it as (C#, VB .NET etc...) code and/or compile it. Of course that's the mirror image of Reflection, but if anyone ever writes a tool to generate the AST directly from IL the functionality could start to overlap.
In any case, it's the closest thing I can think of.
A: A reflection API depends on the metadata generated for the code, so you can have a universal API for all languages on the JVM, or all languages on the CLR...but it wouldn't really be possible to make one that does Python, Java, and VB etc...
A: If you want a universal API, you need to step outside the language. See our DMS meta-tool for processing arbitrary languages, and answering arbitrary questions, including those you think of as reflection.
(OP asked for support for various languages: DMS has full parsers for C#, VB.net, Java, and Python. Ruby is not yet in the list; we're working on it.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I write a shell script to direct grep data into a date-based filename? I basically want to do this:
grep 'example.com' www_log > example.com.YYYY-MM-DD-H:i:S.log
...with of course the filename being example.com.2008-09-27-11:21:30.log
I'd then put this in crontab to run daily.
A: The verbose method:
grep 'example.com' www_log > `date +example.com.%Y-%m-%d-%H:%M:%S.log`
The terse method:
grep 'example.com' www_log > `date +example.com.%F-%T.log`
A: grep 'example.com' www_log > example.com.$(date +%F-%T).log
A: Here is another way, that I usually use:
grep 'example.com' www_log > example.com.`date +%F-%T`.log
Backticks are a form of command substitution. Another form is to use $():
$(command)
which is the same as:
`command`
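Putting the answers above together, here is a hedged sketch of a small script that could be scheduled from crontab; the sample log contents and the script path are assumptions for illustration, not taken from the question:

```shell
#!/bin/sh
# Hypothetical demo data, just so the command has something to match:
printf 'example.com hit\nother.org hit\nexample.com again\n' > www_log

# The command from the answers above: grep into a date-stamped file
grep 'example.com' www_log > "example.com.$(date +%F-%T).log"

# A daily crontab entry for this script might look like the following
# (the script path is an assumption):
#   0 0 * * * /usr/local/bin/grep-example-log.sh
```

`%F` expands to `YYYY-MM-DD` and `%T` to `HH:MM:SS`, giving the filename format asked for in the question.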
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Finding Frequency of numbers in a given group of numbers Suppose we have a vector/array in C++ and we wish to count which of these N elements has the maximum number of repetitive occurrences, and output the highest count. Which algorithm is best suited for this job?
example:
int a[] = { 2, 456, 34, 3456, 2, 435, 2, 456, 2 };
the output is 4 because 2 occurs 4 times. That is the maximum number of times 2 occurs.
A: If you have the RAM and your values are not too large, use counting sort.
A: A possible C++ implementation that makes use of STL could be:
#include <iostream>
#include <algorithm>
#include <map>

// functor
struct maxoccur
{
    int _M_val;
    int _M_rep;

    maxoccur()
    : _M_val(0),
      _M_rep(0)
    {}

    void operator()(const std::pair<int,int> &e)
    {
        std::cout << "pair: " << e.first << " " << e.second << std::endl;
        if ( _M_rep < e.second ) {
            _M_val = e.first;
            _M_rep = e.second;
        }
    }
};

int
main(int argc, char *argv[])
{
    int a[] = {2,456,34,3456,2,435,2,456,2};
    std::map<int,int> m;

    // load the map
    for(unsigned int i=0; i < sizeof(a)/sizeof(a[0]); i++)
        m[a[i]]++;

    // find the max occurrence...
    maxoccur ret = std::for_each(m.begin(), m.end(), maxoccur());
    std::cout << "value:" << ret._M_val << " max repetition:" << ret._M_rep << std::endl;

    return 0;
}
A: Sort the array and then do a quick pass to count each number. The algorithm has O(N*logN) complexity.
Alternatively, create a hash table, using the number as the key. Store in the hash table a counter for each element you've keyed. You'll be able to count all elements in one pass; however, the complexity of the algorithm now depends on the complexity of your hashing function.
A: Optimized for space:
Quicksort (for example) then iterate over the items, keeping track of largest count only.
At best O(N log N).
Optimized for speed:
Iterate over all elements, keeping track of the separate counts.
This algorithm will always be O(n).
A: a bit of pseudo-code:
//split string into array first
strsplit(numbers) //PHP function name to split a string into its components
i=0
while( i < count(array))
{
    if(isset(list[array[i]]))
    {
        list[array[i]]['count'] = list[array[i]]['count'] + 1
    }
    else
    {
        list[array[i]]['count'] = 1
        list[array[i]]['number'] = array[i]
    }
    i=i+1
}
usort(list) //usort is a php function that sorts an array by its value not its key, I'm assuming that you have something in c++ that does this
print list[0]['number'] //Should contain the most used number
A: The hash algorithm (build count[i] = #occurrences(i) in basically linear time) is very practical, but is theoretically not strictly O(n) because there could be hash collisions during the process.
An interesting special case of this question is the majority algorithm, where you want to find an element which is present in at least n/2 of the array entries, if any such element exists.
Here is a quick explanation, and a more detailed explanation of how to do this in linear time, without any sort of hash trickiness.
A: If the range of elements is large compared with the number of elements, I would, as others have said, just sort and scan. This is time n*log n and no additional space (maybe log n additional).
The problem with the counting sort is that, if the range of values is large, it can take more time to initialize the count array than to sort.
A: Here's my complete, tested, version, using a std::tr1::unordered_map.
I make this approximately O(n). Firstly it iterates through the n input values to insert/update the counts in the unordered_map, then it does a partial_sort_copy which is O(n). 2*O(n) ~= O(n).
#include <unordered_map>
#include <vector>
#include <algorithm>
#include <iostream>
#include <cassert>

namespace {
    // Only used in most_frequent but can't be a local class because of the member template
    struct second_greater {
        // Need to compare two (slightly) different types of pairs
        template <typename PairA, typename PairB>
        bool operator() (const PairA& a, const PairB& b) const
        { return a.second > b.second; }
    };
}

template <typename Iter>
std::pair<typename std::iterator_traits<Iter>::value_type, unsigned int>
most_frequent(Iter begin, Iter end)
{
    typedef typename std::iterator_traits<Iter>::value_type value_type;
    typedef std::pair<value_type, unsigned int> result_type;

    std::tr1::unordered_map<value_type, unsigned int> counts;
    for(; begin != end; ++begin)
        // This is safe because new entries in the map are defined to be initialized to 0 for
        // built-in numeric types - no need to initialize them first
        ++counts[*begin];

    // Only need the top one at this point (could easily expand to top-n)
    std::vector<result_type> top(1);
    std::partial_sort_copy(counts.begin(), counts.end(),
                           top.begin(), top.end(), second_greater());
    return top.front();
}

int main(int argc, char* argv[])
{
    int a[] = { 2, 456, 34, 3456, 2, 435, 2, 456, 2 };
    std::pair<int, unsigned int> m = most_frequent(a, a + (sizeof(a) / sizeof(a[0])));
    std::cout << "most common = " << m.first << " (" << m.second << " instances)" << std::endl;
    assert(m.first == 2);
    assert(m.second == 4);
    return 0;
}
A: It will be O(n), but the catch is that for a large range of values you can need another array of the same size.
for(i=0;i
mar=count[o];
index=o;
for(i=0;i
The output will then be the element index that occurs the maximum number of times in the array. Here a[] is the data array in which we need to search for the number with the maximum occurrence, and count[] holds the count of each element.
Note: this assumes we already know the range of the data in the array. For example, if the values in the array range from 1 to 100, keep a count array of 100 elements to track them, and increment the indexed value by one each time a value occurs.
A: Now, in the year 2022 we have
*
*namespace aliases
*more modern containers like std::unordered_map
*CTAD (Class Template Argument Deduction)
*range based for loops
*using statement
*the std::ranges library
*more modern algorithms
*projections
*structured bindings
With that we can now write:
#include <iostream>
#include <vector>
#include <unordered_map>
#include <algorithm>

namespace rng = std::ranges;

int main() {
    // Demo data
    std::vector data{ 2, 456, 34, 3456, 2, 435, 2, 456, 2 };

    // Count values
    using Counter = std::unordered_map<decltype (data)::value_type, std::size_t>;
    Counter counter{};
    for (const auto& d : data) counter[d]++;

    // Get max
    const auto& [value, count] = *rng::max_element(counter, {}, &Counter::value_type::second);

    // Show output
    std::cout << '\n' << value << " found " << count << " times\n";
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Free set of forms, icons, styles, etc for web-based admin interfaces Is there any free set of forms, icons, styles, images, etc for building web-based admin interfaces? If yes, which is the best?
A: Tango icons is a set of free-as-in-speech icons. It is covered under CC Attribution Share Alike 2.5 license, so it should be viable for commercial work.
A: Icon Archive is one of the very best sources for any kind of icons.
A: A particularly common choice is Silk. It's a very comprehensive free set. There's also the Silk Companion 1.
A: http://www.iconspedia.com/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Existing Standard Style and Coding standard documents The following have been proposed for an upcoming C++ project.
*
*C++ Coding Standards, by Sutter and Alexandrescu
*JSF Air Vehicle C++ coding standards
*The Elements of C++ Style
*Effective C++ 3rd Edition, by Scott Meyers
Are there other choices? Or is the list above what should be used on a C++ project?
Some related links
*
*Do you think a software company should impose developers a coding-style?
*https://stackoverflow.com/questions/66268/what-is-the-best-cc-coding-style-closed
A: C++ Coding Standards: 101 Rules, Guidelines, and Best Practices (C++ In-Depth Series)
by Herb Sutter and, Andrei Alexandrescu.
A: I really think it does not matter which one you adopt, as long as everyone goes along with it. Sometimes that can be hard, as it seems that some styles don't agree with people's tastes, i.e. it comes down to arguing about whether prefixing all member variables with m_ is pretty or not.
I have been using and modifying the Geosoft standards for a while; these are for C++. There are some others at the what-is-your-favorite-coding-guidelines-checklist thread.
A: Hmm, strange question. Just choose standard which most of the team members are familiar with. Make some kind of poll for your team. Not sure how SO can help here :)
A: High Integrity C++ Coding Standard Manual - Version 2.4
A: Try this one, it's the one that NASA's Goddard space flight centre uses.
http://software.gsfc.nasa.gov/AssetsApproved/PA2.4.1.3.pdf
A: I've written a coding standard for a major British company and was very conscious of putting in reasons why I selected certain things rather than just making it a bunch of "Thou shalt" pronouncements. (-:
As a quick way out, I'd suggest mandating:
*
*Scott Meyers's Effective C++ 3rd Edition (Amazon link) - if you can find a copy of the 1st edition of this book then buy it for the overview of OO design which was removed from later editions. )-:
*Scott Meyers's book Effective STL (Amazon link) - you must use STL to use C++ efficiently.
*Steve McConnell's book Code Complete 2 (Amazon link) - not C++ specific but full of great insights.
A: Coding standards are only meaningful if they help you write code. So they just need to keep your code consistent (ie if someone puts m_ for variable members and someone doesn't, it can take longer to grok the code than if they all used the same style).
That's all they (should) do, so just pick up your existing code and make sure your team codes to the same style.
I like to think of it like cartoons. If you become a cartoonist on the Simpsons, you have to draw eyes in the official way or everything looks pants, but if you go to Family Guy, you have to draw them differently. Neither way is wrong.
Too many standards are about meaningless restrictions, written by people who don't code themselves (or consider themselves too good to keep to them). Others try to teach you how to code. Neither has a place in a good standard; a standard should just make it easier for you to look at some code and understand what it's doing.
eg. my standards include rules for naming directories - you will always have your code in a directory called the same name as the project, and all binaries go in the bin subdir, with all config files in the same place, and a changelog, etc. All simple stuff, but I guarantee I'll never find a project called something different with its binaries in the root directory where I don't know what changes were made to it. Simple, easy stuff that makes a huge difference.
A: I agree with Harald Scheirich, it is most important to have the team agree on what the rules should be rather than just picking a set that has been recommended by outsiders.
My personal recommendation would be to read Code Complete, 2nd Edition by Steve McConnell which describes (among a whole lot of other useful stuff) several common coding standards and offers commentary on each. This might help your team in setting up your own standards.
A: Lockheed Martin's JSF Air Vehicle C++ Coding Standards is an interesting read but it's a bit overkill unless you're working in fields where a bug can kill people. It's still a very important example to look at from a computer ethics standpoint about an example of how to program with safety and correctness being top priority.
For general-purpose C++ coding, I'd personally recommend C++ Coding Standards by Herb Sutter. From the very beginning, it emphasizes what not to standardize (things relating to style or preference rather than practices that promote safety, correctness, efficiency). It's also among the easiest reads in your list giving very brief but concise arguments for each standard, making it something easy to show your co-workers.
A: Poco C++ Coding Style Guide.pdf
A: *
*Apple Coding Guidelines for Cocoa
*GNU Coding Standards
*Bell Labs' Recommended C Style and Coding Standards
*reserved names from POSIX / ISO
*Facebook HHVM Coding Conventions
*GeoSoft C++ Programming Style Guidelines
*LLVM Coding Standards
*C Style and Coding Standards for SunOS
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Creating, opening and printing a word file from C++ I have three related questions.
I want to create a Word file with a given name from C++. I want to be able to send a print command for this file, so that the file is printed without the user having to open the document and do it manually, and I want to be able to open the document. Opening the document should just open Word, which then opens the file.
A: As posted in an answer to a similar question, I advise you to look at this page, where the author explains the approach he took to generate Word documents on a server, without MS Word being available, without automation or third-party libraries.
A: When you have the file and just want to print it, then look at this entry at Raymond Chen's blog. You can use the verb "print" for printing.
See the shellexecute msdn entry for details.
A: You can use Office Automation for this task. You can find answers to frequently asked questions about Office Automation with C++ at http://support.microsoft.com/kb/196776 and http://support.microsoft.com/kb/238972 .
Keep in mind that to do Office Automation with C++, you need to understand how to use COM.
Here are some examples of how to perform various tasks in word usign C++:
*
*http://support.microsoft.com/kb/220911/en-us
*http://support.microsoft.com/kb/238393/en-us
*http://support.microsoft.com/kb/238611/en-us
Most of these samples show how to do it using MFC, but the concepts of using COM to manipulate Word are the same, even if you use ATL or COM directly.
A: You can use automation to open MS Word (in background or foreground) and then send the needed commands.
A good starting place is the knowledge base article Office Automation Using Visual C++
Some C source code is available in How To Use Visual C++ to Access DocumentProperties with Automation (the title says C++, but it is plain C)
A: I have no experience from integrating with Microsoft Office, but I guess there are some APIs around that you can use for this.
However, if what you want to accomplish is a rudimentary way of printing formatted output and exporting it to a file that can be handled in Word, you might want to look into the RTF format. The format is quite simple to learn, and is supported by the RtfTextBox (or is it RichTextBox?), which also has some printing capabilities. The rtf format is the same format as is used by Windows Wordpad (write.exe).
This also has the benefit of not depending on MS Office in order to work.
A: My solution to this is to use the following command:
start /min winword <filename> /q /n /f /mFilePrint /mFileExit
This allows the user to specify a printer, no. of copies, etc.
Replace <filename> with the filename. It must be enclosed in double-quotation marks if it contains spaces. (e.g. file.rtf, "A File.docx")
It can be placed within a system call as in:
system("start /min winword <filename> /q /n /f /mFilePrint /mFileExit");
Here is a C++ header file with functions that handle this so you don't have to remember all of the switches if you use it frequently:
/*winword.h
 *Includes functions to print Word files more easily
 */
#ifndef WINWORD_H_
#define WINWORD_H_

#include <string.h>
#include <stdlib.h>

//Opens Word minimized, shows the user a dialog box to allow them to
//select the printer, number of copies, etc., and then closes Word
void wordprint(char* filename){
    char* command = new char[64 + strlen(filename)];
    strcpy(command, "start /min winword \"");
    strcat(command, filename);
    strcat(command, "\" /q /n /f /mFilePrint /mFileExit");
    system(command);
    delete[] command;
}

//Opens the document in Word
void wordopen(char* filename){
    char* command = new char[64 + strlen(filename)];
    strcpy(command, "start /max winword \"");
    strcat(command, filename);
    strcat(command, "\" /q /n");
    system(command);
    delete[] command;
}

//Opens a copy of the document in Word so the user can save a copy
//without seeing or modifying the original
void wordduplicate(char* filename){
    char* command = new char[64 + strlen(filename)];
    strcpy(command, "start /max winword \"");
    strcat(command, filename);
    strcat(command, "\" /q /n /f");
    system(command);
    delete[] command;
}

#endif
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: What continuous integration tool is best for a C++ project? CruiseControl and Hudson are two popular continuous integration systems. Although both systems are able to do the automated continuous builds nicely, it just seems a lot easier to create a batch or bash build script, then use Windows scheduler or cron to schedule builds.
Are there better continuous integration systems available for C++ projects? Or is just using a script and a scheduler the simpler way?
A: I've been using Buildbot for the Spring RTS engine project successfully.
A: We've been using Dart Dashboard. It's open source but driven by KitWare. They've since changed the name to CDash which I presume is still as capable. We're doing several kinds of testing including nightly and continuous integration across 10 different platforms in both debug and release mode as well as running 1000s of application tests and reporting the results there too.
A: You can also try JetBrains' TeamCity. It's a commercial product but it gives a free license for up to 20 build configurations.
A: We have been using CruiseControl for CI on a C++ project. While it is the only thing we use ant for, the ant build script for CruiseControl just starts our normal build script, so it is very simple and we haven't really needed to update it in a long while. Therefore the fact that CruiseControl is Java based has not really been an issue at all for us.
The main benefits of using something like cruise control are
*
*A nice web page showing build status
*Email after each build or after failed builds
*Automatically build after a commit to the source control system
*A firefox plugin to monitor the build status
*Shows the output for any build errors.
*Shows what files have changed since the last build (good for seeing which developer broke the build)
Of course you can write a script yourself which does all of this, but why do all that work? In the long run the extra initial cost of setting up CruiseControl (or something similar) is probably much less than the cost of maintaining and updating a custom CI build script.
If all you need is to launch a daily build and a simple script started by cron is sufficient for your needs then by all means do that. However, one of the advantages of CI is that you get a build status report after every check in. Writing a script to do that takes more work and CruiseControl already does it.
A: We use Hudson for CI and SonarQube for code metrics. They're integrated, and Hudson has a handful of plugins that no cronjob can beat.
One great plugin is CI Game, which keeps a score of who breaks builds and who commits without breaking them. Hudson has plugins to work with VMware, Selenium, SVN, CVS, Git. It has RSS syndication, which can help you automate everything else even further.
Hudson is great...
A: One of the nice features of a continuous integration (CI) tool is that a build gets triggered every time something is checked into your source control repository.
If that is not something you need then you are probably better of using the windows task scheduler or cron jobs.
In addition CI tools also come with a (web) dashboard and advanced logging capabilities.
Your question seems to me more "why would I use a CI tool" than "which CI tool should I use". If a batch script serves your needs, by all means use that. (Re)creating a build environment only becomes easier if you do not need a CI tool as an additional component. If you want source-control-triggered builds, a dashboard, storage of old build results or other logging, use a CI tool and avoid developing all such functions in batch or shell scripts.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Text difference algorithm I need an algorithm that can compare two text files, highlight their differences and (even better!) compute their difference in a meaningful way (i.e. two similar files should have a higher similarity score than two dissimilar files, with the word "similar" defined in the normal terms). It sounds easy to implement, but it's not.
The implementation can be in C# or Python.
Thanks.
A: If you need a finer granularity than lines, you can use Levenshtein distance. Levenshtein distance is a straightforward measure of how similar two texts are.
You can also use it to extract the edit logs and get a very fine-grained diff, similar to that on the edit history pages of SO.
Be warned though that Levenshtein distance can be quite CPU- and memory-intensive to calculate, so using difflib, as Douglas Leder suggested, is most likely going to be faster.
Cf. also this answer.
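For reference, here is a minimal dynamic-programming sketch of Levenshtein distance; this is just an illustration of the classic algorithm (and of why the naive version is memory-hungry: it builds a full (m+1)×(n+1) table), not anyone's production code.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic DP: d[i][j] = edit distance between a[:i] and b[:j]."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # i deletions to turn a[:i] into ""
    for j in range(n + 1):
        d[0][j] = j          # j insertions to turn "" into b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

print(levenshtein("kitten", "sitting"))  # → 3
```

The table can be reduced to two rows if only the distance (not the edit script) is needed.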
A: I can recommend to take a look at Neil Fraser's code and articles:
google-diff-match-patch
Currently available in Java,
JavaScript, C++ and Python. Regardless
of language, each library features the
same API and the same functionality.
All versions also have comprehensive
test harnesses.
Neil Fraser: Diff Strategies - for theory and implementation notes
A: There are a number of distance metrics. As paradoja mentioned, there is the Levenshtein distance, but there are also NYSIIS and Soundex. In terms of Python implementations, I have used py-editdist and ADVAS before. Both are nice in the sense that you get a single number back as a score. Check out ADVAS first; it implements a bunch of algorithms.
A: In Python, there is difflib, as also others have suggested.
difflib offers the SequenceMatcher class, which can be used to give you a similarity ratio. Example function:
def text_compare(text1, text2, isjunk=None):
    return difflib.SequenceMatcher(isjunk, text1, text2).ratio()
A: Look at difflib. (Python)
That will calculate the diffs in various formats. You could then use the size of the context diff as a measure of how different two documents are?
A: As stated, use difflib. Once you have the diffed output, you may find the Levenshtein distance of the different strings as to give a "value" of how different they are.
A: My current understanding is that the best solution to the Shortest Edit Script (SES) problem is Myers "middle-snake" method with the Hirschberg linear space refinement.
The Myers algorithm is described in:
E. Myers, ``An O(ND) Difference
Algorithm and Its Variations,''
Algorithmica 1, 2 (1986), 251-266.
The GNU diff utility uses the Myers algorithm.
The "similarity score" you speak of is called the "edit distance" in the literature which is the number of inserts or deletes necessary to transform one sequence into the other.
Note that a number of people have cited the Levenshtein distance algorithm but that is, albeit easy to implement, not the optimal solution as it is inefficient (requires the use of a possibly huge n*m matrix) and does not provide the "edit script" which is the sequence of edits that could be used to transform one sequence into the other and vice versa.
For a good Myers / Hirschberg implementation look at:
http://www.ioplex.com/~miallen/libmba/dl/src/diff.c
The particular library that it is contained within is no longer maintained but to my knowledge the diff.c module itself is still correct.
Mike
A: Bazaar contains an alternative difference algorithm, called patience diff (there's more info in the comments on that page) which is claimed to be better than the traditional diff algorithm. The file 'patiencediff.py' in the bazaar distribution is a simple command line front end.
A: You could use the solution to the Longest Common Subsequence (LCS) problem. See also the discussion about possible ways to optimize this solution.
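As a hedged sketch of the LCS idea (an illustration of the standard textbook algorithm, not the linked solution): the longer the common subsequence relative to the input lengths, the more similar the two texts.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence, O(len(a)*len(b)) DP."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                d[i][j] = d[i - 1][j - 1] + 1
            else:
                d[i][j] = max(d[i - 1][j], d[i][j - 1])
    return d[m][n]

def lcs_similarity(a: str, b: str) -> float:
    """A crude similarity score in [0, 1] derived from the LCS length."""
    return 2 * lcs_length(a, b) / (len(a) + len(b)) if (a or b) else 1.0

print(lcs_length("ABCBDAB", "BDCABA"))  # → 4
```

Run on whole lines instead of characters for a line-oriented diff, which is what classic diff tools effectively do.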
A: One method I've employed for a different functionality, to calculate how much data was new in a modified file, could perhaps work for you as well.
I have a diff/patch implementation C# that allows me to take two files, presumably old and new version of the same file, and calculate the "difference", but not in the usual sense of the word. Basically I calculate a set of operations that I can perform on the old version to update it to have the same contents as the new version.
To use this for the functionality initially described (to see how much data was new), I simply ran through the operations: every operation that copied from the old file verbatim had a 0-factor, and every operation that inserted new text (distributed as part of the patch, since it didn't occur in the old file) had a 1-factor. All characters were given this factor, which gave me basically a long list of 0's and 1's.
All I then had to do was to tally up the 0's and 1's. In your case, with my implementation, a low number of 1's compared to 0's would mean the files are very similar.
This implementation would also handle cases where the modified file had inserted copies from the old file out of order, or even duplicates (ie. you copy a part from the start of the file and paste it near the bottom), since they would both be copies of the same original part from the old file.
I experimented with weighting copies, so that the first copy counted as 0 and subsequent copies of the same characters had progressively higher factors, in order to give a copy/paste operation some "new-factor", but I never finished it as the project was scrapped.
If you're interested, my diff/patch code is available from my Subversion repository.
A: Take a look at the Fuzzy module. It has fast (written in C) based algorithms for soundex, NYSIIS and double-metaphone.
A good introduction can be found at: http://www.informit.com/articles/article.aspx?p=1848528
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
} |
Q: What is the best practice when having a terminate clause...see explanation :) The case goes as following:
You have a Boolean property called FullScreenEnabled. You enter some method, and the code within this method is executed iff FullScreenEnabled is true. Which of the 2 approaches below do you use in your everyday programming:
private bool FullScreenEnabled { get; set; }

// Check if FullScreenEnabled is false and return;
private void Case1()
{
    if (FullScreenEnabled == false)
    {
        return;
    }

    // code to be executed goes here!
}

// Surround the code by an if statement.
private void Case2()
{
    if (FullScreenEnabled)
    {
        // code to be executed goes here!
    }
}
A: I generally prefer the first version (bailing at the start of the method). It leads to less nesting, which slightly increases readability. Should you decide you don't need to check for the condition in the future, it's also easier to remove the if condition in the first version, especially if you have several such checks. Plus, it could be easily be written in a single line: if (!FullScreenEnabled) return;
A: private void MyMethod(bool arg){
    if(!arg)
        return;
    //do stuff
}
(for voting)
A: It depends upon the length and complexity of the method. If the method is short then nesting inside the if is no problem (and may be clearer). If the method has lots of nested statements then the immediate return will reduce amount of necessary indentation and might improve readability slightly.
A: The first approach (using a guard clause) scales better as more if cases are added. The problem with the second approach is that adding more if statements will result in code that exhibits the arrow anti-pattern, where code starts to be indented like an arrow.
There is a very good article that explains this in more detail below:
Coding Horror: Flattening Arrow Code
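The arrow shape and its flattened equivalent are language-agnostic; here is a minimal sketch in Python (the save helpers and document dicts are hypothetical, not from the linked article) showing that the two forms behave identically:

```python
def save_nested(doc):
    # "Arrow" shape: every extra precondition adds one nesting level
    if doc is not None:
        if doc["dirty"]:
            if doc["path"]:
                return "written to " + doc["path"]
    return "skipped"

def save_flat(doc):
    # Guard clauses: bail out early; the happy path stays unindented
    if doc is None:
        return "skipped"
    if not doc["dirty"]:
        return "skipped"
    if not doc["path"]:
        return "skipped"
    return "written to " + doc["path"]
```

With three conditions the difference is already visible; with six or seven, the flat version is far easier to scan.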
A: Neither approach was posted. You should read the editing help to make sure code actually appears.
A: It's about whether one should test positive or negative, i.e. return at the method beginning if the condition is not met, or executing the code only when the condition is met. In a short method, I'd go with the latter case, in a long method, I'd go with the former. I'd always go with the early exit when there are several conditions to test. It doesn't really make a difference though.
Note, however, that your sample contains a comparison with false. You should write !FullScreenEnabled instead; it makes the code more readable.
A: if (!FullScreenEnabled)
throw new InvalidOperationException("Must be in fullscreen mode to do foo.");
My two cents, for what it's worth.
A: Either way works the same.
However, if you run code coverage metrics for your unit tests, the if (!FullScreenEnabled) return; will count as a separate block and you'll have to create a unit test to cover it to get to 100%.
Granted, even with the other approach you might want to have a unit test that verifies you are not executing your code when FullScreenEnabled is false. But if you cheat and don't write it, you still get 100%. :-)
A: I would go with the first approach; I find it more readable than the second.
Basically I think that:
*
*if (FullScreenEnabled == false) is more readable than if (FullScreenEnabled).
*if you keep putting your "sanity" checks at the start of the method, the method gets a nice structure that is very easy to understand.
I do think, however, that there is a fine line here that should not be crossed: putting return statements in too many places in the middle of a method does tend to make it more complex.
A: private void MyMethod(bool arg) {
    if (arg) {
        // do stuff
    }
}
(for voting)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What are some recommendations for porting C++ code to the MacOS? For an upcoming project, there are plans to port the existing C++ code that compiles on Windows and Linux to Mac OS X (Leopard). The software is a command-line application, but a GUI front end might be planned. Mac OS X uses the g++ compiler. Since it shares a compiler with Linux, it does not seem like there would be any issues, but there always are.
Are there any recommendations, or problems to watch out for, during the port?
A: Does your app have a GUI, and which one (native / Qt / Gtk+)?
If not, the issues to watch out for (compared to Linux) are mainly in the dynamic linkage area. OS X uses '-dylib' and '-bundle' and in fact has two kinds of dynamic libraries (runtime loadable and the normal ones). Linux has only one kind (-shared), and is looser on this anyways.
If your app has a GUI, you'll need to recode the whole thing in Cocoa, using Objective-C. Meaning you'll be into a new language as well. Some people (like MS) have used Carbon (C++ API), but it's being phased out. I wouldn't recommend that to new projects.
Your best luck is using Qt or Gtk+. A native Gtk+ port has been (re)announced just some days ago (see Imendio).
p.s.
OS X does of course run X11 binaries, too, but pushing that to any of your customers might be a hard road. They're used to Aqua interface, and productive with that. Consider X11 only a very short term solution.
p.p.s. The number of open source add-on libs shipping with OS X is limited, and their versions might lag behind. Whereas on Linux you can easily require users to have 'libxxx v.y.y' installed, on OS X there are multiple packaging approaches (fink, macports), and for a commercial tool the required libraries are expected to be contained in the application. OS X offers 'application bundles' and 'frameworks' for this (local copies, making the application self-sufficient). Linux does not have such a concept. This will have a great effect on your build system as well; maybe you'll want to try something like SCons for all platforms?
A: You don't need to recode everything to Objective-C. There's a strange bastardization of C++ and Objective-C that will allow you to use C++ code from Objective-C, so you could intelligently split up the model code in C++ and view/controller code in Objective-C. To use Objective-C, just suffix your source code files with .mm instead of .m, and you can intermix most legal C++ and Objective-C syntax even on the same line.
A: We haven't been porting to MacOS, but we have been porting to various Unixes from Linux. The main work area has been the installation and startup systems, so expect to put most of the work there (given that your existing code is already portable between Linux and Windows).
A: Macintosh (macosx) is essentially FreeBSD under the hood (though it has been tweaked). There are some differences in systems programming between Linux and FreeBSD. Primarily these differences exist between the various system calls... so how much this affects you will be determined by what your application is doing and what kind of OS system calls you are making during execution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Deploying database for ASP .NET Website I was just browsing through questions on Stack Overflow, and I've come across a post that suggests deploying a database by simply copying the mdf file into the App_Data folder and modifying your connection string.
I know that some people do create an mdf file in App_Data during development, but for going live, is this really a viable way and a good practice to deploy your database?
What I usually do during development is write my own SQL script file to build the database and run it on my local SQL server. When the site is about to go live, I run the script on the target server and point my site at that database. To be honest, I've never utilized the App_Data folder for storing the database; I usually use App_Code to store my data access layer logic.
Am I doing the wrong thing here? Is it really a good practice to utilize the App_Data folder to store your database? One problem that I can see with this method is that deployment is going to be slow: transferring an mdf file across the internet will definitely be much slower than running my SQL script files. Looking forward to hearing your thoughts and experience on these matters. Cheers.
A: I personally prefer your method of deploying databases, and I see one big advantage with it: usually the web and database servers should not be one machine (security, maintainability, ...), and utilizing the App_Data folder to hold your database seems a little naive.
A: Another drawback is that an MDF file deployment will only work the first time. It will be inadequate once you are live and need to keep the data.
A: The app_data deployment scenario is useful for websites where you don't have a distinct database server (a lot of the free/less expensive hosts enable you to do this).
This is similar in theory to the old method of using access as the database for small classic ASP websites.
A: In my experience, the best practice involves using some sort of build process (perhaps automated or a script like you have mentioned). Then you divide your database into the following:
1) One-time events - typically you would create the database once; you might insert data once on initial deployment; schema changes - these are handled outside of the build process.
2) Stored procedures, user functions and other repeatable events - handled by the build process.
The build process then deploys the database constructs when it deploys the code.
Stored procedures, etc are written such that if they exist, they are first deleted and then created again.
These scripts are then stored in your code repository - not the mdf. Now you can do versioning, history, control your build process, etc. As to folder - I would create a separate project folder for the core database constructs - these are different from "code" and should be treated as such. But they probably should be part of your solution.
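That "drop if exists, then recreate" convention for repeatable objects looks roughly like this in T-SQL (procedure and table names here are hypothetical, just to show the shape of such a script):

```sql
-- Re-runnable: drop the procedure if it already exists, then recreate it.
IF OBJECT_ID('dbo.usp_GetOrders', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetOrders;
GO
CREATE PROCEDURE dbo.usp_GetOrders @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END
GO
```

Because the script is idempotent, the build process can run it on every deployment without caring whether the procedure already exists.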
A: Actually, in my opinion DB-file deployment can be useful for site tools, mass deployments, etc. Deploying a premade application that includes the db can be a simple process. If successive versions get updated, all the installer needs to know is whether it is an upgrade or not. If it is, run the script that updates the local DB; otherwise copy in the new database. This can simplify certain limited scenarios.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to properly setup a multi module SpringMVC application created by appfuse in Eclipse? I am trying to set up a multi-module Spring MVC AppFuse application in Eclipse, but I'm facing lots of errors after I import the project into Eclipse. Can anyone please help me with a step-by-step guideline showing the ideal way to set up such an application in Eclipse?
A: Have you tried using maven eclipse plugin?
You can just go to the project root folder (the one that contains your pom.xml file) and run "mvn eclipse:eclipse" from the command line.
This will build project files for each of your modules and also create inter-dependencies. You can just treat your multi-module project like a workspace with multiple projects.
Most of the errors that appear at load time, after mvn eclipse:eclipse, are because of the repository classpath variable. You can configure this by running "mvn -Declipse.workspace=<path-to-your-workspace> eclipse:add-maven-repo".
More info on maven eclipse plugin at http://maven.apache.org/plugins/maven-eclipse-plugin/.
Regards,
Bogdan
A: What are the errors? The most common problem I can think of is library errors, in which case you have to edit the build path.
A: From what I recall for multi-module projects, eclipse just does not handle this well. It would help to see the specific errors you're getting, and then to start from there.
A: I know this problem, it's not related to Appfuse but rather to Maven itself. I suggest to follow these steps:
*
*set up your project;
*create every directory/source needed: mainly java sources and resources files, for both application AND unit tests;
*make sure everything compiles and all tests pass. for this you can check with
mvn package
*use the eclipse maven plugin:
mvn eclipse:eclipse
This way the project will include everything that's needed in the classpath: Spring and Log4j configuration files, resources, etc.
If you already executed the mvn eclipse:eclipse command, just delete the project from Eclipse (DON'T DELETE FILES!), remove Eclipse-specific files from directory, re-run mvn eclipse:eclipse
My 2 cents
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Use of Cron jobs in Google App Engine How can I use Cron on Google App Engine?
A: Google has officially enabled cron in App Engine; for more details check:
Cron for Python:
http://code.google.com/appengine/docs/python/config/cron.html
Cron for Java:
http://code.google.com/appengine/docs/java/config/cron.html
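For reference, the cron configuration lives in a cron.yaml file alongside app.yaml; a minimal example looks like this (the /tasks/cleanup handler URL is hypothetical, something your app would serve):

```yaml
cron:
- description: nightly cleanup job
  url: /tasks/cleanup
  schedule: every 24 hours
```

App Engine then issues an HTTP GET to the given URL on that schedule, so the "job" is just an ordinary request handler in your app.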
A: This page lists how to achieve Cron-like functionality on Google Appengine.
A: You can now run scheduled tasks with google appengine
http://code.google.com/appengine/docs/python/config/cron.html#Securing_URLs_for_Cron
A: Read comments to issue 6 for possible workarounds.
A: Use http://schedulerservice.appspot.com/ or http://code.google.com/p/gaeutilities/wiki/Cron
A: See this post - maybe soon we'll get sort-of-cron functionality in GAE.
A: According to the official AppEngine blog's public roadmap update two weeks ago, scheduled tasks (as well as background task queues) are due for the release in the moderately near future ("in the next six months," as of Feb. 6, 2009).
A: Here is an example http://cron-tab.appspot.com/, the python source code is available in the related project at code.google.com/p/cron-tab/
A: Here's a tutorial on using a Google App Engine Cron Job to send an automated email:
http://www.fishbonecloud.com/2010/11/automated-email-using-google-app-engine.html
A: You can use GAE Cron jobs, but be careful as they don't support HTTPS!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Tool to compare large numbers of PDF files? I need to compare a large number of PDF files by their optical content. Because the PDF files were created on different platforms and with different versions of the software, there are structural differences. For example:
*
*the chunking of text can be different
*the write order can be different
*positions can differ by a few pixels
It should compare the content like a human would, not the internal structure. I want to test for regressions between different versions of the PDF generator that we use.
A: Because there was no such tool available, we have written one. You can download the i-net PDF content comparer and use it. I hope that helps others with the same problem. If you have problems with it, or have feedback for us, you can contact our support.
A: I think your best approach would be to convert the PDFs to images at a decent resolution and then do an image compare.
To generate images from PDF you can use Adobe PDF Library or the solution suggested at Best way to convert pdf files to tiff files.
To compare the generated TIFF files I found GNU tiffcmp (on Windows, part of the GnuWin32 tiff package) and tiffinfo did a good job. Use tiffcmp -l and count the number of lines of output to find any differences. If you are happy to accept a small amount of content change (e.g. anti-aliasing differences), then use tiffinfo to count the total number of pixels, and you can then generate a percentage difference value.
By the way for anyone doing simple PDF comparison where the structure hasn't changed it is possible to use command line diff and ignore certain patterns, e.g. with GNU diff 2.7:
diff --brief -I xap: -I xapMM: -I /CreationDate -I /BaseFont -I /ID --binary --text
This still has the problem that it doesn't always catch changes in generated font names.
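The percentage-difference idea above boils down to counting differing pixels between two renderings of the same page. A rough sketch in plain Python over flattened pixel sequences (decoding the TIFF/PNG into pixel values is left to an imaging library):

```python
def diff_fraction(pixels_a, pixels_b):
    """Fraction of positions whose pixel values differ between two
    equal-sized, flattened pixel sequences (0.0 = identical pages)."""
    if len(pixels_a) != len(pixels_b):
        raise ValueError("pages must be rendered at the same size")
    differing = sum(1 for a, b in zip(pixels_a, pixels_b) if a != b)
    return differing / len(pixels_a)

# e.g. treat anything under some small threshold as anti-aliasing noise
print(diff_fraction([0, 0, 1, 1], [0, 0, 1, 9]))  # 0.25
```

In a regression suite you would compare this fraction against a tolerance (say 0.1%) instead of requiring bit-identical output.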
A: There is actually a diffpdf tool.
http://www.qtrac.eu/diffpdf.html
Its weakness is that it doesn't react well when additions make new text shift partially to a new page. For instance, if old page 4 should be compared to the end of page 5 and the beginning of page 6, you'll need to shift parameters to compare the two slices separately.
A: I've used a home-baked script which
*
*converts all pages on two PDFs to bitmaps
*colors pages of PDF 1 to red-on-white
*changes white to transparent on pages of PDF 2
*overlays each page from PDF 2 on top of the corresponding page from PDF 1
*runs conversion/coloring and overlaying in parallel on multiple cores
Software used:
*
*GhostScript for PDF-to-bitmap conversion
*ImageMagick for coloring, transparency and overlay
*inotify for synchronizing parallel processes
*any PNG-capable image viewer for reviewing the result
Pros:
*
*simple implementation
*all tools used are open source
*great for finding small differences in layout
Cons:
*
*the conversion is slow
*major differences between PDFs (e.g. pagination) result in a mess
*bitmaps are not zoomable
*only works well for black-and-white text and diagrams
*no easy-to-use GUI
I've been looking for a tool which would do the same on PDF/PostScript level.
Here's how our script invokes the utilities (note that ImageMagick uses GhostScript behind the scenes to do the PDF->PNG conversion):
$ convert -density 150x150 -fill red -opaque black +antialias 1.pdf back%02d.png
$ convert -density 150x150 -transparent white +antialias 2.pdf front%02d.png
$ composite front01.png back01.png result01.png # do this for all pairs of images
A: I don't seem to be able to see this here, so here it is: via superuser: How to compare the differences between two PDF files? (answer #229891, by @slestak), there is
https://github.com/vslavik/diff-pdf
(build steps for Ubuntu Natty can be found in get-diff-pdf.sh)
As far as I can see, it basically overlays the text/graphics of each page in the pdf(s), allowing you to easily see if there were any changes...
Cheers!
A: We've also used pdftotext (see Sklivvz's answer) to generate ASCII versions of PDFs and wdiff to compare them.
Use pdftotext's -layout switch to enhance readability and get some idea of changes in the layout.
To get nice colored output from wdiff, use this wrapper script:
#!/bin/sh
RED=$'\e'"[1;31m"
GREEN=$'\e'"[1;32m"
RESET=$'\e'"[0m"
wdiff -w$RED -x$RESET -y$GREEN -z$RESET -n $1 $2
A: Our product, PDF Comparator - http://www.premediasystems.com/pdfc.html - will do this quite elegantly and efficiently. It's also not free, and is a Mac OS X-only application.
A: Based on your needs, a convert to text solution would be the easiest and most direct. I did think the bitmap idea was pretty cool.
A: Bluebeam PDF software will do this for you.
A: You can batch compare pdf files with Tarkware Pdf Comparer. But it's not free and requires Adobe Acrobat.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: How to approach something complex? You know that particular part of your code that is essential for the project but will probably take a lot of time to get it done? Do you ever get the feeling that you'd rather work on something else (probably less important) or not code at all instead of working on that part? That beast you try so hard to avoid and use every lazy trick you know to delay its inevitable implementation?
Now, I'm probably just being lazy, but I've always had to deal with code like that. Write something I don't feel like writing (and it's worse if you're doing it for fun and not getting paid!). A large system that will take a lot of time to get it into a stage where you get back any useful results or indication of it working. How do you start coding something like that? Most people would probably suggest divide and conquer and similar architectural techniques, but this isn't about how you do it; it's about how you get yourself started on doing it. What's the very first steps you'd take?
A: Generally I love the large, complex parts. They are the parts that actually present a challenge and force me to carefully consider what I'm doing. It's all the small, tedious bits that I dislike. However, when it comes to doing anything I've been putting off, I find one simple piece of advice important: JUST DO IT!!!
Seriously, once it's started it's much easier to finish. I always find I put things off until I start them, then suddenly I find that, now that I've started, it's not as bad as I had imagined, and look, it's almost done already!
A: Divide and conquer is not just about structuring code, it also works as an approach to make a project conceptually manageable. If I have a hard time getting started on a project it almost always because it's to big and scary. By dividing into conceptually manageable pieces, it becomes less scary.
I also believe in "tracer bullets" as described by the Pragmatic Programmers. Reduce the project to the absolutely simplest possible "proof of concept" of the core parts, e.g. without UI, special cases, error handling and so on. Perhaps it's just a few core routines with associated unit tests. With this you have conquered the scary parts, and can build from the core.
Basically the trick to getting started (for me at least) is: Don't start on the whole project. Start on one small (preferably core) part and build from there. If I still have a hard time getting started, It's because the small part I decided on is still to big, so I have to divide and reduce it further.
A: I'll tell a story of a case in which this happened to me.
I wanted to implement a new frametype decision algorithm for x264 that used forward dynamic programming (the Viterbi algorithm). But it was going to be complicated, messy, ugly, and so forth. And I really didn't want to do it. I tried to pawn off the project onto Google Summer of Code, but out of some sort of terrible bad luck, the one student that we had that simply bailed on his project... was the student that chose that project.
So after two months of complaining about it and dodging it, I finally got to work on the algorithm. And here's how I did it.
First, I talked to the other developer, who apparently already had some ideas on how to do it. We talked it over and he explained it to me until I fully understood the process from an algorithmic standpoint. This is the first step of any such project: understand the algorithm behind it so well that you can pseudocode the entire thing.
Then, I talked to another coworker of mine. We went up to a whiteboard and I sketched it out until he understood it too. By explaining it to someone else, I gained understanding myself. This is the second step: explain the algorithm to someone else so well that they can pseudocode it. This is an emulation of the programming process, since programming is a form of "explaining" an algorithm to the computer.
Then, I wrote a simple Java prototype that used arbitrary fake values for the cost function and was solely being used to test the Viterbi search. I finished it, and checked it against an exhaustive search--it matched perfectly. My dynamic programming was correct. This is the third step: write the simplest possible form of the algorithm in the simplest possible environment.
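That step — write the simplest possible form of the algorithm and check it against an exhaustive search — can be sketched like this (a toy weather HMM with made-up numbers, shown in Python for brevity; it has nothing to do with x264's actual cost function):

```python
import itertools

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path for an observation sequence."""
    best = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        best = {
            s: max(
                ((p * trans_p[prev][s] * emit_p[s][o], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda t: t[0])[1]

def exhaustive(obs, states, start_p, trans_p, emit_p):
    """Brute force: score every possible state path (the correctness check)."""
    def score(path):
        p = start_p[path[0]] * emit_p[path[0]][obs[0]]
        for i in range(1, len(obs)):
            p *= trans_p[path[i - 1]][path[i]] * emit_p[path[i]][obs[i]]
        return p
    return list(max(itertools.product(states, repeat=len(obs)), key=score))

# Toy HMM: hidden weather states, observed activities
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
obs = ("walk", "shop", "clean")

fast = viterbi(obs, states, start_p, trans_p, emit_p)
brute = exhaustive(obs, states, start_p, trans_p, emit_p)
print(fast, fast == brute)
```

The exhaustive search is exponential and only feasible on tiny inputs, but that is exactly the point: it's the trivially-correct oracle you validate the dynamic program against before porting it anywhere.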
Then I ported it to C, x264's native language. It worked again. This is the fourth step: port that simple form of the algorithm to the full environment.
Then, finally, I replaced the fake cost function with the real one. After some bughunting and fixing, it worked. This is the final step: integrate the algorithm completely with the environment.
This process took barely a week, but from the perspective of me at the start of the project, it was completely daunting and I couldn't get myself to even get started--yet by breaking it down into such a step by step process, I was able to not only get it done, but get it done much faster than I expected.
And the benefits went far beyond x264; I now understand Viterbi so thoroughly that I now can explain it to others... and those others can benefit greatly from it. For example, one of the ffmpeg developers is using an adaptation of my algorithm and code to optimally solve a somewhat different problem: optimal header placement in audio files.
A: I agree with you that many large, important parts of a software are not fun to write. I usually start my development day with some smaller things, like adding a feature here, or fixing a bug there. When it's time, I start with the large part, but when I just can't see the thing any more, I do something different. That's all fine if you still get everything done on time. And remember that it may make things easier if you talk with other people about that large beast before you're doing it, while you're doing and after you're done. This will not only free your mind, but you'll also get other people's opinion from a less subjective point of view. Planning such things together also helps.
A: Funny, I'm the other way around. When I start tackling a problem, I go for the big ones first. The core of the problem is usually what interests me.
If I'm doing a project for myself, I usually couldn't be bothered to implement all the fuzzy bits around the edges, so they never get done. If I'm doing something for real, I eventually get to all the fuzzy bits, but it's not my favourite part.
A: I think there are two issues here.
First is actually getting started. As you say, that can be pretty tricky. Personally I just start on any bit, just to get something on paper/screen. It will probably be wrong and need editing but, in general, it's easier to criticise than create, even on your own work.
Then there's the actual process of solving hard problems. There a great book called "Conceptual Blockbusting" that discusses various ways of approaching problems. I learned a lot about how I approach problem solving and my blind-spots using that book. Highly recommended.
A: I try to establish a metaphor for what the system is trying to do. I always feel more comfortable when I can describe the behaviour in terms of a metaphor.
I then approach it from a test-driven development point of view, i.e. start describing what the system needs to do by setting up the tests that will verify correct behaviour.
HTH.
cheers,
Rob
A: The most difficult part of the project is going from having nothing done to the first line. Just putting anything down on paper gets this process started and it's amazing how quickly the rest can flow from here.
I'm a fan of the "divide and conquer"-type approach myself.
When there's a particular large task in a system hanging over me, I leave the computer, take a pen and paper, and break the task out into all of it's logical components and work-flow.
Then take each of these tasks, and break the down into the most basic functions / calls required.
I can then put in the stub methods that I reckon I'll need, and flesh them out one by one. At this point each of these "sub-tasks" is no larger than the smaller development tasks orbiting the same project, so it feels like a much less onerous task hanging over me.
A: I usually tackle these kinds of problems at home using a pen and a piece of paper. I imagine the algorithm and/or logical flow and then sketch (on the paper!) the classes and method stubs, and when I get in front of a/the computer it's much easier... Probably it's just me..
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Relational Database Design Patterns? Design patterns are usually related to object oriented design.
Are there design patterns for creating and programming relational databases?
Many problems surely must have reusable solutions.
Examples would include patterns for table design, stored procedures, triggers, etc...
Is there an online repository of such patterns, similar to martinfowler.com?
Examples of problems that patterns could solve:
*
*Storing hierarchical data (e.g. single table with type vs multiple tables with 1:1 key and differences...)
*Storing data with variable structure (e.g. generic columns vs xml vs delimited column...)
*Denormalize data (how to do it with minimal impact, etc...)
A: AskTom is probably the single most helpful resource on best practices on Oracle DBs. (I usually just type "asktom" as the first word of a google query on a particular topic)
I don't think it's really appropriate to speak of design patterns with relational databases. Relational databases are already the application of a "design pattern" to a problem (the problem being "how to represent, store and work with data while maintaining its integrity", and the design being the relational model). Other approaches (generally considered obsolete) are the navigational and hierarchical models (and I'm sure many others exist).
Having said that, you might consider "Data Warehousing" as a somewhat separate "pattern" or approach in database design. In particular, you might be interested in reading about the Star schema.
A: Design patterns aren't trivially reusable solutions.
Design patterns are reusable, by definition. They're patterns you detect in other good solutions.
A pattern is not trivially reusable, however; you can implement your own design following the pattern.
Relational design patterns include things like:
*
*One-to-Many relationships (master-detail, parent-child) relationships using a foreign key.
*Many-to-Many relationships with a bridge table.
*Optional one-to-one relationships managed with NULLs in the FK column.
*Star-Schema: Dimension and Fact, OLAP design.
*Fully normalized OLTP design.
*Multiple indexed search columns in a dimension.
*"Lookup table" that contains PK, description and code value(s) used by one or more applications. Why have code? I don't know, but when they have to be used, this is a way to manage the codes.
*Uni-table. [Some call this an anti-pattern; it's a pattern, sometimes it's bad, sometimes it's good.] This is a table with lots of pre-joined stuff that violates second and third normal form.
*Array table. This is a table that violates first normal form by having an array or sequence of values in the columns.
*Mixed-use database. This is a database normalized for transaction processing but with lots of extra indexes for reporting and analysis. It's an anti-pattern -- don't do this. People do it anyway, so it's still a pattern.
Most folks who design databases can easily rattle off a half-dozen "It's another one of those"; these are design patterns that they use on a regular basis.
And this doesn't include administrative and operational patterns of use and management.
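For instance, the many-to-many relationship with a bridge table from the list above usually reduces to this shape (the student/course names are hypothetical, just to illustrate the pattern):

```sql
CREATE TABLE student (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);

CREATE TABLE course (
    course_id INTEGER PRIMARY KEY,
    title     TEXT NOT NULL
);

-- Bridge table: one row per (student, course) pair; the composite
-- primary key prevents duplicate enrollments.
CREATE TABLE enrollment (
    student_id INTEGER NOT NULL REFERENCES student,
    course_id  INTEGER NOT NULL REFERENCES course,
    PRIMARY KEY (student_id, course_id)
);
```

The same three-table shape recurs wherever two entities relate many-to-many, which is exactly what makes it a recognizable "It's another one of those".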
A: After many years of database development I can say there are some no-gos and some questions that you should answer before you begin:
questions:
*
*Do you want to use another DBMS in the future? If yes, then do not use the special SQL features of your current DBMS; move that logic into your application.
Do not use:
*
*white spaces in table names and column names
*non-ASCII characters in table and column names
*binding to a specific lower case or upper case; and never use two tables or columns whose names differ only in case
*SQL keywords such as "FROM", "BETWEEN", "DELETE", etc. as table or column names
recommendations:
*
*Use NVARCHAR or an equivalent for Unicode support; then you have no problems with code pages.
*Give every column a unique name. This makes it easier to select columns in joins. It is very difficult if every table has a column "ID" or "Name" or "Description". Use XyzID and AbcID.
*Use a resource bundle or equivalent for complex SQL expressions. It makes it easier to switch to another DBMS.
*Do not depend heavily on any particular data type; another DBMS might not have it. For example, Oracle does not have a SMALLINT, only NUMBER.
I hope this is a good starting point.
A: There's a book in Martin Fowler's Signature Series called Refactoring Databases. That provides a list of techniques for refactoring databases. I can't say I've heard a list of database patterns so much.
I would also highly recommend David C. Hay's Data Model Patterns and the follow up A Metadata Map which builds on the first and is far more ambitious and intriguing. The Preface alone is enlightening.
Also a great place to look for some pre-canned database models is Len Silverston's Data Model Resource Book Series Volume 1 contains universally applicable data models (employees, accounts, shipping, purchases, etc), Volume 2 contains industry specific data models (accounting, healthcare, etc), Volume 3 provides data model patterns.
Finally, while this book is ostensibly about UML and Object Modelling, Peter Coad's Modeling in Color With UML provides an "archetype" driven process of entity modeling starting from the premise that there are 4 core archetypes of any object/data model
A: Your question is a bit vague, but I suppose UPSERT could be considered a design pattern. For languages that don't implement MERGE, a number of alternatives exist to solve the problem (if a suitable row exists, UPDATE; else INSERT).
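As a sketch of the UPSERT idea in an engine without MERGE, here is SQLite's INSERT ... ON CONFLICT form (available since SQLite 3.24; the accounts table and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")

def upsert_account(conn, account_id, name):
    # SQLite spells UPSERT as INSERT ... ON CONFLICT; engines with MERGE
    # express the same "update if present, else insert" logic that way.
    conn.execute(
        "INSERT INTO accounts (id, name) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
        (account_id, name),
    )

upsert_account(conn, 42, "Alice")
upsert_account(conn, 42, "Bob")  # updates the existing row instead of failing
print(conn.execute("SELECT name FROM accounts WHERE id = 42").fetchone()[0])
```

The single statement avoids the race condition you get from a separate SELECT-then-INSERT/UPDATE pair.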
A: Depends what you mean by a pattern. If you're thinking Person/Company/Transaction/Product and such, then yes - there are a lot of generic database schemas already available.
If you're thinking Factory, Singleton... then no - you don't need any of these as they're too low level for DB programming.
If you're thinking database object naming, then it's under the category of conventions, not design per se.
BTW, S.Lott, one-to-many and many-to-many relationships aren't "patterns". They're the basic building blocks of the relational model.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "293"
} |
Q: Combining Lisp and PHP code in the same application At the moment I use PHP for almost everything I develop for the Web but its linguistic limitations are starting to annoy me. However, as I developed some practices and maintain some PHP libraries that help me a lot, I don't feel I'd be ready to just switch to LISP throwing away all my PHP output. It could even be impossible on the servers where all I have access to is a regular LAMP hosting account.
Ergo, my questions are:
Can Lisp code be combined with PHP code? Are there solutions for running Lisp and PHP side by side, interfaces for their interoperability, or perhaps an implementation of one in the other? Or is it a mutually exclusive choice?
A: It's not a mutually-exclusive choice, you can run both on one system, in the same way that perl and php (for example) are run side-by-side on many systems.
There's a good post here on a similar topic, which suggests using sockets to communicate between both languages -
If you want to go the PHP<->Lisp route the easiest thing to do would be to make PHP communicate with your Lisp process using sockets.
http://php.net/manual/en/ref.sockets.php
http://www.sbcl.org/manual/Networking.html
This approach does still leave you with the potential added complexity and maintenance issues you get from having 2 languages in your project, but might be a fit for your particular use case.
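The shape of that socket bridge can be sketched as follows — in Python rather than PHP or Lisp, just to have one runnable, self-contained illustration. The server thread stands in for the long-running Lisp process, the client code stands in for the PHP page, and the line-oriented protocol is an assumption (any framing both sides agree on works):

```python
import socket
import threading

def lisp_side(server_sock):
    # Stand-in for the Lisp process: accept one connection, read one
    # request line, pretend to evaluate it, and send a reply line back.
    conn, _ = server_sock.accept()
    with conn:
        request = conn.makefile().readline().strip()
        conn.sendall(f"echo:{request}\n".encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lisp_side, args=(server,), daemon=True).start()

# Stand-in for the PHP side: connect, send a request line, read the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"(+ 1 2)\n")
reply = client.makefile().readline().strip()
client.close()
print(reply)
```

In the real setup the Lisp process would stay resident (keeping its compiled state warm) while each PHP request opens a short-lived connection to it.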
A: You would most likely not want to write code in PHP once you've started developing in Lisp. (New capitalization since circa 80s, by the way)
Hunchentoot is a popular server that gives you the basics in terms of connecting dispatchers to requests. There's a series of screencasts on writing a reddit clone at LispCast.com
UnCommon Web (sounds like a pun on Peter Norvig's description of Scheme in PAIP) is, from what I can tell, a more complete framework based heavily on the idea of continuations, much like Seaside for Smalltalk.
Weblocks is yet another continuation-based web framework that looks nice. The author (at defmacro.org) writes good articles, and I like what I've seen in the sample app for Weblocks.
A: I'm in pretty much the same situation at the moment. I have a lot of PHP under my belt, but the language really begins to annoy me. I have experimented with different languages, but have tinkered a lot with Scheme recently, and I'm contemplating a gradual switch. Maybe we should start a user group or something?
Coming from a PHP background, you're probably used to working with a thin level of abstraction over the HTTP protocol. I think this is something that actually makes it easier to transition into a new language, especially one where there isn't a single dominant framework. In this way, PHP and the Lisp community have some similarities (but so do other fragmented open-source platforms, such as Python and Perl).
One problem with Lisp is that there are so many implementations to choose from. I have decided that I prefer Scheme over Common Lisp, so that narrows it down a bit. After some experimenting, I'm now focusing on PLT Scheme, which seems to be the Scheme with the most momentum. Amongst other things, it has a web server bundled with it.
A: Unfortunately I can't think of any libraries for that. However, I was in a similar situation: I had PHP code but got tired of trying to code logic (game logic) in PHP, so I used PHP sockets to connect to Lua. Now I program all the server-side logic in Lua and use PHP (in a LAMP setting) as my frontend server.
Hope that helps.
A: I recommend you to give a try at Weblocks.
A: For normal web page development in PHP, I've made a lib called xilla_tags.
Overview here
There are also some good techniques on Jacob Hanssen's bitchware site.
A: Check out an interesting solution to combine Lisp and PHP
https://github.com/lisphp/lisphp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: gu and pc commands of WinDbg Could anyone show me a sample of how to use these two commands in WinDbg? I read the documentation in debugger.chm, but was confused. I searched Google and MSDN, but did not find an easy-to-learn sample.
A: Think in terms of function levels as per the following pseudo-code:
1 x = 0
2 y = 0
3 call 8
4 x = 5
5 y = 7
6 call 8
7 halt
8 print x
9 print y
10 call 12
11 return
12 print x + y
13 print x * y
14 return
The commands are basically "run until an event occurs". The event causes the debugger to break (stop execution and await your command).
The "gu" command runs until it goes up to the next highest stack level. If you're on lines 8, 9, 10 or 11, you'll end up at 4 or 7 depending on which "call 8" has called that code. If you're on lines 12, 13 or 14, you'll break at 11.
Think of this as running until you've moved up the stack. Note that if you first go down, you'll have to come up twice.
The "pc" command runs until the next call so, if you're on line 1, it will break at line 3. This is sort of opposite to "gu" since it halts when you're trying to go down a stack level.
A: There is something wrong with the WinDbg output -- "Can't continue completed step". Here is the related output from WinDbg and the source code; any ideas?
(I set a breakpoint in main, stepped twice with the p command and then used the gc command -- that's when the error happens.)
(204.18c0): Break instruction exception - code 80000003 (first chance)
ntdll!DbgBreakPoint:
0000000077ef2aa0 cc int 3
0:000> bp main
0:000> g
Breakpoint 0 hit
TestDebug1!main:
0000000140001090 4057 push rdi
0:000> p
TestDebug1!main+0x1a:
00000001400010aa c7442424c8000000 mov dword ptr [rsp+24h],0C8h ss:000000000012feb4=cccccccc
0:000> p
TestDebug1!main+0x22:
00000001`400010b2 488d442424 lea rax,[rsp+24h]
0:000> gc
Can't continue completed step
#include <iostream>
using namespace std;
int foo()
{
int b = 300;
return b;
}
int goo()
{
int a = 400;
return a;
}
int main()
{
int a = 200;
int* b = &a;
foo();
a = 400;
goo();
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What techniques do you use to maximise code reuse? Some years ago I was told about a study into code reuse. Apparently it was found that, on average, programmers have a 7 minute window when searching for code to reuse. If they don't find code that suits their needs within that window they'll write their own.
This was presented in the context of needing to carefully manage your code for reuse to ensure that you can find what you need within the window.
How do you (individuals and organisations) manage your source to make it easier to reuse?
Do you specifically maintain a reuse library? And if so, how do you index it to maximise your hit rate?
A: Refactor mercilessly and hope for the best.
Update (4 years later and hopefully wiser)
*
*Like S.Lott's comment says: Pay attention to Naming. Spread the word to all 'committers' in the team. Good names make things searchable and thereby reduces duplication.
*Have ONE way of doing something and keep it ACCESSIBLE and SEARCHABLE.
*Write code for the average (L.C.D.) programmer.. Don't be clever where simple would suffice. (That includes design-pattern shoe-horning compulsion and related disorders)
*Adopt a common set of conventions, styles, guidelines, standards, etc. early. Ensure buy-in and thereby compliance within the team. (This means everyone uses tabs (or spaces)!) It doesn't matter what you choose - the goal is that the code should look consistent.
*Have a gatekeeper (respected by the team), who eyeballs all check-ins for red-flags.
*Write code test-first / outside-in. This usually ensures that your code is usable by multiple clients. (See GOOS's bullet on context-independence)
A: *
*Have a framework that is actively supported.
*Know the existing code base / make the other developers know the code base. If your group/company is large enough, have somebody who knows the code base and can be asked for guidance.
*Document, document, document. Undocumented code is useless for re-use because it takes way too long to understand its inner workings.
*Have good interfaces. Easy types, easy structures or classes. The more complicated something is, the less it will be used in another project.
*Optimize and debug reusable code. Developers who experience bugs in other people's code for the n-th time will begin to code already existing code anew.
A: A complex question:
*
*Some parts of the code can be generalized as libraries or APIs. We have a common library which is kept up to date with solutions to common problems. Typically: validation, caching, data access classes, logging, etc...
*Some parts are application specific. They cannot be generalized easily. We convert them in HowTos and give internal presentations. Code is also recycled by use of an easily browsable SCM (in our case SVN).
*We also have tools that generate code that on one hand cannot be recycled, but on the other is always similar (think calling a stored procedure).
*Pair programming is also a useful way to spread knowledge of existing solutions. We use that when possible or appropriate.
*The last technique is tuition. Each coder has a tutor to refer to. Since the tutors are few, there is a lot of sharing between them and this knowledge can be diffused in a top down manner.
A: My initial response: try using TDD if you aren't already.
I think the use of TDD is a great way to keep code coupling low, amongst other benefits. While that doesn't inherently prevent the same behaviour from being implemented twice, it makes it a great deal easier when you DO identify an area in which you could remove duplication.
Another benefit: TDD has a step for removing duplication (refactoring) as part of the cycle.
Also, tests form part of your code's documentation, thus making it easier to identify duplicated behaviour.
A: Organization is key. If namespaces and IntelliSense are available, the right function can be narrowed down and eventually found. Even if people don't find exactly what they want, they may find something close or related. Code that is just mashed together in one huge group might be easy to browse, but people are never going to find the method they want fast enough.
Consistency is also critical, both with naming and location. If you decide to change your style at some point during the project, go back and change everything to fit that style. It can easily be a very long and boring process, but it is better than trying to have to use an inconsistent library.
A: Profile the whole application and start refactoring from the heaviest sections of code (80% of time is spent in 20% of the most-used code).
Use a profiling tool which can identify memory leaks, repeated calls, lengthy calls, unfreed memory, undisposed resources, etc.
As a rule, new code always uses best practices.
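The 80/20 profiling step above can be sketched with Python's built-in profiler (Python chosen only to have a runnable example; the answer names no particular tool, and note that cProfile measures time, not leaks or unfreed resources, which need a dedicated tool):

```python
import cProfile
import io
import pstats

def hot():
    # Stand-in for the 20% of code taking 80% of the time.
    return sum(i * i for i in range(100000))

def cold():
    # Stand-in for rarely-hit code that isn't worth refactoring first.
    return 1

profiler = cProfile.Profile()
profiler.enable()
hot()
cold()
profiler.disable()

# Sort by cumulative time so the "heavy" candidates float to the top.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
report = out.getvalue()
print("hot" in report)
```

The functions at the top of the cumulative-time listing are the ones worth refactoring first.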
A:
How do you (individuals and organisations) manage your source to make
it easier to reuse? Do you specifically maintain a reuse library? And
if so, how do you index it to maximise your hit rate?
I don't and I have an admittedly controversial opinion here but I find the idea of maximizing code reuse counter-productive (I'm interpreting "maximizing" as prioritizing it above all other things rather than considering it as having both pros and cons to balance in consideration). I prefer instead to allow a healthy amount of redundant efforts in teams to slide in favor of decoupling and isolating each developer's module better. First before everyone starts disagreeing with me left and right, I think we can agree upon some things:
*
*Reusing buggy code that will have you spending hours debugging other people's code is not desirable.
*Reusing code that balances such a wide range of disparate needs that it barely satisfies your own needs and requires you to jump through a lot of hoops to ultimately get an awkward and inefficient solution is undesirable.
*Reusing code that constantly requires design changes and goes through deprecations of a kind which will require you to rewrite the code using it every 6 months is undesirable if you could have just implemented the solution yourself in half an hour in ways that don't need design changes in the future since it's only serving your precise needs.
*A codebase filled with alien-looking code is undesirable over one that uses more of the language and standard library in idiomatic and familiar ways, even if that requires slightly more code.
*Developers stepping all over each other's toes because both of them want to make incompatible changes to the same design while fighting and arguing and making changes which cause bugs in each other's implementations is undesirable.
*Throwing a boatload of dependencies to immature designs that have not proven themselves (not had thorough test coverage, not had the time to really soundproof the design and make sure it effectively satisfies user-end needs without requiring further design changes) is undesirable.
*Having to include/import/link a boatload of libraries and classes/functions with the most complex build script to write something simple is undesirable.
*Most of all, reusing code in a way that costs far more time both in the short and long run than not reusing it is undesirable.
Hopefully we can at least agree on these points. The problem I've found with maximizing code reuse from overly-enthusiastic colleagues was that it often led to one or more of the problems above. It wasn't directly the enthusiasm for code reuse that was the fundamental problem but that the priorities were skewed towards code reuse rather than test coverage, soundproofing designs, making sure things are mature enough before we reuse them like crazy, and so forth.
Naturally if all the code we reused worked beautifully, had thorough test coverage, was proven to fulfill the needs of everything using it in ways that were far more productive than not reusing it, and didn't have to go through any design changes for years on end, I would be ecstatic about code reuse. But my experiences often found things falling far short of this ideal in ways where code reuse was arguably becoming the maintenance problem rather than the solution.
How do you (individuals and organisations) manage your source to make
it easier to reuse? Do you specifically maintain a reuse library? And
if so, how do you index it to maximise your hit rate?
So, again I don't seek to "maximize" code reuse among proprietary code written internally among the team. I seek to make sure the team doesn't spend enormous amount of time on redundant effort, but I let things slide quite a bit if both the physicists and the rendering guys both implement their own axis-aligned bounding box class, e.g. It's not necessarily even that redundant, since the physicist might use min/max representations which are more efficient for his purpose while the rendering developer might use center/half-size representations. I do try to make sure we reuse as much of the standard library when possible, because that's code reuse of a kind that is practically guaranteed to be solid, ultra well-tested, and not require further design changes (other teams are spending a boatload of their time to make sure of that).
Instead I shift the focus on testing. A module duplicating a little bit of code here and there is totally fine if you ask me if it's working beautifully in ways that make users really happy, has thorough test coverage, and doesn't warrant endless changes. We accept such duplication all the time when we use third party libraries who likely duplicate some code that we also have in our internal codebase. It's not an issue there when the redundancy doesn't lead to redundant maintenance efforts.
So I suggest just relaxing the idea of maximizing code reuse just a little bit. But if you want to make it easy as possible to reuse the really solid, well-tested, non-trivial code, then I've found it far more helpful to organize very singular-purpose libraries, like a "math" library, an "image" processing library, etc. -- instead of trying to fuse them all together into something like "core" or "common". The latter types tend to tempt developers to throw in all kinds of eclectic utility functions which barely benefit the team using them, and mostly it tends to become messy in ways where it starts to become difficult to find anything of interest.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Best way to validate user input with JDBC? Is there a built-in way to escape user input in Java using JDBC? Something similar to PHP's mysql_real_escape_string() function. What's the best way to validate input?
A: If you mean how do you make sure user input can't be used in SQL injection attacks, the way to do this (and the way all SQL should be written in JDBC) is using Prepared Statements. JDBC will automatically handle any necessary escaping.
http://java.sun.com/docs/books/tutorial/jdbc/basics/prepared.html
A: Just to add to the suggestion by @skaffman, PreparedStatements solve the issue for the majority of applications. However, there are some applications where (parts of) SQL statements (as opposed to just parameter values) are taken from user input (for example, a URL parameter containing the ORDER BY clause). Just make sure you sanitize those as well or, better yet, avoid such designs if possible.
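The principle behind PreparedStatement — let the driver bind user input instead of concatenating it into the SQL string — can be shown with a tiny runnable sketch. Python's DB-API and SQLite are used here purely as stand-ins (an assumption; the mechanism is the same placeholder binding JDBC performs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"

# Unsafe (never do this): string concatenation would turn the WHERE
# clause into  name = 'x' OR '1'='1'  -- true for every row.
# rows = conn.execute("SELECT * FROM users WHERE name = '" + malicious + "'")

# Safe: the placeholder binds the whole string as one literal value,
# so the quote characters inside it are just data, not SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(rows))
```

The injection attempt matches nothing, because the bound value is compared as a literal string rather than parsed as SQL.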
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What are the various "Build action" settings in Visual Studio project properties and what do they do? For the most part, you just take whatever Visual Studio sets it for you as a default... I'm referring to the BuildAction property for each file selected in Solution Explorer. There are a number of options and it's difficult to know what each one of them will do.
A: In VS2008, the doc entry that seems the most useful is:
Windows Presentation Foundation Building a WPF Application (WPF)
ms-help://MS.VSCC.v90/MS.MSDNQTR.v90.en/wpf_conceptual/html/a58696fd-bdad-4b55-9759-136dfdf8b91c.htm
ApplicationDefinition
Identifies the XAML markup file that contains the application definition (a XAML markup file whose root element is Application). ApplicationDefinition is mandatory when Install is true and OutputType is winexe. A WPF application and, consequently, an MSBuild project can only have one ApplicationDefinition.
Page
Identifies a XAML markup file whose content is converted to a binary format and compiled into an assembly. Page items are typically implemented in conjunction with a code-behind class.
The most common Page items are XAML files whose top-level elements are one of the following:
Window (System.Windows.Window).
Page (System.Windows.Controls.Page).
PageFunction (System.Windows.Navigation.PageFunction<T>).
ResourceDictionary (System.Windows.ResourceDictionary).
FlowDocument (System.Windows.Documents.FlowDocument).
UserControl (System.Windows.Controls.UserControl).
Resource
Identifies a resource file that is compiled into an application assembly. As mentioned earlier, UICulture processes Resource items.
Content
Identifies a content file that is distributed with an application. Metadata that describes the content file is compiled into the application (using AssemblyAssociatedContentFileAttribute).
A: Build actions control the MSBuild Item Type of each item in a project. For example, a Compile build action on MyClass.cs means something like this in your .csproj file:
<ItemGroup>
<Compile Include="MyClass.cs" />
</ItemGroup>
Item types have specific meanings by convention. Common types are Compile, Content and None, but there are others.
For example, .editorconfig files have their own item type (EditorConfigFiles). Files may be passed to analyzers by marking them with "C# analyzer additional file" (AdditionalFiles).
You can also define your own item types in your project for your own purposes via AvailableItemName. For example:
<ItemGroup>
<AvailableItemName Include="Foo" />
</ItemGroup>
Doing this will make "Foo" appear in the list of available build actions in the Properties window.
A: How about this page from Microsoft Connect (explaining the DesignData and DesignDataWithDesignTimeCreatableTypes) types. Quoting:
The following describes the two Build Actions for Sample Data files.
Sample data .xaml files must be assigned one of the below Build Actions:
DesignData: Sample data types will be created as faux types. Use this Build Action when the sample data types are not creatable or have read-only properties that you want to defined sample data values for.
DesignDataWithDesignTimeCreatableTypes: Sample data types will be created using the types defined in the sample data file. Use this Build Action when the sample data types are creatable using their default empty constructor.
Not so incredibly exhaustive, but it at least gives a hint. This MSDN walkthrough also gives some ideas. I don't know whether these Build Actions are applicable for non-Silverlight projects also.
A: *
*Fakes: Part of the Microsoft Fakes (Unit Test Isolation) Framework. Not available on all Visual Studio versions. Fakes are used to support unit testing in your project, helping you isolate the code you are testing by replacing other parts of the application with stubs or shims. More here: https://msdn.microsoft.com/en-us/library/hh549175.aspx
A: Page -- Takes the specified XAML file, compiles it into BAML, and embeds that output into the managed resource stream for your assembly (specifically AssemblyName.g.resources). Additionally, if you have the appropriate attributes on the root XAML element in the file, it will create a blah.g.cs file, which will contain a partial class of the "codebehind" for that page; this basically involves a call to the BAML goop to re-hydrate the file into memory, and to set any of the member variables of your class to the now-created items (e.g. if you put x:Name="foo" on an item, you'll be able to do this.foo.Background = Purple; or similar).
ApplicationDefinition -- similar to Page, except it goes one step further and defines the entry point for your application, which will instantiate your app object and call Run on it; that in turn instantiates the type set by the StartupUri property and gives you your main window.
Also, to be clear, this question overall is infinite in its result set; anyone can define additional BuildActions just by building an MSBuild task. If you look in the %systemroot%\Microsoft.net\framework\v{version}\ directory and look at the Microsoft.Common.targets file, you should be able to decipher many more (for example, with VS Pro and above, there is a "Shadow" action that allows you to generate private accessors to help with unit testing private classes).
A: VS2010 has a property for 'Build Action', and also for 'Copy to Output Directory'. So an action of 'None' will still copy over to the build directory if the copy property is set to 'Copy if Newer' or 'Copy Always'.
So a Build Action of 'Content' should be reserved to indicate content you will access via 'Application.GetContentStream'
I used the 'Build Action' setting of 'None' and the 'Copy to Output Directory' setting of 'Copy if Newer' for some externally linked .config includes.
G.
A: From the documentation:
The BuildAction property indicates what Visual Studio does with a file when a build is executed. BuildAction can have one of several values:
None - The file is not included in the project output group and is not compiled in the build process. An example is a text file that contains documentation, such as a Readme file.
Compile - The file is compiled into the build output. This setting is used for code files.
Content - The file is not compiled, but is included in the Content output group. For example, this setting is the default value for an .htm or other kind of Web file.
Embedded Resource - This file is embedded in the main project build output as a DLL or executable. It is typically used for resource files.
A: *
*None: The file is not included in the project output group and is not compiled in the build process. An example is a text file that contains documentation, such as a Readme file.
*Compile: The file is compiled into the build output. This setting is used for code files.
*Content: Allows you to retrieve a file (in the same directory as the assembly) as a stream via Application.GetContentStream(URI). For this method to work, it needs a AssemblyAssociatedContentFile custom attribute which Visual Studio graciously adds when you mark a file as "Content"
*Embedded resource: Embeds the file in an exclusive assembly manifest resource.
*Resource (WPF only): Embeds the file in a shared (by all files in the assembly with similar setting) assembly manifest resource named AppName.g.resources.
*Page (WPF only): Used to compile a XAML file into BAML. The BAML is then embedded with the same technique as Resource (i.e. available as AppName.g.resources)
*ApplicationDefinition (WPF only): Mark the XAML/class file that defines your application. You specify the code-behind with the x:Class="Namespace.ClassName" and set the startup form/page with StartupUri="Window1.xaml"
*SplashScreen (WPF only): An image that is marked as SplashScreen is shown automatically when an WPF application loads, and then fades
*DesignData: Compiles XAML viewmodels so that usercontrols can be previewed with sample data in Visual Studio (uses mock types)
*DesignDataWithDesignTimeCreatableTypes: Compiles XAML viewmodels so that usercontrols can be previewed with sample data in Visual Studio (uses actual types)
*EntityDeploy: (Entity Framework): used to deploy the Entity Framework artifacts
*CodeAnalysisDictionary: An XML file containing custom word dictionary for spelling rules
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "898"
} |
Q: FAT-16 on modern OS I've got a project that was written in BASIC. I'm not sure of the exact reason, but the app will not function except when being run from a FAT-16 file system.
I'd rather try to set up an environment that will support this app in a modern OS (Vista/XP) instead of rewriting it.
Does anyone know how to get an app like this running in XP/Vista through some kind of code change (to the BASIC code) or FAT-16 "emulator" (if such a thing exists)?
A: You may try running it via a DOSBOX:
DOSBox emulates an Intel x86 PC, complete with sound, graphics, mouse, joystick, modem, etc., necessary for running many old MS-DOS applications that simply cannot be run on modern PCs and operating systems, such as Microsoft Windows XP, Windows Vista, Linux and FreeBSD
(from their Wiki)
I have used it for several years now. It is good, stable and quite robust. It has several third-party GUIs as well, to make your life easier.
A: Other than just keeping the app alive in a virtualized environment, as has already been suggested, the first thing to do would be to figure out why the code seems to require FAT-16.
If the app (or its runtime) is particularly evil, the FAT-16 requirement may stem from the fact that it's trying to do direct disk I/O, bypassing the operating system. If the BASIC code itself is trying to pull that particular stunt, you should see lots of CALLs, PEEKs, POKEs or even the occasional IN and OUT statement in I/O routines. Determining what the runtime is up to is more difficult: if it's from Microsoft, DOS-based and not too ancient (e.g. GWBASIC or QuickBASIC/PDS), or Windows-based it should be OK, though.
Anyway, if either the app or the runtime is attempting direct disk I/O, you lose: it will be pretty much impossible to get things to work on a modern OS without extensive, rewrite-like, code changes.
If the app is using the normal BASIC facilities for input and output (e.g. OPEN "file" FOR whatever AS #1), and the runtime is also using the normal OS interfaces, the most likely reason it only works on FAT-16 is that it gets thorougly confused by long filenames.
First thing to try would be to put the app in a directory with a short name (e.g. c:\myapp), and see what happens next. Possibly it just works: otherwise, you should be able to figure out what's going on by stepping through the BASIC code (charitably assuming a debugger is part of its runtime environment).
Without some more information about the exact interpreter/compiler your app runs in, it's impossible to answer your question in more detail. If answers so far haven't been helpful, you may want to edit your question to include this information.
A: Run an older version of Windows in a VMWare virtual machine, itself running in a modern OS.
A: Run it from a flash drive, Zip drive or whatever removable media you've got.
Windows XP formatted a 1 GB USB flash drive as FAT with no problems; no additional tools were necessary.
Besides, if the application is really evil, you thus, hopefully, constrain its evilness to the boundaries of the drive.
A: Depending on the environment, it should still be possible to create FAT-16 filesystems on a modern OS; you may just need additional tools like Acronis Disk Director or even some Linux fdisk variant.
Just keep in mind that FAT-16 is limited to a partition size of 2 GB.
But as said before: best to find out WHY. Sounds like some sort of WTF copy protection.
A: I second @eugensk00's suggestion, we have some slightly wacky instrument software which won't save to a NTFS hard disk but will save to a small memory stick (1GB)...
A: You might be able to import the code directly into VB.NET (although it would almost certainly require some modifications). You could then replace the original app's file IO calls (which are almost certainly your problem) with VB.NET calls, getting you out of the FAT16 problem.
A: Also note that some old-school programs first check to see if there is enough disk space before writing files, resulting in wacky issues if the drive is so big that it overflows the 16-bit counter it is apparently using. (If that's the case, then it'll either work, or not work, depending on the nature of the overflow).
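The suspected failure mode can be illustrated with a toy sketch (pure speculation about the old runtime's internals, shown in Python only to make the arithmetic concrete): if the free-space count is truncated to 16 bits, a big modern drive wraps around to a small number, so the "enough disk space?" check can fail or pass essentially at random.

```python
def fits_16bit(free_units, needed_units):
    # Model of a check whose counter silently truncates to 16 bits.
    return (free_units & 0xFFFF) >= needed_units

# 70000 units are really free, but the 16-bit view sees
# 70000 % 65536 == 4464, so the outcome depends on the request size.
print(fits_16bit(70000, 5000))   # appears to lack space
print(fits_16bit(70000, 4000))   # appears to have space
```

The same drive thus "has" or "lacks" space depending on where the true free-space figure happens to land modulo 65536.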
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: .gdbinit config file conflict with Xcode debugging I've a fairly huge .gdbinit (hence not copied here) in my home directory.
Now if I want to debug code inside Xcode I get this error:
Failed to load debugging library at:
/Developer/Applications/Xcode.app/Contents/PlugIns/GDBMIDebugging.xcplugin/Contents/Resources/PBGDBIntrospectionSupport.A.dylib
Custom data formatters are disabled.
Error message was:
0x1005c5 "dlopen(/Developer/Applications/Xcode.app/Contents/PlugIns/GDBMIDebugging.xcplugin/Contents/Resources/PBGDBIntrospectionSupport.A.dylib, 16): image not found"
Actually - as posted below - debugging still works in Xcode but the Data Formatters break. Moving .gdbinit out of the way OR disabling Data Formatters gets gdb in Xcode back into a working state (the first option keeps Data Formatters working), but either way it's obviously a pain.
Any idea as to which settings in gdbinit could cause this error in Xcode ?
Note from a reply: it seems (from a Google search) that this error might happen when linking against the wxWidgets library. That is not something I'm doing here.
Note: if needed I can provide a copy of my (long) .gdbinit
WIP: I will have a look in details at my .gdbinit to see if I can narrow down the issue
A: My "short" answer:
You may have noticed this already, but just in case:
First of all, even when you see that error, (assuming that you click past it and continue), then you should still be able to use 99% of the debugging features in Xcode. In other words, that error means that only a very small, specific portion of the debugger is "broken" for a given debugging session. It does not mean that debugging is completely down and/or impossible for the given program-execution.
Given the above fact, if you simply want to get rid of the error and do not care whether Custom Data Formatters are working or not, then REMOVE the check-mark next to the following menu item:
*
*Run -> Variables View -> Enable Data Formatters
My "long" answer:
The developers in my office had been experiencing this very same Xcode error for quite a while until someone discovered that some third party libraries were the cause.
In our case, this error was happening only for projects using wxWidgets. I am not meaning to imply that usage of wxWidgets is the only possible cause. I am only trying to put forth more information that might lead to the right solution for your case.
Also of interest: we (in my office) were getting this error without any use or presence of any .gdbinit file whatsoever.
It turns out that the "property" of wxWidgets that made it trigger this error was related to a "custom/generic" implementation of "dlopen." Prior to Mac OS X 10.3,
dlopen was not provided within the operating system, so apparently some libraries coded their own versions. When such libraries are being used, then apparently the dlopen call that tries to open PBGDBIntrospectionSupport.A.dylib can fail.
Read through the comments on this sourceforge patch submission to learn even further details about dlopen in 10.3 and beyond.
Also, here is another related link:
Message on the Xcode users mailing list about PBGDBIntrospectionSupport and Custom Data Formatters
A: Strange... Looking around my Mac, I see that library just fine, and it looks sane.
Have you tried using dtrace to see what Xcode and GDB are trying to do when the error happens?
A: Your error is actually a little different from the one I was getting with wxWidgets. It's been a while since I found the dlopen conflict, but I do remember that I had to use gdb itself in that specific debug session to figure out what was going on. Also, with the wxWidgets issue, the hex address was different every time.
In gdb, call "info symbol" on the hex address that's in the error message. This may give you details on precisely what's failing to load.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you design the Controller layer of the MVC design pattern in .NET? Here are my thoughts: The purpose of using MVC is separation of concerns and testability of GUI logic. The view should be able to work with different models, and the model should be able to work with different views.
I think the controller class must implement an interface for mocking/testing reasons, and the view should call controller methods through this interface. But if we do so, it becomes difficult to process view elements (textboxes, grids, etc.) in the controller. So these elements must somehow be known to the controller.
1. Do you expose these GUI elements through the interface? Or define controller classes as partial classes so that the controller can directly process the GUI elements (but what about the interface then)? What do you do to solve this problem?
2. Basically, should the controller implement more than one interface: one for the view and another for the model layer, so that the view/model can work with different models/views via controllers?
3. Should the model layer also implement an interface for mocking/testing?
How can we best achieve our testing, loose-coupling, and SoC goals? Please share your experience/thoughts.
A: I believe the view should implement an interface and be passed into the controller, usually through the constructor. This way a controller could use the fields of the view interface to get at the values of the controls the view uses. It could also use any model of your choosing. This would give you the loose coupling between the model and view that you want.
The same could be done for the model by passing a repository for your model in through the constructor. The repository methods could then return interfaces that your model classes must implement.
You could then have the controllers implement an interface and get the appropriate controller at run time using an IoC container (which would automatically supply the controller with the appropriate view and model repository). That would make your controllers easy to swap out, replacing the current view/model combination with a different one. In general, however, I find this to be unnecessary because I only ever have one controller for each type of view (view interface).
A: Exposing GUI elements through the interface can be a long task if the GUI has hundreds of them, but I can't see another option: partial classes would make the GUI logic harder to test, and would ruin the MVC basics too.
Alternatively, the elements to be processed can be passed as method parameters, which can reduce the amount of coding.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Firefox Sidebar and Document object of DOM There is a webpage loaded in the Firefox sidebar and another webpage loaded in the main document. Now, how do I access the main document object through the Firefox sidebar? An example of JavaScript code in the Firefox sidebar document accessing the main document would be helpful.
Thanks for the answers. I have to refine my question, however. The main window has some webpage loaded and the sidebar has a webpage. I want the sidebar window to know what text the user has selected on the main window when a link on the sidebar window is clicked. I know how to get the selected text from a window. It is only that the sidebar element adds complexity to the problem that I am not able to overcome.
@PConory:
I like your answer, but when I try it there is an error:
Error: Permission denied to create wrapper for object of class
UnnamedClass.
Thanks.
A: As far as I can tell, you are actually loading a web site in the sidebar (you checked 'Load this bookmark in Sidebar'). If this is the case, AND if the sidebar opens the main window page, you can use window.postMessage to communicate between them. But like I said, the sidebar page has to open the main page, because you need the window reference in order to post the message.
sidebar.js
var newwin = window.open('http://otherpage')
newwin.onload = function()
{
newwin.postMessage('Hey newwin', 'http://otherpage');
};
mainpage.js
window.addEventListener('message',function(e)
{
if(e.origin == 'http://sidebar')
alert('message from sidebar');
},false);
Using this you still do not have access to the document, but can communicate between them and script out any changes you want to do.
EDIT: Putting some more thought into it, if you opened the window from the sidebar, you would have the DOM for it: var newwin = window.open('blah'); newwin.document, making the whole postMessage thing pretty pointless.
A: Accessing the main window from a sidebar is much trickier than going back the other way.
The DOM tree you'll need to traverse, according to Mozilla's developer centre, is:
#document
window main-window
...
browser
#document
window sidebarWindow
From the above link, the following code will allow you to get at the mainWindow object:
var mWin = window.QueryInterface(Components.interfaces.nsIInterfaceRequestor)
.getInterface(Components.interfaces.nsIWebNavigation)
.QueryInterface(Components.interfaces.nsIDocShellTreeItem)
.rootTreeItem
.QueryInterface(Components.interfaces.nsIInterfaceRequestor)
.getInterface(Components.interfaces.nsIDOMWindow);
A: Are you trying to write in-page javascript that will allow communication between the sidebar page and the tab page? There are restrictions on which pages can see each other and communicate:
*
*If the pages are not on the same domain, they aren't allowed to talk (same-domain restriction).
*If one page did not open the other, there is no way for either page to acquire a reference to the other.
I'm not sure how a page can request the opening of a page in the sidebar, or vice versa. But if you can manage that, use var child = window.open(...) to get a reference one direction and window.opener to get a reference the other direction.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: DLLs using C++ on Linux I have tried to find out how to create DLLs on Linux using Google, but got very confusing information.
Is it possible to write dynamic link libraries on Linux? If not, are there other means by which I can call code in another module from several running programs?
A: As Sklivvz has said, the term you're after on linux is shared object. These are given the file extension .so.
Using gcc you can create a .so by using the -shared option.
eg.
gcc -shared -o libfoo.so foo.c
If you name your shared object lib*.so you can link against it by using the -l option on your linker (with -L adding its directory to the search path). Note that the "lib" prefix and ".so" suffix are inferred in this circumstance.
ie.
gcc -o a.out someobject.o -L. -lfoo
Alternatively you can load .so files at runtime, just as you can with .dlls, using dlopen() and dlsym().
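To make the dlopen()/dlsym() route concrete, here is a minimal sketch that opens glibc's math library at runtime and looks up cos by name. libm.so.6 stands in for a custom libfoo.so (the soname is an assumption about a glibc system), and load_cos is a made-up helper name:

```cpp
#include <dlfcn.h>
#include <cstdio>

typedef double (*cos_fn)(double);

// Open libm at runtime and return a pointer to its cos() function,
// or a null pointer if either step fails. libm.so.6 is the usual
// glibc soname; a custom library would be opened the same way.
cos_fn load_cos() {
    void* handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 0;
    }
    // dlsym returns void*, so cast to the expected signature.
    return (cos_fn)dlsym(handle, "cos");
}
```

On older glibc you link the caller with -ldl (e.g. g++ demo.cpp -ldl); since glibc 2.34 the dl* functions live in libc itself.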
A: That's because DLL is a Windows term. In Linux they are called shared libraries.
http://www.linux.org/docs/ldp/howto/Program-Library-HOWTO/shared-libraries.html
A: It is a lot if you are just getting started, but at some point you will need to refer to Ulrich Drepper’s “How To Write Shared Libraries.”
A: I guess .SO files instead of DLL files, meaning shared object, not StackOverflow :), is what you want.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Podcasts about JavaScript programming As a passionate JavaScript developer, I'd like to know if there's any quality Podcasts out there, devoted to JavaScript. (Both video and audio).
I am currently subscribing to:
*
*YUI Theater
*Audible Ajax
A: I really like the yayquery podcast (http://yayquery.com/)
There's also the official jquery podcast (http://feeds.feedburner.com/jQueryPodcast)
They are both pretty new. I like the yayquery podcast more because it feels more low-level: they talk about plugins and optimization and "hiddenhancements". The jQuery podcast is more abstract: they talk about things like the direction of jQuery and the overall goals and motivations for the library.
The yayquery is audio or video. The jquery podcast is only audio.
A: There's been a couple of good ones recently on Hanselminutes (http://www.hanselminutes.com/) where he's spoken to a blind developer about useability, an interview with John Resig (jQuery creator), and several others.
A: You can check out the OpenWeb Podcast:
http://openwebpodcast.com/
A: The JavaScript Show just launched and, I may be biased, but think it's pretty awesome.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Open source or free testing solution
*
*Is there an open source or free suite that can integrate test cases, tests, bugs and possibly the fixes (source code) together? Maintaining the requirements in this system is not a necessity (though it would be nice to enter a requirement ID for each test case in a custom field). We are a small organization and cannot afford something like HP Quality Center.
*We have coding skills (Java, SQL), so if it comes to integration of different tools using Java APIs, it should not be a problem. Similarly, a practical solution using export/import of results/data should also be fine (we could automate where possible).
*Has anybody used PushToTest TestMaker as part of such a solution.
A: This link http://www.testingfaqs.org/t-management.html has a list of test case management tools, both freeware and commercial. Maybe there is something on that list which can meet your needs.
Another possibility is something like Trac. That is not really designed as a test case management tool, but it integrates a Subversion repository browser, a bug tracker and a Wiki. If you can manage organising the test cases on the Wiki, that will let you link the Wiki pages to bugs and bugs to Subversion commits. We used to use Trac and were quite happy with it. We switched to Jira because we wanted some more bug tracking features.
I have not used it, but Trac does have a testcase management plugin listed on their web page.
A: maybe this will help?
http://cruisecontrol.sourceforge.net/
A: Use TestLink and integrate it with a bug tracker like Mantis.
Let me know if you need any help in this.
A: JIRA has plugins for scanning CVS commit comments, using them to associate source code changes with tracked issues.
A: There is a comprehensive list of Open Source Test Management tools at www.opensourcetesting.org.
One which I came across on a project recently was TestLink and it looks pretty good. It also has an open API.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Favorite image file format for 2d sprites What is your favorite, lossless image format for games (namely 2d games)? And why?
Some things to take into consideration are size on disk, overhead for converting to a usable format, and features of the format (ie alpha support).
There is no best answer, but be sure to back yours up the best you can!
A: There is a best answer, and it's clearly PNG.
Good compression, supports paletted alpha, extensible (in that you can attach arbitrary blobs to a PNG), what's not to like?
A: I like PNG a lot. It has good compression, supports alpha channels, and supports color palettes, so file sizes can be smaller. And it is patent free, so everybody can use it.
A: I'd suggest PNG. Most software supports writing it, most libraries support reading it, it's lossless and supports alpha transparency. And it's a standard format.
And, maybe important for hobbyist 2D games, very small images also result in very small files (i.e. a 16x16 icon can be 1KB or less).
A: PNG does NOT support alpha transparency; it has a translucency channel, which is different. This can lead to problems depending on how you are rendering sprites to the screen. TGA, hands down.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: .NET: What is the status of the Castle Project? Clicking through to the download page, I see that the last version available for download is over one year old, and it's also "just" a Release Candidate of version 1.0. There is really no news of any development.
Yes, you can find newer versions from the nightly builds, but that's not a really serious option.
Also, the "getting-started" and description pages are sometimes not even started, and some are not completed.
What's the deal? Any C# 3.0 features on the way or what?
A: Hamilton Verissimo, the project founder, took up a position with Microsoft in August.
What happens to Castle?
That was a delicate subject to me, but surprisingly it wasn’t a problem to them. I got a written permission to keep working on Castle as much as I want. So nothing changes…
What happens to Castle Stronghold?
Albeit I was the frontman at CS, there’s a handful of talented people there. For the first time we were lucky hiring a junior developer - I couldn’t believe myself - and I’m positive the company will have a great future.
I’m going to have a small share of the company, but won’t be involved anymore. Stronghold has also just had a share sold to one of our clients; I’ll release details once it gets signed.
Hammett has recently posted about issues with .Net 3.5 SP1, so he is evidently still working on it, but perhaps more from tightening things up at the MS end than from pushing development on with Castle.
There are other developers still releasing daily builds etc, and the developer community is still very active on Google Groups, if not also on the project homepage.
A: First of all, the "Release candidates" are misnamed. "Milestones" might be better. All areas of Castle are still under continuous development, and as far as I know, most users use the trunk version (I do). And there are a number of very active developers working on it besides Hamilton.
There are two active Google Groups for it:
http://groups.google.com/group/castle-project-devel (for development of Castle)
http://groups.google.com/group/castle-project-users (for users of Castle)
Any C# 3.0 features on the way or
what?
It's completely compatible with .Net 3.5 right now. There is currently a poll on the development newsgroup about dropping VS2005 (i.e. C#2) support and going fully C#3. (I believe it's about 20:1 in favor) Some portions of the CastleContrib already are using C#3 features.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Targeting both 32bit and 64bit with Visual Studio in same solution/project I have a little dilemma on how to set up my visual studio builds for multi-targeting.
Background: c# .NET v2.0 with p/invoking into 3rd party 32 bit DLL's, SQL compact v3.5 SP1, with a Setup project.
Right now, the platform target is set to x86 so it can be run on Windows x64.
The 3rd party company has just released 64 bit versions of their DLL's and I want to build a dedicated 64bit program.
This raises some questions which I haven't got the answers to yet.
I want to have the exact same code base.
I must build with references to either the 32bit set of DLL's or 64bit DLL's.
(Both 3rd party and SQL Server Compact)
Can this be solved with 2 new sets of configurations (Debug64 and Release64) ?
Must I create 2 separate setup projects(std. visual studio projects, no Wix or any other utility), or can this be solved within the same .msi?
Any ideas and/or recommendations would be welcomed.
A: Yes, you can target both x86 and x64 with the same code base in the same project. In general, things will Just Work if you create the right solution configurations in VS.NET (although P/Invoke to entirely unmanaged DLLs will most likely require some conditional code): the items that I found to require special attention are:
*
*References to outside managed assemblies with the same name but their own specific bitness (this also applies to COM interop assemblies)
*The MSI package (which, as has already been noted, will need to target either x86 or x64)
*Any custom .NET Installer Class-based actions in your MSI package
The assembly reference issue can't be solved entirely within VS.NET, as it will only allow you to add a reference with a given name to a project once. To work around this, edit your project file manually (in VS, right-click your project file in the Solution Explorer, select Unload Project, then right-click again and select Edit). After adding a reference to, say, the x86 version of an assembly, your project file will contain something like:
<Reference Include="Filename, ..., processorArchitecture=x86">
<HintPath>C:\path\to\x86\DLL</HintPath>
</Reference>
Wrap that Reference tag inside an ItemGroup tag indicating the solution configuration it applies to, e.g:
<ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x86' ">
<Reference ...>....</Reference>
</ItemGroup>
Then, copy and paste the entire ItemGroup tag, and edit it to contain the details of your 64-bit DLL, e.g.:
<ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x64' ">
<Reference Include="Filename, ..., processorArchitecture=AMD64">
<HintPath>C:\path\to\x64\DLL</HintPath>
</Reference>
</ItemGroup>
After reloading your project in VS.NET, the Assembly Reference dialog will be a bit confused by these changes, and you may encounter some warnings about assemblies with the wrong target processor, but all your builds will work just fine.
Solving the MSI issue is up next, and unfortunately this will require a non-VS.NET tool: I prefer Caphyon's Advanced Installer for that purpose, as it pulls off the basic trick involved (create a common MSI, as well as 32-bit and 64-bit specific MSIs, and use an .EXE setup launcher to extract the right version and do the required fixups at runtime) very, very well.
You can probably achieve the same results using other tools or the Windows Installer XML (WiX) toolset, but Advanced Installer makes things so easy (and is quite affordable at that) that I've never really looked at alternatives.
One thing you may still require WiX for though, even when using Advanced Installer, is for your .NET Installer Class custom actions. Although it's trivial to specify certain actions that should only run on certain platforms (using the VersionNT64 and NOT VersionNT64 execution conditions, respectively), the built-in AI custom actions will be executed using the 32-bit Framework, even on 64-bit machines.
This may be fixed in a future release, but for now (or when using a different tool to create your MSIs that has the same issue), you can use WiX 3.0's managed custom action support to create action DLLs with the proper bitness that will be executed using the corresponding Framework.
Edit: as of version 8.1.2, Advanced Installer correctly supports 64-bit custom actions. Since my original answer, its price has increased quite a bit, unfortunately, even though it's still extremely good value when compared to InstallShield and its ilk...
Edit: If your DLLs are registered in the GAC, you can also use the standard reference tags this way (SQLite as an example):
<ItemGroup Condition="'$(Platform)' == 'x86'">
<Reference Include="System.Data.SQLite, Version=1.0.80.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=x86" />
</ItemGroup>
<ItemGroup Condition="'$(Platform)' == 'x64'">
<Reference Include="System.Data.SQLite, Version=1.0.80.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=AMD64" />
</ItemGroup>
The condition is also reduced down to all build types, release or debug, and just specifies the processor architecture.
A: You can use a condition to an ItemGroup for the dll references in the project file.
This will cause visual studio to recheck the condition and references whenever you change the active configuration.
Just add a condition for each configuration.
Example:
<ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|x86' ">
<Reference Include="DLLName">
<HintPath>..\DLLName.dll</HintPath>
</Reference>
<ProjectReference Include="..\MyOtherProject.vcxproj">
<Project>{AAAAAA-000000-BBBB-CCCC-TTTTTTTTTT}</Project>
<Name>MyOtherProject</Name>
</ProjectReference>
</ItemGroup>
A: Let's say you have the DLLs build for both platforms, and they are in the following location:
C:\whatever\x86\whatever.dll
C:\whatever\x64\whatever.dll
You simply need to edit your .csproj file from this:
<HintPath>C:\whatever\x86\whatever.dll</HintPath>
To this:
<HintPath>C:\whatever\$(Platform)\whatever.dll</HintPath>
You should then be able to build your project targeting both platforms, and MSBuild will look in the correct directory for the chosen platform.
A: Not sure of the total answer to your question - but thought I would point out a comment in the Additional Information section of the SQL Compact 3.5 SP1 download page seeing you are looking at x64 - hope it helps.
Due to changes in SQL Server Compact
SP1 and additional 64-bit version
support, centrally installed and mixed
mode environments of 32-bit version of
SQL Server Compact 3.5 and 64-bit
version of SQL Server Compact 3.5 SP1
can create what appear to be
intermittent problems. To minimize the
potential for conflicts, and to enable
platform neutral deployment of managed
client applications, centrally
installing the 64-bit version of SQL
Server Compact 3.5 SP1 using the
Windows Installer (MSI) file also
requires installing the 32-bit version
of SQL Server Compact 3.5 SP1 MSI
file. For applications that only
require native 64-bit, private
deployment of the 64-bit version of
SQL Server Compact 3.5 SP1 can be
utilized.
I read this as "include the 32bit SQLCE files as well as the 64bit files" if distributing for 64bit clients.
Makes life interesting I guess.. must say that I love the "what appears to be intermittent problems" line... sounds a bit like "you are imagining things, but just in case, do this..."
A: One .Net build with x86/x64 Dependencies
While all the other answers show you how to make different builds according to the platform, I give you the option to have only the "AnyCPU" configuration and make one build that works with both your x86 and x64 DLLs.
You have to write some plumbing code for this.
Resolution of correct x86/x64-dlls at runtime
Steps:
*
*Use AnyCPU in csproj
*Decide if you only reference the x86 or the x64 dlls in your csprojs. Adapt the UnitTests settings to the architecture settings you have chosen. It's important for debugging/running the tests inside VisualStudio.
*On Reference-Properties set Copy Local & Specific Version to false
*Get rid of the architecture warnings by adding this line to the first PropertyGroup in all of your csproj files where you reference x86/x64:
<ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch>None</ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch>
*Add this post-build script to your startup project; use and modify the paths in this script so that it copies all your x86/x64 DLLs into the corresponding subfolders of your build output, bin\x86\ and bin\x64\:
xcopy /E /H /R /Y /I /D $(SolutionDir)\YourPathToX86Dlls $(TargetDir)\x86
xcopy /E /H /R /Y /I /D $(SolutionDir)\YourPathToX64Dlls $(TargetDir)\x64
--> If you started the application now, you would get an exception
that the assembly could not be found.
*Register the AssemblyResolve event right at the beginning of your application entry point
AppDomain.CurrentDomain.AssemblyResolve += TryResolveArchitectureDependency;
with this method:
/// <summary>
/// Event Handler for AppDomain.CurrentDomain.AssemblyResolve
/// </summary>
/// <param name="sender">The app domain</param>
/// <param name="resolveEventArgs">The resolve event args</param>
/// <returns>The architecture dependent assembly</returns>
public static Assembly TryResolveArchitectureDependency(object sender, ResolveEventArgs resolveEventArgs)
{
var dllName = resolveEventArgs.Name.Substring(0, resolveEventArgs.Name.IndexOf(","));
var anyCpuAssemblyPath = $".\\{dllName}.dll";
var architectureName = System.Environment.Is64BitProcess ? "x64" : "x86";
var assemblyPath = $".\\{architectureName}\\{dllName}.dll";
if (File.Exists(assemblyPath))
{
return Assembly.LoadFrom(assemblyPath);
}
return null;
}
*If you have unit tests make a TestClass with a Method that has an AssemblyInitializeAttribute and also register the above TryResolveArchitectureDependency-Handler there. (This won't be executed sometimes if you run single tests inside visual studio, the references will be resolved not from the UnitTest bin. Therefore the decision in step 2 is important.)
Benefits:
*
*One Installation/Build for both platforms
Drawbacks:
*
*No errors at compile time when the x86/x64 DLLs do not match.
*You should still run tests in both modes!
Optionally create a second executable that is exclusive for x64 architecture with Corflags.exe in postbuild script
Other variants to try out:
*
*You don't need the AssemblyResolve event handler if you ensure that the right DLLs are copied to your binary folder at startup (evaluate the process architecture, then move the corresponding DLLs from x64/x86 into the bin folder and back).
*In the installer, evaluate the architecture, delete the binaries for the wrong architecture, and move the right ones to the bin folder.
A: Regarding your last question: most likely you can't solve this inside a single MSI.
If you are using registry/system folders or anything related, the MSI itself must be aware of this, and you must prepare a 64-bit MSI to install properly on a 64-bit machine.
There is a possibility that you can have your product install as a 32-bit application and still be able to run as a 64-bit one, but I think that may be somewhat hard to achieve.
That being said, I think you should be able to keep a single code base for everything. At my current workplace we have managed to do so (but it did take some juggling to make everything play together).
Hope this helps.
Here's a link to some info related to 32/64 bit issues:
http://blog.typemock.com/2008/07/registry-on-windows-64-bit-double-your.html
A: If you use Custom Actions written in .NET as part of your MSI installer then you have another problem.
The 'shim' that runs these custom actions is always 32-bit, so your custom action will run as 32-bit as well, despite whatever target you specify.
More info and some ninja moves to get around it (basically, change the MSI to use the 64-bit version of this shim):
Building an MSI in Visual Studio 2005/2008 to work on a SharePoint 64
64-bit Managed Custom Actions with Visual Studio
A: You can generate the two solutions separately and merge them afterwards!
I did this for VS 2010 and it works: I had 2 different solutions generated by CMake and I merged them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "113"
} |
Q: Compile-time type based dispatch Following techniques from 'Modern C++ Design', I am implementing a persistence library with various compile-time optimisations. I would like the ability to dispatch a function to a templated member variable if that variable derives from a given class:
template<class T, template <class> class Manager = DefaultManager> class Data
{
private:
T *data_;
public:
void Dispatch()
{
if(SUPERSUBCLASS(Container, T))
{
data_->IKnowThisIsHere();
}
else
{
Manager<T>::SomeGenericFunction(data_);
}
}
}
Where SUPERSUBCLASS is a compile-time macro to determine object inheritance. Of course, this fails in all cases where T does not inherit from Container (or T is an intrinsic type, etc.) because the compiler rightly complains that IKnowThisIsHere() is not a member, even though this code path will never be followed, as shown here after preprocessing with T = int:
private:
int *data_;
public:
void Dispatch()
{
if(false)
{
data_->IKnowThisIsHere();
Compiler clearly complains at this code, even though it will never get executed. A suggestion of using a dynamic_cast also does not work, as again a type conversion is attempted at compile time that is not possible (for example with T=double, std::string):
void Dispatch()
{
if(false)
{
dynamic_cast<Container*>(data_)->IKnowThisIsHere();
error: cannot dynamic_cast '((const Data<double, DefaultManager>*)this)->Data<double, DefaultManager>::data_' (of type 'double* const') to type 'class Container*' (source is not a pointer to class)
error: cannot dynamic_cast '((const Data<std::string, DefaultManager>*)this)->Data<std::string, DefaultManager>::data_' (of type 'struct std::string* const') to type 'class Container*' (source type is not polymorphic)
I really need to emulate (or indeed persuade!) having the compiler emit one set of code if T does inherit from Container, and another if it does not.
Any suggestions?
A: Overloading can be useful to implement compile-time dispatching, as proposed by Alexandrescu in his book "Modern C++ Design".
You can use a class like this to transform at compile time a boolean or integer into a type:
template <bool n>
struct int2type
{ enum { value = n}; };
The following source code shows a possible application:
#include <iostream>
#define MACRO() true // <- macro used to dispatch
template <bool n>
struct int2type
{ enum { value = n }; };
void method(int2type<false>)
{ std::cout << __PRETTY_FUNCTION__ << std::endl; }
void method(int2type<true>)
{ std::cout << __PRETTY_FUNCTION__ << std::endl; }
int
main(int argc, char *argv[])
{
// MACRO() determines which function to call
//
method( int2type<MACRO()>());
return 0;
}
Of course, what really does the job is MACRO(), or better, a metafunction implementation in its place.
A: You require a kind of compile-time if. This then calls a function depending on which case is true. This way, the compiler won't stumble upon code which it can't compile (because that is safely stored away in another function template that never gets instantiated).
There are several ways of realizing such a compile-time if. The most common is to employ the SFINAE idiom: substitution failure is not an error. Boost's is_base_of is actually an instance of this idiom. To employ it correctly, you wouldn't write it in an if expression but rather use it in the return type of your function.
Untested code:
void Dispatch()
{
myfunc(data_);
}
private:
// EDIT: disabled the default case where the specialisation matched
template <typename U>
typename enable_if_c<is_base_of<Container, U>::value, void>::type myfunc(U* data) {
data->IKnowThisIsHere();
}
template <typename U>
typename disable_if_c<is_base_of<Container, U>::value, void>::type myfunc(U* data) { // default case
Manager<U>::SomeGenericFunction(data);
}
A: Boost's type traits have something for that: is_base_of
A: Look into the boost template meta programming library. Also, depending on what you are trying to accomplish look at the boost serialization library, since it may already have what you need.
A: Unfortunately I've been through that too (and dynamic_cast is, besides, a runtime check ;)). The compiler complains if you pass in non-polymorphic or non-class types, in a similar way to before:
error: cannot dynamic_cast '((const Data<double, DefaultManager>*)this)->Data<double, DefaultManager>::data_' (of type 'double* const') to type 'class Container*' (source is not a pointer to class)
or
error: cannot dynamic_cast '((const Data<std::string, DefaultManager>*)this)->Data<std::string, DefaultManager>::data_' (of type 'struct std::string* const') to type 'class Container*' (source type is not polymorphic)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Including Relevant Boost Libraries with C++ Source (Using Visual Studio) I have a project I'm working on (for school) that I'm digging into the Boost libraries for the solutions. I need some way to distribute the required Boost source code with my application so that it can be compiled without the libraries being installed on the system doing the compiling. (School computers lack just about anything you can mention. The school just installed CVS last year. But they do have VS2005)
Note: I'm using Visual Studio 2005 on Vista. I have Boost 1.34.1 on my system; I used the automatic installer. The documentation I've come across says something about using the BCP command, but that command doesn't seem to copy anything. (I'm using an absolute path to call BCP so I don't end up calling the wrong command.)
Edit: I am trying to use the RegEx libraries.
Edit: The command I'm using for BCP is: "c:\Program Files\boost\boost_1_34_1\bin\bcp.exe" boost/regex.hpp regex\
And it returns: no errors detected
A: It depends on the library you're using. If you're using a header-only library (most of the boost libraries are, some notable exceptions are signals, serialisation and date/time) you can just copy those header files. Otherwise you'll need to copy the cpp files, too. My suggestion is to just include them into your project.
So, here's what you do: you remove the boost include path from your project settings (tool->options->projects and solutions->vc++ directories->include files). Try to compile. Look at which include fails. Copy that file from your boost directory to your project directory. Lather, rinse, repeat until your project compiles.
If you're using a library that requires .cpp files, you'll get an error at link time. Copy all .cpp files of the library you use to your project directory and add them all to your solution. Rebuild and cross fingers.
For a more detailed answer, please post which libraries you're using.
A: Try calling bcp with this command:
"c:\Program Files\boost\boost_1_34_1\bin\bcp.exe" --boost="c:\Program Files\boost\boost_1_34_1" regex regex
--boost tells bcp where boost is installed, the first regex is the name of the modules, the second is the destination directory.
Oh, and if you haven't already noticed, there are Visual C++ makefiles in libs\regex\build\.
A: Based on your comment that you're using regex, here's what you do: download the 'normal' boost distribution zip file. Unzip it somewhere. Go to libs/regex/src. Copy and paste all the .cpp files in that directory to your project directory. Add them to your Visual Studio project (right-click, 'add' -> 'existing item'). Then go to boost/regex and copy everything in there (the header files) to your project directory (including the subdirectories). Change all the includes in your own .cpp and .h files from #include <boost/regex.hpp> to #include "regex.hpp" so that it includes the headers from your local directory and not those that were installed system-wide. Make sure to remove the system-wide include path from your project settings like I said in my last post.
Then, compile your code. You'll get a number of 'missing include file' errors because regex depends on other boost libraries. Repeat the whole process: go to boost/xxx where xxx is the library that regex is looking for. You can deduce the library from the error message. Copy everything that the compiler asks for to your own project directory. You may need to fiddle a bit with your directory layout before it works. It's really a step by step approach, where every step is the same: identify the missing file, copy it over, see if that include is found and fixed, and continue with the next step. This is boring work I'm afraid.
You could automate this all with bcp but for a one-off project like a school project I wouldn't bother; only if you think you'll have future projects that will require you to deliver a self-contained zipfile.
A: This seems a bit odd to me. If you are distributing source code, then the people you are distributing to should be able to install boost. Then if they already have boost, there is no duplication and confusion, or if they do not and you need a built library, they will build the correct library for their system. If the people you are distributing are not up to installing boost, then I would suggest distributing binaries in an install package to make it as easy as possible for them.
A: I've come across this before, embedding boost into my projects. Each individual boost library comes with various project files for building with different make systems (Jam, make, Visual Studio 6...) but they're never so great with the newer versions of VS.
I always prefer to create a new project file and embed boost directly into my project. It's pretty simple, you just need to add all of the source files and set the project options properly. There is one caveat, however, and that is you must name the library output file the same as boost does, because their include files depend on that.
Once you've done this, you can distribute the boost libraries just like any other files in your project.
A: It is such a PITA to compile boost; only the motivated students are going to be able to do it. Have you considered bundling the installer?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Benefits of inline functions in C++? What are the advantages/disadvantages of using inline functions in C++? I see that it only increases performance for the code that the compiler outputs, but with today's optimized compilers, fast CPUs, huge memory etc. (not like in the 1980s when memory was scarce and everything had to fit in 100KB of memory) what advantages do they really have today?
A: I'd like to add that inline functions are crucial when you are building a shared library. Without marking a function inline, it will be exported into the library in binary form. It will also be present in the symbol table, if exported. On the other side, inlined functions are not exported, neither to the library binaries nor to the symbol table.
It may be critical when the library is intended to be loaded at runtime. It may also affect binary-compatibility-aware libraries. In such cases don't use inline.
A: In archaic C and C++, inline is like register: a suggestion (nothing more than a suggestion) to the compiler about a possible optimization.
In modern C++, inline tells the linker that, if multiple definitions (not declarations) are found in different translation units, they are all the same, and the linker can freely keep one and discard all the other ones.
inline is mandatory if a function (no matter how complex or "linear") is defined in a header file, to allow multiple sources to include it without getting a "multiple definition" error by the linker.
Member functions defined inside a class are "inline" by default, as are template functions (in contrast to global functions).
//fileA.h
inline void afunc()
{ std::cout << "this is afunc" << std::endl; }
//file1.cpp
#include "fileA.h"
void acall()
{ afunc(); }
//main.cpp
#include "fileA.h"
void acall();
int main()
{
afunc();
acall();
}
//output
this is afunc
this is afunc
Note the inclusion of fileA.h into two .cpp files, resulting in two instances of afunc().
The linker will discard one of them.
If no inline is specified, the linker will complain.
A: During optimization many compilers will inline functions even if you didn't mark them. You generally only need to mark functions as inline if you know something the compiler doesn't, as it can usually make the correct decision itself.
A: inline allows you to place a function definition in a header file and #include that header file in multiple source files without violating the one definition rule.
A: Generally speaking, these days with any modern compiler worrying about inlining anything is pretty much a waste of time. The compiler should actually optimize all of these considerations for you through its own analysis of the code and your specification of the optimization flags passed to the compiler. If you care about speed, tell the compiler to optimize for speed. If you care about space, tell the compiler to optimize for space. As another answer alluded to, a decent compiler will even inline automatically if it really makes sense.
Also, as others have stated, using inline does not guarantee inline of anything. If you want to guarantee it, you will have to define a macro instead of an inline function to do it.
When to inline and/or define a macro to force inlining? - Only when you have a demonstrated, proven increase in speed for a critical section of code that is known to have an effect on the overall performance of the application.
A: It is not all about performance. Both C++ and C are used for embedded programming, sitting on top of hardware. If you were, for example, writing an interrupt handler, you would need to make sure that the code can be executed at once, without additional registers and/or memory pages being swapped. That is when inline comes in handy. Good compilers do some "inlining" themselves when speed is needed, but "inline" compels them.
A: Advantages
*
*By inlining your code where it is needed, your program will spend less time in the function call and return parts. It is supposed to make your code go faster, even as it grows larger (see below). Inlining trivial accessors could be an example of effective inlining.
*By marking it as inline, you can put a function definition in a header file (i.e. it can be included in multiple compilation units, without the linker complaining)
Disadvantages
*
*It can make your code larger (i.e. if you use inline for non-trivial functions). As such, it could provoke paging and defeat optimizations from the compiler.
*It slightly breaks your encapsulation because it exposes the internals of your object processing (but then, every "private" member would, too). This means you must not use inlining in a PImpl pattern.
*It slightly breaks your encapsulation 2: C++ inlining is resolved at compile time. Which means that should you change the code of the inlined function, you would need to recompile all the code using it to be sure it will be updated (for the same reason, I avoid default values for function parameters)
*When used in a header, it makes your header file larger, and thus will dilute interesting information (like the list of a class's methods) with code the user doesn't care about (this is the reason I declare inlined functions inside a class, but define them in a header after the class body, and never inside the class body).
Inlining Magic
*
*The compiler may or may not inline the functions you marked as inline; it may also decide to inline functions not marked as inline at compilation or linking time.
*Inline works like a copy/paste controlled by the compiler, which is quite different from a pre-processor macro: The macro will be forcibly inlined, will pollute all the namespaces and code, won't be easily debuggable, and will be done even if the compiler would have ruled it as inefficient.
*Every method of a class defined inside the body of the class itself is considered "inlined" (even if the compiler can still decide to not inline it).
*Virtual methods are not supposed to be inlinable. Still, sometimes, when the compiler can know for sure the type of the object (i.e. the object was declared and constructed inside the same function body), even a virtual function will be inlined because the compiler knows exactly the type of the object.
*Template methods/functions are not always inlined (their presence in an header will not make them automatically inline).
*The next step after "inline" is template metaprogramming. I.e. by "inlining" your code at compile time, sometimes the compiler can deduce the final result of a function... So a complex algorithm can sometimes be reduced to a kind of return 42; statement. This is for me extreme inlining. It happens rarely in real life, it makes compilation time longer, will not bloat your code, and will make your code faster. But like the grail, don't try to apply it everywhere, because most processing cannot be resolved this way... Still, this is cool anyway... :-p
A: Inlining is a suggestion to the compiler which it is free to ignore. It's ideal for small bits of code.
If your function is inlined, it's basically inserted in the code where the function call is made to it, rather than actually calling a separate function. This can assist with speed as you don't have to do the actual call.
It also assists CPUs with pipelining as they don't have to reload the pipeline with new instructions caused by a call.
The only disadvantage is possible increased binary size but, as long as the functions are small, this won't matter too much.
I tend to leave these sorts of decisions to the compilers nowadays (well, the smart ones anyway). The people who wrote them tend to have far more detailed knowledge of the underlying architectures.
A: Inline functions are faster because you don't need to push and pop things on/off the stack like parameters and the return address; however, it does make your binary slightly larger.
Does it make a significant difference? Not noticeably enough on modern hardware for most. But it can make a difference, which is enough for some people.
Marking something inline does not give you a guarantee that it will be inline. It's just a suggestion to the compiler. Sometimes it's not possible such as when you have a virtual function, or when there is recursion involved. And sometimes the compiler just chooses not to use it.
I could see a situation like this making a detectable difference:
inline int aplusb_pow2(int a, int b) {
return (a + b)*(a + b) ;
}
for(int a = 0; a < 900000; ++a)
for(int b = 0; b < 900000; ++b)
aplusb_pow2(a, b);
A: An inline function is an optimization technique used by compilers. One can simply prepend the inline keyword to a function prototype to make the function inline. Marking a function inline instructs the compiler to insert the complete body of the function wherever that function is used in the code.
Advantages :-
*
*It does not incur function-call overhead.
*It also saves the overhead of pushing/popping variables on the stack during a function call.
*It also saves the overhead of the return from a function.
*It increases locality of reference by utilizing the instruction cache.
*After inlining the compiler can also apply intra-procedural optimization if specified. This is the most important one; in this way the compiler can now focus on dead-code elimination, put more stress on branch prediction, induction-variable elimination etc.
To check more about it one can follow this link
http://tajendrasengar.blogspot.com/2010/03/what-is-inline-function-in-cc.html
A: I fell into the same trouble with inlining functions into .so libraries. It seems that inlined functions are not compiled into the library. As a result the linker emits an "undefined reference" error if an executable wants to use the inlined function of the library. (This happened to me compiling Qt source with gcc 4.5.)
A: Why not make all functions inline by default? Because it's an engineering trade off. There are at least two types of "optimization": speeding up the program and reducing the size (memory footprint) of the program. Inlining generally speeds things up. It gets rid of the function call overhead, avoiding pushing then pulling parameters from the stack. However, it also makes the memory footprint of the program bigger, because every function call must now be replaced with the full code of the function. To make things even more complicated, remember that the CPU stores frequently used chunks of memory in a cache on the CPU for ultra-rapid access. If you make the program's memory image big enough, your program won't be able to use the cache efficiently, and in the worst case inlining could actually slow your program down. To some extent the compiler can calculate what the trade offs are, and may be able to make better decisions than you can, just looking at the source code.
A: Our computer science professor urged us to never use inline in a C++ program. When asked why, he kindly explained to us that modern compilers should detect when to use inline automatically.
So yes, the inline can be an optimization technique to be used wherever possible, but apparently this is something that is already done for you whenever it's possible to inline a function anyways.
A: Conclusion from another discussion here:
Are there any drawbacks with inline functions?
Apparently, There is nothing wrong with using inline functions.
But it is worth noting the following points!
*
*Overuse of inlining can actually make programs slower. Depending on a function's size, inlining it can cause the code size to increase or decrease. Inlining a very small accessor function will usually decrease code size while inlining a very large function can dramatically increase code size. On modern processors smaller code usually runs faster due to better use of the instruction cache. - Google Guidelines
*The speed benefits of inline functions tend to diminish as the function grows in size. At some point the overhead of the function call becomes small compared to the execution of the function body, and the benefit is lost - Source
*There are few situations where an inline function may not work:
*
*For a function returning values; if a return statement exists.
*For a function not returning any values; if a loop, switch or goto statement exists.
*If a function is recursive. -Source
*The __inline keyword causes a function to be inlined only if you specify the optimize option. If optimize is specified, whether or not __inline is honored depends on the setting of the inline optimizer option. By default, the inline option is in effect whenever the optimizer is run. If you specify optimize, you must also specify the noinline option if you want the __inline keyword to be ignored. -Source
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "273"
} |
Q: What are the most useful data structures to know inside out? I'm interested in finding out what people would consider the most useful data structures to know in programming. What data structure do you find yourself using all the time?
Answers to this post should help new programmers interested in finding a useful data structure for their problem. Answers should probably include the data structure, information about it or a relevant link, the situation it is being used in and why it is a good choice for this problem (e.g ideal computation complexities, simplicity and understanding etc.)
Each answer should be about one data structure only.
Thanks for any pearls of wisdom and experience people can share.
A: I will have to disregard your requirement about one data structure per post - these are the ones that I have used the most, and most programs I find require one among these or a combination.
arrays - the most basic, and they provide the fastest access. Vectors are the improvement over plain old arrays and are the de-facto replacement commonly used these days. deque is another variation on this theme, again providing constant-time random access but optimized for fast insertions and deletions at the beginning and end.
linked list - very useful for maintaining a list of data that is dropped and inserted frequently, but very slow to iterate/search, e.g. free/used lists inside memory pages.
trees - a basic structure that forms the basis of more complex structures. There are many forms of this structure. It provides O(log n) search times when the tree is kept sorted. It becomes useful for large data items like dictionaries. Binary/AVL and red-black trees are the most common.
maps and hashes - not exactly data structures, but complex fast-lookup algorithms implemented using a combination of clever logic and the above data structures.
These data structures and their implementations are available in the STL library in C++. Other languages also have their native implementations. Once you know these basic data structures and a few of their variations (queue, stack, priority queue) and something about search algorithms, I would say the basics are well covered.
A: One of the data structures I use the most (beyond vectors, of course) is the Hashtable.
It's about the only choice if you need to be able to search large quantities of data in O(1) time, meaning the time to search does not grow as the size of the collection grows.
The catch is that the insertion and deletion times are larger than in other data structures, and you need some sort of key with which to search the collection. Every element must have a key.
The algorithm takes the key of each element and computes a hash code that indicates the slot in the hash table in which to search.
Then depending on the implementation it either follows a list of items that fell in that bucket to find your item, or it searches nearby buckets.
The size of the hashtable is critical to the efficiency of the hash, which is strongly affected by the amount of hash-code collisions between keys.
Use it whenever you need a map and the expected number of elements of the map exceeds about 10. It's a bit more memory-intensive than other structures since it needs lots of unused slots in the table to be efficient.
C# has a great implementation of it with Dictionary<keytype, valuetype> and even has a HybridDictionary that decides internally when to use a hashtable or a vector.
Any good programming book describes it, but you will be well served by Wikipedia:
http://en.wikipedia.org/wiki/Hashtable
A: I find myself using associative arrays quite a lot, basically arrays with a string as the index.
A:
The advantages of linked lists are that they are very cheap to add/remove nodes. Unlike arrays [...] they do not require reallocating more memory upon expanding.
If you have an array and you double the allocation size every time you fill it, you'll have amortized O(1) appends. Also, looping over all the elements of an array is likely to be faster (in wall time) than looping over a linked list, due to caching effects (unless you allocate the links in big chunks and don't mess around with them too much).
Also, arrays are smaller: you save the per-element word overhead, plus the per-allocation overhead (which is probably at least two words: one for the size and one for the next-in-free-list pointer).
A: Linked lists / doubly-linked lists / other variants
Everyone should know the pros and cons of a linked list, and judging by the complete lack of usage, it seems to be something that many people forget.
The advantages of linked lists are that they are very cheap to add/remove nodes. Unlike arrays or data structures that use an array at the core, they do not require reallocating more memory upon expanding.
The disadvantages are that they do not perform well at all for searching. What would be an O(1) lookup in an array is O(n) for a linked list.
Like all structures, linked lists are ideal only under certain conditions. But used at the right time, they are very powerful.
A: I like binary trees. Especially the Splay-Tree variant. It's somewhat similar to a self-balancing binary tree but also adapts to the usage pattern of the application. You almost never run into worst-case O(n) behaviour.
A nice bonus is that they are also easier to write and need less code than other self-balancing binary trees. It's one of my favorite data-structures because it performs so incredibly well in practice.
http://en.wikipedia.org/wiki/Splay_tree
A: I find myself using arrays very frequently in combination with the "foreach" control structure to loop through the items. In the past I used arrays with a numeric index and the "for(i=1;i<n;i++)". I've found that looping through arrays with "foreach" instead of an explicit numeric index provides a more general and readable solution.
A: Graphs are a very powerful overlooked data structure.
A lot of problems can be solved by constructing a graph modeling your problem, then using a well-known algorithm on the graph.
Some examples: natural language processing (where the edge weight connecting nodes can represent how likely one word is to follow another), video games (using graphs to determine shortest paths for AI characters), and network topology.
I learned about graphs from The Algorithm Design Manual, which was recommended by Steve Yegge in a blog post.
A: This is a bit like asking which tools in a carpenter's toolbox are best to learn to use. Each of them is good for a certain type of job, and you need to learn the basic ones (maps, lists, bags, sets, etc) equally.
A: I don't think there is one data structure one must know. Each data structure has its own properties and is thus suitable for a specific problem.
A: I've always found a myriad of uses for stacks, although less so in object oriented programming. Really, all data structures have their uses, and they're not complex. Learn all you can.
A: I don't think that there is a general answer here. It should be bound to some use case.
For example, in my more than 10-year career as a programmer/manager I have never used binary trees. I doubt that means binary trees are not useful; rather, in the kernel and embedded world the linked list probably fits better.
Actually, when I think about it, dropping a few exceptions, I have used only simple linked lists.
And even in embedded it is probably not the only structure used. I'm living in the world of low-level hardware protocols; probably "up the hill" more data structures are used...
A: For a basic appreciation, you should know of a few abstract data types (set, dictionary, ordered list, queue, stack etc.) and several ways of implementing each with their relative trade-offs.
This will probably require you to understand arrays, linked-lists (single and double linked), hash tables, binary search trees (with some understanding of simple balancing heuristics) and binary heaps. Know these inside out and you'll be a long way towards understanding more complex and interesting data structures. Plus if you've implemented all of them you'll have a ready-made library that you understand for programming projects (although obviously more battle-hardened libraries like Boost or whatever are more appropriate for production code).
This gives a very useful vocabulary of data structures, which might make a significant difference to the way you write your programs. You might find you've been solving problems with many partial implementations of a queue, for example, that you can now replace with a canonical implementation.
A: Quicksort
Mergesort
Bubblesort
These are really good to learn and understand how they work. Sorting is fun and can be applied to many areas :)
A: This post is way too vague. There are countless data structures: arrays, dictionaries, etc. Each data structure can be used to solve different problems.
It would be much more productive to ask for DS for a specific problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Unable to access HTTP PUT data in webservice code All,
As part of an application I'm writing I need to have an HTTP PUT webservice which accepts incoming image data, which will be analyzed, validated, and added to a local file store.
My issue arises after the size validation as the
$_SERVER['CONTENT_LENGTH']
has a > 0 value, and this value is identical to the test file size, so I can assume that all is going well up to this point, but when I try to read the incoming stream data using
file_get_contents('php://stdin');
I get an empty string. I've also tried using
file_get_contents('php://input');
And this give me the same result of an empty string.
Any help, suggestions or direction will be appreciated.
NB: I'm using
*
*PHP 5.2.6
*Apache 2.0
A: Apache HTTPD denies PUT requests by default. You might check out mod_put:
http://perso.ec-lyon.fr/lyonel.vincent/apache/mod_put.html
and add this to httpd.conf:
<Location /upload/dir>
EnablePut On
AuthType Basic
AuthName "Web publishing"
AuthUserFile /www/etc/passwd
AuthGroupFile /www/etc/group
<Limit PUT>
require valid-user
</Limit>
</Location>
A: My best guess is that you need to alter httpd.conf to not deny PUT requests. Have you checked that?
A: The r you've got as the second arg isn't right. file_get_contents doesn't use the a/r/w/a+/r+/w+ arguments that fopen uses. You probably want to remove it and just do:
file_get_contents('php://input');
See http://us3.php.net/file_get_contents.
A: file_get_contents doesn't take the "r" parameter - see the PHP manual page:
string file_get_contents ( string $filename [, int $flags...)
Valid $flag values are FILE_USE_INCLUDE_PATH, FILE_TEXT, FILE_BINARY.
Try removing the "r" flag and trying again
Edit - question updated, "r" flag was being ignored so evidently not the root of the problem.
It looks like there's a reported bug in PHP regarding file_get_contents returning an empty string for a HTTP POST. From the bug description:
file_get_contents('php://input') (and also file, fopen+fread) does not
return POST data, when submited form with
enctype="multipart/form-data".
When submited the same form without enctype specified (so default
"application/x-www-form-urlencoded" is used) all works OK.
So it looks like a work-around is to change the specified form enctype away from multipart/form-data, which obviously isn't ideal for an image upload - from the W3 FORM specification:
The content type "application/x-www-form-urlencoded" is inefficient for sending large quantities of binary data or text containing non-ASCII characters. The content type "multipart/form-data" should be used for submitting forms that contain files, non-ASCII data, and binary data.
Further Edit
This bug seems to have been resolved in your PHP version. Have you checked to make sure that the buffer being read in doesn't start with a carriage-return / newline char? There's a problem somewhat similar to yours that was discussed on Sitepoint.
Try running strlen on the input and see what the length is.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I join int[] to a character-separated string in .NET? I have an array of integers:
int[] number = new int[] { 2,3,6,7 };
What is the easiest way of converting these into a single string where the numbers are separated by a character (like: "2,3,6,7")?
I'm using C# and .NET 3.5.
A: String.Join(";", number.Select(item => item.ToString()).ToArray());
We have to convert each of the items to a String before we can join them, so it makes sense to use Select and a lambda expression. This is equivalent to map in some other languages. Then we have to convert the resulting collection of strings back to an array, because String.Join only accepts a string array.
The ToArray() is slightly ugly I think. String.Join should really accept IEnumerable<String>; there is no reason to restrict it to only arrays. This is probably just because Join is from before generics, when arrays were the only kind of typed collection available.
A: If your array of integers may be large, you'll get better performance using a StringBuilder. E.g.:
StringBuilder builder = new StringBuilder();
char separator = ',';
foreach(int value in integerArray)
{
if (builder.Length > 0) builder.Append(separator);
builder.Append(value);
}
string result = builder.ToString();
Edit: When I posted this I was under the mistaken impression that "StringBuilder.Append(int value)" internally managed to append the string representation of the integer value without creating a string object. This is wrong: inspecting the method with Reflector shows that it simply appends value.ToString().
Therefore the only potential performance difference is that this technique avoids one array creation, and frees the strings for garbage collection slightly sooner. In practice this won't make any measurable difference, so I've upvoted this better solution.
A: The question asks for the "easiest way of converting these into a single string where the numbers are separated by a character".
The easiest way is:
int[] numbers = new int[] { 2,3,6,7 };
string number_string = string.Join(",", numbers);
// do whatever you want with your exciting new number string
This only works in .NET 4.0+.
A: Although the OP specified .NET 3.5, people wanting to do this in .NET 2.0 with C# 2.0 can do this:
string.Join(",", Array.ConvertAll<int, String>(ints, Convert.ToString));
I find there are a number of other cases where the use of the Convert.xxx functions is a neater alternative to a lambda, although in C# 3.0 the lambda might help the type-inferencing.
A fairly compact C# 3.0 version which works with .NET 2.0 is this:
string.Join(",", Array.ConvertAll(ints, item => item.ToString()))
A: ints.Aggregate("", (str, n) => str + "," + n).Substring(1);
I also thought there was a simpler way.
A: In .NET 4.0, String.Join has an overload for params object[], so it's as simple as:
int[] ids = new int[] { 1, 2, 3 };
string.Join(",", ids);
Example
int[] ids = new int[] { 1, 2, 3 };
System.Data.Common.DbCommand cmd = new System.Data.SqlClient.SqlCommand("SELECT * FROM some_table WHERE id_column IN (@bla)");
cmd.CommandText = cmd.CommandText.Replace("@bla", string.Join(",", ids));
In .NET 2.0, it's a tiny little bit more difficult, since there's no such overload. So you need your own generic method:
public static string JoinArray<T>(string separator, T[] inputTypeArray)
{
string strRetValue = null;
System.Collections.Generic.List<string> ls = new System.Collections.Generic.List<string>();
for (int i = 0; i < inputTypeArray.Length; ++i)
{
string str = System.Convert.ToString(inputTypeArray[i], System.Globalization.CultureInfo.InvariantCulture);
if (!string.IsNullOrEmpty(str))
{
// SQL-Escape
// if (typeof(T) == typeof(string))
// str = str.Replace("'", "''");
ls.Add(str);
} // End if (!string.IsNullOrEmpty(str))
} // Next i
strRetValue = string.Join(separator, ls.ToArray());
ls.Clear();
ls = null;
return strRetValue;
}
In .NET 3.5, you can use extension methods:
public static class ArrayEx
{
public static string JoinArray<T>(this T[] inputTypeArray, string separator)
{
string strRetValue = null;
System.Collections.Generic.List<string> ls = new System.Collections.Generic.List<string>();
for (int i = 0; i < inputTypeArray.Length; ++i)
{
string str = System.Convert.ToString(inputTypeArray[i], System.Globalization.CultureInfo.InvariantCulture);
if (!string.IsNullOrEmpty(str))
{
// SQL-Escape
// if (typeof(T) == typeof(string))
// str = str.Replace("'", "''");
ls.Add(str);
} // End if (!string.IsNullOrEmpty(str))
} // Next i
strRetValue = string.Join(separator, ls.ToArray());
ls.Clear();
ls = null;
return strRetValue;
}
}
So you can use the JoinArray extension method.
int[] ids = new int[] { 1, 2, 3 };
string strIdList = ids.JoinArray(",");
You can also use that extension method in .NET 2.0, if you add the ExtensionAttribute to your code:
// you need this once (only), and it must be in this namespace
namespace System.Runtime.CompilerServices
{
[AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class | AttributeTargets.Method)]
public sealed class ExtensionAttribute : Attribute {}
}
A: I agree with the lambda expression for readability and maintainability, but it will not always be the best option. The downside to using both the IEnumerable/ToArray and StringBuilder approaches is that they have to dynamically grow a list, either of items or characters, since they do not know how much space will be needed for the final string.
In the rare case where speed is more important than conciseness, the following is more efficient.
int[] number = new int[] { 1, 2, 3, 4, 5 };
string[] strings = new string[number.Length];
for (int i = 0; i < number.Length; i++)
strings[i] = number[i].ToString();
string result = string.Join(",", strings);
A: var ints = new int[] {1, 2, 3, 4, 5};
var result = string.Join(",", ints.Select(x => x.ToString()).ToArray());
Console.WriteLine(result); // prints "1,2,3,4,5"
As of (at least) .NET 4.5,
var result = string.Join(",", ints.Select(x => x.ToString()).ToArray());
is equivalent to:
var result = string.Join(",", ints);
I see several solutions advertising the use of StringBuilder. Someone complains that the Join method should take an IEnumerable argument.
I'm going to disappoint you :) String.Join requires an array for a single reason - performance. The Join method needs to know the size of the data to effectively preallocate the necessary amount of memory.
Here is a part of the internal implementation of String.Join method:
// length computed from length of items in input array and length of separator
string str = FastAllocateString(length);
fixed (char* chRef = &str.m_firstChar) // note that we use direct memory access here
{
UnSafeCharBuffer buffer = new UnSafeCharBuffer(chRef, length);
buffer.AppendString(value[startIndex]);
for (int j = startIndex + 1; j <= num2; j++)
{
buffer.AppendString(separator);
buffer.AppendString(value[j]);
}
}
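To make that concrete, the same trick — compute the total output length up front, then fill a single exact-size buffer — looks roughly like this. This is an illustrative sketch in plain Java, not the actual .NET source:

```java
// Illustration of why String.Join wants a sized collection: knowing the
// element count and lengths lets it allocate the result buffer exactly once.
public class PreallocJoin {
    public static String join(String separator, String[] parts) {
        if (parts.length == 0) return "";
        int length = separator.length() * (parts.length - 1);
        for (String p : parts) {
            length += p.length();            // total output size, known up front
        }
        StringBuilder buf = new StringBuilder(length); // one exact-size allocation
        buf.append(parts[0]);
        for (int i = 1; i < parts.length; i++) {
            buf.append(separator).append(parts[i]);
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(",", new String[] { "2", "3", "6", "7" }));
    }
}
```

With an IEnumerable you cannot size the buffer without enumerating twice, which is part of why the array-only restriction survived until .NET 4.0.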
A: One mixture of the two approaches would be to write an extension method on IEnumerable<T> which used a StringBuilder. Here's an example, with different overloads depending on whether you want to specify the transformation or just rely on plain ToString. I've named the method "JoinStrings" instead of "Join" to avoid confusion with the other type of Join. Perhaps someone can come up with a better name :)
using System;
using System.Collections.Generic;
using System.Text;
public static class Extensions
{
public static string JoinStrings<T>(this IEnumerable<T> source,
Func<T, string> projection, string separator)
{
StringBuilder builder = new StringBuilder();
bool first = true;
foreach (T element in source)
{
if (first)
{
first = false;
}
else
{
builder.Append(separator);
}
builder.Append(projection(element));
}
return builder.ToString();
}
public static string JoinStrings<T>(this IEnumerable<T> source, string separator)
{
return JoinStrings(source, t => t.ToString(), separator);
}
}
class Test
{
public static void Main()
{
int[] x = {1, 2, 3, 4, 5, 10, 11};
Console.WriteLine(x.JoinStrings(";"));
Console.WriteLine(x.JoinStrings(i => i.ToString("X"), ","));
}
}
A: You can do
ints.ToString(",")
ints.ToString("|")
ints.ToString(":")
Check out
Separator Delimited ToString for Array, List, Dictionary, Generic IEnumerable
A: Forget about .NET 3.5 and use the following code in .NET Core:
var result = string.Join(",", ints);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "108"
} |
Q: Best Java/Swing browser component? What's the best cross platform Java Swing browser component at least able to play nicely in a swing interface (lightweight component ?) and able to run on MacOSX and Windows ?
Things like : FlyingSaucer, JDIC, maybe others ?
A: The Lobo Browser could be what you're looking for:
http://lobobrowser.org/index.jsp
It's GPL and renders JavaFX as well as HTML
Edit
JavaFX 2.0 comes with a Browser component:
http://docs.oracle.com/javafx/2/webview/jfxpub-webview.htm
A: You can go for MozSwing, which has all the features that Mozilla Firefox 3.0 supports, but it is heavy.
A: I believe this could help:
http://djproject.sourceforge.net/ns/index.html
A: We (@ WebRenderer) believe we have the best Java browser SDK - http://www.webrenderer.com/
WebRenderer is Swing based using Firefox as the underlying engine, and supports HTML5, SVG, etc on both 32 and 64bit systems.
A: A heavyweight browser: http://code.google.com/p/jbrowser/. It uses the Canvas component.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: How do I get a stack trace in OCaml? The Objective Caml language will only produce stack traces if you ask for them just right - what are the requirements for both bytecode and native code?
A: Because it looks like you can only get traces for exceptions, on Unix you can fork and throw the exception in a second process. This way the main process can continue:
export OCAMLRUNPARAM=b
# compile with -g
flush_all (); let r = Unix.fork () in if r = 0 then raise Exit
A: Compile with -g and set environment variable OCAMLRUNPARAM=b
A: If you are using Ocamlbuild instead of invoking compiler directly, you can use the debug tag. From the manual:
With OCamlbuild, you can simply add the debug tag to your program’s targets, and it will sort out when to insert the -g flag or not.
For example, if you are building a file foo.ml with package bar then your _tags file will have a line:
<foo.ml>: package(bar), debug
This will insert the appropriate -g flags while building bytecode/native files. However, you still need to set the environment variable using export OCAMLRUNPARAM=b as mentioned in the other answers.
A: Some Printexc functions let you do this programmatically.
A: As noted in other answers, you need to compile your project with debugging info and run it with the OCAMLRUNPARAM=b environment variable.
A convenient way to have Ocamlbuild compile an entire project with debugging info but without editing the _tags file is to specify a special debug target. From the manual:
The preferred way of compiling code suitable for debugging with ocamldebug or profiling native code with ocamlprof is to use the appropriate target extensions, .d.byte for debugging or .p.native.
I use this technique for quick compile-run cycles on the command line. For example, to run foo.ml:
export OCAMLRUNPARAM=b
ocamlbuild -no-links foo.d.byte && _build/foo.d.byte
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: Which image format is cost effective in terms of network operation for iPhone? I want to display a thumbnail image in a cell of tableViewController, this thumbnail image is located at some remote place (URL of address is in XML file) so which format of image is cost effective?
A: Create a sample thumbnail, convert it to PNG, GIF and JPEG, and see which is smaller.
Sometimes measuring is easier than guessing.
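That measurement is easy to script. A minimal sketch (the file names are hypothetical — export the same sample thumbnail once per format first):

```python
import os

def smallest_file(paths):
    """Return (path, size_in_bytes) of the smallest file among the candidates."""
    sizes = {p: os.path.getsize(p) for p in paths}
    winner = min(sizes, key=sizes.get)
    return winner, sizes[winner]

# e.g. smallest_file(["thumb.png", "thumb.gif", "thumb.jpg"])
```

For typical photographic thumbnails expect JPEG to win; for flat graphics with few colors, PNG or GIF usually does.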
A: It's mostly the same as on the Web, really. For graphics, PNG is generally more efficient; for photos, you'll want to pick JPEG.
I'm not sure whether the iPhone supports JPEG 2000 (OS X does), so that may be a more efficient option for photos.
Also consider tools such as optipng and jpegtran to shed some additional kilobytes off your image data.
A: The recommendation for images in a UITableView is PNG, this has the best rendering performance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: how to set a menubar icon on mac osx using wx I could not find any pointers on how to create a menubar icon on OSX using wx. I originally thought that the wxTaskBarIcon class would do, but it actually creates an icon on the Dock. On Windows, wxTaskBarIcon creates a Systray icon and associated menu, and I would think that on mac osx it would create a menubar icon, I guess not.
A: You have to set wxTaskBarIconType to STATUSITEM, not DOCK. The Cocoa APIs for this are NSStatusBar and NSStatusItem; here's the code in wxWidgets that calls to them.
A: This post by Robin Dunn, the creator of wxPython, explains that wxPython doesn't support menubar icons on mac yet. They only support the Dock.
A: As of wxPython 2.9.2.0, wx.TaskBarIcon will now create a menubar icon on OSX instead, so long as you call SetIcon.
A: There is an example on wiki.wxpython.org that puts an icon in the "status menus" section (right-hand side) of the macOS menu bar (ignore the page title):
https://wiki.wxpython.org/Custom%20Mac%20OsX%20Dock%20Bar%20Icon
It works for me with macOS High Sierra (10.13.3) running python 2.7.14 (installed using miniconda) with wxpython 3.0.0.0 osx-cocoa (classic).
Similarly, it works with python 3.6.4 and wxpython 4.0.1 osx-cocoa (phoenix);
minor code changes required:
*
*you must import wx.adv
*wx.TaskBarIcon becomes wx.adv.TaskBarIcon
*wx.IconFromBitmap becomes wx.Icon
This generates a status/notification/taskbar-type icon on other platforms as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I determine that Windows Installer is performing an upgrade rather than a first time install? I have an install that upgrades a previous version of an app if it exits. I'd like to skip certain actions when the install is upgrade mode. How can I determine if the install is running in upgrade mode vs. first time install mode?
I'm using Wise Installer, but I don't think that matters. I'm assuming that Windows Installer has a property that is set when the installer is in upgrade mode. I just can't seem to find it. If the property exists, I'm assuming I could use it in a conditional statement.
A: Can you elaborate on what kind of tools you are using to create this installer?
I use Windows Installer XML(WIX). In WIX you could do something like this:
<!-- Property definitions -->
<?define SkuName = "MyCoolApp"?>
<?define ProductName="My Cool Application"?>
<?define Manufacturer="Acme Inc."?>
<?define Copyright="Copyright © Acme Inc. All rights reserved."?>
<?define ProductVersion="1.1.0.0"?>
<?define RTMProductVersion="1.0.0.0" ?>
<?define UpgradeCode="{EF9D543D-9BDA-47F9-A6B4-D1845A2EBD49}"?>
<?define ProductCode="{27EA5747-9CE3-3F83-96C3-B2F5212CD1A6}"?>
<?define Language="1033"?>
<?define CodePage="1252"?>
<?define InstallerVersion="200"?>
And define upgrade options:
<Upgrade Id="$(var.UpgradeCode)">
<UpgradeVersion Minimum="$(var.ProductVersion)"
IncludeMinimum="no"
OnlyDetect="yes"
Language="$(var.Language)"
Property="NEWPRODUCTFOUND" />
<UpgradeVersion Minimum="$(var.RTMProductVersion)"
IncludeMinimum="yes"
Maximum="$(var.ProductVersion)"
IgnoreRemoveFailure="no"
IncludeMaximum="no"
Language="$(var.Language)"
Property="OLDIEFOUND" />
</Upgrade>
Then further you could use OLDIEFOUND and NEWPRODUCTFOUND properties depending on what you want to do:
<!-- Define custom actions -->
<CustomAction Id="ActivateProduct"
Directory='MyCoolAppFolder'
ExeCommand='"[MyCoolAppFolder]activateme.exe"'
Return='asyncNoWait'
Execute='deferred'/>
<CustomAction Id="NoUpgrade4U"
Error="A newer version of MyCoolApp is already installed."/>
The above-defined actions have to be defined in InstallExecuteSequence:
<InstallExecuteSequence>
<Custom Action="NoUpgrade4U"
After="FindRelatedProducts">NEWPRODUCTFOUND</Custom>
<Custom Action="ActivateProduct"
OnExit='success'>NOT OLDIEFOUND</Custom>
</InstallExecuteSequence>
A: There's an MSI property called Installed that will be true if the product is installed per-machine or for the current user. You can use it in conditional Boolean statements.
You can also check these other MSI installation status properties, in case one of them would work better. I've never used Wise, but I assume there's a way to retrieve these properties.
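As a sketch of how such conditions look (shown in WiX syntax to match the first answer; the custom-action names here are hypothetical, and Wise exposes the same MSI properties through its own condition fields):

```xml
<InstallExecuteSequence>
  <!-- Runs only on a first-time install: Installed is not yet set -->
  <Custom Action="FirstInstallOnly" After="InstallFiles">NOT Installed</Custom>
  <!-- Runs only during an upgrade: OLDIEFOUND is the property the
       Upgrade/UpgradeVersion element (see the first answer) tells
       FindRelatedProducts to set when an older version is detected -->
  <Custom Action="UpgradeOnly" After="InstallFiles">OLDIEFOUND</Custom>
</InstallExecuteSequence>
```

The same Boolean expressions (NOT Installed, the upgrade-detection property, etc.) can be pasted into any tool's MSI condition editor.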
A: I am not sure I understood your question.
If you are writing the install script yourself, the best way on Windows is to check the registry keys such programs usually create. Unlike the install directory (and Start Menu entries, etc.), these keys are an invariant. One of these keys can even hold the version number of the software, to check in case a user tries to install an older version (or to know if some files must be removed, etc.).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can I see what is in my heap in Java? I've managed to get a memory 'leak' in a java application I'm developing. When running my JUnit test suite I randomly get out of memory exceptions (java.lang.OutOfMemoryError).
What tools can I use to examine the heap of my java application to see what's using up all my heap so that I can work out what's keeping references to objects which should be able to be garbage collected.
A: If you need something free, try VisualVM
From the project's description:
VisualVM is a visual tool integrating commandline JDK tools and lightweight profiling capabilities. Designed for both development and production time use.
A: This is a pretty old question. A lot of people might have started using IntelliJ since it was originally answered. IntelliJ has a plugin that can show memory usage called JVM Debugger Memory View.
A: VisualVM is included in the most recent releases of Java. You can use this to create a heap dump, and look at the objects in it.
Alternatively, you can also create a heapdump commandine using jmap (in your jdk/bin dir):
jmap -dump:format=b,file=heap.bin <pid>
You can even use this to get a quick histogram of all objects
jmap -histo <pid>
I can recommend Eclipse Memory Analyzer (http://eclipse.org/mat) for advanced analysis of heap dumps. It lets you find out exactly why a certain object or set of objects is alive. Here's a blog entry showing you what Memory Analyzer can do: http://dev.eclipse.org/blogs/memoryanalyzer/2008/05/27/automated-heap-dump-analysis-finding-memory-leaks-with-one-click/
A: Use the Eclipse Memory Analyzer
There's no other tool I'm aware of that comes close to its functionality, performance, and price (free and open source) when analysing heap dumps.
A: Use a profiler like JProfiler or YourKitProfiler
A: JProfiler worked very well for me....
http://www.ej-technologies.com/products/jprofiler/overview.html
A: If you're using a system which supports GTK you could try using JMP.
A: You can try the Memory Leak Detector that is part of the JRockit Mission Control tools suite. It allows you to inspect the heap while the JVM is running. You don't need to take snapshots all the time. You can just connect online to the JVM and then see how the heap changes between garbage collections. You can also inspect objects, follow references graphically and get stack traces from where your application is currently allocating objects. Here is a brief introduction.
The tool is free to use for development and you can download it here.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: What is the first thing you do when you install Visual Studio? What is the first thing you do when you install Visual Studio? I am talking about anything customization-wise (so we don't get answers based on 'I create a new project').
Do you have a favorite font?
Do you have a must have extension you couldn't possibly live without?
Do you have a keyboard shortcut you like to set?
I am interested to know your favorites.
For me, I only change the font to Calibri, I find it is easier to read, and I can fit more text on the screen.
A: Install ViEmu.
A: Install Visual Assist
A: With no doubt:
*
*Turn on line-numbering
*Change coloring-scheme
*Import settings
*Re-order the most used tabs - auto-hide and position-settings
A: Turn off "Animate environment tools" in the options
A: Install
*
*CodeRush/Refactor Pro
*VisualSVN Plugin
*Setup a macro so I can right click a website folder and "Open with VisualStudio"
A: Install the Consolas font.
A: Install the zenburn color scheme
A: I install ReSharper (sold by JetBrains) because it adds a lot of IDE behaviors/features that I became dependent on using Java IDEs.
The first thing I make others do is turn on line numbers for all file types, because it is impossible to pair with someone if you can't tell them what line of code you are looking at.
A: enable line numbering
A: *
*Install Consolas
*Install Resharper
*Install TestDriven .Net
*Set to show empty environment on startup.
*Set max recent file list to 24 (max).
*Set to a Visual Studio Dark theme
*Set project default paths.
*Set virus scanners to ignore my project directories
A: I turn off drag-and-drop text editing since it's way too easy to do it by accident.
A: I also load up my favorite macro files and assign the most used functions to keyboard shortcuts.
A: Switch to bright text on black background
A: I change the font to Consolas, and the background to a light gray(#FAFAFA).
A: Install the latest Service Pack. It's amazing the amount of bugs fixed, and the incompatibilities it solves.
A: Configure Vim as an external editor, so I can code boring stuff through macros in Vim and everything else in VS
A: Install ViEmu.
A: Change font to Consolas
Turn on line numbering.
Set to use spaces for indentation always.
Install VIEmu
I like the idea of saving your settings file in source control.
A: Alt-drag the toolbar buttons I actually use onto the menu bar, and then close all the toolbars. This is to de-clutter and get vertical space back so I can see more code at once.
A: Install Visual Assist & ViEmu.
A: Install an add on called VS.Php so I can use VS to develop and debug PHP
A: Visual Studio options:
*
*Turn on line numbering
*Turn on"convert tabs to spaces"
*Turn on "load last saved project when starting"
*Enter full screen mode (shift-alt-enter)
Add-ins:
*
*Install JetBrains Resharper
*Install SlickEdit Gadgets, and turn on line and indentation guides
*Install Consolas font
A: Search for feacp.dll, rename it to something else, then install Visual Assist X.
A: Add /nosplash to the shortcut to make it boot faster.
A: *
*Install SP1
*Install Team Explorer
*Turn on line numbers
*Move Solution Explorer to the right side of screen
*Open Source Control explorer and add to tabs at left of screen
*Disable start page
A: Change the font to FixedSys and the color scheme to fit my usual style.
A: I open a project and some source files. That way I can see most of the default settings I have to adjust before I can start working.
A: *
*Install Consolas
*Turn on LineNumbers
*Install Paste as HTML Plugin for exporting code snippets.
*Go online and mercilessly mock Eclipse users ;-)
A: *
*Hide all those fancy buttons at the top.
*Enable line numbers in the source-editor
*Move the solution explorer to the right of the screen.
A: Install Resharper :)
A: Import my settings file (which I keep in source control and share between my computers).
A: *
*Enable line numbers in the source editor
*Install ReSharper
*Install GhostDoc
*Change coloring scheme to bright-on-dark and change font to Consolas
*Hide the navigation drop-downs since navigation with R# is much faster
A: Add on to Seb Nilsson:
*
*Change font to ProFont, sized at 10
*Turn off animation plus anything fancy which does not
improve productivity
*Hide toolbar buttons that can be accessed using keyboard shortcuts,
so as to keep all the toolbars in just one row
*Install ReSharper -> Must have
A: Add/install my favourite Code snippets
A: *
*set the font to Lucida Console
*switch to Multiple documents with a max value of 24 for the MRU areas of the File menu
*un-dock the Properties window and slide it over to my second monitor
*turn off the Start page
A: I install the port of "vibrant ink" theme. Here
A: *
*Change the keyboard mapping to VC6 (I have to swap between the environments)
*Change the tab settings for all languages to 3/insert spaces (tabs are demon spawn)
*Change to a monospace font that distinguishes between 0 and O (e.g. Consolas)
*Move the error list, toolbox properties and output windows to another monitor*
*I wish VS2008 were more multi-monitor aware. I'd like it to have more than one top level frame, so I could arrange panes within each frame, then have one frame maximised on monitor 1 and the other maximised on monitor 2. The current implementation is errr... sub-optimal.
A: Set the key bindings in Visual Studio to use Emacs.
*
*Tools -> Options -> Environment -> Keyboard
*Select 'Emacs' option under "Apply the following additional keyboard mapping scheme:"
A: Re-order toolbars and windows, turn on line numbering, install Mole add-on, add custom snippets.
A: 1) Set the font to Tahoma (mono spaced fonts - pah!).
2) CodeRush / Resharper
... and for web projects, VS2008 'Run As Administrator' as I prefer web apps to run under IIS
A: *
*Enable the 80 column marker (requires registry hack in older versions of VS)
*Set the font to Lucida Console
*Use tabs for indentation and set tab width to 4
*Enable line numbers
A: Install Team Explorer.
Followed by resharper
and turn on line numbers
turn off animations
A: Go and take a smoke while it prepares Visual Studio for the first time.
And thanks to that I am having real problems quitting.
A: *
*Install Visual Assist
*Bind CTRL + ; to Edit.LineEnd
A: Install RockScroll ... because it rocks ;-)
It provides a graphical summary of the code file you have opened:
(Image from Scott Hanselman's blog).
A: Service Pack and VisualSVN and SlickEdit tools.
A: Type Ctrl+D and ">alias ff Edit.FindinFiles" so I can do Ctrl+D ">ff $term" next time I want to search all my files for $term.
A: Set the keyboard mappings to VC6 mode. I never liked the new keyboard shortcuts :)
A: *
*Move all those pallette windows to other screens.
*Change startup action to show empty environment.
*Enable line numbers for everything!
A: Try compiling my projects to see how well they do on vanilla VS.
It gives me some assurance in case I ever decide to share my code.
A: Ensure that I have the "Visual C#" settings presets chosen, Turn on Line Numbering, Turn on Smart Tabs and set the tab size to 4.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Sql design problem - items in multiple sections I've got a sections table, and an items table.
The problem is each item may be in one or more sections, so a simple 'section_id' for each item won't work, and SQL doesn't have a way to store arrays where I could say "WHERE 5 IN section_ids"...
I've considered storing the list of ids as a comma separated string, the problem is I see no way to then check if an item is in a given section from the sql query. The only option I see here is to select the entire table, and parse the strings in php. Needless to say that with 1000's of items this isn't a good idea.
Is there a better way to 'link' an item with multiple sections, and be able to easily select all items for a given section id?
A: You need an intermediate lookup table:
CREATE TABLE item_in_section (item_id int, section_id int)
(I'm guessing about your key types, use whatever ones are appropriate).
To find items in a section:
SELECT item.* FROM item JOIN item_in_section ON item_in_section.item_id = item.item_id WHERE item_in_section.section_id = X
To find sections an item belongs to
SELECT section.* FROM section JOIN item_in_section ON item_in_section.section_id = section.section_id WHERE item_in_section.item_id = Y
A: In order to represent a many-to-many relationship, you need a support table with SectionId and ItemId. Both should be foreign keys to their respective tables and the primary key of this table should be both columns.
From Wikipedia:
Because most DBMSs only support one-to-many relationships, it is necessary to implement such relationships physically via a third junction table, say, AB with two one-to-many relationships A -> AB and B -> AB. In this case the logical primary key for AB is formed from the two foreign keys (i.e. copies of the primary keys of A and B).
A: You need a third table itemsPerSection with a primary key composed of both itemid and sectionid, this way you can have a N to N relationship and it's very easy to search on.
So:
Items - ItemsPerSection - Section
itemid <-> itemid
sectionid <-> sectionid
A: You need a third table, called a junction table, that provides the N to N relationship with 2 foreign keys pointing at the parent tables.
A: The way I know (but I am not a seasoned database designer!), and that I saw in several databases, is to have a third table: it has two columns, one with IDs of the sections table, one with the IDs of the items table.
It creates a relation between these entries without much cost, allowing fast search if you make a compound index out of both IDs.
A: You're talking about a many-to-many relationship. In normalized form that's best handled with a third table:
items
sections
itemsections
Each row in itemsections has an item id and a section id. For normal one-to-many relationships, that's not needed but it's standard practice for what you're looking at.
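A runnable sketch of this pattern, using Python's built-in SQLite driver purely to demonstrate (table names follow this answer; the SQL translates directly to MySQL):

```python
import sqlite3

# In-memory demo of the junction-table pattern described above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sections (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE itemsections (
        item_id INTEGER REFERENCES items(id),
        section_id INTEGER REFERENCES sections(id),
        PRIMARY KEY (item_id, section_id)
    );
    INSERT INTO items VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO sections VALUES (5, 'hardware'), (6, 'clearance');
    -- 'widget' lives in two sections, 'gadget' in one
    INSERT INTO itemsections VALUES (1, 5), (1, 6), (2, 5);
""")

def items_in_section(section_id):
    """All item names in the given section, via the junction table."""
    rows = con.execute(
        "SELECT i.name FROM items i "
        "JOIN itemsections js ON js.item_id = i.id "
        "WHERE js.section_id = ? ORDER BY i.id", (section_id,))
    return [name for (name,) in rows]
```

Here `items_in_section(5)` returns both items, and the compound primary key on the junction table prevents the same item being listed in a section twice.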
A: You need an intersection table to sit between the two, ie a table that describes which items are in which sections.
Something like..
CREATE TABLE item_sections (
ID datatype,
ITEM_ID datatype,
SECTION_ID datatype);
You'll then need to join the tables to get the data out...
SELECT items.*
FROM items, item_sections
WHERE items.id = item_sections.item_id
and item_sections.section_id = the-id-of the-section-you-want
A: You need to store the section relationship in a second table. Here's a really simple example:
CREATE TABLE foos (
id INTEGER,
name VARCHAR
)
CREATE TABLE foo_sections (
foo_id INTEGER,
section_name VARCHAR
)
-- Add some 'foos'
INSERT INTO foos (id, name) VALUES (1, 'Something');
INSERT INTO foos (id, name) VALUES (2, 'Something Else');
-- Add some sections for each 'foo'
INSERT INTO foo_sections (foo_id, section_name) VALUES (1, 'Section One');
INSERT INTO foo_sections (foo_id, section_name) VALUES (1, 'Section Two');
INSERT INTO foo_sections (foo_id, section_name) VALUES (2, 'Section One');
-- To get all the section names for a specific 'foo' record:
SELECT section_name FROM foo_sections WHERE foo_id = 1
> Section One
> Section Two
Of course in the second table you could store a reference to a third 'sections' table, but I excluded that for clarity.
Good luck :)
A: You could store several IDs in a field, separated by a comma and then use the FIND_IN_SET command:
SELECT * FROM items WHERE FIND_IN_SET(5, section_id);
http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_find-in-set
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I setup LookAndFeel Files in Java? I need to setup LookAndFeel Files in JDK 1.6.
I have two files:
*
*napkinlaf-swingset2.jar
*napkinlaf.jar
How can I set this up and use it?
I would like a GTK look and feel OR Qt look and feel, Are they available?
A: This page explains how the work with Look&Feels:
http://java.sun.com/docs/books/tutorial/uiswing/lookandfeel/plaf.html
You can do it commandline:
java -Dswing.defaultlaf=com.sun.java.swing.plaf.gtk.GTKLookAndFeel MyApp
Or in code:
UIManager.setLookAndFeel("javax.swing.plaf.metal.MetalLookAndFeel");
You need to make sure the jars containing the look&feel are on the application classpath. How this works depends on the application. A typical way would be to put it in a lib folder.
Look&Feels that are available by default in the JDK are:
com.sun.java.swing.plaf.gtk.GTKLookAndFeel
com.sun.java.swing.plaf.motif.MotifLookAndFeel
com.sun.java.swing.plaf.windows.WindowsLookAndFeel
Quoting the link above:
The GTK+ L&F will only run on UNIX or
Linux systems with GTK+ 2.2 or later
installed, while the Windows L&F runs
only on Windows systems. Like the Java
(Metal) L&F, the Motif L&F will run on
any platform.
A: The class name for Napkin is net.sourceforge.napkinlaf.NapkinLookAndFeel. So to set it as default on the command line, use:
java -Dswing.defaultlaf=net.sourceforge.napkinlaf.NapkinLookAndFeel MyApp
To install it, add napkinlaf.jar to the lib/ext directory and add the lines:
swing.installedlafs=napkin
swing.installedlaf.napkin.name=Napkin
swing.installedlaf.napkin.class=net.sourceforge.napkinlaf.NapkinLookAndFeel
to lib/swing.properties within your Java installation (you'll probably have to create the file).
See the Napkin wiki page
A: The Qt look and feel is available from Trolltech as the product Jambi, which IS Qt for Java.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/145972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Making Eclipse behave like Visual Studio I'm doing some Android development, and I much prefer Visual Studio, but I'll have to use Eclipse for this.
Has anyone made a tool which can make Eclipse look and behave more like Visual Studio? I mainly can't stand its Clippy-esque suggestions on how I should program (Yes, I know I have not yet used that private field! Thanks, Eclipse!), or its incredibly lousy intellisense.
For example, in Eclipse, if I don't type "this." first, its intellisense won't realise I want to look for locally scoped members. Also, the TAB-to-complete VS convention is drilled into my head, while Eclipse uses ENTER to complete. I could switch everything by hand, but that would take hours, and I was hoping someone had some sort of theme or something that has already done it.
A: If you start typing the name of any class/variable visible in the current scope and hit Ctrl+Space, it'll bring down the autocompletion.
By default, tab is used to move around autocompleted function call arguments.
A: I'm going to play devil's advocate here and say that forcing you to use this.myString is actually much safer than just myString. myString could be defined locally (in the method) or in the class as a private member. I sometimes think Visual Studio is a bit cavalier about this. In the sample you mention (I saw the video but it was illegible), where is myString scoped?
A: Have you tried using the Visual Studio keybindings available in Eclipse Ganymede (3.4)?
(You may want to know that "IntelliSense" is a Visual Studio term, and probably unknown to anyone without Visual Studio experience. "Autocompletion" is probably a more widely used term.)
A: There are also other choices for Java IDEs. You've obviously found Eclipse, but you also may want to check out IntelliJ and NetBeans. IntelliJ is not free, but has a 30 day evaluation period and a Visual Studio key map :)
Shop around, find one that you like and start to use it heavily. They are all very good IDEs, and I'm sure once you use one for a while you'll get comfortable with it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
} |
Q: Why is half of my compiled (small) objective-C file a large block of zeroes? I opened up my compiled Hello World Obj-C application in a text editor and, to my surprise, I found about 8 kilobytes of 00 00 00 00 00 00 00 00 ....
Why are these here? Is there a way to clear out these zeroes (which I doubt have too much function)?
Obviously it's not so important in this file, seeing as it's only 16kB to begin with, but I'd like to know anyway.
A: It's most likely padding between code, data, relocation or other sections of the executable format you use.
Linkers like to pad such sections on a 4k or 8k boundary. This improves loading time for the price of a bit of memory-waste.
For a simple hello world it's significant, but for a large application the extra memory used for the padding is negligible.
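One way to see this padding for yourself is to scan a binary for long runs of zero bytes. A rough sketch (the simulated "binary" below is made up; point the function at real file contents to inspect an actual executable):

```python
def zero_runs(data: bytes, min_len: int = 64):
    """Return (offset, length) pairs for every run of NUL bytes
    at least min_len bytes long."""
    runs = []
    start = None
    for i, b in enumerate(data):
        if b == 0:
            if start is None:
                start = i  # a run of zeroes begins here
        elif start is not None:
            if i - start >= min_len:
                runs.append((start, i - start))
            start = None
    # Handle a run that extends to the end of the file.
    if start is not None and len(data) - start >= min_len:
        runs.append((start, len(data) - start))
    return runs

# Simulate a small binary: code, 8 KB of section padding, more code.
blob = b"\x90" * 100 + b"\x00" * 8192 + b"\xc3" * 50
print(zero_runs(blob))  # [(100, 8192)]
```

If the offsets you find fall on 4 KB or 8 KB boundaries (or pad sections up to them), that is consistent with the linker alignment described above.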
A: Does Objective-C support incremental linking? That would explain why there was a lot of padding space.
A: Some compilers or linkers want some round number for the file size. You can see that when you add some code, yet the file size does not increase. I guess that's the zeros you are seeing.
A: Maybe it's a static variable? I know in many C-like languages, the initial value of a variable that is declared static is embedded in the code emitted by the compiler. At runtime this initial value is mapped to the memory of the process. Maybe you (or some code you're including or linking against) defines an 8 KB zero-initialized array.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Near Sorting Algorithms - When to use? From time to time I browse the web and look for interesting algorithms and datastructures to put into my bag of tricks. A year ago I came across the Soft Heap data-structure and learned about near sorting.
The idea behind this is that it's possible to break the O(n log n) barrier of comparison-based sorts if you can live with the fact that the sort algorithm cheats a bit. You get an almost-sorted list, but you have to live with some errors as well.
I played around with the algorithms in a test environment but never found a use for them.
So the question: Has anyone ever used near sorting in practice? If so in which kind of applications? Can you think up a use-case where near sorting is the right thing to do?
A: There are a lot of "greedy" heuristics where you periodically select the minimum of a set. The greedy heuristic is not perfect, so even if you pick the minimum you aren't guaranteed to get to the best final answer. In fact, in the GRASP meta-heuristic, you intentionally introduce random error so that you get multiple final solutions and select the best one. In that case, introducing some error in your sort routine in exchange for speed would be a good trade-off.
A: Just speculating here, but one thing I imagine is database query optimization.
A database query in a declarative language such as SQL has to be translated into a step-by-step program called an "execution plan". One SQL query can typically be translated to a number of such execution plans, which all give the same result but can have very varying performance. The query optimizer has to find the fastest one, or at least one that is reasonably fast.
Cost-based query optimizers have a "cost function", which they use to estimate the execution time of a given plan. Exhaustive optimizers go through all possible plans (for some value of "all possible") and select the fastest one. For complicated queries the number of possible plans may be prohibitively large, leading to overly long optimization times (before you even begin the search in the database!) so there are also non-exhaustive optimizers. They only look at some of the plans, perhaps with a random element in choosing which ones. This works, since there is usually a large number of "good" plans, and it might not be that important to find the absolutely best one -- it is probably better to choose a 5-second plan instead of the optimal 2-second plan, if it requires several minutes of optimization to find the 2-second plan.
Some optimization algorithms use a sorted queue of "promising" (partial) plans. If it doesn't really matter if you find the absolutely best plan, maybe you could use an almost-sorted queue?
Another idea (and I'm still just speculating) is a scheduler for processes or threads in a time-sharing system, where it might not be important if a certain process or thread gets its timeslot a few milliseconds later than if strictly sorted by priority.
A: A common application for near-sorting is when a human is doing the pairwise comparison and you don't want to have to ask them as many questions.
Say you have a lot of items you'd like a human to sort via pairwise comparison. You can greatly reduce the number of comparisons you need them to do if you're willing to accept that ordering won't be exact. You might, for example, not care if adjacent items have been swapped as long as the preferred items are at the top.
A: This is a total flying guess, but given the inherent subjectivity of "relevance" measures when sorting search results, I'd venture that it doesn't really matter whether or not they're perfectly sorted. The same could be said for recommendations. If you can somehow arrange that every other part of your algorithm for those things is O(n) then you might look to avoid a sort.
Be aware also that in the worst case your "nearly sorted" data does not meet one possible intuitive idea of "nearly sorted", which is that it has only a small number of inversions. The reason for this is just that if your data has only O(n) inversions, then you can finish sorting it in O(n) time using insertion sort or cocktail sort (i.e. two-way bubble sort). It follows that you cannot possibly have reached this point from completely unsorted, in O(n) time (using comparisons). So you're looking for applications where a majority subset of the data is sorted and the remainder is scattered, not for applications requiring that every element is close to its correct position.
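The point about inversions can be made concrete: insertion sort performs one element shift per inversion, so its total work is O(n + k) for a list with k inversions, which is why input with only O(n) inversions is already cheap to finish. A small sketch (counting shifts, not a benchmark):

```python
def insertion_sort(items):
    """Return a sorted copy of items plus the number of element shifts.

    The shift count equals the number of inversions in the input, so the
    total work is O(n + inversions)."""
    a = list(items)
    shifts = 0
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        # Shift larger elements right; each shift removes one inversion.
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
            shifts += 1
        a[j + 1] = x
    return a, shifts

nearly = [1, 2, 4, 3, 5, 7, 6, 8]   # two adjacent swaps -> two inversions
sorted_out, cost = insertion_sort(nearly)
print(sorted_out, cost)  # [1, 2, 3, 4, 5, 6, 7, 8] 2
```

A fully reversed list of the same length would instead cost n(n-1)/2 shifts, which is the quadratic worst case the answer alludes to.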
A: Anywhere
*
*you are supposed to react fast,
*you are not promising exact behavior to the client,
*but internally you have some rules
you can use it. How about "not so strict" rule-based priority queue? Where would that be useful? Maybe thread/process/resource scheduling. In thread/process scheduling you are really not promising any one thread is going to go first, second, or last, but generally you want to give everyone some chance. You might want to enforce loose rule so it's preemptive, prioritized, blabla..
A resource schedule example would be responding to pizza delivery or shipping boxes of books to people etc. You can't use it where deterministic result is expected, but there are lots of example in real life where things are not so deterministic/predictable.
A: O(n log n) is already pretty fast. I don't think anyone would ever start out using a near-sort algorithm. You would start out with code that just does a complete sort (since your programming language of choice likely provides a sort function and not a nearsort function), and when you found empirically that the sort was taking too long, you would start to question whether your data really needs to be fully-sorted, and consider using a near-sort.
Basically, you would never even consider using a near sort unless you first discovered sorting to be a severe bottleneck in your program.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Emacs: How do you store the last parameter supplied by the user as the default? I'm writing an interactive function that I'd like to have remember the last argument the user supplied and use it as the default.
(defun run-rake (task)
  (interactive "sTask: ")
  (shell-command (format "rake %s" task)))
The first time the function is invoked I want it to remember the argument the user supplied so that the next time they invoke the function they can just press enter and it will use the value they supplied the previous time.
I can't seem to find this in the documentation - how do you do this in elisp?
A: read-from-minibuffer
is what you want to use. It has a spot for a history variable.
Here is some sample code:
(defvar run-rake-history nil "History for run-rake")
(defun run-rake (cmd)
(interactive (list (read-from-minibuffer "Task: " (car run-rake-history) nil nil 'run-rake-history)))
(shell-command (format "rake %s " cmd)))
Obviously customize to your needs. The 'run-rake-history is simply a variable that is used to store the history for this invocation of 'read-from-minibuffer. Another option would be to use 'completing-read - but that assumes you've got a list of choices you want to restrict the user to using (which usually isn't the case for shell-like commands).
A: You can see how the compile command does this. Bring up the help text for the compile command with C-h f compile, move the cursor over the name of the file that contains the function, then hit RETURN. This will bring up the source file for compile.
Basically, there's a dynamic/global variable compile-command that holds the last compile command. Emacs is a single-user, single-threaded system, so there's really no need for much more. Also keep in mind that Elisp is a very old school Lisp, and variables have dynamic (call stack), not lexical, scope. In this kind of system it is natural to:
(let ((compile-command "gcc -o foo foo.c frobnicate.c"))
...
(compile)
...)
Speaking of the compile command, have you tried using it instead of your own run-rake function?
A: I figured out how to do this manually using a defvar (global), but this feels like the kind of thing that should already be provided by the core library (kind of like scheme's make-parameter). This just seems like more code, and more manual than it should be:
(defvar *editconf-ruby-run-rake-last-rake-task* nil)
(defun editconf-ruby-run-rake-last-rake-task (&optional new-val)
(when new-val
(setf *editconf-ruby-run-rake-last-rake-task* new-val))
*editconf-ruby-run-rake-last-rake-task*)
(defun editconf-ruby-run-rake (task-name)
"Execute rake `task-name'. See
`krb-ruby-get-rakefile-path-for-current-buffer' for how the
Rakefile is located."
(interactive
(let* ((rakefile (krb-ruby-get-rakefile-path-for-current-buffer))
(rake-tasks (krb-ruby-get-rake-tasks rakefile))
(default-task (or (editconf-ruby-run-rake-last-rake-task)
(editconf-ruby-run-rake-last-rake-task (car rake-tasks)))))
(list
(read-string (format "Task [%s|%s]: "
rake-tasks
default-task)
nil nil default-task))))
(editconf-ruby-run-rake-last-rake-task task-name)
(let* ((rakefile (krb-ruby-get-rakefile-path-for-current-buffer))
       (cmd (format "cd %s; rake %s"
                    (krb-lisp-strip-path-suffix rakefile 1)
                    task-name)))
(message "editconf-ruby-run-rake: cmd='%s'" cmd)
(shell-command cmd)))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Storing a complete graph in a RDBMS I have several types of entities, each with their own fields, which are stored in separate tables.
Each record in such a table may be connected to zero or more records in a different table, i.e., linked to records from different entity types.
If I go with lookup tables, I get (m(m-1))/2=O(m^2) separate lookup tables that need to be initialized.
While still feasible for 6 or 7 different entity types, would it still be relevant for 50+ such types?
In particular, a given record would need to have links to most other entity types so theoretically speaking I would be dealing with a nearly-complete, non-directed, n-sided graph.
Can anyone shed some light on how to store this structure in a relational DBMS?
(I'm using Postgresql if it matters, but any solutions for other DBMS's would be equally helpful).
Thank you for your time!
Yuval
A: This is Object-Relational Mapping, a classically hard problem. You really need a ORM tool to do this properly, or it'll drive you nuts.
The connection problem you refer to is one of the pitfalls, and it needs very careful optimisation and query tuning, else it'll kill performance (e.g. the N+1 SELECT problem).
I can't be any more specific without knowing what your application platform is - the actual DBMS used isn't really relevant to the problem.
A: You could use a common base type for all entity types, and handle relationships through that base type - this is something virtually any ORM tool can do using a discriminator column and foreign key relationships (I'm not familiar with CLSA, though).
This approach leaves you with exactly one relationship table.
Edit:
This is how you set this up:
CREATE TABLE base (
id int(10) unsigned NOT NULL auto_increment,
type enum('type1','type2') NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE type1 (
id int(10) unsigned NOT NULL,
PRIMARY KEY (id),
CONSTRAINT FK_type1_1 FOREIGN KEY (id) REFERENCES base (id)
);
CREATE TABLE type2 (
id int(10) unsigned NOT NULL,
PRIMARY KEY (id),
CONSTRAINT FK_type2_1 FOREIGN KEY (id) REFERENCES base (id)
);
CREATE TABLE x_relations (
from_id int(10) unsigned NOT NULL,
to_id int(10) unsigned NOT NULL,
PRIMARY KEY (from_id,to_id),
KEY FK_x_relations_2 (to_id),
CONSTRAINT FK_x_relations_1 FOREIGN KEY (from_id) REFERENCES base (id),
CONSTRAINT FK_x_relations_2 FOREIGN KEY (to_id) REFERENCES base (id)
ON DELETE CASCADE ON UPDATE CASCADE
);
Note the discriminator column (type) which will help your ORM solution find the correct subtype for a row (type1 or type2). The ORM documentation should have a section on how to map polymorphism with a base table.
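A quick sanity check of this layout, translated to SQLite syntax (the enum becomes a CHECK constraint; table and column names follow the schema above, sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE base (
        id   INTEGER PRIMARY KEY,
        type TEXT NOT NULL CHECK (type IN ('type1', 'type2'))
    );
    CREATE TABLE type1 (id INTEGER PRIMARY KEY REFERENCES base(id));
    CREATE TABLE type2 (id INTEGER PRIMARY KEY REFERENCES base(id));
    -- The single relationship table: any entity can link to any other.
    CREATE TABLE x_relations (
        from_id INTEGER NOT NULL REFERENCES base(id),
        to_id   INTEGER NOT NULL REFERENCES base(id),
        PRIMARY KEY (from_id, to_id)
    );
""")
# One entity of each subtype, linked through the relation table.
conn.execute("INSERT INTO base VALUES (1, 'type1')")
conn.execute("INSERT INTO base VALUES (2, 'type2')")
conn.execute("INSERT INTO type1 VALUES (1)")
conn.execute("INSERT INTO type2 VALUES (2)")
conn.execute("INSERT INTO x_relations VALUES (1, 2)")

# The discriminator tells us which subtype table holds each neighbour.
rows = conn.execute("""
    SELECT base.id, base.type
    FROM x_relations
    JOIN base ON base.id = x_relations.to_id
    WHERE x_relations.from_id = 1
""").fetchall()
print(rows)  # [(2, 'type2')]
```

Instead of O(m^2) lookup tables, every one of the m entity types shares the one x_relations table, and the type column tells the application which subtype table to join for the full record.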
A: The other option would be to use an object-oriented database such as db4o or Caché. It may be worth looking into this if performance isn't a huge concern and you are determined to store your entire object graph.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Weblogic add-on to serialize web service calls Is there a prebuilt tool that would integrate with BEA/Oracle Weblogic 10.0 and trace on a database table each call to a web service exposed by the server?
UPDATE: the goal is not to debug the web services (they are working well). The objective is to trace each call on a table, using an existing add-on.
A: Yes, a remote debugger, like pretty much any modern IDE. Attach it to the running application server, breakpoint at the webservice entry point, and follow it through.
A: Not sure if there is an add-on, but you can write a handler (extends from javax.xml.rpc.handler.GenericHandler) where you can write logic to inspect (and manipulate if you so choose) the request, response and the fault (if it occurs). You can detect things like the remote IP, remote user, etc., and then do whatever you want with them (i.e. log them to disk, console, DB, etc.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is it possible to programmatically change a user's screen saver and/or desktop background? I have been asked to standardize the screen saver and desktop background used by everyone in my company and, aside from going around to each PC individually, I'm looking for a programmatic way to accomplish this. I am not a systems admin, so have never crossed this bridge before. It is also worth noting that most PCs are running Windows XP, however some are Windows Server 2003 and a few are Vista.
A: Why not just use Group Policies?
A: Changing screensaver in C#
Changing desktop in VB.net
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How should I organize the contents of my CSS file(s)? This question is about organizing the actual CSS directives themselves within a .css file. When developing a new page or set of pages, I usually just add directives by hand to the .css file, trying to refactor when I can. After some time, I have hundreds (or thousands) of lines and it can get difficult to find what I need when tweaking the layout.
Does anyone have advice for how to organize the directives?
*
*Should I try to organize top-down, mimicking the DOM?
*Should I organize functionally, putting directives for elements that support the same parts of the UI together?
*Should I just sort everything alphabetically by selector?
*Some combination of these approaches?
Also, is there a limit to how much CSS I should keep in one file before it might be a good idea to break it off into separate files? Say, 1000 lines? Or is it always a good idea to keep the whole thing in one place?
Related Question: What's the best way to organize CSS rules?
A: However you find it easiest to read!
Seriously, you'll get a billion and five suggestions but you're only going to like a couple of methods.
Some things I shall say though:
*
*Breaking a CSS file into chunks does help you organise it in your head, but it means more requests from browsers, which ultimately leads to a slower running server (more requests) and it takes browsers longer to display pages. Keep that in mind.
*Breaking up a file just because it's an arbitrary number of lines is silly (with the exception that you have an awful editor - in which case, get a new one)
Personally I code my CSS like this:
* { /* css */ }
body { /* css */ }
#wrapper { /* css */ }
#innerwrapper { /* css */ }
#content { /* css */ }
#content div { /* css */ }
#content span { /* css */ }
#content etc { /* css */ }
#header { /* css */ }
#header etc { /* css */ }
#footer { /* css */ }
#footer etc { /* css */ }
.class1 { /* css */ }
.class2 { /* css */ }
.class3 { /* css */ }
.classn { /* css */ }
Keeping rules on one line allows me to skim down a file very fast and see what rules there are. When they're expanded, I find it too much like hard work trying to find out what rules are being applied.
A: Have a look at these three slideshare presentations to start:
*
*Beautiful Maintainable CSS
*Maintainable CSS
*Efficient, maintainable, modular CSS
Firstly, and most importantly, document your CSS. Whatever method you use to organize your CSS, be consistent and document it. Describe at the top of each file what is in that file, perhaps providing a table of contents, perhaps referencing easy to search for unique tags so you jump to those sections easily in your editor.
If you want to split up your CSS into multiple files, by all means do so. Oli already mentioned that the extra HTTP requests can be expensive, but you can have the best of both worlds. Use a build script of some sort to publish your well-documented, modular CSS to a compressed, single CSS file. The YUI Compressor can help with the compression.
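A build step like that can be as simple as inlining each @import in order. A minimal sketch (the file names follow the examples in this thread; this ignores nested imports and media queries, and a real pipeline would add a minifier such as the YUI Compressor):

```python
import re
import tempfile
from pathlib import Path

# Matches @import url("file.css"); with optional single/double quotes.
IMPORT_RE = re.compile(r'@import\s+url\(["\']?([^"\')]+)["\']?\)\s*;')

def bundle(entry: Path) -> str:
    """Replace each top-level @import url(...) with the imported file's text."""
    def inline(match: re.Match) -> str:
        return (entry.parent / match.group(1)).read_text()
    return IMPORT_RE.sub(inline, entry.read_text())

# Demo with throwaway files standing in for the modular stylesheets.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "reset.css").write_text("* { margin: 0; }\n")
    (root / "design.css").write_text("body { color: #333; }\n")
    (root / "style.css").write_text(
        '@import url("reset.css");\n@import url("design.css");\n')
    combined = bundle(root / "style.css")
    print(combined)
```

Publishing the combined file keeps the single-HTTP-request benefit while the source stays modular and well documented.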
In contrast with what others have said so far, I prefer to write each property on a separate line, and use indentation to group related rules. E.g. following Oli's example:
#content {
/* css */
}
#content div {
/* css */
}
#content span {
/* css */
}
#content etc {
/* css */
}
#header {
/* css */
}
#header etc {
/* css */
}
That makes it easy to follow the file structure, especially with enough whitespace and clearly marked comments between groups, (though not as easy to skim through quickly) and easy to edit (since you don't have to wade through single long lines of CSS for each rule).
Understand and use the cascade and specificity (so sorting your selectors alphabetically is right out).
Whether I split up my CSS into multiple files, and in what files depends on the size and complexity of the site and the CSS. I always at least have a reset.css. That tends to be accompanied by layout.css for general page layout, nav.css if the site navigation menus get a little complicated and forms.css if I've got plenty of CSS to style my forms. Other than that I'm still figuring it out myself too. I might have colors.css and type.css/fonts.css to split off the colors/graphics and typography, base.css to provide a complete base style for all HTML tags...
A: I've tried a bunch of different strategies, and I always come back to this style:
.class {border: 1px solid #000; padding: 0; margin: 0;}
This is the friendliest format when it comes to a large number of declarations.
Mr. Snook wrote about this almost four years ago :).
A: I go with this order:
*
*General style rules, usually applied to the bare elements (a, ul, ol, etc.) but they could be general classes as well (.button, .error)
*Page layout rules applied to most/all pages
*Individual page layout rules
For any of the style rules that apply to a single page, or a small grouping pages, I will set the body to an id and a class, making it easy to target particular pages. The id is the base name of the file, and the class is the directory name where it is in.
A: Factor out common styles. Not styles that just happen to be the same, styles that are intended to be the same - where changing the style for one selector will likely mean you'll want to change the other as well. I put an example of this style in another post:
Create a variable in CSS file for use within that CSS file.
Apart from that, group related rules together. And split your rules into multiple files... unless every page actually needs every rule.
A: CSS files are cached on the client. So it's good practice to keep all of your styles in one file. But when developing, I find it useful to structure my CSS according to domains.
For instance: reset.css, design.css, text.css and so forth. When I release the final product, I mash all the styles into one file.
Another useful tip is to focus readability on the rules, not the styles.
While:
ul li
{
margin-left: 10px;
padding: 0;
}
Looks good, it's not easy finding the rules when you've got, say, 100 lines of code.
Instead I use this format:
rule { property: value; property: value; }
rule { property: value; property: value; }
A: I tend to orgainize my css like this:
*
*reset.css
*base.css: I set the layout for the main sections of the page
*
*general styles
*Header
*Nav
*content
*footer
*additional-[page name].css: classes that are used only in one page
A: There are a number of recognised methodologies for formatting your CSS. It's ultimately up to you what you feel most comfortable writing, but these will help you manage your CSS for larger, more complicated projects. Not that it matters, but I tend to use a combination of BEM and SMACSS.
BEM (Block, Element, Modifier)
BEM is a highly useful, powerful and simple naming convention to make your front-end code easier to read and understand, easier to work with, easier to scale, more robust and explicit and a lot more strict.
Block
Standalone entity that is meaningful on its own such as:
header, container, menu, checkbox, input
Element
Parts of a block and have no standalone meaning. They are semantically tied to its block:
menu item, list item, checkbox caption, header title
Modifier
Flags on blocks or elements. Use them to change appearance or behavior:
disabled, highlighted, checked, fixed, size big, color yellow
OOCSS
The purpose of OOCSS is to encourage code reuse and, ultimately, faster and more efficient stylesheets that are easier to add to and maintain.
OOCSS is based on two main principles:
*
*Separation of structure from skin
This means to define repeating visual features (like background and border styles) as separate “skins” that you can mix-and-match with your various objects to achieve a large amount of visual variety without much code. See the module object and its skins.
*Separation of containers and content
Essentially, this means "rarely use location-dependent styles". An object should look the same no matter where you put it. So instead of styling a specific <h2> with .myObject h2 {...}, create and apply a class that describes the <h2> in question, like <h2 class="category">.
This gives you the assurance that: (1) all unclassed <h2>s will look
the same; (2) all elements with the category class (called a mixin)
will look the same; and (3) you won't need to create an override style
for the case when you actually do want .myObject h2 to look like the
normal <h2>.
SMACSS
SMACSS is a way to examine your design process and as a way to fit those rigid frameworks into a flexible thought process. It is an attempt to document a consistent approach to site development when using CSS.
At the very core of SMACSS is categorization. By categorizing CSS
rules, we begin to see patterns and can define better practices around
each of these patterns.
There are five types of categories:
/* Base */
/* Layout */
/* Modules */
/* State */
/* Theme */
Base
Contains reset and default element styles. It can also have base styles for controls such as buttons, grids etc which can be overwritten later in the document under specific circumstances.
Layout
Would contain all the navigation, breadcrumbs, sitemaps etc etc.
Modules
Contain area specific styles such as contact form styles, homepage tiles, product listing etc etc etc.
State
Contains state classes such as isSelected, isActive, hasError, wasSuccessful etc etc.
Theme
Contains any styles that are related to theming.
There are too many to detail here but have a look at these others as well:
*
*SuitCSS
*AtomicCSS (not Atomic Design)
*oCSS (organic CSS)
A: As the actual ordering is a vital part of how your CSS is applied, it seems a bit foolish to go ahead with the "alphabetical" suggestion.
In general you want to group items together by the area of the page they affect. E.g. main styles that affect everything go first, then header and footer styles, then navigation styles, then main content styles, then secondary content styles.
I would avoid breaking into multiple files at this point, as it can be more difficult to maintain. (It's very difficult to keep the cascade order in your head when you have six CSS files open). However, I would definitely start moving styles to different files if they only apply to a subset of pages, e.g. form styles only get linked to a page when the page actually contains a form.
A: Here is what I do. I have a CSS index page with no directives on it and which calls the different files. Here is a short sample:
@import url("stylesheet-name/complete-reset.css");
@import url("stylesheet-name/colors.css");
@import url("stylesheet-name/structure.css");
@import url("stylesheet-name/html-tags.css");
@import url("stylesheet-name/menu-items.css");
@import url("stylesheet-name/portfolio.css");
@import url("stylesheet-name/error-messages.css");
It starts with a complete reset. The next file defines the color palette for easy reference. Then I style the main <div/>s that determine the layout, header, footer, number of columns, where they fit, etc... The html-tags file defines <body/>, <h1/>, <p/>, etc... Next come the specific sections of the site.
It's very scalable and very clear. It makes it much easier to find the code to change and to add new sections.
A: I used to worry about this incessantly, but Firebug came to the rescue.
These days, it's much easier to look at how your styles are interrelating through Firebug and figure out from there what needs to be done.
Sure, definitely make sure there's a reasonable structure that groups related styles together, but don't go overboard. Firebug makes things so much easier to track that you don't have to worry about making a perfect css structure up front.
A: ITCSS
By Harry Roberts (CSS Wizardry)
Defines global namespace and cascade, and helps keep selectors specificity low.
Structure
The first two only apply if you are using a preprocessor.
*
*(Settings)
*(Tools)
*Generics
*Elements
*Objects
*Components
*Trumps
A: Normally I do this:
*
*<link rel="stylesheet" href="css/style.css">
*In style.css I @import the following:
@import url(colors.css);
@import url(grid.css);
@import url(custom.css); + some more files (if needed)
*In colors.css I @import the following (when using the CSS custom properties):
@import url(root/variable.css);
Everything is in order and easy to get any part of code to edit.
I'll be glad if it helps somehow.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "85"
} |
Q: What are the limitations of refactoring? I'm making a study on refactoring limitations on improving existing software architecture and I would be interested to hear your experiences where you have found refactoring to be not enough or still too immature to accomplish your goals.
A: Refactoring can be risky
Refactoring is often difficult because the refactorer often isn't the same person as the original designer. Therefore, he or she doesn't have the same background in the system and the decisions that went behind the original design. You always run the risk that bugs avoided in the original design may creep up in the new design.
This may be especially true when a new or young team member, not fully experienced with this system, decides to inject new-cool-wizbang technology or ideas into an otherwise stable system. Often when the new team members are not integrated well into the team and are not given sufficient guidance, they may begin forcing the project in directions unintended by the whole team.
This is just a risk, however; there's also a chance that the team is wrong and that the new team member, if put in charge and allowed to do his or her thing, would actually make a serious improvement.
These problems often come up amongst a team working on legacy systems. Often there are no world-altering enhancements planned, so the team is conservative with their design. Their goal is to prevent new bugs from being injected and fix old ones, with a couple of extra features thrown in. A new team member might come along and upset the apple cart by insisting that he rewrite certain subsystems of the code. New bugs are created, and users of a fairly stable product are upset because the software, from their perspective, is getting worse.
So if your goal is long-term stability without major functionality changes, often major refactoring is not what you want.
If you have larger functionality changes in the pipeline, however, and have a user base that expects your product not to be fully baked quite yet (i.e. you're in some sort of beta), then it's a much better situation in which to consider serious refactoring, because the long-term benefits of the superior design will pay off and you're less likely to disrupt a large user base.
A: Refactoring code that doesn't have a corresponding suite of unit tests can be risky. If the project already has an established unit test suite, then provided that you maintain a TDD approach there should be little reason for concern.
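One cheap mitigation when no test suite exists, sketched below with entirely hypothetical names and numbers, is a "characterization" test that pins down the current behaviour before the refactoring starts:

```csharp
using System;

// Hypothetical legacy method whose current behaviour we want to preserve.
public static class LegacyPricing
{
    public static decimal Total(decimal unitPrice, int quantity) =>
        unitPrice * quantity * 1.2m; // observed: includes a 20% surcharge
}

public static class CharacterizationTests
{
    // Assert the behaviour we observed, not the behaviour we wish for:
    // if a refactoring changes the result, this fails immediately.
    public static void Run()
    {
        if (LegacyPricing.Total(10m, 3) != 36m)
            throw new Exception("refactoring changed observable behaviour");
    }
}
```

The point is that the assertion documents what the code *does*, bugs and all, so any behavioural drift during a refactor is caught even without a proper TDD history.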
A: I am not quite sure your question is valid. You're asking about the limitations of refactoring. However, refactoring involves the rewriting of code. How can there be limits to what you rewrite? You can completely replace the old code over the course of a massive refactoring, piece by piece. Effectively, you can end the refactoring without a single character of the original code, although this is admittedly extreme. Given this far end of the possibilities of refactoring, how can you assume there can be any limitations? If all the final code could possibly be completely new, you have no more limitations than if you had written the final code from scratch. However, writing the same resulting code from scratch gives you less basis to go on and less opportunity for iterative development, and so I must respond with a counter-question: Doesn't any refactoring inherently have fewer limitations than any rewrite?
A: Not all managers like the idea of refactoring. In their eyes, the time taken to refactor is not used to add new functionality. So you need to either convince your manager that it is needed, or hide it while adding features.
So the risk is to take too much time to refactor.
A: One problem with refactoring arises when you cannot change the outer interface of your classes. In these cases, you are very, very limited as to what you can refactor.
A: Adding to Kev's excellent answer: "Working Effectively with Legacy Code" by Michael Feathers should be required reading for people working in software engineering.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to remove illegal characters from path and filenames? I need a robust and simple way to remove illegal path and file characters from a simple string. I've used the below code but it doesn't seem to do anything, what am I missing?
using System;
using System.IO;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
string illegal = "\"M<>\"\\a/ry/ h**ad:>> a\\/:*?\"<>| li*tt|le|| la\"mb.?";
illegal = illegal.Trim(Path.GetInvalidFileNameChars());
illegal = illegal.Trim(Path.GetInvalidPathChars());
Console.WriteLine(illegal);
Console.ReadLine();
}
}
}
A: You can remove illegal chars using Linq like this:
var invalidChars = Path.GetInvalidFileNameChars();
var invalidCharsRemoved = stringWithInvalidChars
.Where(x => !invalidChars.Contains(x))
.ToArray();
EDIT
This is how it looks with the required edit mentioned in the comments:
var invalidChars = Path.GetInvalidFileNameChars();
string invalidCharsRemoved = new string(stringWithInvalidChars
.Where(x => !invalidChars.Contains(x))
.ToArray());
A: Most solutions above combine the illegal chars for both path and filename, which is wrong (even when both calls currently return the same set of chars). I would first split the path+filename into path and filename, then apply the appropriate set to either of them, and then combine the two again.
wvd_vegt
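A minimal sketch of that split-then-sanitize idea might look like the following (the helper name is mine, not from the answer). The manual split avoids Path.GetDirectoryName, which on older .NET Framework versions throws when the path already contains invalid characters:

```csharp
using System;
using System.IO;

public static class SplitSanitizer
{
    // Split on the last separator ourselves, apply the path rules to the
    // directory part and the (stricter) filename rules to the file part,
    // then recombine.
    public static string Sanitize(string fullPath)
    {
        int sep = fullPath.LastIndexOfAny(
            new[] { Path.DirectorySeparatorChar, Path.AltDirectorySeparatorChar });
        string dir = sep >= 0 ? fullPath.Substring(0, sep + 1) : string.Empty;
        string file = sep >= 0 ? fullPath.Substring(sep + 1) : fullPath;

        foreach (char c in Path.GetInvalidPathChars())
            dir = dir.Replace(c.ToString(), "");
        foreach (char c in Path.GetInvalidFileNameChars())
            file = file.Replace(c.ToString(), "");

        return dir + file;
    }
}
```

Note that both GetInvalid* sets are platform-dependent, so the exact characters stripped will differ between Windows and Unix-like systems.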
A: If you remove or replace with a single character the invalid characters, you can have collisions:
<abc -> abc
>abc -> abc
Here is a simple method to avoid this:
public static string ReplaceInvalidFileNameChars(string s)
{
char[] invalidFileNameChars = System.IO.Path.GetInvalidFileNameChars();
foreach (char c in invalidFileNameChars)
s = s.Replace(c.ToString(), "[" + Array.IndexOf(invalidFileNameChars, c) + "]");
return s;
}
The result:
<abc -> [1]abc
>abc -> [2]abc
A: The original question asked to "remove illegal characters":
public string RemoveInvalidChars(string filename)
{
return string.Concat(filename.Split(Path.GetInvalidFileNameChars()));
}
You may instead want to replace them:
public string ReplaceInvalidChars(string filename)
{
return string.Join("_", filename.Split(Path.GetInvalidFileNameChars()));
}
This answer was on another thread by Ceres; I really like it, neat and simple.
A: Try something like this instead:
string illegal = "\"M\"\\a/ry/ h**ad:>> a\\/:*?\"| li*tt|le|| la\"mb.?";
string invalid = new string(Path.GetInvalidFileNameChars()) + new string(Path.GetInvalidPathChars());
foreach (char c in invalid)
{
illegal = illegal.Replace(c.ToString(), "");
}
But I have to agree with the comments, I'd probably try to deal with the source of the illegal paths, rather than try to mangle an illegal path into a legitimate but probably unintended one.
Edit: Or a potentially 'better' solution, using Regex's.
string illegal = "\"M\"\\a/ry/ h**ad:>> a\\/:*?\"| li*tt|le|| la\"mb.?";
string regexSearch = new string(Path.GetInvalidFileNameChars()) + new string(Path.GetInvalidPathChars());
Regex r = new Regex(string.Format("[{0}]", Regex.Escape(regexSearch)));
illegal = r.Replace(illegal, "");
Still, the question begs to be asked: why are you doing this in the first place?
A: Throw an exception.
if ( fileName.IndexOfAny(Path.GetInvalidFileNameChars()) > -1 )
{
throw new ArgumentException();
}
A: This seems to be O(n) and does not spend too much memory on strings:
private static readonly HashSet<char> invalidFileNameChars = new HashSet<char>(Path.GetInvalidFileNameChars());
public static string RemoveInvalidFileNameChars(string name)
{
if (!name.Any(c => invalidFileNameChars.Contains(c))) {
return name;
}
return new string(name.Where(c => !invalidFileNameChars.Contains(c)).ToArray());
}
A: A file name cannot contain characters from Path.GetInvalidPathChars(), the + and # symbols, or certain reserved names. We combined all the checks into one class:
public static class FileNameExtensions
{
private static readonly Lazy<string[]> InvalidFileNameChars =
new Lazy<string[]>(() => Path.GetInvalidPathChars()
.Union(Path.GetInvalidFileNameChars()
.Union(new[] { '+', '#' })).Select(c => c.ToString(CultureInfo.InvariantCulture)).ToArray());
private static readonly HashSet<string> ProhibitedNames = new HashSet<string>
{
@"aux",
@"con",
@"clock$",
@"nul",
@"prn",
@"com1",
@"com2",
@"com3",
@"com4",
@"com5",
@"com6",
@"com7",
@"com8",
@"com9",
@"lpt1",
@"lpt2",
@"lpt3",
@"lpt4",
@"lpt5",
@"lpt6",
@"lpt7",
@"lpt8",
@"lpt9"
};
public static bool IsValidFileName(string fileName)
{
return !string.IsNullOrWhiteSpace(fileName)
&& fileName.All(o => !IsInvalidFileNameChar(o))
&& !IsProhibitedName(fileName);
}
public static bool IsProhibitedName(string fileName)
{
return ProhibitedNames.Contains(fileName.ToLower(CultureInfo.InvariantCulture));
}
private static string ReplaceInvalidFileNameSymbols([CanBeNull] this string value, string replacementValue)
{
if (value == null)
{
return null;
}
return InvalidFileNameChars.Value.Aggregate(new StringBuilder(value),
(sb, currentChar) => sb.Replace(currentChar, replacementValue)).ToString();
}
public static bool IsInvalidFileNameChar(char value)
{
return InvalidFileNameChars.Value.Contains(value.ToString(CultureInfo.InvariantCulture));
}
public static string GetValidFileName([NotNull] this string value)
{
return GetValidFileName(value, @"_");
}
public static string GetValidFileName([NotNull] this string value, string replacementValue)
{
if (string.IsNullOrWhiteSpace(value))
{
throw new ArgumentException(@"value should be non empty", nameof(value));
}
if (IsProhibitedName(value))
{
return (string.IsNullOrWhiteSpace(replacementValue) ? @"_" : replacementValue) + value;
}
return ReplaceInvalidFileNameSymbols(value, replacementValue);
}
public static string GetFileNameError(string fileName)
{
if (string.IsNullOrWhiteSpace(fileName))
{
return CommonResources.SelectReportNameError;
}
if (IsProhibitedName(fileName))
{
return CommonResources.FileNameIsProhibited;
}
var invalidChars = fileName.Where(IsInvalidFileNameChar).Distinct().ToArray();
if(invalidChars.Length > 0)
{
return string.Format(CultureInfo.CurrentCulture,
invalidChars.Length == 1 ? CommonResources.InvalidCharacter : CommonResources.InvalidCharacters,
StringExtensions.JoinQuoted(@",", @"'", invalidChars.Select(c => c.ToString(CultureInfo.CurrentCulture))));
}
return string.Empty;
}
}
The GetValidFileName method replaces all invalid characters with _.
A: For file names:
var cleanFileName = string.Join("", fileName.Split(Path.GetInvalidFileNameChars()));
For full paths:
var cleanPath = string.Join("", path.Split(Path.GetInvalidPathChars()));
Note that if you intend to use this as a security feature, a more robust approach would be to expand all paths and then verify that the user supplied path is indeed a child of a directory the user should have access to.
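A sketch of that containment check (the names are mine): expand both paths with Path.GetFullPath so that "../" segments are resolved, then verify the candidate sits under the allowed root. The OrdinalIgnoreCase comparison assumes a case-insensitive file system such as NTFS; use StringComparison.Ordinal on Linux:

```csharp
using System;
using System.IO;

public static class PathGuard
{
    public static bool IsUnderRoot(string userPath, string allowedRoot)
    {
        // Canonicalise both paths so "../" segments cannot escape the root.
        string root = Path.GetFullPath(allowedRoot);
        if (!root.EndsWith(Path.DirectorySeparatorChar.ToString()))
            root += Path.DirectorySeparatorChar;

        string candidate = Path.GetFullPath(userPath);
        return candidate.StartsWith(root, StringComparison.OrdinalIgnoreCase);
    }
}
```

Appending the trailing separator before the StartsWith check also prevents a sibling directory like "uploadsEvil" from passing as a child of "uploads".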
A: I wrote this monster for fun, it lets you roundtrip:
public static class FileUtility
{
private const char PrefixChar = '%';
private static readonly int MaxLength;
private static readonly Dictionary<char,char[]> Illegals;
static FileUtility()
{
List<char> illegal = new List<char> { PrefixChar };
illegal.AddRange(Path.GetInvalidFileNameChars());
MaxLength = illegal.Select(x => ((int)x).ToString().Length).Max();
Illegals = illegal.ToDictionary(x => x, x => ((int)x).ToString("D" + MaxLength).ToCharArray());
}
public static string FilenameEncode(string s)
{
var builder = new StringBuilder();
char[] replacement;
using (var reader = new StringReader(s))
{
while (true)
{
int read = reader.Read();
if (read == -1)
break;
char c = (char)read;
if(Illegals.TryGetValue(c,out replacement))
{
builder.Append(PrefixChar);
builder.Append(replacement);
}
else
{
builder.Append(c);
}
}
}
return builder.ToString();
}
public static string FilenameDecode(string s)
{
var builder = new StringBuilder();
char[] buffer = new char[MaxLength];
using (var reader = new StringReader(s))
{
while (true)
{
int read = reader.Read();
if (read == -1)
break;
char c = (char)read;
if (c == PrefixChar)
{
reader.Read(buffer, 0, MaxLength);
var encoded =(char) ParseCharArray(buffer);
builder.Append(encoded);
}
else
{
builder.Append(c);
}
}
}
return builder.ToString();
}
public static int ParseCharArray(char[] buffer)
{
int result = 0;
foreach (char t in buffer)
{
int digit = t - '0';
if ((digit < 0) || (digit > 9))
{
throw new ArgumentException("Input string was not in the correct format");
}
result *= 10;
result += digit;
}
return result;
}
}
A: If you have to use the method in many places in a project, you could also make an extension method and call it anywhere in the project for strings.
public static class StringExtension
{
public static string RemoveInvalidChars(this string originalString)
{
string finalString=string.Empty;
if (!string.IsNullOrEmpty(originalString))
{
return string.Concat(originalString.Split(Path.GetInvalidFileNameChars()));
}
return finalString;
}
}
You can call the above extension method as:
string illegal = "\"M<>\"\\a/ry/ h**ad:>> a\\/:*?\"<>| li*tt|le|| la\"mb.?";
string afterIllegalChars = illegal.RemoveInvalidChars();
A: These are all great solutions, but they all rely on Path.GetInvalidFileNameChars, which may not be as reliable as you'd think. Notice the following remark in the MSDN documentation on Path.GetInvalidFileNameChars:
The array returned from this method is not guaranteed to contain the complete set of characters that are invalid in file and directory names. The full set of invalid characters can vary by file system. For example, on Windows-based desktop platforms, invalid path characters might include ASCII/Unicode characters 1 through 31, as well as quote ("), less than (<), greater than (>), pipe (|), backspace (\b), null (\0) and tab (\t).
It's not any better with Path.GetInvalidPathChars method. It contains the exact same remark.
A: I think it is much easier to validate using a regex and specifying which characters are allowed, instead of trying to check for all bad characters.
See these links:
http://www.c-sharpcorner.com/UploadFile/prasad_1/RegExpPSD12062005021717AM/RegExpPSD.aspx
http://www.windowsdevcenter.com/pub/a/oreilly/windows/news/csharp_0101.html
Also, do a search for "regular expression editor"s, they help a lot. There are some around which even output the code in c# for you.
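For example, an allow-list sanitizer along those lines might look like this (the allowed character set here is my assumption; widen it if you need spaces or other characters):

```csharp
using System.Text.RegularExpressions;

public static class WhitelistSanitizer
{
    // Keep letters, digits, dot, hyphen and underscore; collapse any run of
    // other characters into a single underscore.
    public static string Sanitize(string fileName) =>
        Regex.Replace(fileName, @"[^A-Za-z0-9._-]+", "_");
}
```

Because it whitelists rather than blacklists, this behaves the same on every platform regardless of what Path.GetInvalidFileNameChars() happens to return.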
A: Scanning over the answers here, they all** seem to involve using a char array of invalid filename characters.
Granted, this may be micro-optimising - but for the benefit of anyone who might be looking to check a large number of values for being valid filenames, it's worth noting that building a hashset of invalid chars will bring about notably better performance.
I have been very surprised (shocked) in the past just how quickly a hashset (or dictionary) outperforms iterating over a list. With strings, it's a ridiculously low number (about 5-7 items from memory). With most other simple data (object references, numbers etc) the magic crossover seems to be around 20 items.
There are 40 invalid characters in the Path.GetInvalidFileNameChars() "list". I did a search today and there's quite a good benchmark here on StackOverflow that shows the hashset will take a little over half the time of an array/list for 40 items: https://stackoverflow.com/a/10762995/949129
Here's the helper class I use for sanitising paths. I forget now why I had the fancy replacement option in it, but it's there as a cute bonus.
Additional bonus method "IsValidLocalPath" too :)
(** those which don't use regular expressions)
public static class PathExtensions
{
private static HashSet<char> _invalidFilenameChars;
private static HashSet<char> InvalidFilenameChars
{
get { return _invalidFilenameChars ?? (_invalidFilenameChars = new HashSet<char>(Path.GetInvalidFileNameChars())); }
}
/// <summary>Replaces characters in <c>text</c> that are not allowed in file names with the
/// specified replacement character.</summary>
/// <param name="text">Text to make into a valid filename. The same string is returned if
/// it is valid already.</param>
/// <param name="replacement">Replacement character, or NULL to remove bad characters.</param>
/// <param name="fancyReplacements">TRUE to replace quotes and slashes with the non-ASCII characters ” and ⁄.</param>
/// <returns>A string that can be used as a filename. If the output string would otherwise be empty, "_" is returned.</returns>
public static string ToValidFilename(this string text, char? replacement = '_', bool fancyReplacements = false)
{
StringBuilder sb = new StringBuilder(text.Length);
HashSet<char> invalids = InvalidFilenameChars;
bool changed = false;
for (int i = 0; i < text.Length; i++)
{
char c = text[i];
if (invalids.Contains(c))
{
changed = true;
char repl = replacement ?? '\0';
if (fancyReplacements)
{
if (c == '"') repl = '”'; // U+201D right double quotation mark
else if (c == '\'') repl = '’'; // U+2019 right single quotation mark
else if (c == '/') repl = '⁄'; // U+2044 fraction slash
}
if (repl != '\0')
sb.Append(repl);
}
else
sb.Append(c);
}
if (sb.Length == 0)
return "_";
return changed ? sb.ToString() : text;
}
/// <summary>
/// Returns TRUE if the specified path is a valid, local filesystem path.
/// </summary>
/// <param name="pathString"></param>
/// <returns></returns>
public static bool IsValidLocalPath(this string pathString)
{
// From solution at https://stackoverflow.com/a/11636052/949129
Uri pathUri;
Boolean isValidUri = Uri.TryCreate(pathString, UriKind.Absolute, out pathUri);
return isValidUri && pathUri != null && pathUri.IsLoopback;
}
}
A: Here is my small contribution: a method that replaces characters within the same string without creating new strings or StringBuilders. It's fast, easy to understand, and a good alternative to everything else mentioned in this post.
private static HashSet<char> _invalidCharsHash;
private static HashSet<char> InvalidCharsHash
{
get { return _invalidCharsHash ?? (_invalidCharsHash = new HashSet<char>(Path.GetInvalidFileNameChars())); }
}
private static string ReplaceInvalidChars(string fileName, string newValue)
{
char newChar = newValue[0];
char[] chars = fileName.ToCharArray();
for (int i = 0; i < chars.Length; i++)
{
char c = chars[i];
if (InvalidCharsHash.Contains(c))
chars[i] = newChar;
}
return new string(chars);
}
You can call it like this:
string illegal = "\"M<>\"\\a/ry/ h**ad:>> a\\/:*?\"<>| li*tt|le|| la\"mb.?";
string legal = ReplaceInvalidChars(illegal);
and returns:
_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
It's worth noting that this method will always replace invalid chars with a given value, but will not remove them. If you want to remove invalid chars, this alternative will do the trick:
private static string RemoveInvalidChars(string fileName, string newValue)
{
char newChar = string.IsNullOrEmpty(newValue) ? char.MinValue : newValue[0];
bool remove = newChar == char.MinValue;
char[] chars = fileName.ToCharArray();
char[] newChars = new char[chars.Length];
int i2 = 0;
for (int i = 0; i < chars.Length; i++)
{
char c = chars[i];
if (InvalidCharsHash.Contains(c))
{
if (!remove)
newChars[i2++] = newChar;
}
else
newChars[i2++] = c;
}
return new string(newChars, 0, i2);
}
BENCHMARK
I ran timed tests with most of the methods found in this post, in case performance is what you are after. Some of these methods don't replace with a given char, since the OP asked to clean the string. I added tests that replace with a given char, and others that replace with an empty char for the scenario where you only need to remove the unwanted chars. The code used for this benchmark is at the end, so you can run your own tests.
Note: Methods Test1 and Test2 are both proposed in this post.
First Run
replacing with '_', 1000000 iterations
Results:
============Test1===============
Elapsed=00:00:01.6665595
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test2===============
Elapsed=00:00:01.7526835
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test3===============
Elapsed=00:00:05.2306227
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test4===============
Elapsed=00:00:14.8203696
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test5===============
Elapsed=00:00:01.8273760
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test6===============
Elapsed=00:00:05.4249985
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test7===============
Elapsed=00:00:07.5653833
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test8===============
Elapsed=00:12:23.1410106
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test9===============
Elapsed=00:00:02.1016708
Result=_M ____a_ry_ h__ad___ a_________ li_tt_le__ la_mb._
============Test10===============
Elapsed=00:00:05.0987225
Result=M ary had a little lamb.
============Test11===============
Elapsed=00:00:06.8004289
Result=M ary had a little lamb.
Second Run
removing invalid chars, 1000000 iterations
Note: Test1 will not remove, only replace.
Results:
============Test1===============
Elapsed=00:00:01.6945352
Result= M a ry h ad a li tt le la mb.
============Test2===============
Elapsed=00:00:01.4798049
Result=M ary had a little lamb.
============Test3===============
Elapsed=00:00:04.0415688
Result=M ary had a little lamb.
============Test4===============
Elapsed=00:00:14.3397960
Result=M ary had a little lamb.
============Test5===============
Elapsed=00:00:01.6782505
Result=M ary had a little lamb.
============Test6===============
Elapsed=00:00:04.9251707
Result=M ary had a little lamb.
============Test7===============
Elapsed=00:00:07.9562379
Result=M ary had a little lamb.
============Test8===============
Elapsed=00:12:16.2918943
Result=M ary had a little lamb.
============Test9===============
Elapsed=00:00:02.0770277
Result=M ary had a little lamb.
============Test10===============
Elapsed=00:00:05.2721232
Result=M ary had a little lamb.
============Test11===============
Elapsed=00:00:05.2802903
Result=M ary had a little lamb.
BENCHMARK RESULTS
Methods Test1, Test2 and Test5 are the fastest. Method Test8 is the slowest.
CODE
Here's the complete code of the benchmark:
private static HashSet<char> _invalidCharsHash;
private static HashSet<char> InvalidCharsHash
{
get { return _invalidCharsHash ?? (_invalidCharsHash = new HashSet<char>(Path.GetInvalidFileNameChars())); }
}
private static string _invalidCharsValue;
private static string InvalidCharsValue
{
get { return _invalidCharsValue ?? (_invalidCharsValue = new string(Path.GetInvalidFileNameChars())); }
}
private static char[] _invalidChars;
private static char[] InvalidChars
{
get { return _invalidChars ?? (_invalidChars = Path.GetInvalidFileNameChars()); }
}
static void Main(string[] args)
{
string testPath = "\"M <>\"\\a/ry/ h**ad:>> a\\/:*?\"<>| li*tt|le|| la\"mb.?";
int max = 1000000;
string newValue = "";
TimeBenchmark(max, Test1, testPath, newValue);
TimeBenchmark(max, Test2, testPath, newValue);
TimeBenchmark(max, Test3, testPath, newValue);
TimeBenchmark(max, Test4, testPath, newValue);
TimeBenchmark(max, Test5, testPath, newValue);
TimeBenchmark(max, Test6, testPath, newValue);
TimeBenchmark(max, Test7, testPath, newValue);
TimeBenchmark(max, Test8, testPath, newValue);
TimeBenchmark(max, Test9, testPath, newValue);
TimeBenchmark(max, Test10, testPath, newValue);
TimeBenchmark(max, Test11, testPath, newValue);
Console.Read();
}
private static void TimeBenchmark(int maxLoop, Func<string, string, string> func, string testString, string newValue)
{
var sw = new Stopwatch();
sw.Start();
string result = string.Empty;
for (int i = 0; i < maxLoop; i++)
result = func?.Invoke(testString, newValue);
sw.Stop();
Console.WriteLine($"============{func.Method.Name}===============");
Console.WriteLine("Elapsed={0}", sw.Elapsed);
Console.WriteLine("Result={0}", result);
Console.WriteLine("");
}
private static string Test1(string fileName, string newValue)
{
char newChar = string.IsNullOrEmpty(newValue) ? char.MinValue : newValue[0];
char[] chars = fileName.ToCharArray();
for (int i = 0; i < chars.Length; i++)
{
if (InvalidCharsHash.Contains(chars[i]))
chars[i] = newChar;
}
return new string(chars);
}
private static string Test2(string fileName, string newValue)
{
char newChar = string.IsNullOrEmpty(newValue) ? char.MinValue : newValue[0];
bool remove = newChar == char.MinValue;
char[] chars = fileName.ToCharArray();
char[] newChars = new char[chars.Length];
int i2 = 0;
for (int i = 0; i < chars.Length; i++)
{
char c = chars[i];
if (InvalidCharsHash.Contains(c))
{
if (!remove)
newChars[i2++] = newChar;
}
else
newChars[i2++] = c;
}
return new string(newChars, 0, i2);
}
private static string Test3(string filename, string newValue)
{
foreach (char c in InvalidCharsValue)
{
filename = filename.Replace(c.ToString(), newValue);
}
return filename;
}
private static string Test4(string filename, string newValue)
{
Regex r = new Regex(string.Format("[{0}]", Regex.Escape(InvalidCharsValue)));
filename = r.Replace(filename, newValue);
return filename;
}
private static string Test5(string filename, string newValue)
{
return string.Join(newValue, filename.Split(InvalidChars));
}
private static string Test6(string fileName, string newValue)
{
return InvalidChars.Aggregate(fileName, (current, c) => current.Replace(c.ToString(), newValue));
}
private static string Test7(string fileName, string newValue)
{
string regex = string.Format("[{0}]", Regex.Escape(InvalidCharsValue));
return Regex.Replace(fileName, regex, newValue, RegexOptions.Compiled);
}
private static string Test8(string fileName, string newValue)
{
string regex = string.Format("[{0}]", Regex.Escape(InvalidCharsValue));
Regex removeInvalidChars = new Regex(regex, RegexOptions.Singleline | RegexOptions.Compiled | RegexOptions.CultureInvariant);
return removeInvalidChars.Replace(fileName, newValue);
}
private static string Test9(string fileName, string newValue)
{
StringBuilder sb = new StringBuilder(fileName.Length);
bool changed = false;
for (int i = 0; i < fileName.Length; i++)
{
char c = fileName[i];
if (InvalidCharsHash.Contains(c))
{
changed = true;
sb.Append(newValue);
}
else
sb.Append(c);
}
if (sb.Length == 0)
return newValue;
return changed ? sb.ToString() : fileName;
}
private static string Test10(string fileName, string newValue)
{
if (!fileName.Any(c => InvalidChars.Contains(c)))
{
return fileName;
}
return new string(fileName.Where(c => !InvalidChars.Contains(c)).ToArray());
}
private static string Test11(string fileName, string newValue)
{
string invalidCharsRemoved = new string(fileName
.Where(x => !InvalidChars.Contains(x))
.ToArray());
return invalidCharsRemoved;
}
A: I use Linq to clean up filenames. You can easily extend this to check for valid paths as well.
private static string CleanFileName(string fileName)
{
return Path.GetInvalidFileNameChars().Aggregate(fileName, (current, c) => current.Replace(c.ToString(), string.Empty));
}
Update
Some comments indicate this method is not working for them so I've included a link to a DotNetFiddle snippet so you may validate the method.
https://dotnetfiddle.net/nw1SWY
A: The best way to remove illegal characters from user input is to replace them using the Regex class. Create a method in the code-behind, or validate on the client side using a RegularExpressionValidator control.
public string RemoveSpecialCharacters(string str)
{
return Regex.Replace(str, "[^a-zA-Z0-9_]+", "_", RegexOptions.Compiled);
}
OR
<asp:RegularExpressionValidator ID="regxFolderName"
runat="server"
ErrorMessage="Enter folder name with a-z A-Z0-9_"
ControlToValidate="txtFolderName"
Display="Dynamic"
ValidationExpression="^[a-zA-Z0-9_]*$"
ForeColor="Red" />
A: public static class StringExtensions
{
public static string RemoveUnnecessary(this string source)
{
string result = string.Empty;
string regex = new string(Path.GetInvalidFileNameChars()) + new string(Path.GetInvalidPathChars());
Regex reg = new Regex(string.Format("[{0}]", Regex.Escape(regex)));
result = reg.Replace(source, "");
return result;
}
}
You can then use the method cleanly.
A: A one-liner to clean a string of any chars illegal in Windows file names:
public static string CleanIllegalName(string p_testName) => new Regex(string.Format("[{0}]", Regex.Escape(new string(Path.GetInvalidFileNameChars()) + new string(Path.GetInvalidPathChars())))).Replace(p_testName, "");
A: I've rolled my own method, which seems to be a lot faster than the others posted here (especially the regex ones, which are very slow), but I haven't tested all the methods posted.
https://dotnetfiddle.net/haIXiY
The first method (mine) and the second (also mine, but an older one) also do an additional check on backslashes, so the benchmarks are not perfect, but anyway it's just to give you an idea.
Result on my laptop (for 100 000 iterations):
StringHelper.RemoveInvalidCharacters 1: 451 ms
StringHelper.RemoveInvalidCharacters 2: 7139 ms
StringHelper.RemoveInvalidCharacters 3: 2447 ms
StringHelper.RemoveInvalidCharacters 4: 3733 ms
StringHelper.RemoveInvalidCharacters 5: 11689 ms (==> Regex!)
The fastest method:
public static string RemoveInvalidCharacters(string content, char replace = '_', bool doNotReplaceBackslashes = false)
{
if (string.IsNullOrEmpty(content))
return content;
var idx = content.IndexOfAny(InvalidCharacters);
if (idx >= 0)
{
var sb = new StringBuilder(content);
while (idx >= 0)
{
if (sb[idx] != '\\' || !doNotReplaceBackslashes)
sb[idx] = replace;
idx = content.IndexOfAny(InvalidCharacters, idx+1);
}
return sb.ToString();
}
return content;
}
The method doesn't compile as-is due to the InvalidCharacters property; check the fiddle for the full code.
A: For starters, Trim only removes characters from the beginning or end of the string. Secondly, you should evaluate whether you really want to remove the offending characters, or fail fast and let the user know their filename is invalid. My choice is the latter, but my answer should at least show you how to do things the right AND wrong way:
StackOverflow question showing how to check if a given string is a valid file name. Note you can use the regex from this question to remove characters with a regular expression replacement (if you really need to do this).
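To see why the code in the question appears to do nothing, compare Trim, which only strips matching characters from the *ends* of the string, with an actual removal:

```csharp
using System;
using System.IO;

public static class TrimDemo
{
    public static void Main()
    {
        string illegal = "M/a/ry";

        // Trim only looks at the ends, so the embedded '/' characters survive.
        Console.WriteLine(illegal.Trim(Path.GetInvalidFileNameChars()));

        // Splitting on the invalid characters and re-joining removes them all.
        Console.WriteLine(string.Concat(illegal.Split(Path.GetInvalidFileNameChars())));
    }
}
```

The first line prints the string unchanged; the second prints it with the separators stripped out.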
A: I use regular expressions to achieve this. First, I dynamically build the regex.
string regex = string.Format(
"[{0}]",
Regex.Escape(new string(Path.GetInvalidFileNameChars())));
Regex removeInvalidChars = new Regex(regex, RegexOptions.Singleline | RegexOptions.Compiled | RegexOptions.CultureInvariant);
Then I just call removeInvalidChars.Replace to do the find and replace. This can obviously be extended to cover path chars as well.
A: I absolutely prefer the idea of Jeff Yates. It will work perfectly if you modify it slightly:
string regex = String.Format("[{0}]", Regex.Escape(new string(Path.GetInvalidFileNameChars())));
Regex removeInvalidChars = new Regex(regex, RegexOptions.Singleline | RegexOptions.Compiled | RegexOptions.CultureInvariant);
The improvement is just to escape the automatically generated regex.
A: Here's a code snippet that should help for .NET 3 and higher.
using System.IO;
using System.Text.RegularExpressions;
public static class PathValidation
{
private static string pathValidatorExpression = "^[^" + string.Join("", Array.ConvertAll(Path.GetInvalidPathChars(), x => Regex.Escape(x.ToString()))) + "]+$";
private static Regex pathValidator = new Regex(pathValidatorExpression, RegexOptions.Compiled);
private static string fileNameValidatorExpression = "^[^" + string.Join("", Array.ConvertAll(Path.GetInvalidFileNameChars(), x => Regex.Escape(x.ToString()))) + "]+$";
private static Regex fileNameValidator = new Regex(fileNameValidatorExpression, RegexOptions.Compiled);
private static string pathCleanerExpression = "[" + string.Join("", Array.ConvertAll(Path.GetInvalidPathChars(), x => Regex.Escape(x.ToString()))) + "]";
private static Regex pathCleaner = new Regex(pathCleanerExpression, RegexOptions.Compiled);
private static string fileNameCleanerExpression = "[" + string.Join("", Array.ConvertAll(Path.GetInvalidFileNameChars(), x => Regex.Escape(x.ToString()))) + "]";
private static Regex fileNameCleaner = new Regex(fileNameCleanerExpression, RegexOptions.Compiled);
public static bool ValidatePath(string path)
{
return pathValidator.IsMatch(path);
}
public static bool ValidateFileName(string fileName)
{
return fileNameValidator.IsMatch(fileName);
}
public static string CleanPath(string path)
{
return pathCleaner.Replace(path, "");
}
public static string CleanFileName(string fileName)
{
return fileNameCleaner.Replace(fileName, "");
}
}
A: public static bool IsValidFilename(string testName)
{
return !new Regex("[" + Regex.Escape(new String(System.IO.Path.GetInvalidFileNameChars())) + "]").IsMatch(testName);
}
A: This will do what you want, and avoids collisions
static string SanitiseFilename(string key)
{
var invalidChars = Path.GetInvalidFileNameChars();
var sb = new StringBuilder();
foreach (var c in key)
{
var invalidCharIndex = -1;
for (var i = 0; i < invalidChars.Length; i++)
{
if (c == invalidChars[i])
{
invalidCharIndex = i;
}
}
if (invalidCharIndex > -1)
{
sb.Append("_").Append(invalidCharIndex);
continue;
}
if (c == '_')
{
sb.Append("__");
continue;
}
sb.Append(c);
}
return sb.ToString();
}
A: I don't think the question is fully answered yet...
The answers only describe how to clean the filename OR the path, not both. Here is my solution:
private static string CleanPath(string path)
{
string regexSearch = new string(Path.GetInvalidFileNameChars()) + new string(Path.GetInvalidPathChars());
Regex r = new Regex(string.Format("[{0}]", Regex.Escape(regexSearch)));
List<string> split = path.Split('\\').ToList();
string returnValue = split.Aggregate(string.Empty, (current, s) => current + (r.Replace(s, "") + @"\"));
returnValue = returnValue.TrimEnd('\\');
return returnValue;
}
A: I created an extension method that combines several suggestions:
*
*Holding illegal characters in a hash set
*Filtering out characters below ASCII 127, since Path.GetInvalidFileNameChars does not include all the invalid characters possible with ASCII codes from 0 to 255. See here and MSDN
*The possibility to define the replacement character
Source:
public static class FileNameCorrector
{
private static HashSet<char> invalid = new HashSet<char>(Path.GetInvalidFileNameChars());
public static string ToValidFileName(this string name, char replacement = '\0')
{
var builder = new StringBuilder();
foreach (var cur in name)
{
if (cur > 31 && cur < 128 && !invalid.Contains(cur))
{
builder.Append(cur);
}
else if (replacement != '\0')
{
builder.Append(replacement);
}
}
return builder.ToString();
}
}
A: Here is a function which replaces all illegal characters in a file name by a replacement character:
public static string ReplaceIllegalFileChars(string FileNameWithoutPath, char ReplacementChar)
{
const string IllegalFileChars = "*?/\\:<>|\"";
StringBuilder sb = new StringBuilder(FileNameWithoutPath.Length);
char c;
for (int i = 0; i < FileNameWithoutPath.Length; i++)
{
c = FileNameWithoutPath[i];
if (IllegalFileChars.IndexOf(c) >= 0)
{
c = ReplacementChar;
}
sb.Append(c);
}
return (sb.ToString());
}
For example the underscore can be used as a replacement character:
NewFileName = ReplaceIllegalFileChars(FileName, '_');
A: Or you can just do
[YOUR STRING].Replace('\\', ' ').Replace('/', ' ').Replace('"', ' ').Replace('*', ' ').Replace(':', ' ').Replace('?', ' ').Replace('<', ' ').Replace('>', ' ').Replace('|', ' ').Trim();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "574"
} |
Q: OpenGL: How to do RGBA->RGBA blitting without changing destination alpha I have an OpenGL RGBA texture and I blit another RGBA texture onto it using a framebuffer object. The problem is that if I use the usual blend functions with
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA),
the resulting blit causes the destination texture alpha to change, making it slightly transparent for places where alpha previously was 1. I would like the destination surface alpha never to change, but otherwise the effect on RGB values should be exactly like with GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA. So the blend factor functions should be (As,As,As,0) and (1-As,1-As,1-As,1). How can I achieve that?
A: You can set the blend-modes for RGB and alpha to different equations:
void glBlendFuncSeparate(
GLenum srcRGB,
GLenum dstRGB,
GLenum srcAlpha,
GLenum dstAlpha);
In your case you want to use the following enums:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ZERO, GL_ONE);
Note that you may have to import the glBlendFuncSeparate function as an extension. It's safe to do so, though: the function has been around for a very long time and is part of OpenGL 1.4.
Another way to do the same is to disable writing to the alpha-channel using glColorMask:
void glColorMask( GLboolean red,
GLboolean green,
GLboolean blue,
GLboolean alpha )
It could be a lot slower than glBlendFuncSeparate, because OpenGL drivers optimize the most commonly used functions, and glColorMask is one of the more rarely used OpenGL functions.
If you're unlucky you may even end up with software-rendering emulation by calling oddball functions :-)
A: Maybe you could use glColorMask()? It lets you enable/disable writing to each of the four color components.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is my form password being passed in clear text? This is what my browser sent, when logging into some site:
POST http://www.some.site/login.php HTTP/1.0
User-Agent: Opera/8.26 (X2000; Linux i686; Z; en)
Host: www.some.site
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1
Accept-Language: en-US,en;q=0.9
Accept-Charset: iso-8859-1, utf-8, utf-16, *;q=0.1
Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0
Referer: http://www.some.site/
Proxy-Connection: close
Content-Length: 123
Content-Type: application/x-www-form-urlencoded
lots_of_stuff=here&e2ad811=my_login_name&e327696=my_password&lots_of_stuff=here
Can I state that anyone can sniff my login name and password for that site?
Maybe just on my LAN?
If so (even only on LAN ) then I'm shocked. I thought using
<input type="password">
did something more than make all characters look like ' * '
p.s. If it matters I played with netcat (on linux) and made connection
browser <=> netcat (loged here) <=> proxy <=> remote_site
A: type="password" only hides the character on-screen. If you want to stop sniffing, you need to encrypt the connection (i.e. HTTPS).
A: You can either encrypt the HTTP connection via HTTPS, or there are MD5 and other hashing algorithms implemented in JavaScript that can be used client side to hash the password client side before sending it, hence stopping a sniffer being able to read your password.
A: Any data sent through an HTTP connection can be seen by anyone on your route to the server (a man-in-the-middle attack).
type="password" only hides the character on-screen, and even other programs on your computer can read the data.
The only way to protect the data is to send it through SSL (HTTPS instead of HTTP).
A: Yes, your credentials are passed in cleartext, anyone who can hear your network traffic can sniff them.
A: Contents of a POST body are visible, i.e., "in the clear," if transported on a non-encrypted channel. If you wish to protect the HTTP body from being sniffed, you should do so over a secure channel, via HTTPS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to rotate and fade background images in and out with javascript/ASP.NET/CSS I need to randomly fade my background images in and out.
It will be a timed function, like once every 5 seconds.
I need to do it with ASP.NET, Javascript, CSS or all three.
Please help me out here guys. Thank you.
A: Cycle, a jQuery plugin is a very flexible image rotating solution: http://malsup.com/jquery/cycle/
A: This is the answer: never mind guys, after a more exact search on Google I found a good solution.
<html>
<head>
<!--
This file retrieved from the JS-Examples archives
http://www.js-examples.com
1000s of free ready to use scripts, tutorials, forums.
Author: Steve S - http://jsmadeeasy.com/
-->
<style>
body
{
/*Remove below line to make bgimage NOT fixed*/
background-attachment:fixed;
background-repeat: no-repeat;
/*Use center center in place of 300 200 to center bg image*/
background-position: 300 200;
}
</style>
<script language="JavaScript1.2">
/* you must supply your own immages */
var bgimages=new Array()
bgimages[0]="http://js-examples.com/images/blue_ball0.gif"
bgimages[1]="http://js-examples.com/images/red_ball0.gif"
bgimages[2]="http://js-examples.com/images/green_ball0.gif"
//preload images
var pathToImg=new Array()
for (i=0;i<bgimages.length;i++)
{
pathToImg[i]=new Image()
pathToImg[i].src=bgimages[i]
}
var inc=-1
function bgSlide()
{
if (inc<bgimages.length-1)
inc++
else
inc=0
document.body.background=pathToImg[inc].src
}
if (document.all||document.getElementById)
window.onload=new Function('setInterval("bgSlide()",3000)')
</script>
</head>
<body>
<BR><center><a href='http://www.js-examples.com'>JS-Examples.com</a></center>
</body>
</html>
Found it here.
A: Just found a tutorial on how to do this with CSS background images and jQuery at...
http://www.marcofolio.net/webdesign/advanced_jquery_background_image_slideshow.html
Seems fairly in depth. Going to try to use it for a project I'm currently undertaking. Will report on how it turned out.
Edit 1
The above referenced jQuery seems to have addressed my issue where the jQuery Cycle plugin could not. Look at http://egdata.com/baf/ for an example. The main issue was that the slideshow contained slides that were 1500px wide where the page width is 960px.
For some reason, the jQuery Cycle plugin adds a css style property for width when displaying the current slide. Initially it looked correct, but fails when the browser window is resized. Cycle seems to set the width of the slides on load, and in my case, I need the width to remain 100% instead of the actual pixel width of the window. http://egdata.com/baf/index_cycle.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: targeting specific framework version in csc.exe How do you specify a target framework version for the csc.exe C# compiler via command-line invocation (e.g., no .csproj file and not going through the MSBUILD engine)?
e.g, using the C# 3.0 csc.exe compiler, how do you compile to IL targeting the 2.0 .net framework?
A: In the specific case of the C# 3 compiler, there isn't a problem so long as you don't use any assemblies or types which aren't in .NET 2.0 - the IL is the same (as opposed to targeting 1.1, for instance).
In addition to this, you can use /noconfig /nostdlib and then explicitly reference the .NET 2.0 assemblies (in c:\Windows\Microsoft.NET\Framework\v2.0.50727 for example). It looks like the /lib command line option can make this slightly easier by letting you specify a directory to look in for references, but I haven't tried that myself.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Is Fortran easier to optimize than C for heavy calculations? From time to time I read that Fortran is or can be faster then C for heavy calculations. Is that really true? I must admit that I hardly know Fortran, but the Fortran code I have seen so far did not show that the language has features that C doesn't have.
If it is true, please tell me why. Please don't tell me what languages or libs are good for number crunching, I don't intend to write an app or lib to do that, I'm just curious.
A: Generally FORTRAN is slower than C. C can use hardware level pointers allowing the programmer to hand-optimize. FORTRAN (in most cases) doesn't have access to hardware memory addressing hacks. (VAX FORTRAN is another story.) I've used FORTRAN on and off since the '70's. (Really.)
However, starting in the 90's FORTRAN has evolved to include specific language constructs that can be optimized into inherently parallel algorithms that can really scream on a multi-core processor. For example, automatic Vectorizing allows multiple processors to handle each element in a vector of data concurrently. 16 processors -- 16 element vector -- processing takes 1/16th the time.
In C, you have to manage your own threads and design your algorithm carefully for multi-processing, and then use a bunch of API calls to make sure that the parallelism happens properly.
In FORTRAN, you only have to design your algorithm carefully for multi-processing. The compiler and run-time can handle the rest for you.
You can read a little about High Performance Fortran, but you find a lot of dead links. You're better off reading about Parallel Programming (like OpenMP.org) and how FORTRAN supports that.
A: To some extent Fortran has been designed keeping compiler optimization in mind. The language supports whole array operations where compilers can exploit parallelism (specially on multi-core processors). For example,
Dense matrix multiplication is simply:
matmul(a,b)
L2 norm of a vector x is:
sqrt(sum(x**2))
Moreover, statements such as FORALL, PURE & ELEMENTAL procedures, etc. further help to optimize code. Even pointers in Fortran aren't as flexible as C's for this very reason.
The upcoming Fortran standard (2008) has co-arrays which allows you to easily write parallel code. G95 (open source) and compilers from CRAY already support it.
So yes Fortran can be fast simply because compilers can optimize/parallelize it better than C/C++. But again like everything else in life there are good compilers and bad compilers.
A: Faster code is not really up to the language, it's the compiler: look at the MS-VB "compiler", which generates bloated, slow, redundant object code tied together inside an ".exe", while PowerBasic generates far better code.
Object code made by C and C++ compilers is generated in some phases (at least 2), but by design most Fortran compilers have at least 5 phases, including high-level optimizations, so by design Fortran will always have the capability to generate highly optimized code.
So in the end it's the compiler, not the language, you should ask about. The best compiler I know is the Intel Fortran Compiler, because you can get it on Linux and Windows and you can use VS as the IDE; if you're looking for a cheap, tight compiler you can always rely on OpenWatcom.
More info about this:
http://ed-thelen.org/1401Project/1401-IBM-Systems-Journal-FORTRAN.html
A: The languages have similar feature-sets. The performance difference comes from the fact that Fortran says aliasing is not allowed, unless an EQUIVALENCE statement is used. Any code that has aliasing is not valid Fortran, but it is up to the programmer and not the compiler to detect these errors. Thus Fortran compilers ignore possible aliasing of memory pointers and allow them to generate more efficient code. Take a look at this little example in C:
void transform (float *output, float const * input, float const * matrix, int *n)
{
int i;
for (i=0; i<*n; i++)
{
float x = input[i*2+0];
float y = input[i*2+1];
output[i*2+0] = matrix[0] * x + matrix[1] * y;
output[i*2+1] = matrix[2] * x + matrix[3] * y;
}
}
This function would run slower than the Fortran counterpart after optimization. Why so? If you write values into the output array, you may change the values of matrix. After all, the pointers could overlap and point to the same chunk of memory (including the int pointer!). The C compiler is forced to reload the four matrix values from memory for all computations.
In Fortran the compiler can load the matrix values once and store them in registers. It can do so because the Fortran compiler assumes pointers/arrays do not overlap in memory.
Fortunately, the restrict keyword and strict-aliasing have been introduced to the C99 standard to address this problem. It's well supported in most C++ compilers these days as well. The keyword allows you to give the compiler a hint that the programmer promises that a pointer does not alias with any other pointer. The strict-aliasing means that the programmer promises that pointers of different type will never overlap, for example a double* will not overlap with an int* (with the specific exception that char* and void* can overlap with anything).
If you use them you will get the same speed from C and Fortran. However, the ability to use the restrict keyword only with performance critical functions means that C (and C++) programs are much safer and easier to write. For example, consider the invalid Fortran code: CALL TRANSFORM(A(1, 30), A(2, 31), A(3, 32), 30), which most Fortran compilers will happily compile without any warning but introduces a bug that only shows up on some compilers, on some hardware and with some optimization options.
A: It is funny that a lot of the answers here come from people who don't know the languages. This is especially true for C/C++ programmers who have opened old FORTRAN 77 code and discuss its weaknesses.
I suppose that the speed issue is mostly a question between C/C++ and Fortran. In a huge codebase, it always depends on the programmer. There are some features of the language where Fortran outperforms and some features where C does. So, in 2011, no one can really say which one is faster.
About the language itself, Fortran nowadays supports full OOP features and it is fully backward compatible. I have used Fortran 2003 thoroughly and I would say it was just delightful to use. In some aspects, Fortran 2003 is still behind C++, but let's look at the usage. Fortran is mostly used for numerical computation, and nobody uses fancy C++ OOP features, for speed reasons. In high-performance computing, C++ has almost no place to go (have a look at the MPI standard and you'll see that C++ has been deprecated!).
Nowadays, you can simply do mixed language programming with Fortran and C/C++. There are even interfaces for GTK+ in Fortran. There are free compilers (gfortran, g95) and many excellent commercial ones.
A: There are several reasons why Fortran could be faster. However, they matter so little, or can be worked around so easily, that it shouldn't matter. The main reason to use Fortran nowadays is maintaining or extending legacy applications.
*
*PURE and ELEMENTAL keywords on functions. These are functions that have no side effects. This allows optimizations in certain cases where the compiler knows the same function will be called with the same values. Note: GCC implements "pure" as an extension to the language. Other compilers may as well. Inter-module analysis can also perform this optimization but it is difficult.
*standard set of functions that deal with arrays, not individual elements. Stuff like sin(), log(), sqrt() take arrays instead of scalars. This makes it easier to optimize the routine. Auto-vectorization gives the same benefits in most cases if these functions are inline or builtins
*Builtin complex type. In theory this could allow the compiler to reorder or eliminate certain instructions in certain cases, but likely you'd see the same benefit with the struct { double re; double im; }; idiom used in C. It makes for faster development though as operators work on complex types in Fortran.
A: I think the key point in favor of Fortran is that it is a language slightly more suited for expressing vector- and array-based math. The pointer analysis issue pointed out above is real in practice, since portable code cannot really assume that you can tell a compiler something. There is ALWAYS an advantage to expressing computations in a manner closer to how the domain looks. C does not really have arrays at all, if you look closely, just something that kind of behaves like them. Fortran has real arrays. That makes it easier to compile certain types of algorithms, especially for parallel machines.
Deep down in things like run-time system and calling conventions, C and modern Fortran are sufficiently similar that it is hard to see what would make a difference. Note that C here is really base C: C++ is a totally different issue with very different performance characteristics.
A: Fortran has better I/O routines; e.g., the implied DO facility gives flexibility that C's standard library can't match.
The Fortran compiler directly handles the more complex syntax involved, and since such syntax can't be easily reduced to argument-passing form, C can't implement it efficiently.
A: Fortran can handle arrays, especially multidimensional arrays, very conveniently. Slicing elements of a multidimensional array in Fortran can be much easier than in C/C++. C++ now has libraries that can do the job, such as Boost or Eigen, but they are after all external libraries. In Fortran these functions are intrinsic.
Whether Fortran is faster or more convenient for development mostly depends on the job you need to finish. As a scientific computation person in geophysics, I did most of my computation in Fortran (I mean modern Fortran, >=F90).
A: There is no such thing as one language being faster than another, so the proper answer is no.
What you really have to ask is "is code compiled with Fortran compiler X faster than equivalent code compiled with C compiler Y?" The answer to that question of course depends on which two compilers you pick.
Another question one could ask would be along the lines of "Given the same amount of effort put into optimizing in their compilers, which compiler would produce faster code?"
The answer to this would in fact be Fortran. Fortran compilers have certain advantages:
*
*Fortran had to compete with Assembly back in the day when some vowed never to use compilers, so it was designed for speed. C was designed to be flexible.
*Fortran's niche has been number crunching. In this domain code is never fast enough. So there's always been a lot of pressure to keep the language efficient.
*Most of the research in compiler optimizations is done by people interested in speeding up Fortran number crunching code, so optimizing Fortran code is a much better known problem than optimizing any other compiled language, and new innovations show up in Fortran compilers first.
*Biggie: C encourages much more pointer use than Fortran. This drastically increases the potential scope of any data item in a C program, which makes them far harder to optimize. Note that Ada is also way better than C in this realm, and is a much more modern OO language than the commonly found Fortran 77. If you want an OO language that can generate faster code than C, this is an option for you.
*Due again to its number-crunching niche, the customers of Fortran compilers tend to care more about optimization than the customers of C compilers.
However, there is nothing stopping someone from putting a ton of effort into their C compiler's optimization and making it generate better code than their platform's Fortran compiler. In fact, the larger sales generated by C compilers make this scenario quite feasible.
A: There is another item where Fortran is different than C - and potentially faster. Fortran has better optimization rules than C. In Fortran, the evaluation order of expressions is not defined, which allows the compiler to optimize it - if one wants to force a certain order, one has to use parentheses. In C the order is much stricter, but with "-fast" options, the rules are more relaxed and "(...)" are also ignored. I think Fortran has a way which lies nicely in the middle. (Well, IEEE makes life more difficult, as certain evaluation-order changes require that no overflows occur, which either has to be ignored or hampers the evaluation.)
Another area of smarter rules are complex numbers. Not only that it took until C 99 that C had them, also the rules govern them is better in Fortran; since the Fortran library of gfortran is partially written in C but implements the Fortran semantics, GCC gained the option (which can also be used with "normal" C programs):
-fcx-fortran-rules
Complex multiplication and division follow Fortran rules. Range reduction is done as part of complex division, but there is no checking whether the result of a complex multiplication or division is "NaN + I*NaN", with an attempt to rescue the situation in that case.
The alias rules mentioned above are another bonus, and so are - at least in principle - the whole-array operations, which, if taken properly into account by the optimizer of the compiler, can lead to faster code. On the contra side, certain operations take more time, e.g. if one does an assignment to an allocatable array, there are lots of checks necessary (reallocate? [a Fortran 2003 feature], what are the array strides?, etc.), which make the simple operation more complex behind the scenes - and thus slower, but make the language more powerful. On the other hand, the array operations with flexible bounds and strides make it easier to write code - and the compiler is usually better at optimizing code than a user.
In total, I think both C and Fortran are about equally fast; the choice should be more which language does one like more or whether using the whole-array operations of Fortran and its better portability are more useful -- or the better interfacing to system and graphical-user-interface libraries in C.
A: Using modern standards and compilers, no!
Some of the folks here have suggested that FORTRAN is faster because the compiler doesn't need to worry about aliasing (and hence can make more assumptions during optimisation). However, this has been dealt with in C since the C99 (I think) standard with the inclusion of the restrict keyword, which basically tells the compiler that, within a given scope, the pointer is not aliased. Furthermore, C enables proper pointer arithmetic, where things like aliasing can be very useful in terms of performance and resource allocation. Although I think more recent versions of FORTRAN enable the use of "proper" pointers.
For modern implementations, C generally outperforms FORTRAN (although it is very fast too).
http://benchmarksgame.alioth.debian.org/u64q/fortran.html
EDIT:
A fair criticism of this seems to be that the benchmarking may be biased. Here is another source (relative to C) that puts the results in more context:
http://julialang.org/benchmarks/
You can see that C typically outperforms Fortran in most instances (again see criticisms below that apply here too); as others have stated, benchmarking is an inexact science that can be easily loaded to favour one language over others. But it does put in context how Fortran and C have similar performance.
A: Yes, in 1980; in 2008? It depends.
When I started programming professionally the speed dominance of Fortran was just being challenged. I remember reading about it in Dr. Dobbs and telling the older programmers about the article--they laughed.
So I have two views about this, theoretical and practical. In theory Fortran today has no intrinsic advantage over C/C++ or even any language that allows assembly code. In practice Fortran today still enjoys the benefits of a legacy of history and culture built around optimization of numerical code.
Up until and including Fortran 77, language design considerations had optimization as a main focus. Due to the state of compiler theory and technology, this often meant restricting features and capability in order to give the compiler the best shot at optimizing the code. A good analogy is to think of Fortran 77 as a professional race car that sacrifices features for speed. These days compilers have gotten better across all languages and features for programmer productivity are more valued. However, there are still places where the people are mainly concerned with speed in scientific computing; these people most likely have inherited code, training and culture from people who themselves were Fortran programmers.
When one starts talking about optimization of code there are many issues and the best way to get a feel for this is to lurk where people are whose job it is to have fast numerical code. But keep in mind that such critically sensitive code is usually a small fraction of the overall lines of code and very specialized: A lot of Fortran code is just as "inefficient" as a lot of other code in other languages and optimization should not even be a primary concern of such code.
A wonderful place to start in learning about the history and culture of Fortran is wikipedia. The Fortran Wikipedia entry is superb and I very much appreciate those who have taken the time and effort to make it of value for the Fortran community.
(A shortened version of this answer would have been a comment in the excellent thread started by Nils, but I don't have the karma to do that. Actually, I probably wouldn't have written anything at all, but for the fact that this thread has actual information content and sharing, as opposed to flame wars and language bigotry, which is my main experience with this subject. I was overwhelmed and had to share the love.)
A: There is nothing about the languages Fortran and C which makes one faster than the other for specific purposes. There are things about specific compilers for each of these languages which make some favorable for certain tasks more than others.
For many years, Fortran compilers existed which could do black magic to your numeric routines, making many important computations insanely fast. The contemporary C compilers couldn't do it as well. As a result, a number of great libraries of code grew in Fortran. If you want to use these well tested, mature, wonderful libraries, you break out the Fortran compiler.
My informal observations show that these days people code their heavy computational stuff in any old language, and if it takes a while they find time on some cheap compute cluster. Moore's Law makes fools of us all.
A: I compare the speed of Fortran, C, and C++ with the classic Levine-Callahan-Dongarra benchmark from netlib. The multiple-language version, with OpenMP, is
http://sites.google.com/site/tprincesite/levine-callahan-dongarra-vectors
The C is uglier, as it began with automatic translation, plus insertion of restrict and pragmas for certain compilers.
C++ is just C with STL templates where applicable. In my view, the STL is a mixed bag as to whether it improves maintainability.
There is only minimal exercise of automatic function in-lining to see to what extent it improves optimization, since the examples are based on traditional Fortran practice, where little reliance is placed on in-lining.
The C/C++ compiler which has by far the most widespread usage lacks auto-vectorization, on which these benchmarks rely heavily.
Re the post which came just before this: there are a couple of examples where parentheses are used in Fortran to dictate the faster or more accurate order of evaluation. Known C compilers don't have options to observe the parentheses without disabling more important optimizations.
A: Any speed differences between Fortran and C will be more a function of compiler optimizations and the underlying math library used by the particular compiler. There is nothing intrinsic to Fortran that would make it faster than C.
Anyway, a good programmer can write Fortran in any language.
A: I'm a hobbyist programmer and I'm "average" at both languages.
I find it easier to write fast Fortran code than C (or C++) code. Both Fortran and C are "historic" languages (by today's standards), are heavily used, and have well-supported free and commercial compilers.
I don't know if it's a historical fact, but Fortran feels like it was built to be paralleled/distributed/vectorized/whatever-many-cores-ized. And today that's pretty much the "standard metric" when we're talking about speed: "does it scale?"
For pure CPU crunching I love Fortran. For anything IO-related I find it easier to work with C. (It's difficult in both cases anyway.)
Now of course, for parallel math-intensive code you probably want to use your GPU. Both C and Fortran have a lot of more or less well-integrated CUDA/OpenCL interfaces (and now OpenACC).
My moderately objective answer is: if you know both languages equally well/poorly, then I think Fortran is faster, because I find it easier to write parallel/distributed code in Fortran than in C. (Once you understand that you can write "free-form" Fortran and not just strict F77 code.)
Here is a second answer for those willing to downvote me because they don't like the first one: both languages have the features required to write high-performance code, so it depends on the algorithm you're implementing (CPU-intensive? IO-intensive? memory-intensive?), the hardware (single CPU? multi-core? distributed supercomputer? GPGPU? FPGA?), your skill, and ultimately the compiler itself. Both C and Fortran have awesome compilers. (I'm seriously amazed by how advanced Fortran compilers are, but so are C compilers.)
PS: I'm glad you specifically excluded libs, because I have a great deal of bad stuff to say about Fortran GUI libs. :)
A: Quick and simple:
Both are equally fast, but Fortran is simpler.
What's really faster in the end depends on the algorithm, but there is no considerable speed difference anyway. This is what I learned in a Fortran workshop at the High Performance Computing Center Stuttgart, Germany, in 2015. I work with both Fortran and C and share this opinion.
Explanation:
C was designed to write operating systems. Hence it has more freedom than needed to write high-performance code. In general this is no problem, but if one does not program carefully, one can easily slow the code down.
Fortran was designed for scientific programming. For this reason, it supports writing fast code syntax-wise, as this is the main purpose of Fortran. Contrary to public opinion, Fortran is not an outdated programming language: its latest standard at the time (Fortran 2008) was published in 2010, and new compilers are released on a regular basis, as most high-performance code is written in Fortran. Fortran further supports modern features such as compiler directives (pragmas in C).
Example:
We want to pass a large struct as an input argument to a function (Fortran: subroutine). Within the function the argument is not altered.
C supports both call by reference and call by value, which is a handy feature. In our case, the programmer might by accident use call by value. This slows things down considerably, as the struct needs to be copied in memory first.
Fortran works with call by reference only, which forces the programmer to copy the struct by hand if he really wants a call-by-value operation. In our case Fortran will automatically be as fast as the C version with call by reference.
A: I was doing some extensive mathematics with FORTRAN and C for a couple of years. From my own experience I can tell that FORTRAN is sometimes really better than C, but not for its speed (one can make C perform as fast as FORTRAN by using an appropriate coding style) but rather because of very well optimized libraries like LAPACK (which can, however, be called from C code as well, either by linking against LAPACK directly or by using the LAPACKE interface for C), and because of great parallelization. In my opinion, FORTRAN is really awkward to work with, and its advantages are not good enough to cancel that drawback, so now I am using C+GSL to do calculations.
A: I haven't heard that Fortran is significantly faster than C, but it is conceivable that in certain cases it would be. And the key is not in the language features that are present, but in those that are (usually) absent.
An example is C pointers. C pointers are used pretty much everywhere, but the problem with pointers is that the compiler usually can't tell whether they're pointing to different parts of the same array.
For example if you wrote a strcpy routine that looked like this:
char *strcpy(char *d, const char *s)
{
char *ret = d; /* the standard strcpy returns its destination */
while ((*d++ = *s++));
return ret;
}
The compiler has to work under the assumption that d and s might be overlapping arrays. So it can't perform any optimization that would produce different results when the arrays overlap. As you'd expect, this considerably restricts the kinds of optimizations that can be performed.
[I should note that C99 has a "restrict" keyword that explicitly tells the compiler that the pointers don't overlap. Also note that Fortran, too, has pointers, with semantics different from those of C, but pointers aren't as ubiquitous there as in C.]
But coming back to the C vs. Fortran issue, it is conceivable that a Fortran compiler is able to perform some optimizations that might not be possible for a (straightforwardly written) C program. So I wouldn't be too surprised by the claim. However, I do expect that the performance difference wouldn't be all that much. [~5-10%]
A: This is more than somewhat subjective, because it gets into the quality of compilers and such more than anything else. However, to more directly answer your question, speaking from a language/compiler standpoint there is nothing about Fortran over C that is going to make it inherently faster or better than C. If you are doing heavy math operations, it will come down to the quality of the compiler, the skill of the programmer in each language and the intrinsic math support libraries that support those operations to ultimately determine which is going to be faster for a given implementation.
EDIT: Other people such as @Nils have raised the good point about the difference in the use of pointers in C and the possibility for aliasing that perhaps makes the most naive implementations slower in C. However, there are ways to deal with that in C99, via compiler optimization flags and/or in how the C is actually written. This is well covered in @Nils answer and the subsequent comments that follow on his answer.
A: Fortran traditionally doesn't set options such as -fp:strict (which ifort requires to enable some of the features in USE IEEE_arithmetic, part of the f2003 standard). Intel C++ also doesn't set -fp:strict as a default, but it is required for ERRNO handling, for example, and other C++ compilers don't make it convenient to turn off ERRNO or gain optimizations such as simd reduction. gcc and g++ have required me to set up a Makefile to avoid using the dangerous combination -O3 -ffast-math -fopenmp -march=native.
Other than these issues, this question about relative performance gets more nit-picky and dependent on local rules about choice of compilers and options.
A: Most of the posts already present compelling arguments, so I will just add the proverbial 2 cents to a different aspect.
Whether Fortran is faster or slower in terms of processing power can have its importance in the end, but if it takes 5 times more time to develop something in Fortran because:
*
*it lacks any good library for tasks different from pure number crunching
*it lacks any decent tool for documentation and unit testing
*it's a language with very low expressivity, skyrocketing the number of lines of code.
*it has very poor handling of strings
*it has an inane amount of issues among different compilers and architectures, driving you crazy.
*it has a very poor IO strategy (READ/WRITE of sequential files. Yes, random-access files exist, but did you ever see them used?)
*it does not encourage good development practices or modularization.
*effective lack of a fully standard, fully compliant open-source compiler (both gfortran and g95 do not support everything)
*very poor interoperability with C (mangling: one underscore, two underscores, no underscore; in general one underscore, but two if there's another underscore. And let's not even delve into COMMON blocks...)
Then the issue is irrelevant. If something is slow, most of the time you cannot improve it beyond a given limit. If you want something faster, change the algorithm. In the end, computer time is cheap. Human time is not. Value the choice that reduces human time. If it increases computer time, it's cost effective anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "453"
} |
Q: How Can I Get Touch Events in an iPhone App's Hidden Status Bar's Area? I have an iPhone app that hides the status bar. However, my main view doesn't get any touch events when I tap in the status bar area. Is there something I can do to fix this?
Here are a few details of my app's setup (in case it matters):
*
*It's an OpenGL-based application.
*The app launches in landscape mode. However, the touch events are missing when I tap in the area near the "top" of the portrait-mode area. (In other words, I don't get touch events on the left edge when I have the device held in landscape orientation.)
*My info.plist has UIStatusBarHidden set true, and in my application delegate's applicationDidFinishLaunching method I have "application.statusBarHidden = YES;"
*The main view has exclusiveTouch set to YES.
*The view's touchesBegan:withEvent method is called when I tap anywhere else on the screen.
*My view draws fine within the status-bar area.
(Please refrain from whining about the NDA. Thank you.)
Update: It turns out that this problem only manifests itself on the iPhone Simulator. When the app is run on an actual iPhone, touches are detected everywhere.
I'm still interested to know if there is a way to make it work on the Simulator, but it's no big deal.
A: Found my own answer (of sorts):
This behavior only happens in the iPhone Simulator. When I run the application on an actual iPhone, it works fine.
I'd still be interested to know if there is a way to make it work on the simulator.
A: I haven't had this problem, but then again, I'm not using OpenGL views. When you launch in Landscape mode, are you setting the StatusBarHidden property of the appropriate ViewController?
A: It's a bug in the Simulator. It's working fine on iPhone.
Check http://gtekna.com/?p=140
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Duplicate keys in .NET dictionaries? Are there any dictionary classes in the .NET base class library which allow duplicate keys to be used? The only solution I've found is to create, for example, a class like:
Dictionary<string, List<object>>
But this is quite irritating to actually use. In Java, I believe a MultiMap accomplishes this, but cannot find an analog in .NET.
A: It's easy enough to "roll your own" version of a dictionary that allows "duplicate key" entries. Here is a rough, simple implementation. You might want to consider adding support for most (if not all) of IDictionary<TKey, TValue>.
public class MultiMap<TKey,TValue>
{
private readonly Dictionary<TKey,IList<TValue>> storage;
public MultiMap()
{
storage = new Dictionary<TKey,IList<TValue>>();
}
public void Add(TKey key, TValue value)
{
if (!storage.ContainsKey(key)) storage.Add(key, new List<TValue>());
storage[key].Add(value);
}
public IEnumerable<TKey> Keys
{
get { return storage.Keys; }
}
public bool ContainsKey(TKey key)
{
return storage.ContainsKey(key);
}
public IList<TValue> this[TKey key]
{
get
{
if (!storage.ContainsKey(key))
throw new KeyNotFoundException(
string.Format(
"The given key {0} was not found in the collection.", key));
return storage[key];
}
}
}
A quick example on how to use it:
const string key = "supported_encodings";
var map = new MultiMap<string,Encoding>();
map.Add(key, Encoding.ASCII);
map.Add(key, Encoding.UTF8);
map.Add(key, Encoding.Unicode);
foreach (var existingKey in map.Keys)
{
var values = map[existingKey];
Console.WriteLine(string.Join(",", values));
}
A: Here is one way of doing this with List<KeyValuePair<string, string>>
public class ListWithDuplicates : List<KeyValuePair<string, string>>
{
public void Add(string key, string value)
{
var element = new KeyValuePair<string, string>(key, value);
this.Add(element);
}
}
var list = new ListWithDuplicates();
list.Add("k1", "v1");
list.Add("k1", "v2");
list.Add("k1", "v3");
foreach(var item in list)
{
string x = string.Format("{0}={1}, ", item.Key, item.Value);
}
Outputs k1=v1, k1=v2, k1=v3
A: In answer to the original question: something like Dictionary<string, List<object>> is implemented in a class called MultiMap on The Code Project.
You can find more info at the link below:
http://www.codeproject.com/KB/cs/MultiKeyDictionary.aspx
A: The NameValueCollection supports multiple string values under one key (which is also a string), but it is the only example I am aware of.
I tend to create constructs similar to the one in your example when I run into situations where I need that sort of functionality.
A: When using the List<KeyValuePair<string, object>> option, you could use LINQ to do the search:
List<KeyValuePair<string, object>> myList = new List<KeyValuePair<string, object>>();
//fill it here
var q = from a in myList where a.Key.Equals("somevalue") select a.Value;
if (q.Count() > 0) { // you've got your value }
A: If you are using strings as both the keys and the values, you can use System.Collections.Specialized.NameValueCollection, which will return an array of string values via the GetValues(string key) method.
A: If you're using .NET 3.5, use the Lookup class.
EDIT: You generally create a Lookup using Enumerable.ToLookup. This does assume that you don't need to change it afterwards - but I typically find that's good enough.
If that doesn't work for you, I don't think there's anything in the framework which will help - and using the dictionary is as good as it gets :(
A: Do you mean congruent and not an actual duplicate? Otherwise a hashtable wouldn't be able to work.
Congruent means that two separate keys can hash to the equivalent value, but the keys aren't equal.
For example: say your hashtable's hash function was just hashval = key mod 3. Both 1 and 4 map to 1 but are different keys. This is where your idea of a list comes into play.
When you need to look up 1, that key is hashed to 1, and the list is traversed until the entry with key = 1 is found.
If you allowed for duplicate keys to be inserted, you wouldn't be able to differentiate which keys map to which values.
A: What I use is just a
Dictionary<string, List<string>>
This way you have a single key holding a list of strings.
Example:
List<string> value;
if (!dictionary.TryGetValue(key, out value)) {
value = new List<string>();
dictionary[key] = value;
}
value.Add(newValue);
A: You can create your own dictionary wrapper, something like this one, as a bonus it supports null value as a key:
/// <summary>
/// Dictionary which supports duplicates and null entries
/// </summary>
/// <typeparam name="TKey">Type of key</typeparam>
/// <typeparam name="TValue">Type of items</typeparam>
public class OpenDictionary<TKey, TValue>
{
private readonly Lazy<List<TValue>> _nullStorage = new Lazy<List<TValue>>(
() => new List<TValue>());
private readonly Dictionary<TKey, List<TValue>> _innerDictionary =
new Dictionary<TKey, List<TValue>>();
/// <summary>
/// Get all entries
/// </summary>
public IEnumerable<TValue> Values =>
_innerDictionary.Values
.SelectMany(x => x)
.Concat(_nullStorage.Value);
/// <summary>
/// Add an item
/// </summary>
public OpenDictionary<TKey, TValue> Add(TKey key, TValue item)
{
if (ReferenceEquals(key, null))
_nullStorage.Value.Add(item);
else
{
if (!_innerDictionary.ContainsKey(key))
_innerDictionary.Add(key, new List<TValue>());
_innerDictionary[key].Add(item);
}
return this;
}
/// <summary>
/// Remove an entry by key
/// </summary>
public OpenDictionary<TKey, TValue> RemoveEntryByKey(TKey key, TValue entry)
{
if (ReferenceEquals(key, null))
{
int targetIdx = _nullStorage.Value.FindIndex(x => x.Equals(entry));
if (targetIdx < 0)
return this;
_nullStorage.Value.RemoveAt(targetIdx);
}
else
{
if (!_innerDictionary.ContainsKey(key))
return this;
List<TValue> targetChain = _innerDictionary[key];
if (targetChain.Count == 0)
return this;
int targetIdx = targetChain.FindIndex(x => x.Equals(entry));
if (targetIdx < 0)
return this;
targetChain.RemoveAt(targetIdx);
}
return this;
}
/// <summary>
/// Remove all entries by key
/// </summary>
public OpenDictionary<TKey, TValue> RemoveAllEntriesByKey(TKey key)
{
if (ReferenceEquals(key, null))
{
if (_nullStorage.IsValueCreated)
_nullStorage.Value.Clear();
}
else
{
if (_innerDictionary.ContainsKey(key))
_innerDictionary[key].Clear();
}
return this;
}
/// <summary>
/// Try get entries by key
/// </summary>
public bool TryGetEntries(TKey key, out IReadOnlyList<TValue> entries)
{
entries = null;
if (ReferenceEquals(key, null))
{
if (_nullStorage.IsValueCreated)
{
entries = _nullStorage.Value;
return true;
}
else return false;
}
else
{
if (_innerDictionary.ContainsKey(key))
{
entries = _innerDictionary[key];
return true;
}
else return false;
}
}
}
The sample of usage:
var dictionary = new OpenDictionary<string, int>();
dictionary.Add("1", 1);
// The next line won't throw an exception;
dictionary.Add("1", 2);
dictionary.TryGetEntries("1", out IReadOnlyList<int> result);
// result is { 1, 2 }
dictionary.Add(null, 42);
dictionary.Add(null, 24);
dictionary.TryGetEntries(null, out IReadOnlyList<int> result2);
// result is { 42, 24 }
A: The List class actually works quite well for key/value collections containing duplicates where you would like to iterate over the collection. Example:
List<KeyValuePair<string, string>> list = new List<KeyValuePair<string, string>>();
// add some values to the collection here
for (int i = 0; i < list.Count; i++)
{
Print(list[i].Key, list[i].Value);
}
A: I just came across the PowerCollections library which includes, among other things, a class called MultiDictionary. This neatly wraps this type of functionality.
A: Very important note regarding use of Lookup:
You can create an instance of a Lookup(TKey, TElement) by calling ToLookup on an object that implements IEnumerable(T)
There is no public constructor to create a new instance of a Lookup(TKey, TElement). Additionally, Lookup(TKey, TElement) objects are immutable, that is, you cannot add or remove elements or keys from a Lookup(TKey, TElement) object after it has been created.
(from MSDN)
I'd think this would be a show stopper for most uses.
A: Since newer C# (I believe it's from 7.0), you can also do something like this:
var duplicatedDictionaryExample = new List<(string Key, string Value)> { ("", "") ... }
and you are using it as a standard List, but with two values named whatever you want
foreach(var entry in duplicatedDictionaryExample)
{
// do something with the values
entry.Key;
entry.Value;
}
A: I think something like List<KeyValuePair<object, object>> would do the job.
A: If you are using >= .NET 4 then you can use Tuple Class:
// declaration
var list = new List<Tuple<string, List<object>>>();
// to add an item to the list
var item = Tuple.Create("key", new List<object>());
list.Add(item);
// to iterate
foreach(var i in list)
{
Console.WriteLine(i.Item1.ToString());
}
A: I stumbled across this post in search of the same answer, and found none, so I rigged up a bare-bones example solution using a list of dictionaries, overriding the [] operator to add a new dictionary to the list when all others have a given key(set), and return a list of values (get).
It's ugly and inefficient, it ONLY gets/sets by key, and it always returns a list, but it works:
class DKD {
List<Dictionary<string, string>> dictionaries;
public DKD(){
dictionaries = new List<Dictionary<string, string>>();}
public object this[string key]{
get{
string temp;
List<string> valueList = new List<string>();
for (int i = 0; i < dictionaries.Count; i++){
dictionaries[i].TryGetValue(key, out temp);
if (temp != null){
valueList.Add(temp);}}
return valueList;}
set{
for (int i = 0; i < dictionaries.Count; i++){
if (dictionaries[i].ContainsKey(key)){
continue;}
else{
dictionaries[i].Add(key,(string) value);
return;}}
dictionaries.Add(new Dictionary<string, string>());
dictionaries.Last()[key] =(string)value;
}
}
}
A: I changed @Hector Correa's answer into an extension with generic types and also added a custom TryGetValue to it.
public static class ListWithDuplicateExtensions
{
public static void Add<TKey, TValue>(this List<KeyValuePair<TKey, TValue>> collection, TKey key, TValue value)
{
var element = new KeyValuePair<TKey, TValue>(key, value);
collection.Add(element);
}
public static int TryGetValue<TKey, TValue>(this List<KeyValuePair<TKey, TValue>> collection, TKey key, out IEnumerable<TValue> values)
{
values = collection.Where(pair => pair.Key.Equals(key)).Select(pair => pair.Value);
return values.Count();
}
}
A: This is a two-way concurrent dictionary; I think it will help you:
public class HashMapDictionary<T1, T2> : System.Collections.IEnumerable
{
private System.Collections.Concurrent.ConcurrentDictionary<T1, List<T2>> _keyValue = new System.Collections.Concurrent.ConcurrentDictionary<T1, List<T2>>();
private System.Collections.Concurrent.ConcurrentDictionary<T2, List<T1>> _valueKey = new System.Collections.Concurrent.ConcurrentDictionary<T2, List<T1>>();
public ICollection<T1> Keys
{
get
{
return _keyValue.Keys;
}
}
public ICollection<T2> Values
{
get
{
return _valueKey.Keys;
}
}
public int Count
{
get
{
return _keyValue.Count;
}
}
public bool IsReadOnly
{
get
{
return false;
}
}
public List<T2> this[T1 index]
{
get { return _keyValue[index]; }
set { _keyValue[index] = value; }
}
public List<T1> this[T2 index]
{
get { return _valueKey[index]; }
set { _valueKey[index] = value; }
}
public void Add(T1 key, T2 value)
{
lock (this)
{
if (!_keyValue.TryGetValue(key, out List<T2> result))
_keyValue.TryAdd(key, new List<T2>() { value });
else if (!result.Contains(value))
result.Add(value);
if (!_valueKey.TryGetValue(value, out List<T1> result2))
_valueKey.TryAdd(value, new List<T1>() { key });
else if (!result2.Contains(key))
result2.Add(key);
}
}
public bool TryGetValues(T1 key, out List<T2> value)
{
return _keyValue.TryGetValue(key, out value);
}
public bool TryGetKeys(T2 value, out List<T1> key)
{
return _valueKey.TryGetValue(value, out key);
}
public bool ContainsKey(T1 key)
{
return _keyValue.ContainsKey(key);
}
public bool ContainsValue(T2 value)
{
return _valueKey.ContainsKey(value);
}
public void Remove(T1 key)
{
lock (this)
{
if (_keyValue.TryRemove(key, out List<T2> values))
{
foreach (var item in values)
{
var remove2 = _valueKey.TryRemove(item, out List<T1> keys);
}
}
}
}
public void Remove(T2 value)
{
lock (this)
{
if (_valueKey.TryRemove(value, out List<T1> keys))
{
foreach (var item in keys)
{
var remove2 = _keyValue.TryRemove(item, out List<T2> values);
}
}
}
}
public void Clear()
{
_keyValue.Clear();
_valueKey.Clear();
}
IEnumerator IEnumerable.GetEnumerator()
{
return _keyValue.GetEnumerator();
}
}
examples:
public class TestA
{
public int MyProperty { get; set; }
}
public class TestB
{
public int MyProperty { get; set; }
}
HashMapDictionary<TestA, TestB> hashMapDictionary = new HashMapDictionary<TestA, TestB>();
var a = new TestA() { MyProperty = 9999 };
var b = new TestB() { MyProperty = 60 };
var b2 = new TestB() { MyProperty = 5 };
hashMapDictionary.Add(a, b);
hashMapDictionary.Add(a, b2);
hashMapDictionary.TryGetValues(a, out List<TestB> result);
foreach (var item in result)
{
//do something
}
A: I use this simple class:
public class ListMap<T,V> : List<KeyValuePair<T, V>>
{
public void Add(T key, V value) {
Add(new KeyValuePair<T, V>(key, value));
}
public List<V> Get(T key) {
return FindAll(p => p.Key.Equals(key)).ConvertAll(p=> p.Value);
}
}
usage:
var fruits = new ListMap<int, string>();
fruits.Add(1, "apple");
fruits.Add(1, "orange");
var c = fruits.Get(1).Count; //c = 2;
A: Also this is possible:
Dictionary<string, string[]> previousAnswers = null;
This way, we can have unique keys. Hope this works for you.
A: Duplicate keys break the entire contract of the Dictionary. In a dictionary each key is unique and mapped to a single value. If you want to link an object to an arbitrary number of additional objects, the best bet might be something akin to a DataSet (in common parlance a table). Put your keys in one column and your values in the other. This is significantly slower than a dictionary, but that's your tradeoff for losing the ability to hash the key objects.
A: You can define a method for building a compound string key.
Everywhere you want to use the dictionary, you must use this method to build your key.
for example:
private string keyBuilder(int key1, int key2)
{
return string.Format("{0}/{1}", key1, key2);
}
for using:
myDict.ContainsKey(keyBuilder(key1, key2))
A: You can add the same key with different casing, like:
key1
Key1
KEY1
KeY1
kEy1
keY1
I know it's a dummy answer, but it worked for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "290"
} |
Q: What do you use for fixed point representation in C++? I'm looking for a fixed-point standard to use for financial data. Do you know any that are worth trying? Do you have any experience with the performance of such hand-made fixed-point classes?
A: Dr.Dobb's has an article about a possible implementation of fixed-point arithmetic type in C++. Check this out.
A: Ouch. Financial systems are tricky; your main problem is not fixed-point math, the problem is the rounding errors.
You can have a nice spreadsheet full of marvelous calculations with discounts by client type and VAT included. You make a total, you present it to an accountant, and he says the values are all wrong. The reason: the output may be formatted with only 2 decimal places, but internally the value has all the decimal places of a float or double, and they do add up.
You need to know your financials and decide where the base values will be, meaning which values are the ones the accountants will check (yes, it requires business knowledge, hence the 'tricky' part).
Then, before you save the value to a persistent form (database, file, memory...), you truncate the extra decimal places that multiplications and divisions may have added.
Quick and dirty solution for N decimal places:
((double)((long long)(Value * pow(10.0, N)))) / pow(10.0, N)
Of course you need to check exactly which kind of rounding your financials require.
A: I use my fixed point math class. It is designed to be more or less a drop in replacement for floats/doubles. http://codef00.com/coding
EDIT: As a side note, I would not personally use a fixed-point class for this purpose. I would instead just store the number of cents (or tenths of a cent, or hundredths of a cent, as needed) and do the math directly with that. Then I would scale the value appropriately when displaying to the users.
A: IBM's decNumber++
A: ISO specified a decimal extension to C, TR 24732, and to C++, TR 24733. They are available for money on the ISO website. It's not yet part of any published C++ Standard. GCC provides built-in types and a library implementation of it. Another implementation is available from Intel. The most recent push for having this included in C++ is here.
A: A 64-bit int type should suffice for representing all financial values in cents.
You just need to be careful to round percentages correctly, for some definition of correct.
A: Trying to answer directly
Markus Trenkwalder has one that supports some math functions - http://www.trenki.net/content/view/17/1/:
The library consists of various functions for dealing with fixed point numbers (multiplication, division, inversion, sin, cos, sqrt, rsqrt). It also contains a C++ wrapper class which can be used to simplify working with fixed points numbers greatly. I used this fixed point number class in conjunction with my vector_math library to obtain a fixed point vector math library. Doing so made the 3D computations a lot faster compared to the floating point version.
The author made it a point to say his platform does not support floating point, though; that's why he did it. Also, note that it's for 3D rendering; the question was about financial data, and we want a good library of math functions...
IEEE 754-2008 Decimal Floating-Point Arithmetic specification, aimed at financial applications
This looks like an established way of handling financial data with good support (from Intel and IEEE) - http://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library
To quote:
IEEE 754-2008 Decimal Floating-Point Arithmetic specification, aimed at financial applications, especially in cases where legal requirements make it necessary to use decimal, and not binary floating-point arithmetic (as computation performed with binary floating-point operations may introduce small, but unacceptable errors).
It is NOT fixed-point though, but I thought it is pretty useful for people seeking an answer to this question.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Best way to search in a table and get results and number of results (MySQL) I have a table of "items", and a table of "itemkeywords".
When a user searches for a keyword, I want to give him one page of results plus the total number of results.
What I'm doing currently is (for a user that searches "a b c"):
SELECT DISTINCT {fields I want} FROM itemkeywords JOIN items
WHERE (keyword = 'a' OR keyword = 'b' OR keyword = 'c')
ORDER BY "my magic criteria"
LIMIT 20, 10
and then I do the same query with a count
SELECT COUNT(*) FROM itemkeywords JOIN items
WHERE (keyword = 'a' OR keyword = 'b' OR keyword = 'c')
This may grow into a fairly large table, and I consider this solution to suck enormously...
But I can't think of anything much better.
The obvious alternative to avoid hitting MySQL twice (doing the first query only, without the LIMIT clause, then navigating to the correct record to show the corresponding page, and then to the end of the recordset in order to count the results) seems even worse...
Any ideas?
NOTE: I'm using ASP.Net and MySQL, not PHP
A: Add SQL_CALC_FOUND_ROWS after the select in your limited select, then do a "SELECT FOUND_ROWS()" after the first select is finished.
Example:
mysql> SELECT SQL_CALC_FOUND_ROWS * FROM tbl_name
-> WHERE id > 100 LIMIT 10;
mysql> SELECT FOUND_ROWS();
A: If you're really worried about performance and you do end up needing to make two queries, you might want to consider caching the total number of matches, since that wouldn't change as the user browsed the pages of results.
A: You have 2 options:
The MySQL API should have a function that returns the number of rows. Using the older API it's mysql_num_rows(). This won't work if you are using an unbuffered query.
The easier method might be to combine both your queries:
SELECT DISTINCT {fields I want}, count(*) as results
FROM itemkeywords JOIN items
WHERE (keyword = 'a' OR keyword = 'b' OR keyword = 'c')
ORDER BY "my magic criteria"
LIMIT 20, 10
I did some tests, and the count(*) function isn't affected by the LIMIT clause. I would test this with DESCRIBE first. I don't know how much it would affect the speed of your query. A query that only has to return the first 10 results should be faster than one that has to find all the results for the count and then the first 10, but I might be wrong here.
A: You might look at MySQL SQL_CALC_FOUND_ROWS in your first statement followed by a second statement SELECT FOUND_ROWS() which at least prevents you doing 2 data queries, but will still tend to do an entire table scan once.
See http://dev.mysql.com/doc/refman/5.0/en/select.html and
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html
Better to consider: do you really need this feature?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: nant: Can I extract the last directory in a path? In Nant, I would like to be able to extract the last name of the directory in a path.
For example, we have the path 'c:\my_proj\source\test.my_dll\'
I would like to pass in that path and extract 'test.my_dll'
Is there a way to easily do this?
A: You can actually do it with existing NAnt string functions. Just a bit ugly...
${string::substring(path, string::last-index-of(path, '\') + 1, string::get-length(path) - string::last-index-of(path, '\') - 1)}
A: It is possible to find the parent directory of your path and then use string replace to find the folder you're looking for:
<property name="some.dir" value="c:\my_proj\source\test.my_dll" />
<property name="some.dir.parent" value="${directory::get-parent-directory(some.dir)}" />
<property name="directory" value="${string::replace(some.dir, some.dir.parent + '\', '') }" />
A: You may want to try the new function added to nant 0.93 (still in the nightly builds though) -
directory::get-name(path)
This would return the name of the directory mentioned in the path.
Refer to nant help
A: See the script task. You can write custom code in C# or whatever, and return a value that you can assign to a property.
A: No. You'll need to write a custom task for something like that.
A: Expanding on Steve K:
<script language="C#" prefix="path" >
<code>
<![CDATA[
[Function("get-dir-name")]
public static string GetDirName(string path) {
return System.IO.Path.GetFileName(path);
}
]]>
</code>
</script>
<target name="build">
<foreach item="Folder" in="." property="path">
<echo message="${path::get-dir-name(path)}" />
</foreach>
</target>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Push or Pull? Turning keypresses into velocity for in-game vehicles Should I push keypresses to vehicles when they're pressed, or should vehicles pull keys pressed from the engine?
I have a vehicle object, which has location, velocity and acceleration members (among other things) and an update method, during which it updates its location based on its velocity, and its velocity based on its acceleration.
I have a game object which contains the game loop, which calls the update method on the vehicle.
If the player controls the vehicle with the arrow keys, should a keypress set the acceleration (push) and a key-release clear the velocity, or should the vehicle ask the game engine if the accelerate key is pressed (pull)? I think push would mean that the keyboard control module would need to know about vehicles, whereas pull would mean a vehicle needs to know about specific keyboard controls.
I think a related question would be something like: should all objects know about all other objects, or should there be a strict hierarchy, so objects can ask things / tell things to other objects up the tree, but not down (or vice-versa)?
A: You should try to follow a Subscriber/Observer pattern.
You put all the key capture code into one singleton InputManager and then each object that requires reaction to input registers with the manager.
The manager holds the list of subscribed objects and sends events to them when the keys are pressed/depressed.
Just don't forget to unsubscribe when the object is deleted or 'loses focus'.
This avoids the polling problem.
There are very few exceptions where a polling solution is desirable.
A: IMO, your vehicle shouldn't know anything about keyboards, mice, or gamepads. And neither should your input handling code know anything about your vehicles. The input handling code should read the input for each player, and translate it into some sort of instruction specific for their context. For example, if player one is driving a car, his instruction might include steering wheel rotation, acceleration, and brake values. While a player piloting a plane might require pitch, yaw, etc.
By translating the gamepad input (or whatever) to the appropriate instruction type, you can decouple input mechanisms from game logic. One thing that would be possible with this level of decoupling would be to create a "CarInstruction" from network input.
A: Answering this question is hard without more intimate knowledge about how your game engine works. That being said, I'll take a stab at it. The "push keyboard presses" approach reads to me like an "event" or "callbacks" strategy. You define a function somewhere that looks like def handle_key_event(name_of_key): that gets called whenever a key event occurs. The advantage of this is that from a readability perspective, you know exactly where key events are being handled. On the downside, each key press needs to be treated as an atomic operation. If you need to keep lots of state variables around on the state of other keys to determine what to do on each press, it can get a little messy.
On the other hand, if you pull key presses, you introduce an inherent delay in catching key presses. You won't catch key events any faster than your tickrate/framerate. This is fine if your game is ticking away nice and fast, but you don't want to have the UI become all jumpy/laggy when your framerate slows down.
Just food for thought, I guess. Above all, pick a strategy and stick with it. If keyboard events are a callback, don't use the "pull" approach for your mouse events, for instance. Consistency is more important than correctness here IMO.
A: @Joel: I agree - vehicles shouldn't know about specific hardware controls, and input handling code shouldn't know anything about vehicles. There should be some intermediate class which maps from keys to vehicles. Thanks for the contribution!
A: you want to poll:
void UpdateVehicleFromInput()
{
if (InputSystem()->IsKeyDown(key))
DoSomething();
}
And this is of course somewhere in your update loop, wherever is appropriate for your design. If you want to call that particular place 'part of your input system' or 'part of your game logic' or whatever else, knock yourself out.
This way you know why you're doing something (because the key is down), you can adjust the conditions trivially, you know you're doing something once and exactly once (and you can change it without ramification, particularly if the vehicle doesn't exist), and you know when you are doing something (before or after you, say, respond to damage, or position your particle effects, or who knows what else).
Abstracting the input system can be valid if you really are doing cross-platform development. For casual development, it's very unnecessary (but a fun technical distraction when you run out of game design ideas to implement).
Contrary to irrational popular belief, there's no downside to polling. Processors do more than a billion things a second; one if-check a frame is irrelevant (pretty much the only relevant CPU operations are N^2 where N > 100, blowing your L2 cache, and of course busy-waiting for disk access). Polling input is O(1).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Change WPF DataTemplate for ListBox item if selected I need to change the DataTemplate for items in a ListBox depending on whether the item is selected or not (displaying different/more information when selected).
I don't get a GotFocus/LostFocus event on the top-most element in the DataTemplate (a StackPanel) when clicking the ListBox item in question (only through tabbing), and I'm out of ideas.
A: It should also be noted that the StackPanel isn't focusable, so it's never going to get focus (set Focusable="True" if you /really/ want it focused). However, the key thing to remember in scenarios like this is that the StackPanel is a child of the ListBoxItem, which is the ItemContainer in this case. As Micah suggests, tweaking the ItemContainerStyle is a good approach.
You could probably do it using DataTemplates and things such as DataTriggers, which would use the RelativeSource markup extension to look for the ListBoxItem.
A: The easiest way to do this is to supply a template for the "ItemContainerStyle" and NOT the "ItemTemplate" property. In the code below I create 2 data templates: one for the "unselected" and one for the "selected" states. I then create a template for the "ItemContainerStyle" which is the actual "ListBoxItem" that contains the item. I set the default "ContentTemplate" to the "Unselected" state, and then supply a trigger that swaps out the template when the "IsSelected" property is true. (Note: I am setting the "ItemsSource" property in the code behind to a list of strings for simplicity)
<Window.Resources>
<DataTemplate x:Key="ItemTemplate">
<TextBlock Text="{Binding}" Foreground="Red" />
</DataTemplate>
<DataTemplate x:Key="SelectedTemplate">
<TextBlock Text="{Binding}" Foreground="White" />
</DataTemplate>
<Style TargetType="{x:Type ListBoxItem}" x:Key="ContainerStyle">
<Setter Property="ContentTemplate" Value="{StaticResource ItemTemplate}" />
<Style.Triggers>
<Trigger Property="IsSelected" Value="True">
<Setter Property="ContentTemplate" Value="{StaticResource SelectedTemplate}" />
</Trigger>
</Style.Triggers>
</Style>
</Window.Resources>
<ListBox x:Name="lstItems" ItemContainerStyle="{StaticResource ContainerStyle}" />
A: To set the style when the item is selected or not all you need to do is to retrieve the ListBoxItem parent in your <DataTemplate> and trigger style changes when its IsSelected changes. For example the code below will create a TextBlock with default Foreground color green. Now if the item gets selected the font will turn red and when the mouse is over the item will turn yellow. That way you don't need to specify separate data templates as suggested in other answers for every state you'd like to slightly change.
<DataTemplate x:Key="SimpleDataTemplate">
<TextBlock Text="{Binding}">
<TextBlock.Style>
<Style>
<Setter Property="TextBlock.Foreground" Value="Green"/>
<Style.Triggers>
<DataTrigger Binding="{Binding Path=IsSelected, RelativeSource={
RelativeSource Mode=FindAncestor, AncestorType={x:Type ListBoxItem }}}"
Value="True">
<Setter Property="TextBlock.Foreground" Value="Red"/>
</DataTrigger>
<DataTrigger Binding="{Binding Path=IsMouseOver, RelativeSource={
RelativeSource Mode=FindAncestor, AncestorType={x:Type ListBoxItem }}}"
Value="True">
<Setter Property="TextBlock.Foreground" Value="Yellow"/>
</DataTrigger>
</Style.Triggers>
</Style>
</TextBlock.Style>
</TextBlock>
</DataTemplate>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92"
} |
Q: Default pass-by-reference semantics in C++ EDIT: This question is more about language engineering than C++ itself. I used C++ as an example to show what I wanted, mostly because I use it daily. I didn't want to know how it works on C++ but open a discussion on how it could be done.
That's not the way it works right now; that's the way I wish it could be done, and that would break C compatibility for sure, but that's what I think extern "C" is all about.
I mean, in every function or method that you declare right now, you have to explicitly write the reference operator to say that the object will be sent by reference. I wish that every non-POD type would automatically be sent by reference, because I use that a lot, actually for every object that is more than 32 bits in size, and that's almost every class of mine.
Let's exemplify how it's right now, assume a, b and c to be classes:
class example {
public:
int just_use_a(const a &object);
int use_and_mess_with_b(b &object);
void do_nothing_on_c(c object);
};
Now what I wish:
class example {
public:
int just_use_a(const a object);
int use_and_mess_with_b(b object);
extern "C" void do_nothing_on_c(c object);
};
Now, do_nothing_on_c() could behave just like it is today.
That would be interesting at least for me, feels much clearer, and also if you know every non-POD parameter is coming by reference I believe the mistakes would be the same as if you had to explicitly declare it.
Another point of view for this change, from someone coming from C: the reference operator seems to me a way to get the variable's address; that's the way I used it for getting pointers. I mean, it is the same operator but with different semantics in different contexts; doesn't that feel a little bit wrong to you too?
A: I'd rather not abuse references any more by making every (non-qualified) parameter a reference.
The main reason references were added to C++ was to support operator overloading; if you want "pass-by-reference" semantics, C had a perfectly reasonable way of doing it: pointers.
Using pointers makes clear your intention of changing the value of the pointed object, and it is possible to see this by just looking at the function call, you don't have to look at the function declaration to see if it's using a reference.
Also, see
I do want to change the argument,
should I use a pointer or should I use
a reference? I don't know a strong
logical reason. If passing ``not an
object'' (e.g. a null pointer) is
acceptable, using a pointer makes
sense. My personal style is to use a
pointer when I want to modify an
object because in some contexts that
makes it easier to spot that a
modification is possible.
from the same FAQ.
A: I guess you're missing the point of C++, and C++ semantics. You missed the fact C++ is correct in passing (almost) everything by value, because it's the way it's done in C. Always. But not only in C, as I'll show you below...
Parameters Semantics on C
In C, everything is passed by value. "primitives" and "PODs" are passed by copying their value. Modify them in your function, and the original won't be modified. Still, the cost of copying some PODs could be non-trivial.
When you use the pointer notation (the * ), you're not passing by reference. You're passing a copy of the address. Which is more or less the same, with but one subtle difference:
typedef struct { int value ; } P ;
/* p is a pointer to P */
void doSomethingElse(P * p)
{
p->value = 32 ;
p = malloc(sizeof(P)) ; /* Don't bother with the leak */
p->value = 42 ;
}
void doSomething()
{
P * p = malloc(sizeof(P)) ;
p->value = 25 ;
doSomethingElse(p) ;
int i = p->value ;
/* Value of p->value ? 25 ? 32 ? 42 ? */
}
The final value of p->value is 32. Because p was passed by copying the value of the address. So the original p was not modified (and the new one was leaked).
Parameters Semantics on Java and C Sharp
It can be surprising to some, but in Java, everything is copied by value, too. The C example above would give exactly the same results in Java. This is almost what you want, but you would not be able to pass primitives "by reference/pointer" as easily as in C.
In C#, they added the "ref" keyword. It works more or less like the reference in C++. The point is, on C#, you have to mention it both on the function declaration, and on each and every call. I guess this is not what you want, again.
Parameters Semantics on C++
In C++, almost everything is passed by copying the value. When you're using nothing but the type of the symbol, you're copying the symbol (like it is done in C). This is why, when you're using the *, you're passing a copy of the address of the symbol.
But when you're using the &, you are passing the real object (be it struct, int, pointer, whatever): the reference.
It is easy to mistake it for syntactic sugar (i.e., behind the scenes it works like a pointer, and the generated code is the same as for a pointer). But...
The truth is that a reference is more than syntactic sugar.
*
*Unlike pointers, it authorizes manipulating the object as if on stack.
*Unlike pointers, when associated with the const keyword, it authorizes implicit promotion from one type to another (through constructors, mainly).
*Unlike pointers, the symbol is not supposed to be NULL/invalid.
*Unlike the "by-copy", you are not spending useless time copying the object
*Unlike the "by-copy", you can use it as an [out] parameter
*Unlike the "by-copy", you can use the full range of OOP in C++ (i.e. you pass a full object to a function waiting an interface).
So, references have the best of both worlds.
Let's see the C example, but with a C++ variation on the doSomethingElse function:
struct P { int value ; } ;
// p is a reference to a pointer to P
void doSomethingElse(P * & p)
{
p->value = 32 ;
p = (P *) malloc(sizeof(P)) ; // Don't bother with the leak
p->value = 42 ;
}
void doSomething()
{
P * p = (P *) malloc(sizeof(P)) ;
p->value = 25 ;
doSomethingElse(p) ;
int i = p->value ;
// Value of p->value ? 25 ? 32 ? 42 ?
}
The result is 42, and the old p was leaked, replaced by the new p. Because, unlike C code, we're not passing a copy of the pointer, but the reference to the pointer, that is, the pointer itself.
When working with C++, the above example must be crystal clear. If it is not, then you're missing something.
Conclusion
C++ is pass-by-copy/value because it is the way everything works, be it in C, in C# or in Java (even in JavaScript... :-p ...). And like C#, C++ has a reference operator/keyword, as a bonus.
Now, as far as I understand it, you are perhaps doing what I half-jokingly call C+, that is, C with some limited C++ features.
Perhaps your solution is using typedefs (it will enrage your C++ colleagues, though, to see the code polluted by useless typedefs...), but doing this will only obfuscate the fact you're really missing C++ there.
As said in another post, you should change your mindset from C development to C++ development, or you should perhaps move to another language. But do not keep programming the C way with C++ features, because by consciously ignoring/obfuscating the power of the idioms you use, you'll produce suboptimal code.
Note: And do not pass by copy anything else than primitives. You'll castrate your function from its OO capacity, and in C++, this is not what you want.
Edit
The question was somewhat modified (see https://stackoverflow.com/revisions/146271/list ). I let my original answer, and answer the new questions below.
What you think about default pass-by-reference semantics on C++? Like you said, it would break compatibility, and you'll have different pass-by for primitives (i.e. built-in types, which would still be passed by copy) and structs/objects (which would be passed as references). You would have to add another operator to mean "pass-by-value" (the extern "C" is quite awful and already used for something else quite different). No, I really like the way it is done today in C++.
[...] the reference operator seems to me a way to get the variable address, that's the way I used for getting pointers. I mean, it is the same operator but with different semantic on different contexts, doesn't that feel a little bit wrong for you too? Yes and no. Operator >> changed its semantics when used with C++ streams, too. Then, you can use operator += to replace strcat. I guess the operator & got used because of its significance as the "opposite of pointer", and because they did not want to use yet another symbol (ASCII is limited, and the scope operator :: as well as pointer -> show that few other symbols are usable). But now, if & bothers you, && will really unnerve you, as they added a unary && in C++0x (a kind of super-reference...). I've yet to digest it myself...
A: A compiler option that totally changes the meaning of a section of code sounds like a really bad idea to me. Either get use to the C++ syntax or find a different language.
A: I honestly think that this whole passing by value/passing by reference idea in C++ is misleading. Everything is pass by value. You have three cases:
*
*Where you pass a local copy of a variable
void myFunct(int cantChangeMyValue) {
    cantChangeMyValue = 10; // only the local copy changes
}
*Where you pass a local copy of a pointer to a variable
void myFunct(int* cantChangeMyAddress) {
*cantChangeMyAddress = 10;
}
*Where you pass a 'reference', but through compiler magic it's just as if you passed a pointer and simply dereferenced it every time.
void myFunct(int & hereBeMagic) {
hereBeMagic = 10; // same as 2, without the dereference
}
I personally find it much less confusing to remember that everything is pass by value. In some cases, that value might be an address, which allows you to change things outside the function.
What you are suggesting would not allow the programmer to do number 1. I personally think it would be a bad idea to take away that option. One major plus of C/C++ is having fine-grained memory management. Making everything pass-by-reference is simply trying to make C++ more like Java.
A: Yeah, I'm of the opinion that that's a pretty confusing overload.
This is what microsoft has to say about the situation:
Do not confuse reference declarations with use of the address-of operator. When & identifier is preceded by a type, such as int or char, then identifier is declared as a reference to the type. When & identifier is not preceded by a type, the usage is that of the address-of operator.
I'm not really great on C or C++, but I get bigger headaches sorting out the various uses of * and & on both languages than I do coding in assembler.
A: The best advice is to make a habit of thinking about what you really want to happen. Passing by reference is nice when you don't have a copy constructor (or don't want to use it) and it's cheaper for large objects. However, then mutations to the parameter are felt outside the class. You could instead pass by const reference -- then there are no mutations but you cannot make local modifications. Pass const by-value for cheap objects that should be read-only in the function and pass non-const by-value when you want a copy that you can make local modifications to.
Each permutation (by-value/by-reference and const/non-const) has important differences that are definitely not equivalent.
A: When you pass by value, you are copying data to the stack. In the event that you have an operator= defined for the struct or class that you are passing it, it gets executed. There is no compiler directive I am aware of that would wash away the rigmarole of implicit language confusion that the proposed change would inherently cause.
A common best practice is to pass values by const reference, not just by reference. This ensures that the value cannot be changed in the calling function. This is one element of a const-correct codebase.
A fully const-correct codebase goes even further, adding const to the end of prototypes. Consider:
void Foo::PrintStats( void ) const {
/* Cannot modify Foo member variables */
}
void Foo::ChangeStats( void ) {
/* Can modify foo member variables */
}
If you were to pass a Foo object in to a function, prefixed with const, you are able to call PrintStats(). The compiler would error out on a call to ChangeStats().
void ManipulateFoo( const Foo &foo )
{
foo.PrintStats(); // Works
foo.ChangeStats(); // Oops; compile error
}
A: There is something not clear. When you say:
int b(b &param);
what did you intend for the second 'b'? Did you forget to introduce a type? Did you forget to write it differently from the first 'b'? Don't you think it's clearer to write something like:
class B{/*something...*/};
int b(B& param);
Since now, I suppose that you mean what I write.
Now, your question is "wouldn't it be better if the compiler treated every pass-by-value of a non-POD as pass-by-reference?".
The first problem is that it would break your contract. I suppose you mean pass-by-CONST-reference, and not just by reference.
Your question now is reduced to this one: "do you know if there's some compiler directive that can optimize function calls by value?"
The answer now is "I don't know".
A: I think that C++ becomes very messy if you start to mix all the kinds of available parameters, with their const variations.
It rapidly gets out of hand to track all the copy constructor calls, all the overloaded dereferences, and so on.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Function pointer to template class member functions I have a templated class defined (in part) as
template <class T> class MyClass
{
public:
void DoSomething(){}
};
If I want to call DoSomething from another class, but be able to do this for multiple 'T' types in the same place, I am stuck for an idea, as member function pointers are uniquely constrained to the class type. Of course, each MyClass<T> is a different type, so I cannot store function pointers to MyClass<T>::DoSomething() in a 'polymorphic' way.
My use-case is I want to store, in a holding class, a vector of function pointers to 'DoSomething' such that I can issue a call to all stored classes from one place.
Has anyone any suggestions?
A: Ok, so the functor solution doesn't work as you need. Perhaps you should have your template class inherit from a common base "Interface" class. And then you use a vector of those.
Something like this:
class Base {
public:
virtual ~Base(){}
virtual void DoSomething() = 0;
};
template <class T> class MyClass : public Base {
public:
void DoSomething(){}
};
std::vector<Base *> objects;
objects.push_back(new MyClass<int>);
objects.push_back(new MyClass<char>);
A: You know, that is just what I needed to do. Bizarrely I had discounted it as a valid solution for my use case early on, for reasons that now escape me. I think I was blinded by some metaprogramming stuff I'm doing in the same place for compile-time dispatch (i.e. confusing compile time and runtime in my addled brain).
Thanks for the jolts!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Manpage scandir() prototype weirdness I have a problem with scandir(): The manpage contains this as prototype:
int scandir(const char *dir, struct dirent ***namelist,
int (*filter)(const struct dirent *),
int (*compar)(const struct dirent **, const struct dirent **));
Therefore I have this:
static inline int
RubyCompare(const struct dirent **a,
const struct dirent **b)
{
return(strcmp((*a)->d_name, (*b)->d_name));
}
And here's the call:
num = scandir(buf, &entries, NULL, RubyCompare);
Finally the compiler says this:
warning: passing argument 4 of ‘scandir’ from incompatible pointer type
Compiler is gcc-4.3.2, my CFLAGS are following:
-Wall -Wpointer-arith -Wstrict-prototypes -Wunused -Wshadow -std=gnu99
What is the meaning of this warning? The declaration of RubyCompare looks correct to me, and besides the warning the code works completely.
A: Actually, there's no such constraint that you can't pass a pointer to an inline function. The inline keyword serves only as a hint to the compiler to inline calls when it can.
The problem is that the manpage for scandir() is a little misleading. The prototype for the 4th parameter is actually int (*cmp)(const void *, const void *).
Therefore you need to change the code like so:
static inline int RubyCompare(const void *a, const void *b)
{
return(strcmp((*(struct dirent **)a)->d_name,
(*(struct dirent **)b)->d_name));
}
I'm not actually sure why you're writing this function, though, because you can use the provided alphasort compare function:
num = scandir(buf, &entries, NULL, alphasort);
A: This prototype has actually changed in recent versions of GNU libc to reflect the POSIX standard.
If you have code that you want to work on both old and new code, then use the __GLIBC_PREREQ macro something like
#define USE_SCANDIR_VOIDPTR
#if defined( __GLIBC_PREREQ )
# if __GLIBC_PREREQ(2,10)
# undef USE_SCANDIR_VOIDPTR
# endif
#endif
#ifdef USE_SCANDIR_VOIDPTR
static int RubyCompare(const void *a, const void *b)
#else
static int RubyCompare(const struct dirent **a, const struct dirent **b)
#endif
...
A: You're giving it a pointer to an inline function? That doesn't make sense, actually I wonder that it even compiles with only a warning.
EDIT: Chris above is right, the inline keyword is just ignored silently when it doesn't make sense / is not applicable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Hidden Features of Xcode With a huge influx of newbies to Xcode, I'm sure there are lots of Xcode tips and tricks to be shared.
What are yours?
A: Open Quickly...
*
*Command ⌘ Shift ⇧ D
*File > Open Quickly...
I'm a big fan of the Open Quickly feature, which is particularly good in Xcode 3.1 and later. When you want to open a file or a symbol definition that's in your project or in a framework, just hit the keyboard shortcut, type a bit of the file or symbol's name, use Up Arrow ↑ and Down Arrow ↓ to pick to the right result (if need be), and then hit Return ↩ to open the file or navigate to the symbol definition.
On Xcode 4:
*
*Command ⌘ Shift ⇧ o
Open Quickly uses the current word as a search term
Also, something I didn't know about Xcode until two minutes ago (when schwa pointed it out in a comment) is that, if the editor's text caret is inside of a word when Open Quickly is invoked, that word will be used as the Open Quickly search term.
A: Use ^T to swap the previous two letters
This works in all Cocoa apps, but I like it especially when coding. Use ^T (Control-T) to swap the two letters adjacent to the caret, or when the caret is at the end, the two letters before the caret. For example:
fi^T
... becomes:
if
... which is a common typo I make.
A: Recompile-free debug logging
cdespinosa's answer to Stack Overflow question How do I debug with NSLog(@“Inside of the iPhone Simulator”)? gives a method for a debugging-via-logging technique that requires no recompilation of source. An amazing trick that keeps code free of debugging cruft, has a quick turnaround, and would have saved me countless headaches had I known about it earlier.
TODO comments
Prefixing a comment with TODO: will cause it to show up in the function "shortcut" dropdown menu, a la:
int* p(0); // TODO: initialize me!
A: Right click on any word and select 'Find Selected Text in API Reference' to search the API for that word. This is very helpful if you need to look up the available properties and/or methods for a class. Instead of heading to Apple.com or Google you will get a popup window of what you were looking for (or what was found).
A: In PyObjC, you can do the equivalent of #pragma mark for the symbols dropdown:
#MARK: Foo
and
#MARK: -
A: ⌘` to properly format (reindent) your code
EDIT: Apparently re-indent feature (Edit > Format > Reindent) has no default shortcut. I guess I assigned one (in Preferences > Key bindings) a long time ago and don't even remember about that. Sorry for misleading you.
A: Build success/failure noise; from term:
defaults write com.apple.Xcode PBXBuildSuccessSound ~/Library/Sounds/metal\ stamp.wav
defaults write com.apple.Xcode PBXBuildFailureSound ~/Library/Sounds/Elephant
A: For me it’s always been: Command ⌘ + 0:
After you debug or run or anything, if you quit the iPhone Simulator or the debugging app, you’re left with the debugger window.
When you’re using “Single-Window Layout”, going back to the editor must be done with a click in the toolbar which is annoying (plus you later need to “remove the detail pane”).
The above shortcut does it and leaves you ready to code.
A: Use #pragma for organization
You can use:
#pragma mark Foo
... as a way to organize methods in your source files. When browsing symbols via the pop up menu, whatever you place in Foo will appear bold in the list.
To display a separator (i.e. horizontal line), use:
#pragma mark -
It's very useful, especially for grouping together delegate methods or other groups of methods.
A: 1. Breakpoint on "objc_exception_throw"
You should always have a breakpoint on objc_exception_throw.
2. Debugging retain/release problems with "Zombie" variables
Set the following environment variables on your executable (these are environment variables, not lines of code):
NSZombieEnabled = YES
NSDeallocateZombies = NO
... to debug retain and release problems. For more information, see the "Finding Memory Leaks" section of Apple's Debugging Applications document.
3. Jumping to a class in Xcode from Interface Builder
Command ⌘ + Double-click on an object in Interface Builder's Document Window to jump to that class in Xcode. This is very handy with File's Owner.
4. Reusing customized objects in Interface Builder
Drag a customized object back to Interface Builder's Library for later reuse.
5. Select overlapping items in Interface Builder
Control ⌃ Shift ⇧ + Click on an object in Interface Builder to see a menu of all of the objects under the mouse.
6. Interface Builder Gesture Guide
Interface Builder Gesture Guide.
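A note on tip 2: NSZombieEnabled and NSDeallocateZombies are environment variables, so you set them on the executable in Xcode, or in the shell when launching a debug build by hand. A minimal sketch, with a made-up application path:

```shell
#!/bin/sh
# Zombies keep deallocated objects around so that messages sent to freed
# objects are reported instead of crashing unpredictably.
export NSZombieEnabled=YES
export NSDeallocateZombies=NO
echo "zombies: $NSZombieEnabled, deallocate: $NSDeallocateZombies"
# ./MyApp.app/Contents/MacOS/MyApp   # hypothetical path to your debug binary
```

The commented-out launch line is only an example; the point is that the variables must be present in the environment of the process being debugged.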
A: Ctrl + 2: Access the popup list of methods and symbols in the current file.
This is super useful because with this shortcut you can navigate through a file entirely using the keyboard. When you get to the list, start typing characters and the list will type-select to the symbol you are looking for.
A: Zoom Editor In
If your window displays both the detail and editor view, you can zoom the editor in to expand the editor view to the full height of the window. (This is fairly easily found, but many seem to overlook it.)
You can do this by using one of the following methods:
*
*Command ⌘ Shift ⇧ E
*View > Zoom Editor In
*Drag the splitter (between the editor window and the file list above it) upwards.
A: *
*Hold down option while selecting text to select non-contiguous sections of text.
*Hold down option while clicking on the symbol name drop down to sort by name rather than the order they appear in the file.
A: Being able to split the current editor window horizontally, which is great for wide screen monitors to be able to view the source and header file side by side. There are two different methods for doing this, depending on which version of Xcode you are using.
In Xcode 3.0 it is under Preferences, Key Bindings, Text Key Bindings at the bottom of that list.
In Xcode 2.5 it is under Preferences, Key Bindings, Menu Key Bindings, View menu.
A: ⇧⌘A. It will build and analyze, meaning that Xcode will warn you about possible leaks.
A: Cmd + Ctrl + up / down collapses all of your functions or uncollapses them.
A: One more: Hex Color Picker.
It adds a hex tab to Interface Builder's color panel, so now you can use
hex colors directly from Interface Builder.
A: Highlight Blocks of Code (Focus Follows Selection)
Activate "Focus Follow Selection" from View -> Code Folding -> Focus Follows Selection or ControlOptionf.
This also works for Python code, but leading whitespace in a line will throw it off. To fix it, install Google's Xcode Plugin and activate "Correct Whitespace on Save" in the preference thing that it installs. This will clear trailing whitespace every time you save a file, so if the highlighting get's screwed up, you can just save the file and it will work again. (And see, this is actually two hints in one, because this feature from the plugin is useful to have on its own).
Here is an example with some random Python code I just wrote. I am using the Midnight Xcode syntax coloring theme.
This is really helpful for highly nested parts of the code, to keep track of what is where. Also, notice how on the left, just to the right of the line numbers, those parts are colored too. That is the code folding bar. If you run your mouse down the side, it highlights the part under the mouse. And any of those colored bars can be folded, in other words, the parts of the code that are highlighted are exactly those parts that can be folded.
A: I just discovered how to change the indentation behavior used in the text macros:
For example, if you are like me and don't like this:
if (cond) {
code;
}
but prefer this instead:
if (cond)
{
code;
}
then you can change this globally (for all languages) by setting the following defaults in the terminal:
defaults write com.apple.Xcode XCCodeSenseFormattingOptions -dict-add BlockSeparator "\n"
This has been bugging me for years; I hope it is of some interest to someone else as well.
The documentation for this feature can be found in the Xcode User Default Reference.
A:
*
*To open the debugging window when the debugger starts, change the Debugging preferences accordingly.
*To clear the console log every time the app runs, check "Auto Clear Debug Console".
A: Get Colin Wheeler's Complete Xcode Keyboard Shortcut List (available as PDF or PNG). Print it and keep it somewhere visible (I've got it on the wall next to my screen).
edit:
Updated versions for Xcode 3.2
edit 2:
Updated versions for Xcode 4
A: When you use code completion on a method and it has multiple arguments, using CTRL + / to move to the next argument you need to fill in.
A: The User Scripts menu has a lot of goodies in it, and it's relatively easy to add your own. For example, I added a shortcut and bound it to cmd-opt-- to insert a comment divider and a #pragma mark in my code to quickly break up a file.
#!/bin/sh
echo -n "//================....================
#pragma mark "
When I hit cmd-opt--, these lines are inserted into my code and the cursor is pre-positioned to edit the pragma mark component, which shows up in the symbol popup.
A: Check out a nice screencast about 'becoming productive in Xcode': becoming-productive-in-xcode
A: If you have a multi-touch capable Mac, use MultiClutch to map mouse gestures to some of the keystrokes described here.
I use three finger forward and back to go forward and back in file history (cmd-alt-.), and pinch to switch between .h and .m.
A: Use AppKiDo to browse the documentation.
Use Accessorizer for a bunch of mundane, repetitive tasks in Xcode.
A: A different way to set your company name in a project template is to:
*
*Add a contact for yourself in Address Book
*Edit Company field in your contact to your Company name
*Now select your contact then go to menu and select Card -> Make This My Card
*Your contact card should now be bold in address book to confirm this.
This should now add your company name to all your project templates as well as providing other applications with more autofill information!
A: To display the current autocompletion options in a popup menu by default (without having to press ESC first), type
defaults write com.apple.Xcode XCCodeSenseAutoSuggestionStyle List
in the Terminal and restart Xcode.
A: Xcode code formatting is one of the things you need when you want to make your code
readable and look good.
You can do the code formatting by yourself or save some time using scripts.
One good way is to use Uncrustify. It is explained in Code Formatting in Xcode.
A: Not much of a keyboard shortcut but the TODO comments in the source show up in the method/function dropdown at the top of the editor.
So for example:
// TODO: Some task that needs to be done.
shows up in the drop down list of methods and functions so you can jump to it directly.
Most Java IDEs show a marker for these task tags in the scrollbar, which is nicer, but this also works.
A: ⌘-[ and ⌘-] to indent and unindent selected text. Makes cleaning up source code much easier.
A: To link a new framework
(In the Groups and Files pane, open the Targets disclosure triangle to display the targets associated with your project.)
*
*In the Groups and Files pane, double-click your current project target to display the Target Info panel.
*In the Info panel, select the General tab. The lower pane displays the currently-linked frameworks.
*Add a new framework by pressing the + button at the bottom left of the panel and selecting from the list presented in the sheet that appears. (Importantly, the list in the sheet shows only the frameworks relevant to the target...)
(This wasn't available two years ago, but it's nevertheless worth pointing out as a significant time-saver over finding the framework in the filesystem and dragging it into the project...)
A: Ctrl-left/Ctrl-right to navigate words within a variable or method name. Can't live without this one.
A: Key bindings to Xcode actions
I also adore the "re-indent". True there is no default shortcut, but you can add one from the Text Key Bindings tab of the Key Bindings preference pane.
Which is a time-saver all on its own. Just lookup your favorite actions and add/edit keyboard shortcuts!
One set of defaults I do find handy are the CMD+" and CMD+' to add/remove vertical splits. Hold down option for these and now you have the same for horizontal. But if these gestures don't work for you, you can always change them.
A: When typing a method press ESC to see the code completion options (no doubt this has been mentioned before). I already knew about this, but TODAY I discovered that if you press the button in the lower-right-hand corner of the code completion window (it'll be either an 'A' or Pi) you can toggle between alphabetical sorting and what appears to be sorting by class hierarchy.
All of a sudden this window is useful!
A: As for "Open Quickly" feature - it's great, but I've always missed TextMate's cmd-shift-t for browsing the projects and files (symbols, methods, etc).
That's why I've released an Xcode plugin that provides just that. It's called Code Pilot and you might want to take a look at it: http://macoscope.net/en/mac/codepilot/
A: In shell build phases you can write to stderr using the following format:
<filename>:<linenumber>: error | warn | note : <message>\n
It's the same format gcc uses to show errors. The filename:linenumber part can be omitted. Depending on the mode (error, warn, note), Xcode will show your message with a red or yellow badge.
If you include an absolute file path and a line number (if the error occurred in a file), double clicking the error in the build log lets Xcode open the file and jumps to the line, even if it is not part of the project. Very handy.
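As a sketch of that format (the file path, line numbers, and messages below are made up), a Run Script build phase could emit:

```shell
#!/bin/sh
# gcc-style diagnostics that Xcode's build log badges and, given an
# absolute path plus line number, can jump to on double-click.
warning_line="/tmp/Example.m:12: warning: resource file is larger than expected"
error_line="/tmp/Example.m:34: error: generated header is missing"
echo "$warning_line" >&2
echo "$error_line" >&2
```

The document's format string lists "warn"; in practice this mirrors gcc's spelling ("warning:"), which is what the log parser is described as understanding.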
A: Control+R to execute selected text as a shell script which returns with the pasted output following the selection!
A: Select a block of text and use
Command + '/'
To comment out the block of text. Selected the commented block and use the same shortcut to uncomment it.
A: Hold Option while splitting windows to split vertically rather than horizontally.
A: Double-click on the square brackets or parentheses to obtain bracket and parentheses matching.
A: Move back or forward a full word with alt-. Move back or forward a file in your history with cmd-alt-. Switch between interface and implementation with cmd-alt-.
Jump to the next error in the list of build errors with cmd-=. Display the multiple Find panel with cmd-shift-f. Toggle full editor visibility with cmd-shift-e.
Jump to the Project tab with cmd-0, to the build tab with cmd-shift-b and to the debug tab with cmd-shift-y (same as the key commands for the action, with shift added).
A: Some tips to be found in Xcode Tools Tips.
A: Sort contents of Groups in Xcode's Groups and Files pane by selecting the Group, then Edit > Sort By > Name.
You would expect to find this in the contextual menu for the group, but it isn't there.
Credit: Sorting of files in Xcode.
A: I have created my own file templates for NSObject, UIView and UIViewController so when I create new classes, the files are all set up with private sections and logging of class' address in init and dealloc.
Example (NSObject derived class named 'test' will start like this):
//=====================================================
// Private Interface
//=====================================================
@interface test (private)
@end
//=====================================================
// Public Implementation
//=====================================================
@implementation test
- (void)dealloc {
>> Dealloc">
NSLog(@">>> Dealloc: test [%p]", self);
[super dealloc];
NSLog(@"<<< Dealloc: test");
}
- (id) init
{
self = [super init];
if(self) {
>> Alloc">
NSLog(@">>> Alloc: test [%p]", self);
}
return self;
}
@end
//=====================================================
// Private Implementation
//=====================================================
@implementation test (private)
@end
Plenty of resources are available for this, for example Cocoa dev: Design your own Xcode project templates.
A: There are many adjustments you can make to how Xcode treats the formatting of your code, but only if you change the settings via command line. I threw together a little program that lets you adjust them to your liking. Enjoy :)
Xcode Formatting Options
A: Cmd + ~ (tilde - looks weird on the button...)
To switch between any open Xcode window - also when multiple projects are open.
A: Control Xcode's text editor from the command line: xed
> xed -x # open a new untitled document
> xed -xc foo.txt # create foo.txt and open it
> xed -l 2000 foo.txt # open foo.txt and go to line 2000
# set Xcode to be your EDITOR for command line tools
# e.g. for subversion commit
> echo 'export EDITOR="xed -wcx"' >> ~/.profile
> man xed # there's a man page, too
A: Switch to Header/Source File
*
*Option ⌥ Command ⌘ Up Arrow ↑
*View > Switch to Header/Source File
Switches between the .m and .h files.
*
*In Xcode 4 this is ctrl Command ⌘ Up Arrow ↑
A: "Ctrl+Left/Right Arrow" to do intra-word text navigation. I use this feature to jump the cursor from the one "camel hump" in a variable to the next.
A: Xcode supports text macros that can be invoked via the Insert Text Macro menu at the end of the Edit menu. They can also be invoked using Code Sense, Xcode's code completion technology.
For example, typing the key sequence p i m control-period will insert #import "file" into your code, with file as an editable token, just like with code completion.
A: Right click on a variable in your function and click edit all in scope. Been using it a lot since I found this out.
ctrl ⌘ T
A: If the highlighting gets messed up, if your ivars aren't highlighted or anything else, just do ⌘-A ⌘-X ⌘-V, which will select all, cut, and paste, and all the highlighting will be corrected. So just hold down ⌘ and press A, then X, then V.
A: *
*To "set next statement", just drag the red instruction pointer to the next line to execute. (source)
A: Alt-Left & Right to go to the end/start of the line, along with Ctrl-Left & Right to move to the next capital letter or word break. These two save me so much time.
A: I don't really like the code-formatting/reindent feature that is built into Xcode, so I found using Uncrustify as a code formatter very useful. It can be used as a User Script: http://hackertoys.com/2008/09/18/adding-a-code-beautifier-script-to-xcode/
A: Use xcodebuild command line to do a clean build on the shared build machine:
cd project_directory
xcodebuild -configuration Release -alltargets clean
xcodebuild -configuration Release -alltargets
A: My favorites have to be these general editor shortcuts:
*
*⌘ + 0 returns you back to your editor from debug mode.
*⌘ + Shift + R takes you from debug mode to editor view (project mode)
*⌘ + Shift + E "maximizes" the editor (This is very useful when you have build results, etc. displayed above your editor and you just want to make your source editor taller)
*Ctrl + 2 displays an outline of your current code
*⌘ + Return runs the application
*⌘ + Shift + Return ends the application
A: Pressing ⌥⇧⌘D activates "Open this Quickly", which navigates you to the first result from "Open Quickly" using the selected text. If the selected text is in the format <filename:lineNumber>, (with or without <>) "Open this Quickly" takes you to the file plus line number.
You can combine this with the following tip:
You can write logs that contain the filename and line number of the log entry using this macro: (Make sure to define -DDEBUG=1 on your C Flags used in your target's debug configuration)
#ifdef DEBUG
#define DLog(fmt, ...) NSLog((@"%s <%@:%d> " fmt), __PRETTY_FUNCTION__, [[NSString stringWithFormat:@"%s", __FILE__ ] lastPathComponent] ,__LINE__, ##__VA_ARGS__)
#else
#define DLog(format, ...)
#endif
In your DLog() output, double-clicking on the "<" character to select the <filename:lineNumber> and pressing ⌥⇧⌘D will open the line where the log is in the source code.
A: You can have Xcode run the preprocessor over your Info.plist file:
<key>CFBundleShortVersionString</key>
#ifdef DEBUG
<string>1.0 (debug)</string>
#else
<string>1.0</string>
#endif
See http://developer.apple.com/technotes/tn2007/tn2175.html for details.
A: Debugging - how to use GDB
Being new to this still, I find trapping and identifying faults a rather
daunting job. The console, despite it being a powerful tool, usually
does not yield very intuitive results and knowing what you are
looking at in the debugger can be equally difficult to
understand. With the help of some of the guys
on Stack Overflow and the good article about
debugging that can be found at
Cocoa With Love it becomes a little more friendly.
A: Navigate among open files back and forth:
⌥⌘←
⌥⌘→
A: Technically an Interface Builder tip, but they're a book-matched pair, so I don't think this is off topic...
Shift + Right Click on one of your controls and you get a nice pick list of the object hierarchy. No more click, click, click, frustration!
A: With Trackpad:
*
*Swipe Three Fingers Up - Switch between header and source file, which is easier than Cmd + Opt + Up;
*Swipe three fingers down - Switch between declaration and definition when selecting a class or method (these are the two uses I've found so far);
*Swipe three fingers left - Go back (Cmd + Opt + Left);
*Swipe three fingers right - Go forward (Cmd + Opt + Right);
Tested with Xcode 3.2.5.
A: The class browser in Xcode! Reached by pressing shift + ⌘ + c. You can reduce the scope to only show your active project. It gives you a less cluttered view as long as you only want to browse the class hierarchy.
A: I find that using the shortcuts for building/cleaning and running your project really saved me some time:
*
*Cmd-R: Build & Run
*Cmd-Y: Build & Debug
*Cmd-Shift-Enter: Stop running project
*Cmd-Shift-K: Clean build
A: The entire shortcut list can be found here: http://iphonehuston.blogspot.com/2009/08/shortcuts-for-xcode.html
A: The fact that I can use Emacs as my editor and Xcode as my builder/debugger... Best of both worlds, in my humble opinion.
A: I have no idea if everybody knows this already, but I was delighted when I learned I could use "code folding" and hide nested functions that I didn't want to look at by clicking on the gray area nearest to the code that you want to fold.
Hard to explain . . .
A: Rename a file shared by multiple projects:
*
*Open all the projects.
*Rename the file in one project.
*Xcode automatically adjusts all the open projects to reflect the file's new name.
A: Snapshots, File>Make Snapshot, provides a quick way to save a revision of your project if you aren't using a proper version control system. Great way to experiment with a large, potentially damaging change.
A: Show chooser for open symbol
⌘ + ⌥ + ⇧ + click over a symbol
You can then choose to open the symbol in:
*
*the current tab
*in an existing tab
*in a new one (with the + in the upper right corner)
*in a vertical split (with the + in the right) or
*in a new window (with the + in the left).
A: Using ] to automatically insert [ in the correct location
I come from a .NET background, so I'm used to typing a symbol and then typing one of its method names. So I always forget to include the [ before I start typing the object name. Usually this meant I would need to go to the beginning of the line and add the [ manually. I didn't realize I could just press ] at the current cursor position, and it will be added automatically.
There are two ways to use this: either after typing the function's name, or right before typing the function's name.
Method 1: after the function name
myObject testMethod]
... becomes:
[myObject testMethod]
... with the caret positioned after the ].
Method 2: before the function name
myObject]
... becomes:
[myObject ]
... with the caret positioned right before the ].
The advantage of the latter (2) is that code completion will filter on the methods of your object, whereas with the former (1), if you try to invoke code completion immediately after myObject, it won't be filtered. Another advantage of (2) is that it behaves more like other programming languages that use dot notation: you type the name of the object, then simply ] instead of . to access a method.
A: Select a block of text and type cmd-/ to comment it out. Do it again to remove the comments characters.
This is especially useful when combined with brace-matching by double-clicking on balanced chars (parens, braces, brackets).
A: Being able to quickly see all the methods that can be overridden from a superclass. For example, when extending UITableViewController I just type in my implementation:
- ta
and then I hit ESC to see all the methods from my superclass that begin with "ta" such as
- (UITableViewCell *) tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
This also works when adopting protocols.
A: ⌘ Command + Double-Click on a symbol: Jump to Definition of a symbol.
⌥ Option + Double-Click on a symbol: Find Text in Documentation of a symbol. (Only works if you have the symbol's Doc Set installed.)
Favorites Bar:
Favorites bar is just like you have in Safari for storing - well - favorites. I often use it as a place to store shortcuts (which you can do by drag-dropping) to files I am using right now. Generally this is more useful when I'm working with a large or unfamiliar project.
To show the Favorites Bar, select the following menu option:
*
*View > Layout > Show Favorites Bar
A: ctrl + alt + ⌘ + r to clear the log
A: Command ⌘ alt ⌥ shift T : reveal the current edited file in the project tree.
A: When using Code Sense with many keyboards, use control + , to show the list of available completions, control + . to insert the most likely completion, and control + / & shift + control + / to move between placeholder tokens. The keys are all together on the keyboard right under the home row, which is good for muscle memory.
A: Use the Class Browser to show inherited methods
Apple's API reference documentation does not show methods inherited from a superclass. Sometimes, though, it's useful to be able to see the full range of functionality available for a class -- including a custom class of your own. You can use the Class Browser (from the Project menu) to display a flat or hierarchical list of all the classes related to the current project. The upper pane on the right hand side of the browser window shows a list of methods associated with the object selected in the browser. You can use the Configure Options sheet to select "Show Inherited Members" to show inherited methods as well as those defined by the selected class itself. Click the small book symbol to go to the corresponding documentation.
A: Auto-completion Keyboard Shortcuts
Tab ⇥ OR Control ⌃ /: Select the next auto-completion argument.
Shift ⇧ Tab ⇥ OR Shift ⇧ Control ⌃ /: Select the previous auto-completion argument.
Escape ⎋: Shows the auto completion pop-up list.
A: Might go without saying, but if you want to use intra-word navigation, make sure you change the key presets for Spaces (in the Exposé & Spaces preference pane), if you use it.
I switched Spaces to use Ctrl-Option Left/Right.
Edit: To set Spaces to Ctrl-Option Left/Right, select the "To switch between spaces:" popup and hold down the Option key. The first item will change from Ctrl Arrow Keys to Ctrl-Option Arrow Keys.
A: Turn off the "undo past the last point" warning
When you attempt to undo after saving, you will get the following prompt:
"You are about to undo past the last
point this file was saved. Do you
want to do this?"
To get rid of this warning, enter the following into a terminal window:
defaults write com.apple.Xcode XCShowUndoPastSaveWarning NO
Change the company name in template files
Paste this into the Terminal application:
defaults write com.apple.Xcode PBXCustomTemplateMacroDefinitions '{"ORGANIZATIONNAME" = "Microsoft";}'
Change "com.yourcompanyname" in all your templates:
*
*Find the directory: /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Project Templates/Application
*Use your favourite multi-file search-and-replace tool to change com.yourcompany to whatever value you normally use to build for a device. I used BBEdit's multi-find-and-replace after I opened the whole directory. You should be replacing the value in all the info.plist files. I found 8 files to change.
The number of times a build has failed because I forgot to change this string is ridiculous.
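A scripted take on that multi-file replace might look like the following. It runs against a scratch directory here, and both the directory layout and the com.example identifier are assumptions, so point it at the real template path only after making a backup:

```shell
#!/bin/sh
# Rewrite com.yourcompany in every matching file under a (scratch) template tree.
# sed's in-place flag differs between BSD and GNU, so go via a temp file instead.
templates=$(mktemp -d /tmp/templates.XXXXXX)
mkdir -p "$templates/Application"
printf '<string>com.yourcompany.${PRODUCT_NAME}</string>\n' \
    > "$templates/Application/Info.plist"
grep -rl 'com\.yourcompany' "$templates" | while IFS= read -r f; do
    sed 's/com\.yourcompany/com.example/g' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
replaced=$(cat "$templates/Application/Info.plist")
echo "$replaced"
rm -rf "$templates"
```

The grep-then-sed pair is the generic shape of any "multi-find-and-replace" tool; substitute your own bundle identifier for com.example.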
Quickly jump to a Group in the Groups and Files pane
*
*Control ⌃ Option ⌥ Shift ⇧ + <First letter of a Group name>
If you hold down the three keys above, you can quickly jump to groups in the left (Groups and Files) page by pressing the first letter of a groups name. For example, Control ⌃Option ⌥Shift ⇧T takes you to Targets and Control ⌃Option ⌥Shift ⇧S to Source. Press it again and it jumps to SCM. Sometimes it takes several tries for this to work (I don't know why).
Cycling between autocompletion choices
*
*Control ⌃ .
*Shift ⇧ Control ⌃ .: Cycles backwards between autocompletion choices.
Control ⌃. (Control-Period) after a word automatically accepts the first choice from the autocompletion menu. Try typing log then Control ⌃. and you'll get a nice NSLog statement. Press it again to cycle through any choices. To see all the mutable choices, type NSMu then Control ⌃..
Quick Help
*
*Control ⌃ Command ⌘ ? (While your cursor is in the symbol to look up)
*Option ⌥ + <Double-click a symbol>
*Help > Quick Help
To get to the documentation from the Quick Help window, click the book icon on the top right.
See the documentation for a symbol
*
*Command ⌘ Option ⌥ + <Double-click a symbol>
Takes you straight to the full documentation.
Make non-adjacent text selections
*
*Command ⌘ Control ⌃ + <Double-click in the editor>
Use the above shortcut for a strange way of selecting multiple words. You can make selections of words in totally different places, then delete or copy them all at once. Not sure if this is useful. It's Xcode only as far as I can tell.
Use Emacs key bindings to navigate through your code
This trick works in all Cocoa applications on the Mac (TextEdit, Mail, etc.) and is possibly one of the most useful things to know.
*
*Command ⌘ Left Arrow or Command ⌘ Right Arrow Takes you to the beginning and end of a line.
*Control ^ a and Control ^ e Do the same thing
*Control ^ n and Control ^ p Move the cursor up or down one line.
*Control ^ f and Control ^ b Move the cursor back or forward one space
Pressing Shift ⇧ with any of these selects the text between move points. Put the cursor in the middle of a line and press Shift ⇧ Control ^ e and you can select to the end of the line.
Pressing Option ⌥ will let you navigate words with the keyboard. Option ⌥ Control ^ f skips to the end of the current word. Option ⌥ Control ^ b skips to the beginning of the current word. You can also use Option ⌥ with the left and right arrow keys to move one-word-at-a-time.
*
*Control ^ Left Arrow and Control ^ Right Arrow moves the cursor between camel-cased parts of a word.
Try it with NSMutableArray. You can quickly change it to NSArray by putting your cursor after the NS, pressing Shift ⇧ Control ^ Right Arrow then Delete.
A: Cmd-/ to automatically insert "//" for comments. Technically the same number of keystrokes, but it feels faster...
Also, the default project structure puts resources and class files in separate places. For larger amounts of code, create logical groups and place related code and xib files together. Groups created in Xcode are just logical structures and do not change where your files are on disk (though you can set them up to replicate a real directory structure if you wish).
A: Print Complete Xcode Keyboard Shortcut List and put it next to your monitor.
A: pragma mark
Example:
#pragma mark === Initialization ===
Writing this line above all initialization methods will generate a nice heading in the dropdown menu above the editor.
Open Quickly
Shift + cmd + D
Start typing a file name you'd like to open. Very cool if you look for framework headers. They have nice comments too, sometimes additional info to the docs.
ESC
When your text cursor is on an incomplete method name, for example, press ESC. It will show you everything that might fit there, and you can quickly complete very long method names. It's also good if you can't remember exactly the name of a method. Just press ESC.
I think these are the best ones I know until now.
(Migrated from deleted question by Stack Overflow user Thanks.)
A: Code Completion
A: *
*Cmd+Option+O to open a file in a separate window.
*Can configure Tab to always indent. I frequently use it to indent an entire file.
*Ctrl+Arrow keys to move between camel case words. If you have OneTwo, you can move from One to Two with Ctrl+Right arrow.
*You can use emacs bindings, there's even kill ring! I use the Ctrl+w and Cmd+C together when I need to copy two different pieces of text.
*In the documentation browser, you can restrict your searches to a particular library, e.g., just iOS 4.2 library. This helps me focus on API available only on a particular iOS/Mac version of the SDK.
*Cmd+Shift+A to build and analyze.
A: I came into Xcode right from the Windows world (as MANY others have), and one of the first quirks I was faced with was trying to "indent a selected block of text" with the TAB key.
Typically, when using a Windows editor, you select a block of text, and whenever you press TAB (or shift TAB) keys, the selected text jumps right/left. Then, once you decide the new position of the text, you stop pressing TAB.
OK, in Xcode, this is completely different, because when you press TAB, the whole block of text disappears, leaving you with a silly face, and some anger inside...
But then, just by chance or intuition or something, one day I discovered some kind of workaround to achieve the same effect you might get under a proper windows editor.
The steps should be these:
*
*Select the text block as you might do under Windows.
*Instead of pressing TAB, set your instincts aside, and "copy the text block" (typically CTRL+C (yuck)).
*Then, without deselecting the text (pressing SHIFT if needed), extend the beginning of the selection and place it at the position where you would like your new text to appear.
*Paste the aforementioned text (typically CTRL+V (yuck again)).
*Result: The previous text block gets substituted by "the new" one (of course, itself), but the "auto indent" capabilities of Xcode will auto-place the text starting at the new position we chose in step 3.
*A big smile appears on your face.
It's kind of tricky, but when you get used to it, you find yourself using it a lot.
Enjoy!!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "338"
} |
Q: Organizing classes into namespaces Are there some principles of organizing classes into namespaces?
For example is it OK if classes from namespace N depends on classes from N.X?
And if classes from N.X depends on classes from N?
A: In general, that should be fine for your example, if your packages were "N.UI" and "N.Util". I've seen namespaces used in two general fashions:
1) All tiers of a system have a namespace (e.g. database, web, biz, etc.)
2) Each component has a namespace (e.g. Customer, Invoice) with tiered namespaces underneath
Either way, the sub namespaces would be inter-related packages within a larger namespace, so it would be perfectly fine for you UI code to depend on your domain objects.
However, while it would be fine for N.X classes to depend on classes from N, I don't think it would make much sense for classes from N to depend on classes from N.X - it sounds like you could use some reorganization in that case.
A: Classes in N.X can rely on classes in N. But classes in N shouldn't rely on classes in N.X; that's bad design.
Some namespace guidelines:
*
*Namespace Naming Guidelines
*Names of Namespaces
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: immutable class should be final? It says in this article that:
Making a class final because it is immutable is a good reason to do so.
I'm a bit puzzled by this... I understand that immutability is a good thing from the POV of thread-safety and simplicity, but it seems that these concerns are somewhat orthogonal to extensibility. So, why is immutability a good reason for making a class final?
A: Mainly security I'd think. For the same reason String is final, anything that any security-related code wants to treat as immutable must be final.
Suppose you have a class defined to be immutable, call it MyUrlClass, but you don't mark it final.
Now, somebody might be tempted to write security manager code like this:
void checkUrl(MyUrlClass testurl) throws SecurityException {
    if (illegalDomains.contains(testurl.getDomain()))
        throw new SecurityException();
}
And here's what they'd put in their DoRequest(MyUrlClass url) method:
securitymanager.checkUrl(urltoconnect);
Socket sckt = opensocket(urltoconnect);
sendrequest(sckt);
getresponse(sckt);
But they can't do this, because you didn't make MyUrlClass final. The reason they can't do it is that if they did, code could avoid the security manager restrictions simply by overriding getDomain() to return "www.google.com" the first time it's called, and "www.evilhackers.org" the second, and passing an object of their class into DoRequest().
I have nothing against evilhackers.org, by the way, if it even exists...
In the absence of security concerns it's all about avoiding programming errors, and it is of course up to you how you do that. Subclasses have to keep their parent's contract, and immutability is just a part of the contract. But if instances of a class are supposed to be immutable, then making it final is one good way of making sure they really are all immutable (i.e. that there aren't mutable instances of subclasses kicking around, which can be used anywhere that the parent class is called for).
I don't think the article you referenced should be taken as an instruction that "all immutable classes must be final", especially if you have a positive reason to design your immutable class for inheritance. What it was saying is that protecting immutability is a valid reason for final, where imaginary performance concerns (which is what it's really talking about at that point) are not valid. Note that it gave "a complex class not designed for inheritance" as an equally valid reason. It can fairly be argued that failing to account for inheritance in your complex classes is something to avoid, just as failing to account for inheritance in your immutable classes is. But if you can't account for it, you can at least signal this fact by preventing it.
A: The explanation for this is given in the book 'Effective Java'
Consider BigDecimal and BigInteger classes in Java .
It was not widely understood that immutable classes had to be effectively final
when BigInteger and BigDecimal were written, so all of their methods may be
overridden. Unfortunately, this could not be corrected after the fact while preserving backward compatibility.
If you write a class whose security depends on the immutability of a BigInteger or BigDecimal argument from an un-trusted client, you must check to see that the argument is a “real” BigInteger or BigDecimal, rather than an instance of an un trusted subclass. If it is the latter, you must defensively copy it under the assumption that it might be mutable.
public static BigInteger safeInstance(BigInteger val) {
    if (val.getClass() != BigInteger.class)
        return new BigInteger(val.toByteArray());
    return val;
}
If you allow subclassing, it might break the "purity" of the immutable object.
A: Because if the class is final you can't extend it and make it mutable.
Even if you make the fields final, that only means you cannot reassign the reference, it does not mean you cannot change the object that is referred to.
I don't see a lot of use in a design for an immutable class that also should be extended, so final helps keep the immutability intact.
A: Following the Liskov Substitution Principle, a subclass can extend but never redefine the contract of its parent. If the base class is immutable then it's hard to find examples of where its functionality could be usefully extended without breaking the contract.
Note that it is possible in principle to extend an immutable class and change the base fields - e.g. if the base class contains a reference to an array, the elements within the array cannot be declared final. Obviously the semantics of methods can also be changed via overriding.
I suppose you could declare all the fields as private and all the methods as final, but then what would be the use of inheriting?
A: It's a good idea to make a class immutable for performance reasons too. Take Integer.valueOf for example. When you call this static method it does not have to return a new Integer instance. It can return a previously created instance, safe in the knowledge that when it passed you a reference to that instance last time, you didn't modify it (I guess this is also good reasoning from a security perspective).
I agree with the standpoint taken in Effective Java on these matters - that you should either design your classes for extensibility or make them non-extensible. If it's your intention to make something extensible, perhaps consider an interface or abstract class.
Also, you don't have to make the class final. You can make the constructors private.
A: So, why is immutability a good reason for making a class final?
As stated in the Oracle docs, there are basically 4 steps to make a class immutable.
One of those points states that
to make a class immutable, the class should be marked as final or have a private constructor
Below are the 4 steps to make a class immutable (straight from the oracle docs)
*
*Don't provide "setter" methods — methods that modify fields or objects referred to by fields.
*Make all fields final and private.
*Don't allow subclasses to override methods. The simplest way to do this is to declare the class as final. A more sophisticated approach is to make the constructor private and construct instances in factory methods.
*If the instance fields include references to mutable objects, don't allow those objects to be changed:
*
*Don't provide methods that modify the mutable objects.
*Don't share references to the mutable objects. Never store references to external, mutable objects passed to the constructor; if necessary, create copies, and store references to the copies. Similarly, create copies of your internal mutable objects when necessary to avoid returning the originals in your methods.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Ideal number of classes per namespace branch What number of classes do you think is ideal per one namespace "branch"? At which point would one decide to break one namespace into multiple ones? Let's not discuss the logical grouping of classes (assume they are logically grouped properly), I am, at this point, focused on the maintainable vs. not maintainable number of classes.
A: With modern IDEs and other dev tools, I would say that if all the classes belong in a namespace, then there is no arbitrary number at which you should break up a namespace just for maintainability.
A: I think a namespace should be as large as it needs to be. If there is a logical reason to create a sibling namespace or child namespace, then do so. The main reason as I see it to split into namespaces is to ease development, making it easier for developers to navigate the namespace hierarchy to find what they need.
If you have one namespace with lots of types, and you feel it's difficult to find certain ones, then consider moving them to another namespace. I would use a child namespace if the types are specializing the parent namespace types, and a sibling namespace if the types can be used without the original namespace types or have a different purpose. Of course, it all depends on what you're creating and the target audience.
If a namespace has less than 20 types, it's unlikely to be worth splitting. However, you should consider namespace allocation during design so that you know up front when developing, what types go in which namespaces. If you do namespace allocation during development, expect a lot of refactoring as you determine what should go where.
A: "42? No, it doesn't work..."
Ok, let's put our programming prowess to work and see what is Microsoft's opinion:
# IronPython
import System
exported_types = [
(t.Namespace, t.Name)
for t in System.Int32().GetType().Assembly.GetExportedTypes()]
import itertools
get_ns = lambda (ns, typename): ns
sorted_exported_types = sorted(exported_types, key=get_ns)
counts_per_ns = dict(
(ns, len(list(typenames)))
for ns, typenames
in itertools.groupby(sorted_exported_types, get_ns))
counts = sorted(counts_per_ns.values())
print 'Min:', counts[0]
print 'Max:', counts[-1]
print 'Avg:', sum(counts) / len(counts)
print 'Med:',
if len(counts) % 2:
    print counts[len(counts) / 2]
else: # ignoring len == 1 case
    print (counts[len(counts) / 2 - 1] + counts[len(counts) / 2]) / 2
And this gives us the following statistics on number of types per namespace:
C:\tools\nspop>ipy nspop.py
Min: 1
Max: 173
Avg: 27
Med: 15
A: One thing that isn't covered here, though it relates to Chris' point in a way, is that the learnabilty of a namespace isn't just related to the number of items.
(Incidentally, this applies to "namespace" in the widest sense - a class itself is a namespace in the general sense in that it contains certain names that mean a different thing in that context than they might in another, an enum is a namespace in this sense too).
Let's say I encounter an XML-related namespace with an Element class. I learn a bit about this and when I look at the Attribute class, I see some similarity. When I then see a ProcessingInstruction class, I can make a reasonable guess about how it works (and it's probably a design flaw if I guess completely wrong, at best differences need not just to be documented, but explained). I can guess that there's a Comment class before I even see it. I'll go looking for your TextNode class and wonder if these all inherit from Node rather than having to learn about them from the docs. I'll wonder which of several reasonable approaches you took with your Lang class rather than wonder if it's there.
Because it all relates to a domain I already have knowledge of, the conceptual "cost" of these seven classes is much, much less than if the seven classes were called Sheep, Television, FallOfSaigon, Enuii, AmandaPalmersSoloWork, ForArtsSakeQuotient and DueProcess.
This relates to Chris' point, because he says that we are advised for the sake of usability to keep the number of choices down. However, if we have a choice of countries in alphabetical order, we immediately grok the whole list and pick the one we need instantly, so the advice to keep choices down doesn't apply (indeed, a few options at a time can be both less useful and potentially insulting).
If your namespace has 200 names, but you only have to really learn half a dozen to understand the lot, then it'll be much easier to grok than having a dozen names with little relation to each other.
A: I know you don't want to discuss logical grouping, however to do a split you need to be able to group the two different namespaces. I'd start considering a new namespace at around 30 classes; however I wouldn't consider it a major concern.
A: I must say I find all of the above very surprising reading.
Usability experts tell us to keep the number of choices in a menu to a limited number so we can immediately see all the choices. The same applies to how you organise your work.
I would typically expect 4-10 types in a namespace. Saves so much hunting round for stuff and scrolling up and down. It's so quick and easy to move stuff around using resharper that I don't see any reason why not to.
A: Another thing that should mentioned is that it often pays to put a class containing extension methods in its own namespace, so that you can enable or disable those extension methods with a using directive. So if the thing in the namespace is a static class containing extension methods, the answer is 1.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Enhancing Performance in Real-Time Systems First, I'd like to establish that the acceptable end-to-end latency for a real-time system in the financial world is less than 200ms. Okay, here's what I'm after. In the design of real-time systems, there are "design patterns" (or techniques) that will increase performance (i.e. reduce processing time, improve scalability, etc.).
An example of what I'm after is, the use of GUIDs instead of sequential numbers for allocation of primary keys. Rationale for GUIDs is that handlers have their own primary key generators without "consulting" each other. This allows for parallel processing to occur and permits scaling.
Here're some more. I'll try and add to the list when able to.
*
*The use of event driven architecture (EDA).
*Use of messaging queues to support EDA.
I bow to the collective wisdom of the community. Thanks heaps!
A: For general real-time system work, the classic rule is to go after variability and kill it. Real hard real-time means using static schedules, streamlined operating systems, efficient device drivers, and rock-hard priorities. No dynamic or adaptive stuff is feasible, if you really want computation X to end within a known time-bound T.
I guess what you mean here is not really real-time in that respect, and I guess the system is a bit more complicated than reading sensors, computing a control loop, and activating actuators. Some more details would be nice, to know what the constraints are here.
A: You've already mentioned Event Driven Architecture, I'd suggest you have a look at Staged Event Driven Architectures (SEDA).
A stage is essentially a queue for events and a function to operate on the event. The "unconventional" thing about this architecture is that each stage can be run in its own thread and the functions typically need asynchronous I/O, etc. Arranging programs in this way is awkward at first, but allows for all kinds of magic - like QoS, tweaked scheduling, etc.
See Welsh's Berkeley dissertation and his web site. You might also look at Minor Gordon's project (from Cambridge UK) called yield. He had some very good results. It may seem like the project is geared towards Python at first, but it can be used for pure c++ as well.
A: As basic as it may sound, most line of business applications are filled with redundant calculations, eliminate them. Refactoring of calculations is the backbone of optimization patterns. Every time a processing cycle appears you have to ask:
What within this cycle is calculated that would produce the same output outside the cycle?
As a basic example:
for (int i = 0; i < x/2; i++)
    //do something
Here you can safely take x/2, calculate it before the cycle, and reuse that value (modern compilers now take care of these trivial optimizations).
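The same hoisting refactoring in a runnable sketch (written in Python purely for illustration; the function names are made up to show before/after):

```python
def before(xs):
    total = 0
    for x in xs:
        # len(xs) // 2 is loop-invariant but gets recomputed on every iteration
        if x < len(xs) // 2:
            total += x
    return total

def after(xs):
    half = len(xs) // 2  # hoisted: the invariant is computed once, before the loop
    total = 0
    for x in xs:
        if x < half:
            total += x
    return total
```

Both functions return the same result; the second simply avoids the redundant work on each pass, which is exactly the trivial optimization a modern compiler performs for expressions like x/2.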
To see the ramifications of this simple rule, here is the example applied to database queries. In order to avoid an INNER JOIN of two tables to get a highly recurrent field, you can violate the normalization rules and duplicate it on the table related to the one that has the value. This avoids repetitive join processing and can free up parallelization, as only one of the tables needs to be locked in transactions. Example:
Client table queries need a client discount recurrently, but the discount is saved in the client-type table.
A: Don't "fix" anything unless you know for sure that it's "broken".
The first thing I'd do is tune the blazes out of that program that has to run fast. I would use my favorite technique. Then, chances are, there will be enough wiggle room to fool around with architecture.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Snippets for C++ in VS2008 Does someone know of any port to VS2008 of the support for snippets for C++?
VS2005 had a nice enhancement pack:
Microsoft Visual Studio 2005 IDE Enhancements
But the snippets-for-C++ feature is not supported in VS2008. I already tried to use the SDK to reimplement it but gave up for lack of time, since there are a huge number of language details to know in Babel. I find it hard to believe no one has needed this and implemented it, as snippets can be one of the most effective production accelerators when used correctly.
A: I've used Whole Tomato's Visual Assist X for years, and it has a very nice snippets system built into it. It is very straightforward to define snippets using the visual assist UI, and it also has some great features that enhance intellisense and code navigation. It is $99 though, but well worth it in my opinion.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Setting PHP variables in httpd.conf? I'd like to automatically change my database connection settings on a per-vhost basis, so that I don't have to edit any PHP code as it moves from staging to live and yet access different databases. This is on a single dedicated server.
So I was wondering, can I set a PHP variable or constant in httpd.conf as part of the vhost definition that the site can then use to point itself to a testing database automatically?
$database = 'live';
if (some staging environment variable is true) {
    $database = 'testing'; // and not live
}
If this isn't possible, I guess in this case I can safely examine the hostname I'm running on to tell, but I'd like something a little less fragile
Hope this makes sense
many thanks
Ian
A: Yep...you can do this:
SetEnv DATABASE_NAME testing
and then in PHP:
$database = $_SERVER["DATABASE_NAME"];
or
$database = getenv("DATABASE_NAME");
A: I would not set an environment variable, as this is also visible in default script outputs like PhpInfo();
just use a php_value in your .htaccess just above the htdocs folder and you're done and safe :)
A: The problem with .htaccess is that it is part of the code base tree. And the code base tree is part of VC/SVN. Hence any change in local/dev gets moved to production. Keeping the env variable setting in httpd.conf saves you the effort of being careful about not accidentally overwriting the server vs dev flag. Unless of course you want to do with IP address or host name, both of which are not scalable approaches.
A: You can set an environment variable and retrieve it with PHP.
In httpd.conf:
SetEnv database testing
In your PHP:
if (getenv('database') == 'testing') {
or
if ($_SERVER['database'] == 'testing') {
A: Have you tried using the .htaccess file? You can override the php.ini values with it.
Just put the .htaccess file into your htdocs directory:
php_value name value
Futher information:
*
*https://php.net/manual/en/configuration.changes.php
*https://php.net/manual/en/ini.php
A: I was also looking at this type of solution. What I found is this: under Apache you can use SetEnv KeyName DataValue in httpd.conf, and in IIS you can use Fast CGI Settings >> Edit... >> Environment Variables >> ... and add KeyName, DataValue.
This in turn allows the PHP $var = $_SERVER["KeyName"]; to be set to the DataValue and used as needed under both IIS and Apache consistently.
I know this is a strange use case. I use WAMP at work and MAMP at home so it is nice to be able to work the same way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Efficiently merge string arrays in .NET, keeping distinct values I'm using .NET 3.5. I have two string arrays, which may share one or more values:
string[] list1 = new string[] { "apple", "orange", "banana" };
string[] list2 = new string[] { "banana", "pear", "grape" };
I'd like a way to merge them into one array with no duplicate values:
{ "apple", "orange", "banana", "pear", "grape" }
I can do this with LINQ:
string[] result = list1.Concat(list2).Distinct().ToArray();
but I imagine that's not very efficient for large arrays.
Is there a better way?
A: .NET 3.5 introduced the HashSet class which could do this:
IEnumerable<string> mergedDistinctList = new HashSet<string>(list1).Union(list2);
Not sure of performance, but it should beat the Linq example you gave.
EDIT:
I stand corrected. The lazy implementations of Concat and Distinct have a key memory AND speed advantage. Concat/Distinct is about 10% faster, and saves multiple copies of data.
I confirmed through code:
Setting up arrays of 3000000 strings overlapping by 300000
Starting Hashset...
HashSet: 00:00:02.8237616
Starting Concat/Distinct...
Concat/Distinct: 00:00:02.5629681
is the output of:
int num = 3000000;
int num10Pct = (int)(num / 10);
Console.WriteLine(String.Format("Setting up arrays of {0} strings overlapping by {1}", num, num10Pct));
string[] list1 = Enumerable.Range(1, num).Select((a) => a.ToString()).ToArray();
string[] list2 = Enumerable.Range(num - num10Pct, num + num10Pct).Select((a) => a.ToString()).ToArray();
Console.WriteLine("Starting Hashset...");
Stopwatch sw = new Stopwatch();
sw.Start();
string[] merged = new HashSet<string>(list1).Union(list2).ToArray();
sw.Stop();
Console.WriteLine("HashSet: " + sw.Elapsed);
Console.WriteLine("Starting Concat/Distinct...");
sw.Reset();
sw.Start();
string[] merged2 = list1.Concat(list2).Distinct().ToArray();
sw.Stop();
Console.WriteLine("Concat/Distinct: " + sw.Elapsed);
A: Disclaimer: This is premature optimization. For your example arrays, use the 3.5 extension methods. Until you know you have a performance problem in this region, you should use library code.
If you can sort the arrays, or they're sorted when you get to that point in the code, you can use the following methods.
These will pull one item from both, and produce the "lowest" item, then fetch a new item from the corresponding source, until both sources are exhausted. In the case where the current item fetched from the two sources are equal, it will produce the one from the first source, and skip them in both sources.
private static IEnumerable<T> Merge<T>(IEnumerable<T> source1,
IEnumerable<T> source2)
{
return Merge(source1, source2, Comparer<T>.Default);
}
private static IEnumerable<T> Merge<T>(IEnumerable<T> source1,
IEnumerable<T> source2, IComparer<T> comparer)
{
#region Parameter Validation
if (Object.ReferenceEquals(null, source1))
throw new ArgumentNullException("source1");
if (Object.ReferenceEquals(null, source2))
throw new ArgumentNullException("source2");
if (Object.ReferenceEquals(null, comparer))
throw new ArgumentNullException("comparer");
#endregion
using (IEnumerator<T>
enumerator1 = source1.GetEnumerator(),
enumerator2 = source2.GetEnumerator())
{
Boolean more1 = enumerator1.MoveNext();
Boolean more2 = enumerator2.MoveNext();
while (more1 && more2)
{
Int32 comparisonResult = comparer.Compare(
enumerator1.Current,
enumerator2.Current);
if (comparisonResult < 0)
{
// enumerator 1 has the "lowest" item
yield return enumerator1.Current;
more1 = enumerator1.MoveNext();
}
else if (comparisonResult > 0)
{
// enumerator 2 has the "lowest" item
yield return enumerator2.Current;
more2 = enumerator2.MoveNext();
}
else
{
// they're considered equivalent, only yield it once
yield return enumerator1.Current;
more1 = enumerator1.MoveNext();
more2 = enumerator2.MoveNext();
}
}
// Yield rest of values from non-exhausted source
while (more1)
{
yield return enumerator1.Current;
more1 = enumerator1.MoveNext();
}
while (more2)
{
yield return enumerator2.Current;
more2 = enumerator2.MoveNext();
}
}
}
Note that if one of the sources contains duplicates, you might see duplicates in the output. If you want to remove these duplicates in the already sorted lists, use the following method:
private static IEnumerable<T> CheapDistinct<T>(IEnumerable<T> source)
{
return CheapDistinct<T>(source, Comparer<T>.Default);
}
private static IEnumerable<T> CheapDistinct<T>(IEnumerable<T> source,
IComparer<T> comparer)
{
#region Parameter Validation
if (Object.ReferenceEquals(null, source))
throw new ArgumentNullException("source");
if (Object.ReferenceEquals(null, comparer))
throw new ArgumentNullException("comparer");
#endregion
using (IEnumerator<T> enumerator = source.GetEnumerator())
{
if (enumerator.MoveNext())
{
T item = enumerator.Current;
// scan until different item found, then produce
// the previous distinct item
while (enumerator.MoveNext())
{
if (comparer.Compare(item, enumerator.Current) != 0)
{
yield return item;
item = enumerator.Current;
}
}
// produce last item that is left over from above loop
yield return item;
}
}
}
Note that none of these will internally use a data structure to keep a copy of the data, so they will be cheap if the input is sorted. If you can't, or won't, guarantee that, you should use the 3.5 extension methods that you've already found.
Here's example code that calls the above methods:
String[] list_1 = { "apple", "orange", "apple", "banana" };
String[] list_2 = { "banana", "pear", "grape" };
Array.Sort(list_1);
Array.Sort(list_2);
IEnumerable<String> items = Merge(
CheapDistinct(list_1),
CheapDistinct(list_2));
foreach (String item in items)
Console.Out.WriteLine(item);
A: Why do you imagine that it would be inefficient? As far as I'm aware, both Concat and Distinct are evaluated lazily, using a HashSet behind the scenes for Distinct to keep track of the elements which have already been returned.
I'm not sure how you'd manage to make it more efficient than that in a general way :)
EDIT: Distinct actually uses Set (an internal class) instead of HashSet, but the gist is still correct. This is a really good example of just how neat LINQ is. The simplest answer is pretty much as efficient as you can achieve without more domain knowledge.
The effect is the equivalent of:
public static IEnumerable<T> DistinctConcat<T>(IEnumerable<T> first, IEnumerable<T> second)
{
HashSet<T> returned = new HashSet<T>();
foreach (T element in first)
{
if (returned.Add(element))
{
yield return element;
}
}
foreach (T element in second)
{
if (returned.Add(element))
{
yield return element;
}
}
}
A: string[] result = list1.Union(list2).ToArray();
from msdn: "This method excludes duplicates from the return set. This is different behavior to the Concat(TSource) method, which returns all the elements in the input sequences including duplicates."
A: Probably creating a hashtable with your values as keys (only adding those not already present) and then converting the keys to an array could be a viable solution.
A: You don't know which approach is faster until you measure it. The LINQ way is elegant and easy to understand.
Another way is to implement a set as a hash array (Dictionary) and add all the elements of both arrays to the set. Then use the set.Keys.ToArray() method to create the resulting array.
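The hashtable-as-set idea from the last two answers, sketched in Python for brevity (a C# version would use a Dictionary or HashSet the same way): each value is added as a key only once, so converting the keys back to a list yields the distinct merge.

```python
def merge_distinct(list1, list2):
    # dict keys act as a set; insertion order is preserved (Python 3.7+)
    seen = dict.fromkeys(list1)
    seen.update(dict.fromkeys(list2))
    return list(seen)

merge_distinct(["apple", "orange", "banana"], ["banana", "pear", "grape"])
# → ["apple", "orange", "banana", "pear", "grape"]
```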
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Python scope: "UnboundLocalError: local variable 'c' referenced before assignment" I am trying to figure out this:
c = 1
def f(n):
    print c + n
def g(n):
    c = c + n
f(1) # => 2
g(1) # => UnboundLocalError: local variable 'c' referenced before assignment
Thanks!
A: Errata for Greg's post:
It's not just that such variables can't be used before they are assigned - if a name is assigned anywhere in a function, it is local for the entire function, even before the line that assigns it. Take a look:
x = 1
def explode():
    print x # raises UnboundLocalError here
    x = 2
It explodes even though x is assigned after it's referenced.
In Python a variable is either local or refers to an outer scope, and that cannot change within a single function.
A: Within a function, variables that are assigned to are treated as local variables by default. To assign to global variables, use the global statement:
def g(n):
    global c
    c = c + n
This is one of the quirky areas of Python that has never really sat well with me.
A: Other than what Greg said, in Python 3.0 there will be the nonlocal statement to state "here are some names that are defined in the enclosing scope". Unlike global, those names have to be already defined outside the current scope, which makes it easier to track down names and variables. Nowadays you can't be sure where "global something" is exactly defined.
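A small sketch of the difference between the two statements (the nonlocal half uses the Python 3 syntax described above):

```python
c = 1

def g(n):
    global c      # rebind the module-level name instead of creating a local
    c = c + n

def make_counter():
    count = 0
    def bump():
        nonlocal count    # rebind the enclosing function's variable
        count += 1
        return count
    return bump

g(1)                      # c is now 2
counter = make_counter()
counter()                 # returns 1; state lives in the closure, not a global
```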
A: Global state is something to avoid, especially needing to mutate it. Consider whether g() should simply take two parameters, or whether f() and g() need to be methods of a common class with c as an instance attribute:
class A:
    c = 1
    def f(self, n):
        print self.c + n
    def g(self, n):
        self.c += n
a = A()
a.f(1)
a.g(1)
a.f(1)
Outputs:
2
3
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
} |
Q: What is your smallish web development company setup? I work for a small web development company (only 2 to 3 developers) that work on a wide range of projects for different customers (everything from CMS's to eCommerce sites).
Usually we work on our own projects but occasionally we need to work together on one. We use subversion as our source control software and sites are developed in .NET using VS 2008 and SQL 2005.
After reading through plenty of posts regarding unit testing and other "enterprise" level coding practices I was wondering what other developers do for small projects that only require minimal maintenance and have short development times?
I am thinking in particular of things like is unit testing necessary, etc.
A: Unit testing IS necessary no matter what. If you write code, you unit test. I work by myself a lot too. I still test. I don't know how I ever wrote code without it.
Here's the way I look at it: you don't necessarily need the same expensive tools as the big boys, but if you want to be big, you have to think big. Do the same things, follow the same practices. As your needs grow, get better tools. For example, if you do any UML diagramming, you probably don't need any software - just a whiteboard/paper. But still use UML. When your needs grow, you can look at getting special software.
Even project management. You don't necessarily need the expensive tools to track projects, as long as you do it. As your needs grow, you can get specialized software.
In short, do the same things you would do if you were bigger, but you don't necessarily need the same tools. Acquire them as needed.
EDIT: I should mention though that some of what you do is of course dependent on the methodologies/processes that you use. For example, do you do agile development? Being small, you don't necessarily need to do everything exactly the same. For example, I try to be agile, but I obviously don't pair program :). You just need to learn to adapt these practices to what works for you.
A: I think the biggest mistake you can make is to dismiss something for being an 'enterprise' level coding standard. Although things like CI, build servers, unit testing, coding standards (I can't think what else you may mean), etc. may carry an initial overhead, they will pay dividends in the long run. For example, if your project is hacked together now and three years later your customer wants to add a feature, you will be glad you put in the time to write unit tests now. (OK, this may or may not happen, but if your customer gets someone to look at your solution in the future and they see badly hacked code, they may not use you again.)
Remember as well: the more you do this stuff, the quicker it becomes.
A: Our development environment is pretty small as well. We mostly do Java web development (some PHP) as opposed to .NET or something else. We use ProjectLocker for our wiki, Subversion, and bug-tracking system. For code development it varies between NetBeans 6.1 and Eclipse, with MySQL as our database backend.
We've made it standard practice to write unit tests for our code. It makes upgrading our code base so much easier four months later.
A: The thing is: you sleep much better at night if you know that at least the unit tests ran through okay before you deployed that stuff.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Asp.Net MVC: How do I get Html.ActionLink to render integer values properly? I have an ASP.NET MVC application with a route similar to:
routes.MapRoute("Blog",
"{controller}/{action}/{year}/{month}/{day}/{friendlyName}",
new { controller = "Blog", action = "Index", id = "", friendlyName="" },
new { controller = @"[^\.]*",
year = @"\d{4}",
month = @"\d{2}",
day = @"\d{2}" }
);
My controller action method signature looks like:
public ActionResult Detail(int year, int month, int day, string friendlyName)
{ // Implementation... }
In my view, I'm doing something like:
<%= Html.ActionLink<BlogController>(item => item.Detail(blog.PostedOn.Year, blog.PostedOn.Month, blog.PostedOn.Day, blog.Slug), blog.Title) %>
While the url that is generated with ActionLink works, it uses query string variables rather than URL rewriting.
For example, it would produce /blog/detail/my-slug?year=2008&month=7&day=5 instead of /blog/detail/2008/07/05/my-slug
Is there a way to get the generic version of ActionLink to properly pad the integer values so that the url comes out as expected?
Thanks
Jim
A: The fact that your parameters are integers has nothing to do with your problem. The route definition you want to be used isn't actually being used, which is why the generated URL is using query string parameters instead of building the structure you want.
Routes are evaluated top-down, so you likely have a more generic route definition that is satisfying your requested URL generation. Try moving the route you displayed in this post to the top of your route definitions, and you'll see that your generated link is as you'd expect. Then look into modifying your route definitions to either be more specific, or just move them around as necessary.
Debugging these types of scenarios can be a huge pain. I'd suggest downloading Phil Haack's route debugger, it will make your life a lot easier.
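As a sketch of the ordering fix described above, assuming the standard `RegisterRoutes` method in Global.asax that the MVC project template generates (the "Default" route shown is the template's stock route), the specific blog route simply needs to be registered before the generic one:

```csharp
public static void RegisterRoutes(RouteCollection routes)
{
    // More specific routes must come first: routing picks the
    // first registered definition that matches.
    routes.MapRoute("Blog",
        "{controller}/{action}/{year}/{month}/{day}/{friendlyName}",
        new { controller = "Blog", action = "Index", friendlyName = "" },
        new { year = @"\d{4}", month = @"\d{2}", day = @"\d{2}" });

    // The generic catch-all route comes last.
    routes.MapRoute("Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = "" });
}
```

With the "Blog" route first, URL generation for the `Detail` action matches it before falling through to the query-string fallback.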
A: I would suggest formatting the year, month, and day as strings instead.
Think about this: will you be doing any math on these "integers"? Probably not, so there is really no point in making them integers.
Once you have them as strings you can force the leading-zero format.
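For example, a sketch of the leading-zero formatting in C#, where `blog.PostedOn` is the `DateTime` property from the question:

```csharp
// The custom numeric format "00" pads single digits with a leading zero.
string year  = blog.PostedOn.Year.ToString();       // e.g. "2008"
string month = blog.PostedOn.Month.ToString("00");  // e.g. "07"
string day   = blog.PostedOn.Day.ToString("00");    // e.g. "05"
```

With the action parameters changed to strings and these values passed to `ActionLink`, the generated URL comes out as /blog/detail/2008/07/05/my-slug.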
| {
"language": "en",
"url": "https://stackoverflow.com/questions/146367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |