Q: How can I set up a configuration file for .NET console applications? Is it possible to use a ".net configuration" file for a .NET console application?
I'm looking for an equivalent to web.config, but specifically for console applications...
I can certainly roll my own, but if I can use .NET's built-in configuration reader then I would like to do that... I really just need to store a connection string...
Thanks
A: This might help people dealing with Settings.settings and App.config: watch out for the GenerateDefaultValueInCode attribute in the Properties pane while editing any of the values (rows) in the Settings.settings grid in Visual Studio (VS2008 in my case). If you set GenerateDefaultValueInCode to True (True is the default here!), the default value is compiled into the exe (or dll); you can find it embedded in the file when you open it in a plain text editor. I was working on a console application, and because I had defaults in the exe, the application always ignored the config file placed in the same directory! Quite a nightmare, and no information about this on the whole internet.
A: Yes - use app.config.
Exactly the same syntax, options, etc. as web.config, but for console and WinForms applications.
To add one to your project, right-click the project in Solution Explorer, Add..., New Item... and pick "Application Configuration File" from the Templates box.
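Since the question is really about a connection string, here is a minimal sketch of both halves; the name "MyDb" and the connection string itself are placeholders:
<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <add name="MyDb"
         connectionString="Data Source=.;Initial Catalog=MyDb;Integrated Security=True" />
  </connectionStrings>
</configuration>
// C# side: requires a reference to System.Configuration.dll
using System.Configuration;

string connStr = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
Note that ConfigurationManager.ConnectionStrings is the .NET 2.0+ API; on 1.x you would read the value from appSettings instead.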
A: app.config... If you have an App.config in your project, it will get copied as executableName.exe.config in the case of a console application.
A: Yes, it's possible. You just need to make an app.config file.
A: Yes. Look up "application configuration file" in the documentation.
A: Since I haven't fully made the leap to TDD yet (though I hope to on some upcoming project), I use a console app to test the library code that I produce for another web developer in our company to use.
I use app.config for all of those settings, and as @Dylan says above the syntax is exactly the same between that and web.config, which means I can also just hand the content of my app.config over to the other dev and he can put them directly in his web.config. Very handy.
A: I asked almost the same question some days ago and got really good answers, take a look:
Simplest way to have a configuration file in a Windows Forms C# Application
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Can I map local branches to remote branches with different prefixes in git? We're working with a semi-centralized git repository here where I work. Each developer has their own subtree in the central git repository, so it looks something like this:
master
alice/branch1
alice/branch2
bob/branch1
michael/feature
release/1.0
release/1.1
Working locally in my tree I have topic/feature, which corresponds to michael/feature in the central tree.
I've been using
git push origin topic/feature:michael/feature
to push my changes to the remote tree. However, this is cumbersome and prone to mistakes (e.g. omitting the developer name, misspelling the feature name, etc.).
I'm looking for a cleaner way to do this. For instance, "git push". I suspect that setting a different remote with a modified fetch refspec will do it, but I'm not sure how exactly to do it. I'm also not sure how to modify my current branch definitions to use the different remote.
My current .git/config looks something like:
[remote "origin"]
url = git://central/git/project
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "topic/feature"]
remote = origin
merge = refs/heads/michael/project
Edit: I'd also like to apply this to pulls/fetches. But does the branch.<name>.merge take care of that?
I'll continue to research this and post here if I find something, but I'm hoping to get some other good ideas.
Edit 2: I've decided I'll keep local and remote branch names the same. It appears it will be the least work and least prone to future problems.
A: In your [remote "origin"] section, add one line per mapping. Including master to master.
push = refs/heads/master:master
push = refs/heads/topic/feature:michael/feature
I'm not sure how to do it with the git-config command.
Be aware that from now on, all branches are pushed at the same time when you do a straight git push (with no params).
Would you care to explain why you don't keep the same branch names locally and remotely?
A: If you can, I suggest you use the same branch names locally & remotely. Then git push will push all of your local branches to corresponding branches in the central repository.
To use different prefixes in the local and remote repos, you need to add a mapping to your config file each time you create a new feature branch. The command to add the mapping for topic/BRANCH_NAME is
git config --add remote.origin.push refs/heads/topic/BRANCH_NAME:refs/heads/michael/BRANCH_NAME
(--add appends a new push refspec instead of overwriting the existing one.)
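After a couple of features, the remote section of .git/config would then look something like this (a sketch; branch names are examples):
[remote "origin"]
    url = git://central/git/project
    fetch = +refs/heads/*:refs/remotes/origin/*
    push = refs/heads/topic/feature:refs/heads/michael/feature
    push = refs/heads/topic/other:refs/heads/michael/other
The fully qualified destination (refs/heads/...) lets git create the remote branch if it does not exist yet.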
A: Use
git branch --set-upstream-to=origin/remoteBranch localBranch
where origin/remoteBranch is the remote branch, and localBranch is the local branch name.
A: You can map your branch to a different tracking branch on the remote with something like this:
git remote add heroku git@heroku.com:YOURAPPNAME.git
git checkout -b heroku -t heroku/master
Your config ends up similar to what @Paul suggests (a tad "simpler" actually).
See this gist (with tweaks by me) for usage steps that work well for me https://gist.github.com/2002048.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Reload configurations without restarting Emacs How do I load the edited .emacs file without restarting Emacs?
A: In the *scratch* buffer, type:
(load-file user-init-file)
Then press C-x C-e to evaluate the expression.
A: M-x load-file and then choose the .emacs file should also work
A: M-x load-file ~/.emacs
eval-buffer when the .emacs file is opened
eval-region when you want to apply selected lines
C-x C-e evaluates the preceding expression
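If you reload often, a tiny interactive wrapper is handy; this is just a sketch relying on the standard user-init-file variable, and the key binding is an arbitrary example:
(defun my/reload-init ()
  "Reload the user init file without restarting Emacs."
  (interactive)
  (load-file user-init-file))

(global-set-key (kbd "C-c r") #'my/reload-init) ; example binding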
A: M-x load-file ENTER
~/.emacs
ENTER
(source)
A: Open the .emacs file, select its contents, and run M-x eval-region (note that C-x C-e only evaluates the single expression before point).
A: I usually use M-x load-file. But be aware that some initialization is only done the first time through; things like libraries that set their defaults when loaded don't get reloaded the second time through. It's always a good idea to start up Emacs from scratch as a final check that everything works OK.
A: M-x eval-buffer
A: you can use C-x C-e which will evaluate an s-expression. Make sure the cursor is at the last parenthesis of the elisp code.
A: I use and recommend the restart-emacs package on MELPA.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "94"
} |
Q: What is the difference between CouchDB and Lotus Notes? I was looking into the possibility of using CouchDB. I heard that it was similar to Lotus Notes which everyone loves to hate. Is this true?
A: Damien Katz worked at Iris (Lotus), but he was not the guy behind the Notes Database. He is well-known in the Lotus Notes community for redesigning the Notes Formula Engine.
There are definitely some similarities between CouchDB and Lotus Notes, such as their document-oriented, non-relational data, and replication capabilities, but they are more disparate than similar. CouchDB is a database server and Lotus Notes is an enterprise-level collaboration platform.
A: @Lex, you should perhaps say what version of Notes/Domino you are working on, because your comments are incorrect.
"No transaction support" - Domino has transactional logging. If you want more complex transaction logging, that is also available within coding.
"not well suited for handling multiple data transactions" - Actually it handles them just fine. You have document locking and replication conflict resolution. It depends a lot on how you set up your application to handle workflow.
"No separation between production/dev environments." - False. The only way this could be true is if you had a badly deployed environment. Developers normally should have zero access to deploy design changes to the production environment. They would work off a template which does not replicate to the main servers. Once updates are done and approved, the administrator deploys them. They do this by taking the template, signing it with a controlled signature allowed to run on production, then dropping the template in and updating the design of the related applications.
"The more data lotus notes contains, the more views will likely get created" - This comment makes absolutely no sense whatsoever. I don't believe you have used Notes/Domino in any professional capacity.
"lotus script is not object oriented" - Yes, you make good points there. However, it doesn't mean that the language is flawed. Also, they have made a large number of improvements since 8.x and with 8.5.1. For example, built-in web services support (point to a WSDL and LS code is made for you). 8.5.1 also has a lot of new designer features like code templates, auto-completion, LSDoc popup help on your own functions, etc.
You also only touch on LotusScript. Yet you can also code in:
Java, SSJS/DOJO (XPages), Javascript, @Formula language, Web Services (SOAP/REST), C-API, Eclipse Plugins(RCP). Output in JSON as well as XML.
8.5.1 Designer client is now free to download if you want to test it out.
So while I believe I am not in a position to comment on CouchDb you most certainly are not on Notes/Domino.
A: Development of Lotus Notes began over 20 years ago, with version 1 released in 1989. It was developed by Ray Ozzie, currently Chief Software Architect for Microsoft.
Lotus Notes (the client) and Domino (the server) have been around for a long time and are mature well featured products. It has:
* A full client-server stack with rapid application design and deployment of document-oriented databases.
* A full public key infrastructure for security and encryption.
* A robust replication model and active-active clustering across heterogeneous platforms (someone once showed a Domino cluster with an Xbox and a huge AIX server).
* A built-in native directory for managing users that can also be accessed over LDAP.
* A built-in native mail system that can scale to manage millions of users with multi-GB mail files, with live server access or replicated locally for off-line access. This can interface with standard internet mail through SMTP and also has POP and IMAP access built in. The mail infrastructure is a core feature that is available to all applications built on Notes Domino (any document in a database can be mailed to any other database with a simple doc.send() command).
* A built-in HTTP stack that allows server-hosted databases to be accessed over the web.
* A host of integration options for accessing, transferring and interoperating with RDBMS and ERP systems, with a closely coupled integration with DB2 available, allowing Notes databases to be backed by a relational store where desired.
Backwards compatibility has always been a strong feature of Notes Domino and it is not uncommon to find databases that were developed for version 3 running flawlessly in the most up to date versions. IBM puts a huge amount of effort into this and it has a large bearing on how the product currently operates.
-
CouchDB was created by Damien Katz, starting development in 2004. He had previously worked for IBM on Notes Domino, developing templates and eventually completely rewriting one of the core features, the formula engine, for ND6.
CouchDB shares a basic concept of a document oriented database with views that Notes Domino has.
In this model, "documents" are just arbitrary collections of values that are stored somehow. In CouchDB the documents are JSON objects of arbitrary complexity. In Notes the values are simple name-value pairs, where the values can be strings, numbers, dates or arrays of those.
Views are indexes of the documents in the database, displaying certain values, calculating others and excluding undesired docs. Once the index is built, it is incrementally updated when any document in the database changes (created, updated or deleted).
In CouchDB, views are built by running a mapping function on each document in the database. The mapping function calls an emit method with a JSON object for every index entry it wants to create for the given document. This JSON object can be arbitrarily complex. CouchDB can then run a second reducing function on the mapped index of the view.
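To make that concrete, a typical map/reduce pair might look like this (a sketch; the doc fields are made up):
// map: one index row per "order" document
function (doc) {
  if (doc.type === "order") {
    emit(doc.customer, doc.total);
  }
}

// reduce: total per customer (sum is a helper built into CouchDB's view server)
function (keys, values, rereduce) {
  return sum(values);
}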
In Notes Domino, views are built by running a select function (written in the Notes Domino formula language) on each document in the database. The select function simply defines whether the document should be in the view or not. Notes Domino view design also defines a number of columns for the view. Each column has a formula that is run against the selected document to determine the value for that column.
CouchDB is able to produce much more sophisticated view indexes than Notes Domino can.
CouchDB also has a replication system.
-
Summary (TL;DR): CouchDB is brand-new software that is developing a core with a conceptual design similar to, but far more sophisticated than, the one used in Lotus Notes Domino. Lotus Notes Domino is a mature, fully featured product that is capable of being deployed today. CouchDB is starting from scratch, building a solid foundation for future feature development. Lotus Notes Domino is continuing to develop new features, but is doing so on a 20-year-old platform that strives to maintain backwards compatibility. There are features in Notes Domino that you might wish were in CouchDB, but there are also features in Notes Domino that are anachronistic in today's world.
A: It is the Notes application and UI that people usually hate, not the architecture behind it.
A: Lotus Notes client/Domino server is comprised of an object("document")-storage (not relational) mechanism, has fully integrated certificate-based security model / user management and conflict-resolution for syncing offline/online changes to data - it's a platform for distributed applications.
"CouchDB is a document-oriented, Non-Relational Database Management Server (NRDBMS)."
CouchDB is accessible via a REST style API.
A: There's a podcast interview with Jan Lehnardt of the CouchDB team here.
Without going back and listening to it again, I believe that Damien Katz, who was the initiator and is still the lead developer of CouchDB, was also the guy behind the Notes database. So there's a sense in which CouchDB is a better Notes DB, I guess. He explains some of the differences in his blog.
A: It's similar to how Notes deals with data in that everything is a document of arbitrary structure, and you have views over those documents instead of tables and records like you'd have in a relational database. The replication etc also has some similarities.
There isn't anything wrong with the Notes server architecture, people don't hate that so much. It's more the implementation and bloat that comes with Notes.
CouchDB has no front end either, just a server component. The Notes client sucks, and that is what people REALLY hate. Have you ever tried to email uh I mean "memo" something from Notes? Not pleasant :(
A: Comparing Apples & Oranges
Lotus Notes Domino hasn't changed much and there is not a NoSQL service option on-prem or cloud for Notes Domino v12 or any earlier version. Domino is not cloud based tech.
When it comes to NoSQL, Domino uses NoSQL only for its own application solutions built in Domino. There was an attempt with Domino Access Services, which is based on Java 6; the REST API still uses Vectors in v12. This service is OK, not robust; it provides a way to interface with data in an NSF. Remember, Domino is key-value-pair storage and very slow on large data sets because of the security model: each document is checked for readers and authors with every search to determine whether the document can be viewed by the user. Domino is still Web 1.0.
With CouchDB one can build an app on mobile and deploy it. There is no way to do the same with Notes/Domino because of the Domino server. Domino development also only supports MS Windows, and the IDE is based on older versions of Eclipse; to this day (v12) there is no way to use dual monitors with the Domino IDE. Ask any Domino developer: they hate being forced to use an IDE on a specific platform that cannot keep up with the industry.
Couch has gone through many changes as well, brief history:
* CouchDB started by Damien Katz, IBM Lotus Domino engineer
* Apache project BigCouch is born; scalability and clustering added
* Cloudant is born; Big Data and IBM funding and an IBM Cloud offering
* CouchDB 2.0 is born; Cloudant's BigCouch work merged back into CouchDB
* CouchDB 3.0 is born; enhanced security and prep for FoundationDB
* CouchDB 4.0 is born; architecture changed to Apple's FoundationDB
https://www.dataengineeringpodcast.com/couchdb-document-database-episode-124/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Again with JPA not making sense I asked this earlier and was told to look at mapped by.
I have 2 tables:
A: s_id (key), name, cli, type
B: sa_id (key), s_id, user, pwd
So in JPA I have:
@Entity
class A ... {
    @OneToMany(fetch=FetchType.EAGER)
    @JoinTable(name="A_B",
        joinColumns={@JoinColumn(name="a_id", table="a", unique=false)},
        inverseJoinColumns={@JoinColumn(name="b_id", table="b", unique=true)})
    Collection getB() {...}
}
class B is just a basic entity class with no reference to A.
Hopefully that is clear. My question is: Do I really need a join table to do such a simple join? Can't this be done with a simple JoinColumn or something?
So now I have this, but JPA is trying to write the query with a new column that doesn't exist (s_s_id):
@Entity
class A {
    ...
    @OneToMany(fetch=FetchType.EAGER, mappedBy="s")
    Collection<B> x;
}

@Entity
class B {
    @ManyToOne
    A s;
}
With these tables:
A: s_id (key), name, cli, type
B: sa_id (key), s_id (the key from A), user, pwd
How can I do the OneToMany and ManyToOne joins such that I don't need a new column nor a new table? Please give me an example. Is the issue the lack of a foreign key in the B table?
If I leave out the mappedBy I get: Unknown column 't1.S_S_ID' in 'field list'
If I put in the mappedBy I get: Unknown column 'S_S_ID' in 'field list'
A: I found it, I need to add the @JoinColumn and give it a name...
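For anyone landing here later, the fix presumably looks something like the sketch below, based on the tables above. Without an explicit @JoinColumn, JPA derives a default foreign key column name of the form property_referencedPK, which is exactly the nonexistent s_s_id in the error messages:
@Entity
class B {
    @ManyToOne
    @JoinColumn(name = "s_id") // the FK column that already exists in table B
    A s;
}
The A side keeps @OneToMany(fetch = FetchType.EAGER, mappedBy = "s"), so no join table and no new column are required.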
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Fast pseudo random number generator for procedural content I am looking for a pseudo random number generator which would be specialized to work fast when it is given a seed before generating each number. Most generators I have seen so far assume you set the seed once and then generate a long sequence of numbers. The only thing I have seen so far that looks somewhat similar is Perlin noise, but it generates too "smooth" data: for similar inputs it tends to produce similar results.
The declaration of the generator should look something like:
int RandomNumber1(int seed);
Or:
int RandomNumber3(int seedX, int seedY, int seedZ);
I think having good RandomNumber1 should be enough, as it is possible to implement RandomNumber3 by hashing its inputs and passing the result into the RandomNumber1, but I wrote the 2nd prototype in case some implementation could use the independent inputs.
The intended use for this generator is to use it for procedural content generator, like generating a forest by placing trees in a grid and determining a random tree species and random spatial offsets for each location.
The generator needs to be very efficient (below 500 CPU cycles), because the procedural content is created in huge quantities in real time during rendering.
A: Yep, you are looking for a fast integer hash algorithm rather than a PRNG.
This page has a few algorithms, I'm sure you'll find plenty more now you know the correct search terms.
Edit: The original page has been removed, a live version can be found on GitHub.
A: Here's a small random number generator developed by George Marsaglia. He's an expert in the field, so you can be confident the generator has good statistical properties.
v = 36969*(v & 65535) + (v >> 16);
u = 18000*(u & 65535) + (u >> 16);
return (v << 16) + (u & 65535);
Here u and v are unsigned ints. Initialize them to any non-zero values. Each time you generate a random number, store u and v somewhere. You could wrap this in a function to match your signature above (except the ints are unsigned.)
A: Seems like you're asking for a hash-function rather than a PRNG. Googling 'fast hash function' yields several promising-looking results.
For example:
uint32_t hash(uint32_t a)
{
    a = (a ^ 61) ^ (a >> 16);
    a = a + (a << 3);
    a = a ^ (a >> 4);
    a = a * 0x27d4eb2d;
    a = a ^ (a >> 15);
    return a;
}
Edit: Yep, some hash functions definitely look more suitable than others. For your purposes, it should be sufficient to eyeball the function and check that a single-bit change in the input propagates to lots of output bits.
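Given such a hash, both prototypes from the question can be built on top of it; this is one arbitrary mixing scheme among many:
int RandomNumber1(int seed)
{
    return (int)hash((uint32_t)seed);
}

int RandomNumber3(int seedX, int seedY, int seedZ)
{
    /* chain the hash so nearby coordinates still diverge */
    return (int)hash((uint32_t)seedX ^ hash((uint32_t)seedY ^ hash((uint32_t)seedZ)));
}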
A: See std::tr1::ranlux3, or other random number generators that are part of the TR1 additions to the standard C++ library. I suggested mt19937 initially, but then saw your note that it needs to be very fast. TR1 should be available on Microsoft VC++ and GCC, and can also be found in the boost libraries, which support even more compilers.
example adapted from boost documentation:
#include <random>
#include <iostream>
#include <iterator>
#include <functional>
#include <algorithm>
#include <ctime>
using namespace std;
using namespace std::tr1;
int main() {
    random_device trueRand;
    ranlux3 rng(trueRand);   // produces randomness out of thin air
                             // see pseudo-random number generators
    uniform_int<> six(1,6);  // distribution that maps to 1..6
                             // see random number distributions
    variate_generator<ranlux3&, uniform_int<> > die(rng, six); // glues randomness with mapping

    // simulate rolling a die
    generate_n(ostream_iterator<int>(cout, " "), 10, ref(die));
}
example output:
2 4 4 2 4 5 4 3 6 2
Any TR1 random number generator can seed any other random number generator. If you need higher quality results, consider feeding the output of mt19937 (which is slower, but higher quality) into a minstd_rand or ranlux3, which are faster generators.
A: If memory is not really an issue and speed is of utmost importance then you can prebuild a large array of random numbers and just iterate through it at runtime. For example, have a separate program generate 100,000 random numbers and save them as their own file like
unsigned int randarray []={1,2,3,....}
then include that file into your compile and at runtime your random number function only needs to pull numbers from that array and loop back to the start when it hits the end.
A: I use the following code in my Java random number library - this has worked pretty well for me. I also use this for generating procedural content.
/**
* State for random number generation
*/
private static volatile long state=xorShift64(System.nanoTime()|0xCAFEBABE);
/**
* Gets a long random value
* @return Random long value based on static state
*/
public static long nextLong() {
long a=state;
state = xorShift64(a);
return a;
}
/**
* XORShift algorithm - credit to George Marsaglia!
* @param a initial state
* @return new state
*/
public static final long xorShift64(long a) {
a ^= (a << 21);
a ^= (a >>> 35);
a ^= (a << 4);
return a;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Possible Memory leak in Number of Loaded classes in Java Application I recently began profiling an OSGi Java application that I am writing using VisualVM. One thing I have noticed is that when the application starts sending data to a client (over JMS), the number of loaded classes starts increasing at a steady rate. The heap size and the PermGen size remain constant, however. The number of classes never falls, even after it stops sending data. Is this a memory leak? I think it is, because the loaded classes have to be stored somewhere; however, the heap and PermGen never increase even after I run the application for several hours.
For the screenshot of my profiling application go here
A:
Are you dynamically creating new classes on the fly somehow?
Thanks for your help. I figured out what the problem is. In one of my classes, I was using JAXB to create an XML string. In doing this, JAXB uses reflection to create new classes.
JAXBContext context = JAXBContext.newInstance(this.getClass());
So although the JAXBContext wasn't staying around in the heap, the classes had been loaded.
I have run my program again, and I see a normal plateau as I would expect.
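For reference, the usual remedy is to build the context once and reuse it (a sketch, not the poster's exact code; MyMessage is a hypothetical element class). JAXBContext itself is documented to be thread-safe, so a single shared instance is fine:
// Create the (expensive, class-loading) context once instead of per call
private static final JAXBContext JAXB_CONTEXT;
static {
    try {
        JAXB_CONTEXT = JAXBContext.newInstance(MyMessage.class);
    } catch (JAXBException e) {
        throw new ExceptionInInitializerError(e);
    }
}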
A: You might find some HotSpot flags to be of use in understanding this behavior, like:
* -XX:+TraceClassLoading
* -XX:+TraceClassUnloading
This is a good reference:
http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
A: I'm willing to bet that your problem is related to bytecode generation.
Many libraries use CGLib, BCEL, Javasist or Janino to generate bytecode for new classes at runtime and then load them from controlled classloader. The only way to release these classes is to release all references to the classloader.
Since the classloader is held by each class it loaded, this also means that you need to release the references to all of those classes as well [1]. You can catch these with a decent profiler (I use YourKit: search for multiple classloader instances with the same retained size).
One catch is that the JVM does not unload classes by default (the reason is backwards compatibility: people assume, wrongly, that static initializers will be executed only once; the truth is that they get executed every time a class is loaded). To enable unloading, you should use the following options:
-XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
(tested with JDK 1.5)
Even then, excessive bytecode generation is not a good idea, so I suggest you look in your code to find the culprit and cache the generated classes. Frequent offenders are scripting languages, dynamic proxies (including the ones generated by application servers) or huge Hibernate model (in this case you can just increase your permgen).
See also:
* http://blogs.oracle.com/watt/resource/jvm-options-list.html
* http://blogs.oracle.com/jonthecollector/entry/presenting_the_permanent_generation
* http://forums.sun.com/thread.jspa?messageID=2833028
A: Unless I misunderstand, we're looking here at loaded classes, not instances.
When your code first references a class, the JVM has the ClassLoader go out and fetch the information about the class from a .class file or the like.
I'm not sure under what conditions it would unload a class. Certainly it should never unload any class with static information.
So I would expect a pattern roughly like yours, where as your application runs it goes into areas and references new classes, so the number of loaded classes would go up and up.
However, two things seem strange to me:
* Why is it so linear?
* Why doesn't it plateau?
I would expect it to trend upwards, but in a wobbly line, and then taper off on the increase as the JVM has already loaded most of the classes your program references. I mean, there are a finite number of classes referenced in most applications.
Are you dynamically creating new classes on the fly somehow?
I would suggest running a simpler test app through the same debugger to get a baseline case. Then you could consider implementing your own ClassLoader that spits out some debug information, or maybe there is a tool to make it report.
You need to figure out what these classes being loaded are.
A: Yes, it's usually a memory leak (since we don't really deal with memory directly, it's more of a class instance leak). I've gone through this process before and usually it's some listener added to an old toolkit that didn't remove itself.
In older code, a listener relationship causes the "listener" object to remain around. I'd look at older toolkits or ones that haven't been through many revs. Any long-existing library running on a later JDK would know about reference objects, which remove the requirement for "removeListener".
Also, call dispose on your windows if you recreate them each time. I don't think they ever go away if you don't. (Actually there is also a dispose-on-close setting.)
Don't worry about Swing or JDK listeners, they should all use references so you should be okay.
A: Use the Eclipse Memory Analyzer to check for duplicated classes and memory leaks. It might happen that the same class gets loaded more than once.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Quickest way to implement a C++ Win32 Splash Screen What's a simple way to implement a C++ Win32 program to...
- display an 800x600x24 uncompressed bitmap image
- in a window without borders (the only thing visible is the image)
- that closes after ten seconds
- and doesn't use MFC
A: If you're targeting modern versions of Windows (Windows 2000) and above, you can use the UpdateLayeredWindow function to display any bitmap (including one with an alpha channel, if so desired).
I blogged a four-part series on how to write a C++ Win32 app that does this. If you need to wait for exactly ten seconds to close the splash screen (instead of until the main window is ready), you would need to use Dan Cristoloveanu's suggested technique of a timer that calls DestroyWindow.
A: Register a class for the splash window and create a window using these styles:
* WS_POPUPWINDOW: will make sure your window has no caption/sysmenu.
* WS_EX_TOPMOST: will keep the splash screen on top of everything. Note that this is a bit intrusive. It might be better to just make the splash window a child of your main window. You may have to manipulate the z-order, though, to keep any other popup windows (if you create any) below the splash screen.
Use CreateDIBSection to load the bitmap. It should be easy, since BMP files are essentially dumps of DIB structures. Or do what Ken said and use LoadImage.
Handle the WM_PAINT or WM_ERASEBKGND message to draw the bitmap on the window.
On WM_CREATE set a timer of 10 seconds and when Windows sends the WM_TIMER message, have the window destroy itself.
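That timer logic is only a few lines in the window procedure; a minimal sketch (timer ID 1 is an arbitrary choice):
case WM_CREATE:
    SetTimer(hwnd, 1, 10000, NULL);  // fire after ~10 seconds
    return 0;
case WM_TIMER:
    KillTimer(hwnd, 1);              // one-shot: stop it from repeating
    DestroyWindow(hwnd);             // close the splash
    return 0;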
A: The key point here is to use a layered window.
You can start with a Win32 wizard-generated project and change the CreateWindow call to CreateWindowEx, passing WS_EX_LAYERED as the extended window style and a combination of WS_POPUP and WS_SYSMENU as the window style. When you do that and launch your application, it will be invisible. Then you should use UpdateLayeredWindow to paint your image. You may also need the AlphaBlend function if you want to use a PNG image with an alpha layer.
Hope this helps!
A:
* Use LoadImage to load the bitmap.
* Use CreateWindowEx to create the window.
* In the window proc, capture WM_PAINT. Use BitBlt to paint the bitmap.
A: You can:
* Create a dialog in your resource file
* Have it contain a Picture control
* Set the picture control type to Bitmap
* Create/import your bitmap in the resource file and set that bitmap ID to the picture control in your dialog
* Create the window by using CreateDialogParam
* Handle WM_INITDIALOG in order to set a timer for 10 seconds (use SetTimer)
* Handle WM_TIMER to catch your timer event and to destroy the window (use DestroyWindow)
A: It's a Win32 API FAQ. See the professional Win32 API forum
news://194.177.96.26/comp.os.ms-windows.programmer.win32
where it has been answered hundreds of times over 20 years.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Spring: Obtaining ResourceBundle based on MessageSource I'm using the Hibernate Validator framework with Spring. A class implementing the Spring Validator interface validates objects with Hibernate's ClassValidator. To localize ClassValidator's error messages I need to pass a ResourceBundle into the class' constructor. My ApplicationContext has a MessageSource bean (ReloadableResourceBundleMessageSource) which is used throughout the application. It makes sense to use this same MessageSource for the ClassValidator. But how do I convert the MessageSource to a ResourceBundle? Is there any adapter class?
A: MessageSourceResourceBundle sounds like what you are looking for. (Haven't tried it myself.)
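Assuming it behaves like a normal adapter, the wiring would look something like this sketch (the entity class and locale are placeholders, and the ClassValidator constructor taking a ResourceBundle is the one mentioned in the question):
ResourceBundle bundle = new MessageSourceResourceBundle(messageSource, Locale.getDefault());
ClassValidator<Contact> validator = new ClassValidator<Contact>(Contact.class, bundle);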
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Where can I find an open source C# project that uses ADO.NET? I am trying to write a Windows Form and ASP.NET C# front-end and MSAccess backend for a pretty small database concept I have.
I have written this application once before in just MSAccess but I now need the app and database to be in different places. I have now figured out (thanks to a StackOverflow user) that ADO will be a bad choice because it has to have a connection open all of the time.
I bought Microsoft ADO.Net 2.0 Step-by-Step and I have read through some of it and understand (I think) the basic concepts at play in ADO.NET. (Datasets and the like)
Where I get confused is the actual implementation. What I want to know is do any of you know of a C# project that has a database backend which is open source that I can go look at the code and see how they did it. I find I learn better that way. The book has a CD with code examples that I may turn to, but I would rather see real code in a real app.
A: Take a look at the MySQL .NET connector. It shows the nuts and bolts of how the ADO.NET classes talk to the DB engine. ADO.NET as a whole does not keep connections open; certain higher-level classes do. Technically the lower-level objects such as the connection and command objects are part of ADO.NET, but you have a high degree of control over them.
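To see the disconnected pattern in action, here is a minimal sketch against an Access file (the connection string and table name are placeholders); the DataAdapter opens and closes the connection internally:
using System;
using System.Data;
using System.Data.OleDb;

string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=contacts.mdb";
var adapter = new OleDbDataAdapter("SELECT * FROM Contacts", connStr);
var table = new DataTable();
adapter.Fill(table);               // connection is opened and closed inside Fill
Console.WriteLine(table.Rows.Count);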
A: I haven't used this but it looks like it might be a good fit:
http://www.codeproject.com/KB/database/DBaseFactGenerics.aspx
A: Check CodePlex, they have a ton of .NET projects. I can't think of specific ones that fit your requirements, but you should be able to find something.
www.codeplex.com
A: I found this post http://www.codeproject.com/KB/database/DatabaseAcessWithAdoNet1.aspx by searching for ADO.NET on CodeProject, so I am going to give Chris Porter the answer points. Thanks everyone for the help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can you display Typing Speed using Javascript or the jQuery library? I would like to add a typing speed indicator just below the textarea we use on our contact form. It is just for fun and to give the user some interactivity with the page while they are completing the form.
It should display the average speed while typing and keep the last average when the keystrokes are idle. When they leave the textarea the last average should stick.
Ideally I would like to have a jQuery plugin if it is available.
[Edit] this was originally for just a few of my websites. But after I posted the question it struck me how this would be a neat feature for SO. If you agree vote here
A: It's my own ego involvement:
<textarea id="b" onblur="clc();"></textarea>
<script>
t=0;
document.getElementById('b').onkeypress=function()
{
t==0 ? s=new Date() : e=new Date();
t=1;
}
function clc()
{
d = e.getTime() - s.getTime();
c = b.value.length;
b.value += "\n"+c+"s in "+d+"ms: "+c/d+" cpms";
}
</script>
I spent over a week on this, learning JavaScript from zero (cutting and cutting). This will be a good starting point. It's now so simple that it needs barely any explanation. You could add anything to it. It's the best-known minimalist and purest form.
It just puts everything into the textarea:
A: Here's a tested implementation, which seems OK, but I don't guarantee the math.
A Demo: http://jsfiddle.net/iaezzy/pLpx5oLf/
And the code:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Type Speed</title>
<script type="text/javascript" src="js/jquery-1.2.6.js"></script>
<style type="text/css">
form {
margin: 20px auto;
width: 500px;
}
#textariffic {
width: 400px;
height: 400px;
font-size: 12px;
font-family: monospace;
line-height: 15px;
}
</style>
<script type="text/javascript">
$(function() {
$('textarea')
.keyup(checkSpeed);
});
var iLastTime = 0;
var iTime = 0;
var iTotal = 0;
var iKeys = 0;
function checkSpeed() {
iTime = new Date().getTime();
if (iLastTime != 0) {
iKeys++;
iTotal += iTime - iLastTime;
iWords = $('textarea').val().split(/\s/).length;
$('#CPM').html(Math.round(iKeys / iTotal * 60000)); // keys per ms scaled to per minute
$('#WPM').html(Math.round(iWords / iTotal * 60000));
}
iLastTime = iTime;
}
</script>
</head>
<body>
<form id="tipper">
<textarea id="textariffic"></textarea>
<p>
<span class="label">CPM</span>
<span id="CPM">0</span>
</p>
<p>
<span class="label">WPM</span>
<span id="WPM">0</span>
</p>
</form>
</body>
</html>
A: Typing speed is generally computed in words per minute minus a penalty for typos. To do this it seems like you'd need an inline spell-checker at the very least, plus some heavy lifting for languages and encoding schemes. (And then it starts to get complicated; when does the clock start, for instance? What do you do about people who are entering code? How about copy-and-pasting?)
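For reference, the conventional typing-test formulas (standard definitions, not from the answer above) are:
// gross WPM = (all typed characters / 5) / elapsed minutes
// net WPM   = gross WPM - (uncorrected errors / elapsed minutes)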
A: A horribly simple, untested implementation:
var lastrun = new Date();
textarea.onkeyup = function() {
var words = textarea.value.split(' ');
var minutes_since_last_check = somefunctiontogetminutesdifference(new Date(), lastrun);
var wpm = (words.length-1)/minutes_since_last_check;
//show the wpm in a div or something
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Hiding a function I have a class holding complex scientific computations. It is set up to only allow a user to create a properly instantiated case. To properly test the code, however, requires setting internal state variables directly, since the reference documents supply this data in their test cases. Done improperly, however, it can invalidate the state.
So I must have the ability, a member function, to set internal variables from the unit test programs. But I want to strongly discourage normal users from calling this function. (Yes, a determined user can muck with anything... but I don't want to advertise that there is a way to do something wrong.)
It would be nice to be able to tell Intellisense to not show the function, for instance.
The best solution I have at the moment is to just name the function something like: DangerousSet().
What other options do I have?
Follow-Up
I found Amy B's answer most useful to my situation. Thanks!
Mufasa's suggestion to use reflection was great, but harder to implement (for me).
Chris' suggestion of using a decorator was good, but didn't pan out.
BFree's suggestion on XML is also good, and was already in use, but doesn't really solve the problem.
Finally, BillTheLizard's suggestion that the problem is in the source documents is not something I can control. International experts publish highly technical books and journal articles for use by their community. The fact that they don't address my particular needs is a fact of life. There simply are no alternative documents.
A: Decorate your method with this attribute:
[System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Never)]
This will hide it from Intellisense.
EDIT:
But apparently this has a rather significant caveat: "In Visual C#, EditorBrowsableAttribute does not suppress members from a class in the same assembly." Via MSDN.
A: Suppose you want to test this object by manipulating its fields.
public class ComplexCalculation
{
protected int favoriteNumber;
public int FavoriteNumber
{
get { return favoriteNumber; }
}
}
Place this object in your test assembly/namespace:
public class ComplexCalculationTest : ComplexCalculation
{
public void SetFavoriteNumber(int newFavoriteNumber)
{
this.favoriteNumber = newFavoriteNumber;
}
}
And write your test:
public void Test()
{
ComplexCalculationTest myTestObject = new ComplexCalculationTest();
myTestObject.SetFavoriteNumber(3);
ComplexCalculation myObject = myTestObject;
if (myObject.FavoriteNumber == 3)
Console.WriteLine("Win!");
}
PS: I know you said internal, but I don't think you meant internal.
A: It sounds like your real problem is in your reference documents. You shouldn't test cases that are impossible to encounter under proper use of your class. If users shouldn't be allowed to change the state of those variables, then neither should your tests.
A: You can use InternalsVisibleToAttribute to mark internal members as visible to your test assembly. It seems to shine when used in this context, though its not quite "friend".
* Mark your DangerousSet function internal instead of public.
* In Properties\AssemblyInfo.cs of the project containing DangerousSet:
[assembly:InternalsVisibleTo("YourTestAssembly")]
If you have two test assemblies for whatever reason, the syntax is:
[assembly:InternalsVisibleTo("TestAssembly1"),
InternalsVisibleTo("TestAssembly2")]
A: You can also use reflection. Google search turned up Unit testing private methods using reflection.
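For completeness, the reflection route in a test would look something like this sketch, reusing the favoriteNumber field from the example above:
using System.Reflection;

var calc = new ComplexCalculation();
var field = typeof(ComplexCalculation).GetField(
    "favoriteNumber", BindingFlags.Instance | BindingFlags.NonPublic);
field.SetValue(calc, 3); // bypasses the public API; for test code only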
A: Can your test code include a subclass of the calculations class? If so, you can mark the function protected and only inheritors will be able to use it. I'm pretty sure this also takes it out of intellisense, but I could be wrong about that.
A: What I've done in the past is put XML comments by the method and use them to write, in big bold letters, DON'T USE THIS METHOD or whatever. That way, if someone tried to use it, Intellisense would give them a nice warning.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to achieve a photo "stack" border effect with CSS? I'd like to be able to add a class to images that adds a border that makes them look like a stack of photos. Anyone know how to do this?
Clarifications: Ideally something like the stack shown here but it doesn't need to be interactive and only needs to work for a single photo. I also don't mind using javascript if needed (jQuery would be preferred though).
A: Place your IMG tag inside a nested set of DIV elements (the number of divs will determine the number of photos in the stack). Then use CSS to set the border and padding so that the DIV elements get progressively larger than the photograph. Generally you will add more padding to the bottom and right.
A: The "depth" affect is probably going to be some type of drop shadow. Do you need to rotate the photos as well for the "messy photo pile" effect or are you looking for a "neatly stacked" look?
The "messy photo pile" effect seems to me to break down into three components:
* Put a background behind the image for the "polaroid" look (explained in other comments)
* Put a drop shadow behind the image for the "depth" effect (explained above and in other comments)
* Rotating images. I've never done this myself but it looks like someone has coded the jQuery plugin you are looking for.
A: CSS3 isn't supported by everyone yet, but you might want to look into border-image.
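If partial browser support is acceptable, stacked box-shadows alone can fake the pile; a sketch (colors and offsets are arbitrary):
.photo-stack {
  background: #fff;
  padding: 10px;                          /* white "print" border */
  box-shadow:
    1px 1px 3px rgba(0, 0, 0, 0.4),       /* shadow of the top photo */
    8px 8px 0 -2px #f5f5f5,               /* second photo peeking out */
    8px 8px 3px -1px rgba(0, 0, 0, 0.4);  /* its shadow */
}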
A: Put a div around the image and then have two styles defined.
<div class="img-shadow"><img ...></div>
.img-shadow {
  background-color: #505050;
  float: left;
  margin: 5px 0 0 0;
}
.img-shadow img {
  background-color: #FFFFFF;
  border: 3px solid #000000;
  display: block;
  margin: -8px 8px 8px -8px;
  padding: 10px;
  position: relative;
}
In the .img-shadow class, define a graphic for your background that's large enough for your images and looks like a stack of photos. The above makes it look like the photo is casting a shadow.
A: Below is my recommendation, which uses clear and simple CSS and results in a neat photo stack.
http://dabblet.com/gist/2023431
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the simplest way to allow a user to drag and drop table rows in order to change their order? I have a Ruby on Rails application that I'm writing where a user has the option to edit an invoice. They need to be able to reassign the order of the rows. Right now I have an index column in the db which is used as the default sort mechanism. I just exposed that and allowed the user to edit it.
This is not very elegant. I'd like the user to be able to drag and drop table rows. I've used Scriptaculous and Prototype a bit and am familiar with them. I've done drag and drop lists, but haven't done table rows quite like this. Anyone have any suggestions for not only reordering but capturing the reorder efficiently?
Also, the user can dynamically create a new row in JS right now, so that row has to be reorderable as well.
Bonus points if it can be done with RJS instead of direct JavaScript.
A: I've used the Yahoo User Interface library to do this before:
http://developer.yahoo.com/yui/dragdrop/
A: MooTools sortables are actually better than script.aculo.us because they are dynamic; MooTools allows the addition/removal of items in the list. When a new item is added to a script.aculo.us sortable, you have to destroy/recreate the sortable to make the new item sortable. There'll be a lot of overhead in doing so if the list has many elements. I had to switch from script.aculo.us to the more lightweight MooTools just because of this limitation and ended up being extremely happy with that decision.
The MooTools way of making a newly added item sortable is just:
sortables.addItems(node);
A: Okay, I did some more scouring and figured out something that seems to mostly be working.
edit.html.erb:
...
<table id="invoices">
<thead>
<tr>
<th>Id</th>
<th>Description</th>
</tr>
</thead>
<tbody id="line_items">
<%= render :partial => 'invoice_line_item', :collection => @invoice.invoice_line_items.sort %>
</tbody>
</table>
<%= sortable_element('line_items', {:url => {:action => :update_index}, :tag => :tr, :constraint => :vertical}) -%>
...
app/controllers/invoices.rb
...
def update_index
params["line_items"].each_with_index do |id, index|
InvoiceLineItem.update(id, :index => index)
end
render :nothing => true
end
...
The important part is :tag => :tr in "sortable_element" and params["line_items"] -- this gives the new list of ids and is triggered on the drop.
Detriments: Makes the AJAX call on drop, I think I'd prefer to store the order and update when the user hits "save". Untested on IE.
A: I like jQuery http://docs.jquery.com/UI/Sortables
$("#myList").sortable({});
You will need to write some code to persist it but it isn't that tough.
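Persisting is only a few lines with sortable's update callback; a sketch where the URL is a placeholder and serialize assumes row ids like item_1, item_2, ...:
$("#myList").sortable({
  update: function () {
    // POSTs e.g. item[]=3&item[]=1&item[]=2 to the server
    $.post("/invoices/update_index", $(this).sortable("serialize"));
  }
});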
A: The Yahoo interface was easier than I expected, had something snazzy working in less than four hours.
A: Scriptaculous sortables seems like the way to go since it's built in.
http://github.com/madrobby/scriptaculous/wikis/sortable
A: With Prototype and Scriptaculous :
Sortable.create('yourTable', {
tag: 'tr',
handles: $$('a.move'),
onUpdate: function() {
console.log('onUpdate');
}
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Control multiple PCs with single Mouse and Keyboard As a programmer I find it very hard to use my laptop and workstation with two different sets of input devices. Can anyone suggest a good solution for using a single mouse and keyboard to control my two machines?
I am not looking for a virtual machine or RDP solution to see both machines on a single monitor.
A: What you want is a small gadget called a KVM switch (keyboard, video and mouse switch). Googling for that term will hook you up with plenty of suppliers.
There is also a neat software solution called Synergy that lets you use your cursor and keyboard input over multiple computers connected by a network.
A: Synergy.
Synergy lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desk since each system uses its own monitor(s). Redirecting the mouse and keyboard is as simple as moving the mouse off the edge of your screen. Synergy also merges the clipboards of all the systems into one, allowing cut-and-paste between systems. Furthermore, it synchronizes screen savers so they all start and stop together and, if screen locking is enabled, only one screen requires a password to unlock them all.
P.S. See also how to fix Synergy problems on Vista.
A: Yet another vote for Synergy for a software KVM solution. I'm not sure about the others, but it's unique if your computers are running different operating systems. It worked very well when I had a W2k/Linux setup across 3 computers.
A: Synergy is great, but also give something like VNC a try: it consolidates not only the keyboard and mouse but also the screen. In my case my desktop monitor is much larger than my laptops, and I'm more comfortable facing forward anyway (not looking off to the side where the laptop is.)
There is a lag compared to using a KVM switch, but no loss in video quality.
A: In my experience Synergy is the best way to merge multiple monitors.
Others include:
- x2vnc
- x2x
- win2vnc
- osx2x
- win2x
... pretty much just take what OS/platform you're on, which one you want to connect to, and put a '2' in the middle. Type that into google and you're good2go.
A: For my Linux machine I use QuickSynergy since it provides a GUI for easier configuration. It also has a Mac OS version.
A: The best...
Synergy
A: I'll put in another vote for Synergy, but with a caveat - setup can be a little tricky. The first time I tried it, I could move my cursor over to another PC but I couldn't move it back. Spend some time with the documentation before you proceed.
A: InputDirector is better than Synergy. Here's why...
* It has built-in AES encryption functionality (without requiring you to install OpenSSH) for secure transfer of input between machines.
* It allows cut & paste of text and files between machines (by automatically translating to C$ and D$ shares).
* Based on extensive use with a laptop, it is far more reliable and stable than Synergy when reconnecting after undocking and docking. Synergy would frequently just stop working after docking and undocking, requiring me to kill it, restart it, and reconnect. InputDirector rarely has any issues.
* The configuration UI is easier to use, and has more options, than Synergy's.
* Lots of little things, like matching of cursor location between machines during screen-edge transitions, and overriding the mouse settings of "Slave" machines with those of the "Master" machine.
Beyond that, as far as I can tell, it does everything Synergy does. There's only a Windows version, but apparently it's also Vista compliant as well.
I've used both tools extensively, first Synergy, and then InputDirector. InputDirector is just a more robust application. It has all the features of Synergy and then some, plus the key ones listed above. Its website isn't as attractive, and while it isn't GNU GPL'd like Synergy, it's free nonetheless, and an outstandingly well-functioning tool.
A: I used to use a KVM switch, but lately I've started running all my computers as virtual machines on a single hardware platform. Each "system" is a window on my desktop!
A: I have a triple monitor display, and I just remote desktop into my other machines. I have 2-3 laptops on my desk at any given time, and 3 servers to administer. Over a 1 gbps connection, I have very little latency to worry about, and I can be working on three computers at once without much trouble. This may or may not help you, but I thought I would throw it in there for you.
A: If you mean: two machines on your desktop, a lot of places use KVM-style switches.
They come in legacy PC-style and also USB. The USB version works with Macs and PCs.
My experience is that the small desktop switches are a bargain, and if you learn the keyboard shortcuts, you'll jump back and forth without much problem.
The machine-room, 3-level tree KVMs are also pretty useful. They flake out more often, but when you have 60 machines, you simply can't have 60 pairs of input devices.
A: I'll second Zarkonnen's comment about KVM switches, as I use one for this purpose all the time. However, I might share some rather frustrating experiences with them:
I have found that PS/2 interfaces tend to be somewhat more reliable on KVM switches than USB - I have had very bad experiences with some supposedly upmarket DVI-USB KVM kit from Gefen and Avocent. Due to a quirk of my Viewsonic monitor where it would drop back to analog most of the time these were exacerbated to the point of the system being nearly unusable.
DVI and USB are finicky. DVI monitors will often time out and sleep if they get no signal. The KVM switch will assume that there is no monitor if it is not active, which will then be passed back to the video card. USB interfaces will also get put to sleep randomly.
The net effect of this was that it was very difficult to get two machines to boot up and work on the KVM switch and the switch would lose keyboard or mouse input on one or both machines every few days. This was followed by an hour or more of trying to get all of the hardware to come up and play nicely. I got the same issue with the Avocent and Gefen switches on several different machines.
My older Belkin VGA/PS2 kit worked fine with the Viewsonic monitors on VGA but I spent nearly £1000 on switches and cabling to try and get a working DVI-USB KVM setup.
In the end I got two HP LP2065 screens that didn't have the bug that the Viewsonics exhibited. These have two DVI inputs and I used one of my older Belkin PS/2 switches to switch the keyboard and mouse. The computers are plugged directly into the monitor and the monitor's input selector is used to pick the computer. The keyboard and mouse are switched off the KVM switch. This is the setup that I'm using today.
The monitors and KVM have to be switched individually but it's much more reliable than the DVI-USB KVM switches that really did not work at all. Caveat emptor.
A: You should also check out Multiplicity from Stardock.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: AJAX, postbacks and browser refreshes I have created a user control to handle adding comments to certain business entities, like contacts and customers. Works great ... except for one issue.
I am using a ListView control to edit and delete comments, and a separate area, on the same user control to add a new comment. All of this is wrapped in an UpdatePanel.
Here is my scenario: the user adds a new comment, the page does a postback, the data is successfully saved, and the ListView control is updated to show the new comment. Now, if the user refreshes the browser, it will naturally post back again and add a duplicate record.
Any ideas on how best to prevent this?
A: You could try using the Post/Redirect/Get pattern. Basically instead of letting the postback send the data, redirect to the page. That way, if a user refreshes, s/he is refreshing the GET command rather than the POST.
Sorry, I missed the UpdatePanel piece. Make sure that your submit button is also within that UpdatePanel. A page refresh would not affect your AJAX call, but when the button is outside the panel, it's doing a regular postback, so you would be sending the add request again.
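A minimal Post/Redirect/Get sketch for the WebForms case (the handler and helper names are made up):
protected void btnAddComment_Click(object sender, EventArgs e)
{
    SaveComment();                     // POST: persist the new comment
    Response.Redirect(Request.RawUrl); // Redirect: a refresh now re-issues a GET
}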
A: I haven't used ASP.NET in a few years, but you should wrap your "do this on postback" code in Page.IsPostBack:
if(IsPostBack) {
//do your data-saving code...
}
MSDN link
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Java Development on a Mac - Xcode, Eclipse, or Netbeans I've been using Xcode for the usual C/C++/ObjC development. I'm wondering what are practical considerations, opinions of Xcode, Eclipse or NetBeans usage on a Mac for Java development?
Please don't include my current usage of Xcode in your analysis.
A: Do not use Xcode - Java support in the later versions is very much lacking. Even Apple, who make it, suggest you use a different IDE. As for NetBeans and Eclipse, they both have their strengths and a large number of vocal followers. I suggest you try both and use whichever you find more comfortable.
I for one use TextMate and shell scripts. But I'm strange.
A: Well, I can chime in with NetBeans; it seems to work really well. There are some function key issues that I believe have a solution - I just haven't solved them. I've been quite happy with NetBeans. I like its "all in one out of the box" nature over the pick-and-choose plug-in nature of Eclipse, but that's just a matter of taste.
A: Another vote for IntelliJ. http://www.jetbrains.com/idea/
A: I used both Eclipse and NetBeans, and I like NetBeans more than Eclipse. From a Java editor point of view, both have excellent context-sensitive help and the usual goodies.
Eclipse sucks when it comes to setting up projects that other team members can open and use. We have a big project (around 600K lines of code) organized in many folders. Eclipse won't let you include source code that is outside the project root folder; everything has to be below the project root. Usually you want to have individual projects and be able to establish dependencies among them, and once a project builds, you would check it into your source control. The problem with Eclipse is that a project's dependencies (i.e. the .classpath file) are saved in the user's workspace folder. If you look inside this folder, you will find many files with names like org.eclipse.*. What this means is that you can't put those files into your source control. We have a 20-step instruction sheet for anyone doing a fresh checkout from source control. We ended up not using Eclipse's default project management (i.e. the classpath file etc.); instead we came up with an Ant build file and launch it from inside Eclipse. That is a kludgy way to work - if you have to jump through this many hoops, the IDE has basically failed. I bet Eclipse project management was designed by guys who never used an IDE. Many IDEs let you have different configurations to run your code (Release, Debug, Release with JDK 1.5, etc.) and let you save them as part of your project file, so everyone on the team can use them without a big learning curve. You can create configurations in Eclipse, but you can't save them as part of your project file (i.e. they won't go into your source control). I work on half a dozen fresh checkouts in a span of 6 months, and I get tired of recreating them each time.
On the other hand, Netbeans works as expected. It doesn't have this project management nightmare.
I heard good things about IntelliJ.
If you are starting fresh, go with Netbeans.
My 2 cents.
A: You missed the Rolls Royce of all IDEs: IntelliJ IDEA.
If you can afford to buy a personal license, go for it. Edit: There’s a free Community Edition which is a superb way to get started with Java, Scala or Kotlin.
A: I like NetBeans on OS X for Java.
It seems like I spend more time configuring Eclipse to get a decent Java programming environment. With NetBeans the setup time is less and I can get down to programming quicker...
A: I would advocate Eclipse on the Mac for Java, mostly because I had a very good experience. I'm not going to bang on about its merits as an IDE, but here are some unexpected advantages I found:
*
*When my employer switched IDE's to Eclipse I was way ahead.
*Pretty much any language I fancied trying out had a free IDE somewhere as an Eclipse plug-in, so I have a very consistent multi-language development environment.
*When I eventually went over to the Windows dark side I could use the same development environment, which was a huge relief.
But this is a bit of a religious topic, so expect to get a whole bunch of different opinions
A: It depends what you want to do. My experience with Java on the Mac is about a year old by now, but NetBeans had much better out-of-the-box support for Tomcat deployment (in particular), and generally seemed to be a little more user-friendly. For instance, the NetBeans beta I tried out used forms for web.xml configuration, in comparison to Eclipse's plain ol' XML editor (and in Europa, at least, the XML editor's row redrawing was a little sketchy on the Mac).
That said, for that project I wound up doing a bit of configuration in NetBeans (for I was a n00b), then moved the XML config files over to Eclipse, and developed the rest there. As others have mentioned, the zillions of plugins are great, and in general the experience is just very consistent. Especially if you have to work on another platform.
If Eclipse had better OS X bindings (does it have any? I'm unaware), I would use that for Obj-C development, as well.
A: If you're using Eclipse, be sure to use Ganymede (3.4) or later. They run great. The previous version (Europa) ran poorly on my Macbook Pro.
A: I have tested editors for Java extensively and prefer NetBeans to Eclipse by a significant margin. NetBeans has excellent support for Java, a very beautiful user interface and powerful features. It also has excellent support for C++, and I would choose it for this over, say, Visual Studio. Also consider JCreator Classic Edition: not as powerful as NetBeans, but an excellent place to start and easier to get into at first.
I'd also defend NetBeans' plugins against Eclipse's. Although Eclipse is highly praised for the flexibility afforded by its plugins, I think this is largely down to the fact that the also very powerful plugin features of NetBeans are not shouted about so much, even though it is very strong in this area too. I have seen computational fluid dynamics applications based on the NetBeans platform - very impressive. I just don't think NetBeans developers make such a big deal over it because it's already a complete package from the moment you download it, powerful without any need for configuration with plugins.
A: I happen to use Eclipse on my Mac (actually EasyEclipse which comes preconfigured with the most important plugins) and I must say it runs great. I have a less positive experience on Linux though.
I have also used NetBeans 6 recently and I was very impressed. It seems to have more functionality built in. Most of the functionality is undoubtedly also available as an Eclipse plugin though, if you can find it.
Currently I have the impression that if you start developing Swing, Netbeans is your best option. Otherwise, Netbeans or Eclipse with a handy set of plugins are both excellent options.
If you do check out Eclipse, give a thought to EasyEclipse (free) or perhaps even MyEclipse (not free). They come with the most useful plugins preinstalled.
A: Just to be sure you give them fair consideration, Eclipse and Netbeans have gone back and forth for a while. Eclipse used to be a good deal quicker because they didn't use Swing.
Now Netbeans has caught up (perhaps surpassed) and has a lot of momentum.
You will get more votes for Eclipse. Period. This is because it was better and more people use it--and it's just human nature to feel what you are using is the best and everyone should use it.
Because it was better does not mean it's better now. Netbeans has more languages supported and more all-around support--so it's growing faster.
Currently I use Eclipse--I've used both (and IntelliJ and TextMate and Notepad...) and I can tell you that Eclipse has exactly one feature over netbeans... Mylyn (it's been renamed, it used to be called Mylar). This thing is pretty damn cool, but few people seem to even know it exists.
So, if you don't know a bunch of keystrokes that already tie you to an editor, the up and coming is Netbeans--don't pass it up because of a bunch of Eclipse votes.
Better yet, get good with both--it can't hurt and makes me a lot more comfortable when a company requires one or another. Don't whine when they make you change.
A: I've worked with both Eclipse CDT and NetBeans's C++ support, and I must say that in my experience CDT is far superior in both stability and in features. It's really impressive how well the CDT indexer works; the tooling is almost as good as Java's. I'm also a huge fan of JDT when compared with NetBeans for Java development. The workflow is just so much smoother, if only due to the incremental compiler (compile-on-save).
One thing about NetBeans though, its UI does flow a little better in the "Mac style", which is ironic seeing as SWT was created to provide a more native interface. The next release of Eclipse should be based on Cocoa (rather than Carbon, which is the current), but that won't be until next June.
Final note: the whole "in box" vs "plugins" issue is entirely moot and it has been since Eclipse Calisto (two years ago). Now, with P2 (the new update manager), it's dead easy to get different features in the IDE. I can start with a download and get a fully-functional JDT/CDT/Mylyn environment up and running within five minutes of installation (assuming a reliable internet connection).
A: I'll suggest Eclipse because it has zillions of plugins and is almost a standard for Java development. But I've heard that NetBeans is really nice since their latest release, especially if you want to do desktop applications (Swing).
I can't comment on Xcode since I haven't played with it.
A: I use Eclipse for development, and have had nothing but pain. It has more bugs than a bait shop, and is one of the worst written programs I have ever used. Use Xcode if you want to save time and frustration.
A: Eclipse, because it has better support for C++ on the Mac. I used NetBeans a long time ago and did not like it.
Use a Java-based IDE on the Mac only if you have to (especially when doing Java development). Xcode already supports C/C++ development, so no need to switch.
A: Just from my experience, Eclipse is a very large IDE. It needs more work to become better suited to the Mac environment. NetBeans is the best out-of-box experience: once installed, it is essentially ready to go. After I tried IntelliJ IDEA I forgot every other kind of IDE :P
But in the end no one wins over the others.
IMHO as USUAL !
A: Am I missing the point here, or are developers still considering using a Mac for Java development?
I was a strong and rigid supporter of the Mac as a development environment, but ever since Apple's decision not to port Java to later versions of OS X my confidence has been shaken a little.
And please do not even think about doing any J2EE deployment on a Mac, as it will bring about a tsunami of woes.
So long Java, but I like my MacBook Pro too much.
FYI: I still use a Mac for Java development, but sometimes I wish I were a Python developer :(
A: From my experience, I use both Eclipse and IntelliJ (licensed) for J2EE development.
For overall speed, IntelliJ is faster and crashes less than Eclipse. I used Eclipse first; later on, I got used to IntelliJ and fell in love with it. Incidentally, Google's Android Studio is IntelliJ-based. It's more modernized. Debugging is much easier - you can evaluate a block of code during debug mode to see how it behaves, instead of just inspecting objects. I highly recommend it!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: PHP Script Compression/"Compilation" Tools Are there any more generic tools that can "compile" or basically merge multiple PHP files into a single file based on includes and autoloading classes? I'm thinking of something similar to Doctrine's compiling functionality or the compiling that many of the major JS frameworks do for "production" sites to lighten the file size and improve performance.
Before writing a script to do it myself, I just want to find out if anything worth looking at already exists (Google hasn't been much help so far).
Edit: I've actually written a blog post about the .phar archive format and am very excited about that. I was actually more concerned about performance, but it sounds like merging files would not yield any benefit.
Does anyone have any real data that might suggest the performance gain (or lack thereof) from merging multiple scripts into a single file?
A: I am not a PHP programmer, but I have seen something called a "phar" file. It's like a jar for PHP; maybe you should look into that.
A quick Google search reveals:
http://pear.php.net/pepr/pepr-proposal-show.php?id=88
http://www.pixelated-dreams.com/archives/78-PHAR-PHPs-Answer-to-.jar.html
A: Out of curiosity, why do you want to do this? If it's for performance, don't bother. Just use regular includes instead of auto-loading, and it will have much of the same effect. For performance you're better off looking at one of the run-time caching solutions.
A: I have run across the YUI Compressor for .NET, which is hosted on CodePlex.
It will compress both JavaScript and CSS files in your project.
I haven't tried it yet, but I am very interested in it.
You can easily integrate it into your msbuild script.
For more information you can visit http://developer.yahoo.com/yui/compressor/
A: As someone said, Phar is what you're looking for. But I don't think it will increase performance. And, it will be available in PHP's next version.
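For the record, a minimal sketch of building such an archive with the Phar extension (assuming PHP 5.3+ with phar.readonly=0 in php.ini; the paths are placeholders):
&lt;?php
// Collect every .php file under the project directory into one archive.
$phar = new Phar('app.phar');
$phar->buildFromDirectory('/path/to/project', '/\.php$/');

// Give the archive a default entry point.
$phar->setStub($phar->createDefaultStub('index.php'));

// Consumers can then pull files straight out of the archive:
// require 'phar://app.phar/SomeClass.php';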
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: I work in SCM/build. How do I tell non-programmers what I do? Friends/family/etc ask me what I do and it always causes me pause while I think of how to explain it. They know what a software developer is but how can I explain what SCM is in 10 words?
A: I'd tell people "I work in software development". I wouldn't bother explaining what you do IN software development unless they ask for more details. I find that 90% of people are satisfied with that answer, and give more details to the 10% of people who are interested.
A: Surprising they know what a software developer does!
Anyway, this sounds like a challenge for Haiku enthusiasts:
in 5-7-5 (I'm lazy when doing english haiku and my seasonal reference is flakey - try a 3-5-3 if you like)
from many good parts:
one programme on your PC;
lose track, get winter
(hmm, 13 words)
A: The guy who makes sure that what gets deployed is what's meant to get deployed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Should you worry about fake accounts/logins on a website? I'm specifically thinking about the BugMeNot service, which provides user name and password combos to a good number of sites. Now, I realize that pay-for-content sites might be worried about this (and I would suspect that most watch for shared accounts), but how about other sites? Should administrators be on the lookout for these accounts? Should web developers do anything differently to take them into account (and perhaps prevent their use)?
A: Ask yourself the question "Why do we require users to register to access my site?" Once you have a business reason for this requirement, you can try to work out what the effect is of having some part of it bypassed by suspect account information.
Work on the basis that at least 10 to 15 percent of account information will be rubbish - and if people using the site can't see any benefit to them personally for registering, and if the registration process is even remotely tedious or an imposition, then accept that you will be either driving more potential visitors away, or increasing your "crap to useful information" ratio.
A: How about not making registration mandatory to read something? I.e., ask people to register when you are providing some functionality for them that "saves" settings, data, etc. I would imagine a site like stackoverflow gets fewer fake registrations (reading questions doesn't require an account) than, say, the New York Times, where you need to have an account to read articles.
If that is not within your control, you may consider removing dormant accounts, i.e. removing accounts after a certain amount of inactivity.
A: That entirely depends.
Most sites that find themselves listed in bugmenot.com tend to be the ones that require registration in order to access otherwise-free content.
If registration is required in order to interact with the site (ie, add comments/posts/etc), then chances are most people would rather create their own account than use one that has been made public.
So before considering whether to do things like automatically check bugmenot - think about whether there are problems with your business model.
There are a few situations where pay-to-access content sites (I'm thinking things like, ahem, 'adult' sites) end up with a few user accounts being published publicly (usually because someone has brute-forced some account details), and in that case there may be an argument for putting significant effort into it.
A: I think it depends on the aim of your site. If usage analytics are all-important, then this is something you'd have to watch out for. If advertising is your only revenue stream, then does it really matter which username someone uses?
Probably the best way to discourage use of bugmenot accounts is to make it worthwhile to have an actual account. E.g.: No one would use that here, since we all want rep and a profile, or if you're sending out useful emails, people want to receive them.
A: From an administrator viewpoint absolutely. That registration is required for a reason, even if it's something just as simple as user tracking/profile maintaining. Several thousand people using that login entirely defeats the purpose. IP tracking could help mitigate this problem, but it would definitely be hard to eliminate entirely.
A: No need to worry about BugMeNot: http://www.bugmenot.com/report.php
A: With bugmenot, keep in mind that this service is not actually there to harm sites, but rather to make using them easier. You can request that your site be blocked if it is pay-per-view, community-based (i.e. a forum or wiki) or if the account includes sensitive information (like banking). This means that in virtually all situations where you would think bugmenot is a bad thing, bugmenot does not want to be used. So maybe things are not as bad as you might think.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How can I make a globally accessible datatable in a Winforms application? Disclaimer: I am new to Winforms.
I need to declare a datatable that I can load with data when the main form loads. I then want to be able to reference the datatable from within events like when a button is clicked etc.
Where/how should I declare this?
A: I'd suggest a private member at the top of the form class, meaning it will be accessible throughout the entire form. There's no need for a public property unless you have to access it outside of the form, but it's best to default to private if you are unsure.
A: Update: if it is a simple one-form app, please check the suggestion by Quarrelsome.
Otherwise, just declare it as a public property of your data access class.
A: Public
Class Form3
Private myTable as New DataTable
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
MsgBox(t.Rows.Count)
End Sub
Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
MsgBox(t.Rows.Count)
End Sub
End Class
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: WSDL URL for a WCF Service (basicHttpBinding) hosted inside a Windows Service I am hosting a WCF service in a Windows Service on one of our servers. After making it work with basicHttpBinding and building a test client in .NET (which finally worked), I went along and tried to access it from PHP using the SoapClient class. The final consumer will be a PHP site, so I need to make it consumable in PHP.
I got stumped when I had to enter the WSDL URL in the constructor of the SoapClient class in the PHP code. Where is the WSDL? All I have is:
http://172.27.7.123:8000/WordService and
http://172.27.7.123:8000/WordService/mex
Neither of these exposes the WSDL.
Being a newbie in WCF I might have asked a dumb thing (or I might have a wrong assumption somewhere). Please be gentle :D
And no, http://172.27.7.123:8000/WordService?wsdl does not show anything different than http://172.27.7.123:8000/WordService :(
Am I forced to host it in IIS? Am I forced to use a regular WebService?
A: This might help:
http://msdn.microsoft.com/en-us/library/ms734765.aspx
In a nutshell you need to configure your service endpoints and behaviour. Here is a minimal example:
<system.serviceModel>
  <services>
    <!-- name is the Namespace.ServiceClass implementation;
         behaviorConfiguration refers to the behaviour defined below -->
    <service name="WcfService1.Service1"
             behaviorConfiguration="SimpleServiceBehaviour">
      <!-- contract is the Namespace.Interface that defines our service contract -->
      <endpoint address=""
                binding="basicHttpBinding"
                contract="WcfService1.IService1"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="SimpleServiceBehaviour">
        <!-- allow HTTP GET for the WSDL, and conform to WS-Policy 1.5
             when generating metadata -->
        <serviceMetadata httpGetEnabled="true"
                         policyVersion="Policy15"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
The comments here sit between elements rather than inside the tags, so the configuration is valid as-is.
A: Please see this link:
Exposing a WCF Service With Multiple Bindings and Endpoints
Unlike previous ASMX services, the WSDL (Web Service Definition Language) for WCF services is not automatically generated. The previous image even tells us that "Metadata publishing for this service is currently disabled.". This is because we haven't configured our service to expose any metadata about it. To expose a WSDL for a service we need to configure our service to provide meta information.
Note: the mexHttpBinding is also used to share meta information about a service. While the name isn't very "gump", it stands for Metadata Exchange.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Selenium Drag&Drop in testing javascript I need your help with a specific situation.
I use the Selenium framework for testing an application which is based on the "ext js" library.
There are 2 trees of elements. I need to move an element from one tree to another element in the second tree.
I use dragAndDropToObject(xpath1,xpath2);
I can see that the method takes the 'xpath1' element and tries to bring it to the 'xpath2' element, but with no result - the 'xpath1' element comes back to its previous place. It seems like the method doesn't see the target object and doesn't release the dragged element onto it.
If I use another method of Selenium - e.g. click(xpath2); - it clicks on the target object, so the problem is specific to dragAndDropToObject.
A: I think you'll have to extend Selenium via the user-extensions.js file.
Drag & drop Selenium tests have been made on the SweetDEV RIA open source tag library.
You may find a very interesting method (Selenium.prototype.doDragTo) on the SweetDEV RIA SVN repository.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Need assistance with serial port communications in Ruby I need to do some communications over a serial port in Ruby. From my research, it appears that there aren't many modern libraries for serial communications and the newest material I can find is from 2006. Are there any gems that I'm not aware of?
I ultimately need to maintain communications with a serial device attached to USB (I can figure out the port no problem) for back and forth communications like so (somewhat Ruby-esque pseudo code).
def serial_write_read
  if serial.read == "READY"
    serial.write "1"
    until serial.read == "OK"
      serial.write "5"
    end
    return when serial.read == "DONE"
  end
end
A: Just because searching for ruby-serialport will lead you sometimes here:
toholio's github repo no longer seems to be active (as of 09/2010).
The published gem comes from
http://github.com/hparra/ruby-serialport
A: The serial port specification has not changed in forever; I wouldn't worry about how old the libraries are.
I'm assuming you saw this article from 2006 about ruby and serial ports
Here's someone who got the Ruby-SerialPort library mentioned there to work on macs this year.
There's also this old post from ruby talk, about interfacing to the Win32 Serial API.
A: While the serial standard has not changed, the way Ruby Gems interact with Ruby C extensions changed enough over the years so that the RubyForge serial port extension would not play well. There have been some patches over the years on RubyForge to fix that, but it hasn't been pretty. The great news is that Github has allowed an incredible acceleration in the activity to clean up the Ruby serial port extension. At least three different people are cross-branching their serial port code on Github. You can search on Github, but I believe that Toholio has the latest code, which recodes and repackages the Ruby serial port as a Ruby Gem. (Yea!)
http://github.com/toholio/ruby-serialport/tree/master
It works great for me on Linux, solving the earlier conflict with the latest Ruby Gems release. On Windows, I'm still having a problem getting it working. Compiling Ruby extensions on Windows is never very easy, but that is a whole 'nuther can of worms. I'm just happy that people are working on the Ruby serial port support again. I've asked Toholio to generate a Windows binary gem, which would solve everyone's problems, and he says it's on his list to do.
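For what it's worth, the back-and-forth loop from the question might look something like this with that gem (the device path, baud rate and newline-terminated protocol are assumptions for illustration):
require 'serialport'

# 9600 baud, 8 data bits, 1 stop bit, no parity - adjust for your device.
port = SerialPort.new('/dev/ttyUSB0', 9600, 8, 1, SerialPort::NONE)
port.read_timeout = 1000 # milliseconds

if port.readline.strip == 'READY'
  port.write '1'
  port.write '5' until port.readline.strip == 'OK'
end
port.close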
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How can I "unuse" a namespace? One of the vagaries of my development system (Codegear C++Builder) is that some of the auto-generated headers insist on having...
using namespace xyzzy
...statements in them, which impact on my code when I least want or expect it.
Is there a way I can somehow cancel/override a previous "using" statement to avoid this?
Maybe...
unusing namespace xyzzy;
A: Nope. But there's a potential solution: if you enclose your include directive in a namespace of its own, like this...
namespace codegear {
#include "codegear_header.h"
} // namespace codegear
...then the effects of any using directives within that header are neutralized.
That might be problematic in some cases. That's why every C++ style guide strongly recommends not putting a "using namespace" directive in a header file.
A: No, you can't unuse a namespace. The only thing you can do is put the using namespace statement inside a block to limit its scope.
Example:
{
using namespace xyzzy;
} // stop using namespace xyzzy here
Maybe you can change the template which is used for your auto-generated headers.
A: How about using sed, perl or some other command-line tool as part of your build process to modify the generated headers after they are generated but before they are used?
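For instance, something along these lines as a pre-build step (GNU sed here; the header name is a placeholder):
# Strip top-level using directives out of the generated header, in place.
sed -i 's/^using namespace .*;$//' codegear_header.h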
A: You may be stuck using explicit namespaces on conflicts:
string x; // Doesn't work due to conflicting declarations
::string y; // use the class from the global namespace
std::string z; // use the string class from the std namespace
A: For future reference: since the XE version there is a new value that you can #define to avoid the dreaded using namespace System; in the include: DELPHIHEADER_NO_IMPLICIT_NAMESPACE_USE
A: A quick experiment with Visual Studio 2005 shows that you can enclose those headers in your own named namespace and then use what you need from this namespace (but don't use the whole namespace, as that will introduce the namespace you want to hide).
A: #include<iostream>
#include<stdio.h>
namespace namespace1 {
int t = 10;
}
namespace namespace2 {
int t = 20;
}
int main() {
using namespace namespace1;
printf("%d" , t);
printf("%d" , namespace2::t);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "94"
} |
Q: Repeated cookie query or Storing in viewstate? Which is the better practice? I have an internal website that users log into. This data is saved as a cookie. From there the users go on their merry way. Every so often the application(s) will query the authentication record to determine what permissions the user has.
My question is this: is it more efficient to just query the cookie for the user data when it is needed, or to save the user information in viewstate?
[Edit] As mentioned below, Session is also an option.
A: Personally, I prefer using a session to store things, although the other developers here seem to think that's a no-no.
There is one caveat: You may want to store the user's IP in the session and compare it to the user's current IP to help avoid session hijacking. Possibly someone else here has a better idea on how to prevent session hijacking.
A: Viewstate is specific to the page they are viewing, so it's gone once they go along their merry way. Not a good way to persist data.
Your best bet is to use Forms Authentication; it's built into ASP.NET, and you can also shove any user-specific information into the Forms Authentication ticket's UserData value. You can get about 4000 bytes in there (after encrypting), which should hold whatever you need. It will also take care of allowing and denying users access to pages on the site, and you can set it to expire whenever you need.
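A rough sketch of stuffing data into the ticket (this uses System.Web.Security; the user name and payload are made-up placeholders):
// Build a ticket whose UserData field carries our own payload.
FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
    1,                           // version
    "jsmith",                    // authenticated user name
    DateTime.Now,                // issue time
    DateTime.Now.AddMinutes(30), // expiration
    false,                       // not persistent
    "roles=admin;theme=dark");   // custom user data, up to ~4000 bytes

// Encrypt it and send it back as the forms-auth cookie.
string encrypted = FormsAuthentication.Encrypt(ticket);
Response.Cookies.Add(new HttpCookie(FormsAuthentication.FormsCookieName, encrypted));

// Later requests can read it back:
// string data = ((FormsIdentity)User.Identity).Ticket.UserData;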
Storing in the session is a no-no because it scales VERY poorly (eats up resources on the server), and it can be annoying to users with multiple browser connections to the same server. It is sometimes unavoidable, but you should take great pains to avoid it if you can.
A: You can use session data - that way you know that once you have stored it there, users can't fool around with it by changing the query string.
A: I would use the cookie method. Session is okay, but it gets disposed by ASP.NET on recompile, and you have to use a non-session cookie if you want to persist it after the session anyway. Also, if you ever use a state server it's essentially doing the same thing (stores session in the db). Session is like a quick and dirty fix; real men use cookies.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I GZip compress a file from Excel VBA using code in an .xla file only? I need to be able to GZip compress a file in an Excel VBA function. Specifically I need to be able to use the 'deflate' algorithm.
Is there a way to do this without having to exec a command line application? With no dependency on external tools the code will be more robust.
Ideally the code would make use of pre-installed VBA or COM library functions - I don't want to have to implement this logic myself or install DLLs etc.
If possible, I want installation of the function to be as simple as adding a .xla to the available Excel Add-Ins. No DLLs, EXEs, registry entries etc. required.
Edit Can I make use of the .NET GZipStream to do this?
A: VBA (which is really a dialect of VB6) is slow for these kinds of applications. I remember I once implemented the Shannon-Fano algorithm in VB6 and in C; the C version was about 10 times faster, even after being turned into a DLL and called from there, rather than run as a command-line executable.
There are lots of COM DLLs that provide compression services, both open source and shareware, and some of them implement GZIP's deflate algorithm. It'd be really simple to just call one function from such a DLL from your VBA code to do the compression on your behalf.
I understand your reluctance to use something external to your application, though in this case you might have to make an exception for performance's sake.
In an effort to completely spoil your fun, examine the file ZIPFLDR.DLL in windows\system32. You may also like to take a look at these links:
*
*This has an example of how to do what you want (zipping using windows built-in ZIP capabilities) from VB.NET, it shouldn't be much different from VBA or VB6:
Transparent ZIP with DLL call
*This one has a sample application on VB6 using windows built-in capabilities to zip (in ZIP rather than GZIP format, of course): Using Windows XP "Compressed Folder" shell extension to work with .zip files
Found both thru googling, you should be able to find more/better examples.
A: OK, I think I have an answer for you.
zlib is a library written by the guy that wrote the deflate algorithm you don't want to implement. There is a win32 DLL available. Here's the FAQ regarding using it from Windows:
http://www.zlib.net/DLL_FAQ.txt
Check out question 7. The authors don't seem too keen on Windows users, and don't seem at all keen on VB users, but as long as they're kind enough to provide the library we can do the rest.
If this is enough to help you, then great. If you want help with calling the C library from VBA add a comment and we'll figure it out. I haven't done any VB-to-C calls in years--it sounds like fun.
A: If you want to implement the algorithm in VBA, you would need to (in VBA) save the spreadsheet and then use VB's I/O functions to open the file, deflate it, and save it again. For all intents and purposes it's identical to writing an ordinary VB application that works on a file. You might need to put the VBA macro in a separate workbook to avoid "file in use" types of errors, but if you reopen the file as read-only and save it with a different filename you should be OK keeping everything in one workbook.
But I'm almost certain that shelling out to gzip from within the VBA would be functionally identical and infinitely easier.
EDIT: Some code. It didn't fail when I ran it, so it's OK to keep everything in the same workbook.
Sub main()
    ActiveWorkbook.Save
    Open "macrotest.xls" For Binary Access Read As #1
    Open "newfile.zip" For Binary Access Write As #2
    ' do your stuff here
    Close #2
    Close #1
End Sub
A: It seems that you want to open a bottle of wine but you definitely refuse to use a bottle-opener. As long as there is no VBA function allowing the GZipping of a file, you will not be able to do the job without some external resource such as a DLL or EXE file.
A: If somebody wanted to compress files without relying on 3rd-party software they would generally implement it as a COM object/DLL so it would be available to more than just Excel. If somebody wanted to incorporate zip functionality into Excel they would use 3rd-party tools so they wouldn't have to re-implement the algorithm. So you're swimming against the tide. However...
http://www.cpearson.com/excel/SaveCopyAndZip.htm
There are two versions. The COM Add-in version "...allows you to zip any workbook that has been saved to disk (but it may be in an unsaved state)." It relies on a Moonlight Software component but all the components and set-up are contained in the installer. It's not quite public domain but the license is less restrictive than the GPL. The end result is an Excel add-in (that uses a 3rd-party component).
But if you really, truly don't want any dependencies on external tools you're either going to have to implement the compression algorithm yourself or wait until Microsoft builds that functionality into Windows and exposes it through Excel.
I hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Very slow response for Visual Studio 2005 Web Site Administration Tool I am working on an ASP.NET application and am trying to add user authentication. As a first step, I am using the Web Site Administration tool (Website | ASP.NET Configuration) to manage users and permissions.
Accessing this website is incredibly slow. To load the main page takes 30 seconds. When navigating to the Security page (also 30 seconds), I am presented with this error:
There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store.
The following message may help in diagnosing the problem: Unable to connect to SQL Server database.
I have authentication mode set to "Forms" in the web.config file.
It asks me to run "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regsql.exe", which also gives me an error message.
How do I fix these problems with speed and enabling security/users?
A: The issue you are getting is due to an invalid SQL Server connection. You can set up the SQL Server connection in the web.config; once the tool is able to actually connect to the database, it will perform in the proper manner.
EDIT: below is the code needed in web.config to set up the new connection:
<connectionStrings>
<clear />
<add name="LocalSqlServer" connectionString="server=yourservername;database=somedatabasename;etc..." />
</connectionStrings>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you stop interim solutions from lasting forever? Say there are two possible solutions to a problem: the first is quick but hacky; the second is preferable but would take longer to implement. You need to solve the problem fast, so you decide to get the hack in place as quickly as you can, planning to start work on the better solution afterwards. The trouble is, as soon as the problem is alleviated, it plummets down the to-do list. You're still planning to put in the better solution at some point, but it's hard to justify implementing it right now. Suddenly you find you've spent five years using the less-than-perfect solution, cursing it the while.
Does this sound familiar? I know it's happened more than once where I work. One colleague describes deliberately making a bad GUI so that it wouldn't be accidentally adopted long-term. Do you have a better strategy?
A: YOU DON'T DO INTERIM SOLUTIONS.
Sometimes I think programmers just need to be told this.
Sorry about that, but seriously--a hacky solution is worthless and even on the first iteration can take longer than doing a portion of the solution correctly.
Please stop leaving me your crap code to maintain. Just ALWAYS CODE IT RIGHT. No matter how long it takes and who yells at you.
When you are sitting there twiddling your thumbs after delivering early while everyone else is debugging their stupid hacks, you'll thank me.
Even if you don't think you are a great programmer, always strive to do the best you can, never take shortcuts--it doesn't cost you ANY time to do it right. I can justify this statement if you don't believe me.
A:
Suddenly you find you've spent five years using the less-than-perfect solution, cursing it the while.
If you're cursing it, why is it at the bottom of the TODO list?
*
*If it's not affecting you, why are you cursing it?
*If it is affecting you, then it's a problem that needs to be fixed NOW.
A: *
*I make sure that I'm vocal about the priority of the long term fix ESPECIALLY after the short term fix has gone in.
*I detail the reasons why it's a hack and not a good long term solution and use those to get the stakeholders (managers, clients, etc) to understand why it needs to be fixed
* Depending on the case, I may even inject a bit of worst-case-scenario fear in there. "If this safety line snaps, the whole bridge could collapse!"
*I take responsibility for coming up with a long term solution and make sure that it gets deployed
A: It is a hard call. I have done hacks personally, because sometimes you HAVE to get that product out the door and into the customers' hands. However, the way that I take care of it is to just do it.
Tell the project lead or your boss, or the customer: there are some spots that need to be cleaned up and coded better. I need a week to do it, and it is going to cost less to do it now than it will 6 months from now, when we need to implement an extension onto the subsystem.
A: Usually problems like this arise from bad communication with management or the customer. If the solution works for the customer then they see no reason to ask for it to be changed. So they need to be told about the tradeoffs you made beforehand so they can plan extra time to fix the problems after you've implemented the quick solution.
How to solve it depends a bit on why it's a bad solution. If your solution is bad because it's hard to change or maintain, then the first time you have to do maintenance and have a bit more time, that is the right time to upgrade to a better solution. In this case it helps if you tell the customer or your boss that you took a shortcut in the first place; that way they know they can't expect a fast solution next time around. Crippling the UI can be a good way to make sure the customer comes back to get stuff fixed.
If the solution is bad because it's risky or unstable then you really need to talk to the person doing the planning and have some time planned in to fix the problem asap.
A: Good luck. In my experience this is almost impossible to achieve.
Once you go down the slippery slope of implementing a hack because you are under pressure then you might as well get used to living with it for all time. There is almost NEVER enough time to re-work something that already works, no matter how badly it is implemented internally. What makes you think you will magically have more time "at some later date" to fix the hack?
The only exception I can think of to this rule is if the hack completely prevents you from implementing another piece of functionality that is needed by a customer. Then you have no choice but to do the re-work.
A: Write a test case which the hack fails.
If you can't write a test which the hack fails, then either there's nothing wrong with the hack after all, or else your test framework is inadequate. If the former, run away quick before you waste your life on needless optimisation. If the latter, seek another approach (either to flagging hacks, or to testing...)
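If your framework supports it, you can even check the failing test in as an expected failure, so it nags without breaking the build - a hypothetical pytest sketch, with made-up names:
import pytest
from pricing import lookup_price  # hypothetical module containing the hack

@pytest.mark.xfail(reason="known kludge: ignores currency, see defect #555", strict=True)
def test_lookup_price_respects_currency():
    # This is what the code *should* do once the hack is replaced; with
    # strict=True the suite flags the day it unexpectedly starts passing.
    assert lookup_price("widget", currency="EUR") == 9.05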
A: I try to build the hacky solution so that it can be migrated to the long-term way as painlessly as possible. Say you've got a guy building a database in SQL Server because that's his strongest DB, but your corporate standard is Oracle. Build the db with as few non-transferable features (like Bit datatypes) as possible. In this example it's not hard to avoid bit types, but it makes transitioning later an easier process.
A: Educate whoever is in charge of making the final decision about why the hacky way of doing things is bad in the long run.
*
*Describe the problem in terms they can relate to.
*Include a graph of cost, productivity, and revenue curves.
*Teach them about technical debt.
*Regularly refactor if you're pushed forward.
*Never call it "refactoring" or "going back and cleaning up" in front of non-technical people. Instead, call it "adapting" the system to handle "new features".
Basically, people who don't understand software don't get the concept of revisiting things that already work. The way they look at it, developers are like mechanics who want to keep taking apart and reassembling the entire car every time someone wants to add a feature, which sounds insane to them.
It helps to make analogies to everyday things. Explain to them how when you made the interim solution, you made choices that suited building it quickly, as opposed to being stable, maintainable, etc. It's like choosing to build with wood instead of steel because wood is easier to cut, and thus, you could build the interim solution quicker. The wood, however, simply can not support the foundation of a 20-story building.
A: We use Java and Hudson for continuous integration. 'Interim solutions' must be commented with:
// TODO: Better solution required.
Every time Hudson runs a build it provides a report of each TODO item so that we have an up to date, highly visible record of any outstanding items that need improved.
A: Strategy 1 (almost never selected): Don't implement the kluge. Don't even let people know it's a possibility. Just do it the right way the first time. Like I said, this one is almost never selected, due to time constraints.
Strategy 2 (dishonest): Lie and Cheat. Tell management that there are bugs in the hack, and they could cause major problems later on. Unfortunately, most of the time, the managers just say to wait until the bugs become a problem, then fix the bugs.
Strategy 2a: Same as strategy 2, except there really are bugs. Same problem, though.
Strategy 3 (and my personal favorite): Design the solution whenever you can, and do it well enough that an intern or code-monkey could do it. It's easier to justify spending the small amount of code-monkey money than to justify your own salary, so it might just get done.
Strategy 4: Wait for a rewrite. Keep waiting. Sooner or later (probably later), someone is going to have to rewrite the thing. Might as well do it right then.
A: Here is a great related article on technical debt.
Basically, it is an analogy of debt with all the technical decisions you make. There is good debt and bad debt... and you have to pick the debt that is going to achieve the goals you want with the least long term cost.
The worst kind of debt is small little accumulating shortcuts that are analogous to credit card debt... each one doesn't hurt, but pretty soon you are in the poor house.
A: This is a major issue when doing deadline-driven work. I find that adding very detailed comments about why this approach was chosen, and some hints at how it should be recoded, helps. This way people looking at the code see it and keep it fresh.
Another option that works is to add a bug/feature in your tracking framework (you do have one, right?) detailing the rework. That way it is visible and may force the issue at some point.
A: The only time you can ever justify fixing these things (because they're not really broken, just ugly) is when you have another feature or bug fix that touches the same section of code, and you might as well re-write it.
You have to do the math on what a developer's time costs. If software requirements are being met, and the only thing wrong is that the code is embarrasing under the hood, it's not really worth fixing.
Whole companies can go out of business because over-zealous engineers insist on a re-architecture every year or so when they get antsy.
If it's bug-free and meets requirements, it's done. Ship it. Move on.
[Edit]
Of course I'm not advocating that everything be hacked in all the time. You have to design and write code carefully in the normal course of the development process. But when you do end up with hacks that just had to be done quickly, you have to do a cost-benefit analysis on whether or not it's worth it to clean up the code. If over the lifetime of the application you will spend more time coding around a messy hack than you would have fixing it, then of course fix it. But if not, it's way too expensive and risky to re-code a working, bug-free application just because looking at the source makes you ill.
A: Great question. This bothers me a lot, too - and most of the time I'm the sole person responsible for prioritizing issues in my own projects (yep, small business).
I found out that the problem that needs to be fixed is usually just a subset of the problem. IOW, the customer that needs an urgent fix does not need the whole problem to be solved, just a part of it - smaller or larger. That sometimes enables me to create a workaround that is not solution to the complete problem but just to the customer's subset and that allows me to leave the bigger issue open in the issue tracker.
That may of course not apply at all to your work environment :(
A: This reminds me of the story of "CTool". In the beginning CTool was put forward by one of our devs, I'll call him Don, as one possible way to solve the problem we were having. Being an earnest hard-working type, Don plugged away and delivered a working prototype. You know where I am going with this. Overnight, CTool became a part of the company work flow with an entire department depending on it. By the second or third day, bitter complaints started streaming in about CTool's shortcomings. Users questioned Don's competence, commitment and IQ. Don's protests that this was never supposed to be a production app fell on deaf ears. This went on for years. Finally, someone got around to re-writing the app, well after Don had departed. By this time, so much loathing had become attached to the name CTool that naming it CTool version 2 was out of the question. There was even a formal funeral for CTool, somewhat reminiscent of the copier (or was it a printer?) execution scene in Office Space.
Some might say Don deserved the slings and arrows for not making it go right to fix CTool. My only point is that saying you should never hack out a solution is probably unjustifiable in the Real World. But if you are the one to do it, tread cautiously.
A: *
*Get it in writing (an email). So when it becomes a problem later management doesn't "forget" that it was supposed to be temporary.
*Make it visible to the users. The more visible it is the less likely people are going to forget to go back and do it the right way when the crisis is over.
*Negotiate before the temp solution is in place for a project, resources, and time lines to get the real fix in. Work for the real solution should probably begin as soon as the temp solution is finished.
A: You file a second very descriptive bug against your own "fix" and put a to-do comment right in the affected areas that says, "This area needs a lot of work. See defect #555" (use the right number of course). People who say "don't put in a hack" don't seem to understand the question. Assume you have a system that needs to be up and running now, your non-hack solution is 8 days of work, your hack is 38 minutes of work, the hack is there to buy you time to do the work and not lose money while you're doing it.
Now you still have to get your customer or management to agree to schedule the N*100 minutes of time required to do the real fix in addition to the N minutes needed now to fix it. If you must refuse to implement the hack until you get such agreement, then maybe that's what you have to do, but I've worked with some understanding people in that regard.
A: The real price of introducing a quick-fix is that when someone else needs to introduce a 2nd quick fix, they will introduce it based on your own quick-fix. So, the longer a quick-fix is in place, the more entrenched it will become. Quite often, doing things right takes only a little bit longer than the hack - until you encounter a 2nd hack which builds on the first.
So, obviously it is (or seems to be) sometimes necessary to introduce a quick fix.
One possible solution, assuming your version control supports it, is to introduce a fork from the source whenever you make such a hack. If people are encouraged to avoid coding new features within these special "get it done" forks, then it will eventually be more work to integrate the new features with the fork than it will be to get rid of the hack. More likely, though, the "good" fork will get discarded. And if you are far enough away from release that making such a fork will not be practical (because it is not worth doing the dual integration mentioned above), then you probably shouldn't even be using a hack anyways.
A very idealistic approach.
A more realistic solution is to keep your program segmented into as many orthogonal components as possible and to occasionally do a complete rewrite of some of the components.
A better question is why the hacky solution is bad. If it is bad because it reduces flexibility, ignore it until you need flexibility. If it is bad because it impacts the programs behavior, ignore it and eventually it will become a bug fix and WILL be addressed. If it is bad because it looks ugly, ignore it, as long as the hack is localized.
A: Some solutions I've seen in the past:
*
*Mark it with a comment HACK in the code (or similar scheme such as XXX)
*Have an automatic report run and emailed weekly to those that care which counts how many times the HACK comments appear
*Add a new entry in your bug tracking system with the line number and description of the right solution (so the knowledge gained from the research before writing the hack isn't lost)
*write a test case that demonstrates how the hack fails (if possible) and check it into the appropriate test suite (i.e. so that it throws errors that someone will eventually want to cleanup)
*once the hack is installed and the pressure is off, immediately start on the right solution
This is an excellent question. One thing I've noticed as I get more experience: hacks buy you a very short amount of time, and often cost you a huge amount more. Closely related is the 'quick fix' that solves what you think is the problem - only to find, when it blows up, that it wasn't the problem at all.
A: Setting aside the debate about whether you should do it, let's assume that you have to do it. The trick now is to do it in a way that minimizes long-range effects, is easily ripped out later, and makes itself a nuisance so you remember to fix it.
The nuisance part is easy: make it issue a warning every time you execute the kludge.
The ripped-out part can be easy: I like to do this by putting the kludge behind a subroutine name. That makes it easier to update, since you compartmentalize the code. When you get your permanent solution, your subroutine can either implement it or be a no-op. Sometimes a subclass can work nicely for this too. Don't let other people depend on whatever your quick fix is, though. It's difficult to recommend any particular technique without seeing the situation.
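A sketch of both ideas together (Python purely for illustration; the names and the hard-coded value are made up):
import warnings

def _lookup_price_kludge(item_id):
    # Temporary workaround for the broken pricing service. See defect #555.
    warnings.warn("price lookup is a temporary hack; see defect #555",
                  stacklevel=3)  # point the warning at the external caller
    return 9.99  # hard-coded placeholder price

def lookup_price(item_id):
    # Single published entry point: replacing the kludge later means
    # touching exactly one function, or turning it into a no-op.
    return _lookup_price_kludge(item_id)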
Minimizing long range effects should be easy if the rest of the code is nice. Always go through the published interface, and so on.
A: Try to make the cost of the hack clear to the business folks. Then they can make an informed decision either way.
A: You could intentionally write it in a way that is overly restrictive and single-purposed, so that it would require a rewrite to be modified.
A: We had to do this once - make a short term demo version that we knew we did not want to keep. The customer wanted it on a winTel box, so we developed the prototype in SGI/XWindows. (We were fluent in both, so it wasn't a problem).
A: Confession:
I have used '#define private public' in C++ in order to read data from some other code layer. It went in as a hack but works well and fixing it has never become a priority. It is now 3 years later...
One of the main reasons hacks do not get removed is the risk that one introduces new bugs while fixing the hack. (Especially when dealing with pre-TDD code bases.)
A: My answer is a bit different from the others. My experience is that the following practices help you stay agile and move from hackey first iteration/alpha solutions to beta/production ready:
*
*Test Driven Development
*Small units of refactoring
*Continuous Integration
*Good Configuration management
*Agile database techniques/database refactoring
And it should go without saying that you have to have stakeholder support to do any of these correctly. But with these practices in place you have the right tools and processes to quickly change a product in major ways with confidence. Sometimes your ability to change is your ability to manage the risk of the changes, and from the development perspective these tools/techniques give you surer footing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "79"
} |
Q: Is F# suitable for Physics applications? I hate Physics, but I love software development. When I go back to school after Thanksgiving, I'll be taking two more quarters of Physics before I'm done with the horrid thing. I am currently reading postings on the F# units of measurement feature, but I've never used a language like F#. Would it be suitable to write applications so I can perhaps learn something about Physics while doing something I like?
I'm interested in command-line applications (even those that I can just execute and have spit out an answer without needing inputs) for things like kinematics, planar motion, Newton's Laws, gravitation, work, energy, momentum and impulse, systems of particles, rotational kinematics and dynamics, angular momentum, static equilibrium, oscillatory motion, wave motion, sound, physical optics, electrostatics, Gauss' law, electric field and potential, capacitance, resistance, DC circuits, magnetic field, Ampere's law, and inductance.
The reason I'm interested in F# is because of the units of measure functionality that the language provides.
A: I read the introduction of a book called "F# for Scientists" (the intro is available for free), and it seems to be a good introduction to the field, since F# seems to be very well suited to this kind of work.
You might want to have a look at the introduction.
http://www.ffconsultancy.com/products/fsharp_for_scientists/
(And no, I have no relationship with the author ;-)
A: Yes (any language is) and no (learn what your future colleagues will use; maybe they use Python). An interesting aside is Fortress.
A: About dimensional analysis: a fun calculation trick once given by one of my physics professors: given that it takes one hour to perfectly cook a one-pound turkey in a given oven, how long would it take to cook a two-pound turkey in the same oven?
Well, dimensional analysis shows
(1) that the total amount of heat energy needed in order to cook the turkey is proportional to the mass of the turkey, which itself is proportional to its volume, which itself is proportional to the cube of its average "radius",
i.e.
Cooking heat energy needed = k1 * (turkeyRadius ^ 3) ==> unit: J (where the unit of k1 is J / m^3)
(2) That the total amount of heat energy provided by the oven is proportional to the surface of the turkey multiplied by the amount of time you cook it,
i.e.
Heat provided by the oven = k2 * time * (turkeyRadius ^ 2) ==> unit: J (where the unit of k2 is J / s / m^2)
Then by using (1) = (2), you obtain
time = (k1 / k2) * turkeyRadius
i.e.
- the cooking time is proportional to the radius
- given that turkeyRadius is proportional to the cube root of the mass, we obtain
cooking time = k3 * mass^(1/3)
So it will take 2^(1/3), roughly 1.26, times longer to cook our two-pound turkey, and the result is obtained with no real calculation at all - only dimensional analysis.
A: Yes, F# is a great way to build on functional programming, just as Chris Smith said in his response. I am working on building an extensive discussion about physics, engineering and biology using F#. I could certainly use input from a student like yourself. Programming without a real life problem in mind is one way of programming. The other way that is successful is to provide solutions that are only used by people using computers, certainly another way to go and one that builds wealth.
F# is made for knowledge domains like Physics.
A: In my biased opinion, F# is ideal for physics. It has a feature called Units of Measure which does dimensional analysis for you, providing errors if you get it wrong. For example if you write:
let distance : float<meters> = gravity * 3.0<seconds>
That would yield a compile-error, since gravity is < meters/seconds^2 > and not < meters >. This prevents a great deal of physics-related programming errors.
For more information check out Andrew Kennedy's blog.
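To give a flavour of the syntax, here is a minimal sketch (the measure names and values are made up for illustration):

[<Measure>] type m   // meters
[<Measure>] type s   // seconds

let gravity = 9.81<m/s^2>
let fallTime = 3.0<s>

// Dimensions work out: (m/s^2) * s * s = m, so this type-checks:
let distance : float<m> = 0.5 * gravity * fallTime * fallTime

// let wrong : float<m> = gravity * fallTime   // compile error: this is float<m/s>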
A: F# is one choice. If you want to learn a skill which may also be of more long-term benefit, why not learn Python? You'll also have NumPy and SciPy at your fingertips then too.
A: Learning any computer language won't teach you physics, and you can learn physics by writing programs in any language.
Dimensional analysis is a rather handy tool for physics problems, it can steer you away from being "not even wrong".
I've always gained a certain perverse pleasure in getting an answer wrong by factors of 10^34 because I'd got my units wrong somewhere ;-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Table load via Partition Exchange (Oracle 10g) I have a few questions about optimizing this type of load.
One builds a new table of data to be loaded into a partitioned table and then builds the indexes on this new table.
*
*Should you build the index with the COMPUTE STATISTICS option or use the Cascade option of the DBMS_Stats?
*Should you gather stats on the table before the swap or on the partition after the swap?
*If you do it after the swap and you specify the partition name in the parameter list, what interplay does the granularity parameter have? For instance, if I specify a partition name and then set granularity to 'GLOBAL AND PARTITION' does that do Global at all? Does it do just that one partition?
A:
Should you build the index with the COMPUTE STATISTICS option or use the Cascade option of the DBMS_Stats?
If this is a data warehouse then first consider not gathering statistics at all, and using dynamic sampling. Second, if you do gather statistics then by all means use compute statistics on the index.
Should you gather stats on the table before the swap or on the partition after the swap?
Gather statistics on the new-data table before the swap to get partition statistics on the new data; gather statistics on the partitioned table afterwards to get table-level statistics.
If you do it after the swap and you specify the partition name in the parameter list, what interplay does the granularity parameter have? For instance, if I specify a partition name and then set granularity to 'GLOBAL AND PARTITION' does that do Global at all? Does it do just that one partition?
See above.
Seriously, give no statistics and dynamic sampling a chance.
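For illustration, the gather-swap-gather sequence might look something like this (the schema, table and partition names are made up):

-- Gather stats on the staging table before the exchange
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'SALES_STAGE', cascade => TRUE);
END;
/

-- Swap the staging table in as a partition
ALTER TABLE sales EXCHANGE PARTITION p_2008_q3
  WITH TABLE sales_stage INCLUDING INDEXES WITHOUT VALIDATION;

-- Refresh global stats on the partitioned table afterwards
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'SALES', granularity => 'GLOBAL');
END;
/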
A: *
*DBMS_STATS is considered the proper way to calculate statistics for this version. Building the index with COMPUTE STATISTICS is doable, but usually you want to calculate all your stats at one time and take snapshots.
*You want to gather stats after the swap. This way the optimizer will make the best guess for executing queries using that partition's data.
*Why would you put both?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you enforce strong passwords? There are many techniques to enforce strong passwords on website:
*
*Requesting that passwords pass a regex of varying complexity
*Setting the password autonomously, so that casual users have a strong password
*Letting passwords expire
*etc.
On the other hands there are drawbacks, because all of them make life less easy for the user, meaning less registrations.
So, what techniques do you use? Which provide the best protection vs. inconvenience ratio?
To clear things up, I am not referring to banking sites or sites that store credit cards. Think more in terms of popular (or not-so-popular) sites that still require registration.
A: Don't enforce anything ... if you are not protecting financial information or something equally important, then don't make the user choose a strong password.
I have the same weak password on a whole load of sites that require registration for forums, etc. I don't really care if someone guesses it and can post messages as me (and don't think there is much motivation for someone to do so). What I can't do is remember different strong passwords for a dozen sites and don't really want to use another piece of software to manage them for me.
The best compromise would be to show some kind of feedback to the user on how strong the password is (based on whether it is a dictionary word, number of different character types, length, etc).
A: I don't think it's possible to enforce strong passwords, but there are lots of things you can do to encourage them as much as possible.
*
*Rate each password and give the user feedback in the form of a score or a graphical bar, etc.
*Set a minimum password score to weed out the awful ones
*Have a list of common words that are either banned, or tank the password score
One excellent trick I like to use is to have the password's expiry date tied to the password score. So stronger passwords don't need to be changed so often. This works particularly well if you can give users direct feedback about how long the password they've chosen will live for (and dynamically update it so they can see how adding characters affects the date).
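A toy version of such a scorer (in JavaScript, since the meter usually lives client-side) might look like this; the weights and the blacklist are arbitrary illustrations, not a standard:

// Toy password scorer; tune the weights and blacklist for your own site.
function scorePassword(pw) {
    var score = pw.length * 4;                              // reward length
    if (/[a-z]/.test(pw) && /[A-Z]/.test(pw)) score += 10;  // mixed case
    if (/[0-9]/.test(pw)) score += 10;                      // digits
    if (/[^A-Za-z0-9]/.test(pw)) score += 15;               // symbols
    var common = ["password", "123456", "qwerty"];          // tiny sample blacklist
    for (var i = 0; i < common.length; i++) {
        if (pw.toLowerCase() === common[i]) score = 0;      // tank common words
    }
    return score;
}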
A: Why enforce it?
I found that a "password strength meter" (a bar indicating password strength as you type) is usually a good non-intrusive measure. It makes those who care about security have a guilty conscience about password weakness, yet does not frustrate those who do not care as much.
Also, there is an insightful essay on why periodic password change policy is a bad idea with today's threat model.
A: It's been my experience that it depends really on the type of site, as you said.
If you are creating a bank or financial website then users typically understand if you have a more secure password, since their personal data may be at risk.
However, for sites that typically don't contain a lot of personal information, a simpler password will be fine. Such sites may be less prone to hack attempts, and an attacker wouldn't get anything worthwhile anyway.
I've also found that most people seem to have a couple of passwords they use often. One being complex, and another being simple. So requesting they use a complex password usually won't keep people from registering.
I've never found expiring passwords to work successfully. As I said before, many people already have a set couple of passwords they use often, so asking them to go outside of this just for your site may make them not want to come back.
A: The best way really depends on your site and what you are using. But the ideal way is to do as much on the client side as you can before they submit it. Using RegEx is a good way. If you can make them not have to submit the form again, that is ideal.
A: On letting passwords expire, there are two notable problems with the practice:
*
*Users find it more difficult to remember their current passwords, and so they are more likely to do silly things like write them on a post-it stuck to their monitor.
*Users don't generate a new, strong, unrelated password on each attempt. Most of the time they use some scheme to generate a password similar to their old one. Therefore, if an attacker gets an old password, it's still pretty easy for them to deduce a newer one.
EDIT: Which isn't to say I'm against the whole idea, but just that this needs to be considered along with other factors.
A: There's an Ajax tool, PasswordStrength, that will give the user an idea if their password is any good. I like it because it doesn't have to prohibit the creation of a password.
http://www.asp.net/AJAX/AjaxControlToolkit/Samples/PasswordStrength/PasswordStrength.aspx
A: I've never seen this done, but it seems like it would work wonderfully: the password creation page could have an expandable list of, say, the 50 most common passwords, forcing the user to scroll down a bit before typing in their password. This, combined with Checkers' suggestion, would do much to prevent careless choices.
However, solving the problem of preventing password reuse... no clue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What is best way to remove duplicate lines matching regex from string using Python? This is a pretty straightforward attempt. I haven't been using Python for too long. It seems to work, but I am sure I have much to learn. Someone let me know if I am way off here. It needs to find patterns, write the first line that matches, then add a summary message for the remaining consecutive lines that match the pattern, and return the modified string.
Just to be clear...regex .*Dog.* would take
Cat
Dog
My Dog
Her Dog
Mouse
and return
Cat
Dog
::::: Pattern .*Dog.* repeats 2 more times.
Mouse
#!/usr/bin/env python
#
import re
import types
def remove_repeats (l_string, l_regex):
    """Take a string, remove similar lines and replace with a summary message.
    l_regex accepts strings and tuples.
    """
    # Convert string to tuple.
    if type(l_regex) == types.StringType:
        l_regex = l_regex,
    for t in l_regex:
        r = ''
        p = ''
        m = 0
        for l in l_string.splitlines(True):
            if l.startswith('::::: Pattern'):
                r = r + l
            else:
                if re.search(t, l):  # If line matches regex.
                    m += 1
                    if m == 1:  # If this is the first match in a set of lines, add the line to the output.
                        r = r + l
                    elif m > 1:  # Else update the message string.
                        p = "::::: Pattern '" + t + "' repeats " + str(m-1) + ' more times.\n'
                else:
                    if p:  # Write the message string if it has value.
                        r = r + p
                        p = ''
                    m = 0
                    r = r + l
        if p:  # Write the message if the loop ended in a pattern.
            r = r + p
            p = ''
        l_string = r  # Reset string to modified string.
    return l_string
A: The rematcher function seems to do what you want:
def rematcher(re_str, iterable):
    matcher = re.compile(re_str)
    in_match = 0
    for item in iterable:
        if matcher.match(item):
            if in_match == 0:
                yield item
            in_match += 1
        else:
            if in_match > 1:
                yield "%s repeats %d more times\n" % (re_str, in_match-1)
            in_match = 0
            yield item
    if in_match > 1:
        yield "%s repeats %d more times\n" % (re_str, in_match-1)

import sys, re

for line in rematcher(".*Dog.*", sys.stdin):
    sys.stdout.write(line)
EDIT
In your case, the final string should be:
final_string= '\n'.join(rematcher(".*Dog.*", your_initial_string.split("\n")))
A: Updated your code to be a bit more efficient:
#!/usr/bin/env python
#
import re
import types
def remove_repeats (l_string, l_regex):
    """Take a string, remove similar lines and replace with a summary message.
    l_regex accepts strings/patterns or tuples of strings/patterns.
    """
    # Convert string/pattern to tuple.
    if not hasattr(l_regex, '__iter__'):
        l_regex = l_regex,

    ret = []
    last_regex = None
    count = 0

    for line in l_string.splitlines(True):
        if last_regex:
            # Previous line matched one of the regexes
            if re.match(last_regex, line):
                # This one does too
                count += 1
                continue  # skip to next line
            elif count > 1:
                ret.append("::::: Pattern %r repeats %d more times.\n" % (last_regex, count-1))
            count = 0
            last_regex = None

        ret.append(line)

        # Look for other patterns that could match
        for regex in l_regex:
            if re.match(regex, line):
                # Found one
                last_regex = regex
                count = 1
                break  # exit inner loop

    return ''.join(ret)
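A quick sanity check of this version against the sample input from the question (note that %r puts quotes around the pattern):

text = "Cat\nDog\nMy Dog\nHer Dog\nMouse\n"
print(remove_repeats(text, '.*Dog.*'))
# Cat
# Dog
# ::::: Pattern '.*Dog.*' repeats 2 more times.
# Mouse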
A: First, your regular expression will match more slowly than if you had left off the greedy match.
.*Dog.*
is equivalent to
Dog
but the latter matches more quickly because no backtracking is involved. The longer the strings, the more likely "Dog" appears multiple times and thus the more backtracking work the regex engine has to do. As it is, ".*D" virtually guarantees backtracking.
That said, how about:
#! /usr/bin/env python
import re         # regular expressions
import fileinput  # read from STDIN or file

my_regex = '.*Dog.*'
my_matches = 0

for line in fileinput.input():
    line = line.strip()
    if re.search(my_regex, line):
        if my_matches == 0:
            print(line)
        my_matches = my_matches + 1
    else:
        if my_matches != 0:
            print('::::: Pattern %s repeats %i more times.' % (my_regex, my_matches - 1))
        print(line)
        my_matches = 0
It's not clear what should happen with non-neighboring matches.
It's also not clear what should happen with single-line matches surrounded by non-matching lines. Append "Doggy" and "Hula" to the input file and you'll get a message saying the pattern repeats "0" more times.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Looking for CSS Example, or Explanation of Layout Below I've been learning CSS for a while now, but the simplified layout below is still a little beyond me, so I am asking whether anyone knows of a model for such a layout, or would have an explanation to make this work.
The page should have 3 bands or blocks:
header, bottom, and content.
The 'header' would start at left 0, top 0 in the visible screen, go all the way to the right edge, and be 70 px in height.
The 'bottom' band would start at left 0, but at the bottom of the visible screen, and also be 70px in height (eg start at the bottom of the visible screen minus 70px). It would extend all the way to the right edge of the visible screen.
The 'content' band would start at left 0, the top would start at the bottom of the 'header' band, and the bottom of the content block would extend down as far as the top of the 'bottom' band.
It would also be nice if the 'header' and 'bottom' band were fixed in their places, but the 'content' block were scrollable if there were more content that space in the block.
I think it's doable, but I can only get this so far at my current level, so I'd like to see how an expert would do it.
Many thanks
Mike Thomas
A: Fixed headers at the bottom of pages are difficult to implement and maintain. Can you guarantee that your content will always fit? Scrolling just a block instead of the entire page can be tedious for users because you have to get the focus right before using page-up and page-down.
A better solution would be to let the footer naturally go after the content is finished. You can set a min-height on the content for pages without much content.
Sorry not a real answer, but your site will be better this way.
A: I'm not sure if this solves it, but for fixed header and footer with a scrolling middle content section you can check this link out:
http://www.cssplay.co.uk/layouts/basics2.html
A: Here is a great site I have come across concerning CSS:
CSS tips and tutorials
I think it would be best to just follow some basic designs there and learn from there :-)
A: What you want is a sticky footer. As for the content, you'll need to implement a scrollable DIV - the key there is handling overflow.
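A bare-bones sketch of that approach (the IDs are illustrative; note that position: fixed is not supported by IE6, so older browsers need a workaround):

#header  { position: fixed; top: 0; left: 0; right: 0; height: 70px; }
#footer  { position: fixed; bottom: 0; left: 0; right: 0; height: 70px; }
#content { position: fixed; top: 70px; bottom: 70px; left: 0; right: 0;
           overflow: auto; /* scrolls when content exceeds the middle band */ }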
A: CSS Frames
This example covers the headers and footers that stay fixed at the top and bottom, although the width is less than 100%. I'm sure you can adjust.
By the way, the way I found this was by searching for "CSS frames" - keeping one part of the content stationary while the rest scrolls is sometimes referred to that way, because it's similar to what people used to do with HTML frames.
A: Examples - Css List Properties / Examples - Css Layer Properties / Examples - Css TextBox Properties / Examples - Css Font Properties / Examples - Css Text Properties / Examples - Css Cursor Properties / Examples - Css Background Properties / Examples - Css Table Properties / Examples - Css Scroll Bar Properties / Examples - Tools - Units - Parameters - Template Page - Web Design
http://www.css-lessons.ucoz.com/
A: Blueprint CSS is a nice tool that simplifies making CSS layouts.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a function to invoke a stack dump in C? Can someone please provide an implementation of a C function that gets invoked to dump the current stack? It's for a x86 linux system. It can be invoked in 2 ways: explicitly by another function, or after a crash (probably as a trap/int handler). The output can be either to the screen or to a file, as indicated by a parameter (handle). A clear explanation/comments of how the stack is unwound would obviously be very helpful. Thank you.
A: The documentation for the backtrace() function is in the GNU LIBC MANUAL.
A: Following on from Adam's answer, the source code that shows how to perform the actual stack backtracing is in GNU libc's backtrace(), under /libc/debug/backtrace.c - not sure if the full link below will be accepted by stackoverflow's html filters...
http://cvs.savannah.gnu.org/viewvc/*checkout*/libc/debug/backtrace.c?root=libc&revision=1.1.2.1&content-type=text%2Fplain
A: When function calls are nested, the stack grows downwards and builds a chain of stack frames. At any given point in a program it is theoretically possible to backtrace the sequence of stack frames to the original calling point. The backtrace() function navigates the stack frames from the calling point to the beginning of the program and provides an array of return addresses. The implementation of backtrace() in the glibc library contains platform-specific code for each platform.
In the case of an x86 platform, the contents of the ebp (base pointer) and esp (stack pointer) CPU registers, which hold the address of the current stack frame and of the stack pointer for any given function, are used to follow the chain of pointers and move up to the initial stack frame. This allows the sequence of return addresses to be gathered to build the backtrace.
If you would like to know more information on how backtrace() works and how to use it, I would recommend reading Stack Backtracing Inside Your Program (LINUX Journal).
Since you mentioned executing a backtrace from a signal handler for an x86 platform, I would like to add to Adam's answer and direct you to my response to the question he linked to for details on how to ensure a backtrace from a signal handler points to the actual location of the fault.
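Putting the pieces together, a minimal sketch using glibc's backtrace facilities might look like the following. Error handling is omitted; backtrace_symbols_fd is used rather than backtrace_symbols because it does not call malloc, which matters inside a signal handler, and as noted above, pointing precisely at the faulting instruction takes extra care. Compile with -g -rdynamic so symbol names appear in the output:

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

/* Dump up to 64 frames of the current stack to the given descriptor. */
static void dump_stack(int fd)
{
    void *frames[64];
    int count = backtrace(frames, 64);
    backtrace_symbols_fd(frames, count, fd);
}

static void crash_handler(int sig)
{
    dump_stack(STDERR_FILENO);
    _exit(1);
}

int main(void)
{
    signal(SIGSEGV, crash_handler);
    dump_stack(STDOUT_FILENO);  /* explicit invocation */
    *(volatile int *)0 = 42;    /* force a crash to exercise the handler */
    return 0;
}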
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How would you compare two XML Documents? As part of the base class for some extensive unit testing, I am writing a helper function which recursively compares the nodes of one XmlDocument object to another in C# (.NET). Some requirements of this:
*
*The first document is the source, e.g. what I want the XML document to look like. Thus the second is the one I want to find differences in and it must not contain extra nodes not in the first document.
*Must throw an exception when too many significant differences are found, and it should be easily understood by a human glancing at the description.
*Child element order is important, attributes can be in any order.
*Some attributes are ignorable; specifically xsi:schemaLocation and xmlns:xsi, though I would like to be able to pass in which ones are.
*Prefixes for namespaces must match in both attributes and elements.
*Whitespace between elements is irrelevant.
*Elements will either have child elements or InnerText, but not both.
While I'm scrapping something together: has anyone written such code and would it be possible to share it here?
As an aside, what would you call the first and second documents? I've been referring to them as "source" and "target", but it feels wrong since the source is what I want the target to look like, else I throw an exception.
A: I googled up a more complete list of solutions to this problem today; I am going to try one of them soon:
*
*http://xmlunit.sourceforge.net/
*http://msdn.microsoft.com/en-us/library/aa302294.aspx
*http://jolt.codeplex.com/wikipage?title=Jolt.Testing.Assertions.XML.Adaptors
*http://www.codethinked.com/checking-xml-for-semantic-equivalence-in-c
*https://vkreynin.wordpress.com/tag/xml/
*http://gandrusz.blogspot.com/2008/07/recently-i-have-run-into-usual-problem.html
*http://xmlspecificationcompare.codeplex.com/
*https://github.com/netbike/netbike.xmlunit
A: This code doesn't satisfy all your requirements, but it's simple and I'm using for my unit tests. Attribute order doesn't matter, but element order does. Element inner text is not compared. I also ignored case when comparing attributes, but you can easily remove that.
public bool XMLCompare(XElement primary, XElement secondary)
{
if (primary.HasAttributes) {
if (primary.Attributes().Count() != secondary.Attributes().Count())
return false;
foreach (XAttribute attr in primary.Attributes()) {
if (secondary.Attribute(attr.Name.LocalName) == null)
return false;
if (attr.Value.ToLower() != secondary.Attribute(attr.Name.LocalName).Value.ToLower())
return false;
}
}
if (primary.HasElements) {
if (primary.Elements().Count() != secondary.Elements().Count())
return false;
for (var i = 0; i <= primary.Elements().Count() - 1; i++) {
if (XMLCompare(primary.Elements().Skip(i).Take(1).Single(), secondary.Elements().Skip(i).Take(1).Single()) == false)
return false;
}
}
return true;
}
A: Microsoft has an XML diff API that you can use.
Unofficial NuGet: https://www.nuget.org/packages/XMLDiffPatch.
A: try XMLUnit. This library is available for both Java and .Net
A: For comparing two XML outputs in automated testing I found XNode.DeepEquals.
Compares the values of two nodes, including the values of all descendant nodes.
Usage:
var xDoc1 = XDocument.Parse(xmlString1);
var xDoc2 = XDocument.Parse(xmlString2);
bool isSame = XNode.DeepEquals(xDoc1.Document, xDoc2.Document);
//Assert.IsTrue(isSame);
Reference: https://learn.microsoft.com/en-us/dotnet/api/system.xml.linq.xnode.deepequals?view=netcore-2.2
A: Comparing XML documents is complicated. Google for xmldiff (there's even a Microsoft solution) for some tools. I've solved this a couple of ways. I used XSLT to sort elements and attributes (because sometimes they would appear in a different order, and I didn't care about that), and filter out attributes I didn't want to compare, and then either used the XML::Diff or XML::SemanticDiff perl module, or pretty printed each document with every element and attribute on a separate line, and using Unix command line diff on the results.
A: https://github.com/CameronWills/FatAntelope
Another alternative library to the Microsoft XML Diff API. It has a XML diffing algorithm to do an unordered comparison of two XML documents and produce an optimal matching.
It is a C# port of the X-Diff algorithm described here:
http://pages.cs.wisc.edu/~yuanwang/xdiff.html
Disclaimer: I wrote it :)
A: I am using ExamXML for comparing XML files. You can try it.
The authors, A7Soft, also provide API for comparing XML files
A: Another way to do this would be -
*
*Get the contents of both files into two different strings.
*Transform the strings using an XSLT (which will just copy everything over to two new strings). This will ensure that all spaces outside the elements are removed. This will result it two new strings.
*Now, just compare the two strings with each other.
This won't give you the exact location of the difference, but if you just want to know if there is a difference, this is easy to do without any third party libraries.
A: Not relevant for the OP since it currently ignores child order, but if you want a code only solution you can try XmlSpecificationCompare which I somewhat misguidedly developed.
A: All the above answers are helpful, but I tried XMLUnit, which looks like an easy-to-use NuGet package for checking the difference between two XML files. Here is C# sample code:
public static bool CheckXMLDifference(string xmlInput, string xmlOutput)
{
Diff myDiff = DiffBuilder.Compare(Input.FromString(xmlInput))
.WithTest(Input.FromString(xmlOutput))
.CheckForSimilar().CheckForIdentical()
.IgnoreComments()
.IgnoreWhitespace().NormalizeWhitespace().Build();
if(myDiff.Differences.Count() == 0)
{
// when there is no difference
// files are identical, return true;
return true;
}
else
{
//return false when there is 1 or more difference in file
return false;
}
}
If anyone wants to test it, I have also created an online tool using it; you can take a look here:
https://www.minify-beautify.com/online-xml-difference
A: Based on @Two Cents' answer and using this link XMLSorting, I have created my own XmlComparer.
Compare XML program
private static bool compareXML(XmlNode node, XmlNode comparenode)
{
if (node.Value != comparenode.Value)
return false;
if (node.Attributes.Count>0)
{
foreach (XmlAttribute parentnodeattribute in node.Attributes)
{
string parentattributename = parentnodeattribute.Name;
string parentattributevalue = parentnodeattribute.Value;
if (parentattributevalue != comparenode.Attributes[parentattributename].Value)
{
return false;
}
}
}
if(node.HasChildNodes)
{
sortXML(comparenode);
if (node.ChildNodes.Count != comparenode.ChildNodes.Count)
return false;
for(int i=0; i<node.ChildNodes.Count;i++)
{
string name = node.ChildNodes[i].LocalName;
if (compareXML(node.ChildNodes[i], comparenode.ChildNodes[i]) == false)
return false;
}
}
return true;
}
Sort XML program
private static void sortXML(XmlNode documentElement)
{
int i = 1;
SortAttributes(documentElement.Attributes);
SortElements(documentElement);
foreach (XmlNode childNode in documentElement.ChildNodes)
{
sortXML(childNode);
}
}
private static void SortElements(XmlNode rootNode)
{
for(int j = 0; j < rootNode.ChildNodes.Count; j++) {
for (int i = 1; i < rootNode.ChildNodes.Count; i++)
{
if (String.Compare(rootNode.ChildNodes[i].Name, rootNode.ChildNodes[i - 1].Name) < 0)
{
rootNode.InsertBefore(rootNode.ChildNodes[i], rootNode.ChildNodes[i - 1]);
}
}
}
// Console.WriteLine(j++);
}
private static void SortAttributes(XmlAttributeCollection attribCol)
{
if (attribCol == null)
return;
bool changed = true;
while (changed)
{
changed = false;
for (int i = 1; i < attribCol.Count; i++)
{
if (String.Compare(attribCol[i].Name, attribCol[i - 1].Name) < 0)
{
//Replace
attribCol.InsertBefore(attribCol[i], attribCol[i - 1]);
changed = true;
}
}
}
}
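The original post doesn't show the entry point, but presumably (from inside the same class) usage would look something like this; the file names are illustrative. The first tree is sorted up front, while compareXML sorts the second tree as it walks it:

XmlDocument expected = new XmlDocument();
XmlDocument actual = new XmlDocument();
expected.Load("expected.xml");   // illustrative file names
actual.Load("actual.xml");

sortXML(expected.DocumentElement);
bool same = compareXML(expected.DocumentElement, actual.DocumentElement);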
A: I solved this problem of xml comparison using XSLT 1.0 which can be used for comparing large xml files using an unordered tree comparison algorithm.
https://github.com/sflynn1812/xslt-diff-turbo
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "68"
} |
Q: Should I use NULL or an empty string to represent no data in table column? Null or empty string -- is one better than the other to represent no data in a table column? (I specifically use MySQL, but I'm thinking this is system-independent.) Are there major advantages/disadvantages to using one over the other, or is it simply programmer preference?
A: Null is better; "" actually represents data, and it won't register the same in your code.
A: In the context of the relational database model, null indicates "no value" or "unknown value". It exists for exactly the purpose you describe.
UPDATE: Sorry, I forgot to add that while most (all?) RDMBSs use this same definition for null, there are nuanced differences in how null is handled. For example, MySQL and Oracle allow multiple nulls in a UNIQUE column (or set of columns), because null is not a value, and cannot be considered unique (null != null). But the last time I used MS SQL Server, it only allowed a single null. So you might need to consider the RDBMS behavior, and whether the column in question will be constrained or indexed.
A: Neither. Represent absence of data as absence of tuples in a relation.
For performance reasons you might want to avoid joins in some RDBMSs, but try to design the model so that the information that can be missing is in a separate relation.
A: I strongly disagree with everyone who says to unconditionally use NULL. Allowing a column to be NULL introduces an additional state that you wouldn't have if you set the column up as NOT NULL. Do not do this if you don't need the additional state. That is, if you can't come up with a difference between the meaning of empty string and the meaning of null, then set the column up as NOT NULL and use empty string to represent empty. Representing the same thing in two different ways is a bad idea.
Most of the people who told you to use NULL also gave an example where NULL would mean something different than empty string. And in those examples, they are right.
Most of the time, however, NULL is a needless extra state that just forces programmers to have to handle more cases. As others have mentioned, Oracle does not allow this extra state to exist because it treats NULL and empty string as the same thing (it is impossible to store an empty string in a column that does not allow null in Oracle).
A: Here are a couple links from the MySQL site:
http://dev.mysql.com/doc/refman/5.0/en/problems-with-null.html
http://dev.mysql.com/doc/refman/5.0/en/working-with-null.html
I did read once that a NULL value is 2 bits, whereas an empty string is only 1 bit. 99% of the time this won't make any difference, but in a very large table where it doesn't matter whether it's NULL or '', it might be better to use '' if this is true.
A: Always use NULL. Consider the difference between "I don't know what this person's phone number is" (NULL) and "this person left it blank" (blank).
A: Use the right tool for the job. NULL can signify that no value was provided (yet) or it can signify that no value is applicable.
But an empty string is information too. It can signify that a value is applicable, and was given, but it happens to be an empty string.
Allowing a column to contain both NULL and '' gives you the opportunity to distinguish between these cases. In any case, it's not good to use one to signify the other.
Be aware that in string concatenation, anything combined with NULL yields NULL. For example: CONCAT(NULL, 'foo') yields NULL. Learn to use the COALESCE() function if you want to convert NULL to some default value in an SQL expression.
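For example (table and column names are made up; MySQL syntax):

SELECT CONCAT(first_name, ' ', COALESCE(middle_name, ''), ' ', last_name) AS full_name
FROM people;  -- without COALESCE, a NULL middle_name would null out the whole name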
A: Null. An empty string isn't "no data", it's data that happens to be empty.
A: Most of the time null is better. There are probably some situations where it makes little difference, but they are few. Just remember when you query that field = '' is not the same as field is null (in MySQL, at least).
A: As far as I can tell, Oracle doesn't distinguish a difference.
select 1 from (select '' as col from dual) where col is null;
A: Consider why there is no data in the column. Does it mean the table design is sloppy? Despite not liking nulls, there are occasions when they are appropriate (or, appropriate enough), and the system won't usually die. Just never allow nulls in anything that is a candidate key (primary or alternative key).
A: Create a separate table for just the nullable column and a foreign key to the main table. If a record doesn't have data for that column then it won't have a record in the second table. This is the cleanest solution and you don't have to worry about handling nulls or giving special meaning to empty strings.
A: NULL is a non-value that should be relegated to the dark ages from where it sprung. I have found that there is a non-trivial amount of programming required to handle special NULL cases that could easily be handled with a default value.
Set the default for your column to be an empty string.
Force the column to not allow null; once you assign a default value, a null should most likely never happen anyway.
Write your code blissfully ignoring the case where the column value is null.
One huge issue I have always had with NULL is that "SELECT * from tbl WHERE column = NULL" will always return an empty result set. NULL can never be equal to anything, including NULL. The special predicate "column IS NULL" is the only way to check for something being null. If you back away from null, then the comparison will succeed: "column = ''" - 7 rows returned.
I've done two major DB implementations from scratch where in the end I've regretted using NULL. Next time, no NULLs for me!
A: There is one important exception. Bill Karwin stated "CONCAT(NULL, 'foo') yields NULL" which is true for most RDBMSs but NOT for Oracle.
As suggested by James Curran above, Oracle chose this rather critical juncture to depart from standard SQL by treating NULLs and empty strings exactly the same. Worse than just treating them the same, however, it can actually corrupt the meaning of a NULL value by returning something other than NULL when concatenating.
Specifically, in oracle CONCAT(NULL, 'foo') yields 'foo'. Thanks Oracle, I've now lost my nulls which may not matter to you but sure makes a difference when the data is passed to other RDBMSs for further processing.
A: A "no data" value in a column should be represented by a default value. Remember that NULL signifies an unknown value, that is, the column can have a value or not but you don't know it as of this time.
In a loan application system for example, a NULL value on the Driver's License Number field means that the applicant or the loan processor didn't input the driver's license number. The NULL value doesn't automatically mean the applicant doesn't have a license. He may or may not have a license, you just don't know it, that's why it's NULL.
The ambiguity lies for string columns. A numeric column obviously contains a zero if there is no value. How can you represent a no value string? In the example above, for applicants with no driver's license, you can assign an arbitrary default value such as "none" or better yet an empty string. Just ensure that you use the default empty value in your other tables for consistency.
On the issue of not using NULLs as a principle, there are instances where they are in fact essential. As someone who works with statistics extensively, it is common for data providers to give you data sets with incomplete data. For example, in a data set of GDP per country, you can find missing GDP figures in the earlier and later years. One reason is that there is no official data for those years from the country's government. It will be incorrect to conclude that their GDP is zero (DUH!) and show a zero value in the extracted data or a graph. The correct value is NULL, meaning you don't have the data yet. The end user correctly interprets the missing datapoints in the extracted data and graphs as NOT zero. Furthermore, it won't cause errors in your computations especially when you do averages.
Some "rules" that make sense theoretically would in fact be a poor or incorrect solution in your case.
A: I find NULL values to be helpful for referential integrity. In the case of MySQL if a field is set to NOT NULL, then an insert requires the data to be set; otherwise, NULL is a possible value and Foreign Key constraint is not enforced.
*
*id: primary key
*product_id: FOREIGN KEY NOT NULL
*ref_id: (NULLABLE)
id and product_id area always required. ref_id can be set to NULL. However, if any other value is used it must satisfy the FOREIGN KEY constraint.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Order dates by upcoming So I build an array of various dates. Birthdays, anniversaries, and holidays. I'd like to order the array by which one is happening next, essentially sort October to September (wrapping to next year)
so if my array is
$a = array(0 => "1980-04-14", 1 => "2007-06-08",
           2 => "2008-12-25", 3 => "1978-11-03");
I'd like to sort it so it is arranged
$a = array(0 => "1978-11-03", 1 => "2008-12-25",
           2 => "1980-04-14", 3 => "2007-06-08");
because the November 'event' is the one that will happen next (based on it being October right now).
I'm trying usort where my cmp function is
function cmp($a, $b)
{
$a_tmp = split("-", $a);
$b_tmp = split("-", $b);
return strcmp($a_tmp[1], $b_tmp[1]);
}
I am not sure how to modify this to get my desired effect.
A: function relative_year_day($date) {
$value = date('z', strtotime($date)) - date('z');
if ($value < 0)
$value += 365;
return $value;
}
function cmp($a, $b)
{
$aValue = relative_year_day($a);
$bValue = relative_year_day($b);
if ($aValue == $bValue)
return 0;
return ($aValue < $bValue) ? -1 : 1;
}
$a = array("1980-04-14", "2007-06-08",
"2008-12-25", "1978-11-03");
usort($a, "cmp");
A: I would be tempted to establish the original year of the event, and then add enough whole years to it to ensure that the value is greater than your reference date (normally today's date). Or, possibly, greater than or equal to the reference date. You can then sort in simple date order.
Edited to add:
I'm not fluent enough in PHP to give an answer in that, but here's a Perl solution.
#!/bin/perl -w
# Sort sequence of dates by next occurrence of anniversary.
# Today's "birthdays" count as low (will appear first in sequence)
use strict;
my $refdate = "2008-10-05";
my @list = (
"1980-04-14", "2007-06-08",
"2008-12-25", "1978-11-03",
"2008-10-04", "2008-10-05",
"2008-10-06", "2008-02-29"
);
sub date_on_or_after
{
my($actdate, $refdate) = @_;
my($answer) = $actdate;
if ($actdate lt $refdate) # String compare OK with ISO8601 format
{
my($act_yy, $act_mm, $act_dd) = split /-/, $actdate;
my($ref_yy, $ref_mm, $ref_dd) = split /-/, $refdate;
$ref_yy++ if ($act_mm < $ref_mm || ($act_mm == $ref_mm && $act_dd < $ref_dd));
$answer = "$ref_yy-$act_mm-$act_dd";
}
return $answer;
}
sub anniversary_compare
{
my $r1 = date_on_or_after($a, $refdate);
my $r2 = date_on_or_after($b, $refdate);
return $r1 cmp $r2;
}
my @result = sort anniversary_compare @list;
print "Before:\n";
print "* $_\n" foreach (@list);
print "Reference date: $refdate\n";
print "After:\n";
print "* $_\n" foreach (@result);
Clearly, this is not dreadfully efficient - to make it efficient, you'd calculate the date_on_or_after() value once, and then sort on those values. Perl's comparison is slightly peculiar - the variables $a and $b are magic, and appear as if out of nowhere.
When run, the script produces:
Before:
* 1980-04-14
* 2007-06-08
* 2008-12-25
* 1978-11-03
* 2008-10-04
* 2008-10-05
* 2008-10-06
* 2008-02-29
Reference date: 2008-10-05
After:
* 2008-10-05
* 2008-10-06
* 1978-11-03
* 2008-12-25
* 2008-02-29
* 1980-04-14
* 2007-06-08
* 2008-10-04
Note that it largely ducks the issue of what happens with the 29th of February, because it 'works' to do so. Basically, it will generate the 'date' 2009-02-29, which compares correctly in sequence. The anniversary for 2000-02-28 would be listed before the anniversary for 2008-02-29 (if 2000-02-28 were included in the data).
A: So it occurred to me just to add 12 to any month that is less than my target month.
Which is now working.
so the final function
function cmp($a, $b)
{
$a_tmp = explode('-', $a['date']);
$b_tmp = explode('-', $b['date']);
if ($a_tmp[1] < date('m')) {
$a_tmp[1] += 12;
}
if ($b_tmp[1] < date('m')) {
$b_tmp[1] += 12;
}
return strcmp($a_tmp[1] . $a_tmp[2], $b_tmp[1] . $b_tmp[2]);
}
A: Use strtotime() to convert all the dates to timestamps before you add them to the array; then you can sort the array into ascending (also chronological) order. Now all you have to do is deal with the dates in the past, which is easily done by comparing them against a current timestamp,
i.e.
$count = count($a);
for ($i = 0; $i < $count; $i++) {
    if ($currentTimestamp > $a[$i]) {
        unset($a[$i]);
    }
}
A: No reason to reinvent the wheel. If you don't care about the keys you can use this.
$a = array_combine(array_map('strtotime', $a), $a);
ksort($a);
Or if you want to define your own callback.
function dateCmp($date1, $date2) {
return (strtotime($date1) > strtotime($date2))?1:-1;
}
usort($a, 'dateCmp');
If you want to keep the keys associated correctly just call uasort instead.
uasort($a, 'dateCmp');
I did a quick speed check and the callback functions were over an order of magnitude slower.
A: Don't compare strings, instead use seconds since 1970 (ints):
$date1 = split("-", $a);
$date2 = split("-", $b);
$seconds1 = mktime(0,0,0,$date1[1],$date1[2],$date1[0]);
$seconds2 = mktime(0,0,0,$date2[1],$date2[2],$date2[0]);
// eliminate years
$seconds1 %= 31536000;
$seconds2 %= 31536000;
return $seconds1 - $seconds2;
Also I don't know PHP but I think the gist is correct.
Edit: The comparison function is encapsulated to perform comparison, nothing more. To order a list in regards to the original question sort an array with today's date included, locate today's date in the array, and then move the elements before that position to the end in ascending order by position.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How can I embed one file into another with Ant? I am developing a small web app project (ColdFusion) and I am trying to keep my project split into multiple files during development, but deploy just one file on completion.
I have references to external files, for instance:
<script type="text/javascript" src="jquery-1.2.6.pack.js"></script>
<link type="text/css" rel="stylesheet" href="project.css" />
And when I build my project, I want to have the files included and embedded within the single finished product file.
<script type="text/javascript">eval(function(p,a,c,k,e,r) [...]</script>
<style type="text/css">div{font:normal;} [...]</style>
Anyway, it doesn't look like there is a basic way for Ant to do this. Anyone know?
A: Does this do what you want?
<property
name="filename"
value="jquery-1.2.6.pack.js"
/>
<loadfile
property="contents"
srcfile="${filename}"
/>
<replace dir=".">
<include name="index.cfm"/>
<replacetoken><![CDATA[<script type="text/javascript" src="${filename}"></script>]]></replacetoken>
<replacevalue><![CDATA[<script type="text/javascript">${contents}</script>]]></replacevalue>
</replace>
A: For a solution in pure ant, try the following:
<target name="replace">
<property name="js-filename" value="jquery-1.2.6.pack.js"/>
<property name="css-filename" value="project.css"/>
<loadfile property="js-file" srcfile="${js-filename}"/>
<loadfile property="css-file" srcfile="${css-filename}"/>
<replace file="input.txt">
<replacefilter token="&lt;script type=&quot;text/javascript&quot; src=&quot;${js-filename}&quot;&gt;&lt;/script&gt;" value="&lt;script type=&quot;text/javascript&quot;&gt;${js-file}&lt;/script&gt;"/>
<replacefilter token="&lt;link type=&quot;text/css&quot; rel=&quot;stylesheet&quot; href=&quot;${css-filename}&quot; /&gt;" value="&lt;style type=&quot;text/css&quot;&gt;${css-file}&lt;/style&gt;"/>
</replace>
</target>
I tested it, and it worked as expected. In the text to replace and in the replacement value you insert, all '<', '>' and '"' characters should be escaped as &lt;, &gt; and &quot;.
A: Answering my own question after a few hours of hacking...
<script language="groovy" src="build.groovy" />
and this groovy script replaces any referenced javascript or css file with the file contents itself.
f = new File("${targetDir}/index.cfm")
fContent = f.text
fContent = jsReplace(fContent)
fContent = cssReplace(fContent)
f.write(fContent)
// JS Replacement
def jsReplace(htmlFileText) {
println "Groovy: Replacing Javascript includes"
// extract all matched javascript src links
def jsRegex = /<script [^>]*src=\"([^\"]+)\"><\/script>/
def matcher = (htmlFileText =~ jsRegex)
for (i in matcher) {
// read external files in
def includeText = new File(matcher.group(1)).text
// sanitize the string for being regex replace string (dollar signs like jQuery/Prototype will screw it up)
includeText = java.util.regex.Matcher.quoteReplacement(includeText)
// weak compression (might as well)
includeText = includeText.replaceAll(/\/\/.*/, "") // remove single-line comments (like this!)
includeText = includeText.replaceAll(/[\n\r\f\s]+/, " ") // replace all whitespace with single space
// return content with embedded file
htmlFileText = htmlFileText.replaceFirst('<script [^>]*src="'+ matcher.group(1) +'"[^>]*></script>', '<script type="text/javascript">'+ includeText+'</script>');
}
return htmlFileText;
}
// CSS Replacement
def cssReplace(htmlFileText) {
println "Groovy: Replacing CSS includes"
// extract all matched CSS style href links
def cssRegex = /<link [^>]*href=\"([^\"]+)\"[^>]*>(<\/link>)?/
def matcher = (htmlFileText =~ cssRegex)
for (i in matcher) {
// read external files in
def includeText = new File(matcher.group(1)).text
// compress CSS
includeText = includeText.replaceAll(/[\n\r\t\f\s]+/, " ")
// sanitize the string for being regex replace string (dollar signs like jQuery/Prototype will screw it up)
includeText = java.util.regex.Matcher.quoteReplacement(includeText)
// return content with embedded file
htmlFileText = htmlFileText.replaceFirst('<link [^>]*href="'+ matcher.group(1) +'"[^>]*>(<\\/link>)?', '<style type=\"text/css\">'+ includeText+'</style>');
}
return htmlFileText;
}
So I guess that does it for me. It's been working pretty well, and it's extensible. Definitely not the best Groovy ever, but it's one of my first. Also, it required a few classpathed jars for it to compile. I lost track of which, but I believe it is the javax.scripting engine, groovy-engine.jar and groovy-all-1.5.6.jar
| {
"language": "en",
"url": "https://stackoverflow.com/questions/167990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do you defend your application against corporate politics? I can't tell you how many times I've run into situations where a higher-up says 'Look, just do it'. This is obviously at the expense of rigidity in the planning process, and will (probably) affect product quality.
By now, I have more experience with this, and with that, I have the confidence to stand up for the quality of the applications we produce here.
How would you handle this problem?
Is anyone out there a manager who's used the just do it command on a programmer? Why?
A: The most likely cause of being told "look, just do it" by a superior is information asymmetry: either of you, or both of you, knows something the other party doesn't. The manager might not be telling you that, in the grand scheme of things, this specific problem is fairly unimportant, or they may just be looking for a quick fix since there is pressure from someone else to get the thing done quickly, or they might simply plan not to stick around long enough to take responsibility for the consequences.
In a similar manner, they might not be able to fully appreciate the risks associated with the adverse choice, or may be deliberately making the "wrong" pick since it helps them meet their personal goals, etc. Information asymmetry (http://en.wikipedia.org/wiki/Information_asymmetry) is a well-known concept in the field of economics and you might want to read up on the topic.
The most likely cause, however, is a looming deadline, lack of planning and hence the total absence of time for any manoeuvres.
Solutions are many. There are two that worked for me best:
a) Improve communication; communicate more often and more efficiently. This means listening more, trying to understand whether the problem is lack of understanding of the risks associated with the poor quality, lack of appreciation of the software quality and adverse effects of taking shortcuts on the future maintainability (it staggers me that often it is the same people who would buy only best quality very expensive cars and insist on taking shortcuts in building software, highlighting the difference in how personal and non-personal choices are treated).
Or the issue might be that whilst actually fully appreciating the value of software quality and understanding the impact of future system maintainability (in my personal experience this is less common) they make a conscious decision to go down a cheaper route.
In essence, communication here means not that much trying to actively sell what you've got on your mind, but to try and absorb as much information as possible from the environment and the manager. Then it is going to be much easier to figure out your next step.
b) Alliances and partnerships. It is impossible to overstate the value of alliances. Even when your manager or project sponsor does not provide adequate support for quality (which is part of their job), the right alliance can help improve things significantly. Find those who care and unite. It can be a project sponsor who cares when the project manager doesn't, or the project manager when teammates don't. It can be a quality manager, a director, a fellow developer, a business analyst or a tester. The bad guys will back out or leave you alone to do things right, and then will definitely jump on the boat to collect the credit. Look at politicians: when they are trying to achieve something, the first thing they do is form a coalition. Unfortunately, when you're told to "just get on with it" you're already involved in politics, whether you like the smell of it or not.
Find someone who has a significant stake in the project’s success or failure, in its quality and make them an ally.
A: I find it's very important to be able to estimate completion times for tasks. If a manager gives you a crazy task, be able to tell him in a relatively short amount of time a realistic expectation for the amount of time it will take to complete their dream task. At least this way the manager can decide how important it is to him/her to get their task done.
A: Typically I have a good working relationship with managers. What I usually try to do is give them trade-offs: "I could do that, but then... If I did this instead...", then I let them make the decision. I once designed a de-normalized database -- completely flat table structure for each type of query -- because the boss's boss asked me to. I was 1 month on the job and I knew, because my boss shared the history of the project with me beforehand, that I likely wouldn't be able to convince him to do it otherwise. He just hated joins of all kinds. Now that boss is gone and I have a project in my backlog to rework the original database to add some extensions, and I'll normalize it as I refactor. Now that I've been here longer, I'm more likely than not to be taken up on the alternatives that I offer, partly because I've genuinely left the decisions to the person paying the bills when that person cares.
A: Whatever you propose to your boss, make sure that it looks good in Powerpoint. If it looks good in Powerpoint, chances are that he (or she) will go for it.
A: I make sure to document the requirements from that boss, complete the code and document it, and put in writing suggestions on how to improve the process. I put these suggestions in the code comments, in the documentation, and in the one-page spec I usually write.
Most of my "just do it" jobs were because the boss wanted X to happen and couldn't be bothered to do any planning. So I covered my ass by writing down my objections on all the paperwork, but kept my job (and my paycheck) by "just doing it"
I don't work there anymore, and that company will never be as big or successful in projects as the place I am at now. The lack of process and the "just do it" mentality are signs of a small company. I handled it by moving on to a new outfit to progress my career.
A: Take a 'how to sell' class, or read Selling for Dummies. Seriously though, it's all about how you present the solution and SELL the idea that you want them to BUY. I did sales for a while before becoming a full-time software engineer and I can really see the value of the stuff I learned there.
A: Some of these problems are caused by bad specifications.
However you also need to consider does the manager actually know best? (yes it can happen sometimes!) they may be privy to some info you do not have.
Ultimatly if you have to deal with this all the time you may want to look for another position.
Take a look at this book it discusses the politics in detail:
http://www.amazon.co.uk/Career-Programmer-Guerilla-Tactics-Imperfect/dp/1590596242/ref=sr_1_2?ie=UTF8&s=books&qid=1223055601&sr=8-2
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Convert BSTR to int Does anyone know how I can convert a BSTR to an int in VC++ 2008?
Thanks in advance.
A: Google suggests VarI4FromStr:
HRESULT VarI4FromStr(
_In_ LPCOLESTR strIn,
_In_ LCID lcid,
_In_ ULONG dwFlags,
_Out_ LONG *plOut
);
A: Try the _wtoi function:
int i = _wtoi( mybstr );
A: You should use ::VarI4FromStr(...).
A: You can pass a BSTR safely to any function expecting a wchar_t *. So you can use _wtoi().
A: BSTR s = SysAllocString(L"42");
int i = _wtoi(s);
A: You should be able to use boost::lexical_cast<>
#include <boost/lexical_cast.hpp>
#include <iostream>
int main()
{
wchar_t plop[] = L"123";
int value = boost::lexical_cast<int>(plop);
std::cout << value << std::endl;
}
The cool thing is that lexical_cast<> will work for any types that can be passed through a stream, and it's type safe!
A: This is a method I use to parse values out of strings. It's similar to Boost's lexical cast.
std::wistringstream iss(mybstr); // Should convert from bstr to wchar_t* for the constructor
iss >> myint; // Puts the converted string value in to myint
if(iss.bad() || iss.fail())
{
// conversion failed
}
A: You should use VarI4FromStr like others pointed out. BSTR is not wchar_t* because of differences in their NULL semantics (SysStringLen(NULL) is ok, but wcslen(NULL) is not).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Naming conventions in a Python library I'm implementing a search algorithm (let's call it MyAlg) in a python package. Since the algorithm is super-duper complicated, the package has to contain an auxiliary class for algorithm options. Currently I'm developing the entire package by myself (and I'm not a programmer), however I expect 1-2 programmers to join the project later. This would be my first project that will involve external programmers. Thus, in order to make their lifes easier, how should I name this class: Options, OptionsMyAlg, MyAlgOptions or anything else?
What would you suggest me to read in this topic except for http://www.joelonsoftware.com/articles/Wrong.html ?
Thank you
Yuri
[cross posted from here: http://discuss.joelonsoftware.com/default.asp?design.4.684669.0 will update the answers in both places]
A: I suggest you read PEP8 (styleguide for Python code).
A: Just naming it Options should be fine. The Python standard library generally takes the philosophy that namespaces make it easy and manageable for different packages to have identically named things. For example, open is both a builtin and a function in the os module, several different modules define an Error exception class, and so on.
This is why it's generally considered bad form to say from some_module import * since it makes it unclear to which open your code refers, etc.
A: If it all fits in one file, name the class Options. Then your users can write:
import myalg
searchOpts = myalg.Options()
searchOpts.whatever()
mySearcher = myalg.SearchAlg(searchOpts)
mySearcher.search("where's waldo?")
Note the Python Style Guide referenced in another answer suggests that packages should be named with all lowercase letters.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Creating a random ordered list from an ordered list I have an application that takes the quality results for a manufacturing process and
creates graphs both to show Pareto charts of the bad, and also to show production throughput.
To automate the task of testing these statistical procedures I would like to deterministically be able to add records into the database and have the quality tech go to certain graphs and compare to a known good graph. But, I also would like to simulate the results so they would go into the database as if a user was running through the testing process.
One idea I have had is to fill a list with i number good, j number bad1, k number bad 2, etc. And then somehow randomly sort the list before insertion into the database.
So, my question, is there a standard algorithm to take a sorted list of values and create a randomly sorted list?
A: You'll want to use a shuffle algorithm. Make sure to use a proper shuffle algorithm and not a home-baked one, because it may introduce some form of subtle pattern to the data. See this post by Jeff Atwood about the problem with using "random enough" shuffles.
A: Our host has a very good article about card shuffling; I believe some good ideas can be adopted from it:
http://www.codinghorror.com/blog/archives/001008.html
A: Depends on what you need for "randomness". The easiest way is probably just to throw all of the elements into a hash set and iterate over the result. The order you get will be deterministic, but for most intents and purposes can be considered random. Alternatively, you can generate random numbers in [0..length) of the list, picking out elements and pushing them onto a new list. Assuming that list removal is constant time, the result would be a random list generated with O(n) efficiency.
A: Random rnd = new Random();
List<int> orderedList = new List<int>();
List<int> randomList = new List<int>();
while (orderedList.Count != 0)
{
int index = rnd.Next(0, orderedList.Count);
randomList.Add(orderedList[index]);
orderedList.RemoveAt(index);
}
A: A simple answer is to have an array or vector, loop once through it, and for each position visited, pick an element at random from the remainder of the list and swap it with the current element.
You need a reasonable random number generator for this.
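In C#, a minimal sketch of that swap-based approach (the Fisher-Yates shuffle) might look like the following; the method and variable names are purely illustrative, and it assumes using System and System.Collections.Generic:
static readonly Random rnd = new Random();

static void Shuffle<T>(IList<T> list)
{
    // Visit each position once; swap it with a randomly chosen
    // position from the remainder of the list (itself included).
    for (int i = 0; i < list.Count - 1; i++)
    {
        int j = rnd.Next(i, list.Count); // j is in [i, list.Count)
        T tmp = list[i];
        list[i] = list[j];
        list[j] = tmp;
    }
}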
A: The way I used to do this was to have a loop that ran a number of times, generating two random indexes between 0 and the length of the list and swapping those two elements.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What's the BEST way to set up a library to support links into precompiled software for multiple platforms/compilation options? I'm maintaining a library that contains compiled objects that need to be linked into
a 3rd party executable. Sometimes the executable has been compiled for Solaris, sometimes as a 32-bit Linux application, sometimes it's a 64-bit Linux application.
What I'd love to do is pass one "path" to the library, and have the application then automatically pick up the right flavor of the library.
It'd be OK if it only worked on linux, so that I could just define the path in terms of the OS.
this particular case is for a library of PLI/VPI functions I want to link into a verilog simulator.
What I have now is
root/path/${MYPLILIB_VER}/rootname/${MYPLIFLAVOR}/plilib.so
where flavor is one of
solaris linux linux64
The flavor depends on the OS and, on Linux running on a 64-bit platform, also on whether I am running the 32-bit or 64-bit version of the program.
I'm looking for a better way.
A: Use the system info given from "uname" to set the paths automatically?
'uname -s' gives you the kernel name (eg Linux / SunOS)
'uname -i' will give you the architecture (eg x86 / x86_64)
A: Hm... it's looking like ELF might do what I want; now for some good
application notes.
On the LAST page of this paper on making DSOs is some info
on the $PLATFORM and $LIB expectations.
It seems like on Linux I should be able to use the lib/lib64 directory structure
to hold the two objects.
Off to learn more.
shared objects for the disoriented
A: I don't know which simulator you are using, but you might try putting the path in the LD_LIBRARY_PATH environment variable. I believe both Cadence and Mentor simulators will look in there. I'm not sure about VCS. Your simulator's user manual will have details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: My SQL Server 2000 Transaction Log Size Seems Small I'm wondering why my transaction log would only be 2 MB on my 40 GB database when I have recovery mode set to full and unlimited file growth set on the transaction log. Any ideas?
Edit: I'd say there are probably a few hundred MB inserted every day and a lot of updates going on. It's a database that drives a fairly active website.
A: Because you back up the log, and that's what it's supposed to do?
Sidenote (given that I can't comment): A full backup does not truncate the log. Only log backups, or TRUNCATE_ONLY commands, truncate the log.
A: Of the 40 GB, how much data is changed every day? The transaction log only traces logged operations (insert, delete, update) and never traces read operations or bulk-copied inserts using BCP or another bulk command (actually, I do not remember if the T-SQL command to bulk load data is available on SQL 2000 or not).
If you feel your logged operations should take more than 2 MB each day, examine scheduled jobs to see if someone is periodically dumping the log (DBCC SQLPERF(LOGSPACE) will show how much of each log file is actually in use).
A: If you are doing regular full backups, those will truncate the transaction log.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I redirect a file download using Grails? I'm currently writing a website that allows people to download Excel and text files. Is there a way to redirect to a different page when they click, so that we run javascript and do analytics (i.e. keep download count)? Currently, nothing prevents the user from simply right-clicking and saving.
Edit:
To be more specific, it would be nice for a single or double click of a file link to redirect to a temporary download page for analytics, then have the file be downloaded.
A: I started describing how you might do this in Grails, but then remembered that most analytics services (Google, Omniture, etc.) will let you track downloaded files by using the onclick event. If you're doing some custom JavaScript-based tracking, you can do the same thing. The onclick will get called before the document starts downloading. For example:
<a href="/path-to-download-file" onclick="record_download('filename')">myfile.txt</a>
More specifically for Google Analytics, here's some javascript to do this automatically:
http://www.goodwebpractices.com/downloads/gatag.js
A: I'm not sure what you are asking here, are you trying to figure out how to redirect in the controller or are you trying to override the right-click behavior in the browser?
To redirect in the controller, you can do something like this (documented here):
redirect(controller:"book",action:"list")
If you are trying to change button or link behavior that's client side and will require some Javascript most likely.
If you clarify I might be able to help more.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: JPA and inheritance I have some JPA entities that inherit from one another and use a discriminator to determine which class should be created (untested as of yet).
@Entity(name="switches")
@DiscriminatorColumn(name="type")
@DiscriminatorValue(value="500")
public class DmsSwitch extends Switch implements Serializable {}
@MappedSuperclass
public abstract class Switch implements ISwitch {}
@Entity(name="switch_accounts")
public class SwitchAccounts implements Serializable {
@ManyToOne()
@JoinColumn(name="switch_id")
DmsSwitch _switch;
}
So in the SwitchAccounts class I would like to use the base class Switch because I don't know which object will be created until runtime. How can I achieve this?
A: Like the previous commenters, I agree that the class model should be different. I think something like the following would suffice:
@Entity(name="switches")
@DiscriminatorColumn(name="type")
@DiscriminatorValue(value="400")
public class Switch implements ISwitch {
// Implementation details
}
@Entity(name="switches")
@DiscriminatorValue(value="500")
public class DmsSwitch extends Switch implements Serializable {
// implementation
}
@Entity(name="switches")
@DiscriminatorValue(value="600")
public class SomeOtherSwitch extends Switch implements Serializable {
// implementation
}
You could possibly prevent instantiation of a Switch directly by making the constructor protected. I believe Hibernate accepts that.
A: As your Switch class is not an entity, it cannot be used in an entity relationship. Unfortunately, you'll have to turn your mapped superclass into an entity to involve it in a relationship.
A: I don't think that you can with your current object model. The Switch class is not an entity, therefore it can't be used in relationships. The @MappedSuperclass annotation is for convenience rather than for writing polymorphic entities. There is no database table associated with the Switch class.
You'll either have to make Switch an entity, or change things in some other way so that you have a common superclass that is an entity.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is there a more efficient way of making pagination in Hibernate than executing select and count queries? Usually pagination queries look like this. Is there a better way instead of making two almost equal methods, one of which executing "select *..." and the other one "count *..."?
public List<Cat> findCats(String name, int offset, int limit) {
Query q = session.createQuery("from Cat where name=:name");
q.setString("name", name);
if (offset > 0) {
q.setFirstResult(offset);
}
if (limit > 0) {
q.setMaxResults(limit);
}
return q.list();
}
public Long countCats(String name) {
Query q = session.createQuery("select count(*) from Cat where name=:name");
q.setString("name", name);
return (Long) q.uniqueResult();
}
A: My solution will work for the very common use case of Hibernate+Spring+MySQL
Similar to the above answer, I based my solution upon Dr Richard Kennard's. However, since Hibernate is often used with Spring, I wanted my solution to work very well with Spring and the standard method for using Hibernate. Therefore my solution uses a combination of thread locals and singleton beans to achieve the result. Technically the interceptor is invoked on every prepared SQL statement for the SessionFactory, but it skips all logic and does not initialize any ThreadLocal(s) unless it is a query specifically set to count the total rows.
Using the below class, your Spring configuration looks like:
<bean id="foundRowCalculator" class="my.hibernate.classes.MySQLCalcFoundRowsInterceptor" />
<!-- p:sessionFactoryBeanName="mySessionFactory"/ -->
<bean id="mySessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean"
p:dataSource-ref="dataSource"
p:packagesToScan="my.hibernate.classes"
p:entityInterceptor-ref="foundRowCalculator"/>
Basically you must declare the interceptor bean and then reference it in the "entityInterceptor" property of the SessionFactoryBean. You must only set "sessionFactoryBeanName" if there is more than one SessionFactory in your Spring context and the session factory you want to reference is not called "sessionFactory". The reason you cannot set a reference is that this would cause an interdependency between the beans that cannot be resolved.
Using a wrapper bean for the result:
package my.hibernate.classes;
import java.util.List;
public class PagedResponse<T> {
public final List<T> items;
public final int total;
public PagedResponse(List<T> items, int total) {
this.items = items;
this.total = total;
}
}
Then using an abstract base DAO class you must call "setCalcFoundRows(true)" before making the query and "reset()" after [in a finally block to ensure it's called]:
package my.hibernate.classes;
import java.util.List;
import org.hibernate.Criteria;
import org.hibernate.Query;
import org.springframework.beans.factory.annotation.Autowired;
public abstract class BaseDAO {
@Autowired
private MySQLCalcFoundRowsInterceptor rowCounter;
public <T> PagedResponse<T> getPagedResponse(Criteria crit, int firstResult, int maxResults) {
rowCounter.setCalcFoundRows(true);
try {
// @SuppressWarnings must be attached to a declaration, so bind the
// untyped result to a local variable before returning it.
@SuppressWarnings("unchecked")
List<T> items = crit.
setFirstResult(firstResult).
setMaxResults(maxResults).
list();
return new PagedResponse<T>(items, rowCounter.getFoundRows());
} finally {
rowCounter.reset();
}
}
public <T> PagedResponse<T> getPagedResponse(Query query, int firstResult, int maxResults) {
rowCounter.setCalcFoundRows(true);
try {
@SuppressWarnings("unchecked")
List<T> items = query.
setFirstResult(firstResult).
setMaxResults(maxResults).
list();
return new PagedResponse<T>(items, rowCounter.getFoundRows());
} finally {
rowCounter.reset();
}
}
}
Then a concrete DAO class example for an @Entity named MyEntity with a String property "prop":
package my.hibernate.classes;
import org.hibernate.SessionFactory;
import org.hibernate.criterion.Restrictions;
import org.springframework.beans.factory.annotation.Autowired;
public class MyEntityDAO extends BaseDAO {
@Autowired
private SessionFactory sessionFactory;
public PagedResponse<MyEntity> getPagedEntitiesWithPropertyValue(String propVal, int firstResult, int maxResults) {
return getPagedResponse(
sessionFactory.
getCurrentSession().
createCriteria(MyEntity.class).
add(Restrictions.eq("prop", propVal)),
firstResult,
maxResults);
}
}
Finally the interceptor class that does all the work:
package my.hibernate.classes;
import java.io.IOException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.hibernate.EmptyInterceptor;
import org.hibernate.HibernateException;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.jdbc.Work;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
public class MySQLCalcFoundRowsInterceptor extends EmptyInterceptor implements BeanFactoryAware {
/**
*
*/
private static final long serialVersionUID = 2745492452467374139L;
//
// Private statics
//
private final static String SELECT_PREFIX = "select ";
private final static String CALC_FOUND_ROWS_HINT = "SQL_CALC_FOUND_ROWS ";
private final static String SELECT_FOUND_ROWS = "select FOUND_ROWS()";
//
// Private members
//
private SessionFactory sessionFactory;
private BeanFactory beanFactory;
private String sessionFactoryBeanName;
private ThreadLocal<Boolean> mCalcFoundRows = new ThreadLocal<Boolean>();
private ThreadLocal<Integer> mSQLStatementsPrepared = new ThreadLocal<Integer>() {
@Override
protected Integer initialValue() {
return Integer.valueOf(0);
}
};
private ThreadLocal<Integer> mFoundRows = new ThreadLocal<Integer>();
private void init() {
if (sessionFactory == null) {
if (sessionFactoryBeanName != null) {
sessionFactory = beanFactory.getBean(sessionFactoryBeanName, SessionFactory.class);
} else {
try {
sessionFactory = beanFactory.getBean("sessionFactory", SessionFactory.class);
} catch (RuntimeException exp) {
}
if (sessionFactory == null) {
sessionFactory = beanFactory.getBean(SessionFactory.class);
}
}
}
}
@Override
public String onPrepareStatement(String sql) {
if (mCalcFoundRows.get() == null || !mCalcFoundRows.get().booleanValue()) {
return sql;
}
switch (mSQLStatementsPrepared.get()) {
case 0: {
mSQLStatementsPrepared.set(mSQLStatementsPrepared.get() + 1);
// First time, prefix CALC_FOUND_ROWS_HINT
StringBuilder builder = new StringBuilder(sql);
int indexOf = builder.indexOf(SELECT_PREFIX);
if (indexOf == -1) {
throw new HibernateException("First SQL statement did not contain '" + SELECT_PREFIX + "'");
}
builder.insert(indexOf + SELECT_PREFIX.length(), CALC_FOUND_ROWS_HINT);
return builder.toString();
}
case 1: {
mSQLStatementsPrepared.set(mSQLStatementsPrepared.get() + 1);
// Before any secondary selects, capture FOUND_ROWS. If no secondary
// selects are
// ever executed, getFoundRows() will capture FOUND_ROWS
// just-in-time when called
// directly
captureFoundRows();
return sql;
}
default:
// Pass-through untouched
return sql;
}
}
public void reset() {
if (mCalcFoundRows.get() != null && mCalcFoundRows.get().booleanValue()) {
mSQLStatementsPrepared.remove();
mFoundRows.remove();
mCalcFoundRows.remove();
}
}
@Override
public void afterTransactionCompletion(Transaction tx) {
reset();
}
public void setCalcFoundRows(boolean calc) {
if (calc) {
mCalcFoundRows.set(Boolean.TRUE);
} else {
reset();
}
}
public int getFoundRows() {
if (mCalcFoundRows.get() == null || !mCalcFoundRows.get().booleanValue()) {
throw new IllegalStateException("Attempted to getFoundRows without first calling 'setCalcFoundRows'");
}
if (mFoundRows.get() == null) {
captureFoundRows();
}
return mFoundRows.get();
}
//
// Private methods
//
private void captureFoundRows() {
init();
// Sanity checks
if (mFoundRows.get() != null) {
throw new HibernateException("'" + SELECT_FOUND_ROWS + "' called more than once");
}
if (mSQLStatementsPrepared.get() < 1) {
throw new HibernateException("'" + SELECT_FOUND_ROWS + "' called before '" + SELECT_PREFIX + CALC_FOUND_ROWS_HINT + "'");
}
// Fetch the total number of rows
sessionFactory.getCurrentSession().doWork(new Work() {
@Override
public void execute(Connection connection) throws SQLException {
final Statement stmt = connection.createStatement();
ResultSet rs = null;
try {
rs = stmt.executeQuery(SELECT_FOUND_ROWS);
if (rs.next()) {
mFoundRows.set(rs.getInt(1));
} else {
mFoundRows.set(0);
}
} finally {
if (rs != null) {
rs.close();
}
try {
stmt.close();
} catch (RuntimeException exp) {
}
}
}
});
}
public void setSessionFactoryBeanName(String sessionFactoryBeanName) {
this.sessionFactoryBeanName = sessionFactoryBeanName;
}
@Override
public void setBeanFactory(BeanFactory arg0) throws BeansException {
this.beanFactory = arg0;
}
}
A: If you don't need to display the total number of pages then I'm not sure you need the count query. Lots of sites, including Google, don't show the total on paged results. Instead they just say "next>".
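One common way to support a plain "next>" link without a count query is to fetch one row more than the page size and use the extra row only as a flag. A hedged NHibernate-flavoured sketch (Cat, query, offset and pageSize are placeholders, not from the question):
// Fetch pageSize + 1 rows; the extra row is never displayed,
// it only tells us whether a "next" link should be rendered.
IList<Cat> rows = query
    .SetFirstResult(offset)
    .SetMaxResults(pageSize + 1)
    .List<Cat>();

bool hasNextPage = rows.Count > pageSize;
IEnumerable<Cat> page = rows.Take(pageSize); // requires System.Linq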
A: You can use MultiQuery to execute both queries in a single database call, which is much more efficient. You can also generate the count query, so you don't have to write it each time. Here's the general idea ...
var hql = "from Item i where i.Age > :age";
var countHql = "select count(*) " + hql;
IMultiQuery multiQuery = _session.CreateMultiQuery()
    .Add(_session.CreateQuery(hql)
        .SetInt32("age", 50).SetFirstResult(10))
    .Add(_session.CreateQuery(countHql)
        .SetInt32("age", 50));
var results = multiQuery.List();
var items = (IList<Item>) results[0];
var count = (long) ((IList) results[1])[0];
I imagine it would be easy enough to wrap this up into some easy-to-use method so you can have paginateable, countable queries in a single line of code.
As an alternative, if you're willing to test the work-in-progress Linq for NHibernate in nhcontrib, you might find you can do something like this:
var itemSpec = from i in session.Linq<Item>() where i.Age > age select i;
var count = itemSpec.Count();
var list = itemSpec.Skip(10).Take(10).ToList();
Obviously there's no batching going on, so that's not as efficient, but it may still suit your needs?
Hope this helps!
A: There is a way
mysql> SELECT SQL_CALC_FOUND_ROWS * FROM tbl_name
-> WHERE id > 100 LIMIT 10;
mysql> SELECT FOUND_ROWS();
The second SELECT returns a number indicating how many rows the first SELECT would have returned had it been written without the LIMIT clause.
Reference: FOUND_ROWS()
A: I know this problem and have faced it before. For starters, the double-query mechanism that repeats the same SELECT conditions is indeed not optimal. But it works, and before you go off and make some giant change, just realize it might not be worth it.
But, anyways:
1) If you are dealing with small data on the client side, use a result set implementation that lets you set the cursor to the end of the set, get its row offset, then reset the cursor to before first.
2) Redesign the query so that you get COUNT(*) as an extra column in the normal rows. Yes, it contains the same value for every row, but it only involves one extra column that is an integer. Strictly speaking, representing an aggregated value alongside non-aggregated values is improper SQL, but it may work.
3) Redesign the query to use an estimated limit, similar to what was being mentioned. Use rows per page and some upper limit. E.g. just say something like "Showing 1 to 10 of 500 or more". When they browse to "Showing 250 to 260 of X", it's a later query, so you can just update the X estimate by making the upper bound relative to page * rows/page.
A: Baron Schwartz at MySQLPerformanceBlog.com authored a post about this. I wish there was a magic bullet for this problem, but there isn't. Summary of the options he presented:
*
*On the first query, fetch and cache all the results.
*Don't show all results.
*Don't show the total count or the intermediate links to other pages. Show only the "next" link.
*Estimate how many results there are.
A: I think the solution depends on the database you are using. For example, we are using MS SQL and use the following query:
select
COUNT(Table.Column) OVER() as TotalRowsCount,
Table.Column,
Table.Column2
from Table ...
That part of the query can be replaced with database-specific SQL.
We also set the maximum result we are expecting to see, e.g.
query.setMaxResults(pageNumber * itemsPerPage)
And gets the ScrollableResults instance as result of query execution:
ScrollableResults result = null;
try {
result = query.scroll();
result.next(); // position the cursor on the first row before reading
int totalRowsNumber = result.getInteger(0);
int from; // calculate the index of the first row to read for the expected page, if any
/*
* Read the page's data, using Transformers.ALIAS_TO_ENTITY_MAP
* to make life easier.
*/
}
finally {
if (result != null)
result.close();
}
A: At this Hibernate wiki page:
https://www.hibernate.org/314.html
I present a complete pagination solution; in particular, the total number of elements is computed by scrolling to the end of the resultset, which is by now supported by several JDBC drivers. This avoids the second "count" query.
A: I found a way to do paging in Hibernate without doing a select count(*) over a large dataset. Look at the solution that I posted in my answer here:
processing a large number of database entries with paging slows down with time
You can perform paging one page at a time without knowing in advance how many pages you will need.
A: Here is a solution by Dr Richard Kennard (mind the bug fix in the blog comment!), using Hibernate Interceptors
To summarize: you bind your SessionFactory to your interceptor class, so that the interceptor can give you the number of found rows later.
You can find the code on the solution link. And below is an example usage.
SessionFactory sessionFactory = ((org.hibernate.Session) mEntityManager.getDelegate()).getSessionFactory();
MySQLCalcFoundRowsInterceptor foundRowsInterceptor = new MySQLCalcFoundRowsInterceptor( sessionFactory );
Session session = sessionFactory.openSession( foundRowsInterceptor );
try {
org.hibernate.Query query = session.createQuery( ... ); // Note: JPA-QL, not createNativeQuery!
query.setFirstResult( ... );
query.setMaxResults( ... );
List entities = query.list();
long foundRows = foundRowsInterceptor.getFoundRows();
...
} finally {
// Disconnect() is good practice, but close() causes problems. Note, however, that
// disconnect could lead to lazy-loading problems if the returned list of entities has
// lazy relations
session.disconnect();
}
A: Here's the way pagination is done in Hibernate:
Query q = sess.createQuery("from DomesticCat cat");
q.setFirstResult(20);
q.setMaxResults(10);
List cats = q.list();
You can get more info from the Hibernate docs at: http://www.hibernate.org/hib_docs/v3/reference/en-US/html_single/#objectstate-querying-executing-pagination
Sections 10.4.1.5 and 10.4.1.6 give you more flexible options.
BR,
~A
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: SQL Server Service Broker Issue & Tutorials I've been looking into implementing an external activator in SQL Server Express 2005, and I added the queues, services, contracts, and event notifications to the database. I also added a trigger to send a message to the target queue. Everything parses, runs, and the trigger is firing. However, when I select from the target queue, or use a quick T-SQL script to receive from the queue, nothing is there.
I'm wondering:
*
*How is that even possible? Are the messages being auto-received?
*Is there any way to check while sending a message if it arrived correctly?
*Is there a better way to run a process on the server asynchronously after a trigger is fired?
As an aside, good tutorial material for the Service Broker is hard to find. If anyone has any resources, please let me know. Right now, I'm reading a book from our company's online resource but even that is a pain to filter through.
Thanks,
William
A: In answer to your first question, hopefully, you'll see something in the sys.transmission_queue system view. See
http://msdn.microsoft.com/en-us/library/ms190336.aspx for documentation on that.
If you Google that, you might find some useful troubleshooting resources too.
Dave
A: John,
I've only recently begun looking into the service broker in order to implement asynch messaging between DB instances. I found the following to be quite useful in getting my head around it.
http://msdn.microsoft.com/en-us/library/bb839489(SQL.90).aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Django: How do I create a generic url routing to views? I have a pretty standard django app, and am wondering how to set the url routing so that I don't have to explicitly map each url to a view.
For example, let's say that I have the following views: Project, Links, Profile, Contact. I'd rather not have my urlpatterns look like this:
(r'^Project/$', 'mysite.app.views.project'),
(r'^Links/$', 'mysite.app.views.links'),
(r'^Profile/$', 'mysite.app.views.profile'),
(r'^Contact/$', 'mysite.app.views.contact'),
And so on. In Pylons, it would be as simple as:
map.connect(':controller/:action/:id')
And it would automatically grab the right controller and function. Is there something similar in Django?
A: mods = ('Project','Links','Profile','Contact')
urlpatterns = patterns('',
*(('^%s/$'%n, 'mysite.app.views.%s'%n.lower()) for n in mods)
)
A: Unless you have a really huge number of views, writing them down explicitly is not too bad, from a style perspective.
You can shorten your example, though, by using the prefix argument of the patterns function:
urlpatterns = patterns('mysite.app.views',
(r'^Project/$', 'project'),
(r'^Links/$', 'links'),
(r'^Profile/$', 'profile'),
(r'^Contact/$', 'contact'),
)
A: You might be able to use a special view function along these lines:
from django.http import Http404

def router(request, function, module):
    m = __import__(module, globals(), locals(), [function.lower()])
    try:
        return m.__dict__[function.lower()](request)
    except KeyError:
        raise Http404()
and then a urlconf like this:
(r'^(?P<function>.+)/$', router, {"module": 'mysite.app.views'}),
This code is untested but the general idea should work, even though you should remember:
Explicit is better than implicit.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Best Flash Audio/Video + Interactivity server? I'm looking for suggestions on Flash realtime servers. Currently, we use a combination of Moock's Unity and Red5, but there are a couple problems. First, we are moving to AS3, and Unity only supports AS2. Secondly, Red5 is pretty flaky for us, we'd prefer something more stable. We can't use the official Flash Media Server, it's a bit out of our price range (starts at $4,500 for a single license).
So far, I've found two servers that look like they would meet my needs, ElectroServer and Wowza Media Server. Does anyone have any experience with these, or have any other servers to suggest? The main features I'm looking for:
*
*Stable
*AS3 support in client libraries
*Can extend server-side (with Java or other languages)
*Supports real time audio/video from flash clients (eg webcams)
*(not required, but very helpful) Some method of communicating when all traffic except HTTP or HTTPS is blocked. Eg RTMPT (tunnels RTMP over HTTP) support or similar.
*Reasonable performance, I'd like to get at least a couple hundred users connected without killing a server.
A: Give Wowza a try! I've only used it for webcam recording, but the experience was very seamless, a far cry from Red5. Plus as a developer you can use the full Wowza for free AFAIK, so you don't have to take my word for it. It's easy to install, they have good code samples, it really gave me a good impression.
Another interesting fact is that Wowza is made by ex-Adobe/Macromedia engineers who used to work on FMS.
A: I vote ElectroServer - it's pretty stable, reasonably priced and I've met the guy who runs the company and he's a swell guy!
*
*Stable - YES
*AS3 support in client libraries -YES
*Can extend server-side (with Java or other languages) - YES
*Supports real time audio/video from flash clients (eg webcams) - YES
*(not required, but very helpful) Some method of communicating when all traffic except HTTP or HTTPS is blocked. Eg RTMPT (tunnels RTMP over HTTP) support or similar. NO - BUT WILL BE SUPPORTED IN NEXT VERSION
*Reasonable performance, I'd like to get at least a couple hundred users connected without killing a server. YES - EXTENSIVELY LOAD TESTED WITH THOUSANDS OF USERS
A: For future reference, we ended up going with Wowza. Wowza and ElectroServer seemed pretty close in most aspects, so we tried to do a little demo in each one to compare. We had some trouble getting started with ES, so we just went with Wowza.
A: I prefer Red5; it has been developed a lot these last months and I found it pretty stable. Once you get one app working, the rest is easy, and all the requirements you mention are available.
If by "more stable" you mean the Red5 source code changes too much, yes it does if you use the SVN, but you could choose one of the older versions and have a pretty good server for a while until you need to move to a newer version.
Also, there is nothing like open source: you can change anything and share it so others can help improve it, taking something and giving as well. The problem with a commercial solution is that even when you get professional support or a pretty stable product, you will get the answer "we don't support customized servers" or such when you have very specific requirements, so forget about experimenting :P
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: What would be Linux equivalent of GetProcessIoCounters? Here's a link to Windows documentation.
Basically I would like to get similar data, but on Linux. If not all is possible, then at least some parts.
A: Have a look at /proc/PID/io - it's the current I/O accounting information of the process with PID.
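For illustration, that file's contents look something like the following (the numbers here are made-up examples; the field names are what the kernel exposes):
rchar: 3002097
wchar: 323932
syscr: 1345
syscw: 212
read_bytes: 580608
write_bytes: 323584
cancelled_write_bytes: 0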
A: If you enable CONFIG_TASK_IO_ACCOUNTING, you will have the information available in /proc/<pid>/io. This is available since kernel 2.6.20, but not normally enabled by default (however, in Ubuntu 8.04 it seems to be enabled).
You can read about the various data items in Documentation/filesystems/proc.txt in the kernel source tree. Especially section 2.14 should be of interest.
A: Look at the pseudo-files under /proc/<PID>/. Maybe you can find what you need there.
Look at man 5 proc, or failing that the kernel documentation. However, I don't see much that looks promising. Sorry.
A: Perhaps you want getrusage()? Not all fields are maintained under linux however. Perhaps enabling the CONFIG_TASK_IO_ACCOUNTING will cause them to be maintained?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Time math in Delphi I have a pretty unusual problem (for me). I am writing an application that will allow a user to change their system time forward or back either by explicit date (change my date to 6/3/1955) or by increment using buttons (go forward 1 month).
I'm writing this to help some of my users test some software that requires jumps like this in order to simulate real world usage of a billing system.
Changing the time in Delphi is of course very easy:
SetDateTime(2008,05,21,16,07,21,00);
But I'm not sure if Delphi (2006) has any built-in helpers for date math, which is one of my least favorite things :)
Any suggestions for the best way to handle this? I'd prefer to stay native as the winapi datetime calls suck.
Thanks!
A: There are plenty of helpers in the DateUtils unit.
A: What do you want to happen if the day of the current month doesn't exist in your future month? Say, January 31 + 1 month? You have the same problem if you increment the year and the starting date is February 29 on a leap year. So there can't be a universal IncMonth or IncYear function that will work consistently on all dates.
For anyone interested, I heartily recommend Julian Bucknall's article on the complexities that are inherent in this type of calculation: how to calculate the number of months and days between two dates.
The following are the only generic date-increment functions possible that do not introduce anomalies into generic date math. But they only accomplish this by shifting the responsibility back onto the programmer, who presumably has the exact requirements of the specific application he/she is programming.
IncDay - Add or subtract a number of days.
IncWeek - Add or subtract a number of weeks.
But if you must use the built in functions then at least be sure that they do what you want them to do. Have a look at the DateUtils and SysUtils units. Having the source code to these functions is one of the coolest aspects of Delphi. Having said that, here is the complete list of built in functions:
IncDay - Add or subtract a number of days.
IncWeek - Add or subtract a number of weeks.
IncMonth - Add or subtract a number of months.
IncYear - Add or subtract a number of years.
As for the second part of your question, how to set the system date & time using a TDatetime, the following shamelessly stolen code from another post will do the job:
procedure SetSystemDateTime(aDateTime: TDateTime);
var
  lSystemTime: TSystemTime;
  lTimeZone: TTimeZoneInformation;
begin
  GetTimeZoneInformation(lTimeZone);
  // Bias is expressed in minutes (UTC = local time + Bias); 1440 minutes in a day
  aDateTime := aDateTime + (lTimeZone.Bias / 1440);
  DateTimeToSystemTime(aDateTime, lSystemTime);
  SetSystemTime(lSystemTime);
end;
A: The VCL has types (TDate and TDateTime) which are doubles, so you can use them in arithmetic operations.
Also see EncodeDate and DecodeDate
A: As mentioned by gabr and mliesen, have a look at the DateUtils and SysUtils units, useful functions include.
*
*IncDay - Add or subtract a number of days.
*IncMonth - Add or subtract a number of months.
*IncWeek - Add or subtract a number of weeks.
*IncYear - Add or subtract a number of years.
*EncodeDate - Returns a TDateTime value from the Year, Month, and Day params.
A: There are plenty of helpers in the SysUtils unit (and, as gabr pointed out, also in DateUtils).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How can I do bi-directional communication with a custom USB device? I'm planning to build a USB device that has buttons that some software needs to respond to, and indicators that the software needs to control. I'm new to USB, so I'm looking for any pointers that will get me started.
A: When I did some USB development a while ago, I found the information at USB Central extremely valuable.
For low bandwidth requirements, you can use something like the FT232R which is a single-chip USB serial implementation. The FTDI drivers are readily available and make the device appear as a regular serial port to the host computer. This is orders of magnitude easier than rolling your own USB implementation (for either end!).
A: Kinda vague, but in the past I've done a little bit of USB development. The easiest stuff tends to be HID-related devices, as the subset of USB used to communicate is very easy to implement on both sides. There are hardware devices which are essentially stubbed out to work with HID; you just customize some circuitry and go.
A: The USB standard is actually quite readable. Though it might be a bit overkill if you just want to create a simple device. You could probably get something like this, which is basically an 8051 controller with a USB connector together with firmware and a DLL.
A: Check out WinDriver, a commercial multi-platform tool that gives you an easy way to implement USB drivers in user mode, with source code compatible between Linux and Windows.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to set up your own PEAR Channel? I am looking for instructions on how to setup a PEAR channel for our project so that we can deploy it with the pear installer. I have searched the web for a while and cannot find any straightforward information. I followed this tutorial for a while, but I am having a hell of a time getting this to work. Does anyone know how to do this? Is there a simpler way?
A: It looks like you are one of the few people who want to do this. That tutorial you linked to appears to be the latest (!) but the package is still somewhat in development. The documentation in that package is also non-existent. It looks like it's up to you to write the docs. Or maybe contact Greg Beaver; the author of the package and blog post you linked to. He also wrote a book about PEAR (albeit in 2006.) The amazon writeup mentions this:
Next, you will learn how to set up your own PEAR Channel for distributing PHP applications, both open-source and proprietary closed-source PHP applications that can be secured using technology already built into the PEAR Installer.
A: What problems are you encountering on following the tutorial that you linked to?
You could set up your own channel with pirum or the chiara server ( http://pear.chiaraquartet.net/ ) but you could also look into getting an account on http://pearfarm.org and hosting your packages there (or on http://pearhub.org).
A: The PEAR website lists a number of channel server softwares now (bottom of the page).
They are:
*
*Chiara_PEAR_Server
*SimpleChannelServer
*Pirum
*Pearfarm
I would not use the chiara pear server anymore; it's dead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Other than for LINQ queries, how do you use anonymous types in C#? I've been trying to get up to speed on some of the newer features in C# and one of them that I haven't had occasion to use is anonymous types.
I understand the usage as it pertains to LINQ queries and I looked at this SO post which asked a similar question. Most of the examples I've seen on the net are related to LINQ queries, which is cool. I saw some somewhat contrived examples too but not really anything where I saw a lot of value.
Do you have a novel use for anonymous types where you think it really provides you some utility?
A: With a bit of reflection, you can turn an anonymous type into a Dictionary<string, object>; Roy Osherove blogs his technique for this here: http://weblogs.asp.net/rosherove/archive/2008/03/11/turn-anonymous-types-into-idictionary-of-values.aspx
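The technique boils down to reading the anonymous type's read-only properties via reflection; a minimal sketch (not Osherove's exact code, and it assumes using System.Reflection and System.Collections.Generic) might be:
static IDictionary<string, object> ToDictionary(object anonymous)
{
    var result = new Dictionary<string, object>();
    // Each member of an anonymous type is exposed as a public property.
    foreach (PropertyInfo p in anonymous.GetType().GetProperties())
        result[p.Name] = p.GetValue(anonymous, null);
    return result;
}

// Usage:
var values = ToDictionary(new { ID = 1, Name = "Bob" });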
Jacob Carpenter uses anonymous types as a way to initialize immutable objects with syntax similar to object initialization: http://jacobcarpenter.wordpress.com/2007/11/19/named-parameters-part-2/
Anonymous types can be used as a way to give easier-to-read aliases to the properties of objects in a collection being iterated over with a foreach statement. (Though, to be honest, this is really nothing more than the standard use of anonymous types with LINQ to Objects.) For example:
Dictionary<int, string> employees = new Dictionary<int, string>
{
{ 1, "Bob" },
{ 2, "Alice" },
{ 3, "Fred" },
};
// standard iteration
foreach (var pair in employees)
Console.WriteLine("ID: {0}, Name: {1}", pair.Key, pair.Value);
// alias Key/Value as ID/Name
foreach (var emp in employees.Select(p => new { ID = p.Key, Name = p.Value }))
Console.WriteLine("ID: {0}, Name: {1}", emp.ID, emp.Name);
While there's not much improvement in this short sample, if the foreach loop were longer, referring to ID and Name might improve readability.
A: ASP.NET MVC routing uses these objects all over the place.
A: Occasionally I suspect it may be useful to perform something which is like a LINQ query, but doesn't happen to use LINQ - but you still want a projection of some kind. I don't think I'd use anonymous types in their current form for anything radically different to LINQ projections.
One thing I would like to see is the ability to create "named" types with simple declarations, which would generate the properties and constructor in the same way as for anonymous types, as well as overriding Equals/GetHashCode/ToString in the same (useful) way. Those types could then be converted into "normal" types when the need to add more behaviour arose.
Again, I don't think I'd use it terribly often - but every so often the ability would be handy, particularly within a few methods of a class. This could perhaps be part of a larger effort to give more support to immutable types in C# 5.
A: To add to what Justice said, ASP.Net MVC is the first place I've seen these used in interesting and useful ways. Here's one example:
Html.ActionLink("A Link", "Resolve", new { onclick = "someJavascriptFn();" })
ASP.Net MVC uses anonymous types like this to add arbitrary attributes to HTML elements. I suppose there's a number of different ways you could accomplish the same thing, but I like the terse style of anonymous types, it gives things more of a dynamic language feel.
A: The biggest use for anonymous types is LINQ; in fact, that's why they were created.
I guess one reason for an anonymous type outside of LINQ is to create a temporary struct-like object, e.g.:
var x = new { a = 1, b = 2 };
That may make your life a little easier in some situations.
A: I've used them for doing templated emails as they are great if you're using reflection and generics.
Some info can be found here: http://www.aaron-powell.com/blog.aspx?id=1247
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Initializing cherrypy.session early I love CherryPy's API for sessions, except for one detail. Instead of saying cherrypy.session["spam"] I'd like to be able to just say session["spam"].
Unfortunately, I can't simply have a global from cherrypy import session in one of my modules, because the cherrypy.session object isn't created until the first time a page request is made. Is there some way to get CherryPy to initialize its session object immediately instead of on the first page request?
I have two ugly alternatives if the answer is no:
First, I can do something like this
from threading import Thread
from time import sleep

def import_session():
    global session
    while not hasattr(cherrypy, "session"):
        sleep(0.1)
    session = cherrypy.session

Thread(target=import_session).start()
This feels like a big kludge, but I really hate writing cherrypy.session["spam"] every time, so to me it's worth it.
My second solution is to do something like
class SessionKludge:
    def __getitem__(self, name):
        return cherrypy.session[name]
    def __setitem__(self, name, val):
        cherrypy.session[name] = val

session = SessionKludge()
but this feels like an even bigger kludge and I'd need to do more work to implement the other dictionary functions such as .get
So I'd definitely prefer a simple way to initialize the object myself. Does anyone know how to do this?
A: For CherryPy 3.1, you would need to find the right subclass of Session, run its 'setup' classmethod, and then set cherrypy.session to a ThreadLocalProxy. That all happens in cherrypy.lib.sessions.init, in the following chunks:
# Find the storage class and call setup (first time only).
storage_class = storage_type.title() + 'Session'
storage_class = globals()[storage_class]
if not hasattr(cherrypy, "session"):
    if hasattr(storage_class, "setup"):
        storage_class.setup(**kwargs)

# Create cherrypy.session which will proxy to cherrypy.serving.session
if not hasattr(cherrypy, "session"):
    cherrypy.session = cherrypy._ThreadLocalProxy('session')
Reducing (replace FileSession with the subclass you want):
FileSession.setup(**kwargs)
cherrypy.session = cherrypy._ThreadLocalProxy('session')
The "kwargs" consist of "timeout", "clean_freq", and any subclass-specific entries from tools.sessions.* config.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: public variables vs private variables with accessors Has anyone else seen people do this:
private string _name;
public string Name{ get{ return _name; } set{ _name = value;}}
I understand using accessors if you are going to exercise some sort of control over how it gets set or perform some sort of function on it when there is a get. But if you are just going to do this, why not just make the variable public to begin with? Am I missing something?
A: The idea is that if you use accessors, the underlying implementation can be changed without changing the API. For example, if you decide that when you set the name, you also need to update a text box, or another variable, none of your client code would have to change.
A: It might be worth noting that DataBinding in .NET also refuses to work off public fields and demands properties. So that might be another reason.
A: If you make the member a public field, then you can't later refactor it into a property without changing the interface to your class. If you expose it as a property from the very beginning, you can make whatever changes to the property accessor functions that you need and the class's interface remains unchanged.
Note that as of C# 3.0, you can implement a property without creating a backing field, e.g.:
public string Name { get; set; }
This removes what is pretty much the only justification for not implementing public fields as properties in the first place.
A: Good programming practice. This is a very common pattern that fits with OO design methodologies. By exposing a public field you expose the internals of how that data is being stored. Using a public property instead allows you more flexibility to change the way the data is stored internally and not break the public interface. It also allows you more control over what happens when the data is accessed (lazy initialization, null checks, etc.)
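For example, a property that starts life as a trivial wrapper can later grow a null check or other validation without touching a single caller; a small illustrative sketch:
private string _name;
public string Name
{
    get { return _name; }
    set
    {
        // Added later, with no change to the public interface:
        if (value == null)
            throw new ArgumentNullException("value");
        _name = value;
    }
}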
A: Variables are part of the implementation of a class. Properties more logically represent the interface to it. With C# 3.0, automatically implemented properties make this a breeze to do from the start.
I've written more thoughts on this, including the various ways in which changing from a variable to a property breaks not just binary compatibility but also source compatibility, in an article on the topic.
A: If you define a public interface with a property in assembly A, you could then use this interface in assembly B.
Now, you can change the property's implementation (maybe fetching the value from a database instead of storing it in a field). Then you can recompile assembly A and replace the older one. Assembly B would carry on fine because the interface wouldn't have changed.
However, if you'd started off with a public field, then decided this wasn't suitable and wanted to change the implementation in a way that required converting it to a property, this would mean changing assembly A's public interface. Any clients of that interface (including assembly B) would also have to be recompiled and replaced to work with the new interface.
So, you're better off starting with a property initially. This encapsulates the implementation of the property, leaving you free to change it in the future without having to worry about what clients (including assembly B) are already out in the world using assembly A. If there are any clients already out in the world making use of assembly A, changing the interface would break all of them. If they're used by another team in your company, or by another company, they are not going to be happy if you break their assemblies by changing the interface of yours!
A: Preparation. You never know when you'll want to removed the set accessor down the road, perform additional operations in the setter, or change the data source for the get.
A: Publicly accessible members should typically be methods and not fields. It's just good practice, and that practice helps you ensure that the encapsulated state of your objects is always under your control.
A: For encapsulation, it is not recommended to use public fields.
http://my.safaribooksonline.com/9780321578815/ch05lev1sec5?displaygrbooks=0
As Chris Anderson says later in this book, it would be ideal if the caller were blind to the difference between a field and a property.
A: To retain a high degree of extensibility without the pain of re-compiling all your assemblies, you want to use public properties as accessors. By following a "contract", a defined mechanism that describes how your objects will exchange data, a set of rules is put in place. This contract is enforced with an interface and fulfilled by the getters and setters of your class that inherits this interface.
Later on, should you create additional classes from that interface, you have the flexibility of adhering to the contract with the use of the properties; but since you are providing the data via the getters and setters, the implementation or process of assembling the data can be anything you want, as long as it returns the type that the "contract" expects.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Do .net applications run on Linux? Do .net applications run on linux?
Are there any free/paid interop libraries available ?
A: The Mono project provides a standards-compliant implementation of the CLR virtual machine component of .NET. They've also reverse-engineered a significant portion of the framework. You'll have significant issues trying to develop WinForms apps. Mono provides a list of several graphical toolkits you can use: http://www.mono-project.com/Gui_Toolkits (it looks like they actually support WinForms now, though I'm not sure of the extent of that support).
Note that the Mono port of Silverlight, Moonlight, is officially endorsed by Microsoft. So if you can get away with using that, it might be your best shot for cross-platform compatibility.
A: Mono is a .NET-compatible platform, including compiler and runtime. The Mono Migration Analyzer helps figure out compatibility issues.
A: Note that you have the dotGNU project. It is an implementation of the .NET for linux.
They are not as feature complete as Mono. But it is worth mentioning.
A: Yes, with some caveats. It's called Mono.
A: 10 years after the question was first posted here! Now you can run .Net on Linux and iOS. The new generation of .Net is called dotnet core and is the future of the framework.
https://learn.microsoft.com/en-us/dotnet/core/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Regular expression for parsing name value pairs Can someone provide a regular expression for parsing name/value pairs from a string? The pairs are separated by commas, and the value can optionally be enclosed in quotes. For example:
AssemblyName=foo.dll,ClassName="SomeClass",Parameters="Some,Parameters"
A: *
*No escape:
/([^=,]*)=("[^"]*"|[^,"]*)/
*Double quote escape for both key and value:
/((?:"[^"]*"|[^=,])*)=((?:"[^"]*"|[^=,])*)/
key=value,"key with "" in it"="value with "" in it",key=value" "with" "spaces
*Backslash string escape:
/([^=,]*)=("(?:\\.|[^"\\]+)*"|[^,"]*)/
key=value,key="value",key="val\"ue"
*Full backslash escape:
/((?:\\.|[^=,]+)*)=("(?:\\.|[^"\\]+)*"|(?:\\.|[^,"\\]+)*)/
key=value,key="value",key="val\"ue",ke\,y=val\,ue
Edit: Added escaping alternatives.
Edit2: Added another escaping alternative.
You would have to clean up the keys/values by removing any escape-characters and surrounding quotes.
A: Nice answer from MizardX. Minor niggles - it doesn't allow for spaces around names etc (which may not matter), and it collects the quotes as well as the quoted value (which also may not matter), and it doesn't have an escape mechanism for embedding double quote characters in the quoted value (which, once more, may not matter).
As written, the pattern works with most of the extended regular expression systems. Fixing the niggles would probably require descent into, say, Perl. This version uses doubled quotes to escape -- hence a="a""b" generates a field value 'a""b' (which ain't perfect, but could be fixed afterwards easily enough):
/\s*([^=,\s]+)\s*=\s*(?:"((?:[^"]|"")*)"|([^,"]*))\s*,?/
Further, you'd have to use $2 or $3 to collect the value, whereas with MizardX's answer, you simply use $2. So, it isn't as easy or nice, but it covers a few edge cases. If the simpler answer is adequate, use it.
Test script:
#!/bin/perl -w
use strict;
my $qr = qr/\s*([^=,\s]+)\s*=\s*(?:"((?:[^"]|"")*)"|([^,"]*))\s*,?/;
while (<>)
{
while (m/$qr/)
{
print "1= $1, 2 = $2, 3 = $3\n";
$_ =~ s/$qr//;
}
}
This witters about either $2 or $3 being undefined - accurately.
A: This is how I would do it if you can use Perl 5.10.
qr/
(?<key>
(?:
[^=,\\]
|
(?&escape)
)++ # Prevent null keys
)
\s*+
=
\s*+
(?<value>
(?"ed)
|
(?:
[^=,\s\\]
|
(?&escape)
)++ # Prevent null value ( use quotes for that )
)
(?(DEFINE)
(?<escape>\\.)
(?<quoted>
"
(?:
(?&escape)
|
[^"\\]
)*+
"
)
)
/x
The elements would be accessed through %+.
perlretut was very helpful in creating this answer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Change name of file sent to client? I have a webpage that pulls information from a database, converts it to .csv format, and writes the file to the HTTPResponse.
string csv = GetCSV();
Response.Clear();
Response.ContentType = "text/csv";
Response.Write(csv);
This works fine, and the file is sent to the client with no problems. However, when the file is sent to the client, the name of the current page is used, instead of a more friendly name (like "data.csv").
My question is, how can I change the name of the file that is written to the output stream without writing the file to disk and redirecting the client to the file's url?
EDIT: Thanks for the responses guys. I got 4 of the same response, so I just chose the first one as the answer.
A: You just need to set the Content-Disposition header
Content-Disposition: attachment; filename=data.csv
This Microsoft Support article has some good information
How To Raise a "File Download" Dialog Box for a Known MIME Type
A: Add a "Content-Disposition" header with the value "attachment; filename=filename.csv".
A: Response.AddHeader("content-disposition", "attachment; filename=File.doc")
A: I believe this will work for you.
Response.AddHeader("content-disposition", "attachment; filename=NewFileName.csv");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Double-click double-insert resolutions? A team member has run into an issue with an old in-house system where a user double-clicking on a link on a web page can cause two requests to be sent from the browser resulting in two database inserts of the same record in a race condition; the last one to run fails with a primary key violation. Several solutions and hacks have been proposed and discussed:
*
*Use Javascript on the web page to mitigate the second click by disabling the link on the first click. This is a quick and easy way to reduce the occurrences of the problem, but not entirely eliminate it.
*Wrap the request execution on the sever side in a transaction. This has been deemed too expensive of an operation due to server load and lock levels on the table in question.
*Catch the primary key exception thrown by the failed insert, identify it as such, and eat it. This has the disadvantages of (a) vendor lock-in, having to know the nuances of the database-specific exceptions, and (b) potentially not logging/dealing with legitimate database failures.
*An extension of #3 by attempting to update the record if the insert fails and checking the result of the update to ensure it returns 1 record affected.
Are the other options that haven't been considered? Are there pros and cons of the options presented that were overlooked? Which is the lesser of all evils?
A: Put a unique identifier on the page in a hidden field. Only accept one response with a given unique identifier.
A: It sounds like you might be misusing a GET request to modify server state (although this is not necessarily the case). While it may not be appropriate for your situation, it should be stated that you should consider converting the link into a form POST.
A: You need to implement the Synchronizer Token pattern.
How it works is: a value (the token) is generated on the server for each request. This same token must then be included in your form submission. On receipt of the request the server token and client token are compared and if they are the same you may continue to add your record. The server side token is then regenerated, so subsequent requests containing the old token will fail.
There's a more thorough explanation about half-way down this page.
I'm not sure what technology you're using, but Struts provides framework level support for this pattern. See example here
A: It seems you already replied to your own question there; #1 seems to be the only viable option.
Otherwise, you should really do all three steps -- data integrity should be handled at the database level, but extra checks (such as the explicit transaction) in the code to avoid roundtrips to the database could be good for performance.
A: Re: "You need to implement the Synchronizer Token pattern" - note that this question is about Javascript/HTML, not Java.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I make Ruby Gems installations on Windows use MinGW for making and compiling? Trying to update some gems on a Windows machine and I continually get this error output for gems that do not have pre-compiled binaries:
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--srcdir=.
--curdir
--ruby=c:/server/ruby/bin/ruby
These are configuration options that are provided to the extconf.rb ruby file during the installation of the gem.
I have installed MinGW so I should have everything I need to install, make and compile these gems.
However, I do not know how to change the configuration for RubyGems so that when extconf.rb is called it includes the appropriate options pointing to the MinGW include directory.
A: There's a DevKit that could well be what you're after.
A: I don't know if this works with the native Windows Ruby, but if you use the Cygwin version and have a full Cygwin installed (compilers etc) then you shouldn't have any problems - we've been able to use a lot of gems that require compiled stuff.
A: Yardboy,
Too bad you didn't mention which gem you are trying to update; you only included the configuration options output.
Also, some of these gems need development headers and libraries, not just the compiler (MinGW).
Plus, MinGW is only going to work as long as the Ruby build you have was created with MinGW.
There is some work being done to ease this, but compiler, headers and library requirements are needed on all the platforms, not just Windows.
You can find more info and resources on my blog
Cheers.
A: I had just the same problem.
The only way I found to get gems that did not have pre-compiled binaries - such as parsetree - to run on Windows was to recompile the Ruby source using MinGW, as well as copy several libraries and applications from the Visual C++ install I already had. What I copied included the zlib library as well as the iconv library and application.
Note: I am using this setup as a test configuration. I would not use such a setup for production (since who knows what happens when you copy a library from one distribution to another).
A: In general, in my experience, code designed for a Unix system can be very hard to make work on MinGW. For a quick port, use Cygwin. Or do a full port of the software to Windows, including using the native Windows shell and OS API -- which is pretty darn expensive in terms of time, but it pays off if you plan to support Windows long term.
Not familiar with this particular software package, this is just a general observation on trying to port some other dastardly pieces of code to Windows.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Pass a PHP string to a JavaScript variable (and escape newlines) What is the easiest way to encode a PHP string for output to a JavaScript variable?
I have a PHP string which includes quotes and newlines. I need the contents of this string to be put into a JavaScript variable.
Normally, I would just construct my JavaScript in a PHP file, à la:
<script>
var myvar = "<?php echo $myVarValue;?>";
</script>
However, this doesn't work when $myVarValue contains quotes or newlines.
A: <script>
var myVar = <?php echo json_encode($myVarValue); ?>;
</script>
or
<script>
var myVar = <?= json_encode($myVarValue) ?>;
</script>
A: Expanding on someone else's answer:
<script>
var myvar = <?php echo json_encode($myVarValue); ?>;
</script>
Using json_encode() requires:
*
*PHP 5.2.0 or greater
*$myVarValue encoded as UTF-8 (or US-ASCII, of course)
Since UTF-8 supports full Unicode, it should be safe to convert on the fly.
Note that because json_encode escapes forward slashes, even a string that contains </script> will be escaped safely for printing with a script block.
A: Micah's solution below worked for me as the site I had to customise was not in UTF-8, so I could not use json; I'd vote it up but my rep isn't high enough.
function escapeJavaScriptText($string)
{
    return str_replace("\n", '\n', str_replace('"', '\"', addcslashes(str_replace("\r", '', (string)$string), "\0..\37'\\")));
}
A: Don't run it though addslashes(); if you're in the context of the HTML page, the HTML parser can still see the </script> tag, even mid-string, and assume it's the end of the JavaScript:
<?php
$value = 'XXX</script><script>alert(document.cookie);</script>';
?>
<script type="text/javascript">
var foo = <?= json_encode($value) ?>; // Use this
var foo = '<?= addslashes($value) ?>'; // Avoid, allows XSS!
</script>
A: You can insert it into a hidden DIV, then assign the innerHTML of the DIV to your JavaScript variable. You don't have to worry about escaping anything. Just be sure not to put broken HTML in there.
A: You could try
<script type="text/javascript">
myvar = unescape('<?=rawurlencode($myvar)?>');
</script>
A: *
*Don’t. Use Ajax, put it in data-* attributes in your HTML, or something else meaningful. Using inline scripts makes your pages bigger, and could be insecure or still allow users to ruin layout, unless…
*… you make a safer function:
function inline_json_encode($obj) {
return str_replace('<!--', '<\!--', json_encode($obj));
}
A: encode it with JSON
A: function escapeJavaScriptText($string)
{
    return str_replace("\n", '\n', str_replace('"', '\"', addcslashes(str_replace("\r", '', (string)$string), "\0..\37'\\")));
}
A: I have had a similar issue and understand that the following is the best solution:
<script>
var myvar = decodeURIComponent("<?php echo rawurlencode($myVarValue); ?>");
</script>
However, the link that micahwittman posted suggests that there are some minor encoding differences. PHP's rawurlencode() function is supposed to comply with RFC 1738, while there appears to have been no such effort with JavaScript's decodeURIComponent().
A: The paranoid version: Escaping every single character.
function javascript_escape($str) {
    $new_str = '';
    $str_len = strlen($str);
    for ($i = 0; $i < $str_len; $i++) {
        $new_str .= '\\x' . sprintf('%02x', ord(substr($str, $i, 1)));
    }
    return $new_str;
}
EDIT: The reason why json_encode() may not be appropriate is that sometimes, you need to prevent " to be generated, e.g.
<div onclick="alert(???)" />
A: htmlspecialchars
Description
string htmlspecialchars ( string $string [, int $quote_style [, string $charset [, bool $double_encode ]]] )
Certain characters have special significance in HTML, and should be represented by HTML entities if they are to preserve their meanings. This function returns a string with some of these conversions made; the translations made are those most useful for everyday web programming. If you require all HTML character entities to be translated, use htmlentities() instead.
This function is useful in preventing user-supplied text from containing HTML markup, such as in a message board or guest book application.
The translations performed are:
* '&' (ampersand) becomes '&amp;'
* '"' (double quote) becomes '&quot;' when ENT_NOQUOTES is not set.
* "'" (single quote) becomes '&#039;' only when ENT_QUOTES is set.
* '<' (less than) becomes '&lt;'
* '>' (greater than) becomes '&gt;'
http://ca.php.net/htmlspecialchars
A: If you use a templating engine to construct your HTML then you can fill it with what ever you want!
Check out XTemplates.
It's a nice, open source, lightweight, template engine.
Your HTML/JS there would look like this:
<script>
var myvar = {$MyVarValue};
</script>
A: I'm not sure if this is bad practice or no, but my team and I have been using a mixed html, JS, and php solution. We start with the PHP string we want to pull into a JS variable, lets call it:
$someString
Next we use in-page hidden form elements, and have their value set as the string:
<form id="pagePhpVars" method="post">
<input type="hidden" name="phpString1" id="phpString1" value="'.$someString.'" />
</form>
Then its a simple matter of defining a JS var through document.getElementById:
<script type="text/javascript" charset="UTF-8">
var moonUnitAlpha = document.getElementById('phpString1').value;
</script>
Now you can use the JS variable "moonUnitAlpha" anywhere you want to grab that PHP string value.
This seems to work really well for us. We'll see if it holds up to heavy use.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "398"
} |
Q: Is it possible to have separate SQLite databases within the same Django project? I was considering creating a separate SQLite database for certain apps on a Django project.
However, I did not want to use direct SQLite access if possible.
Django-style ORM access to these database would be ideal.
Is this possible?
Thank you.
A: Yes - the low-level API for this is in place, it's just missing a convenient high-level API at the moment. These quotes are from James Bennett (Django's release manager) on programming reddit:
It's been there -- in an extremely low-level API for those who look at the codebase -- for months now (every QuerySet is backed by a Query, which in turn accepts a DB connection as an argument). There isn't any high-level documented API for it, but I know people who are already doing and have been doing stuff like multiple-DB/sharding scenarios.
...it's not necessarily something that needs a big write-up; the __init__() method of QuerySet accepts a keyword argument query, which should be an instance of django.db.models.sql.Query. The __init__() method of Query, in turn, accepts a keyword argument connection, which should be an instance of (a backend-specific subclass for your DB of) django.db.backends.BaseDatabaseWrapper.
From there, it's pretty easy; you could, for example, override get_query_set() on a manager to always return a QuerySet using the connection you want, or set up things like sharding logic to figure out which DB to use based on incoming query parameters, etc., etc.
A: Already supported http://docs.djangoproject.com/en/dev/topics/db/multi-db/
A: Currently no -- each project uses one database, and every app must exist within it. If you want to have an app-specific database, you cannot do so through the Django ORM. See the Django wiki page on Multiple Database Support.
A: This isn't possible yet, but there is some talk of it on the wiki, Multiple Database Support in Django. It was also brought up during the keynote on the future of Django at DjangoCon 2008 and made one of the higher priority issues.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Unresolved External Symbol Errors switching from build library to exe or dll I am building an application as a library, but to make sure I can get the output that I'd like, I switched it over to produce an exe. As soon as I did, I got several errors about unresolved external symbols.
At first I thought that I didn't have a path set to the 3rd party library that I was referencing, so I added the folder to my path variable and even added it to my include, references, and source files, just to make sure I had all the paths.
I still get the error:
error LNK2019: unresolved external
symbol "__declspec(dllimport) public:
static void
__cdecl xercesc_2_8::XMLPlatformUtils::Initialize(char
const * const,char const *
const,class xercesc_2_8::PanicHandler
* const,class xercesc_2_8::MemoryManager *
const,bool)"
(__imp_?Initialize@XMLPlatformUtils@xercesc_2_8@@SAXQBD0QAVPanicHandler@2@QAVMemoryManager@2@_N@Z)
referenced in function "void __cdecl
xsd::cxx::xml::initialize(void)"
(?initialize@xml@cxx@xsd@@YAXXZ)
The reason that I'm asking it here is because in Visual Studio, when I built it as a library, I didn't get these errors, but as a dll and exe, I do.
Anybody have any thoughts?
A: You also need to specify that you wish to link against that library in particular. The link paths merely tell the linker where the data you need to find is, not what to look for. You will also need to specify that you are linking against the library in question (xerces?).
Unfortunately, I don't know how to specify this in MSVC, but it's probably somewhere under 'Linker Options'.
A: Building a library, the linker doesn't need to resolve imported symbols. That happens only when it starts linking object files and libraries together.
That's why you only started seeing the error when building an executable.
Indeed, in VC2008 (and 2005, if I remember well), use the project properties -> Linker -> Input -> Additional dependencies. The libraries you need are to be separated by spaces (odd, hey?)
Good Luck!
A: As @coppro said, you need to specify that you want to link with that library. When you build an EXE or DLL, a linker is run, and it needs to find all the functions you are using, but to build a library, the librarian is run, and it doesn't have to resolve all function references (but when you use that lib in an EXE, you'll have to, again).
So go to the project's options, Linker Options, Input, and list the library that defines the missing function (xerces.lib?) under Additional Dependencies. You might need to add its location under Additional Library Paths.
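For what it's worth, MSVC also lets you request a library straight from source code; a minimal sketch, where the exact .lib file name is an assumption you should verify against your Xerces-C++ 2.8 distribution:
// Equivalent to listing the library under Additional Dependencies.
// The .lib name below is a guess - check the lib directory of your
// Xerces-C++ 2.8 package for the real import library name.
#pragma comment(lib, "xerces-c_2_8.lib")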
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I access an IFRAME from the codebehind file in ASP.NET? I am trying to set attributes for an IFRAME html control from the code-behind aspx.cs file.
I came across a post that says you can use FindControl to find non-ASP controls.
The aspx file contains:
<iframe id="contentPanel1" runat="server" />
and then the code-behind file contains:
protected void Page_Load(object sender, EventArgs e)
{
    HtmlControl contentPanel1 = (HtmlControl)this.FindControl("contentPanel1");
    if (contentPanel1 != null)
        contentPanel1.Attributes["src"] = "http://www.stackoverflow.com";
}
Except that it's not finding the control, contentPanel1 is null.
Update 1
Looking at the rendered html:
<iframe id="ctl00_ContentPlaceHolder1_contentPanel1"></iframe>
i tried changing the code-behind to:
HtmlControl contentPanel1 = (HtmlControl)this.FindControl("ctl00_ContentPlaceHolder1_contentPanel1");
if (contentPanel1 != null)
    contentPanel1.Attributes["src"] = "http://www.clis.com";
But it didn't help.
i am using a MasterPage.
Update 2
Changing the aspx file to:
<iframe id="contentPanel1" name="contentPanel1" runat="server" />
also didn't help
Answer
In hindsight the answer is obvious, and the original question was hardly worth asking. If you have the aspx code:
<iframe id="contentPanel1" runat="server" />
and want to access the iframe from the code-behind file, you just access it:
this.contentPanel1.Attributes["src"] = "http://www.stackoverflow.com";
A: This works for me.
ASPX :
<iframe id="ContentIframe" runat="server"></iframe>
I can access the iframe directly via id.
Code Behind :
ContentIframe.Attributes["src"] = "stackoverflow.com";
A: If the iframe is directly on the page where the code is running, you should be able to reference it directly:
contentPanel1.Attributes["src"] = value;
If not (it's in a child control, or the MasterPage), you'll need a good idea of the hierarchy of the page... Or use the brute-force method of writing a recursive version of FindControl().
A: Try using
this.Master.FindControl("ContentId").FindControl("controlId")
instead.
A: Where is your iframe embedded?
Having this code
<body>
    <iframe id="iFrame1" runat="server"></iframe>
    <form id="form1" runat="server">
        <div>
            <iframe id="iFrame2" runat="server"></iframe>
        </div>
    </form>
</body>
I can set the src attribute (iFrame1.Attributes["src"]) only on iFrame1, not on iFrame2.
Alternatively, you can access to any element in your form with:
FindControl("iFrame2") as System.Web.UI.HtmlControls.HtmlGenericControl
A: Try instantiating contentPanel1 outside the Load event; keep it global to the class.
A: The FindControl method looks in the child controls of the "control" the method is executed on. Try looking through the control collection recursively.
protected virtual Control FindControlRecursive(Control root, String id)
{
    if (root.ID == id) { return root; }
    foreach (Control c in root.Controls)
    {
        Control t = FindControlRecursive(c, id);
        if (t != null)
        {
            return t;
        }
    }
    return null;
}
A: Try this.
ContentPlaceHolder cplHolder = (ContentPlaceHolder)this.CurrentMaster.FindControl("contentMain");
HtmlControl cpanel= (HtmlControl)cplHolder.FindControl("contentPanel1");
A: <iframe id="yourIframe" clientIDMode="static" runat="server"></iframe>
You should them be able to find your iframe using the findcontrol method.
setting clientIDMode to Static prevents you object from being renamed while rendering.
A: None of your suggestions worked for me, here is my solution:
Add src="<%=_frame1%>" to the iframe html control with id="frame1", and in the code-behind:
public string _frame1 = "http://www.google.com";
A: aspx page
<iframe id="fblikes" runat="server"></iframe>
Code behind
this.fblikes.Attributes["src"] = "/productdetails/fblike.ashx";
Very simple....
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Is it possible to connect PHP to SQL Server Compact Edition? Is it possible to connect PHP to a SQL Server Compact Edition database? What would be the best driver?
I need it for a desktop application where SQL Server Express is too heavy.
A: Short Answer : No.
Long Answer : To my knowledge, unlike PostgreSQL / MySQL / MS-SQL, there is no native driver to connect PHP to SQL Server Compact Edition.
If you want to connect to it, your best bet is to use PHP ODBC connections to talk to a ODBC Driver connected to the SQL Compact server. But its pretty much a hack, and you'd be crazy to use this kind of thing for anything remotely important.
If you are worried about SQL Server Express being too heavy, use MySQL with MyISAM tables. It's pretty fast and lightweight. Emergent has a good checklist of things to configure / disable to make MySQL even faster and use less resources.
Relevant links :
MSDN Post asking the same question
Erik EJ's blog - SQL Compact with OLE DB
A: You could also consider SQLite:
http://www.devshed.com/c/a/PHP/Introduction-to-Using-SQLite-with-PHP-5/
A: I've used the php-odbtp to interface PHP (with ADOdb) to a MS SQL server and it runs well, even across remote networks.
It provides a tunneling protocol from a non-odbc platform (Linux) to a service installed on the Win32 machine to buffer requests to and from an ODBC connection. Bit of a pain to setup the first time, at least 2-3 years ago when I first used it. Should also work fine for Win32<->Win32 applications.
Not familiar with SQL C.E., but I'd imagine it supports an ODBC connection of some sort, and the standard T-SQL commands.
A: I wrote a php class that handles SQL compact edition files using the COM object of PHP.
This means it will only work on Windows based machines where the SQL Compact Edition runtime is installed.
You can download here (article is in German, link is at the bottom) with an example database file and script
http://www.klemmkeil.de/sql-compact-edition-sdf-mit-php-auslesen/
A: The question is Why? Why not just use an Express Version?
I must say that I'm curious, but I can't say that I've used a C.E. database for anything outside of a .NET application that had the assemblies in the application folder.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I use an arbitrary string as a lock in C++? Let's say I have a multithreaded C++ program that handles requests in the form of a function call to handleRequest(string key). Each call to handleRequest occurs in a separate thread, and there are an arbitrarily large number of possible values for key.
I want the following behavior:
*
*Simultaneous calls to handleRequest(key) are serialized when they have the same value for key.
*Global serialization is minimized.
The body of handleRequest might look like this:
void handleRequest(string key) {
    KeyLock lock(key);
    // Handle the request.
}
Question: How would I implement KeyLock to get the required behavior?
A naive implementation might start off like this:
KeyLock::KeyLock(string key) {
    global_lock->Lock();
    internal_lock_ = global_key_map[key];
    if (internal_lock_ == NULL) {
        internal_lock_ = new Lock();
        global_key_map[key] = internal_lock_;
    }
    global_lock->Unlock();
    internal_lock_->Lock();
}

KeyLock::~KeyLock() {
    internal_lock_->Unlock();
    // Remove internal_lock_ from global_key_map iff no other threads are waiting for it.
}
...but that requires a global lock at the beginning and end of each request, and the creation of a separate Lock object for each request. If contention is high between calls to handleRequest, that might not be a problem, but it could impose a lot of overhead if contention is low.
A: It will depend on the platform, but the two techniques that I'd try would be:
*
*Use named mutex/synchronization objects, where object name = Key
*Use filesystem-based locking, where you try to create a non-shareable temporary file with the key name. If it exists already (= already locked) this will fail and you'll have to poll to retry
Both techniques will depend on the detail of your OS. Experiment and see which works.
A: Perhaps an std::map<std::string, MutexType> would be what you want, where MutexType is the type of the mutex you want. You will probably have to wrap accesses to the map in another mutex in order to ensure that no other thread is inserting at the same time (and remember to perform the check again after the mutex is locked to ensure that another thread didn't add the key while waiting on the mutex!).
The same principle could apply to any other synchronization method, such as a critical section.
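A rough sketch of that idea, assuming Boost.Thread for the mutex type (the names are placeholders, and the per-key mutexes are deliberately never freed to keep the sketch short):
#include <map>
#include <string>
#include <boost/thread/mutex.hpp>

std::map<std::string, boost::mutex*> key_mutexes;  // guarded by map_mutex
boost::mutex map_mutex;

boost::mutex& mutexForKey(const std::string& key) {
    boost::mutex::scoped_lock guard(map_mutex);
    boost::mutex*& m = key_mutexes[key];  // inserts NULL on first lookup
    if (m == NULL)
        m = new boost::mutex();           // leaked on purpose in this sketch
    return *m;
}

void handleRequest(const std::string& key) {
    boost::mutex::scoped_lock lock(mutexForKey(key));
    // Handle the request.
}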
A: Raise granularity and lock entire key-ranges
This is a variation on Mike B's answer, where instead of having several fluid lock maps you have a single fixed array of locks that apply to key-ranges instead of single keys.
Simplified example: create array of 256 locks at startup, then use first byte of key to determine index of lock to be acquired (i.e. all keys starting with 'k' will be guarded by locks[107]).
To sustain optimal throughput you should analyze distribution of keys and contention rate. The benefits of this approach are zero dynamic allocations and simple cleanup; you also avoid two-step locking. The downside is potential contention peaks if key distribution becomes skewed over time.
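A minimal sketch of the fixed-array variant, again assuming Boost.Thread; a stronger hash than the first byte would spread skewed key sets better:
#include <string>
#include <boost/thread/mutex.hpp>

static boost::mutex locks[256];  // created once at startup, never destroyed

// Map a key to one of the 256 stripes via its first byte.
boost::mutex& lockForKey(const std::string& key) {
    unsigned char index = key.empty() ? 0 : (unsigned char)key[0];
    return locks[index];
}

void handleRequest(const std::string& key) {
    boost::mutex::scoped_lock guard(lockForKey(key));
    // Handle the request; other keys in the same stripe also serialize here.
}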
A: You could do something similar to what you have in your question, but instead of a single global_key_map have several (probably in an array or vector) - which one is used is determined by some simple hash function on the string.
That way instead of a single global lock, you spread that out over several independent ones.
This is a pattern that is often used in memory allocators (I don't know if the pattern has a name - it should). When a request comes in, something determines which pool the allocation will come from (usually the size of the request, but other parameters can factor in as well), then only that pool needs to be locked. If an allocation request comes in from another thread that will use a different pool, there's no lock contention.
A: After thinking about it, another approach might go something like this:
*
*In handleRequest, create a Callback that does the actual work.
*Create a multimap<string, Callback*> global_key_map, protected by a mutex.
*If a thread sees that key is already being processed, it adds its Callback* to the global_key_map and returns.
*Otherwise, it calls its callback immediately, and then calls the callbacks that have shown up in the meantime for the same key.
Implemented something like this:
void LockAndCall(string key, Callback* callback) {
    global_lock.Lock();
    if (global_key_map.count(key) > 0) {
        // Key already being processed: queue the callback and return.
        global_key_map.insert(make_pair(key, callback));
        global_lock.Unlock();
    } else {
        multimap<string, Callback*>::iterator iter =
            global_key_map.insert(make_pair(key, callback));
        while (true) {
            global_lock.Unlock();
            iter->second->Call();
            global_lock.Lock();
            global_key_map.erase(iter);
            iter = global_key_map.find(key);
            if (iter == global_key_map.end()) {
                global_lock.Unlock();
                return;
            }
        }
    }
}
This has the advantage of freeing up threads that would otherwise be waiting for a key lock, but apart from that it's pretty much the same as the naive solution I posted in the question.
It could be combined with the answers given by Mike B and Constantin, though.
A: /**
* StringLock class for string based locking mechanism
* e.g. usage
* StringLock strLock;
* strLock.Lock("row1");
* strLock.UnLock("row1");
*/
class StringLock {
public:
/**
* Constructor
* Initializes the mutexes
*/
StringLock() {
pthread_mutex_init(&mtxGlobal, NULL);
}
/**
* Lock Function
* The thread will return immediately if the string is not locked
* The thread will wait if the string is locked until it gets a turn
* @param string the string to lock
*/
void Lock(string lockString) {
pthread_mutex_lock(&mtxGlobal);
TListIds *listId = NULL;
TWaiter *wtr = new TWaiter;
wtr->evPtr = NULL;
wtr->threadId = pthread_self();
if (lockMap.find(lockString) == lockMap.end()) {
listId = new TListIds();
listId->insert(listId->end(), wtr);
lockMap[lockString] = listId;
pthread_mutex_unlock(&mtxGlobal);
} else {
wtr->evPtr = new Event(false);
listId = lockMap[lockString];
listId->insert(listId->end(), wtr);
pthread_mutex_unlock(&mtxGlobal);
wtr->evPtr->Wait();
}
}
/**
* UnLock Function
* @param string the string to unlock
*/
void UnLock(string lockString) {
pthread_mutex_lock(&mtxGlobal);
TListIds *listID = NULL;
if (lockMap.find(lockString) != lockMap.end()) {
lockMap[lockString]->pop_front();
listID = lockMap[lockString];
if (!(listID->empty())) {
TWaiter *wtr = listID->front();
Event *thdEvent = wtr->evPtr;
thdEvent->Signal();
} else {
lockMap.erase(lockString);
delete listID;
}
}
pthread_mutex_unlock(&mtxGlobal);
}
protected:
struct TWaiter {
Event *evPtr;
long threadId;
};
StringLock(StringLock &);
void operator=(StringLock&);
typedef list TListIds;
typedef map TMapLockHolders;
typedef map TMapLockWaiters;
private:
pthread_mutex_t mtxGlobal;
TMapLockWaiters lockMap;
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do I build and install P4Python for Mac OS X? I've been unable to build P4Python for an Intel Mac OS X 10.5.5.
These are my steps:
*
*I downloaded p4python.tgz (from
http://filehost.perforce.com/perforce/r07.3/tools/) and expanded
it into "P4Python-2007.3".
*I downloaded p4api.tar (from
http://filehost.perforce.com/perforce/r07.3/bin.macosx104x86/)
and expanded it into "p4api-2007.3.143793".
*I placed "p4api-2007.3.143793" into "P4Python-2007.3" and edited
setup.cfg to set "p4_api=./p4api-2007.3.143793".
*I added the line 'extra_link_args = ["-framework", "Carbon"]' to
setup.py after:
elif unameOut[0] == "Darwin":
unix = "MACOSX"
release = "104"
platform = self.architecture(unameOut[4])
*I ran python setup.py build and got:
$ python setup.py build
API Release 2007.3
running build
running build_py
running build_ext
building 'P4API' extension
gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -DID_OS="MACOSX104X86" -DID_REL="2007.3" -DID_PATCH="151416" -DID_API="2007.3" -DID_Y="2008" -DID_M="04" -DID_D="09" -I./p4api-2007.3.143793 -I./p4api-2007.3.143793/include/p4 -I/build/toolchain/mac32/python-2.4.3/include/python2.4 -c P4API.cpp -o build/temp.darwin-9.5.0-i386-2.4/P4API.o -DOS_MACOSX -DOS_MACOSX104 -DOS_MACOSXX86 -DOS_MACOSX104X86
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for C/ObjC but not for C++
P4API.cpp: In function 'int P4Adapter_init(P4Adapter*, PyObject*, PyObject*)':
P4API.cpp:105: error: 'Py_ssize_t' was not declared in this scope
P4API.cpp:105: error: expected `;' before 'pos'
P4API.cpp:107: error: 'pos' was not declared in this scope
P4API.cpp: In function 'PyObject* P4Adapter_run(P4Adapter*, PyObject*)':
P4API.cpp:177: error: 'Py_ssize_t' was not declared in this scope
P4API.cpp:177: error: expected `;' before 'i'
P4API.cpp:177: error: 'i' was not declared in this scope
error: command 'gcc' failed with exit status 1
which gcc returns /usr/bin/gcc and gcc -v returns:
Using built-in specs.
Target: i686-apple-darwin9
Configured with: /var/tmp/gcc/gcc-5465~16/src/configure
--disable-checking -enable-werror --prefix=/usr --mandir=/share/man
--enable-languages=c,objc,c++,obj-c++
--program-transform-name=/^[cg][^.-]*$/s/$/-4.0/
--with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib
--build=i686-apple-darwin9 --with-arch=apple --with-tune=generic
--host=i686-apple-darwin9 --target=i686-apple-darwin9
Thread model: posix
gcc version 4.0.1 (Apple Inc. build 5465)
python -V returns Python 2.4.3.
A: http://bugs.mymediasystem.org/?do=details&task_id=676 suggests that Py_ssize_t was added in Python 2.5, so it won't work (without some modifications) with Python 2.4.
Either install/compile your own copy of Python 2.5/2.6, work out how to change P4Python, or look for an alternative Python-Perforce library.
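For reference, PEP 353 documents a standard backwards-compatibility shim for exactly this symbol; adding something like the following near the top of P4API.cpp (after Python.h is included) should at least get past these particular errors, though the rest of the build is untested here:
// Py_ssize_t only exists from Python 2.5 on (PEP 353); map it to int,
// which is what the 2.4 API used for sizes and indexes.
#if PY_VERSION_HEX < 0x02050000 && !defined(PY_SSIZE_T_MIN)
typedef int Py_ssize_t;
#define PY_SSIZE_T_MAX INT_MAX
#define PY_SSIZE_T_MIN INT_MIN
#endif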
A: The newer version 2008.1 will build with Python 2.4.
I had posted the minor changes required to do that on my P4Python page, but they were rolled in to the official version.
Robert
A: Very outdated, but maybe you can use http://public.perforce.com:8080/@md=d&cd=//guest/miki_tebeka/p4py/&c=5Fm@//guest/miki_tebeka/p4py/main/?ac=83 for now
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I show an embedded excel file in a WebPage? I want to allow an Excel report to be viewed embedded in a WebPage... is there a way?
*
*I don't want to use an ActiveX, or OWC (Office Web Components), I just want to open an existing file from the internet explorer application.
*I don't want users to download and then open it.
Using an iframe wouldn't be a problem, but my preliminary tests weren't successful
Any ideas? Is it at all possible?
A: This has to do with the local person's browser setup, and not really anything you can do on your end. If they click a link with the .xls(x) extension, the browser determines if it wants to open it itself or in a new window.
Here 2 microsoft pages on how to change these settings:
http://support.microsoft.com/.../how-to-configure-internet-explorer-to-open-office-documents-in-the-app
http://support.microsoft.com/.../embed-your-excel-workbook-on-your-web-page-or-blog-from-sharepoint-or-onedrive-for-business
A: You should try using the Excel Web App Embed feature that lets you embed tables and charts from Excel directly on your Web Page. You can even let users interact with the spreadsheet so that they can sort and filter data and use your spreadsheets formulas to calculate make their own calculations all without altering the source.
The Excel Web App and storage is all free from Microsoft. Any data you embed on your Web page can be viewed by all the major destkop and mobile browsers and when you update your spreadsheet the data on your web pages is automatically updated as well.
A: I think your best bet is going to be extracting the data out of the Excel file and displaying it in a regular HTML table. Excel isn't exactly safe to invoke from a web page and not everyone has it anyway.
A: Take a look at scribd iPaper Viewer - this is a Flash based Viewer of XLS (and other) docs.
A: MOSS 2007 has a nifty feature called Excel Services which might fit the bill...
A: Excel Web App allows embedding "live" interactive spreadsheets on a web page. For an example, see http://datawiz.wordpress.com/2011/01/10/how-to-embed-excel-on-a-web-page/
A: In your comments you say that the Excel file is on the client's filesystem, not on the webserver. I think the security model of sane browsers forbids this, but I wouldn't be surprised if granting your pages high permissions could allow it.
A: <iframe src="file:\\yourpath\yourfile.xls" width="100%" height="500"></iframe>
A: Well this is a bit crude but sort of fits the bill.
*
*Select the area of the spreadsheet you wish to display.
*Copy this area into MS Paint.
*Select the area in Paint and use the Edit/Copy to/ function to save this as a bitmap.
*Now load the bitmap as you would any other pic.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: When do you use an IDE? I know that some people swear against using a language-specific IDE ever (vim/emacs or die! type stuff) and that some people are really uncomfortable with coding/compiling in the terminal at all, so my question has the following parts.
*
*When do you switch from one to the other
*Is it even necessary to know both? If not, which should you know?
*Lightweight or Heavyweight IDEs? (just code highlighting or every feature you could imagine)
*What IDE do you recommend in general, and why?
Feel free to answer all, some, or none.
Short summary so far:
IDEs
*
*Big Projects
*GUIs
*Easy Version Control Integration?
Text Editors
*
*Quick/Small projects
*Adapts to other languages more easily
*Less overhead
A: IDE usage is very subjective to personal opinion. With that disclaimer, here's mine.
Know your tools and know your platforms. Developing software is your domain, so be good at it.
When do you switch? When your knowledge tells your intuition that it would be easier with the other tool.
Should you know both? I would argue that you should know everything about working with your platform. Low level knowledge makes higher level applications easier to understand.
Lightweight or heavyweight depends on the task at hand. Both are appropriate at times.
I can't recommend any one IDE, it depends on your application's platform(s) and what you, the developer, are comfortable with. If you're doing .NET, Visual Studio on Windows is probably the best, but that doesn't mean you can throw out Notepad. For Java on Linux, Eclipse is great, but don't discard vim.
Hey, Front Page is probably the right tool for the job to some people (ouch, yes, I said it. /me ducks).
A: As several other people have said, which IDE you use or if you use one at all is heavily dependent on the language you're developing in, the scale of the project, and the platform you're working on. Although I've never encamped with either the Vi or Emacs guys, I do use a number of other editing tools in roughly this language breakdown:
C#, or anything else .Net: Visual Studio. There is no serious competition, the CLR languages beg for syntax highlighting, refactoring, and advanced file management. Thankfully Microsoft got this one right and the Express editions are an incredible value.
C++: I haven't touched it in a while, but I would typically view the code in Visual Studio, but compile through batch files, which had a lot to do with the eccentricities of the particular embedded platform I was working on.
Python: I recommend Stani's Python Editor if you need something with bells and whistles, but Python is so direct in its structure that I usually end up just using Scite. It does everything you really need in a Python editor.
SQL: Notepad++ or, if you're doing heavy lifting, any supported editor + SQL Prompt.
Java: I hear good things about Eclipse, but Java is evil so I don't touch it.
PHP, Perl, Javascript, BASH, or most other languages: Notepad++ on Windows, Scite on Linux.
Although switching between all these IDEs can be troublesome, especially when a feature you love in one is missing from another, the benefits are to be found in using the best balanced tool for the job you're doing. I switch IDEs all the time as my needs vary and I would encourage others to as well. Having worked on a limited number of projects, of limited scale, on particular platforms, I hardly know all the use cases, and I'm sure there are plenty of other situations and code editors out there that pair up in unique and wonderfully functional ways.
A: Personally, I almost never use an IDE. I use vim/make almost exclusively. There's lots of benefits to this:
*
*Totally language agnostic. Once some commands and shortcuts are memorized, they work with all of my projects
*Parts are easy to swap in and out. If I want to switch compilers, I change the variable in my makefile.
*"configuration agnostic". No matter how the settings are, I can develop. No GUI? No problem. Different desktop environments? No problem. There are even ports of vim to Windows. I develop on my local machine and when I'm ssh-ed into a server in the same manner.
There are also some downsides:
*
*Vim is hard to learn. I'm not even going to lie about this. It takes time to acquire some power.
*Mostly limited to *NIX. Yeah, there are things like cygwin. Yeah, there are ports of Vim to Windows. It's somehow not exactly the same.
*It's possible that if I learned an IDE that focused on a specific language, it'd have some features that would be pretty powerful for that language.
A: I usually use the IDE only for debugging (which IDE depends on the language/platform), and use my personal editor for the actual editing of code.
I feel that using one editor for everything is a much better approach than relearning key-bindings for every language/platform change I make.
A: In my experience, if the project involves building a GUI, an IDE is an invaluable tool.
If it's small, "gut-level", or a web service, I'd go exclusively text editor.
A: My rule of thumb is that if the language and IDE are tied together, then use the IDE (see anything using MS project files). Otherwise any editor will do. Personally I like visual Slickedit, or notepad++ if the company isn't going to shell out for slickedit. On the linux side I use Emacs, which you can consider a heavyweight editor, or a lightweight OS.
A: While I use VIM and non-IDE type tools, I have to admit that Visual Studio (especially 2005/2008) is possibly one of the best programs ever written. The intelli-sense and debugging tools are well worth there weight in gold. I find myself being able to write code very fast. It is especially helpful in cases where you are utilizing frameworks (e.g. .NET) and need that little extra guide to tell you what functions are available off an object without having to refer to the help documentation. It is hard to beat auto-code formatting, bookmarking, immersive debugging, refactoring, source control integration, and plug-in support.
For everything else, I use VIM. I have to admit I'm still learning how to use VIM well, but I already know it is powerful. It truly is a matter of choosing the right tool for the job.
EDIT: One thing I will mention is that you pick a tool or two and learn it very well. Become an expert at it. Learn the ins/outs and explore the nitty gritty secret stuff your editor/IDE can do. The more you do this, it will matter less what the tool is.
A: For me the break-down is by technology.
I use Notepad++ or vim for anything in the LAMP stack - I've never found anything particularly useful in the IDEs for those technologies (unless you could the MySQL Client Tools as an IDE, which I use when I am able to).
When working in the WISC (Windows, IIS, SQL Server, C#) stack, on the other hand, I use an IDE - one of the Visual Studio products depending on which project I'm working on.
A lot of this probably has to do with the sorts of projects I work on. I work in PHP in the LAMP stack, so I don't have to handle bunches of external libraries the way I probably would if I was using perl, and the projects I develop on LAMP are usually simpler than my Windows development. In .Net, on the other hand, navigating the libraries can be quite difficult without the IDE, and the debugging can (I find) be more complicated. Plus when developing web services using SOAP, I wouldn't even want to think about doing it without the tools Microsoft supplies.
A: I find that IDEs for C and Python don't buy you much, at least the ones that are available for Linux. So, when I write C and Python code, I will usually use GVIM + Ctags + standalone debugger + make.
However, in the case of Java, Eclipse offers a Java programmer so much that it's hard not use it and after a while become spoiled to the extent that it's just too painful to go back to writing Java code in VIM.
Strangely enough, I haven't had the same experience when using Visual Studio for C projects (even though I do find its debugger indispensable). One reason for this is that I prefer to manage the build scripts myself. Even with Eclipse, I would still use ant so that I know exactly what is happening during each build. Admittedly, you can of course look at the build configurations in both Visual Studio and Eclipse, but this just isn't as direct as seeing the exact command used. That being said, I'm still forced to use Visual Studio for builds mainly because of convention, as there other people working on these projects, but I will still edit the code in GVIM (with the help of ctags).
A: The ONLY time when you have to do compile/link using the commandline is when you work on a pure Linux server with no GUI installed - in this case your ONLY REAL OPTION is emacs. (Using anything else is pure masochism).
At all other times it would be sheer stupidity not to use a mature IDE.
Your Questions:
When do you switch from one to the other
(answered above)
Is it even necessary to know both? If not, which should you know?
That will depend on how keen you are on being a programmer AS WELL AS a systems administrator. The latter is called upon to support the former and to implement whatever is necessary to keep all systems running smoothly. To know both can NEVER be harmful, but if you insist on being a programmer only, then the necessity of knowing a specific IDE for the platform you choose is obvious.
Lightweight or Heavyweight IDEs? (just code highlighting or every feature you could imagine)
I assume the choice here refers to programmers only: there is no doubt that the more assistance your programming tool/environment can give you, and the more that support allows you to focus on implementing your program spec, the better. So YES: HEAVYWEIGHT is the only sensible choice.
What IDE do you recommend in general, and why?
On a Windows platform this question is moot: whatever Microsoft recommends (and if you have the $$$). On Linux you have several choices: GTK-based (Gnome & company) and Qt-based (KDE & company); both Gnome and KDE have IDEs you can use (if you're a sucker for punishment you can go for pure X, and then you are more-or-less in for an interesting time). Third-party IDEs, like Eclipse, are available; they all enable you to develop GUI applications that will run in whatever the user chooses as his GUI environment. Some of these IDEs even allow a multiplicity of programming languages (that is the glory of Linux and FOSS: you have choices; you are not pinned to the "geography", not led to the slaughter, ...).
My personal choice of IDE (C++ only) is Ultimate++. It is source compatible with both Windows and Linux. The IDE is approaching a rich maturity AND it offers 'everything-plus' that other C++-IDE's are aspiring to. (I know this is a plug, but I've got good reason: give it a try.)
HTH
A: I've done both, but the debugger is really a great tool to have.
You can get pretty far just adding debug output and rebuilding, and that even forces you to use your noodle and the scientific method a bit more, but in the end (imo) the debugger just cannot be denied... it lets you really get in there and explore the system during runtime.
A: There are opportunities for both specialization and generality. In my own work, I've found that flexibility and being able to ride a steep learning curve has paid off well.
A: I use an IDE whenever I'm using a language that's more suited to being written by a computer than by a human being, for example Java. It's far too verbose to write by hand without loads of auto-completion. Vim's auto-complete is never quite as good as a language-specific IDE.
For less wordy languages though, you can't beat a good Vim- or Emacs-like editor for churning out plaintext quickly.
A: If the project exceeds two or three source files, I tend to use an IDE.
A: Once you've used an IDE with good (vs6 was not good, and most generic text editor's support for this is crude at best) intellisense-style prompting and auto-complete for a few weeks, you won't go back.
A: I use vim for everything except Objective-C stuff, which I use xCode for. The Interface builder, error checking, debugging are quite valuable.
However, I use an InputManager to let me use vim commands/key bindings in xCode, so I never really leave vim for anything ;)
A: I do all my coding in SciTE, my favorite editor, lightweight, with good syntax highlighting, and with shortcuts I am familiar with (or which I coded!).
Now, doing lot of Java at work, I also use Eclipse, which greatly improve with each version. I appreciate it is quite customizable, and very flexible about coding help, but I still do most of my editing in SciTE... Fortunately, the auto-detection/update of files edited elsewhere works quite well in Eclipse (although sometime with a sensible delay to analyze the reloaded class: we have big projects and not that much memory (1GB)).
I appreciate Eclipse mainly for two things (we compile with a special ant process and we don't really do refactoring): quick navigation in the project, particularly find a class and call hierarchy, and of course debugging.
I don't think these are so much distinct worlds, these are just tools, quite complementary: I won't fire Eclipse to do a quick edit of a small HTML file, I can't debug in SciTE. So I use both.
A: I haven't used an IDE since the last time I created a native GUI app, lo those many years ago. And, even then, I pretty much just used the IDE to create the GUI forms and my own editor for the actual code once I figured out how to plug an alternate editor into the IDE.
I'm not a "give me vim, or give me death!" fanatic, but my experience with having used IDEs and having used a bunch of xterms running vim has left me with the impression that the only thing IDEs really contribute, for me, is drag-n-drop form editing, which is something I don't need, given how long it's been since I last did anything with a non-web-based GUI.
A: re: when? I use Eclipse for Java and debugging. I keep jEdit open with a variety of Groovy scripts available. I also use Cygwin's bash in 3 shell windows for executing Ant tasks, searching etc. I can use Bash aliases for some truly powerful stuff with respect to running our server and client.
re: both? Yes, it is vital to know both IDE and non-IDE development. It doesn't matter which IDE but a debugger is essential.
re: which? For Java, Eclipse if you are paying (free) and Idea if someone else is ;-) Actually, I prefer Eclipse but I think that Idea has better support for Groovy so that it is attractive to me.
A: A decent code highlighting editor is all I use.
A: I use an IDE for debugging, or for every big project that requires moving across many different files. Many IDEs let you jump from one class to another with a click, which is more productive. I also use an IDE when I work with a big framework that has a lot of folders and files; it is easier to manage.
Update because question has been updated
When do you switch from one to the other?
I switch to a plain editor for a small change in CSS or other small tasks.
Is it even necessary to know both? If not, which should you know?
Sometimes, when building forms (for example in C#), it is very profitable to use the IDE. Same thing for debugging with breakpoints. I think both are required.
Lightweight or Heavyweight IDEs? (just code highlighting or every feature you could imagine)
Code highlighting is fine for a small job, but when you have big stuff to do, a bigger IDE with autocomplete, real-time error checking and other refactoring tools is necessary to be more productive.
What IDE do you recommend in general, and why?
Visual Studio is great for .net, Eclipse for Java.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Will Learning C++ Help for Building Fast/No-Additional-Requirements Desktop Applications? Will learning C++ help me build native applications with good speed? Will it help me as a programmer, and what are the other benefits?
The reason why I want to learn C++ is because I'm disappointed with the UI performances of applications built on top of JVM and .NET. They feel slow, and start slow too. Of course, a really bad programmer can create a slower and sluggish application using C++ too, but I'm not considering that case.
One of my favorite Windows utility application is Launchy. And in the Readme.pdf file, the author of the program wrote this:
0.6 This is the first C++ release. As I became frustrated with C#’s large .NET framework requirements and users lack of desire to install it, I decided to switch back to the faster language.
I totally agree with the author of Launchy about the .NET framework requirement or even a JRE requirement for desktop applications. Let alone the specific version of them. And some of the best and my favorite desktop applications don't need .NET or Java to run. They just run after installing. Are they mostly built using C++? Is C++ the only option for good and fast GUI based applications?
And, I'm also very interested in hearing the other benefits of learning C++.
A: yep, C++ is absolutely great. Check Qt. It has a great Python binding too, so you can easily prototype in Python, and when/if you need extra performance, porting to C++ is a mostly 1:1 translation.
But in fact, writing C++ isn't so hard either when you have a great platform; the worst part is writing all the class declarations :-)
A: If you want to build Windows applications that will run without frameworks such as .NET or virtual machines/interpreters, then your only really viable choices are going to be Visual Basic or C/C++
I've written some small Windows apps before in C++ code, and there is definitely a benefit in terms of speed and ease of deployment, at the expense of difficulty going up for development. C++ can be very fast, natively compiles, has many modern language features, and wide support. The trade off is that it's likely that you'll need to write more code in certain situations, or seek out libraries like Boost that provide the functionality you're after.
As a programmer, working in C++ and especially in C is good experience for helping you understand something just a tad closer to the machine than say, .NET, Java or a scripting language like VBScript, Python, Perl etc. It won't necessarily make you a better programmer, but if you are open to learning new lessons from it you may find that it helps you gain a new perspective on software. After all, most of the frameworks and systems you depend on are written in pure C, so it will never hurt you to understand the foundations. C++ is a different animal from straight C, but if you develop in C++ for Windows you'll likely find yourself working in a mix of C and C++ to work with Windows APIs, so it will have a trickle-down effect.
A: I wrote C++ windows apps for 10 years and switched to c# about 2 years ago to work on the latest product. I am embarrassed by how pathetic the C# app is. It takes 20 seconds to start up, you have to wait a second or so for it to switch between screens. I've used some third party GUI control library, and that leaks memory like a sieve! My app runs at 150 meg, and its doing hardly anything.
I am looking to go back to C++.
You can use MFC; it will be far quicker than .NET. Or, if you really want to burn, check out WTL - although there's not much documentation for that. I suggest you go with either MFC or Qt, because you'll find plenty of good information and tutorials for them.
I can see that C# can be quicker to develop with, and maybe in some future version it will be quicker and smaller.
A: As always. It depends.
As long as you stay away from Microsoft's large frameworks, such as MFC, .NET etc., your applications can be blazing fast, but hard to code. Your benefit: you will really learn how Windows works behind its nice(?) surface. Just look into the initialisation code for COM objects and you will know what I mean. You will never see such things in VB or C#.
You have to program each button, each window and each control by yourself, sending silly window messages; however, your applications are small, very small. This is a forgotten art:
Write small, fast programs
Good luck!
A: You will hate my answer:
The biggest bottlenecks in GUI development usually are not because of the language. After all most of the time in most applications the UI is idling, waiting for some user input. I can hear your screams already, but I said in most of the apps.
Let's put it this way: I am pretty sure that one can design a good UI on top of the .Net CLR. Learning C++ is a good thing, but will not solve the inherent problems of developing a good UI.
A: If you're committed to learning the raw, gritty details of Win32, then C++ will get you there. If you're not, then you'll end up using a bunch of wrappers anyway. For something like a small utility or especially something like a shell extension (where trying to use .NET will cause you problems anyway), C++ will let you write effective code with the absolute minimum in external dependencies. For a larger app, YMMV - a lot of the UI sluggishness out there comes from poor design: naive algorithms, an unwillingness to spin off non-trivial operations onto separate thread(s), reliance on badly-written 3rd-party components instead of custom controls... Mistakes easily made in any language.
A: Here is my honest answer to this.
First, I think every programmer should learn C/C++ just for the fact that by learning C++ you learn about programming. It is a systems-level language. You have to consider the finer details of memory management and so forth. I am shocked at how many programmers do not understand the foundational aspects of a programming language or computer system. By learning C/C++, you force yourself to understand programming at a more intimate level. On top of that, if you learn how to program in C/C++, you can program in almost anything.
That isn't to say C/C++ is always the right tool for the job. It can be a total pain to debug and take longer to write more meaningful code. However, it is perfect for those situations where you need absolute control of how a program executes.
That said, I don't prefer C/C++ for UI programming. You still have to use a windowing framework specific to the OS you run on (MFC, Win32, Motif, GTK, Qt, etc.). These frameworks don't lend themselves to easy learning curves. For at least Windows development, .NET is really the future of UI development (even though, surprisingly, MFC got a major overhaul for Vista that does stuff .NET doesn't even do yet). If you write your UI in .NET, it is easier to maintain and for others to maintain.
I typically write my UI in .NET and backend in C++.
A: Yes and no. It's all about optimization... And since C++ allows you to work at a lower level, it is one of the best languages for writing fast applications.
However those low-level mechanics implemented in C++ could be very annoying if you're used to more abstract approaches to OOP.
Testing your software in C++ is usually a long process.
If you are looking for speed anyway, C++ would definitely be one of the best choices.
A: C++ will indeed potentially get you a leaner, meaner and faster application (if you do it right).
However, the .NET framework is built for comfort from a developer's point of view; it is a vast improvement over the Win32 API or MFC, which may seem like hard work in comparison. So consider how you will implement the aspects for which your application depends on .NET (there are other frameworks available that may be easier than MFC or the Win32 API), and also consider the costs and license issues of using such frameworks; the free VC++ Express Edition, for example, does not include MFC support.
If you know where you application is sluggish, C++/CLI may be a solution; allowing you to mix managed and native code to accelerate the parts that need it. However if it is the GUI that is intrinsically slow rather than the application processing; this may not be a useful path.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to enforce locking workstation when leaving? Is this important? Within your organization, is every developer required to lock his workstation when leaving it?
What do you see a risks when workstations are left unlocked, and how do you think such risks are important compared to "over-wire" (network hacking) security risks?
What policies do you think are most efficient to enforce locking the workstations? (The policies might be either something "technical", like a domain group security settings for screen-savers to be locking, or a "social", like applying some penalties to those who do not lock, or encouraging Goating?)
A: For me, this has become habit. On a Windows machine, pressing Windows-L is a quick way of locking the machine.
The solution might be social rather than technical. Convince people that they don't want anyone else reading their email or spoofing their accounts when they step away.
A: In my org (government), yes. We deal with sensitive data (medical and SSN). It's instinctual for me: Windows+L every time I walk away.
The policy is both social and technical. On the social side, we're reminded that personal security is important, and everyone is aware of the data with which we are privy. On the technical side, the workstations use a group policy that turns on the screensaver after 2 minutes, with "On resume, password protect" turned on (and unable to be turned off).
A: No, but I'm an organization of 1 - the last time I worked in a large organization, we were not required, but encouraged to. If I were in an environment with other people, I would probably lock my workstation now when I left it.
While certainly people with physical access can add hardware keyloggers, locking it does add an additional level of security. Depending on the type of organization you are, I think the risks are more from internal organizational snooping than over-the-wire attacks.
A: I used to work at a very large corp where the workstation required your badge to be inserted inside it to work. You weren't allowed to move in the building (you needed the badge to open the doors anyway) without that badge on you. Taking the badge out of the workstation's smartcard reader logged you out automatically.
Off topic but even neater, the workstations were more like "networkstations" (note that it is not a necessity to use the system I've just described, though) and the badge held your session. Pop it into another workstation in another building and here's your session just as you left it when you pulled the badge on the other computer.
So they basically solved the issue by physically forcing people to log off their workstation, which I think is the best way to enforce any kind of security-critical policy. People forget, it's human nature.
A: The primary real world risks are your co-workers "goating" you. You can enforce this by setting a group policy to run the screen saver after X minutes, which can lock the computer as well.
A: The only place I have seen where this is truly important is government, defense, and medical facilities. The best way to enforce it is through user policies on Windows and "dot files" on Unix systems where a screensaver and timeout are pre-chosen for you when you log in and you aren't allowed to change them.
A: I never lock my workstation.
When my coworker and friend mocked me and threatened to send embarrassing emails from my machine, I mocked him back for thinking that locking does ANYTHING when I have physical access to his machine, and I linked him to this URL:
http://www.google.com/search?hl=en&q=USB+keylogger
I don't work with any sensitive data my coworkers wouldn't already have equal access to, but I am doubtful of the effectiveness of workstation locking against a determined snoopish coworker.
Edit: the reason I don't lock is that I used to, but it kept creating weird instabilities in Windows. I reboot only on demand, so I expect my machine to run for months without becoming unstable, and locking was getting in the way.
A: Goating can get you fired, so I don't recommend it. However, if that's not the case where you work, it can be effective, even if it's just a broad email that says, "I will always lock my workstation from now on."
At the very least, machines should lock themselves after X minutes of inactivity, and this should be set via group policy.
Security is about raising the bar, making a greater amount of effort necessary to accomplish something bad. Not locking your workstation at all lowers that bar.
A: We combine social and technical methods to encourage IT people to lock their PCs: default screen saver/locking settings plus the threat of goating. (The last place I worked actually locked the screen saver settings.)
One thing to keep in mind is that if you have applications (particularly if they are SSO) that track activity, changes, or both, the data you collect may be less valuable if you can't be sure the user recorded in the data is the user who actually made those changes.
Even at a company like ours, where there isn't a lot of company-related sensitive information available to most users, there's certainly potential for someone to acquire NBR data from another employee via an unlocked workstation. How many people save passwords to websites on their computer? Amazon? Fantasy football? (A dangerous goating technique: drop a key player from someone else's roster. It's really only funny if the commish is in on it with you, so the player can be restored ...)
Another thing to consider is that you can't be sure that everyone in your building belongs there. It's much easier to hack into a network if you're actually in the building: of course the vast majority of people in the building are there because they're allowed to be, but you really don't want to be the victimized company when that one guy in a million does get into the building. (It doesn't even have to be an intentionally bad guy: it could be somebody's kid, a friend, a relative ...) Of course the employee who let that person in could also let that person use their computer, but that kind of attack is much more difficult to stop.
A: Locking your workstation each time you go for a coffee means that you type your password 10 times per day rather than once. And everyone around you can see you type it. And once they have that password, they can impersonate you from remote computers, which is far more difficult to prove than using your PC in the office with everyone watching. So surely locking your workstation is actually more of a security risk?
A: I'm running Pageant and have my SSH-public key distributed over all the servers here. Whoever sits down on my workstation can basically log into any account everywhere with my keys.
Therefore I always lock my machine, even for a 30s break. (Windows-L is basically the only Windows-key based shortcut I know.)
A: I personally think the risk is low, but in my experience most of the time it's not matter of opinion -- it's often a requirement for big corporate or government clients who will actually come in and audit your security. In that case, some kind of technical (group policy) solution would be best because you can actually prove you are complying with the requirement. I would also do it in cases where there is a legal privacy requirement (like medical data and HIPAA.)
A: I worked at a place where the people who supplied some of our equipment were from a company in direct competition with us. They were in the building when the equipment required maintenance. An email would go out every now and then saying they would be there, please lock your machine when you're not at it. If a competitor got our source because a developer forgot to lock their machine, the developer would be looking for a new job.
A: We are required to at work, and we enforce it ourselves. Mass chats are started professing love for people, emails are sent, backgrounds are changed, etc. Gotta love the first day when it happens to a new hire; everyone is sure to leave a nice note :)
A: The place I used to work had a policy on always locking your workstation. They enforced it by setting up a company wide mailing list - if you left your workstation unlocked, your co-workers would send an embarrassing mail to the list from your account, then lock your machine. It was kind of funny, and also kind of annoying, but it generally worked.
A: You could start sitting down at people's workstations and loading up [insert anything bad here] right after they walk away. That will work I'm sure.
A: In some/most government offices I've visited, that have the possibility of having members of the public walking about they have smartcards that plug into a USB reader on the PC. The card is on a necklace around the user's neck and locks the workstation when it's removed.
A: The owner of my company (and a developer) will make a minor change to your code window if you leave your computer unlocked, making you go crazy wondering why your code isn't working until you find it.
Have to say, I never keep my computer unlocked after hearing about that prank, I go crazy enough as it is with some of my code.
A: You could rig up a simple, foolproof way: have a fingerprint reader plugged into the computer and programmed for your password, and wear a necklace with a USB receiver. If you move away from the workstation, the screen saver actively locks it; then, when you come back within range, swipe your finger on the fingerprint reader to unlock the workstation. I think that would be a quite cheap way of doing it - simple, unintrusive, and clutter-free, with no forgetting to lock via 'WinKey+L'.
Hope this helps,
Best regards,
Tom.
A: Is it important, or is it a good habit? Locking your computer may not be important in, say, your own house, but if you are at a client's office and walk away, then I would say it is important.
As for how to enforce...
After reading Jeff's blog entry on Don't Forget To Lock Your Computer, I like to change co-workers' desktop backgrounds to an embarrassing full-screen image.
Needless to say, co-workers started locking their computers.
A: GateKeeper is an easy solution to this. It locks the workstation automatically when the user walks away, and unlocks automatically when the user comes back within range of the computer. It can also require two factor authentication and other methods of lock/unlock.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: C# .Net exe doesn't close when PC is restarted, keeping the machine from restarting We have a SmartClient built in C# that stubornly remains open when the PC its running on is being restarted. This halts the restart process unless the user first closes the SmartClient or there is some other manual intervention.
This is causing problems when the infrastructure team remotely installs new software that requires a machine reboot.
Any ideas for getting the SmartClient app to recognize the shutdown/restart event from Windows and gracefully kill itself?
UPDATE:
This is a highly threaded application with multiple GUI threads. Yes, multiple GUI threads. It's really a consolidation of many projects that in and of themselves could be standalone applications - all of which are launched and managed from a single exe that centralizes those management methods and keeps track of those threads. I don't believe using background threads is an option.
A: It must be a thread that continues to run, preventing your application from closing. If you are using threading, an easy fix would be to set the thread to background.
A thread is either a background thread or a foreground thread. Background threads are identical to foreground threads, except that background threads do not prevent a process from terminating. Once all foreground threads belonging to a process have terminated, the common language runtime ends the process. Any remaining background threads are stopped and do not complete.
http://msdn.microsoft.com/en-us/library/system.threading.thread.isbackground.aspx
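For illustration, a minimal sketch of the difference (class and variable names here are made up):
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            // Long-running work that would otherwise block process exit.
            while (true)
                Thread.Sleep(1000);
        });
        worker.IsBackground = true; // the key line: the runtime may now exit
        worker.Start();
        // When Main returns, the background worker dies with the process.
    }
}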
A: OK, if you have access to the app, you can handle the SessionEnded event.
...
Microsoft.Win32.SystemEvents.SessionEnded +=
    new Microsoft.Win32.SessionEndedEventHandler(shutdownHandler);
...
private void shutdownHandler(object sender, Microsoft.Win32.SessionEndedEventArgs e)
{
    // Do stuff
}
A: When a user is logging off or Windows is being shut down, the WM_QUERYENDSESSION message is sent to all top-level windows. See MSDN documentation here.
The default behavior of a WinForm application in response to this message is to trigger the FormClosing event with CloseReason == WindowsShutDown or others. The event handler though can choose to be stubborn and refuse to shut the app down, thus keeping the system running.
Check FormClosing handlers of your applications. Maybe there is something in there. I've seen this kind of stuff a couple of times.
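As an illustration (handler and form names are hypothetical), a FormClosing handler that never blocks an OS shutdown might look like this:
private void MainForm_FormClosing(object sender, FormClosingEventArgs e)
{
    if (e.CloseReason == CloseReason.WindowsShutDown)
    {
        // Never cancel here: setting e.Cancel = true during shutdown
        // is exactly what keeps the machine from restarting.
        return;
    }
    // For a normal user close you may still prompt or cancel:
    // e.Cancel = true;
}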
A: Or maybe the .Net app is ignoring close or quit messages on purpose?
A: Background threads were a quick and dirty solution; the best solution is to use synchronization objects (ManualResetEvent, Mutex, or something else) to stop the other threads.
Or else keep track of all your opened windows and send a WM_CLOSE message to each when the main app closes.
You have to give more information about how you start those GUI applications. Maybe you start one thread for each application and call Application.Run(new Form1())?
You may also look into creating an AppDomain for each GUI application.
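A rough sketch of the synchronization-object idea (all names here are invented): share one ManualResetEvent and have each GUI thread close its own form when the event is signaled.
using System.Threading;
using System.Windows.Forms;

static class AppShutdown
{
    // Signaled once by the central manager, observed by every GUI thread.
    public static readonly ManualResetEvent StopRequested =
        new ManualResetEvent(false);
}

class WorkerForm : Form
{
    private readonly System.Windows.Forms.Timer _poll =
        new System.Windows.Forms.Timer();

    public WorkerForm()
    {
        _poll.Interval = 500;
        // Poll the shared event on this form's own UI thread.
        _poll.Tick += (s, e) =>
        {
            if (AppShutdown.StopRequested.WaitOne(0, false))
                Close(); // ends this thread's Application.Run loop
        };
        _poll.Start();
    }
}

// In the central manager, on shutdown: AppShutdown.StopRequested.Set();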
A: Normally a .NET app would respond correctly - at least, that's the 'out of the box' behavior. If it's not, there could be a number of things going on. My best guess without knowing anything more about your program is that you have a long-running process going in the main UI thread that's preventing the app from responding to window messages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: how to identify the minimal set of parameters describing a data set I have a bunch of regression test data. Each test is just a list of messages (associative arrays), mapping message field names to values. There's a lot of repetition within this data.
For example
test1 = [
{ sender => 'client', msg => '123', arg => '900', foo => 'bar', ... },
{ sender => 'server', msg => '456', arg => '800', foo => 'bar', ... },
{ sender => 'client', msg => '789', arg => '900', foo => 'bar', ... },
]
I would like to represent the field data (as a minimal-depth decision tree?) so that each message can be programatically regenerated using a minimal number of parameters. For example, in the above
*foo is always 'bar', so I don't need to mention it
*sender and arg are correlated, so I only need to mention one or the other
*and msg is different each time
So I would like to be able to regenerate these messages with a program along the lines of
write_msg( 'client', '123' )
write_msg( 'server', '456' )
write_msg( 'client', '789' )
where the write_msg function would be composed of nested if statements or subfunction calls using the parameters.
Based on my original data, how can I determine the 'most important' set of parameters, i.e. the ones that will let me recreate my data set using the smallest number of arguments?
A: The following papers describe algorithms for discovering functional dependencies:
Y. Huhtala, J. Kärkkäinen, P. Porkka, and H. Toivonen. TANE: An efficient algorithm for discovering functional and approximate dependencies. The Computer Journal, 42(2):100–111, 1999, doi:10.1093/comjnl/42.2.100.
I. Savnik and P. A. Flach. Bottom-up induction of functional dependencies from relations. In Proc. AAAI-93 Workshop: Knowledge Discovery in Databases, pages 174–185, Washington, DC, USA, 1993.
C. Wyss, C. Giannella, and E. Robertson. FastFDs: A Heuristic-Driven, Depth-First Algorithm for Mining Functional Dependencies from Relation Instances. In Proc. Data Warehousing and Knowledge Discovery, pages 101–110, Munich, Germany, 2001, doi:10.1007/3-540-44801-2.
Hong Yao and Howard J. Hamilton. Mining functional dependencies from data. Data Mining and Knowledge Discovery, 2008, doi:10.1007/s10618-007-0083-9.
There has also been some work on discovering multivalued dependencies:
I. Savnik and P. A. Flach. Discovery of Multivalued Dependencies from Relations. Intelligent Data Analysis Journal, 4(3):195–211, IOS Press, 2000.
A: This looks very similar to Database Normalization.
You have a relation (your test data set) and some known functional dependencies ({sender} => arg, {} => foo, and possibly {msg} => sender; if the order of tests is important, then add {testNr} => msg), and you want to eliminate redundancies.
Treat your test set as a database table, apply the normalization rules and create equivalent functions (getArgFromSender(sender) etc.) for each join.
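A small sketch of checking one candidate functional dependency against the test data (field names taken from the question; the helper is invented for illustration):
def holds(messages, lhs, rhs):
    # True if the fields in lhs functionally determine the field rhs.
    seen = {}
    for msg in messages:
        key = tuple(msg[f] for f in lhs)
        if key in seen and seen[key] != msg[rhs]:
            return False  # same determinant, different dependent value
        seen[key] = msg[rhs]
    return True

test1 = [
    {'sender': 'client', 'msg': '123', 'arg': '900', 'foo': 'bar'},
    {'sender': 'server', 'msg': '456', 'arg': '800', 'foo': 'bar'},
    {'sender': 'client', 'msg': '789', 'arg': '900', 'foo': 'bar'},
]

print(holds(test1, ('sender',), 'arg'))  # True: {sender} => arg
print(holds(test1, (), 'foo'))           # True: foo is constant
print(holds(test1, ('sender',), 'msg'))  # False: sender does not determine msg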
A: If the number of fields and records is small:
Brute force it by looping through every combination of fields, and for each combination detect if there are multiple items in the list which map to the same value.
If you can live with a fairly good choice of fields:
Start off assuming you need all fields. Then, select a field at random and see if it can be eliminated; if it can, cross it off the set of fields. Otherwise, choose another field at random and try again. If you find no fields can be eliminated, then you've found a reasonable set of fields. Had you chosen other fields first, you might have found a better solution. You can repeat the whole procedure a few times and pick the best solution if you like. This kind of approach is called hill climbing.
(I suspect that this problem is NP complete, i.e. we probably don't know of an efficient and powerful solution so it is not worth losing sleep over trying to dream up a perfect solution.)
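A sketch of that hill-climbing loop in Python, reusing the test1 list from the sketch above. It assumes a field set is "sufficient" when no two messages collapse onto the same parameter values, which is one possible reading of the question:
import random

def sufficient(messages, fields):
    # True if the chosen fields still distinguish every message.
    keys = [tuple(m[f] for f in fields) for m in messages]
    return len(set(keys)) == len(keys)

def hill_climb(messages, fields):
    fields = list(fields)
    random.shuffle(fields)
    for f in list(fields):
        trial = [x for x in fields if x != f]
        if sufficient(messages, trial):
            fields = trial  # f was redundant, drop it
    return fields

print(hill_climb(test1, test1[0].keys()))  # e.g. ['msg']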
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Eclipse's Ctrl+click in Visual Studio? After working for a few days with Eclipse Java I totally got addicted to pressing Ctrl and clicking on an identifier to go to its definition. Since then I've been looking for a way to achieve this in Visual Studio as well.
I realize VS has right click, Go to definition, and that F12 does the same. I also realize that Visual Assist does something similar with Alt + G. Yet none of these are as perfect as Ctrl + click.
I've actually tried my luck for a few hours trying to write a VS plugin to do it but didn't get anywhere in the time frame I thought reasonable for this.
Does anyone know how this could be achieved? A ready plugin? A macro of some kind?
A: If you use Visual Studio 2010, you can use the free
Visual Studio 2010 Productivity Power Tools from Microsoft to achieve this.
A: I use Visual Studio 2013 and 2015; I installed Go To Definition. To install this extension, navigate to TOOLS -> Extensions and Updates.
A: If you have Visual Studio 2010 you can use "Go To Definition" by Noah Richards.
http://visualstudiogallery.msdn.microsoft.com/en-us/4b286b9c-4dd5-416b-b143-e31d36dc622b
A: I'll answer the commentors who asked about the difference between Ctrl-click and F12.
Ctrl-click workflow:
*Move hand to mouse
*Move mouse to hover over variable name
*Other hand holds down Ctrl key while you click
*Move mouse to position cursor, highlight, right-click, or whatever
*Move hand back to keyboard to continue typing
F12 workflow:
*Move hand to mouse
*Move mouse to hover over variable name
*Move hand back to keyboard
*Hit F12 key
*Move hand back to mouse
*Move mouse to position cursor, highlight, right-click, or whatever
*Move hand back to keyboard to continue typing
If you assume the cursor is already positioned on the desired variable, F12 is better. However, that's rarely the case. Also, if you stop after this specific action, assuming you want hands back at the keyboard, the cost is the same. But if you keep in mind that you probably had a reason for wanting to go to the definition, the Ctrl-click workflow saves you an instance of moving between the keyboard and mouse.
A: I use the built in options (F12, Right-click -> Go to definition) but I know a lot of the guys at my company use Resharper and it definitely has this functionality.
A: Oh man, just install ReSharper!! (VS plugin.) With it installed, you just Ctrl + click to go to definition.
This is not the only thing ReSharper does - try it out free!!!
A: Microsoft released a Visual Studio 2010 extension named "Productivity Power Tools" which now adds Ctrl+Click functionality. So if you're like me, and hate installing third-party addons, you can now have the same functionality!
A: Another option with VS (besides F12 and right-click > Go to Def) is to add the code definition pane (View > Code Def Window). This is essentially another editing pane that shows the code for the current symbol - no need to Ctrl-click or anything. I keep it pinned to my secondary monitor. Any time I need to see the implementation for a symbol I just click it and look over.
Another nice thing about F12 is you can also do Shift+F12 to find references to a symbol and F8 through them. The two go together like love and happiness.
A: Visual Assist supports Ctrl+Click as of June 2009 (build 1727). Activate Ctrl+LeftClick in VA Options | Advanced | General. (See the comment below.)
A: I prefer to bind Go To Definition to Ctrl+D. This makes it extremely easy to use either with both hands on the keyboard (Ctrl+D to go to the definition of the symbol under the cursor) or one hand on the keyboard and one hand on the mouse (click on a symbol, then Ctrl+D).
A: All in all, both VS and Eclipse have weird key shortcuts.
I just had to respond, too: F12 is far too far to the right on the keyboard, and you have to take your right hand off the mouse to use it. As a long-time VS user I just didn't find it until I searched for the Ctrl+Mouse equivalent in Eclipse. It's completely borked. OK? No need to argue. (The same goes for F3 in Eclipse going to definition. Why??? It's FIND NEXT, for Pete's sake. But this can be remedied after mastering the Eclipse keyboard shortcut system in the course of a few years.)
Anyway, as has been said here before, Microsoft has already understood this can be an issue for new programmers coming in from Eclipse, so they provided the Power Tools (I followed the link up above).
http://visualstudiogallery.msdn.microsoft.com/d0d33361-18e2-46c0-8ff2-4adea1e34fef/
A: If you are using Visual Studio 2017, you can use Productivity Power Tools 2017
A: I don't work in VS much, so I haven't used it, but I've heard incredibly good things about Resharper from everyone I know who does. Everyone has told me it's worth every penny, and significantly improves efficiency in Visual Studio. I think it has a feature like what you're looking for, along with a TON of others.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "109"
} |
Q: What is the best way to partition large tables in SQL Server? In a recent project the "lead" developer designed a database schema where "larger" tables would be split across two separate databases with a view on the main database which would union the two separate database-tables together. The main database is what the application was driven off of so these tables looked and felt like ordinary tables (except some quirky things around updating). This seemed like a HUGE performance problem. We do see problems with performance around these tables but nothing to make him change his mind about his design. Just wondering what is the best way to do this, or if it is even worth doing?
A: I don't think that you are really going to gain anything by partitioning the table across multiple databases in a single server. All you have essentially done there is increased the overhead in working with the "table" in the first place by having several instances (i.e. open in two different DBs) of it under a single SQL Server instance.
How large of a dataset do you have? I have a client with a 6 million row table in SQL Server that contains 2 years' worth of sales data. They use it transactionally and for reporting without any noticeable speed problems.
Tuning the indexes and choosing the correct clustered index is crucial to performance of course.
If your dataset is really large and you are looking to partition, you will get more bang for your buck partitioning the table across physical servers.
A: Partitioning is not something to be undertaken lightly as there can be many subtle performance implications.
My first question is are you referring simply to placing larger table objects in separate filegroups (on separate spindles) or are you referring to data partitioning inside of a table object?
I suspect that the situation described is an attempt to have the physical storage of certain large tables on different spindles from the rest of the tables. In this case, adding the extra overhead of separate databases, losing any ability to enforce referential integrity across databases, and the security implications of enabling cross-database ownership chaining does not provide any benefit over using multiple filegroups within a single database. If, as is quite possible, the separate databases you refer to in your question are not even stored on separate spindles but are all stored on the same spindle then you negate even the slight performance benefit you could have gained by physically separating your disk activity and have received absolutely no benefit.
I would suggest instead of using additional databases to hold large tables you look into the Filegroup topic in SQL Server Books Online or for a quick review see this article:
If you are interested in data partitioning (including partitioning into multiple file groups) then I recommend reading articles by Kimberly Tripp, who gave an excellent presentation at the time SQL Server 2005 came out about the improvements available there. A good place to start is this whitepaper
A: Which version of SQL Server are you using? SQL Server 2005 has partitioned tables, but in 2000 (or 7.0) you needed to use partition views.
Also, what was the reasoning for putting the table partitions in a separate database?
When I've had to partition tables in the past (pre-2005), it's usually by a date column or something similar, with a view over the various partitions. Books Online has a section that talks about how to do this and all of the rules around it. You need to follow the rules to make it work how it's supposed to work.
The key thing to remember is that your partitioning column must be part of the primary key and you want to try to always use that column in any access against the table so that the optimizer can ignore partitions that shouldn't be affected by the query.
Look up "partitioned table" in MSDN and you should be able to find a more complete tutorial for SQL Server 2005 partitioned tables as well as advice on how to set them up for maximum performance.
A: Are you asking about best practices in terms of database design, or convincing your lead to change his mind? :)
In terms of design... Back in the goode olde days, vertical partitioning was sometimes needed to work around database engine limitations, where the number of columns in a table was a hard limit, like 255 columns. These days the main benefits are purely for performance: putting rarely used columns, or blobs on a separate disk array. But if you're regularly pulling things from both tables it will likely be a loss. It sounds like your lead is suffering from a case of premature optimisation.
In terms of telling your lead is wrong... that requires diplomacy. If he's aware of mutterings of discontent in terms of performance, a benchmark is probably the best way to show the difference.
Create a new physical table somewhere with 'create table t1 as select * from view1' and then run some lengthy batch with the vertically partitioned table and your new table. If it's as bad as you say, the difference should be evident.
But this too may be premature optimisation. Find out what the end-users think of the performance. If the performance is good enough, for some definition of good, then don't fix what ain't broke.
A: There is a definite benefit for table partitioning (regardless whether it's on same or different filegroups /disks). If the partition column is correctly selected, you'll realize that your queries will hit only the required partition. So imagine if you have 100 million records (I've partitioned tables much bigger than that - about 20+ Billion rows) and if for the most part, more than 70% of your data access is only a certain category or timeline or type of data then it helps to keep the most accessed data in a separate partition. Plus you can align the partition with separate file groups with various type of disks (SATA, Fiber channel, SSDs) so that the most accessed/busy data are on the fastest storage and the least/rarely accessed are virtually on slower disks.
Although, in SQL Server there's limited partitioning ability, unlike Oracle. You can choose only one column for partitioning (even in SQL 2008). So you've to choose a column wisely where that column also is part of most of your frequent queries. For the most part, people find it easy to choose to partition by a date column. However although it seems logical to partition that way, if your queries do not have that column as part of the condition, you won't be gaining sufficient benefits from partitioning (in other words, your query will hit all the partition regardless).
It's much easier to partition for data warehouse/data mining type databases than OLTP as most DW database queries are limited by time period.
That's why these days, due to the volume of data being handled by databases, it's wise to design the application in such a way that every query is limited by some broader group, such as time or geographical location, so that when such columns are chosen for partitioning you'll gain maximum benefits.
A: I would disagree with the assumption that nothing can be gained by partitioning.
If the partition data is physically and logically aligned, then the potential IO of queries should be dramatically reduced.
For example, we have a table with a batch field stored as an INT.
If we partition the data by this field and then re-run a query for a particular batch, we should be able to run SET STATISTICS IO ON before and after partitioning and see a reduction in IO.
If we have a million rows per partition and each partition is written to a separate device, the query should be able to eliminate the nonessential partitions.
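A sketch of that check (table and column names are hypothetical):
SET STATISTICS IO ON;

-- After partitioning by batch, the 'logical reads' reported here should
-- drop, because only the partition(s) that can contain batch 42 are read.
SELECT COUNT(*)
FROM   dbo.BatchData
WHERE  batch = 42;

SET STATISTICS IO OFF;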
I've not done a lot of partitioning on SQL Server, but I do have experience of partitioning on Sybase ASE, where this is known as partition elimination. When I have time, I'm going to test out the scenario on a SQL Server 2005 machine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Visual Studio 2005 Memory Usage I find that quite often Visual Studio memory usage will average ~150-300 MB of RAM.
As a developer who very often needs to run with multiple instances of Visual Studio open, are there any performance tricks to optimize the amount of memory that VS uses?
I am running VS 2005 with one add-in (TFS)
A: Upgrade to a 64-bit OS. My instances of VS were taking ~700MB each (very large solutions), and you rapidly run out of room with that.
Everyone on my team that has switched to 64-bit (and 8GB RAM) has wondered why they didn't do it sooner.
A: Minimize and re-maximize the main VS window to get VS to release the memory.
A: From this blog post:
[...]
These changes are all available from the Options dialog (Tools –> Options):
Environment
*General: Disable "Animate environment tools"
*Documents: Disable "Detect when file is changed outside the environment"
*Keyboard: Remove the F1 key from the Help.F1Help command
*Help\Online: Set "When loading Help content" to "Try local first, then online" or "Try local only, not online"
*Startup: Change the "At startup" option to "Show empty environment"
Projects and Solutions
*General: Disable "Track Active Item in Solution Explorer"
Text Editor
*General (for each language you want): Disable "Navigation bar" (this is the toolbar that shows the objects and procedures drop-down lists, allowing you to choose a particular object in your code); disable "Track changes"
Windows Forms Designer
*General: Set "AutoToolboxPopulate" to false; set "EnableRefactoringOnRename" to false.
A: Uninstalling (and re-installing) Visual Assist solved the problem for me.
A: The number 1 thing you can do is switch to Windows 8.
It uses memory sharing / combining if the same DLL or memory page is loaded into multiple processes. Obviously there's a lot of overlap when running two instances of VS.
As you can see I've got 4 Visual studios running and the shared memory column (you need to enable this column for it to be visible) shows how much memory is being shared.
So in Windows 7 this would use 2454MB but I'm saving 600+MB that are shared with the other devenv processes.
Chrome too has a lot of savings (because each browser tab is a new process). So overall I've still got 2GB free where I'd normally be maxed out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Templated delegates I have the following piece of code pattern:
void M1(string s, string v)
{
try
{
// Do some work
}
catch(Exception ex)
{
// Encapsulate and rethrow exception
}
}
The only difference is that the return type and the number and types of parameters to the methods can vary.
I want to create a generic / templated method that handles all of the code except for the "Do some work" part, how can it be achieved.
A: I like the Action delegate approach:
public static void Method(Action func)
{
try
{
func();
}
catch (Exception ex)
{
    // Encapsulate and rethrow the exception, preserving the
    // original as the InnerException.
    throw new ApplicationException("Operation failed.", ex);
}
}
public static void testCall()
{
string hello = "Hello World";
// Or any delegate
Method(() => Console.WriteLine(hello));
// Or
Method(() => AnotherTestMethod("Hello", "World"));
}
public static void AnotherTestMethod(string item1, string item2)
{
Console.WriteLine("Item1 = " + item1);
Console.WriteLine("Item2 = " + item2);
}
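Since the question also mentions varying return types, the same pattern extends to Func<TResult>; a sketch mirroring the Action version above (the exception type is chosen arbitrarily):
public static TResult Method<TResult>(Func<TResult> func)
{
    try
    {
        return func();
    }
    catch (Exception ex)
    {
        // Encapsulate and rethrow, preserving the original exception.
        throw new ApplicationException("Operation failed.", ex);
    }
}

// Usage:
// int sum = Method(() => 1 + 2);
// string greeting = Method(() => "Hello " + "World");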
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Does typeof(myType).TypeHandle use reflection? If I wrote this code:
typeof(myType).TypeHandle
Would it use reflection?
How much different from:
Type.GetType(string).TypeHandle
is it?
A: Well, it really depends on what you mean by "reflection" - which isn't exactly strictly defined.
There are two parts to using typeof in the compiled code. The first is the use of the ldtoken instruction, which is described like this in the CIL spec:
The ldtoken instruction pushes a RuntimeHandle for the specified metadata token. The token shall be one of:
*A methoddef, methodref or methodspec: pushes a RuntimeMethodHandle
*A typedef, typeref, or typespec: pushes a RuntimeTypeHandle
*A fielddef or fieldref: pushes a RuntimeFieldHandle
The value pushed on the stack can be used in calls to reflection methods in the system class library
After this, a call to Type.GetTypeFromHandle is made.
This is all significantly quicker than Type.GetType(string) however, if that's what you were concerned with.
EDIT: I just noticed the TypeHandle part of your question. As far as I can see, the MS compiler doesn't optimise away the call to GetTypeFromHandle and then TypeHandle, even though I guess you really only need the ldtoken call.
Whether all of this counts as "reflection" or not is up to you...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Capturing method state using Reflection Is there a way to use .NET reflection to capture the values of all parameters/local variables?
A: You could get at this information using the CLR debugging API though it won't be a simple couple of lines to extract it.
A: Reflection is not used to capture information from the stack. It reads the Assembly.
You might want to take a look at StackTrace
http://msdn.microsoft.com/en-us/library/system.diagnostics.stacktrace.aspx
Good article here:
http://www.codeproject.com/KB/trace/customtracelistener.aspx
A: Reflection will tell you the type of parameters that a method has but it won't help discover their values during any particular invocation. Reflection doesn't tell you anything about local variables at all.
You need the sort of APIs that the debugger uses to access this sort of info.
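To make the distinction concrete, here is a short sketch (class and method names are invented): reflection can enumerate a method's declared parameter types, but nothing here gives you the argument values of any particular call.
using System;
using System.Reflection;

class Demo
{
    void Work(string name, int count) { }

    static void Main()
    {
        MethodInfo m = typeof(Demo).GetMethod(
            "Work", BindingFlags.Instance | BindingFlags.NonPublic);

        foreach (ParameterInfo p in m.GetParameters())
        {
            // Prints e.g. "name: System.String" - types only, no values.
            Console.WriteLine("{0}: {1}", p.Name, p.ParameterType);
        }
    }
}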
A: I don't think this is possible; you can get the method and its parameters by looking at the StackTrace.
System.Reflection.MethodBase currentMethod = System.Reflection.MethodBase.GetCurrentMethod();
System.Diagnostics.StackTrace sTrace = new System.Diagnostics.StackTrace(true);
for (Int32 frameCount = 0; frameCount < sTrace.FrameCount; frameCount++)
{
    System.Diagnostics.StackFrame sFrame = sTrace.GetFrame(frameCount);
    System.Reflection.MethodBase thisMethod = sFrame.GetMethod();
    if (thisMethod == currentMethod)
    {
        // Valid frame indices run from 0 to FrameCount - 1.
        if (frameCount + 1 < sTrace.FrameCount)
        {
            // The next frame up is the caller of the current method.
            System.Diagnostics.StackFrame prevFrame = sTrace.GetFrame(frameCount + 1);
            System.Reflection.MethodBase prevMethod = prevFrame.GetMethod();
        }
    }
}
A: I don't know how it's possible using reflection, but look at using weaving. SpringFramework.Net allows you to define pointcuts that can intercept method calls. Others probably do it as well.
Here's a link to the "BeforeAdvice" interceptor
http://www.springframework.net/docs/1.2.0-M1/reference/html/aop.html#d0e8139
A: The folks at secondlife suspend scripts and move them between servers. That implies that they have to capture the state of a running script, including the values of variables on the call stack.
Their scripting language runs on mono, an open source implementation of the .NET runtime. I doubt that their solution applies to the regular .NET runtime, but the video of the presentation on how they did it (skip to second half) might still be interesting.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: XML Schema XSD TotalDigits vs. MaxInclusive I have run across an XML Schema with the following definition:
<xs:simpleType name="ClassRankType">
<xs:restriction base="xs:integer">
<xs:totalDigits value="4"/>
<xs:minInclusive value="1"/>
<xs:maxInclusive value="9999"/>
</xs:restriction>
</xs:simpleType>
However, it seems to me that totalDigits is redundant. I am somewhat new to XML Schema, and want to make sure I'm not missing something.
What is the actual behavior of totalDigits vs. maxInclusive?
Can totalDigits always be represented with a combination of minInclusive and MaxInclusive?
How does totalDigits affect negative numbers?
A:
can totalDigits always be represented with a combination of minInclusive and MaxInclusive?
In this case, yes. As you're dealing with an integer, the value must be a whole number, so you have a finite set of values between minInclusive and maxInclusive. If you had decimal values, totalDigits would tell you how many digits in total the value could have.
How does totalDigits affect negative numbers?
It is the total number of digits allowed in the number, and is not affected by decimal points, minus signs, etc. From auxy.com:
The number specified by the value attribute of the <xsd:totalDigits> facet will restrict the total number of digits that are allowed in the number, on both sides of the decimal point.
A: totalDigits is the total number of digits the number can have, including decimal numbers. So a totalDigits of 4 would allow 4.345 or 65.43 or 932.1 or a 4 digit whole integer as in the example above. Same for negative. Any of those previous examples can all be made negative and still validate as a totalDigits of 4.
Max and min inclusive/exclusive limit the range of the numbers. The maxInclusive might seem to be a little redundant in your example, but the minInclusive makes certain the number is greater than 0.
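For contrast, here is a sketch where totalDigits is not redundant with the range facets, on a decimal base type (the type name is made up):
<xs:simpleType name="PriceType">
  <xs:restriction base="xs:decimal">
    <!-- 4 digits total: 4.345, 65.43, 932.1 and 9321 are all valid,
         but 12.345 (5 digits) is not, even though it is inside the range. -->
    <xs:totalDigits value="4"/>
    <xs:minInclusive value="0"/>
    <xs:maxInclusive value="9999"/>
  </xs:restriction>
</xs:simpleType>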
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: C++ alternatives to void* pointers (that isn't templates) It looks like I had a fundamental misunderstanding about C++ :<
I like the polymorphic container solution. Thank you SO, for bringing that to my attention :)
So, we have a need to create a relatively generic container type object. It also happens to encapsulate some business related logic. However, we need to store essentially arbitrary data in this container - everything from primitive data types to complex classes.
Thus, one would immediately jump to the idea of a template class and be done with it. However, I have noticed C++ polymorphism and templates do not play well together. Being that there is some complex logic that we are going to have to work, I would rather just stick with either templates OR polymorphism, and not try to fight C++ by making it do both.
Finally, given that I want to do one or the other, I would prefer polymorphism. I find it much easier to represent constraints like "this container contains Comparable types" - a la java.
Bringing me to the topic of question: At the most abstract, I imagine that I could have a "Container" pure virtual interface that has something akin to "push(void* data) and pop(void* data)" (for the record, I am not actually trying to implement a stack).
However, I don't really like void* at the top level, not to mention the signature is going to change every time I want to add a constraint to the type of data a concrete container can work with.
Summarizing: We have relatively complex containers that have various ways to retrieve elements. We want to be able to vary the constraints on the elements that can go into the containers. Elements should work with multiple kinds of containers (so long as they meet the constraints of that particular container).
Edit: I should also mention that the containers themselves need to be polymorphic. That is my primary reason for not wanting to use templated C++.
So - should I drop my love for Java type interfaces and go with templates? Should I use void* and statically cast everything? Or should I go with an empty class definition "Element" that declares nothing and use that as my top level class in the "Element" hierarchy?
One of the reasons why I love stack overflow is that many of the responses provide some interesting insight on other approaches that I hadn't not have even considered. So thank you in advance for your insights and comments.
A: Polymorphism and templates do play very well together, if you use them correctly.
Anyway, I understand that you want to store only one type of object in each container instance. If so, use templates. This will prevent you from storing the wrong object type by mistake.
As for container interfaces: Depending on your design, maybe you'll be able to make them templated, too, and then they'll have methods like void push(T* new_element). Think of what you'll know about the object when you want to add it to a container (of an unknown type). Where will the object come from in the first place? A function that returns void*? Do you know that it'll be Comparable? At least, if all stored object classes are defined in your code, you can make them all inherit from a common ancestor, say, Storable, and use Storable* instead of void*.
Now if you see that objects will always be added to a container by a method like void push(Storable* new_element), then really there will be no added value in making the container a template. But then you'll know it should store Storables.
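A minimal sketch of that Storable idea (names invented here), keeping the polymorphic container interface from the question:
#include <vector>

// Common ancestor for everything the containers may hold.
class Storable {
public:
    virtual ~Storable() {}  // virtual destructor: safe to delete via base pointer
};

class Container {
public:
    virtual ~Container() {}
    virtual void push(Storable* new_element) = 0;
    virtual Storable* pop() = 0;
};

class VectorContainer : public Container {
    std::vector<Storable*> items_;
public:
    void push(Storable* new_element) { items_.push_back(new_element); }
    Storable* pop() {  // assumes the container is non-empty
        Storable* back = items_.back();
        items_.pop_back();
        return back;
    }
};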
A: The simple thing is to define an abstract base class called Container, and subclass it for each kind of item you may wish to store. Then you can use any standard collection class (std::vector, std::list, etc.) to store pointers to Container. Keep in mind that since you would be storing pointers, you would have to handle their allocation/deallocation.
However, the fact that you need a single collection to store objects of such wildly different types is an indication that something may be wrong with the design of your application. It may be better to revisit the business logic before you implement this super-generic container.
A: First of all, templates and polymorphism are orthogonal concepts, and they do play well together. Next, why do you want a specific data structure? What about the STL or Boost data structures (specifically pointer containers) doesn't work for you?
Given your question, it sounds like you would be misusing inheritance in your situation. It's possible to create "constraints" on what goes in your containers, especially if you are using templates. Those constraints can go beyond what your compiler and linker will give you. It's actually more awkward to do that sort of thing with inheritance, and errors are more likely left for run time.
A: Can you not have a root Container class that contains elements:
template <typename T>
class Container
{
public:
// You'll likely want to use shared_ptr<T> instead.
virtual void push(T *element) = 0;
virtual T *pop() = 0;
virtual void InvokeSomeMethodOnAllItems() = 0;
};
template <typename T>
class List : public Container<T>
{
iterator begin();
iterator end();
public:
virtual void push(T *element) {...}
virtual T* pop() { ... }
virtual void InvokeSomeMethodOnAllItems()
{
for(iterator currItem = begin(); currItem != end(); ++currItem)
{
T* item = *currItem;
item->SomeMethod();
}
}
};
These containers can then be passed around polymorphically:
class Item
{
public:
virtual void SomeMethod() = 0;
};
class ConcreteItem
{
public:
virtual void SomeMethod()
{
// Do something
}
};
void AddItemToContainer(Container<Item> &container, Item *item)
{
container.push(item);
}
...
List<Item> listInstance;
AddItemToContainer(listInstance, new ConcreteItem());
listInstance.InvokeSomeMethodOnAllItems();
This gives you the Container interface in a type-safe generic way.
If you want to add constraints to the type of elements that can be contained, you can do something like this:
class Item
{
public:
virtual void SomeMethod() = 0;
typedef int CanBeContainedInList;
};
template <typename T>
class List : public Container<T>
{
typedef typename T::CanBeContainedInList ListGuard;
// ... as before
};
A: You can look at using a standard container of boost::any if you are storing truly arbitrary data into the container.
It sounds more like you would rather have something like a boost::ptr_container, where anything that can be stored in the container has to derive from some base type, and the container itself can only give you references to the base type.
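A quick sketch of the boost::any route (truly heterogeneous storage, at the cost of a cast on the way out):
#include <iostream>
#include <string>
#include <typeinfo>
#include <vector>
#include <boost/any.hpp>

int main() {
    std::vector<boost::any> bag;
    bag.push_back(42);                    // a primitive type
    bag.push_back(std::string("hello")); // a class type

    for (std::size_t i = 0; i < bag.size(); ++i) {
        // any_cast throws boost::bad_any_cast on a mismatch,
        // so check the held type first.
        if (bag[i].type() == typeid(int))
            std::cout << boost::any_cast<int>(bag[i]) << '\n';
        else if (bag[i].type() == typeid(std::string))
            std::cout << boost::any_cast<std::string>(bag[i]) << '\n';
    }
}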
A: Using polymorphism, you are basically left with a base class for the container, and derived classes for the data types. The base class/derived classes can have as many virtual functions as you need, in both directions.
Of course, this would mean that you would need to wrap the primitive data types in derived classes as well. If you would reconsider the use of templates overall, this is where I would use the templates. Make one derived class from the base which is a template, and use that for the primitive data types (and others where you don't need any more functionality than is provided by the template).
Don't forget that you might make your life easier by typedefs for each of the templated types -- especially if you later need to turn one of them into a class.
A: You might also want to check out The Boost Concept Check Library (BCCL) which is designed to provide constraints on the template parameters of templated classes, your containers in this case.
And just to reiterate what others have said, I've never had a problem mixing polymorphism and templates, and I've done some fairly complex stuff with them.
A: You would not have to give up Java-like interfaces, and you could use templates as well. Josh's suggestion of a generic base template Container would certainly allow you to polymorphically pass Containers and their children around, but additionally you could certainly implement interfaces as abstract classes to be the contained items. There's no reason you couldn't create an abstract IComparable class as you suggested, such that you could have a polymorphic function as follows:
class Whatever
{
void MyPolymorphicMethod(Container<IComparable*> &listOfComparables);
}
This method can now take any child of Container that contains any class implementing IComparable, so it would be extremely flexible.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do you get a directory listing sorted by creation date in python? What is the best way to get a list of all files in a directory, sorted by date [created | modified], using python, on a windows machine?
A: # *** the shortest and best way ***
# getmtime --> sort by modified time
# getctime --> sort by created time
import glob,os
lst_files = glob.glob("*.txt")
lst_files.sort(key=os.path.getmtime)
print("\n".join(lst_files))
A: sorted(filter(os.path.isfile, os.listdir('.')),
key=lambda p: os.stat(p).st_mtime)
You could use os.walk('.').next()[-1] instead of filtering with os.path.isfile, but that leaves dead symlinks in the list, and os.stat will fail on them.
A: There is an os.path.getmtime function that gives the number of seconds since the epoch
and should be faster than os.stat.
import os
os.chdir(directory)
sorted(filter(os.path.isfile, os.listdir('.')), key=os.path.getmtime)
A: Here's my version:
def getfiles(dirpath):
a = [s for s in os.listdir(dirpath)
if os.path.isfile(os.path.join(dirpath, s))]
a.sort(key=lambda s: os.path.getmtime(os.path.join(dirpath, s)))
return a
First, we build a list of the file names. isfile() is used to skip directories; it can be omitted if directories should be included. Then, we sort the list in-place, using the modify date as the key.
A: Here's a one-liner:
import os
import time
from pprint import pprint
pprint([(x[0], time.ctime(x[1].st_ctime)) for x in sorted([(fn, os.stat(fn)) for fn in os.listdir(".")], key = lambda x: x[1].st_ctime)])
This calls os.listdir() to get a list of the filenames, then calls os.stat() for each one to get the creation time, then sorts against the creation time.
Note that this method only calls os.stat() once for each file, which will be more efficient than calling it for each comparison in a sort.
A: For completeness with os.scandir (2x faster than pathlib):
import os
sorted(os.scandir('/tmp/test'), key=lambda d: d.stat().st_mtime)
A: In python 3.5+
from pathlib import Path
sorted(Path('.').iterdir(), key=lambda f: f.stat().st_mtime)
A: I've done this in the past for a Python script to determine the last updated files in a directory:
import glob
import os
search_dir = "/mydir/"
# remove anything from the list that is not a file (directories, symlinks)
# thanks to J.F. Sebastion for pointing out that the requirement was a list
# of files (presumably not including directories)
files = list(filter(os.path.isfile, glob.glob(search_dir + "*")))
files.sort(key=lambda x: os.path.getmtime(x))
That should do what you're looking for based on file mtime.
EDIT: Note that you can also use os.listdir() in place of glob.glob() if desired - the reason I used glob in my original code was that I was wanting to use glob to only search for files with a particular set of file extensions, which glob() was better suited to. To use listdir here's what it would look like:
import os
search_dir = "/mydir/"
os.chdir(search_dir)
files = filter(os.path.isfile, os.listdir(search_dir))
files = [os.path.join(search_dir, f) for f in files] # add path to each file
files.sort(key=lambda x: os.path.getmtime(x))
A: Without changing directory:
import os
path = '/path/to/files/'
name_list = os.listdir(path)
full_list = [os.path.join(path,i) for i in name_list]
time_sorted_list = sorted(full_list, key=os.path.getmtime)
print time_sorted_list
# if you want just the filenames sorted, simply remove the dir from each
sorted_filename_list = [ os.path.basename(i) for i in time_sorted_list]
print sorted_filename_list
A: Update: to sort dirpath's entries by modification date in Python 3:
import os
from pathlib import Path
paths = sorted(Path(dirpath).iterdir(), key=os.path.getmtime)
(put @Pygirl's answer here for greater visibility)
If you already have a list of filenames files, then to sort it inplace by creation time on Windows (make sure that list contains absolute path):
files.sort(key=os.path.getctime)
The list of files you could get, for example, using glob as shown in @Jay's answer.
old answer
Here's a more verbose version of @Greg Hewgill's answer. It conforms most closely to the question's requirements. It makes a distinction between creation and modification dates (at least on Windows).
#!/usr/bin/env python
from stat import S_ISREG, ST_CTIME, ST_MODE
import os, sys, time
# path to the directory (relative or absolute)
dirpath = sys.argv[1] if len(sys.argv) == 2 else r'.'
# get all entries in the directory w/ stats
entries = (os.path.join(dirpath, fn) for fn in os.listdir(dirpath))
entries = ((os.stat(path), path) for path in entries)
# leave only regular files, insert creation date
entries = ((stat[ST_CTIME], path)
for stat, path in entries if S_ISREG(stat[ST_MODE]))
#NOTE: on Windows `ST_CTIME` is a creation date
# but on Unix it could be something else
#NOTE: use `ST_MTIME` to sort by a modification date
for cdate, path in sorted(entries):
print time.ctime(cdate), os.path.basename(path)
Example:
$ python stat_creation_date.py
Thu Feb 11 13:31:07 2009 stat_creation_date.py
A: from pathlib import Path
import os
sorted(Path('./').iterdir(), key=lambda t: t.stat().st_mtime)
or
sorted(Path('./').iterdir(), key=os.path.getmtime)
or
sorted(os.scandir('./'), key=lambda t: t.stat().st_mtime)
where mtime is the modification time.
A: Here's my answer using glob without filter if you want to read files with a certain extension in date order (Python 3).
dataset_path='/mydir/'
files = glob.glob(dataset_path+"/morepath/*.extension")
files.sort(key=os.path.getmtime)
A: this is a basic example to learn from:
import os, stat, sys
import time
dirpath = sys.argv[1] if len(sys.argv) == 2 else r'.'
listdir = os.listdir(dirpath)
os.chdir(dirpath)
for i in listdir:
    data_001 = os.path.realpath(i)
    listdir_stat1 = os.stat(data_001)
print time.ctime(listdir_stat1.st_ctime), data_001
A: Alex Coventry's answer will produce an exception if the file is a symlink to a nonexistent file; the following code corrects that answer:
import os
import time
from datetime import datetime

sorted(filter(os.path.isfile, os.listdir('.')),
       key=lambda p: os.path.exists(p) and os.stat(p).st_mtime
                     or time.mktime(datetime.now().timetuple()))
When the file doesn't exist, now() is used, and the symlink will go at the very end of the list.
A: This was my version:
import os
folder_path = r'D:\Movies\extra\new\dramas' # your path
os.chdir(folder_path) # make the path active
x = sorted(os.listdir(), key=os.path.getctime) # sorted using creation time
for name in x:
    print(name)  # print every entry name inside folder_path, oldest first
A: Here are a simple couple of lines that filter by extension and also provide a sort option
import os
import re

def get_sorted_files(src_dir, regex_ext='.*', sort_reverse=False):  # '.*' matches any extension
    files_to_evaluate = [os.path.join(src_dir, f) for f in os.listdir(src_dir) if re.search(r'.*\.({})$'.format(regex_ext), f)]
files_to_evaluate.sort(key=os.path.getmtime, reverse=sort_reverse)
return files_to_evaluate
A: Add the file directory/folder to the path; if you want a specific file type, add the file extension; then get the file names in chronological order.
This works for me.
import glob, os
path = os.path.expanduser(file_location + "/" + date_file)  # file_location and date_file are your own variables
os.chdir(path)
saved_file=glob.glob('*.xlsx')
saved_file.sort(key=os.path.getmtime)
print(saved_file)
A: Maybe you should use shell commands. In Unix/Linux, find piped with sort will probably be able to do what you want.
A: On my system os.listdir happened to return entries sorted by last modified, in reverse:
import os
last_modified = os.listdir()[::-1]
Be warned, though: os.listdir's order is actually arbitrary and filesystem-dependent, so don't rely on this.
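For a guaranteed ordering, sort explicitly instead; a minimal sketch, newest first:
import os

newest_first = sorted(os.listdir(), key=os.path.getmtime, reverse=True)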
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "197"
} |
Q: Tcp/Ip Socket connections in .NET For my current project, I need to request XML data over a tcp/ip socket connection. For this, I am using the TcpClient class:
Dim client As New TcpClient()
client.Connect(server, port)
Dim stream As NetworkStream = client.GetStream()
stream.Write(request)
stream.Read(buffer, 0, buffer.length)
// Output buffer and return results...
Now this works fine and dandy for small responses. However, when I start receiving larger blocks of data, it appears that the data gets pushed over the socket connection in bursts. When this happens, the stream.Read call only reads the first burst, and thus I miss out on the rest of the response.
What's the best way to handle this issue? Initially I tried to just loop until I had a valid XML document, but I found that in between stream.Read calls the underlying stream would sometimes get shut down and I would miss out on the last portion of the data.
A: You create a loop for reading.
Stream.Read returns int for the bytes it read so far, or 0 if the end of stream is reached.
So, its like:
int bytes_read = 0;
while (bytes_read < buffer.Length)
bytes_read += stream.Read(buffer, bytes_read, buffer.Length - bytes_read);
EDIT: now, the question is how you determine the size of the buffer. If your server first sends the size, that's ok, you can use the above snippet. But if you have to read until the server closes the connection, then you have to use try/catch (which is good idea even if you know the size), and use bytes_read to determine what you received.
int bytes_read = 0;
try
{
int i = 0;
while (0 < (i = stream.Read(buffer, bytes_read, buffer.Length - bytes_read)))
bytes_read += i;
}
catch (Exception e)
{
//recover
}
finally
{
if (stream != null)
stream.Close();
}
A: Read is not guaranteed to fully read the stream. It returns the number of actual bytes read and 0 if there are no more bytes to read. You should keep looping to read all of the data out of the stream.
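The same keep-reading-until-done pattern, sketched in Python purely for illustration (host, port and request bytes are placeholders):
import socket

def read_all(sock, bufsize=4096):
    # recv returns b'' once the peer closes its side of the connection
    chunks = []
    while True:
        chunk = sock.recv(bufsize)
        if not chunk:
            break
        chunks.append(chunk)
    return b''.join(chunks)

with socket.create_connection(('example.com', 80)) as s:
    s.sendall(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
    response = read_all(s)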
A: This is a possible way to do that and get in "response" the response string. If you need the byte array, just save ms.ToArray().
string response;
TcpClient client = new TcpClient();
client.Connect(server, port);
using (NetworkStream ns = client.GetStream())
using (MemoryStream ms = new MemoryStream())
{
ns.Write(request, 0, request.Length);
byte[] buffer = new byte[512];
int bytes = 0;
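// caution: DataAvailable only reports data already buffered locally; it can be
// false while more data is still in flight, so this loop can stop too early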
while(ns.DataAvailable)
{
bytes = ns.Read(buffer,0, buffer.Length);
ms.Write(buffer, 0, bytes);
}
response = Encoding.ASCII.GetString(ms.ToArray());
}
A: I strongly advise you to try WCF for such tasks. It gives you, after a not-so-steep learning curve, many benefits over raw socket communications.
For the task at hand, I agree with the preceding answers: you should use a loop and dynamically allocate memory as needed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Why does subversion chown/recreate files on checkin? I have a personal wiki that I take notes on. The wiki's pages are in a subversion working copy directory, "pages", and I set their permissions to 664, owned by www-data:www-data. My username is in the "www-data" group, so I can checkin and mess with the pages manually.
For a while, I had an issue because every time I ran a checkin, the files would be owned by me:www-data instead of www-data:www-data, and I would no longer be able to change the wiki files through my web interface! I solved the issue by flipping the setgid bit on the "pages" directory, but I'm still confused as to why this happened in the first place:
Every time I check something into subversion, it appears as if svn deletes it and recreates it. Why? Does this behavior support some functionality that I'm not aware of? Is there a way to change it?
Thanks!
A: Set the "sticky" permissions bit.
find -type d -exec chgrp www-data {} +
find -type d -exec chmod g+s {} +
this will encourage checkout's file creation phase to inherit the directories permissions instead of switching to the person whom last edited it.
Edit: d'oh, +s == setgid. Information left here for posterity and other readers.
A: I think you are using it wrong. What you could do is still have everything in subversion and have your local working copy separate from the www directory which you develop on.
Then just have the www working-copy auto-updated (or exported if you don't want the .svn directories in the www folder) for the www-user by some script (perhaps as a post-commit hook) which then sets up permissions accordingly.
Work flow would be:
*
*edit files in /home/youruser/yourwiki-working-copy/
*do svn commit
*
*post-commit hook updates the files in /var/www/ (or wherever the wiki is located)
*goto 1.
This way, you don't have to worry about permissions and you can even have more than one person work on the web site with all the benefits of version control.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I add a shortcut key to Eclipse 3.2 Java plug-in to build the current project? One of the few annoying things about the Eclipse Java plug-in is the absence of a keyboard shortcut to build the project associated with the current resource. Anyone know how to go about it?
A: You can assign a keyboard binding to Build Project by doing the following:
*
*Open up the Keys preferences, Window> Preferences >General>Keys
*Filter by type Build Project
*Highlight the binding field.
You can then choose the binding you want
i.e. Ctrl+Alt+B, P
A: In the Preferences dialog box, under the General section is a dialog box called "Keys". This lets you attach key bindings to many events, including Build Project.
A: I believe Ctrl+B is already configured for this by default. Just need to have an edit window with focus.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Compare SQL Server Reporting Services to Crystal Reports Which of Crystal Reports and SSRS (SQL Server Reporting Services) is better to use?
A: I've used both, I'll add a couple of points to what's already been said:
*
*For simple stuff, I'd recommend SSRS by default. Crystal is a bit bloated and quirky.
*Crystal can easily export to MS Word format (.doc). Customers want this pretty often in my experience.
*If formatting is important, Crystal may be better. For example, SSRS reports can't have more than one type of text in a single text box. Meaning that you can't have, say, a comment at the top of the report that has both italics and normal text. Crystal can do this:
Note: This report contains data from start date to end date inclusive of those dates.
SSRS can't (without multiple overlapping textboxes). I once had a 20-page Word document given to me, to be converted to a report with data for the dozen or so graphs and tables in it. I started out in SSRS, but realised that in Crystal I could just copy and paste the hardcoded bits of the report straight from Word, with coloured headings and all, and saved days of work. So Crystal does have a better "designer" in many respects.
Update:
Apparently both of these issues have been fixed in the current SSRS. Anyone care to comment further on this?
A: I agree with @Carlton partly for the reasons he describes. I also think that reporting services is a more mature product (even though Crystal Reports has been around longer). The Test and deploy model is pretty hearty, and the built-in ability to track report usage is very helpful.
I also find it much easier to design reports in Reporting Services - Microsoft has learned how to build a good IDE, whereas the Crystal IDE has always seemed like an after thought (though that's better than an afterbirth, which is what it used to be).
Edit: Additional thoughts
I also think that in a Windows shop, SSRS offers all kinds of sweet integrations with the OS and SQL Server. You can rely on SQL assemblies for built-in code reuse fairly easily in SSRS, and the integration with the Active Directory security model makes securing your reports very easy.
A: Man...my company has sooo many crystal reports...and the company before that had lots too. From version 8.5 to 11.5. They kind of already have their foot in the door so to speak. I think the CrystalReportViewer is a steaming piece of crap but it does work(for the most part).
After reading some of these answers, I'm switching to SSRS for my next reporting project! The writing is on the wall...MS will drop Crystal from VS and replace with SSRS. The only thing that's going to suck is when MS starts charging for it.
EDIT: Messing around with SSRS today and it looks quite promising. I must say the designer is taking some getting used to...CR Designer has it beat in ease of use. You can tell this is designed for programmers where as CR is geared toward report designers.
EDIT2: SSRS really fails to meet my reporting needs. Designing reports sucks when you want to preview and no parameter prompting available for standalone. Is there a better way to design them...preferably not in VS?
A: Did you think about an alternative? If you want to use the features of Crystal Reports but don't want to pay so much for it you could have a look at Crystal-Clear which is an Java based reporting tool supporting Crystal Reports templates too. It comes with a GUI-designer and data sources are also configurable per system. (Almost ODBC-like, you just set a name for the connection and the connection is configured on the system.)
A: I wonder why no-one has mentioned one big issue with CR - that it just fails in source control or team environments. Correct me if I am wrong, but I really looked very hard for report diff tools. There's one (released about a year ago) but it just doesn't do well - not because it's bad but (I guess) because CR just doesn't expose the report structure correctly or something... I tried to export .rpt to XML but it's clunky and wrong. I even tried to write my own .rpt comparer.
It's not about team development only; even with a single developer it's a nightmare to maintain report versions, and if your customer decides to add a few things or change a few colors, you're now cursed to track every single textbox, since there's absolutely no way to find out what changed.
RDL format is much more clean and open. And this can be a pretty major advantage.
A: I've been using Crystal Report up to version 10 and was always able to do what I wanted successfully along with ASP.NET applications. Its web output is really good, like WYSIWYG, and exports to Excel and PDF are also accurate. Printing is also marvellously correct.
Recently, I've been working with SSRS 2005 for around a year and have witnessed it lacking many things that should have been provided out of the box. SSRS web output varies greatly across browsers and resolutions, enough to make a developer sick. Moreover, the scrolling issues with the report viewer will drive an end-user mad quite early, as it is based on HTML using an IFRAME. (Note: Crystal 13 uses an IFRAME in the web viewer too, which suffers from sporadic text-wrapping and overlapping issues.) The exports are not good at all. You cannot align images left or center in cells and cannot specify background colors for images. You cannot center-align the complete report body. To see what was possible, I played with the rendered HTML for hours and figured out exact replacements that made it work, but these simple fixes were apparently unknown to the SSRS developers, probably because they never used SSRS themselves.
Further, in web applications, you have to put up with the bad out-of-the-box UI for parameters. I simply removed it completely, and the cost of recreating it in ASPX pages made me consider designing tabular reports in DataGrids instead, using ObjectDataSource and database pagination. You cannot lay out the parameters to your needs. Bugs in the parameters section post back complete reports without any changes. Paging with grouping works with a trick, but then sorting fails on the complete dataset. For every medium-to-advanced UI requirement, SSRS costs so much time before you figure out that it is simply not possible. As there are fewer SSRS users, the online community has no good solutions for simple problems. To be fair, the good side of SSRS is its deployment, built-in notifications, caching and configuration, but the UI is no win.
BOTTOM LINE is that I've seen SSRS frustrate you purely due to the nonresponsiveness of the Microsoft support team when they have to say 'sorry! not now' after a month. SSRS 2008 also doesn't have many of these issues fixed right away. Moreover, moving to SSRS 2008 means a complete migration of back-end platforms as well. Keeping in mind that the more a piece of software is used, the more mature it gets over time, Crystal is anyway a much better choice, because with SSRS you soon accumulate the cost of fixing its bugs yourself.
A: I have used both for years.
Crystal reports charges way too much and I try to use SSRS whenever possible.
However, SSRS does not support Firefox or any other browser, only IE; this is a problem.
The reports in Crystal look nicer and the exports are more powerful, users want good exporting to Word.
If you are a java programmer, I would use Jasper Reports, it is free and uses Java language for functions.
A: I've used both (Crystal Reports 2008 and SSRS 2008) because I did not notice this thread in time.
Apart from the setup which was a bit easier with CR, I could not notice a single feature where CR is at least on par with SSRS. Yes, Crystal Reports is really that bad.
In my opinion the absolutely worst part of CR is the IDE. But there are other notable "killer" features too, such as poor SQL performance and horrible-looking graphs (at least in the CR version that comes with VS 2008).
A: I have worked with both CR and SSRS and this is what I found.
Crystal Reports runs in its own memory while SSRS runs in the limited SQL Server memory.
Crystal Report is way too expensive. Recently they slashed their price to $250, I think, as a response to the SSRS 2008 release.
SSRS is free.
The biggest reason why Crystal Report thrives:
You can design 80% of the reports in a project using SSRS. But for the remaining 20% you have to use some other reporting tool. Those 20% of reports are used by none other than top-level managers, directors and CEOs. Their requirements can never be dismissed, and CR does a wonderful job there.
Crystal Report is still COM based, which is a pain in the a**.
Crystal Report is not lacking capabilities or features. It is the workhorse of SAP. But a lot of its classes are protected and don't provide access to programmers. This is intentional. The SAP people are so greedy they want to keep every feature under control and charge an extra fortune for exposing the classes and objects to developers under a special license arrangement. Just debug and quick-watch the ReportDocument object in VS and you will see that, in spite of everything available in the object, you can hardly use any of it in your code!!
As far as GUI & CSS issues are concerned, expecting a COM object designed for precision printing to render correctly in every browser is a moot point, as even a simple div renders differently in different browsers.
I have been working with Crystal Reports for 7 years and cursing it all the time while actively exploring all other alternatives. But I am yet to come across something as flexible as Crystal Report. For the bulk of the work SSRS is good. But for dashboards, complex reports with subreports, balance sheets and trial balances I shall never waste my time in SSRS.
Just try a Google Trends search on Crystal Report. It has been steadily declining for the last 6 years. Surely the future does not look good for CR.
But hey! MS, SAP and Oracle still endorse Crystal Report at the core of their applications!! And no BI product comes cheap.
A: You can deploy an app using Reporting Services by including 3 DLL files. That's a huge benefit. (Note--you have to get one of the 3 DLL files from the GAC.)
With Crystal Reports, you have to install the runtime on each machine that will run the application (either a website or client app).
Reporting Services has all of the features most people need, and the deployment is MUCH easier. I will never user Crystal Reports unless I have to.
A: Since this thread has popped back open, I'll add my two cents. I had to use Crystal for about three years during the version 7 and 8 days. I hated every minute of it. I've seen a little bit of the newer versions and still don't like it.
I dislike it so much that it pains me to say this: from my experience Crystal's better suited than SSRS for complex reports. A coworker and I tried desperately to get a moderately complex report layout to work in SSRS and gave up. My impression of the product -- just my opinion, mind you -- is that it's not quite ready for prime time.
Crystal will make you hate your life and look for another job, but there's a reason it's so pervasive: it works.
A: On the one-hand, Crystal Reports is a steaming pile of expensive and overhyped donkey poo, and on the other hand SSRS actually fulfils all the promises that CR marketing makes - and it's free.
My contempt for CR stems from many years of being obliged to use the horrible thing. There's really no point in detailing the utter odiousness of CR when I can give you references like Clubbing the Crystal Dodo or Crystal Reports Sucks Donkey Dork (not as funny but rather more literate and substantiated with technical details).
Free?! Yup. You don't even have to buy MS SQL Server to get it - you can install SQL Express with Advanced Services. This is available as a download that includes SQL Server Reporting Services. While SQL Express is limited in the number of concurrent users it can support, the following observations are salient:
*
*The licence for SSRS obtained as part of SQL Express only requires that it be deployed as part of SQL Express. There is nothing forbidding connection to other data sources or requiring that a report obtain data from SQL Server.
*The abovementioned version of SSRS has no intrinsic restrictions on user connections. All limitations are imposed on the SQL Express database engine.
*SSRS uses ADO.NET, which includes, out of the box, drivers for Oracle, Jet (Access), OLEDB and ODBC
Thus you can connect the free version of SSRS to any back-end to which you can connect ADO.NET, which includes (for example) MySQL. I am told by Rory in a comment below that this is "not supported". That's true but I can't find anything in the licence that forbids it and while the drivers are not supplied by SSExpress they certainly are supplied by most versions of Visual Studio and you can ship them in your setup kit. This may not be an expressly supported configuration but so what? Even if you did have a full MSSQL licence it would be asking a bit much to expect Microsoft to help you talk to some third party database (not to mention a bit weird).
I use SSRS extensively at work both for inward facing reports and for outward facing reports embedded in ASP.NET applications that provide bureau services to large numbers of paying customers. In our case it happens that the backing store is a licensed copy of Microsoft SQL Server 2008, but this is incidental to the technical merits of our reporting solution.
There is a long list of capabilities that Crystal Reports claims to support but which either don't work or which require a staggeringly expensive licence if you want more than five users. You can't even trust CR to do SQL correctly. SELECT COUNT(*) FROM SOMETABLE WHERE 1=0 should produce a result of zero but it produces one. The built-in query engine is defective, and a team that screws up something a bunch of amateurs can do for free (eg MySQL) has no hope of getting anything you'd describe as performance out of their code.
And they don't. The evil thing leaks memory like a bucket with no bottom, and if you use SQL profiling tools you will find it is spectacularly inefficient.
As for the alleged support, I can personally attest that dialog resize bugs have gone uncorrected for decades after they were first publicly documented. If you get out your credit card and pay the extortionate ransoms demanded (I too would want handsome pay to support such a horror) you will find yourself talking to someone who claims his name is David, but inexplicably pronounces it "Dah-feet", and who doesn't even understand your question, much less have an answer.
The SSRS support situation is fairly similar, but it actually works so you don't really need much.
SSRS, on the other hand, does everything that CR claims to. It is not without bugs, but they are delightfully few, and they seldom survive more than one release cycle.
The SSRS designer UI is hosted within the Visual Studio IDE. It is attractively presented in typical Microsoft style, but more than this it is quite well thought out, incorporating several simple but fundamental departures from traditional report designers. For example, to present tabular data you define a table rather than fiddling about with individual text boxes. As a result you don't have to screw around trying to line them up, and putting borders on them is a trivial stylesheet exercise.
SSRS actually does all the things CR claims to, it's inexpensive, there is extensive reliable technical documentation, it's designed to be extended (also documented) and you can connect it to anything for which you can get an ODBC driver. This is a no brainer.
Some shortcomings of SSRS
*
*It is not obvious how to bind fields in page headers and footers.
*It is not possible (so far as I know) to position relative to the bottom of a page. This is a genuine problem for certain types of report, and one for which I can think of no workaround.
*There's no support for expando horizontal rollups in cross-tabulations.
*There's no direct support for report headers and footers. Use Rectangle objects at top and bottom of the report layout, with pagebreaking properties set appropriately. Or use subreports. The people who complain about this obviously haven't tried very hard.
*Lack of support for overlapping group intervals (the CR grouping system can do this) UPDATE SSRS 2008 R2 now supports this. It's buried in the grouping edit dialog. Look up "group variables" and read this.
It actually looks like overlapping groups can be done with SSRS2005 too, although I never knew that. I wonder did anyone ever crack the bottom-relative positioning issue?
A: Reporting Services is much better in my experience. It is a better environment, but best of all the connections (data sources) are separate from the report and can be shared. This makes for much simpler deployment between environments.
A: I feel like a Martian here, having had an extensive and positive (but sometimes complex) experience with Crystal Reports, which is now completely integrated into our user interface (VBA), where requested report parameters and filters are transparently inherited from the user interface ...
A: If you're considering SSRS and are concerned about the fact that it's "free" but you need to either buy and additional SQL Server license or distribute SQL Express, then you might be interested in Data Dynamics Reports
It offers all that is in SSRS and adds Master Reports, Themes, Calendar data region, Data Visualization (Databar, Sparkline, Iconset, ColorScale, ...), a complete object model for maximum programming flexibility, a royalty-free end-user report designer, a barcode report item, Excel template export and data merging, and much more. You can download a trial from Data Dynamics (now GrapeCity) and try it with a few reports; you will not be disappointed.
A: I've worked with both now and have seen them side by side. Crystal has been good, but expensive over the years. It's clunky, but we've grown accustomed to it and familiar with the interface. I don't work in the LAMP environment; this house works with MS Dynamics and MAS with some pretty large clients.
I love not having to worry about the client install for SSRS. Distribution is far easier, and sharing data sources and report models is working out well.
As far as browsers go, I've seen perfectly rendered SSRS 2008 gauges in Firefox. I have exported those gauges to Excel without issue. I have deployed reports with and without MOSS to phones. The ability to use Windows authentication to deploy reports as well as hide them is fantastic. The report viewer object in VS 2005 and later is sweet.
A: People, please mention which version you are talking about!
For example, the VS2008 built-in free RDLC reporting (the same as SQL Server 2005 Reporting Services) doesn't support binding fields in header and footer, and it is a basic feature!
Now I'm converting a huge report from this VS2008 Reporting / RDLC 2005 to Crystal Report 2008 Basic (which comes with VS2008) because it doesn't have this basic feature.
I am confident that Reporting Services 2.0 / RDLC 2008 (which comes with Visual Studio 2010) and better yet, the newest Reporting Services 3.0 / RDLC 2010 (which comes for FREE in SQL Server 2008 R2 Express With Advanced Services) are better SSRS solutions.
SQL Server R2 Express with Advanced Services (FREE)
http://www.microsoft.com/express/Database/InstallOptions.aspx
Right now I am making a Proof of Concept for Reporting Services 3.0 / RDLC 2010, and will post the results.
Reporting Services (SSRS/RDLC) is always more easy to work, but easy comes with a price. For simple reports, always choose SSRS/RDLC. For complex reports with master-detail, page control and so on, please make a PoC of these scenarios with newest SSRS/RDLC versions (2008,2010) and also with Crystal Reports.
A: For those who are comparing the old Crystal Reports XI and Reporting Service 1.0 please see this 2005 post:
SQL Server Reporting Services and Crystal Reports: A Competitive Analysis
http://www.crystalreportsbook.com/SSRSandCR_Conclusion.asp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: RTTI on objects in Delphi I'm trying to parse objects to XML in Delphi, so I read about calling the object's ClassInfo method to get its RTTI info.
The thing is, this apparently only works for TPersistent objects. Otherwise, I have to specifically add a compiler directive {$M+} to the source code for the compiler to generate RTTI info.
So I happily added the directive, only to find that, even if it did return something from the ClassInfo call (it used to return nil), now I cannot retrieve the class' properties, fields or methods from it. It's like it created the object empty.
Any idea what am I missing here? Thanks!
A: Did you put those properties and methods into the published section?
Besides that, 'classical' RTTI ($TYPEINFO ON) will only get you information on properties, not on methods. You need 'extended' RTTI ($METHODINFO ON) for those.
Good starting point for extended RTTI: David Glassborow on extended RTTI
(who would believe that just this minute I finished writing some code that uses extended RTTI and decided to browse the Stack Overflow a little:))
A: RTTI will only show you published properties,etc. - not just public ones.
Try your code with a TObject and see what happens - if that isn't working, post your code because not everyone is psychic.
A: Have you considered using the TXMLDocument component? It will look at your XML and then create a nice unit of Delphi classes that represents your XML file -- makes it really, really easy to read and write XML files.
A: As for the RttiType problem of returning only nil, this probably occurs for one reason: in your test, you did not instantiate the class at any point. Because the compiler never sees a reference to this class (it is never instantiated), it simply removes it from the RTTI information as a form of optimization. See the two examples below; the behavior differs depending on whether the class is instantiated at some point in your code.
Suppose the following class:
type
TTest = class
public
procedure Test;
end;
and the following code below:
var
LContext: TRttiContext;
LType: TRttiType;
LTest: TTest;
begin
LContext := TRttiContext.Create;
for LType in LContext.GetTypes do
begin
if LType.IsInstance then
begin
WriteLn(LType.Name);
end;
end;
end;
so far, TTest class information is not available for use by RTTI. However, when we create at some point, within the application, then a reference is created for it within the compile, which makes this information available:
var
LContext: TRttiContext;
LType: TRttiType;
LTest: TTest;
begin
LTest := TTest.Create; //Here I'm using TTest.
//Could be in another part of the program
LContext := TRttiContext.Create;
for LType in LContext.GetTypes do
begin
if LType.IsInstance then
begin
WriteLn(LType.Name);
end;
end;
end;
At that point, if you use LContext.FindType('TTest'), there will not be a nil return, because the compiler kept a reference to the class. This explains the behavior you were seeing in your tests.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you add a link that will add an event to your iPhone calendar from safari? This seems like it should be simple but after a couple hours of googling I have not figured it out. I know I can add iCal links using ICS files, but this does not work on the iPhone.
BTW, when I say iPhone I would like it to work on the touch also. Anyone have any luck with this?
A: As of iOS 5, if you create a simple http link to an .ics file, Mobile Safari will offer to open it up in Calendar.
A: According to the iPhone documentation there is no URL scheme for the Calendar application. (There are URL schemes for Mail, Phone, Map, YouTube and iTunes.)
Of course there could be something undocumented, but I'm not sure that using it would be a good idea even if you can find it.
A: You can get iPhone to download the .ics file (using Safari on a mobile web page) by using the webcal protocol:
webcal://website.mobi/mymeeting.ics
A: Of course it is possible but only if your JavaScript application is installed on the device. Look at http://tetontech.wordpress.com to see how to make calls from JavaScript to Objective-C. You can then use this and the Calendar Store Programming Guide from the documentation in Xcode to do what you want.
A: It is not possible. Apple does not want you to do this.
Now, what you could do is bookmark a javascript bookmarklet that checks the user-agent of the browser invoking it, and if the user is on Safari on their laptop or desktop Mac, then invoke the iCal using standard method (ICS file).
The user on iPhone could bookmark your page into a home screen bookmark with a useful (and perhaps custom) icon that said "Event" and title of "Meet Mary at 8:15". They could then, when they have synced their bookmarks, be reminded of the event and invoke it on their desktop browser.
Significant barriers here to educating users how to use this system, but it would work if you could convince people to do it, I think.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How do you post to an iframe? How do you post data to an iframe?
A: Depends what you mean by "post data". You can use the HTML target="" attribute on a <form /> tag, so it could be as simple as:
<form action="do_stuff.aspx" method="post" target="my_iframe">
<input type="submit" value="Do Stuff!">
</form>
<!-- when the form is submitted, the server response will appear in this iframe -->
<iframe name="my_iframe" src="not_submitted_yet.aspx"></iframe>
If that's not it, or you're after something more complex, please edit your question to include more detail.
There is a known bug with Internet Explorer that only occurs when you're dynamically creating your iframes, etc. using Javascript (there's a work-around here), but if you're using ordinary HTML markup, you're fine. The target attribute and frame names aren't some clever ninja hack; although target was deprecated (and therefore won't validate) in HTML 4 Strict or XHTML 1 Strict, it's been part of HTML since 3.2, it's formally part of HTML5, and it works in just about every browser since Netscape 3.
I have verified this behaviour as working with XHTML 1 Strict, XHTML 1 Transitional, HTML 4 Strict and in "quirks mode" with no DOCTYPE specified, and it works in all cases using Internet Explorer 7.0.5730.13. My test case consists of two files, using classic ASP on IIS 6; they're reproduced here in full so you can verify this behaviour for yourself.
default.asp
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>Form Iframe Demo</title>
</head>
<body>
<form action="do_stuff.asp" method="post" target="my_frame">
<input type="text" name="someText" value="Some Text">
<input type="submit">
</form>
<iframe name="my_frame" src="do_stuff.asp">
</iframe>
</body>
</html>
do_stuff.asp
<%@Language="JScript"%><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>Form Iframe Demo</title>
</head>
<body>
<% if (Request.Form.Count) { %>
You typed: <%=Request.Form("someText").Item%>
<% } else { %>
(not submitted)
<% } %>
</body>
</html>
I would be very interested to hear of any browser that doesn't run these examples correctly.
A: This function creates a temporary form, then sends the data using jQuery:
function postToIframe(data,url,target){
$('body').append('<form action="'+url+'" method="post" target="'+target+'" id="postToIframe"></form>');
$.each(data,function(n,v){
$('#postToIframe').append('<input type="hidden" name="'+n+'" value="'+v+'" />');
});
$('#postToIframe').submit().remove();
}
target is the 'name' attr of the target iframe, and data is a JS object:
data={last_name:'Smith',first_name:'John'}
A: An iframe is used to embed another document inside an HTML page.
If the form is to be submitted to an iframe within the form page, then it can be easily achieved using the target attribute of the form tag.
Set the target attribute of the form to the name of the iframe tag.
<form action="action" method="post" target="output_frame">
<!-- input elements here -->
</form>
<iframe name="output_frame" src="" id="output_frame" width="XX" height="YY">
</iframe>
Advanced iframe target use
This property can also be used to produce an ajax-like experience, especially in cases like file upload, where it becomes mandatory to submit the form in order to upload the files.
The iframe can be set to a width and height of 0, and the form can be submitted with the target set to the iframe, and a loading dialog opened before submitting the form. So it mimics an ajax interaction, as control still remains on the input-form JSP, with the loading dialog open.
Example
<script>
$( "#uploadDialog" ).dialog({ autoOpen: false, modal: true, closeOnEscape: false,
open: function(event, ui) { jQuery('.ui-dialog-titlebar-close').hide(); } });
function startUpload()
{
$("#uploadDialog").dialog("open");
}
function stopUpload()
{
$("#uploadDialog").dialog("close");
}
</script>
<div id="uploadDialog" title="Please Wait!!!">
<center>
<img src="/imagePath/loading.gif" width="100" height="100"/>
<br/>
Loading Details...
</center>
</div>
<FORM ENCTYPE="multipart/form-data" ACTION="Action" METHOD="POST" target="upload_target" onsubmit="startUpload()">
<!-- input file elements here-->
</FORM>
<iframe id="upload_target" name="upload_target" src="#" style="width:0;height:0;border:0px solid #fff;" onload="stopUpload()">
</iframe>
A: If you want to change inputs in an iframe then submit the form from that iframe, do this
...
var el = document.getElementById('targetFrame');
var doc, frame_win = getIframeWindow(el); // getIframeWindow is defined below
if (frame_win) {
doc = (frame_win.contentDocument || frame_win.document);
}
if (doc) {
doc.forms[0].someInputName.value = someValue;
...
doc.forms[0].submit();
}
...
Normally, you can only do this if the page in the iframe is from the same origin, but you can start Chrome in a debug mode to disregard the same origin policy and test this on any page.
function getIframeWindow(iframe_object) {
var doc;
if (iframe_object.contentWindow) {
return iframe_object.contentWindow;
}
if (iframe_object.window) {
return iframe_object.window;
}
if (!doc && iframe_object.contentDocument) {
doc = iframe_object.contentDocument;
}
if (!doc && iframe_object.document) {
doc = iframe_object.document;
}
if (doc && doc.defaultView) {
return doc.defaultView;
}
if (doc && doc.parentWindow) {
return doc.parentWindow;
}
return undefined;
}
A: You can use this code; you will have to add the proper params to be passed and also the API URL to get the data.
var allParams = { xyz, abc }
var parentElm = document.getElementBy... // your own element where you want to create the iframe
// create an iframe
var addIframe = document.createElement('iframe');
addIframe.setAttribute('name', 'sample-iframe');
addIframe.style.height = (typeof height !== 'undefined' && height) ? height : "360px";
addIframe.style.width = (typeof width !== 'undefined' && width) ? width : "360px";
parentElm.appendChild(addIframe)
// make an post request
var form, input;
form = document.createElement("form");
form.action = 'example.com';
form.method = "post";
form.target = "sample-iframe";
Object.keys(allParams).forEach(function (elm) {
console.log('elm: ', elm, allParams[elm]);
input = document.createElement("input");
input.name = elm;
input.value = allParams[elm];
input.type = "hidden";
form.appendChild(input);
})
parentElm.appendChild(form);
form.submit();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "309"
} |
Q: Do JavaScript properties calculate on each call? Since length is a JavaScript property, does it matter whether I use
for( var i = 0; i < myArray.length; i++ )
OR
var myArrayLength = myArray.length;
for( var i = 0; i < myArrayLength ; i++ )
A: for(var i = 0, iLen = myArray.length; i < iLen; i++)
See http://blogs.oracle.com/greimer/resource/loop-test.html for benchmarks of various Javascript loop constructs.
A: If myArray is a JavaScript array then it doesn't matter enough for you to worry about it; it's just a property lookup on an object, but then so is variable usage.
If, on the other hand, length is a property exposed by a collection object provided by a browser's DOM (especially IE), then it can be surprisingly expensive. Hence when enumerating such a DOM-provided collection I tend to use:-
for (var i = 0, length = col.length; i < length; i++)
but for arrays I don't bother with that.
A: I think the answer to the intent of your question is, yes, the array.length property gets recalculated each iteration through the loop if you modify the array in the loop. For example, the following code:
var arr = [1,2,3];
for(var i = 0; i < arr.length; i++){
console.debug("i = " + i);
console.debug("indexed value = " + arr[i])
arr.pop();
}
will output:
i = 0
indexed value = 1
i = 1
indexed value = 2
whereas this code:
var arr = [1,2,3];
var l = arr.length;
for(var i = 0; i < l; i++){
console.debug("i = " + i);
console.debug("indexed value = " + arr[i])
arr.pop();
}
will output:
i = 0
indexed value = 1
i = 1
indexed value = 2
i = 2
indexed value = undefined
-J
A: No. It doesn't recalculate on call. It recalculates as required within the Array class.
It'll change when you use push, pop, shift, unshift, concat, splice, etc. Otherwise, it's just a Number -- the same instance every time you call for its value.
But, as long as you don't override it explicitly (array.length = 0), it'll be accurate with each call.
A: The length property is not computed on each call, but the latter version will be faster as you are caching the property lookup. Even with the most up to date JS implementations (V8, TraceMonkey, SquirrelFish Extreme) which use advanced (eg. SmallTalk era ;) ) property caching the property lookup is still at least one extra conditional branch more than your second version.
Array.length is not constant however as JS Arrays are mutable, so push, pop, array[array.length]=0, etc may all change it.
There are other cases, like the live DOM NodeLists that you get from calls like document.getElementsByTagName, which are expected to be live, in which case the length may be recomputed as you iterate.
A: While the second form may be faster:
function p(f) { var d1=new Date(); for(var i=0;i<20;i++) f(); print(new Date()-d1) }
p(function(){for(var i=0;i<1000000; i++) ;})
p(function(){var a = new Array(1000000); for(var i=0;i<a.length; i++) ;})
> 823
> 1283
..it shouldn't really matter in any non-edge case.
A: According to the ECMAScript specification, it just tells how the "length" property should be calculated, but it doesn't say when.
I think that it might be implementation dependent.
If I were to implement it, I would do as Jonathan pointed out, but that in case of the "length" property from the Array objects.
A: If you ever have an idea that it could change during the looping, then of course it should be checked on every iteration ...
-- else it's obviously nuts to ask the object several times, as it would be if you place it in the initialization part of the for-statement ...
for(i=0, iMax=object.length; iMax>i; i++)
-- only in special cases should you think of doing otherwise !-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What's the difference between ASP.Net MVC Routing and the new ASP.Net DynamicData Site routing? I've only started playing with both ASP.Net MVC and the new-to-VS2008 Dynamic Data Website Templates. I note that they both use routing in their URL handling, and I'm given to understand that because of routing, ASP.Net MVC won't work under IIS6. However my DynamicData site works just fine on IIS6.
I've had to temporarily abandon my exploration of ASP.Net MVC for an upcoming project due to the IIS7 requirement, and I'm wondering what the essential difference between the two is under the hood, i.e. what makes DynamicData sites work on IIS6 and MVC not?
A: ASP.NET MVC does indeed work under IIS6 (and IIS5 for that matter) as long as you enable wildcard mappings to ASP.NET. I have deployed MVC applications to production using IIS6, so I can guarantee that it's possible.
The key difference is that all URLs in DynamicData end in a file with an ASPX extension so, regardless of physical existence, the ASP.NET runtime is invoked (because ASPX is associated with ASP.NET), whereas most ASP.NET MVC requests do not have an extension (or have an MVC extension, which is not mapped by default) and thus IIS configuration is required before it will work.
IIS7 works automatically because IIS7 itself is managed and thus there is no separation between IIS/ASP.NET.
A: They all work on IIS6 out-of-the-box, without modifying IIS6. You just have to use some extension that is mapped to asp.net isapi, like .aspx, .ashx or similar.
Also, ASP.NET MVC works on IIS6 without problems! I run it mostly on IIS6, with the .html extension mapped to the asp.net isapi!
Some shared hosting providers are willing to make changes to IIS6 in order to support extension-less URLs. If they don't want to do that, you can ask them to map .html to asp.net; URLs are nice that way and SEO friendly. Just to mention: Google won't mind whether you have .aspx or .html; it's the same as having no extension.
A: ASP.Net MVC and Dynamic Data use the same routing engine contained in System.Web.Routing, so they both work under IIS6. The issue is with mapping requests to ASP.Net (as described by @Richard Szalay). MVC will work fine under IIS6 if a wildcard mapping is used, if the .mvc extension is mapped to ASP.Net, or if another file extension already mapped to ASP.Net (.aspx, .ashx, .axd, etc.) is used in your MVC routes.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's your #1 way to be careful with a live database? For my customer I occasionally do work in their live database in order to fix a problem they have created for themselves, or in order to fix bad data that my product's bugs created. Much like Unix root access, it's just dangerous. What lessons should I learn ahead of time?
What is the #1 thing you do to be careful about operating on live data?
A: *
*Check, recheck, and check again any statement that is doing updates. Even if you think you're just doing a simple, single-column update, sooner or later you will not have enough coffee and forget a 'where' clause, nuking a whole table.
A couple other things I've found helpful:
*
*if using MySQL, enable Safe updates
*If you have a DBA, ask them to do it.
I've found these 3 things have kept me from doing any serious harm.
A: *
*Nobody wants backup but everyone cries for recovery
*Create your DB with foreign key references, because you should:
*make it as hard as possible for yourself to update/delete data in ways that would destroy structural integrity
*If possible, run on a system where you have to commit the changes before you permanently store them (i.e. deactivate autocommit while repairing the db; a sketch follows at the end of this answer)
*Try to identify your problem's classes so that you get an understanding how to fix without trouble
*Get into a routine of restoring backups into a database; always have a second database on a test server at hand so you can just work on that
*Because remember: if something fails totally, you need to be up and running again as fast as possible
Well, that's about all I can think of now. Take the bold passages and you see what's #1 for me. ;-)
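On the autocommit point above, a minimal Python sketch (sqlite3, with a hypothetical person table) of working with autocommit off - nothing is stored until the explicit commit:
import sqlite3

conn = sqlite3.connect('repair.db')  # sqlite3's default isolation level defers writes until commit
cur = conn.cursor()
try:
    cur.execute("UPDATE person SET email = ? WHERE id = ?", ('bob@bob.com', 42))
    print(cur.rowcount, "row(s) would change")
    conn.commit()    # only now is the change permanent
except Exception:
    conn.rollback()  # something went wrong: no harm done
    raise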
A: Do a backup first: it should be the number 1 law of sysadmining anyways
EDIT: incorporating what others have said, make sure your UPDATES have appropriate WHERE clauses.
Ideally, changing a live database should never happen (beyond INSERTs and basic maintenance). Changing the live DB's structure is especially fraught with potential bad karma.
A: Maybe consider not using any deletes or drops at all. Or maybe reduce the user permissions so that only a special DB user can delete/drop things.
A: If you're using Oracle or another database that supports it, verify your changes before doing a COMMIT.
A: Data should always be deployed to live via scripts, which can be rehearsed as many times as required to get them right on dev. When there's dependent data for the script to run correctly on dev, stage it appropriately -- you cannot skip this step if you truly want to be careful.
A: Check twice, commit once!
A: Make your changes to a copy, and when you're satisfied, then apply the fix to live.
A: Often before I do an UPDATE or DELETE, I write the equivalent SELECT.
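One way to turn that habit into code, sketched in Python with sqlite3 (table and WHERE clause are hypothetical); the point is that the UPDATE reuses exactly the WHERE clause the SELECT previewed:
import sqlite3

conn = sqlite3.connect('live.db')
where = "last_login < '2007-01-01'"  # the clause you intend to reuse; fine for hand-run admin scripts

# preview exactly the rows the UPDATE will touch
rows = conn.execute("SELECT id, email FROM person WHERE " + where).fetchall()
print(len(rows), "rows will be affected")

# only after eyeballing the preview, run the UPDATE with the same WHERE
conn.execute("UPDATE person SET active = 0 WHERE " + where)
conn.commit()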
A: Backup or dump the database before starting.
A: To add on to what @Wayne said, write your WHERE before the table name in a DELETE or UPDATE statement.
A: BACK UP YOUR DATA. Learned that one the hard way working with customer databases on a regular basis.
A: Always add a using clause.
A: My rule (as an app developer): Don't touch it! That's what the trained DBAs are for. Heck, I don't even want permission to touch it. :)
A: Different colors per environment: We've setup our PL\SQL developer (IDE for Oracle) so that when you logon to the production DB all the windows are in bright red. Some have gone as far as assigning a different color for dev and test as well.
A: NEVER do an update unless you are in a BEGIN TRAN t1--not in a dev database, not in production, not anywhere. NEVER run a COMMIT TRAN t1 outside a comment--always type
--COMMIT TRAN t1
and then select the statement in order to run it. (Obviously, this only applies to GUI query clients.) If you do these things, it will become second nature to do them and you won't lose hardly any time.
I actually have a "update" macro that types this. I always paste this in to set up my updates. You can make a similar one for deletes and inserts.
begin tran t1
update
set
where
rollback tran t1
--commit tran t1
A: Always make sure your UPDATEs and DELETEs have the proper WHERE clause.
A: To answer my own question:
When writing an update statement, write it out of order.
*
*Write UPDATE [table-name]
*Write WHERE [conditions]
*Go back and write SET [columns-and-values]
Choosing the rows you want to update before you say what values you want to change is much safer than doing it in the other order. It makes it impossible for update person set email = 'bob@bob.com' to be sitting in your query window, ready to be run by a misplaced keystroke, ready to mess up every row in the table.
Edit: As others have said, write the WHERE clause for your deletes before you write DELETE.
A: As an example, I create SQL like this
--Update P Set
--Select ID, Name as OldName,
Name='Jones'
From Person P
Where ID = 1000
I highlight the text from the end up to the Select and run that SQL. Once I verify that it is pulling the record I want to update, I hit shift-up to highlight the Update statement and run that.
Note that I used an alias. I never update a table name explicitly. I always use an alias.
If I do this in conjunction with transactions and rollback/commits, I am really, really safe.
A: My #1 way to be careful with a live database? Don't touch it. :)
Backups can undo damage that you inflict on the database, but you're still likely to introduce negative side effects during that span of time.
No matter how solid you think the script you're working with is, run it through a test cycle. Even if a "test cycle" means running the script against your own instance of the database, make sure you do it. It's much better to introduce defects on your local box than a production environment.
A: BEGIN TRANSACTION;
That way you can rollback after a mistake.
A: Three things I've learned the hard way over the years...
First, if you're doing updates or deletes on live data, first write a SELECT query with the WHERE clause you'll be using. Make sure it works. Make sure it's correct. Then prepend the UPDATE/DELETE statement to the known working WHERE clause.
You never want to have
DELETE FROM Customers
sitting in your query analyzer waiting for you to write the WHERE clause... accidentally hit "execute" and you've just killed your Customer table. Oops.
Also, depending on your platform, find out how to take a quick'n'dirty backup of a table. In SQL Server 2005,
SELECT *
INTO CustomerBackup200810032034
FROM Customer
will copy every row from the entire Customer table into a new table called CustomerBackup200810032034, which you can then delete once you've done your updates and made sure everything's OK. If the worst happens, it's a lot easier to restore missing data from this table than to try and restore last night's backup from disk or tape.
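For anyone not on SQL Server, the same quick snapshot idea as a Python/sqlite3 sketch (table names are hypothetical):
import sqlite3
import time

conn = sqlite3.connect('live.db')
stamp = time.strftime('%Y%m%d%H%M')
# CREATE TABLE ... AS SELECT makes a one-off copy, much like SELECT INTO above
conn.execute("CREATE TABLE CustomerBackup" + stamp + " AS SELECT * FROM Customer")
conn.commit()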
Finally, be wary of cascade deletes getting rid of stuff you didn't intend to delete - check your tables' relationships and key constraints before modifying anything.
A: Make sure you specify a where clause when deleting records.
A: always test any queries beyond select on development data first to ensure it has the correct impact.
A: *
*if possible, ask to pair with someone
*always count to 3 before pressing Enter (if alone, as this will infuriate your pair partner!)
A: If I'm updating a database with a script, I always make sure I put a breakpoint or two at the start of my script, just in case I hit the run/execute by accident.
A: I'll add to recommendations of doing BEGIN TRAN before your UPDATE, just don't forget to actually do the COMMIT; you can do just as much damage if you leave your uncommitted transaction open. Don't get distracted by phones, co-workers, lunch etc when in the middle of updates or you'll find everyone else is locked up until you COMMIT or ROLLBACK.
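One way to make the COMMIT-or-ROLLBACK impossible to forget is to let a context manager do it; a Python sketch with sqlite3 (table name is hypothetical):
import sqlite3

conn = sqlite3.connect('live.db')

# 'with conn:' commits on success and rolls back on any exception,
# so a forgotten COMMIT can't leave locks hanging around
with conn:
    conn.execute("DELETE FROM foo WHERE FooID = ?", (100,))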
A: I always comment out any destructive queries (insert, update, delete, drop, alter) when writing out adhoc queries in Query Analyzer. That way, the only way to run them, is to highlight them, without selecting the commented part, and press F5.
I also think it's a good idea, as already mentioned, to write your where statement first, with a select, and ensure that you are altering the right data.
A: *
*Always back up before changing.
*Always make mods (eg. ALTER TABLE) via a script.
*Always modify data (eg. DELETE) via a stored procedure.
A: Create a read only user (or get the DBA to do it) and only use that user to look at the DB. Add the appropriate permissions to schema so that you can view the content of stored procedures/views/triggers/etc. but not have the ability to change them.
A: The danger of running unintentional Deletes (or inserts, or updates) is always on my mind.
I always add "where 1=2" after them until I'm ready to pull the trigger.
A: I learned this in an interview and thought it was a great idea.
Begin Transaction
Delete from foo where FooID = 100
IF @@RowCount <> 1 Begin
    Rollback Transaction
End
Else Begin
    Commit Transaction
End
A: Never design any databases with cascading deletes. They're evil. If you do have cascading deletes on FKs, you never know how many rows in other referenced tables will be deleted when you delete a row with a delete statement.
That said, you can't assume anything about what other people do. I always do this:
1. Copy database to locally installed db (use dumps). Simply tell management you refuse to work if you cannot have a copy of the full DB on your local computer.
2. Make your script work on your local db, import the dump over and over until the script works perfectly on a cleanly imported dump. Then save the script to a file on disk.
3. Run script on production server.
4. Import script into SCM.
A: Make sure your query has a WHERE clause specified
I was once mid-way through a complex update, got distracted, and finished the query early, forgetting the "where" clause. Then I got that sinking feeling, watching a half-second query rumble on for 3.. The several hours afterwards spent cleaning up customer data were quite the lesson!
A result of which is now when I work on the live db, I structure my queries like:
UPDATE my_table WHERE condition = true;
then go back and put in the columns etc to update. Takes a bit longer to write, but massively reduces my chance of making the same mistake again!
A: Do the exact same update in a Development environment first to make sure it works properly.
A: Turn off AutoCommit in Database IDE if it supports it. I have it turned off in Oracle SQL Developer all the time.
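For what it's worth, command-line SQL*Plus has a corresponding setting (it defaults to off there anyway):
SET AUTOCOMMIT OFF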
A: One quick extra I have not seen but that I do often is: back up the table you are updating. I do this by having a database to hold these backups. I can then write:
select *
into MyBackupDb..PeterTableName2008_09_28BeforeABigUpdate
from PeterTableName
This makes recovery from mistakes much faster down the road (when a full restore is not practical).
A: 1 - Always create a backup before opening a connection when you know you will need to update or insert records.
2 - When writing an update statement ALWAYS write the WHERE clause first then cursor back to the beginning of the line and write the field update portion.
3 - The where clause for #2 should be checked with a select statement first.
A: Go buy Apex SQL Log. If you realize that you really screwed up, or even if it was someone else, you can use the log to reverse the changes.
A: Dev against a backup - make sure the changes/fixes you want to apply come from a script. Fat, clumsy fingers have no place when working with live data. If you can, wait for a maintenance window to apply, and roll back if you have to.
If you can't wait and must apply right after a snapshot/backup, make sure everyone understands how much work might be involved in rolling forward the changes between the last snapshot and the time when you applied the "fix", should it not work out.
A: Use the same process to QA even a simple SQL data fix as you would a code change of any kind. Ours includes being committed into CVS, having (and having executed) a documented test plan, having a code review, and having a change control process (where various members of management and the senior operations engineer review and sign off a change).
We do this for all normal SQL data fixes, even simple ones- the only exception being when something is required to fix a major issue with production RIGHT NOW (e.g. blocking all customers from logging in) - in which case we ensure that there are as many pairs of eyes on the job as possible (typically 3-4 people around one workstation, all of whom can veto any action).
A: Besides making a backup of the database before making any destructive changes, another trick I find useful sometimes is if I know the exact number of records I expect to be changed by whatever I'm doing, then add a limit clause:
delete from customers where id = 5 limit 1;
"id" might be a unique index and I know there's only row that's going to match my where clause, but the limit is additional layer of prevention against accidentally nuking the wrong data. I've gotten in the habit of typing this part first, in hopes of further prevention against accidental keystrokes. I start out with "delete limit 1", then go back and add the other stuff.
A: If you're using SQL Server 2005 or above, you can create a database snapshot that will let you roll back any changes, returning the database to the snapshot's point in time.
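Roughly like this (database and file names are made up; NAME must match the source database's logical data file name):
CREATE DATABASE MyDb_Snapshot ON
( NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snapshot.ss' )
AS SNAPSHOT OF MyDb;
-- and to roll everything back to the snapshot's point in time:
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snapshot';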
A: When updating/deleting only one record, MySQL lets you put "LIMIT 1" at the end so at most one record gets damaged even when the WHERE clause is wrong.
A: I often have to insert, update or delete data on the live production site (as a data analyst, that is probably 40% of my job). Most of the time it is through automated DTS or SSIS packages. However, we are also the people who have to fix problem records or update production when a major client-driven change occurs (such as a re-organization of the sales force). Sometimes the issues are due to bugs in the code, but usually they are a result of strange things the client did to their file, or things the users managed to mess up to save us time fixing a problem, or because they wanted to circumvent the normal process for just this one quick easy change! (Note to users - please don't try to fix things manually that are normally done through an automated process; you do not know what else the process may be doing!!!!!) So sometimes we don't have the luxury of testing a script on dev first, as what is in need of fixing is not on dev.
My rules: Never insert data directly from a file to a production table. Always bring it into a work table so you can view it first. Have checks in place so that if there is bad data in the file, the process will fail before you get to the final step of inserting into production data. Clean up the data first.
If you must delete a large number of records, it can save you if you select those records first into a work table. Then do the delete. That way if things go wrong it is much easier to recover. If you have audit tables, know how to recover data from them quickly. Again if something goes wrong it is much faster to recover from the audit tables than from the tape backup.
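A sketch of the park-then-delete pattern (names made up):
SELECT * INTO Work_OrdersToDelete FROM Orders WHERE Status = 'VOID';
DELETE FROM Orders WHERE Status = 'VOID';
-- recovery, if you were wrong (assuming no identity columns get in the way):
-- INSERT INTO Orders SELECT * FROM Work_OrdersToDelete;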
I write a delete statement like this:
begin tran
delete a
--select (list important fields to see here)
from table1 a where field1 = 'x'
--rollback tran
--commit tran
Note several things about this. First, by using the alias I can't accidentally delete the whole table by only highlighting one line and running the code. By starting the where clause on the same line as the table, I am much less likely to miss highlighting it. If I had joins, I would make sure each line ends in a place where the code won't work unless it goes to the next line. Again, this ensures you get an error instead of an oopsie. Always run the select first and note the number of records affected (and look at the data to make sure it looks like the right records!). Then do not commit unless the number of records is correct when you run the actual delete. Yeah, it's prettier to start the where on a separate line, but it is safer to end each line of a delete so that it will not run unless the whole query is highlighted.
Updates follow similar rules.
A: if you are using oracle 10/11g... Flashback
http://www.oracle.com/technology/deploy/availability/htdocs/Flashback_Overview.htm
It basically maintains a sliding window of undo logs that can be referenced by time or a named marker. It makes it dead simple to undo days' worth of changes in a couple of minutes, without bringing the database down.
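For a single table it boils down to something like this (table name made up; the table needs row movement enabled):
ALTER TABLE orders ENABLE ROW MOVEMENT;
FLASHBACK TABLE orders TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;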
A: To let the DBAs do the work. Coming from a development background, I don't want/need/shouldn't have access to anyone's live database. To me, it is the equivalent of letting a DBA fix a coding issue in the DAL, just because it has "database" in the title. :-)
A: If you are using SQL Server 2005+ Management Studio, you can turn Implicit Transactions ON.
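The same thing can be done per session in plain T-SQL:
SET IMPLICIT_TRANSACTIONS ON; -- every INSERT/UPDATE/DELETE now opens a transaction you must explicitly COMMIT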
A: *
*I always like to have someone look over my shoulder whenever I connect to a live database.
*Have a recent copy of the production database stored somewhere. This will often preclude your need to query the production db.
*If you ever have to do anything to a running db. Document it, and add a fix in as a coded feature available to admins. This way you have one less excuse to point a query tool at your db.
A: Whenever I open a connection to PROD, or switch to a PROD data context, the first thing I always do is add this comment before and after my active working code block:
-- PROD -- PROD -- PROD -- PROD -- PROD -- PROD --
There have been times when I noticed this while my thumb was on the Alt key and my middle finger was halfway to the 'X' key. Whew!
A: If you are using Microsoft SQL Server Management Studio 2008, you can specify which color is used in the info window while executing queries (at the bottom of the SQL Query Editor).
On the Connection Prompt choose Options > Use Custom Color and select RED for production.
A: Backups of the data before you start messing with it just like anything else.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80"
} |
Q: Unauthorized Sharepoint WSDL from ColdFusion 8 How do I solve the error:
Unable to read WSDL from URL: https://workflowtest.site.edu/_vti_bin/Lists.asmx?WSDL.
Error: 401 Unauthorized.
I can successfully view the WSDL from the browser using the same user account.
I'm not sure which authentication is being used (Basic or Integrated).
How would I find that out?
The code making the call is:
<cfinvoke
username="username"
password="password"
webservice="https://workflowtest.liberty.edu/_vti_bin/Lists.asmx?WSDL"
method="GetList"
listName="{CB02EB71-392E-4906-B512-8EC002F72436}"
>
The impression I get is that ColdFusion doesn't like being made to authenticate to get the WSDL.
Full stack trace:
coldfusion.xml.rpc.XmlRpcServiceImpl$CantFindWSDLException: Unable to read WSDL from URL: https://workflowtest.liberty.edu/_vti_bin/Lists.asmx?WSDL.
at coldfusion.xml.rpc.XmlRpcServiceImpl.retrieveWSDL(XmlRpcServiceImpl.java:709)
at coldfusion.xml.rpc.XmlRpcServiceImpl.access$000(XmlRpcServiceImpl.java:53)
at coldfusion.xml.rpc.XmlRpcServiceImpl$1.run(XmlRpcServiceImpl.java:239)
at java.security.AccessController.doPrivileged(Native Method)
at coldfusion.xml.rpc.XmlRpcServiceImpl.registerWebService(XmlRpcServiceImpl.java:232)
at coldfusion.xml.rpc.XmlRpcServiceImpl.getWebService(XmlRpcServiceImpl.java:496)
at coldfusion.xml.rpc.XmlRpcServiceImpl.getWebServiceProxy(XmlRpcServiceImpl.java:450)
at coldfusion.tagext.lang.InvokeTag.doEndTag(InvokeTag.java:413)
at coldfusion.runtime.CfJspPage._emptyTcfTag(CfJspPage.java:2662)
at cftonytest2ecfm1787185330.runPage(/var/www/webroot/tonytest.cfm:16)
at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:196)
at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:370)
at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
at coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:74)
at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
at coldfusion.CfmServlet.service(CfmServlet.java:175)
at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)
at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:42)
at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
at jrun.servlet.FilterChain.doFilter(FilterChain.java:94)
at jrun.servlet.FilterChain.service(FilterChain.java:101)
at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106)
at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:286)
at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203)
at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:320)
at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
at jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:266)
at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
A: CFInvoke can only pass basic authentication, not Windows integrated authentication.
SharePoint won't be able to downgrade to basic authentication since it needs to know who is calling the services to check authentication and authorization of the data being requested.
Your best bet here is to create an ASP.NET proxy service you can call with CFInvoke, which will impersonate the Windows authentication you need to call the SharePoint web service.
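A rough sketch of such a proxy method (C#, ASMX); SharePointLists.Lists stands for a proxy class you would generate from Lists.asmx with wsdl.exe, and the credentials are placeholders:
// inside a class derived from System.Web.Services.WebService
[WebMethod]
public System.Xml.XmlNode GetList(string listName)
{
    SharePointLists.Lists lists = new SharePointLists.Lists();
    lists.Credentials = new System.Net.NetworkCredential("user", "pass", "DOMAIN");
    return lists.GetList(listName);
}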
Another option would be to create a C# COM object which makes the authenticated call and passes the information back to CF when you call the C# COM object from CF.
A: This blog post on cfsilence.com might help. ColdFusion/Sharepoint Integration - Part 1 - Authenticating
What it boils down to:
*
*ColdFusion uses the Apache Axis web service library
*by default, this library can do nothing but basic HTTP authentication
*you can configure Axis to use an alternative HTTP client library (Jakarta Commons)
*this one can do NTLM authentication, no need to change code or IIS authentication scheme
*after a restart of ColdFusion, you should be good to go
A: I know nothing about ColdFusion, but my first suspect would be a simple permission problem rather than anything CF specific.
Does that CF call use Basic or Integrated authentication? Does IIS match?
Can you browse to the WSDL using IE/Firefox and the same user account?
A: If it's a permission error like darpy and Ryan suggest, the easiest thing to do is grant the right permission to ColdFusion. On Windows, ColdFusion runs as the Local System account by default. You can change that by updating the LogOn properties of the Windows Service for ColdFusion.
A: I had the same problem.
Open your IIS and change the login type to Basic.
(In my German Windows it is: "Verzeichnissicherheit" (Directory Security) -> "Steuerung des Anonymen Zugriffs und der Authentifizierung" (anonymous access and authentication control) -> "Bearbeiten" (Edit) -> set the checkbox for "Standardauthentifizierung" (Basic authentication).)
-Kevin
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Win32_LogicalDisk fails for floppies Using an idea from Bob King, I wrote the following method.
It works great on CD's, removable drives, regular drives.
However for a floppy it always return "Not Available". Any ideas?
public static void TestFloppy( char driveLetter ) {
using( var searcher = new ManagementObjectSearcher( @"SELECT * FROM Win32_LogicalDisk WHERE DeviceID = '" + driveLetter + ":'" ) )
using( var logicalDisks = searcher.Get() ) {
foreach( ManagementObject logicalDisk in logicalDisks ) {
var fs = logicalDisk[ "FreeSpace" ];
Console.WriteLine( "FreeSpace = " + ( fs ?? "Not Available" ) );
logicalDisk.Dispose();
}
}
}
A: I'm sorry that I don't have a better answer, but I used to do the same thing (use the ManagementObjectSearcher) and found that every time the code ran, the floppy drive would do some sort of seek/init sequence.
So instead I changed to the below and iterate:
ManagementClass comp = new ManagementClass(scope, new ManagementPath(obj), null);
comp.Get();
ManagementObjectCollection objs = comp.GetInstances();
I want to say this is a known bug in WMI but unfortunately the code comments don't leave any hints :(
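If WMI keeps misbehaving on the floppy, one non-WMI fallback worth testing is System.IO.DriveInfo from .NET 2.0 (a sketch; I haven't verified it against every drive/BIOS combination):
using System;
using System.IO;

public static void TestFloppy2( char driveLetter ) {
    var drive = new DriveInfo( driveLetter.ToString() );
    // IsReady returns false (rather than throwing) when no disk is inserted
    if( drive.IsReady )
        Console.WriteLine( "FreeSpace = " + drive.TotalFreeSpace );
    else
        Console.WriteLine( "Not Available" );
}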
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Handling Managed Delegates in Unmanaged code I know I can get this to technically work but I'd like to implement the cleanest possible solution. Here's the situation:
I have a managed library which wraps an unmanaged C-style library. The C-style library functionality I'm currently wrapping does some processing involving a list of strings. The library's client code can provide a delegate, such that during the list processing, if an "invalid" scenario is encountered, the library can callback to the client via this delegate and allow them to choose the strategy to use (throw an exception, replace the invalid characters, etc.)
What I'd ideally like to have is all of the managed C++ isolated in one function, and then be able to call a separate function which takes only unmanaged parameters so that all of the native C++ and unmanaged code is isolated at that one point. Providing the callback mechanism to this unmanaged code is proving to be the sticking point for me.
#pragma managed
public delegate string InvalidStringFilter(int lineNumber, string text);
...
public IList<Result> DoListProcessing(IList<string> listToProcess, InvalidStringFilter filter)
{
// Managed code goes here, translate parameters etc.
}
#pragma unmanaged
// This should be the only function that actually touches the C-library directly
std::vector<NativeResult> ProcessList(std::vector<char*> list, ?? callback);
In this snippet, I want to keep all of the C-library access within ProcessList, but during the processing, it will need to do callbacks, and this callback is provided in the form of the InvalidStringFilter delegate which is passed in from some client of my managed library.
A: .NET can auto-convert the delegate to a pointer to function if it is declared right. There are two caveats
*
*The C function must be built STDCALL
*The pointer to function does not count as a reference to the object, so you must arrange for a reference to be kept so that the underlying object is not garbage collected (see the sketch below the link)
http://www.codeproject.com/KB/mcpp/FuncPtrDelegate.aspx?display=Print
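To illustrate the second caveat, a minimal sketch (C++/CLI); InvalidStringFilter is the delegate from the question, and Filters::MyFilter is a hypothetical managed method with a matching signature:
// inside some managed function; requires:
// using namespace System;
// using namespace System::Runtime::InteropServices;
InvalidStringFilter^ del = gcnew InvalidStringFilter(&Filters::MyFilter);
GCHandle handle = GCHandle::Alloc(del); // keeps the delegate reachable so the GC won't collect it
IntPtr fp = Marshal::GetFunctionPointerForDelegate(del);
// ... hand fp.ToPointer() to the C library; it may call back through it ...
handle.Free(); // release only once native code can no longer call back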
A: If I am understanding the problem correctly you need to declare an unmanaged callback function in your C++/CLI assembly that acts as the bridge between your C library and managed delegate.
#pragma managed
public delegate string InvalidStringFilter(int lineNumber, string text);
...
static InvalidStringFilter sFilter;
public IList<Result> DoListProcessing(IList<string> listToProcess, InvalidStringFilter filter)
{
// Managed code goes here, translate parameters etc.
sFilter = filter;
}
#pragma unmanaged
void StringCallback(???)
{
sFilter(????);
}
// This should be the only function that actually touches the C-library directly
std::vector<NativeResult> ProcessList(std::vector<char*> list, StringCallback);
As written this code is clearly not thread-safe. If you need thread safety then some other mechanism would be needed to look up the correct managed delegate in the callback, either a ThreadStatic, or perhaps the callback gets passed a user supplied variable you could use.
A: You want to do something like this:
typedef void (__stdcall *w_InvalidStringFilter) (int lineNumber, const char* message);
// "filter" is the delegate instance passed into DoListProcessing
GCHandle handle = GCHandle::Alloc(filter); // keep the delegate alive while native code holds the pointer
w_InvalidStringFilter callback =
    static_cast<w_InvalidStringFilter>(
        Marshal::GetFunctionPointerForDelegate(filter).ToPointer()
    );
std::vector<NativeResult> res = ProcessList(list, callback);
handle.Free(); // safe once ProcessList has returned and no further callbacks can occur
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Where are some good resources for learning the new features of Perl 5.10? I didn't realize until recently that Perl 5.10 had significant new features and I was wondering if anyone could give me some good resources for learning about those. I searched for them on Google and all I found was some slides and a quick overview. Some of the features (to me at least) would be nice if they had more explanation.
Any links would be appreciated.
-fREW
A: Learning Perl, Fifth Edition and later cover 5.10. Other than that, the resources that other people mentioned, including perldelta, are pretty good. I've written a couple of articles about some of the features for The Effective Perler.
The best way to get started is to pick an interesting feature and play around with it. That's how the authors of the guides you'll find figured it out. That's really how you should start learning just about anything in any language.
A: Regex Improvements include named captures: Look Here
A: See Ricardo Signes' slides for his excellent "Perl 5.10 For People Who Aren't Totally Insane."
http://www.slideshare.net/rjbs/perl-510-for-people-who-arent-totally-insane
A: The perldelta manpage has all the nitty-gritty details. There's a brief (but informative) slide presentation, Perl 5.10 for people who aren't totally insane. And a good PerlMonks discussion on the issue.
A: I found this article useful.
This one is more focused on 5.10 Advanced Regular Expressions.
And also A beginners' Introduction to Perl 5.10.
Finally, this excellent summary on why you should start using Perl 5.10 and from which I extracted the following:
*
*state variables No more scoping variables with an outer curly block, or the naughty my $f if 0 trick (the latter is now a syntax error).
*defined-or No more $x = defined $y ? $y : $z, you may write $x = $y // $z instead.
*regexp improvements Lots of work done by dave_the_m to clean up the internals, which paved the way for demerphq to add all sorts of new cool stuff.
*smaller variable footprints Nicholas Clark worked on the implementations of SVs, AVs, HVs and other data structures to reduce their size to a point that happens to hit a sweet spot on 32-bit architectures
*smaller constant sub footprints Nicholas Clark reduced the size of constant subs (like use constant FOO => 2). The result when loading a module like POSIX is significant.
*stacked filetests you can now say if (-e -f -x $file). Perl 6 was supposed to allow this, but they moved in a different direction. Oh well.
*lexical $_ allows you to nest $_ (without using local).
*_ prototype you can now declare a sub with prototype _. If called with no arguments, the sub gets fed $_ (allows you to replace builtins more cleanly).
*x operator on a list you can now say my @arr = qw(x y z) x 4. (Update: this feature was backported to the 5.8 codebase after having been implemented in blead, which is how Somni notices that it is available in 5.8.8).
*switch a true switch/given construct, inspired by Perl 6 (see the sketch after this list)
*smart match operator (~~) to go with the switch
*closure improvements dave_the_m thoroughly revamped the closure handling code to fix a number of buggy behaviours and memory leaks.
*faster Unicode lc, uc and /i are faster on Unicode strings. Improvements to the UTF-8 cache.
*improved sorts inplace sorts performed when possible, rather than using a temporary. Sort functions can be called recursively: you can sort a tree
*map in void context is no longer evil. Only morally.
*less opcodes used in the creation of anonymous lists and hashes. Faster pussycat!
*tainting improvements More things that could be tainted are marked as such (such as sprintf formats)
*$# and $* removed Less action at a distance
*perlcc and JPL removed These things were just bug magnets, and no-one cared enough about them.
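To give a taste of the switch and smart-match items above, a small sketch (values made up):
use feature ':5.10';    # enables say, given/when, state

my $x = 'b';
given ($x) {
    when (/^\d+$/)     { say 'all digits' }
    when ([qw(a b c)]) { say 'one of a, b or c' }  # smart match against an array ref
    default            { say 'something else' }
}

my $editor = $ENV{EDITOR} // 'vi';         # defined-or
sub counter { state $n = 0; return ++$n }  # state variable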
A: There's been a string of articles in Perl Tips about Perl 5.10:
*
*Regular Expressions in Perl 5.10
*Perl 5.10: Defined-or and state
*Switch (given and when)
*Perl 5.10 and Hash::Util::FieldHash
*Smart-match in Perl 5.10
There are also my What's new in Perl 5.10 slides on Perl Training Australia's presentations page, but since they were written before 5.10 was released, some things may have changed slightly. I believe that rjbs' Perl 5.10 for people who aren't totally insane now covers everything my slides used to.
All the best,
Paul
Mandatory bias disclosure: I wrote almost all of the resources mentioned in this post.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: A Question of Greedy vs. Negated Character Classes in Regex I have a very large file that looks like this (see below). I have two basic choices of regex to use on it (I know there may be others, but I'm really trying to compare Greedy and Negated Char Class methods).
ftp: [^\D]{1,}
ftp: (\d)+
ftp: \d+
Note: what if I took off the parens around the \d?
Now + is greedy, which forces backtracking, but the Negated Char Class requires a char-by-char comparison. Which is more efficient? Assume the file is very, very large, so minute differences in processor usage will become exaggerated due to the length of the file.
Now that you've answered that, what if my Negated Char Class were very large, say 18 different characters? Would that change your answer?
Thanks.
ftp: 1117 bytes
ftp: 5696 bytes
ftp: 3207 bytes
ftp: 5696 bytes
ftp: 7200 bytes
A: [^\D]{1,} and \d+ are exactly the same. The regex parser will compile [^\D] and \d into character classes with equal content, and + is just short for {1,}.
If you want lazy repetition you can add a ? at the end.
\d+?
The character classes are usually compiled into bitmaps for ASCII-characters. For Unicode (>=256) it is implementation dependent. One way could be to create a list of ranges, and use binary search on it.
For ASCII the lookup time is constant over the size. For Unicode it is logarithmic or linear.
A: Both your expressions have the same greediness. As others have said here, except for the capturing group they will execute in the same way.
Also, in this case greediness won't matter much for execution speed, since you don't have anything following the \d+. In this case the expression will simply process all the digits it can find and stop when the space is encountered. No backtracking should occur with these expressions.
To make it more explicit, backtracking should occur if you have an expression like this:
\d*123
In this case the parser will engulf all the digits, then backtrack to match the three following digits.
A: My initial tests show that [^\D{1,} is a bit slower than \d+, on a 184M file the former takes 9.6 seconds while the latter takes 8.2
Without capturing (the ()'s) both are about 1 second faster, but the difference between the two is about the same.
I also did a more extensive test where the captured value is printed to /dev/null, with a third version splitting on the space, results:
([^\D]{1,}): ~18s
(\d+): ~17s
(split / /)[1]: ~17s
Edit: split version improved and time decreased to be the same or lower than (\d+)
Fastest version so far (can anyone improve?):
while (<>)
{
    if ($foo = (split / /)[1])
    {
        print $foo . "\n";
    }
}
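If anyone wants to rerun the comparison on their own data, the Benchmark module that ships with Perl makes it easy (the sample line is made up):
use Benchmark qw(cmpthese);

my $line = 'ftp: 1117 bytes';
cmpthese(-2, {
    negated => sub { $line =~ /ftp: ([^\D]{1,})/ },
    digits  => sub { $line =~ /ftp: (\d+)/ },
    'split' => sub { (split / /, $line)[1] },
});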
A: This is kind of a trick question as written because (\d)+ takes slightly longer due to the overhead of the capturing parentheses. If you change it to \d+ they take the same amount of time in my Perl / system.
A: Yeah, I agree with MizardX... these two expressions are semantically equivalent, although the grouping could require additional resources. That's not what you were asking about, though.
A: Not a direct answer to the question, but why not a different approach altogether, since you know the format of the lines already? For example, you could use a regex on the whitespace between the fields, or avoid regex altogether and split() on the whitespace, which is generally going to be faster than any regular expression, depending on the language you're using.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/168531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |