I have a stored procedure with multiple insert/select statements. Let's say I'm using the first insert to populate a "Manager" table. On insert, a ManagerId (incremented automatically) is added, but not referenced in the insert statement. I then wish to use the ManagerId from this table to insert a row into another table, where ManagerId is a foreign key. Sample code as follows.. ``` USE [TEST] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sptInsertNewManager] -- Add the parameters for the stored procedure here @FName varchar(50), @LName varchar(50), @EMail varchar(100), @UserRoleID int, @LANUserID varchar(25), @GroupID int AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for procedure here INSERT INTO [Manager] ([FName], [LName], [Email], [UserRoleID], [LANUserID], [ActiveFlag], [GroupID] ) VALUES (@FName ,@LName ,@EMail ,@UserRoleID ,@LANUserID ,1 ,@GroupID); COMMIT SELECT ManagerId FROM [Manager] AS newManager WHERE LANUserID = @LANUserID; --also insert into Users table. INSERT INTO [dbo].[aspnet_Users] ( [UserId], [UserName], [LoweredUserName], [ManagerId] ) VALUES ( NEWID(), @LANUserID, LOWER(@LANUserID), newManager) END ``` This, obviously, does not work. This was my attempt at solving this. I'm fairly new to SQL, so any help with this problem would be greatly appreciated.
use scope\_identity() after your insert to capture the most recent single identity value from within your current scope: ``` DECLARE @ID int INSERT ...... SELECT @ID=scope_identity() ``` use @ID wherever you need it note: SCOPE\_IDENTITY() is preferred over the older @@IDENTITY because it gives the last Identity value in the current scope, which avoids issues from triggers that insert into log tables (with identities). However, if you need multiple identity values (inserting a set of rows), use OUTPUT and INTO: ``` declare @test table (RowID int identity(1,1) primary key not null, RowValue varchar(10) null) declare @OutputTable table (RowID int not null) insert into @test (RowValue) OUTPUT INSERTED.RowID INTO @OutputTable SELECT 'A' UNION SELECT 'B' UNION SELECT 'C' UNION SELECT 'D' UNION SELECT 'E' select * from @OutputTable ``` the output: ``` (5 row(s) affected) RowID ----------- 1 2 3 4 5 (5 row(s) affected) ```
For MS SQL Server: whenever you insert a record into a table that has an auto-increment column (an Identity column, in ms-sql parlance) you can use this to retrieve the id for the row you inserted: ``` @id = SCOPE_IDENTITY() ``` this ensures that you get the identity column value that your insert produced, not one produced by other users while your code was running (between the insert and reading the identity via *scope\_identity()*)
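Neither answer shows the pattern end to end, so here is a runnable sketch of the same idea in Python (not T-SQL): capture the generated key right after the insert and reuse it as the foreign key. In sqlite3, `cursor.lastrowid` plays the role of `SCOPE_IDENTITY()`; the table layout is a simplified stand-in for the asker's schema, not the real one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Manager (ManagerId INTEGER PRIMARY KEY AUTOINCREMENT, LANUserID TEXT)"
)
conn.execute(
    "CREATE TABLE Users (UserId TEXT, ManagerId INTEGER REFERENCES Manager(ManagerId))"
)

# insert the manager, then capture the key generated for that exact row
cur = conn.execute("INSERT INTO Manager (LANUserID) VALUES (?)", ("jdoe",))
manager_id = cur.lastrowid  # analogous to SELECT @ID = SCOPE_IDENTITY()

# reuse the captured key as the foreign key in the second insert
conn.execute(
    "INSERT INTO Users (UserId, ManagerId) VALUES (?, ?)", ("jdoe", manager_id)
)

row = conn.execute(
    "SELECT ManagerId FROM Users WHERE UserId = ?", ("jdoe",)
).fetchone()
print(row[0])  # prints: 1
```

The point carries over directly: read the generated key from the same statement/scope that produced it, rather than re-querying the table by a non-key column.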
Retrieving auto-incremented column value from table where multiple insert/selects in single stored procedure
[ "sql", "sql-server", "stored-procedures" ]
I am trying to create a php file that adds a user and creates a public\_html directory in linux using the exec() function (php). I can add the following code to the php file: `exec("useradd -d /home/username -m username"); exec("mkdir /home/username/public\_html");` now I have to add public\_html to smb.conf so that public\_html works on Windows. is it possible to edit smb.conf on the command line? of course, I am going to use SSH as root to execute the php file.
Take a look at the [parse\_ini\_file()](http://www.php.net/parse_ini_file) function. Then, if you want to write inside the ini file, try this: ``` function writeini($file,$title,$item,$data) { $source = file($file); $t_match = "/^\[".$title."\]/i"; $l_title = ""; $f_write = ""; $done = ""; $c = 0; foreach ($source as $temp) { $c++; if (preg_match("/^\[.+\]/i",$temp)) { if (preg_match($t_match,$l_title)) { $done = "yes"; $f_write .= $item."=".$data."\n"; } $l_title = $temp; $f_write .= $temp; } elseif (preg_match("/^".$item."=/i",$temp) && preg_match($t_match,$l_title)) { $done = "yes"; $f_write .= $item."=".$data."\n"; $l_title = ""; } elseif ($c == count($source) && $done != "yes") { if (preg_match($t_match,$l_title)) { $f_write .= $temp.$item."=".$data."\n"; } else { $f_write .= $temp."[".$title."]\n".$item."=".$data."\n"; } } else { $f_write .= $temp; } } $ini_write = fopen($file,'w'); fwrite($ini_write,$f_write); fclose($ini_write); } ```
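Since smb.conf is close to ini format, a stock ini parser can do the editing instead of hand-rolled regexes. As an illustrative sketch in Python's `configparser` (not the asker's PHP), with the caveat that real smb.conf files have quirks a generic ini parser may not preserve, and the share names/paths here are made-up examples:

```python
import configparser
import io

parser = configparser.ConfigParser()
# stand-in for reading /etc/samba/smb.conf from disk
parser.read_string("""
[global]
workgroup = WORKGROUP

[homes]
browseable = no
""")

parser["homes"]["browseable"] = "yes"          # change an existing key
parser.add_section("public_html")              # add a new share section
parser["public_html"]["path"] = "/home/username/public_html"

out = io.StringIO()                            # stand-in for writing the file back
parser.write(out)
print(out.getvalue())
```

In a real script you would `parser.read("/etc/samba/smb.conf")` and write back with a file handle, then reload Samba.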
EDIT: After reading Roy Rico's answer I believe I misunderstood the question. You can use either nano or vi. If you're a beginner then nano would be more straight forward. Simply use it like so: ``` nano /etc/samba/smb.conf ``` I believe that ctrl+o will save the file and ctrl+x will exit.
how do I edit a file on command line?
[ "php", "linux", "command-line" ]
I have a class and I want to have some bit masks with values 0,1,3,7,15,... So essentially i want to declare an array of constant int's such as: ``` class A{ const int masks[] = {0,1,3,5,7,....} } ``` but the compiler will always complain. I tried: ``` static const int masks[] = {0,1...} static const int masks[9]; // then initializing inside the constructor ``` Any idea on how this can be done? Thanks!
``` class A { static const int masks[]; }; const int A::masks[] = { 1, 2, 3, 4, ... }; ``` You can give the array a size within the class definition already, but you don't have to. The array will have a complete type at the point of definition (which should be kept in the .cpp file, not in the header), where the size can be deduced from the initializer.
``` // in the .h file class A { static int const masks[]; }; // in the .cpp file int const A::masks[] = {0,1,3,5,7}; ```
declare a array of const ints in C++
[ "c++", "arrays", "constants", "declaration" ]
I'm working on a bookmarklet, and thought I'd throw down a challenge: how to inject an external javascript file from a link in as few characters as possible. Here's the shortest I was able to come up with: ``` javascript:(function(d){d.body.appendChild(d.createElement('script')).src='URL'})(document) ``` That's 88 characters without the URL. Can the Stack Overflow javascript gurus here do better? I'll be accepting the working answer with the fewest characters, so put on your thinking caps! (One thing: the bookmarklet must work in all major browsers. [This](http://pastie.org/309899) is a clever solution, but doesn't work in all major browsers, because it returns a value.)
Assuming that String.prototype isn't contaminated, we can save some chars. ``` javascript:with(document)(body.appendChild(createElement('script')).src='URL')._ ```
``` javascript:void(with(document)body.appendChild(createElement('script')).src='URL') ``` 79 characters. Credit to [Ben Blank](https://stackoverflow.com/users/46387/ben-blank) for the use of `void`.
Injecting a javascript file with the fewest possible characters?
[ "javascript", "code-golf", "bookmarklet", "minimize" ]
I wanted to have some radio buttons that are disabled when the mouse goes over them and enabled again when it goes out (just for fun). ``` <form> <input type="radio" name="rigged" onMouseOver="this.disabled=true" onMouseOut="this.disabled=false"> </form> ``` When the mouse goes over it, it does what it should, but when the mouse goes back off, the button won't re-enable. Also, how do I make it default to enabled, so that when you refresh the page it doesn't stay disabled? Thanks in advance.
The inputs do not fire the mouseout events because they are disabled. So you have to wrap it in a div and catch the div's events. If you want pure javascript, use Phaedrus's example "toggleDisabled" script. If you want jQuery and not-so-newbie friendly: ``` <html> <head> <title>Page</title> <script src="jquery-1.3.2.min.js"></script> <script> $(function() { function toggleDisabled(d) { var disable = d; this.disableChildren = function() { $(this).children().each(function() { this.disabled = d; }); } } $("form .radios").hover(new toggleDisabled(true).disableChildren, new toggleDisabled(false).disableChildren); }); </script> </head> <body> <form> <div class="radios"> <input type="radio" name="rigged" value="1"/> Item One<br /> <input type="radio" name="rigged" value="2"/> Item Two<br /> <input type="radio" name="rigged" value="3"/> Item Three<br /> <input type="radio" name="rigged" value="4"/> Item Four </div> </form> </body> </html> ```
You could achieve the same effect by wrapping your radio buttons in a div tag and setting the onmouseover and onmouseout events. ``` <div id="container" onmouseout="this.disabled=false" onmouseover="this.disabled=true"> <input name="rigged" type="radio"> </div> ``` The above solution only works in IE, for a solution that works in FireFox do the following. ``` <script type="text/javascript"> function toggleDisabled(el) { try { el.disabled = el.disabled ? false : true; } catch(E){ } if (el.childNodes && el.childNodes.length > 0) { for (var x = 0; x < el.childNodes.length; x++) { toggleDisabled(el.childNodes[x]); } } } </script> ``` \*This javaScript function was borrowed from here: [Enable or disable DIV tag and its inner controls using Javascript](http://hspinfo.wordpress.com/2008/09/03/enable-or-disable-div-tag-and-its-inner-controls-using-javascript/) ``` <div id="container" onmouseover="toggleDisabled(this)" onmouseout="toggleDisabled(this)"> <input name="rigged" type="radio"> </div> ```
Javascript OnMouseOver and Out disable/re-enable item problem
[ "javascript", "events" ]
This may be rather noobish but I'm gonna ask anyhow. I have a class that inserts into my database. After the insert is finished, I would like to be able to test whether the insert was successful. Can someone tell me what a good way to do this may be? I call the class like this: ``` foo = new Myclass('my params'); ``` `print_r($foo)` returns an object. Again, all I am interested in is testing whether the insert was successful or not.
From <https://www.php.net/manual/en/function.mysql-db-query.php> ``` mysql_db_query() selects a database, and executes a query on it. ``` Returns a positive MySQL result resource to the query result, or FALSE on error. The function also returns TRUE/FALSE for INSERT/UPDATE/DELETE queries to indicate success/failure. So you can have MyClass set an error flag in the constructor, based on the return value from mysql\_db\_query(), which you then check in your code: ``` $foo = new Myclass('my params'); if ($foo->error) { // error occurred } else { // all is good } ``` (note that the old mysql\_\* API is deprecated and was removed in PHP 7; mysqli or PDO are its replacements) hope this helps!
You could just do `$foo->affected_rows()`, which on an insert will be 1.
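Whichever API you use, the check usually comes down to the affected-row count or the query's return value. A minimal sketch of the affected-rows idea using Python's sqlite3 (not the asker's PHP class), with made-up table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")

cur = conn.execute("INSERT INTO items (name) VALUES (?)", ("widget",))
# one affected row means the single-row insert succeeded
insert_succeeded = (cur.rowcount == 1)
print(insert_succeeded)  # prints: True
```

The same pattern maps onto PHP: have the class record the driver's affected-rows (or error) result after the insert, and expose it for the caller to test.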
Testing a php object
[ "php" ]
I was asked in an interview what the usage of the virtual keyword with a class declaration in C++ is, and I answered that the virtual keyword cannot be used with a class declaration in C++. The interviewer said that it is possible and asked me to test it later. Now that I have checked it myself, I have come to know that this is possible and is not a compiler error. In fact, when I do something like this with a Visual C++ compiler: ``` virtual class Test { int i; }; ``` I get the compiler warning "warning C4091: 'virtual ' : ignored on left of 'Test' when no variable is declared". I haven't been able to find out yet what this warning means, or what the usage of the virtual keyword here is. If there is no helpful usage, then why is this allowed in the first place, and why is it not a compiler error?
That's a bug in VC++. Comeau and gcc both reject the code.
**virtual** can be used when deriving from another class: ``` class Foo : public virtual Bar { } ``` This is used to avoid having multiple versions of the same base class when using multiple inheritance. [Wikipedia](http://en.wikipedia.org/wiki/Virtual_inheritance) has a good article on the subject.
usage of virtual keyword with a class declaration
[ "c++", "virtual" ]
I get a warning in MSVC++ when I try to read an integer from a file and make a bool variable equal it. ``` accessLV[i] = FileRead(file1, i + 1); ``` (accessLV is an array of bools, FileRead is a function I made to decrease the syntax involved in reading from a file, i is because the statement is within a for loop) I've tried using a static\_cast: ``` accessLV[i] = static_cast<bool>(FileRead(file1, i + 1)); ``` But I still get the warning. I've tried doing this (I'm not sure the exact term): ``` accessLV[i] = (bool)FileRead(file1, i + 1)); ``` And the warning is still there. Is there anyway to get rid of the warning without making accessLV an array of ints? NB: this is the syntax of FileRead, if it helps: ``` int FileRead(std::fstream& file, int pos) { int data; file.seekg(file.beg + pos * sizeof(int)); file.read(reinterpret_cast<char*>(&data), sizeof(data)); return data; } ```
How about ``` accessLV[i] = FileRead(file1, i + 1) != 0; ```
What you want to do is basically accessLV[i] = (FileRead(file1, i + 1) != 0)
How can I fix an int-to-bool warning in C++?
[ "c++", "integer", "boolean" ]
I'm programming in Java and my applications make a lot of use of the DB. Hence, it is important for me to be able to test my DB usage easily. What are DB tests all about? For me, they should supply two simple requirements: 1. Verify SQL syntax. 2. More importantly, check that the data is selected/updated/inserted correctly, according to a given situation. Well then, it seems that all I need is a DB. But actually, I'd prefer not, as there are a few difficulties in using a DB for a test: * "Just get yourself a testing DB, how hard could it be?" - Well, at my workplace, having a personal testing DB is pretty much impossible. You have to use a "public" DB, which is accessible to everyone. * "These tests sure ain't fast..." - DB tests tend to be slower than usual tests. It's really not ideal to have slow tests. * "This program should handle any case!" - It becomes somewhat annoying and even impossible to try and simulate each and every case in a DB. For each case a certain number of insert/update queries should be made, which is annoying and takes time. * "Wait a second, how do you know there are 542 rows in that table?" - One of the main principles in testing is to be able to test the functionality in a way different from that of your tested code. When using a DB, there's usually one way to do something, therefore the test is exactly the same as the core code. So, you can figure out I don't like DBs when it comes to tests (of course I will have to get to this at some point, but I'd rather get there later in my testing, after I've found most bugs using the rest of the test methods). But what am I looking for? I'm looking for a way to simulate a DB, a mock DB, using the file system or just virtual memory. I thought that maybe there's a Java tool/package which allows you to simply construct (using a code interface) a DB mock per test, with simulated tables and rows, with SQL verification, and with a code interface for monitoring its status (rather than using SQL). 
Are you familiar with this kind of tool? --- **Edit:** Thanks for the answers! Although I was asking for a tool, you also provided me with some tips concerning the problem :) It will take me some time to check out your offers, so I can't say right now whether your answers were satisfying or not. Anyway, here's a better view of what I'm looking for - imagine a class named DBMonitor, one of whose features is finding the number of rows in a table. Here is imaginary code showing how I would like to test that feature using JUnit: ``` public class TestDBMonitor extends TestCase { @Override public void setUp() throws Exception { MockConnection connection = new MockConnection(); this.tableName = "table1"; MockTable table = new MockTable(tableName); String columnName = "column1"; ColumnType columnType = ColumnType.NUMBER; int columnSize = 50; MockColumn column = new MockColumn(columnName, columnType, columnSize); table.addColumn(column); for (int i = 0; i < 20; i++) { HashMap<MockColumn, Object> fields = new HashMap<MockColumn, Object>(); fields.put(column, i); table.addRow(fields); } this.connection = connection; } @Test public void testGatherStatistics() throws Exception { DBMonitor monitor = new DBMonitor(connection); monitor.gatherStatistics(); assertEquals(((MockConnection) connection).getNumberOfRows(tableName), monitor.getNumberOfRows(tableName)); } String tableName; Connection connection; } ``` I hope this code is clear enough to understand my idea (excuse me for syntax errors, I was typing manually without my dear Eclipse :P). By the way, I use ORM partially, and my raw SQL queries are quite simple and shouldn't differ from one platform to another.
new answer to old question (but things have moved forward a bit):

> How to simulate a DB for testing (Java)?

you don't simulate it. you mock your repositories and you don't test them, or you use the same db in your tests and you test your sqls. All the in-memory dbs are not fully compatible, so they won't give you full coverage and reliability. and never ever try to mock/simulate the deep db objects like connection, result set etc. it gives you no value at all and is a nightmare to develop and maintain

> to have a personal testing DB is pretty impossible. You have to use a "public" DB, which is accessible for everyone

unfortunately a lot of companies still use that model, but now we have [docker](https://www.docker.com/) and there are images for almost every db. commercial products have some limitations (like up to a few gb of data) that are unimportant for tests. you also need your schema and structure to be created on this local db

> "These tests sure ain't fast..." - DB tests tend to be slower than usual tests. It's really not ideal to have slow tests.

yes, db tests are slower, but they are not that slow. I did some simple [measurements](http://blog.piotrturski.net/2017/09/database-integration-tests-are-slow.html) and a typical test took 5-50ms. what takes time is the application startup. there are plenty of ways to speed this up:

* first, DI frameworks (like spring) offer a way to run only some part of your application. if you write your application with a good separation of db and non-db related logic, then in your test you can [start only the db part](https://docs.spring.io/spring-boot/docs/1.5.7.RELEASE/reference/html/boot-features-testing.html#boot-features-testing-spring-boot-applications-testing-autoconfigured-jpa-test)
* each db has plenty of tuning options that make it less durable and much faster. that's perfect for testing. [postgres example](https://www.postgresql.org/docs/9.6/static/non-durability.html)
* you can also put the entire db into tmpfs
* another helpful strategy is to have groups of tests and keep db tests turned off by default (if they really slow your build). this way, if someone is actually working on the db, he needs to pass an additional flag on the cmd line or use the IDE (testng groups and custom test selectors are perfect for this)

> For each case a certain amount of insert/update queries should be made, which is annoying and takes time

the 'takes time' part was discussed above. is it annoying? I've seen two ways:

* prepare one dataset for all your test cases. then you have to maintain it and reason about it. usually it's separated from code. it has kilobytes or megabytes. it's too big to see on one screen, to comprehend and to reason about. it introduces coupling between tests, because when you need more rows for test A, your `count(*)` in test B fails. it only grows, because even when you delete some tests, you don't know which rows were used only by that one test
* each test prepares its own data. this way each test is completely independent, readable and easy to reason about. is it annoying? imo, not at all! it lets you write new tests very quickly and saves you a lot of work in the future

> how do you know there are 542 rows in that table?" - One of the main principles in testing, is to be able to test the functionality in a way different from that of your tested-code

uhm... not really. the main principle is to check if your software generates the desired output in response to specific input. so if you call `dao.insert` 542 times and then your `dao.count` returns 542, it means your software works as specified. if you want, you can call commit/drop cache in between. Of course, sometimes you want to test your implementation instead of the contract, and then you check if your dao changed the state of the db.
but you always test sql A using sql B (insert vs select, sequence next\_val vs returned value etc). yes, you'll always have the problem 'who will test my tests', and the answer is: no one, so keep them simple!

other tools that may help you:

1. [testcontainers](https://www.testcontainers.org/) will help you provide a real db.
2. [dbunit](http://dbunit.sourceforge.net/) - will help you clean the data between tests. cons:
   * a lot of work is required to create and maintain the schema and data, especially when your project is in an intensive development stage.
   * it's another abstraction layer, so if you suddenly want to use some db feature that is unsupported by this tool, it may be difficult to test it
3. [testegration](https://github.com/piotrturski/testegration) - intends to provide you a full, ready-to-use and extensible lifecycle (disclosure: i'm the creator). cons:
   * free only for small projects
   * very young project
4. [flyway](https://flywaydb.org/) or [liquibase](http://www.liquibase.org/) - db migration tools. they help you easily create the schema and all the structures on your local db for tests.
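To make the "local, fast, real db" approach concrete, here is a small sketch in Python (the question is about Java/JUnit, but the shape is the same): each test builds its own schema and data in an in-memory SQLite database, mirroring the imaginary DBMonitor test from the question. Table and column names are taken from that example; everything else is an assumption.

```python
import sqlite3
import unittest

class DBMonitorTest(unittest.TestCase):
    def setUp(self):
        # each test gets its own fresh, in-memory database
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE table1 (column1 INTEGER)")
        self.conn.executemany(
            "INSERT INTO table1 (column1) VALUES (?)",
            [(i,) for i in range(20)],
        )

    def test_number_of_rows(self):
        # the "monitor" query under test, checked against the 20 rows we inserted
        count = self.conn.execute("SELECT COUNT(*) FROM table1").fetchone()[0]
        self.assertEqual(count, 20)

# run the suite programmatically so the sketch is self-contained
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DBMonitorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each test owns its data, there is no coupling between tests and no shared "public" DB to fight over, which is the point the answer above makes.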
Java comes with [Java DB](http://www.oracle.com/technetwork/java/javadb/overview/index.html). That said, I would advise against using a different type of DB than what you use in production unless you go through an ORM layer. Otherwise, your SQL might not be as cross-platform as you think. Also check out [DbUnit](http://dbunit.sourceforge.net/)
How to simulate a DB for testing (Java)?
[ "java", "database", "unit-testing", "testing", "jdbc" ]
Are there extra Java EE libraries required to run Hibernate standalone Java applications, or is the standard SDK sufficient?
Hibernate works just fine on Java SE. It can be used as a JPA implementation on Java EE, but JPA, as well, can be used just fine on Java SE.
Java EE is not required to run Hibernate. I think a good place to start is with the Hibernate documentation. I think it is very intuitive and easy to follow. <http://docs.jboss.org/hibernate/stable/core/reference/en/html/tutorial-firstapp.html> You can also find the jar files to download here: <https://www.hibernate.org/344.html>
Is Java EE required for Hibernate
[ "java", "hibernate", "jakarta-ee" ]
In a SQL database I got storage information for each user, for each customer. I need to write a stored procedure that sums the disc usage (MB) for the customers users. When I got the total sum of all users for a single customer (totalDiscUsage), I need to perform a calculation (simple example): ``` x = numberOfUsers * 200 y = (totalDiscUsage - x) / (10 * 5) ``` After that, I need to write y to the database, and do that for all customers. My question is how I can do this the best way? Maybe using a cursor to go through each customer, perform the sum and calculation, and write the result to the database? Would that be a good solution? Thanks in advance. Help will be much appreciated!
Please - do not go around using cursors again! :-) SQL is set-based - avoid cursors whenever you can! And here you can - easily. So for each customer, you need to determine the number of users first, and then do a simple calculation, and update the customer. My suggestion would be: * create a little function that calculates the number of users for a given customer * create a second little function to do the same to calculate the total disk usage * write your stored proc as a simple update statement that does the calculation ``` CREATE PROCEDURE dbo.UpdateCustomers AS BEGIN UPDATE Customers SET y = (dbo.GetTotalDiskUsage(CustomerID) - dbo.GetTotalUsers(CustomerID) * 200) / 50 END ``` Seems too easy, almost :-) Marc
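The set-based UPDATE can also be written with a correlated subquery instead of helper functions. A runnable sketch using SQLite from Python; the table and column names here are assumptions, not the asker's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, y REAL);
    CREATE TABLE Users (CustomerID INTEGER, DiscUsageMB INTEGER);
    INSERT INTO Customers (CustomerID, y) VALUES (1, NULL), (2, NULL);
    INSERT INTO Users VALUES (1, 500), (1, 300), (2, 1000);
""")

# y = (totalDiscUsage - numberOfUsers * 200) / (10 * 5),
# computed for every customer in one statement - no cursor needed
conn.execute("""
    UPDATE Customers SET y = (
        SELECT (SUM(u.DiscUsageMB) - COUNT(*) * 200) / 50.0
        FROM Users u
        WHERE u.CustomerID = Customers.CustomerID
    )
""")

rows = dict(conn.execute("SELECT CustomerID, y FROM Customers"))
print(rows)  # prints: {1: 8.0, 2: 16.0}
```

Customer 1 has two users totalling 800 MB, so y = (800 - 400) / 50 = 8.0; customer 2 has one user with 1000 MB, so y = (1000 - 200) / 50 = 16.0.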
You can do an insert with a select query for the values such as: ``` INSERT INTO DiskUsage SELECT SUM(Users.Usage) FROM Customers INNER JOIN Users ON somefield and somefield ``` or something like that. not 100% sure if that will work for your situation since I'm not really clear on it from your question.
Sum and calculations for each customer in SQL stored procedure
[ "sql", "stored-procedures" ]
``` #include <iostream> #include <string> #include <fstream> using namespace std ; string strWord( int index , string line) { int count = 0; string word; for ( int i = 0 ; i < line.length(); i++) { if ( line[i] == ' ' ) { if ( line [i+1] != ' ') { count ++; if ( count == index) { return word; } word =""; } } else { word += line[i]; } } } int main ( ) { ifstream inFile ; inFile.open("new.txt"); string line , id; cout <<"Enter id : "; cin >>id; while(!inFile.eof() ) { getline ( inFile , line ); if ( strWord ( 1, line ) == id ) { cout <<strWord ( 2 , line ) <<endl; break; } } system("pause"); } ``` Question is: can someone explain this to me? I don't get what it is doing; I mean, I get the concept, but what is each line doing?
You wanted a comment of each line ``` // function that returns a word from 'line' with position 'index' // note that this is not a zero based index, first word is 1, // second is 2 etc .. string strWord(int index, string line) { int count = 0; // number of read words string word; // the resulting word for (int i = 0 ; i < line.length(); i++) { // iterate over all characters in 'line' if (line[i] == ' ') { // if this character is a space we might be done reading a word from 'line' if (line[i+1] != ' ') { // next character is not a space, so we are done reading a word count++; // increase number of read words if (count == index) { // was this the word we were looking for? return word; // yes it was, so return it } word =""; // nope it wasn't .. so reset word and start over with the next one in 'line' } } else { // not a space .. so append the character to 'word' word += line[i]; } } } int main( ) // main function of the program, execution starts here { ifstream inFile; // construct input file stream object inFile.open("new.txt"); // associate the stream with file named "new.txt" string line, id; // cout << "Enter id : "; // write "Enter id :" to console cin >> id; // read input from console and put the result in 'id' while (!inFile.eof()) { // do the following as long as there is something to read from the file getline(inFile, line); // read a line from the file and put the value into 'line' if (strWord(1, line) == id) { // if the first word in 'line' equals 'id' .. cout << strWord(2, line) << endl; // prints the second word in 'line' break; // exits the while loop } } system("pause"); // pause the program (should be avoided) } ```
I did not read very carefully, but it looks like it prints out the second word of the line for which the first word is the one entered by user.
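For comparison, here is a sketch of the same logic in Python (not the asker's C++): find the line whose first word matches an id and return the second word. Splitting on whitespace replaces the hand-rolled `strWord` function; the sample lines are made-up stand-ins for the contents of `new.txt`.

```python
def second_word_for_id(lines, user_id):
    """Return the second word of the first line whose first word equals user_id."""
    for line in lines:
        words = line.split()  # split on any run of whitespace
        if len(words) >= 2 and words[0] == user_id:
            return words[1]
    return None  # id not found

lines = ["1001 Alice", "1002 Bob"]  # stand-in for new.txt
print(second_word_for_id(lines, "1002"))  # prints: Bob
```

This also sidesteps two weaknesses of the original: `strWord` never returns a value when the loop runs off the end of the line, and it relies on single spaces between words.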
C++ code to get line of file and read the second word of the line?
[ "c++" ]
As part of a test bench I'm building, I'm looking for a simple class to calculate a histogram of integer values (the number of iterations taken for an algorithm to solve a problem). The answer should be called something like this: ``` Histogram myHist = new Histogram(); for( uint i = 0; i < NUMBER_OF_RESULTS; i++ ) { myHist.AddValue( some_result ); } for( uint j = 0; j < myHist.NumOfBins; j++ ) { Console.WriteLine( "{0} occurred {1} times", myHist.BinValues[j], myHist.BinCounts[j] ); } ``` I was surprised a bit of googling didn't turn up a neat solution, but maybe I didn't search for the right things. Is there a generic solution out there, or is it worth rolling my own?
You could use SortedDictionary ``` uint[] items = new uint[] {5, 6, 1, 2, 3, 1, 5, 2}; // sample data SortedDictionary<uint, int> histogram = new SortedDictionary<uint, int>(); foreach (uint item in items) { if (histogram.ContainsKey(item)) { histogram[item]++; } else { histogram[item] = 1; } } foreach (KeyValuePair<uint, int> pair in histogram) { Console.WriteLine("{0} occurred {1} times", pair.Key, pair.Value); } ``` This will leave out empty bins, though
Based on BastardSaint's suggestion I came up with a neat and fairly generic wrapper: ``` public class Histogram<TVal> : SortedDictionary<TVal, uint> { public void IncrementCount(TVal binToIncrement) { if (ContainsKey(binToIncrement)) { this[binToIncrement]++; } else { Add(binToIncrement, 1); } } } ``` So now I can do: ``` const uint numOfInputDataPoints = 5; Histogram<uint> hist = new Histogram<uint>(); // Fill the histogram with data for (uint i = 0; i < numOfInputDataPoints; i++) { // Grab a result from my algorithm uint numOfIterationsForSolution = MyAlorithm.Run(); // Add the number to the histogram hist.IncrementCount( numOfIterationsForSolution ); } // Report the results foreach (KeyValuePair<uint, uint> histEntry in hist.AsEnumerable()) { Console.WriteLine("{0} occurred {1} times", histEntry.Key, histEntry.Value); } ``` Took me a while to work out how to make it generic (to begin with I just overrode the `SortedDictionary` constructor which meant you could only use it for `uint` keys).
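As a cross-language aside (the question is about C#), the same frequency-count idea is built into Python's standard library; this sketch mirrors the dictionary-based approach from both answers, using the first answer's sample data:

```python
from collections import Counter

results = [5, 6, 1, 2, 3, 1, 5, 2]  # sample data from the first answer
hist = Counter(results)             # maps value -> number of occurrences

for value, count in sorted(hist.items()):
    print(f"{value} occurred {count} times")
```

Like the `SortedDictionary` version, this leaves out empty bins; a bin that never occurred simply has no entry (`Counter` reports 0 for it on lookup).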
Simple histogram generation of integer data in C#
[ "c#", "histogram" ]
I have the following code snippet where some strings are initialized in the `if` block: ``` String serialmask = request.getParameter( "serialmask"); String serialincrement = request.getParameter( "serialincrement"); if (serialmask == "1") { String tserialmask = "aaa########"; } else { String tserialmask = ""; } if (serialincrement == "1") { String tserialincrement = "aaa^^^^^^^^"; } else { String tserialincrement = ""; } out.println( itemimport( partnumber, itemcost, itemlistprice, itemdescription, PurchProdLineKey, UnitMeasKey, itemclasskey, trackmethod, tserialmask, tserialincrement ) ); ``` The error I'm getting is `"cannot find symbol" symbol : variable tserialmask` in the `out.println(itemimport(....tserialmask,tserialincrement));` statement. I tried declaring the variables outside of the `if` block and this seems to bring on even more errors saying it's already been declared.
You need to declare the variable first, but then just assign it. Here's the version for `tserialincrement` (the same is true for `tserialmask`) ``` String tserialincrement; if (serialincrement == "1") { tserialincrement = "aaa^^^^^^^^"; } else { tserialincrement = ""; } ``` However, there are two things wrong with this: * You're using == on a string, which is a bad idea in almost all situations; use `equals` * You can do it in one statement (per variable) with the conditional operator: ``` String tserialmask = "1".equals(serialmask) ? "aaa########" : ""; String tserialincrement = "1".equals(serialincrement) ? "aaa^^^^^^^^" : ""; ``` In addition, I'd suggest nicer variable names, using camel casing (e.g. `serialMask`) and something more useful than just "t" as a prefix. (What does that mean?)
You need to declare tserialmask and tserialincrement outside of the if/else blocks. Otherwise, they go out of scope when that block ends. ``` String serialmask = request.getParameter( "serialmask"); String serialincrement = request.getParameter( "serialincrement"); String tserialmask; String tserialincrement; if (serialmask == "1") { tserialmask = "aaa########"; } else { tserialmask = ""; } if (serialincrement == "1") { tserialincrement = "aaa^^^^^^^^"; } else { tserialincrement = ""; } out.println(itemimport(partnumber,itemcost,itemlistprice,itemdescription,PurchProdLineKey,UnitMeasKey,itemclasskey,trackmethod,tserialmask,tserialincrement)); ```
Java error: can't find symbol?
[ "java", "string" ]
I don't get [FirePHP](http://www.firephp.org/). FirePHP is an extension for the Firefox add-on Firebug. I don't get why this should be used at all. How can this tool give me more than your everyday PHP debugger? Seriously? I don't understand how to implement this; can someone please explain... a video with narration, perhaps?
*In deep narrator voice* FirePHP, with a little work, lets you send information to the Firebug console. It *can* be rather detailed. See <http://www.christophdorn.com/Blog/2009/04/03/how-to-integrate-firephp-for-ajax-development/> for the best info on it. Is it better than other debugging tools? Debatable. Honestly, I don't really use it much.
According to what I've read about it: Say you want to know the value of some random variable in your PHP script. Your server sends a special header containing that info, and Firebug reads it. The advantage of the process is that the actual response (headers excluded) is the same, so you can keep the same response handling code. I guess it makes Ajax'd PHP scripts easier to debug.
Trouble understanding Firephp
[ "", "php", "debugging", "firephp", "" ]
I'm attempting to measure test coverage for the first time using [gcov](http://gcc.gnu.org/onlinedocs/gcc/Gcov.html). Now that I'm past the initial learning curve, things seem to be going well, except for one little snag. I expect that it boils down to a lack of understanding on my part, so I'm hoping someone familiar with gcov can explain what's going on. The issue is that I have one particular header file showing 0% coverage. However, the *only* thing in that header is a class declaration for a well-used class. In addition, the implementation file for the class shows 100% coverage, and I can point to 5 different tests where I know for a fact that the class is being successfully instantiated. So why would gcov report 0% coverage on the class, and how can I fix it? -- edit -- As lacqui pointed out below, if the header has no executable code, it shouldn't be included in coverage calculations. And in fact, out of all of my classes, that particular header file is the only one that gcov is trying to generate coverage information for. I've done a line-by-line comparison with the other headers, and the biggest difference I found was that this particular header does not have an explicit constructor or destructor, while the others do. In an attempt to minimize the differences between the files, I added an explicit ctor and dtor to the problem class. However, this did not make any difference.
It looks like I've sorted out the problem. As I expected, it's more of a lack-of-experience issue than anything. As it turns out, gcov was only finding a subset of the available tracefiles, and was therefore reporting only part of the total results. Finding and adding the rest of the tracefiles resolved the issue. For any who are curious, the problematic header file still shows up in the output. In fact, all of the other header files do too, even though they contain no executable code. However, they all show 90+% coverage.
C++ creates an implicit destructor even if you haven't declared one explicitly. If you don't destroy your object before the program ends, this implicit destructor never gets called. gcov shows this by marking a '0' counter on the line of the class declaration.
Why does gcov report 0% coverage on a header file for a well used class?
[ "", "c++", "unit-testing", "code-coverage", "gcov", "" ]
I have a table containing 2 entries. Something like ``` CREATE TABLE `db`.`main` ( `id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`) ); ``` The id for these 2 entries are automatically generated primary keys. I have another table with a rule linking ``` CREATE TABLE `db`.`day` ( `main_id` int(10) unsigned NOT NULL, `day` tinyint(4) NOT NULL, CONSTRAINT `fk_db_main` FOREIGN KEY (`main_id`) REFERENCES `main` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION ); ``` now I can successfully get a result using ``` SELECT * FROM main where id='9'; ``` but when I try to run ``` INSERT INTO day (main_id, day) VALUES (9, 0); ``` I get "Cannot add or update a child row: a foreign key constraint fails (`db`.`day`, CONSTRAINT `fk_db_main` FOREIGN KEY (`main_id`) REFERENCES `main` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION) (1452)" Any suggestions on what I am missing with the insert? \*\*I hadn't listed the actual cause of the issue while asking the question. The actual cause was that the main db table was in MyISAM, and the InnoDB tables couldn't create a foreign key connecting to it. In short, MyISAM doesn't support foreign keys, even when they are coming from other tables.
I hadn't listed the actual cause of the issue while asking the question. The actual cause was that the main db table was in MyISAM, and the InnoDB tables couldn't create a foreign key connecting to it. In short, MyISAM doesn't support foreign keys, even when they are coming from other tables.
The insert works for me if I remove the `db.` parts in the `CREATE TABLE` statements (and insert into `main` a row with an `id` of 9). Maybe the problem is that you're using that `db.` prefix inconsistently, i.e. after `TABLE` but not in the `CONSTRAINT` clause...?
Suggestions for insert with Foreign key constraint in mysql
[ "", "sql", "mysql", "" ]
In a .NET C# program, is it easy to transition from FTP to SFTP? I'm trying to get a sense of how much time it would take the contractor to make the transition. My personal experience is mostly with PHP, so I have no idea. Basically, what I'm talking about is: what steps would have to be made? Obviously, different commands, but would anything else change in the code itself? Like do the commands require different formats, etc.? Also, if anyone has a list of all the .NET/C# FTP and SFTP commands, that would be really helpful. Clarification, as requested: The program is uploading extremely small files (20 KB) to a server. By format, I mean visually, because I was wondering about a find/replace job.
One thing that you'd need to consider is how well your current code is written. If your existing FTP implementation is horribly designed spaghetti code then converting it to SFTP may be next to impossible and take way longer than you'd like. Without knowing the current state of the code, it would be difficult for anyone to make a good estimation. And even if you do get an estimation from people on this site, I wouldn't recommend trusting it (even though the people on this site are great) since without all the information in front of them it will be next to impossible for anyone to come up with a reliable estimate. Perhaps you should consider hiring a good consultant or business analyst to do a thorough estimate for you.
This is a pretty vague question. You haven't told us what the C# program is doing with FTP. Is it a server, is it a client, is it doing directory listings, is it uploading 100 GB files? What library is it using? According to [this forum post](http://social.msdn.microsoft.com/forums/en-US/netfxnetcom/thread/136ad4ce-f625-471d-b202-668bf7866889) , there is no built-in support for SFTP in .NET, so you would have to use third-party libraries such as [SharpSSH](http://www.tamirgal.com/home/dev.aspx?Item=sharpSsh) or [Granados SSH](http://www.routrek.co.jp/en/product/varaterm/granados.html). I don't really know what you mean, "do the commands require different formats". Obviously, the code will use different: 1. Libraries 2. Types 3. Wire protocol. It will obviously *appear* somewhat similar, thanks to the abstraction of the libraries. I suggest you provide more information, and a clearer question.
In a .NET C# program, is it easy to transition from FTP to SFTP?
[ "", "c#", ".net", "ftp", "sftp", "transition", "" ]
I'm writing some browser side dynamic functionality and using HTTP Basic Auth to protect some resources. The user experience is very important and is highly customized. Here's a simple test JQuery method that eventually will test if a user has supplied the right credentials in a form: ``` $(document).ready(function() { $("#submit").click(function() { var token = Base64.encode($('#username').val() + ':' + $('#password').val()); $.ajax({ url: '/private', method: 'GET', async: false, beforeSend: function(req) { req.setRequestHeader('Authorization', 'test:password'); }, error: function(request, textStatus, error) { if (request.status == 401) { alert('401'); } } }); return false; }); }); ``` If they are not allowed to access `/private`, at the moment they should see just the alert box. However, on Firefox, a browser-provided login form pops up (to retry with new credentials). Safari does not do this. We want to completely control the experience with custom forms, fades, transitions, etc. How can I keep Firefox's default box from being shown? (If this will be an issue when we test for IE, I'd love to hear solutions there, too.)
In case you haven't read it: [How can I supress the browser's authentication dialog?](https://stackoverflow.com/questions/86105/how-can-i-supress-the-browsers-authentication-dialog) Doesn't look too promising :)
The solution is to set the `WWW-Authenticate` header to something other than `Basic`. For example set it to: ``` WWW-Authenticate: None ``` or ``` WWW-Authenticate: FormBased ``` if you use form based login. Then the browser will not show you a login window.
How do I keep Firefox from prompting for username/password with HTTP Basic Auth with JQuery AJAX?
[ "", "javascript", "jquery", "firefox", "http-authentication", "" ]
What's the easiest way to find DOM elements with a css selector, without using a library?

```
function select( selector ) {
    return [ /* some magic here please :) */ ]
};

select('body')[0]     // body;
select('.foo' )       // [div,td,div,a]
select('a[rel=ajax]') // [a,a,a,a]
```

This question is purely academic. I'm interested in learning how this is implemented and what the 'snags' are. What would the expected behavior of this function be? ( return array, or return first DOM element, etc ).
These days, doing this kind of stuff without a library is madness. However, I assume you want to learn how this stuff works. I would suggest you look into the source of jQuery or one of the other javascript libraries. With that in mind, the selector function has to include a lot of if/else/else if or switch case statements in order to handle all the different selectors. Example:

```
function select( selector ) {
    if (selector.charAt(0) === '.') // this might be a css class
        return document.getElementsByClassName(selector.slice(1));
    else if (selector.charAt(0) === '#') // this might be an id
        return document.getElementById(selector.slice(1));
    else // this might be a tag name
        return document.getElementsByTagName(selector);
    // this is not taking all the different cases into account, but you get the idea.
};
```
In addition to the custom hacks, in recent browsers you can use the native methods defined in the [W3C Selectors API Level 1](http://www.w3.org/TR/selectors-api/), namely [`document.querySelector()`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector) and [`document.querySelectorAll()`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll): ``` var cells = document.querySelectorAll("#score > tbody > tr > td:nth-of-type(2)"); ```
Best way to find DOM elements with css selectors
[ "", "javascript", "css-selectors", "" ]
How can I tell if a file is binary (non-text) in Python? I am searching through a large set of files in Python, and keep getting matches in binary files. This makes the output look incredibly messy. I know I could use `grep -I`, but I am doing more with the data than what grep allows for. In the past, I would have just searched for characters greater than `0x7f`, but `utf8` and the like, make that impossible on modern systems. Ideally, the solution would be fast.
You can also use the [mimetypes](https://docs.python.org/3/library/mimetypes.html) module: ``` import mimetypes ... mime = mimetypes.guess_type(file) ``` It's fairly easy to compile a list of binary mime types. For example Apache distributes with a mime.types file that you could parse into a set of lists, binary and text and then check to see if the mime is in your text or binary list.
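To make the `mimetypes` idea above concrete, here is a minimal sketch; the prefix/type lists are illustrative assumptions, not the Apache-derived lists the answer describes:

```python
import mimetypes

# Rough text/binary split by guessed MIME type. These lists are
# illustrative assumptions, not an exhaustive classification.
TEXT_PREFIXES = ("text/",)
TEXT_TYPES = {"application/json", "application/xml", "application/javascript"}

def looks_textual(filename):
    mime, _encoding = mimetypes.guess_type(filename)
    if mime is None:
        return None  # unknown extension: undecided
    return mime.startswith(TEXT_PREFIXES) or mime in TEXT_TYPES

print(looks_textual("notes.txt"))   # True
print(looks_textual("photo.png"))   # False
```

Note this inspects only the file name, not the contents, so it is cheap but easily fooled by a misnamed file.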
Yet another method [based on file(1) behavior](https://github.com/file/file/blob/f2a6e7cb7db9b5fd86100403df6b2f830c7f22ba/src/encoding.c#L151-L228): ``` >>> textchars = bytearray({7,8,9,10,12,13,27} | set(range(0x20, 0x100)) - {0x7f}) >>> is_binary_string = lambda bytes: bool(bytes.translate(None, textchars)) ``` Example: ``` >>> is_binary_string(open('/usr/bin/python', 'rb').read(1024)) True >>> is_binary_string(open('/usr/bin/dh_python3', 'rb').read(1024)) False ```
How can I detect if a file is binary (non-text) in Python?
[ "", "python", "file", "binary", "" ]
In the webbrowser control, I want to set the media type = print. So it will display the contents in print preview mode. Is this possible and how to do it?
The only solution, it seems, is to redirect the request to some other server, which will fetch the original page and modify the CSS attributes to show the print layout on screen. This can be done by removing the screen CSS files and changing the attribute media="print" to media="screen".
In Google Chrome (I use version 33 now) you can open developer tools with `F12`, then press `Esc`. In the Emulation tab you can set `CSS media` to `print`.
How to set webbrowser media type to print in C#?
[ "", "c#", "browser", "" ]
I have been playing around a bit with a fairly simple, home-made search engine, and I'm now twiddling with some relevancy sorting code. It's not very pretty, but I'm not very good when it comes to clever algorithms, so I was hoping I could get some advice :) Basically, I want each search result to get scoring based on how many words match the search criteria. 3 points per exact word and one point for partial matches For example, if I search for "winter snow", these would be the results: * **winter** **snow** => 6 points * **winter** **snow**ing => 4 points * **winter**land **snow** => 4 points * **winter** sun => 3 points * **winter**land **snow**ing => 2 points Here's the code: ``` String[] resultWords = result.split(" "); String[] searchWords = searchStr.split(" "); int score = 0; for (String resultWord : resultWords) { for (String searchWord : searchWords) { if (resultWord.equalsIgnoreCase(searchWord)) score += 3; else if (resultWord.toLowerCase().contains(searchWord.toLowerCase())) score++; } } ```
Your code seems OK to me. I suggest a few small changes: since you are going through all possible combinations, you might as well take the `toLowerCase()` of your strings once at the start. Also, if an exact match has already occurred, you don't need to perform another `equals`.

```
result = result.toLowerCase();
searchStr = searchStr.toLowerCase();
String[] resultWords = result.split(" ");
String[] searchWords = searchStr.split(" ");
int score = 0;
for (String resultWord : resultWords) {
    boolean exactMatch = false;
    for (String searchWord : searchWords) {
        if (!exactMatch && resultWord.equals(searchWord)) {
            exactMatch = true;
            score += 3;
        }
        else if (resultWord.contains(searchWord))
            score++;
    }
}
```

Of course, this is a very basic level. If you are really interested in this area of computer science and want to learn more about implementing search engines, start with these terms:

* [Natural Language Processing](http://en.wikipedia.org/wiki/Natural_language_processing)
* [Information retrieval](http://en.wikipedia.org/wiki/Information_retrieval)
* [Text mining](http://en.wikipedia.org/wiki/Text_mining)
* [stemming](http://en.wikipedia.org/wiki/Stemming) * for acronyms case sensitivity is important, i.e. [SUN](http://www.acronymfinder.com/SUN.html); any word that matches both content and case must be weighted more than 3 points (5 or 7)? * use the [strategy design pattern](http://en.wikipedia.org/wiki/Strategy_pattern) For example, consider this naive score model: ``` interface ScoreModel { int startingScore(); int partialMatch(); int exactMatch(); } ``` ... ``` int search(String result, String searchStr, ScoreModel model) { String[] resultWords = result.split(" "); String[] searchWords = searchStr.split(" "); int score = model.startingScore(); for (String resultWord : resultWords) { for (String searchWord : searchWords) { if (resultWord.equalsIgnoreCase(searchWord)) { score += model.exactMatch(); } else if (resultWord.toLowerCase().contains(searchWord.toLowerCase())) { score += model.partialMatch(); } } } return score; } ```
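For a cross-language comparison, the question's original scoring loop (3 points per exact word, 1 per partial match) can be sketched in Python — this mirrors the nested-loop semantics of the question, not the `exactMatch`-flag variant:

```python
def score(result, search):
    # 3 points per exact word match, 1 point per substring match,
    # mirroring the nested loops in the question.
    result_words = result.lower().split()
    search_words = search.lower().split()
    total = 0
    for result_word in result_words:
        for search_word in search_words:
            if result_word == search_word:
                total += 3
            elif search_word in result_word:
                total += 1
    return total

print(score("winter snow", "winter snow"))        # 6
print(score("winterland snowing", "winter snow")) # 2
```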
Optimizing a simple search algorithm
[ "", "java", "optimization", "search", "" ]
If I have a nullable "decimal? d" and I want to assign d to non nullable e, what is the proper way?
``` decimal e = d ?? 0.0; ```
``` decimal e; if(d.HasValue) { e = d.Value; } ```
c#: assigning from nullable types
[ "", "c#", "nullable", "" ]
I'm currently creating a CMS system and found that the following doesn't work. I do have a work around that isn't exactly ideal and feels dirty. I'm cool with it for now and not really that interested in a different approach (but don't let that stop you answering). What I am after is some kind of explanation of why it doesn't work - is it a bug in ASP.NET MVC? It's hard to explain so I'll let my code (minus a lot of fluff) do the talking... hope it makes sense! **EDIT:** It seems that the compiler totally ignores the second masterpage's 'inherits' attribute - see the bottom of the question. **ContentViewData.cs** - notice it inherits from **BaseViewData** ``` public class ContentViewData : BaseViewData { public MyCMS.Data.Models.Content ContentItem { get; set; } } ``` **Site.Master** - Notice the strongly typed viewdata of type **BaseViewData** ``` <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage<MyCMS.WebSite.ViewData.BaseViewData>" %> ``` **Content.Master** - Notice the strongly typed viewdata of type **ContentViewData** and the fact that it's a child masterpage of **Site.Master** ``` <%@ Master Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewMasterPage<MyCMS.WebSite.ViewData.ContentViewData>" %> ...blah blah blah... <% Html.RenderPartial("ContentItemImage", Model.ContentItem); %> ``` **ContentItemImage.ascx** ``` <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MyCMS.Data.Models.Content>" %> <% if (Model.HasPrimaryPhoto) { %> <img src="/content/photos/<%= Model.GetPrimaryPhoto.ThumbFileName %>" title="<%= Model.GetPrimaryPhoto.Caption %>" /> <% } %> ``` Now inside the Content.Master if I try and render the ContentItemImage partial and refer to a property on the ContentViewData object (specifically the 'ContentItem' property) like I have - repeated below...
``` <% Html.RenderPartial("ContentItemImage", Model.ContentItem); %> ``` It falls over on that line with the following error > Compilation Error > > CS1061: 'object' does not contain a definition for 'ContentItem' and no > extension method 'ContentItem' > accepting a first argument of type > 'object' could be found (are you > missing a using directive or an > assembly reference?) BUT if I change things up like so, it all works fine and dandy. **Content.Master** - Notice I'm passing into RenderPartial() the whole Model (ContentViewData object) rather than trying to refer to a property on the ContentViewData object ``` <%@ Master Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewMasterPage<MyCMS.WebSite.ViewData.ContentViewData>" %> ...blah blah blah... <% Html.RenderPartial("ContentItemImage", Model); %> ``` **ContentItemImage.ascx** - notice the changed strongly typed viewdata from MyCMS.Data.Models.Content to the ContentViewData class. ``` <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MyCMS.WebSite.ViewData.ContentViewData>" %> <% if (Model.ContentItem.HasPrimaryPhoto) { %> <img src="/content/photos/<%= Model.ContentItem.GetPrimaryPhoto.ThumbFileName %>" title="<%= Model.ContentItem.GetPrimaryPhoto.Caption %>" /> <% } %> ``` So yeah, that works but it ain't got no alibi. Thanks in advance, Charles. **EDIT:** Interestingly it seems that the compiler totally ignores the second master page's 'inherits' attribute. E.g. I can do this and it still compiles without a complaint... ``` <%@ Master Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewMasterPage<ThisDoesntExist.AtAll>" %> ```
As you can see from my edit, it seems that the compiler totally ignores the nested master page's 'inherits' attribute. This leads me to believe that a nested masterpage in ASP.NET MVC will always inherit from its parent masterpage and, as I've witnessed, totally ignore the inherits attribute. **EDIT:** There must be some magic going on here... If I remove the 'inherits' attribute it won't compile because it doesn't know about the HtmlHelper class. But if I have the 'inherits' attribute in there with garbage inside it, it does compile. Doesn't work ``` <%@ Master Language="C#" MasterPageFile="~/Views/Shared/Site.Master" %> ``` Does work ``` <%@ Master Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="Sysasfdaasdtem.Web.Mvsdfc.ViewMasterPage<ThisDoesntExist.AtAll>" %> ``` Very odd indeed.
Interesting... A reasonable guess would be that your ContentViewData object is being upcast to BaseViewData due to some interaction with your nested master page (if that is indeed the case, someone else will need to weigh in as to why). You could verify by trying this: ``` <% Html.RenderPartial("ContentItemImage", ((MyCMS.WebSite.ViewData.ContentViewData)Model).ContentItem); %> ``` The reason your workaround "works" is because your partial view is typed for ContentViewData, so when you pass in Model it is downcast to that type.
Strongly typed master pages polymorphism - nested masterpages ignore inherit attribute
[ "", "c#", "asp.net-mvc", "master-pages", "viewdata", "" ]
Python is so dynamic that it's not always clear what's going on in a large program, and looking at a tiny bit of source code does not always help. To make matters worse, editors tend to have poor support for navigating to the definitions of tokens or import statements in a Python file. One way to compensate might be to write a special profiler that, instead of timing the program, would record the runtime types and paths of objects of the program and expose this data to the editor. This might be implemented with sys.settrace() which sets a callback for each line of code and is how pdb is implemented, or by using the ast module and an import hook to instrument the code, or is there a better strategy? How would you write something like this without making it impossibly slow, and without running afoul of extreme dynamism, e.g. side effects on property access?
I don't think you can help making it slow, but it should be possible to detect the address of each variable when you encounter a STORE_FAST, STORE_NAME, or other STORE_* opcode. Whether or not this has been done before, I do not know. If you need debugging, look at [PDB](http://docs.python.org/library/pdb.html); it will allow you to step through your code and access any variables.

```
import pdb
def test():
    print 1
    pdb.set_trace() # you will enter an interpreter here
    print 2
```
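The `sys.settrace()` route the question proposes can be sketched in a few lines; this only records the types of local variables per function (illustrative only — it is very slow, and the names here are made up for the example):

```python
import sys
from collections import defaultdict

# Maps (function name, variable name) -> set of type names seen at runtime.
seen_types = defaultdict(set)

def tracer(frame, event, arg):
    if event == "line":
        for name, value in frame.f_locals.items():
            seen_types[(frame.f_code.co_name, name)].add(type(value).__name__)
    return tracer

def demo():
    x = 1
    x = "now a string"
    return x

sys.settrace(tracer)
demo()
sys.settrace(None)

print(sorted(seen_types[("demo", "x")]))  # ['int', 'str']
```

Note the line event fires before each line runs, so a variable's new type is observed on the following line; `pdb` is built on the same hook.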
What if you monkey-patched `object`'s class or another prototypical object? This might not be the easiest if you're not using new-style classes.
What's the best way to record the type of every variable assignment in a Python program?
[ "", "python", "profiling", "" ]
I created a program called test: ``` #include<stdlib.h> #include<iostream> int main() { std::cout<<system("..\\add\\debug\\add.exe 4 8"); while(true); return 0; } ``` add.exe consists of ``` #include<stdlib.h> int main(int argc,char **argv[]) { int n=((unsigned)argv[1]); int m=((unsigned)argv[2]); return(n+m); } ``` so when I run test, I get 6841420 The attempt was to have test run add with parameters 4 and 8, and for add to return the sum of those values (12) and then test would display that to the screen. How did I get 6841420, and how can I fix it?
The problem is that you are converting a pointer value into an integer. The arguments will be passed to your program as C style strings (`const char*`). You first need to convert these strings to integers using an API like atoi.

```
#include<stdlib.h>
int main(int argc, char *argv[])
{
    int n = atoi(argv[1]);
    int m = atoi(argv[2]);
    return (n + m);
}
```

**EDIT** As others have pointed out, you should also do some error checking to ensure there are actually 2 parameters passed to the program.
Your add.exe is casting pointers to strings to unsigned ints, so it's adding their positions in memory rather than the numbers. Use sscanf (or, as suggested in another answer, atoi - it's probably easier).
What is wrong with this setup?
[ "", "c++", "parameters", "" ]
I am working on porting over a database from a custom MSSQL CMS to MySQL - WordPress. I am using Python to read a txt file with `\t`-delimited columns and one row per line. I am trying to write a Python script that will read this file (fread) and [eventually] create a MySQL-ready .sql file with insert statements. A line in the file I'm reading looks something like:

```
1 John Smith Developer http://twiiter.com/johns Chicago, IL
```

My Python script so far:

```
import sys
fwrite = open('d:/icm_db/wp_sql/wp.users.sql','w')
fread = open('d:/icm_db/users.txt','r')
for line in fread:
    print line
fread.close()
fwrite.close()
```

How can I "implode" each line so I can access each column and do business on it? I need to generate multiple MySQL insert statements per line I read. So... for each line read, I'd generate something like:

```
INSERT INTO `wp_users` (`ID`, `user_login`, `user_name`)
VALUES (line[0], 'line[2]', 'line[3]');
```
Although this is easily doable, it does become easier with the [csv](http://docs.python.org/library/csv.html) module. ``` >>> import csv >>> reader = csv.reader(open('C:/www/stackoverflow.txt'), delimiter='\t') >>> for row in reader: ... print row ... ['1', 'John Smith', 'Developer', 'http://twiiter.com/johns', 'Chicago, IL'] ['2', 'John Doe', 'Developer', 'http://whatever.com', 'Tallahassee, FL'] ``` Also, as pointed out, semicolons are not needed in Python. Try to kick that habit :)
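Building on the `csv.reader` approach above, one way to produce the INSERT statements the question asks for — the column mapping follows the question's example, and the quote-doubling escape is a naive illustration (real code should prefer parameterized queries):

```python
import csv
import io

# Simulated input with the same shape as the question's users.txt.
sample = "1\tJohn Smith\tDeveloper\thttp://twiiter.com/johns\tChicago, IL\n"

def escape(value):
    # Naive single-quote doubling, for illustration only.
    return value.replace("'", "''")

def make_inserts(fileobj):
    statements = []
    for fields in csv.reader(fileobj, delimiter="\t"):
        statements.append(
            "INSERT INTO `wp_users` (`ID`, `user_login`, `user_name`) "
            "VALUES (%s, '%s', '%s');"
            % (fields[0], escape(fields[2]), escape(fields[3]))
        )
    return statements

print(make_inserts(io.StringIO(sample))[0])
```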
Knowing the exact number of columns helps self document your code:

```
fwrite = open("d:/icm_db/wp_sql/wp.users.sql","w")

for line in open("d:/icm_db/users.txt"):
    row_id, name, title, login, location = line.strip().split("\t")
    # Double up on those single quotes to avoid nasty SQL!
    safe_name = name.replace("'","''")
    safe_login = login.replace("'","''")
    # ID field is primary key and will auto-increment
    fwrite.write( "INSERT INTO `wp_users` (`user_login`, `user_name`) " )
    fwrite.write( "VALUES ('%s','%s');\n" % (safe_login,safe_name) )
```
Python File Read + Write
[ "", "python", "file", "" ]
The code below doesn't seem to work or find anything on an array. I'm using "in_array" to search for the needle in the haystack. I also tried exploding the contents comma-separated, and that won't work either. Any suggestions? Also I tried "array_search". ``` $q4 = "SELECT domain_name,slots_config.bid FROM slots_pid,slots_config,slots_sites WHERE slots_config.bid=slots_pid.bid && slots_sites.aid=slots_pid.aid"; $result4 = mysql_query($q4); while($row = mysql_fetch_array($result4)) { $resultarray[] = $row; } if (in_array("test",$resultarray)) { echo "Match found"; } else { echo "Match not found"; } ```
`in_array()` won't work with that sort of array, because it's multi-dimensional. Your array looks like this: ``` $resultarray[0]['domain_name'] = 'first row domain name'; $resultarray[0]['bid'] = 'first row bid'; $resultarray[1]['domain_name'] = 'second row domain name'; ... ``` You can't use `in_array()` to search in that, so you'll have to do it with another method, something like looping over the array, or building `$resultarray` differently. Similarly, `array_search()` doesn't work on multidimensional arrays, so you could do something like looping over the first dimension and `array_search()`-ing each second dimension. Let me know if you want more detail.
It looks like what you have here is an 'array of arrays'. That is, in your while() loop, $row is an array which corresponds to the data from your mysql query. So each element of $resultarray actually contains an array, rather than a string. Try doing this: `print_r($resultarray)`. This will display the entire structure of $resultarray, and you can see how you're creating an array-of-arrays. To use in_array, you would need to do something akin to `in_array("test", $resultarray[0])`
Search inside Array for Value on a MYSQL Output
[ "", "php", "mysql", "arrays", "string", "" ]
Given that each PHP file in our project contains a single class definition, how can I determine what class or classes are defined within the file? I know I could just regex the file for `class` statements, but I'd prefer to do something that's more efficient.
I needed something like this for a project I am working on, and here are the functions I wrote: ``` function file_get_php_classes($filepath) { $php_code = file_get_contents($filepath); $classes = get_php_classes($php_code); return $classes; } function get_php_classes($php_code) { $classes = array(); $tokens = token_get_all($php_code); $count = count($tokens); for ($i = 2; $i < $count; $i++) { if ( $tokens[$i - 2][0] == T_CLASS && $tokens[$i - 1][0] == T_WHITESPACE && $tokens[$i][0] == T_STRING) { $class_name = $tokens[$i][1]; $classes[] = $class_name; } } return $classes; } ```
If you just want to check a file without loading it, use [`token_get_all()`](http://php.net/token_get_all): ``` <?php header('Content-Type: text/plain'); $php_file = file_get_contents('c2.php'); $tokens = token_get_all($php_file); $class_token = false; foreach ($tokens as $token) { if (is_array($token)) { if ($token[0] == T_CLASS) { $class_token = true; } else if ($class_token && $token[0] == T_STRING) { echo "Found class: $token[1]\n"; $class_token = false; } } } ?> ``` Basically, this is a simple finite state machine. In PHP the sequence of [tokens](http://php.net/manual/en/tokens.php) will be: * `T_CLASS`: 'class' keyword; * `T_WHITESPACE`: space(s) after 'class'; * `T_STRING`: name of class. So this code will handle any weird spacing or newlines you get just fine because it's using the same parser PHP uses to execute the file. If `token_get_all()` can't parse it, neither can PHP. By the way, you use [`token_name()`](http://php.net/manual/en/function.token-name.php) to turn a token number into its constant name. Here is my c2.php: ``` <?php class MyClass { public function __construct() { } } class MyOtherClass { public function __construct() { } } ?> ``` Output: ``` Found class: MyClass Found class: MyOtherClass ```
Determining what classes are defined in a PHP class file
[ "", "php", "class", "" ]
Say I want to count in binary until I have the highest number a set number of bytes will hold, how could I do that? It feels like there should be a pretty simple way, I just don't know it. I googled but was surprised to not find any examples. For example, if I wanted to count to 1 byte I'd start with 00000001 add 1 and get 00000010, add 1 to get 00000011, etc until I get to 11111111. Additionally, how could you do the same in hex? you start with 0x00, add one and output 0x01, the 0x02, 0x03, etc until you get to 0xFF? Also, how can I output the values as a string (like my examples)? Some psuedo-code: ``` byteSize = 3 counter = 0 while counter.size <= byteSize print counter /* prints 00000001, 00000010, etc. count += 1 loop ``` ## Update: I'm not only concerned with displaying a number in another base, that was only part of it. I see my error in that the displaying function is what determines how the number is displayed (as Jeremy pointed out). So, that's parts not a problem. ## Update 2: I promise I'm not a complete moron. Here is the context: It started today when I read this on reddit: <http://www.elliottkember.com/kember_identity.html> Then, this: <http://www.reddit.com/r/programming/comments/8iguu/md5_gamechallenge_for_you_reddit/> which led to this: <http://www.olegkikin.com/md5game/> So, I figured you could just count in bits starting at different intervals and just let 'er run.
For arbitrary depth. You could switch this to 64-bit easily if that makes sense to do so (e.g., 64-bit processor). Note: this is entirely free-hand and I have not compiled it nor, obviously, tested it. I can't even begin to guess how long it would take to print out 2^160 values (that's 1.46e48 values) or more if you do more than 5 32-bit counters. This is grossly inefficient but what the heck.

```
// A word is 32-bits
void CountBytes(int numberOfWords)
{
    uint[] numbers = new uint[numberOfWords];

    while (true)
    {
        // Show most-significant first
        for (int i=numbers.Length-1; i>=0; i--)
        {
            Console.Write(Convert.ToString(numbers[i], 2).PadLeft(32, '0'));
        }

        // Hit max on all uint's, bail
        bool done = true;
        for (int i=numbers.Length-1; i >= 0; i--)
        {
            if (numbers[i] != uint.MaxValue)
            {
                done = false;
                break;
            }
        }
        if (done)
        {
            break;
        }

        // Check for overflow
        for (int i=numbers.Length-2; i >= 0; i--)
        {
            // Overflow for numbers[i] is if it and all beneath it are MaxValue
            bool overflow = true;
            for (int k=i; k>=0; k--)
            {
                if (numbers[k] != uint.MaxValue)
                {
                    overflow = false;
                    break;
                }
            }
            if (overflow)
            {
                numbers[i+1]++;
                numbers[i] = 0;
            }
        }

        // Increment counter
        numbers[0]++;
    }
}
```
### Binary ``` for (int i = 0; i <= byte.MaxValue; i++) { Console.WriteLine(Convert.ToString(i, 2).PadLeft(8, '0')); } ``` ### Hexadecimal ``` for (int i = 0; i <= byte.MaxValue; i++) { Console.WriteLine("0x" + i.ToString("X").PadLeft(2, '0')); } ``` or ``` for (int i = 0; i <= byte.MaxValue; i++) { Console.WriteLine(Convert.ToString(i, 16).PadLeft(2, '0')); } ``` ### Multiple bytes ``` int numBytes = 3; for (int i = 0; i < Math.Pow(2, numBytes * 8); i++) { Console.WriteLine(Convert.ToString(i, 2).PadLeft(numBytes * 8, '0')); } ``` I wouldn't do more than 3, or you're going to be waiting a very long time (and beyond 3 bytes the `int` loop counter would overflow in any case)... ## Response to Update: I hope you were kidding about "counting to 20 bytes in binary". That's 160 bits. That creates a list of numbers with a count somewhere in the realm of **the number of atoms in the Earth**. I hope you have plenty of time (i.e. googols of millennia) to wait for that to finish :)
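For comparison outside C#, the same enumeration can be sketched in Python; the generator and its name here are illustrative only, not part of either answer above:

```python
def count_strings(num_bytes, base=2):
    """Yield every value an unsigned integer of num_bytes bytes can hold,
    formatted as a zero-padded binary (base 2) or hex (base 16) string."""
    bits = num_bytes * 8
    # A binary string needs one character per bit; hex needs one per 4 bits.
    width = bits if base == 2 else bits // 4
    spec = "b" if base == 2 else "x"
    for i in range(2 ** bits):
        yield format(i, "0{}{}".format(width, spec))
```

As with the C# version, anything beyond a few bytes will take impractically long to print.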
How do you count in another base?
[ "c#", "binary" ]
Which one is preferable or more clear? ``` public int FrozenRegionWidth { get; set; } ``` Or... ``` public int WidthOfFrozenRegion { get; set; } ```
I'd say FrozenRegionWidth, otherwise you'll end up with a whole bunch of properties starting with 'WidthOf..'. Having said that, shouldn't you have something like FrozenRegion.Width (another reason why I'd look for FrozenRegionWidth over WidthOfFrozenRegion)?
I would prefer FrozenRegionWidth. For more information, refer to the .NET Framework naming guidelines at <http://msdn.microsoft.com/en-us/library/ms229012.aspx>
Naming convention for property
[ "c#", "naming-conventions", "coding-style" ]
I see this problem on and off again in my PHP coding, and I've never understood what's happening. ``` $val = $matches[1][$i] ``` In this example, $i = 0 (set in a for loop) and the value held in that dimension of the array is a string. Instead of that string being assigned to $val, $val gets assigned the 0th (first) character in the string. If $i = 1, $val gets assigned the 1st (second) character in the string, etc. Instead of returning the string, why is PHP treating this like a substring operation? How can I get my string value instead of a single character in that string?
Are you sure it's a multidimensional array? I think that if you access a string like an array it returns the character at the key's position in the string, and that could be your problem. Check your assignments and whatnot and make sure you're not accidentally assigning a string to $matches[1] instead of an array of strings.
That is strange! I have never had that happen before. I wonder if the "multidimensional" array really is just a one-dimensional array. If $matches[1] holds a string rather than an array of strings, PHP will use your [$i] to return a single character from it.
multi dimensional array keys returning single characters, not values
[ "php" ]
We were having the (never-ending) underscore-prefix versus no-underscore-prefix debate on member variables, and someone mentioned that if you use "this." instead of "_", your code will be slower due to the "." in "this.". Is this true, and can anyone quantify it?
No, that makes no sense at all. Just look at the IL, and kick that developer in the ass. Also FWIW, I like the underscore in member variables.
There doesn't seem to be a difference when using the this keyword. If you have the following code: ``` class Class3 { private long id; public void DoWork() { id = 1; this.id = 2; } } ``` When you run it through Reflector you will see the following output: ``` internal class Class3 { // Fields private long id; // Methods public void DoWork() { this.id = 1L; this.id = 2L; } } ```
Does this. slow down code?
[ "c#", "coding-style" ]
Using unmanaged C++ on a Windows platform, is there a simple way to detect the number of processor cores my host machine has?
You can use [GetLogicalProcessorInformation](http://msdn.microsoft.com/en-us/library/ms683194.aspx) to get the info you need. ETA: As mentioned in the question a commenter linked to, another (easier) way to do it would be via GetSystemInfo: ``` SYSTEM_INFO sysinfo; GetSystemInfo( &sysinfo ); numCPU = sysinfo.dwNumberOfProcessors; ``` Seems like GetLogicalProcessorInformation would give you more detailed info, but if all you need is the number of processors, GetSystemInfo would probably work just fine.
I've noticed there's an environment variable `NUMBER_OF_PROCESSORS` on XP, but I couldn't find it on Microsoft's site. I believe this would be the easiest way, though.
Count Processors using C++ under Windows
[ "c++", "processors" ]
I am just trying to get into Python, but I've found it very difficult to find any resources that cover Python 3. All I've got so far is diveintopython3.org, and it's limited. Anyways, I was just trying to get a feel for the language by doing some very basic stuff, but I can't figure out why this little program won't do what I intend, which is to add 2 numbers. I'm sure someone here knows how to fix it, but any other resources that contain tutorials in Python 3 would be greatly appreciated: ``` def add(num=0,num2=0): sumEm = (num+num2) print (sumEm) if __name__ == '__main__': num = input("Enter a number: ") num2 = input("Enter a number: ") add(num,num2) ``` output: ``` Enter a number: 23 Enter a number: 24 23 24 ```
[A Byte of Python](http://www.swaroopch.com/notes/Python_en:Table_of_Contents) covers Python 3 in detail. There's also a 2.X version of the book, which can help compare and contrast the differences in the languages. To fix your problem, you need to convert the input taken into an integer. It's stored as a string by default. ``` num = int(input("Enter a number: ")) num2 = int(input("Enter a number: ")) ```
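Putting the conversion into the question's program, with minimal error handling so that non-numeric input is rejected rather than crashing; `read_int` is a hypothetical helper name added for this sketch, not part of the question's code:

```python
def read_int(text):
    """Parse a string as an integer; input() in Python 3 always returns str."""
    try:
        return int(text)
    except ValueError:
        return None  # the caller decides how to handle bad input

def add(num=0, num2=0):
    return num + num2
```

With the conversion in place, add(read_int("23"), read_int("24")) yields the number 47 instead of the concatenated string "2324".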
You didn't say what you *do* get - I'm guessing `num` and `num2` concatenated, as [`input`](http://docs.python.org/3.0/library/functions.html#input) returns a string. Adding two strings just concatenates them. If you expect `num` and `num2` to represent integers, you could use [`int`](http://docs.python.org/3.0/library/functions.html#int) to convert the strings into integers: ``` num = int(input("Enter a number: ")) num2 = int(input("Enter a number: ")) ``` And you'll likely get better results. Note there's still room for better error-checking, but this might get you started. One other thing to try - add a line at the end of your `__main__` like this: ``` add(4, 3) ``` and see what gets printed. That will tell you whether the fault is with `add` or with your input routines. Of course, none of that provided you with a resource - are the online docs not helping? I'd start with the [tutorial](http://docs.python.org/3.0/tutorial/index.html), if you haven't already.
Looking for help, just started with Python today. (3.0)
[ "python", "python-3.x" ]
Is using MS SQL Identity good practice in enterprise applications? Doesn't it create difficulties in building business logic, and in migrating the database from one server to another?
Yes, they work very well, are reliable, and perform the best. One big benefit of using identity fields versus not using them is that they handle all of the complex concurrency issues of multiple callers attempting to reserve new IDs. This may seem like something trivial to code, but it's not. These links below offer some interesting information about identity fields and why you should use them whenever possible. 1. [DB: To use identity column or not?](https://stackoverflow.com/questions/186369/db-to-use-identity-column-or-not) 2. <http://www.codeproject.com/KB/database/AgileWareNewGuid.aspx?display=Print> 3. <http://www.sqlmag.com/Article/ArticleID/48165/sql_server_48165.html>
Personally I couldn't live without identity columns and use them everywhere; however, there are some reasons to think about not using them. Originally the main reason not to use identity columns [AFAIK](http://en.wiktionary.org/wiki/AFAIK) was due to distributed multi-database schemas (disconnected) using replication and/or various middleware components to move data. There just was no distributed synchronization machinery available, and therefore no reliable means to prevent collisions. This has changed significantly as SQL Server does support distributing IDs. However, their use still may not map into more complex application-controlled replication schemes. They can leak information. Account IDs, invoice numbers, etc. If I get an invoice from you every month I can ballpark the number of invoices you send or customers you have. I run into issues all the time with merging customer databases and all sides still wanting to keep their old account numbers. This sometimes makes me question my addiction to identity fields :) Like most things, the ultimate answer is "it depends"; the specifics of a given situation should necessarily hold a lot of weight in your decision.
Is using MS SQL Identity good practice?
[ "sql", "sql-server", "t-sql", "identity" ]
I discovered that it is possible to extract the hard-coded strings from a binary. For example the properties view of [Process Explorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) displays all the strings with more than 3 characters. Here is the code of a simple executable that I wrote to test it: ``` #ifndef _WIN32_WINNT #define _WIN32_WINNT 0x0501 #endif #include <stdio.h> #include <tchar.h> #include <Windows.h> int _tmain(int argc, _TCHAR* argv[]) { _TCHAR* hiddenString1 =_T("4537774B-CC80-4eda-B3E4-7A9EE77991F5"); _TCHAR* hiddenString2 =_T("hidden_password_or_whatever"); for (int i= 0; i<argc; i++) { if (0 == _tcscmp(argv[i],hiddenString1)) { _tprintf (_T("The guid argument is correct.\n")); } else if (0 == _tcscmp(argv[i],hiddenString2)) { _tprintf (_T("Do something here.\n")); } } _tprintf (_T("This is a visible string.\n")); //Keep Running Sleep(60000); return 0; } ``` The strings can clearly be extracted from the corresponding executable: ![alt text](https://i.stack.imgur.com/3k8mN.png) I think that it is a little too easy to find the strings. My questions are: 1. How to **simply** hide *hiddenString1* or *hiddenString2* in the executable? 2. Is there a more secure way to use "cheat code" than with some obscure hidden input?
Welcome to the wider world of defensive programming. There are a couple of options, but I believe all of them depend on some form of obfuscation; which, although not perfect, is at least something. 1. Instead of a straight string value you can store the text in some other binary form (hex?). 2. You can encrypt the strings that are stored in your app, then decrypt them at run time. 3. You can split them across various points in your code, and reconstitute later. Or some combination thereof. Bear in mind, that some attacks go further than looking at the actual binary. Sometimes they will investigate the memory address space of the program while it's running. MS came up with something called a [SecureString in .Net 2.0](http://msdn.microsoft.com/en-us/library/system.security.securestring.aspx). The purpose being to keep the strings encrypted while the app is running. A fourth idea is to not store the string in the app itself, but rather rely on a validation code to be submitted to a server you control. On the server you can verify if it's a legit "cheat code" or not.
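A toy illustration of option 2 above (encode the string at build time, decode it at run time), sketched in Python rather than C++ for brevity. A single-byte XOR like this is pure obfuscation, not encryption, so treat it only as a demonstration of the idea; the key value is arbitrary:

```python
KEY = 0x5A  # arbitrary byte chosen for this sketch; a real scheme would use something stronger

def obscure(text, key=KEY):
    """Produce the blob you would embed in the binary instead of the plain string."""
    return bytes(b ^ key for b in text.encode("utf-8"))

def reveal(blob, key=KEY):
    """Reconstruct the plain string at run time, just before it is needed."""
    return bytes(b ^ key for b in blob).decode("utf-8")
```

The point is simply that a tool dumping printable strings from the executable no longer sees the plaintext, even though a determined attacker still can recover it.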
There are many ways to *obscure* data in an executable. Others here have posted good solutions -- some stronger than others. I won't add to that list. Just be aware: it's all a cat-and-mouse game: it is **impossible** to **guarantee** that nobody will find out your "secret". No matter how much encryption or other tricks you use; no matter how much effort or money you put into it. No matter how many "NASA/MIT/CIA/NSA" types are involved in hiding it. It all comes down to simple physics: If it were impossible for *any* user to pull out your secret from the executable and "unhide" it, then the computer would not be able to unhide it either, and your program wouldn't be able to use it. Any moderately skilled developer with enough incentive will find the way to unhide the secret. The moment that you have handed your executable to a user, they have everything they need to find out the secret. The best you can hope for is to make it *so hard* to uncover the secret that any benefits you can get from knowing the secret become not worth the hassle. So, it's OK to try to obscure the data if it's merely "not-nice" for it to be public, or if the consequences of it becoming public would just be "inconvenient". But don't even think of hiding in your program "the password to your master client database", a private key, or some other critical secret. You just can't. If you have truly critically secret information that your program will somehow need but should NEVER become public information (like a private key), then you will need to have your program talk to a remote server under your control, apply appropriate authentication and authorization controls (*that is, make sure only the approved people or computers are able to make the request to the server*), and have that server keep the secret and use it.
How to hide strings in an exe or a dll?
[ "c++", "security", "obfuscation", "reverse-engineering", "defensive-programming" ]
I was [reading](https://stackoverflow.com/questions/925513/c-empty-string-constructor) the linked question, which led me to ask this question. Consider the following code: ``` int main() { string SomeString(); } ``` Everyone says the compiler takes this as a function prototype and not as a *string* object. Now consider the following code. ``` int main() { string Some() { return ""; } } ``` The compiler says this is invalid, as I guess nested function definitions are not allowed. **If they are not allowed, why are nested function prototypes allowed?** They give no advantage and only create confusion (or am I missing some valid points here?). I figured out the following is valid. ``` int main() { string SomeFun(); SomeFun(); return 0; } string SomeFun() { std::cout << "WOW this is unexpected" << std::endl; } ``` This is also confusing. I was expecting the function *SomeFun()* to have scope only in *main*, but I was wrong. Why does the compiler allow code like the above? Are there any real-world situations where code like the above makes sense? Any thoughts?
Your prototype is just a '[Forward Declaration](http://en.wikipedia.org/wiki/Forward_declaration)'. Please check out the Wikipedia article. Basically, it tells the compiler **"don't be alarmed if the label 'SomeFun' is used in this way".** But your linker is what's responsible for finding the correct function body. You can actually declare a bogus prototype, e.g. 'char SomeFun()', and use it all over your main. You will only get an error when your linker tries to find the body of your bogus function. But your compiler will be cool with it. There are lots of benefits. You have to remember the function body is not always in the same source code file. It can be in a linked library. Also, that linked library may have a specific 'link signature'. Using conditional defines, you may even select the correct link signature at build time using your scoped prototypes, although most people would use function pointers for that instead. Hope this helps.
Just as a side note, C++03 does have a roundabout way of defining local functions. It requires abusing the local-class feature: ``` int main() { struct Local { static string Some() { return ""; } }; std::cout << Local::Some() << std::endl; } ```
Nested functions are not allowed, but why are nested function prototypes allowed? [C++]
[ "c++", "compiler-construction", "function", "nested" ]
I've mapped the Spring MVC dispatcher as a global front controller servlet on `/*`. ``` <servlet> <servlet-name>home</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>home</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> ``` However, this mapping stops the access to static files like CSS, JS, images etc which are all in the `/res/` folder. How can I access them anyway?
I've run into this also and never found a great solution. I ended up mapping my servlet one level higher in the URL hierarchy: ``` <servlet-mapping> <servlet-name>home</servlet-name> <url-pattern>/app/*</url-pattern> </servlet-mapping> ``` And now everything at the base context (and in your /res directory) can be served up by your container.
Map the controller servlet on a more specific `url-pattern` like `/pages/*`, put the static content in a specific folder like `/static` and create a `Filter` listening on `/*` which transparently continues the chain for any static content and dispatches requests to the controller servlet for other content. In a nutshell: ``` <filter> <filter-name>filter</filter-name> <filter-class>com.example.Filter</filter-class> </filter> <filter-mapping> <filter-name>filter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <servlet> <servlet-name>controller</servlet-name> <servlet-class>com.example.Controller</servlet-class> </servlet> <servlet-mapping> <servlet-name>controller</servlet-name> <url-pattern>/pages/*</url-pattern> </servlet-mapping> ``` with the following in filter's `doFilter()`: ``` HttpServletRequest req = (HttpServletRequest) request; String path = req.getRequestURI().substring(req.getContextPath().length()); if (path.startsWith("/static")) { chain.doFilter(request, response); // Goes to default servlet. } else { request.getRequestDispatcher("/pages" + path).forward(request, response); } ``` No, this does not end up with `/pages` in browser address bar. It's fully transparent. You can if necessary make `"/static"` and/or `"/pages"` an `init-param` of the filter.
How to access static resources when mapping a global front controller servlet on /*
[ "java", "servlets", "resources" ]
Doing cross platform development with 64bit. Using gcc/linux and msvc9/server 2008. Just recently deployed a customer on windows and during some testing of upgrades I found out that although std::streamoff is 8 bytes, the program crashes when seeking past 4G. I immediately switched to stlport which fixes the problem, however stlport seems to have other issues. Is STL with msvc9 really that broken, or am I missing something? Since the code is cross platform I have zero interest in using any win32 calls. ### Related * [iostream and large file support](https://stackoverflow.com/questions/660667/iostream-and-large-file-support) * [Reading files larger than 4GB using c++ stl.](https://stackoverflow.com/questions/293672/reading-files-larger-than-4gb-using-c-stl)
I ended up using STLport. The biggest difference with STLport is that some unit tests which previously crashed during multiplies of double-precision numbers now pass. There are some other differences, with relative-precision issues popping up, but those seem to be minor.
Even though you say that you have "zero" interest in using "win32" calls, in situations like this you're stuck between a rock and a hard place. I would just implement my own version of a file iostream using the "win32" calls that looks and feels like the fstream interfaces. This is easy to do and I've done it hundreds of times. Call it, say, 'fstreamwin32'. Then I would have a header file that does something like: ``` #ifdef WIN32 typedef fstreamwin32 fstreamnative; #else typedef fstream fstreamnative; #endif ``` Then I would use fstreamnative everywhere. That way you keep your code cross-platform and still solve your problem. If the problem is ever fixed, you can easily remove your "win32" workaround by changing the typedef back to fstream. This is why lots of cross-platform codebases have many levels of indirection (e.g. by using their own typedefs for standard stuff), so that they can do stuff like this without having to change a lot of code.
msvc9, iostream and 2g/4g plus files
[ "c++", "visual-studio-2008", "stl" ]
What's a "static factory" method?
We avoid providing direct access to database connections because they're resource intensive. So we use a static factory method `getDbConnection` that creates a connection if we're below the limit. Otherwise, it tries to provide a "spare" connection, failing with an exception if there are none. ``` public class DbConnection{ private static final int MAX_CONNS = 100; private static int totalConnections = 0; private static Set<DbConnection> availableConnections = new HashSet<DbConnection>(); private DbConnection(){ // ... totalConnections++; } public static DbConnection getDbConnection(){ if(totalConnections < MAX_CONNS){ return new DbConnection(); }else if(availableConnections.size() > 0){ DbConnection dbc = availableConnections.iterator().next(); availableConnections.remove(dbc); return dbc; }else { throw new NoDbConnections(); } } public static void returnDbConnection(DbConnection dbc){ availableConnections.add(dbc); //... } } ```
The [static factory method pattern](https://books.google.com/books?id=ka2VUBqHiWkC&pg=PA5) is a way to encapsulate object creation. Without a factory method, you would simply call the class's [constructor](http://en.wikipedia.org/wiki/Constructor_(computer_science)) directly: `Foo x = new Foo()`. With this pattern, you would instead call the factory method: `Foo x = Foo.create()`. The constructors are marked private, so they cannot be called except from inside the class, and the factory method is marked as [`static`](http://en.wikipedia.org/wiki/Method_(computer_science)#Static_methods) so that it can be called without first having an object. There are a few advantages to this pattern. One is that the factory can choose from many subclasses (or implementers of an interface) and return that. This way the caller can specify the behavior desired via parameters, without having to know or understand a potentially complex class hierarchy. Another advantage is, as Matthew and James have pointed out, controlling access to a limited resource such as connections. This is a way to implement [pools of reusable objects](http://en.wikipedia.org/wiki/Object_pool) - instead of building, using, and tearing down an object, if the construction and destruction are expensive processes it might make more sense to build them once and recycle them. The factory method can return an existing, unused instantiated object if it has one, or construct one if the object count is below some lower threshold, or throw an exception or return `null` if it's above the upper threshold. As per the article on Wikipedia, multiple factory methods also allow different interpretations of similar argument types. Normally the constructor has the same name as the class, which means that you can only have one constructor with a given [signature](http://en.wikipedia.org/wiki/Type_signature).
Factories are not so constrained, which means you can have two different methods that accept the same argument types: ``` Coordinate c = Coordinate.createFromCartesian(double x, double y) ``` and ``` Coordinate c = Coordinate.createFromPolar(double distance, double angle) ``` This can also be used to improve readability, as Rasmus notes.
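The answers above use Java, but the pattern is language-agnostic. Here is a minimal sketch of the Coordinate example in Python; the method names mirror the Java ones, and the polar case uses the usual polar-to-cartesian conversion:

```python
import math

class Coordinate:
    def __init__(self, x, y):
        # In a stricter sketch this constructor would be hidden from callers.
        self.x = x
        self.y = y

    @classmethod
    def create_from_cartesian(cls, x, y):
        return cls(x, y)

    @classmethod
    def create_from_polar(cls, distance, angle):
        # Same argument types as the cartesian factory, but a different meaning,
        # which overloaded constructors alone cannot express.
        return cls(distance * math.cos(angle), distance * math.sin(angle))
```

Both factories take two floats, yet the names make the caller's intent unambiguous, which is the readability benefit the answer describes.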
What are static factory methods?
[ "java", "design-patterns", "factory-method" ]
I've worked with a few scripts to begin uploading files on my development machine. Problem is, despite the expected ease of this operation, Apache seems to time out whenever I try to upload an image. Uploading is set to `On` and the `tmp` directory is set in `php.ini`. I tried uploading the main `gif` from Google, an `8.36KB` image. It should be fine and well within the limits of PHP's uploading capabilities. Here is a copy of the script. There should be an easy fix. As requested, I changed the tilde to an actual directory. ``` <?php if (!isset($_GET['upload'])) { ?> <form method="post" action="index.php?upload=true" enctype="multipart/form-data"> <input type="file" name="file" class="form"> <input name="submit" type="submit"> </form> <?php } else if (isset($_GET['upload']) && $_GET['upload'] == 'true') { $url = $_FILES['file']['name']; $move = move_uploaded_file($_FILES['file']['tmp_name'], "/Users/<username>/Sites/file.jpg"); if ($move) { echo "Success!"; } else { echo "Err..."; } } ?> ``` Thanks, Dan EDIT: I fixed it, with help from a few of the answers, one of which I will mark. A few things here were causing this behavior. 1. Permissions on the `images` directory were not set to allow the `_www` user to access it. A `chmod -R 777 images` seemed to fix it, as well as a `sudo chown _www images`. 2. The form output may have been corrupting the PHP script itself. As suggested, an `ECHO <<< ...END` helped, I think.
This is more than likely an issue with the size of the file and/or a permission issue between the Apache user and the directory specified. For instance, make sure the Apache instance is not running under the user (nobody). Comment to chaos: He is right that the tilde (~) can cause issues, but it would probably not cause a timeout; it would display a warning. Even if it does work on your system, it would probably deposit the file into an unexpected directory or run into some issues if the Apache user (ie www) does not have a valid home directory set.
What is it that leads you to believe that Apache is timing out rather than, say, outright failing in some way? Because what leaps out at me is that you're trying to move the file to `~/file.jpg`, which I'm nearly certain will not work; `~` is a construct that only normally has meaning inside shells, unless one of PHP's freakish obscure features is processing it in contexts like this. Anyway, try putting the actual directory.
PHP Uploading Issue
[ "php", "file", "upload", "uploading" ]
In Django, when you have a parent class and multiple child classes that inherit from it, you would normally access a child through parentclass.childclass1_set or parentclass.childclass2_set, but what if I don't know the name of the specific child class I want? Is there a way to get the related objects in the parent->child direction without knowing the child class name?
(**Update**: For Django 1.2 and newer, which can follow select\_related queries across reverse OneToOneField relations (and thus down inheritance hierarchies), there's a better technique available which doesn't require the added `real_type` field on the parent model. It's available as [InheritanceManager](https://django-model-utils.readthedocs.org/en/latest/managers.html#inheritancemanager) in the [django-model-utils](https://github.com/carljm/django-model-utils/) project.) The usual way to do this is to add a ForeignKey to ContentType on the Parent model which stores the content type of the proper "leaf" class. Without this, you may have to do quite a number of queries on child tables to find the instance, depending how large your inheritance tree is. Here's how I did it in one project: ``` from django.contrib.contenttypes.models import ContentType from django.db import models class InheritanceCastModel(models.Model): """ An abstract base class that provides a ``real_type`` FK to ContentType. For use in trees of inherited models, to be able to downcast parent instances to their child types. """ real_type = models.ForeignKey(ContentType, editable=False) def save(self, *args, **kwargs): if self._state.adding: self.real_type = self._get_real_type() super(InheritanceCastModel, self).save(*args, **kwargs) def _get_real_type(self): return ContentType.objects.get_for_model(type(self)) def cast(self): return self.real_type.get_object_for_this_type(pk=self.pk) class Meta: abstract = True ``` This is implemented as an abstract base class to make it reusable; you could also put these methods and the FK directly onto the parent class in your particular inheritance hierarchy. This solution won't work if you aren't able to modify the parent model. In that case you're pretty much stuck checking all the subclasses manually.
In Python, given a ("new-style") class X, you can get its (direct) subclasses with `X.__subclasses__()`, which returns a list of class objects. (If you want "further descendants", you'll also have to call `__subclasses__` on each of the direct subclasses, etc etc -- if you need help on how to do that effectively in Python, just ask!). Once you have somehow identified a child class of interest (maybe all of them, if you want instances of all child subclasses, etc), `getattr(parentclass,'%s_set' % childclass.__name__)` should help (if the child class's name is `'foo'`, this is just like accessing `parentclass.foo_set` -- no more, no less). Again, if you need clarification or examples, please ask!
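Combining the two steps this answer describes, recursive discovery via `__subclasses__` plus the `getattr` lookup, might look like the sketch below; `all_subclasses` and `related_sets` are illustrative helper names, and the accessor string follows Django's default lowercased `<childname>_set` convention:

```python
def all_subclasses(cls):
    """Recursively collect every descendant of a new-style class."""
    found = set()
    for sub in cls.__subclasses__():
        found.add(sub)
        found |= all_subclasses(sub)
    return found

def related_sets(parent_instance, parent_class):
    """Map each child class name to its reverse accessor on the instance,
    i.e. the equivalent of parent.childname_set; None if the attribute is absent."""
    return {
        sub.__name__: getattr(parent_instance, "%s_set" % sub.__name__.lower(), None)
        for sub in all_subclasses(parent_class)
    }
```

Outside Django the accessor attributes won't exist, but the traversal logic is the same.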
How do I access the child classes of an object in django without knowing the name of the child class?
[ "python", "django", "many-to-many" ]
I have a couple of tables which look like this Table 1 ``` user_id | name ------------------------- x111 | Smith, James x112 | Smith, Jane ``` etc.. Table 2 ``` id | code | date | incident_code | user_id ----------------------------------------------------------------- 1 | 102008 | 10/20/2008 | 1 | x111 2 | 113008 | 11/30/2008 | 3 | x111 3 | 102008 | 10/20/2008 | 2 | x112 4 | 113008 | 11/30/2008 | 5 | x112 ``` What I'd like to display is something like this ``` user_id | user_name | INCIDENT IN OCT 2008 | INCIDENT IN NOV 2008 ------------------------------------------------------------------------------ x111 | Smith, John | 1 | 3 x112 | Smith, Jane | 2 | 5 ``` etc.. The incident_code would be replaced by the actual description of the incident, which is located in another table, but I thought I'd see how this would work first. Some of the column headers would be static while others would be created based on the date. Does anyone know how I can do this using SQL Server 2005? Some examples would be very helpful. Thanks in advance
Here's a solution which generates and runs the dynamic SQL with a PIVOT: ``` DECLARE @pivot_list AS VARCHAR(MAX) -- ; WITH cols AS ( SELECT DISTINCT 'INCIDENT IN ' + LEFT(UPPER(CONVERT(VARCHAR, [date], 107)), 3) + ' ' + SUBSTRING(UPPER(CONVERT(VARCHAR, [date], 107)), 9, 4) AS col FROM so926209_2 ) SELECT @pivot_list = COALESCE(@pivot_list + ', ', '') + '[' + col + ']' FROM cols -- DECLARE @template AS VARCHAR(MAX) SET @template = 'WITH incidents AS ( SELECT [user_id], incident_code, ''INCIDENT IN '' + LEFT(UPPER(CONVERT(VARCHAR, [date], 107)), 3) + '' '' + SUBSTRING(UPPER(CONVERT(VARCHAR, [date], 107)), 9, 4) AS col FROM so926209_2 ) ,results AS ( SELECT * FROM incidents PIVOT (MAX(incident_code) FOR col IN ({@pivot_list})) AS pvt ) SELECT results.[user_id] ,so926209_1.[name] ,{@select_list} FROM results INNER JOIN so926209_1 ON so926209_1.[user_id] = results.[user_id] ' DECLARE @sql AS VARCHAR(MAX) SET @sql = REPLACE(REPLACE(@template, '{@pivot_list}', @pivot_list), '{@select_list}', @pivot_list) --PRINT @sql EXEC (@sql) ``` Where `so926209_1`, `so926209_2` are your table 1 and table 2 Note that if you have multiple incidents in a month for the same person, your example doesn't show how you want that handled. This example only takes the last incident in the month.
You want to Pivot <http://msdn.microsoft.com/en-us/library/ms177410.aspx>
Query Transposing certain rows into column names
[ "sql", "sql-server", "pivot", "transpose" ]
I have a javascript function that accepts a number and performs a mathematical operation on the number. However, the number I'm passing in could have a comma in it, and from my limited experience with Javascript I am having problems working with that value. It doesn't seem to treat that as a numeric type. What's the easiest way to take a parameter with a value of 1,000 and convert it to a numeric 1000?
You can set up your textbox to have an **onblur()** function so that when the user attempts to leave the textbox, you remove the commas from the value by using the JavaScript **replace** function. **Example**: ``` function checkNumeric(obj) { // strip the thousands separators out of the field's value obj.value = obj.value.replace(/,/g, ''); } ``` With the input tag here: ``` <input type="text" onblur="checkNumeric(this);" name="nocomma" size="10" maxlength="10"/> ```
A quick and dirty way is to use the String.replace() method: ``` var rawstring = '1,200,000'; var cleanstring = rawstring.replace(/[^\d\.\-\ ]/g, ''); ``` This will set cleanstring to: `1200000`. Assuming you are using US formatting, then the following conversions will occur: ``` 1234 --> 1234 1,234 --> 1234 -1234 --> -1234 -1,234 --> -1234 1234.5 --> 1234.5 1,234.5 --> 1234.5 -1,234.5 --> -1234.5 1xxx234 --> 1234 ``` If you are in other locales that invert the '.' and ',', then you'll have to make that change in the regex.
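Either way, the core move is the same: strip the separators, then convert. A minimal sketch of that idea (the helper name `toNumber` is mine, not from either answer, and it assumes US-style formatting where the comma is only a thousands separator):

```javascript
// Hypothetical helper: remove thousands separators, then convert to a number.
function toNumber(str) {
  return parseFloat(String(str).replace(/,/g, ''));
}

console.log(toNumber('1,000'));    // 1000
console.log(toNumber('-1,234.5')); // -1234.5
```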
Numbers with commas in Javascript
[ "", "javascript", "regex", "" ]
How do I detect whether the machine is joined to an Active Directory domain (versus in Workgroup mode)?
You can PInvoke to Win32 API's such as [NetGetDcName](http://www.pinvoke.net/default.aspx/netapi32.NetGetDCName) which will return a null/empty string for a non domain-joined machine. Even better is [NetGetJoinInformation](http://www.pinvoke.net/default.aspx/netapi32.NetGetJoinInformation) which will tell you explicitly if a machine is unjoined, in a workgroup or in a domain. Using `NetGetJoinInformation` I put together this, which worked for me (it needs `using System;`, `using System.ComponentModel;` and `using System.Runtime.InteropServices;` at the top of the file): ``` public class Test { public static bool IsInDomain() { Win32.NetJoinStatus status = Win32.NetJoinStatus.NetSetupUnknownStatus; IntPtr pDomain = IntPtr.Zero; int result = Win32.NetGetJoinInformation(null, out pDomain, out status); if (pDomain != IntPtr.Zero) { Win32.NetApiBufferFree(pDomain); } if (result == Win32.ErrorSuccess) { return status == Win32.NetJoinStatus.NetSetupDomainName; } else { throw new Exception("Domain Info Get Failed", new Win32Exception()); } } } internal class Win32 { public const int ErrorSuccess = 0; [DllImport("Netapi32.dll", CharSet=CharSet.Unicode, SetLastError=true)] public static extern int NetGetJoinInformation(string server, out IntPtr domain, out NetJoinStatus status); [DllImport("Netapi32.dll")] public static extern int NetApiBufferFree(IntPtr Buffer); public enum NetJoinStatus { NetSetupUnknownStatus = 0, NetSetupUnjoined, NetSetupWorkgroupName, NetSetupDomainName } } ```
Don't fool with pinvoke if you don't have to. Reference System.DirectoryServices, then call: ``` System.DirectoryServices.ActiveDirectory.Domain.GetComputerDomain() ``` Throws an `ActiveDirectoryObjectNotFoundException` if the machine is not domain-joined. The Domain object that's returned contains the Name property you're looking for.
How to detect if machine is joined to domain?
[ "", "c#", ".net", "" ]
I'm having an issue with django templates at the moment. I have 3 template files basically: * Base * story\_list * story\_detail Story\_list and \_detail extend Base, and that works perfectly fine. However, list and detail share some code that extend the base template for my sidebar. I'm basically repeating a chunk of code in both templates, and the programmer in me says that's just wrong. There has to be a better way, I'm sure of it. I've tried includes, however I have in the included file: ``` {% block item %} content stuff {% endblock %} ``` for about 3 blocks. The problem is that none of that is being picked up/rendered. If I include the file in a block section that extends base, then it dumps everything properly, but if I don't include it in a block, I get nothing. Is SSI the way to go? I toyed with that, but that didn't seem to work properly either. Any help is appreciated.
Generally, using `include`s is not the answer with Django templates. Let me answer your question on several fronts. First, let me address the sidebar. * **Are nearly all the common pages going to be using that sidebar?** Put it in `Base`. Don't override those sidebar blocks (i.e. don't write them at all in your `Story_*` templates). * **Is this sidebar unique to the `Story_*` templates?** Make another template called, say, `Story_base` and extend that. This is akin to making an abstract superclass in Java. (Answer was in my head, but wording was mercilessly [ripped off](https://stackoverflow.com/questions/849463/django-template-includes/849530#849530 "jpwatts's answer") from jpwatts.) Next, let me address template inheritance. Say you have a template named `Story_list` that extends `Base`. At this point, after just putting `{% extends "Base" %}`, `Story_list` *is exactly* `Base`. Anything else you put in `Story_list` is ignored, because the template is already complete. The only thing you can do now is *override* blocks that have been defined in `Base`. Finally, let me address `include`s. Try to always avoid them. Other templating engines, such as PHP, seem to encourage using `include`s. However, this can lead to less manageable templates in the long run. It's slightly harder to glance at an included snippet and immediately ascertain its place in your template hierarchy. They're also harder to refactor into the template hierarchy, especially if you include them at several levels (once in `Base`, twice in `Story_base`, once in some of the `Story_*`, etc.).
If there is common code between the story templates that isn't needed site-wide, I'd create a `story_base` (extending the original `base`) and have my story templates extend that.
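To make the inheritance hierarchy from the answers above concrete, here is a hedged sketch of the three-level layout (the file contents are illustrative; only the `story_base` idea comes from the answers, everything else is assumed):

```django
{# base.html -- site-wide skeleton #}
<html><body>
  {% block sidebar %}site-wide sidebar{% endblock %}
  {% block content %}{% endblock %}
</body></html>

{# story_base.html -- intermediate template holding the shared story sidebar #}
{% extends "base.html" %}
{% block sidebar %}story-specific sidebar, written once{% endblock %}

{# story_list.html -- story_detail.html works the same way #}
{% extends "story_base.html" %}
{% block content %}list of stories{% endblock %}
```

Both `story_list` and `story_detail` then inherit the shared sidebar without repeating it.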
django template includes
[ "", "python", "django", "django-templates", "" ]
How can I get the classes that are available in a '.cs' file? We can get the classes and methods in an assembly using ``` Assembly.GetTypes() and Type.GetMethods() ``` to get the classes and methods in that assembly. Similarly, how can I get all the classes present within a C# file (.cs file)? I need to get the **classes in a .cs file**, from which I can easily get the methods within them and further details such as the methods' parameters.
**Short of using a C# parser**, there's no direct way of doing it. You could compile the `.cs` file using `CSharpCodeProvider` (which only works if the file compiles on its own and you can tell all the referenced assemblies to the compiler) and use reflection on the resulting assembly.
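As a rough illustration of that compile-then-reflect approach (a sketch only — the file path `MyClasses.cs` and the referenced assemblies are assumptions, and it only works if the file compiles on its own):

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class Program
{
    static void Main()
    {
        var provider = new CSharpCodeProvider();
        var parameters = new CompilerParameters { GenerateInMemory = true };
        parameters.ReferencedAssemblies.Add("System.dll"); // add whatever the file references

        // "MyClasses.cs" is a hypothetical path to the source file being inspected.
        CompilerResults results = provider.CompileAssemblyFromFile(parameters, "MyClasses.cs");
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("The file does not compile on its own.");

        // Reflect over the resulting in-memory assembly.
        foreach (Type type in results.CompiledAssembly.GetTypes())
        {
            Console.WriteLine(type.FullName);
            foreach (var method in type.GetMethods())
                Console.WriteLine("  " + method.Name);
        }
    }
}
```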
I recommend using a parser generator tool to generate a quick C# parser; you can use [Antlr](http://www.antlr.org/). Also you can check [this](http://www.codeplex.com/csparser) and [this](http://www.temporal-wave.com/)
How to get classes and methods from a .cs file using reflection in C#?
[ "", "c#", "reflection", "" ]
If the table id is known – so the table can be obtained with `document.getElementById(table_id)` – how can I append a TR element to that table in the easiest way? The TR is as follows: ``` <tr><td><span>something here..</span></td></tr> ```
The first approach uses DOM methods, and the second uses the non-standard but widely supported innerHTML property. Both assume `tbody` refers to the table's body, e.g. `var tbody = document.getElementById(table_id).tBodies[0];` ``` var tr = document.createElement("tr"); var td = document.createElement("td"); var span = document.createElement("span"); var text = document.createTextNode("something here.."); span.appendChild(text); td.appendChild(span); tr.appendChild(td); tbody.appendChild(tr); ``` OR ``` tbody.innerHTML += "<tr><td><span>something here..</span></td></tr>" ```
The most straightforward, standards compliant and library-independent method to insert a table row is using [the insertRow method](https://developer.mozilla.org/en/DOM/table.insertRow) of the table object. ``` var tableRef = document.getElementById(tableID); // Insert a row in the table at row index 0 var newRow = tableRef.insertRow(0); ``` P.S. Works in IE6 too, though it may have some quirks at times.
what's the easiest method to append a TR to a table by javascript?
[ "", "javascript", "dom", "" ]
The following doesn't work... (at least not in Firefox: `document.getElementById('linkid').click()` is not a function) ``` <script type="text/javascript"> function doOnClick() { document.getElementById('linkid').click(); //Should alert('/testlocation'); } </script> <a id="linkid" href="/testlocation" onclick="alert(this.href);">Testlink</a> ```
You need to [`apply`](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Function/apply) the event handler in the context of that element: ``` var elem = document.getElementById("linkid"); if (typeof elem.onclick == "function") { elem.onclick.apply(elem); } ``` Otherwise `this` would reference the context the above code is executed in.
The best way to solve this is to use Vanilla JS, but if you are already using jQuery, there's a very easy solution: ``` <script type="text/javascript"> function doOnClick() { $('#linkid').click(); } </script> <a id="linkid" href="/testlocation" onclick="alert(this.href);">Testlink</a> ``` Tested in IE8-10, Chrome, Firefox.
How can I programmatically invoke an onclick() event from a anchor tag while keeping the ‘this’ reference in the onclick function?
[ "", "javascript", "dom-events", "" ]
I have two TabItems. The user enters some data and saves it on the first tab. The second tab lists the saved data. What I need is: when the user selects the second tab before saving the data on the first tab, a confirmation message box with Yes, No and Cancel buttons should be shown. If the user clicks Yes, the data should be saved and the second tab shown. If he clicks No, the data is not saved and the second tab is shown anyway. Finally, if Cancel is clicked, the first tab stays selected with all the entered data intact. How can I do this?
To keep things simple you can do the following in the Code Behind file. I'd create a Model class of the data you want to display and edit in the WPF Control. Make the Model implement the `INotifyPropertyChanged` and `IEditableObject` interfaces. `INotifyPropertyChanged` will allow you to Bind to the Model. `IEditableObject` will allow you to provide Edit, Save and Cancel functionality. The TabControl has a `SelectionChanged` Event you can handle, which will allow you to detect when the user changes tabs; in this handler you can use `System.Windows.MessageBox` to ask the user to save etc. `System.Windows.MessageBox.Show()` returns a `MessageBoxResult` Object you can use to determine what button the user clicked and perform the appropriate action. This is not a great way to do things, but it keeps things simple; you may want to look into some WPF design Patterns to help with Code Manageability. If you need anything explained further, just ask.
Although I disagree with the way you interrupt the user's flow from tab to tab I'm going to humor you and answer the question: You'll need two things to get this done: 1. The event that occurs when a tab was clicked 2. The previous tab that was selected (the one you came from) **The first item:** The tab control has a Click method that you can subscribe to: ``` Click="MyTabButton_Click" ``` **The second item:** This part you'll have to do manually. You can set a variable in the click event which contains what tab was last selected. Once this is set you can check a variable (which you previously set) as to what tab was previously selected. You can then do all your validation.
Confirmation on switching between tabs
[ "", "c#", ".net", "wpf", "tabcontrol", "tabcontainer", "" ]
I'm having a hard time finding a good resource that explains how to use Named Capturing Groups in C#. This is the code that I have so far: ``` string page = Encoding.ASCII.GetString(bytePage); Regex qariRegex = new Regex("<td><a href=\"(?<link>.*?)\">(?<name>.*?)</a></td>"); MatchCollection mc = qariRegex.Matches(page); CaptureCollection cc = mc[0].Captures; MessageBox.Show(cc[0].ToString()); ``` However this always just shows the full line: ``` <td><a href="/path/to/file">Name of File</a></td> ``` I have experimented with several other "methods" that I've found on various websites but I keep getting the same result. How can I access the named capturing groups that are specified in my regex?
Use the group collection of the Match object, indexing it with the capturing group name, e.g. ``` foreach (Match m in mc){ MessageBox.Show(m.Groups["link"].Value); } ```
You specify the named capture group string by passing it to the indexer of the `Groups` property of a resulting `Match` object. Here is a small example: ``` using System; using System.Text.RegularExpressions; class Program { static void Main() { String sample = "hello-world-"; Regex regex = new Regex("-(?<test>[^-]*)-"); Match match = regex.Match(sample); if (match.Success) { Console.WriteLine(match.Groups["test"].Value); } } } ```
How do I access named capturing groups in a .NET Regex?
[ "", "c#", ".net", "regex", "" ]
Is there a way to retrieve type `T` from `IEnumerable<T>` through reflection? For example, I have a variable `IEnumerable<Child> info;` and I want to retrieve Child's type through reflection.
``` IEnumerable<T> myEnumerable; Type type = myEnumerable.GetType().GetGenericArguments()[0]; ``` Thusly, ``` IEnumerable<string> strings = new List<string>(); Console.WriteLine(strings.GetType().GetGenericArguments()[0]); ``` prints `System.String`. See [MSDN](http://msdn.microsoft.com/en-us/library/system.type.getgenericarguments.aspx) for `Type.GetGenericArguments`. **Edit:** I believe this will address the concerns in the comments: ``` // returns an enumeration of T where o : IEnumerable<T> public IEnumerable<Type> GetGenericIEnumerables(object o) { return o.GetType() .GetInterfaces() .Where(t => t.IsGenericType && t.GetGenericTypeDefinition() == typeof(IEnumerable<>)) .Select(t => t.GetGenericArguments()[0]); } ``` Some objects implement more than one generic `IEnumerable` so it is necessary to return an enumeration of them. **Edit:** Although, I have to say, it's a terrible idea for a class to implement `IEnumerable<T>` for more than one `T`.
I'd just make an extension method. This worked with everything I threw at it. ``` public static Type GetItemType<T>(this IEnumerable<T> enumerable) { return typeof(T); } ```
getting type T from IEnumerable<T>
[ "", "c#", "generics", "reflection", "" ]
I have this HTML: ``` <span class="price"> $61.00   <span class="detailFreeShipping">With Free Shipping</span> </span> ``` How do I write a jQuery selector that gives me only the price text, "$61.00"? I want to make it as generic as I can, because in some cases there may be no inner span, just the parent one.
You can do this: ``` jQuery.fn.extend({ textOnly: function() { // make a clone of this object so we are not changing the actual DOM var obj = $(this).clone(); // remove all the children, i.e. any DOM objects obj.children().remove(); // get the text value after all DOM elements are removed return obj.text(); } }); ``` then you can call it like this ``` var price = $(".price").textOnly(); ``` you will get the value you want.
I don't know jQuery, but just to pseudo-respond your question, you can do this by: ``` var elem = document.getElementById('yourid'); // or document.getElementsByTagName('span')[0]; var text = elem.innerHTML; text = text.substr(0, (text.indexOf('<') > -1 ? text.indexOf('<') : text.length)); ```
how to select just the top element text?
[ "", "javascript", "jquery", "html", "" ]
I have a problem from "The C++ Standard Library Extensions": > Exercise 6: I said in Section 2.4.2 that you shouldn't construct two shared\_ptr objects from the same pointer. The danger is that both shared\_ptr objects or their progeny will eventually try to delete the resource, and that usually leads to trouble. In fact, you can do this if you're careful. It's not particularly useful, but write a program that constructs two shared\_ptr objects from the same pointer and deletes the resource only once. Below is my answer: ``` template <typename T> void nonsence(T*){} struct SX { int data; SX(int i = 0) : data(i) { cout << "SX" << endl; } ~SX() { cout << "~SX" << endl; } }; int main(int argc, char **argv) { SX* psx=new SX; shared_ptr<SX> sp1(psx),sp2(psx,nonsence<SX>); cout<<sp1.use_count()<<endl; return 0; } ``` but I don't think it is a good solution, because I don't want to solve it through the constructor. Can anyone give me a better one? Thanks, and forgive my bad English.
I got the "STANDARD" answer from boost doc : <http://www.boost.org/doc/libs/1%5F38%5F0/libs/smart%5Fptr/sp%5Ftechniques.html#another_sp>
All you need to do is construct the second `shared_ptr` from the first `shared_ptr`. ``` shared_ptr<SX> sp1( new SX ); shared_ptr<SX> sp2( sp1 ); ``` The created SX will then properly be deleted only when all shared pointers to it are destroyed.
construct two shared_ptr objects from the same pointer
[ "", "c++", "shared-ptr", "tr1", "" ]
I have a SP that has the following algorithm. ``` IF <SomeCondition> BEGIN SELECT * FROM TABLE1 END ELSE BEGIN SELECT * FROM TABLE2 END --Union the above with the query below UNION SELECT * FROM TABLE3 ``` The recordset returned is EXACTLY the same. I need to do a UNION of that resultset and another query. Is there a way to do this without having to use a temp table?
How about: ``` SELECT * FROM TABLE1 WHERE <SomeCondition> UNION SELECT * FROM TABLE2 WHERE NOT <SomeCondition> UNION SELECT * FROM TABLE3 ``` If you're worried about evaluating twice: ``` DECLARE @condition bit SET @condition = CASE WHEN <SomeCondition> THEN 1 ELSE 0 END SELECT * FROM TABLE1 WHERE @condition = 1 UNION SELECT * FROM TABLE2 WHERE @condition = 0 UNION SELECT * FROM TABLE3 ```
You could also use dynamic SQL if you don't mind that it isn't compiled. For example: ``` DECLARE @sql VARCHAR(100) DECLARE @table VARCHAR(10) IF <SomeCondition> BEGIN SET @table = 'Table1' END ELSE BEGIN SET @table = 'Table2' END SET @sql = 'SELECT * FROM ' + @table + ' UNION SELECT * FROM TABLE3' EXEC(@sql) ```
SQL Server 2005 - If condition with union
[ "", "sql", "sql-server", "sql-server-2005", "" ]
The .Net framework has an Array.Sort overload that allows one to specify the starting and ending indicies for the sort to act upon. However these parameters are only 32 bit. So I don't see a way to sort a part of a large array when the indicies that describe the sort range can only be specified using a 64-bit number. I suppose I could copy and modify the the framework's sort implementation, but that is not ideal. Update: I've created two classes to help me around these and other large-array issues. One other such issue was that long before I got to my memory limit, I start getting OutOfMemoryException's. I'm assuming this is because the requested memory may be available but not contiguous. So for that, I created class BigArray, which is a generic, dynamically sizable list of arrays. It has a smaller memory footprint than the framework's generic list class, and does not require that the entire array be contiguous. I haven't tested the performance hit, but I'm sure its there. ``` public class BigArray<T> : IEnumerable<T> { private long capacity; private int itemsPerBlock; private int shift; private List<T[]> blocks = new List<T[]>(); public BigArray(int itemsPerBlock) { shift = (int)Math.Ceiling(Math.Log(itemsPerBlock) / Math.Log(2)); this.itemsPerBlock = 1 << shift; } public long Capacity { get { return capacity; } set { var requiredBlockCount = (value - 1) / itemsPerBlock + 1; while (blocks.Count > requiredBlockCount) { blocks.RemoveAt(blocks.Count - 1); } while (blocks.Count < requiredBlockCount) { blocks.Add(new T[itemsPerBlock]); } capacity = (long)itemsPerBlock * blocks.Count; } } public T this[long index] { get { Debug.Assert(index < capacity); var blockNumber = (int)(index >> shift); var itemNumber = index & (itemsPerBlock - 1); return blocks[blockNumber][itemNumber]; } set { Debug.Assert(index < capacity); var blockNumber = (int)(index >> shift); var itemNumber = index & (itemsPerBlock - 1); blocks[blockNumber][itemNumber] = value; } } public IEnumerator<T> 
GetEnumerator() { for (long i = 0; i < capacity; i++) { yield return this[i]; } } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return this.GetEnumerator(); } } ``` And getting back to the original issue of sorting... What I really needed was a way to act on each element of an array, in order. But with such large arrays, it is prohibitive to copy the data, sort it, act on it and then discard the sorted copy (the original order must be maintained). So I created static class OrderedOperation, which allows you to perform an arbitrary operation on each element of an unsorted array, in a sorted order. And do so with a low memory footprint (trading memory for execution time here). ``` public static class OrderedOperation { public delegate void WorkerDelegate(int index, float progress); public static void Process(WorkerDelegate worker, IEnumerable<int> items, int count, int maxItem, int maxChunkSize) { // create a histogram such that a single bin is never bigger than a chunk int binCount = 1000; int[] bins; double binScale; bool ok; do { ok = true; bins = new int[binCount]; binScale = (double)(binCount - 1) / maxItem; int i = 0; foreach (int item in items) { bins[(int)(binScale * item)]++; if (++i == count) { break; } } for (int b = 0; b < binCount; b++) { if (bins[b] > maxChunkSize) { ok = false; binCount *= 2; break; } } } while (!ok); var chunkData = new int[maxChunkSize]; var chunkIndex = new int[maxChunkSize]; var done = new System.Collections.BitArray(count); var processed = 0; var binsCompleted = 0; while (binsCompleted < binCount) { var chunkMax = 0; var sum = 0; do { sum += bins[binsCompleted]; binsCompleted++; } while (binsCompleted < binCount - 1 && sum + bins[binsCompleted] <= maxChunkSize); Debug.Assert(sum <= maxChunkSize); chunkMax = (int)Math.Ceiling((double)binsCompleted / binScale); var chunkCount = 0; int i = 0; foreach (int item in items) { if (item < chunkMax && !done[i]) { chunkData[chunkCount] = item; 
chunkIndex[chunkCount] = i; chunkCount++; done[i] = true; } if (++i == count) { break; } } Debug.Assert(sum == chunkCount); Array.Sort(chunkData, chunkIndex, 0, chunkCount); for (i = 0; i < chunkCount; i++) { worker(chunkIndex[i], (float)processed / count); processed++; } } Debug.Assert(processed == count); } } ``` The two classes can work together (that's how I use them), but they don't have to. I hope someone else finds them useful. But I'll admit, they are fringe case classes. Questions welcome. And if my code sucks, I'd like to hear tips, too. One final thought: As you can see in OrderedOperation, I'm using ints and not longs. Currently that is sufficient for me despite the original question I had (the application is in flux, in case you can't tell). But the class should be able to handle longs as well, should the need arise.
You'll find that even on the 64-bit framework, the maximum number of elements in an array is `int.MaxValue`. The existing methods that take or return `Int64` just cast the `long` values to `Int32` internally and, in the case of parameters, will throw an `ArgumentOutOfRangeException` if a `long` parameter isn't between `int.MinValue` and `int.MaxValue`. For example the `LongLength` property, which returns an `Int64`, just casts and returns the value of the `Length` property: ``` public long LongLength { get { return (long)this.Length; } // Length is an Int32 } ``` So my suggestion would be to cast your `Int64` indicies to `Int32` and then call one of the existing `Sort` overloads.
Since Array.Copy takes Int64 params, you could pull out the section you need to sort, sort it, then put it back. Assuming you're sorting less than 2^32 elements, of course.
How to sort a part of an array with int64 indices in C#?
[ "", "c#", ".net", "arrays", "sorting", "int64", "" ]
I am looking for a Java library to display map data from various sources, including shapefile, WMS, WFS, Google Maps, possibly ArcIMS, etc. It seems like OpenLayers is the closest thing to what I want, except it's a JavaScript library, and I'm writing a Swing application. GDAL looks promising, but as far as I can tell there won't be Java bindings until "sometime" in the future. Just to be clear, I am looking for a single Java API that I can use to display maps from a number of map servers/sources. Does anyone know if anything like this exists, and if not, where to go from here? Should I build this API on top of GeoTools? Or...
GeoTools is a good bet for this. The Google Maps [Terms of Service](http://code.google.com/apis/maps/terms.html) prohibit accessing Google Maps tiles except through the (JavaScript) Google Maps API, so it's not likely that you'll find a freely available codebase to access them in a Java application. Similar restrictions probably apply to other commercial layers.
This is a case where you have lots of choice. GeoTools is indeed a low level library and an excellent choice for some hands on work. We recently started using Swing for our example code in order to make tutorials [more visual](http://geotoolsnews.blogspot.com/2009/11/foss4g-geotools-tutorials.html). That said they are tutorial code only and do not represent a multi threaded system such as uDig. If you are looking for Swing applications to start from there is the Jump family (sadly scattered through a closed development process - OpenJump represents a collection of the various forks), gvSig which has excellent financial backing in Spain, and so on. You may also wish to look at the library side of the deegree project (which I seem to recall produced a desktop application as well?). I also recommend looking at some of the tiled map viewers produced around Nasa World Wind and Open Street Map datasets. These will be more similar to the OpenLayers experience. Disclaimer I am on the steering committee for GeoTools, and uDig.
Java equivalent of OpenLayers
[ "", "java", "api", "swing", "gis", "geotools", "" ]
I'm trying to read POST data in a PHP script. My first instincts led me to the `$_POST[]` array, but for whatever reason, it's never populated. I'm using HTTP Client for OS X, <http://ditchnet.org/httpclient/> to send POST requests. I enter the URL of the script, set the method to POST, set the content-type header to text/plain and then enter `myVar=foobar` as the body of the request. I hit send, and there's nothing in the `$_POST[]` array. I tried another route after reading some questions here on StackOverflow, and tried reading from the `$HTTP_RAW_POST_DATA`, but no dice there either. I also tried reading from the `php://input` stream, and nothing. As a heads up, I'm able to read from the `$_GET[]` array if I add some parameters to the URL, but I don't want to do that. I also need to post the data from a different application, so I can't use the HTML post forms... Any help would be awesome, thanks guys!
Set the Content-Type header to application/x-www-form-urlencoded. text/plain is [not](http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4) a valid content type for post data.
If you want to read the raw request-body, you can use: ``` file_get_contents('php://input'); ```
Reading Data From $_POST[] in PHP
[ "", "php", "http", "post", "" ]
Regardless of other options that may achieve the same result (i.e. adding breakpoints by hand), is it possible to programmatically add a breakpoint into the source code of a Visual Studio project? Such as: ``` try { FunctionThatThrowsErrors(obj InscrutableParameters); } catch(Exception ex) { Log.LogTheError(ex); AddBreakPointToCallingFunction(); } ``` That way when you run in debug the next time, it will automatically have set breakpoints at all the points that caused trouble during the last run. I'm not saying that's a particularly useful way of debugging. I'm just wondering if the capability is there.
You inspired me to poke around with this - thanks for keeping me awake all night. :) Here's one way you can do it. Visual Studio has really great breakpoint support. One of the cooler features is that you can tell it to run a Visual Studio macro when the breakpoint is hit. These macros have full access to the development environment, i.e. they can do whatever you could do manually at the keyboard, including setting other breakpoints. This solution is to 1) put a top-level try/catch in your program to catch all exceptions, 2) put a breakpoint in the catch block that runs your macro, and 3) have the macro look at the exception to figure out where it came from, and put a breakpoint there. When you run it in the debugger and an exception occurs, you'll have a new breakpoint at the offending line of code. Take this sample program: ``` using System; namespace ExceptionCallstack { class Program { static void Main(string[] args) { try { func1(); } catch (Exception e) { Console.WriteLine("Oops"); Console.ReadKey(); } } static void func1() { func2(); } static void func2() { func3(); } static void func3() { throw new Exception("Boom!"); } } } ``` The objective is to programmatically set a breakpoint on that `throw` in func3 when you run it in the debugger and get the error. To do this, first create a new Visual Studio macro (I called mine SetBreakpointOnException). 
Paste this into a new module MyDebuggerMacros or whatever: ``` Imports System Imports EnvDTE Imports EnvDTE80 Imports EnvDTE90 Imports System.Diagnostics Imports System.Text.RegularExpressions Public Module DebuggerMacros Sub SetBreakpointOnException() Dim output As String = "" Dim stackTrace As String = DTE.Debugger.GetExpression("e.StackTrace").Value stackTrace = stackTrace.Trim(New Char() {""""c}) Dim stackFrames As String() = Regex.Split(stackTrace, "\\r\\n") Dim r As New Regex("^\s+at .* in (?<file>.+):line (?<line>\d+)$", RegexOptions.Multiline) Dim match As Match = r.Match(stackFrames(0)) Dim file As String = match.Groups("file").Value Dim line As Integer = Integer.Parse(match.Groups("line").Value) DTE.Debugger.Breakpoints.Add("", file, line) End Sub End Module ``` Once this macro is in place, go back to the `catch` block and set a breakpoint with F9. Then right-click the red breakpoint circle and select "When Hit...". At the bottom of the resulting dialog there's an option to tell it to run a macro - drop down the list and pick your macro. Now you should get new breakpoints when your app throws unhandled exceptions. Notes and caveats about this: * I am **not** a regex guru, I'm sure someone else can whip up something better. * This doesn't handle nested exceptions (InnerException property) - you can beat your head against that if you want. :) Check for GetExpression("e.InnerException") and recurse, perhaps. * It does text parsing on the excpetion's StackTrace string, not more-sophisticated object graph analysis (digging down into Exception.TargetSite and using reflection). The usual caveats apply about the fragility of this approach. * For some reason it seems to put the breakpoint into some "alternate space". Once the initial debugging session is over, you don't see the new breakpoint in your code. But it's there if you run the program again in the debugger, and things like "Disable All Breakpoints" affect it. 
It would be nice to learn about what's going on, if someone feels like finding a way to clean that up. Maybe digging around in the .suo file? Hope this helps!
You can call [`System.Diagnostics.Debugger.Break()`](http://msdn.microsoft.com/en-us/library/system.diagnostics.debugger.break.aspx). You can also tell Visual Studio to break on all exceptions, even handled ones, by going on the menu to `Debug->Exceptions...` and checking `Thrown` everywhere that's currently only checked "User-unhandled".
Programmatically apply / deactivate breakpoints in Visual Studio
[ "", "c#", ".net", "visual-studio", "breakpoints", "" ]
I was doing some beginner AppEngine dev on a Windows box and installed Eclipse for that. I liked the autocompletion I got with the objects and functions. I moved my dev environment over to my Macbook, and installed Eclipse Ganymede. I installed the AppEngine SDK and Eclipse plug in. However, when I am typing out code now, the autocomplete isn't functioning. Did I miss a step? UPDATE Just to add to this: the line: ``` import cgi ``` appears to give me what I need. When I type "cgi." I get all of the auto complete. However, the lines: ``` from google.appengine.api import users from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app from google.appengine.ext import db ``` don't give me any auto complete. If I type "users." there is no auto complete.
I found the answer from a poster on the Google App Engine forum. I will blog the experience later, but this one gets a FAIL for GOOG. You need to click on the download link for "Linux/Other" to get the ZIP file with the right files. You can then point your PYTHONPATH at the directory on your local machine. The fail here is that the Mac SDK download gives you the AppEngineLauncher, but not these files. Nowhere on the AppEngine site, or in the directions GOOG publishes, do they suggest that as a Mac developer you should download these "other" files. Very, very, very annoying.
The App Engine plugin for eclipse is for the Java SDK, not for the Python one. I don't know eclipse very well, but I suspect what you're missing is simply the path to the App Engine SDK included in eclipse's pythonpath configuration. Find somewhere to add that, and it'll be able to find the SDK classes and offer autocomplete suggestions for them.
Eclipse + AppEngine =? autocomplete
[ "", "python", "eclipse", "google-app-engine", "" ]
I'm trying to convert one range of numbers to another, maintaining ratio. Maths is not my strong point. I have an image file where point values may range from -16000.00 to 16000.00 though the typical range may be much less. What I want to do is compress these values into the integer range 0-100, where 0 is the value of the smallest point, and 100 is the value of the largest. All points in between should keep a relative ratio even though some precision is being lost I'd like to do this in python but even a general algorithm should suffice. I'd prefer an algorithm where the min/max or either range can be adjusted (ie, the second range could be -50 to 800 instead of 0 to 100).
``` NewValue = (((OldValue - OldMin) * (NewMax - NewMin)) / (OldMax - OldMin)) + NewMin ``` Or a little more readable: ``` OldRange = (OldMax - OldMin) NewRange = (NewMax - NewMin) NewValue = (((OldValue - OldMin) * NewRange) / OldRange) + NewMin ``` Or if you want to protect for the case where the old range is 0 (*OldMin = OldMax*): ``` OldRange = (OldMax - OldMin) if (OldRange == 0) NewValue = NewMin else { NewRange = (NewMax - NewMin) NewValue = (((OldValue - OldMin) * NewRange) / OldRange) + NewMin } ``` Note that in this case we're forced to pick one of the possible new range values arbitrarily. Depending on context, sensible choices could be: `NewMin` (*see sample*), `NewMax` or `(NewMin + NewMax) / 2`
That's a simple linear conversion. ``` new_value = ( (old_value - old_min) / (old_max - old_min) ) * (new_max - new_min) + new_min ``` So converting 10000 on the scale of -16000 to 16000 to a new scale of 0 to 100 yields: ``` old_value = 10000 old_min = -16000 old_max = 16000 new_min = 0 new_max = 100 new_value = ( ( 10000 - -16000 ) / (16000 - -16000) ) * (100 - 0) + 0 = 81.25 ```
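For completeness, the same linear mapping as a self-contained Python function (the name `remap` and the guard for a zero-width source range are my additions):

```python
def remap(value, old_min, old_max, new_min, new_max):
    """Linearly map value from [old_min, old_max] to [new_min, new_max]."""
    old_range = old_max - old_min
    if old_range == 0:
        # Degenerate source range: every input maps to the same output.
        return new_min
    return (value - old_min) * (new_max - new_min) / old_range + new_min

print(remap(10000, -16000, 16000, 0, 100))  # 81.25
```

Passing the question's example range reproduces the 81.25 worked out above; swapping in new_min=-50, new_max=800 adjusts the target range with no other changes.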
Convert a number range to another range, maintaining ratio
[ "", "python", "math", "" ]
Using Apache POI to generate a document and i'm having a small problem with cell styles, currentlly i'm using: ``` CellStyle currencyCellStyle=workbook.createCellStyle(); currencyCellStyle.setDataFormat(format.getFormat("$#,##0.00")); ``` Which works totally fine for positive numbers, however I would like to assign a different style to negative cells automatically. Question is is there any to set this up without having to check the individual cell values and assign a separate style to them? Or alternatively is there any way to tell Apache POI to use the built in excel currency format with one of its negative options?
Found it, thanks me :D ``` CellStyle currencyCellStyle=workbook.createCellStyle(); currencyCellStyle.setDataFormat(format.getFormat("$#,##0.00;[Red]($#,##0.00)")); ```
Why poi REFUSES to support the FIRST option in excel currency formatting is beyond me! [![enter image description here](https://i.stack.imgur.com/OrlzU.png)](https://i.stack.imgur.com/OrlzU.png) I don't like using the DecimalFormat for currency because your end cell value becomes a non-numeric with the introduction of the currency symbol. While working for a major financial institution, I was tasked with resolving this formatting issue. The core idea of this change is, because POI refuses to be reasonable and have comprehensive support of Excel's native options, I will infiltrate their code and change their values at the core. The following is my WORKAROUND: ``` private static final String CURRENCY_FORMAT_OVERRIDE = "\"$\"#,##0.00_);-\"$\"#,##0.00"; private static final String CURRENCY_FORMAT_TARGET = "\"$\"#,##0.00_);(\"$\"#,##0.00)"; static { // static class level initializer try { java.lang.reflect.Field field = org.apache.poi.ss.usermodel.BuiltinFormats.class.getDeclaredField("_formats"); field.setAccessible(true); String[] _formats = (String[])field.get(new org.apache.poi.ss.usermodel.BuiltinFormats()); for(int i = 0; i < _formats.length; ++i) { if(_formats[i].equals(CURRENCY_FORMAT_TARGET)) { _formats[i]=CURRENCY_FORMAT_OVERRIDE; System.out.println("TAKE THAT, POI!!!"); } } } catch (Exception e) { // getDeclaredField throws checked exceptions, which must be handled for this to compile throw new ExceptionInInitializerError(e); } } ```
Negative Cell Value Styles in Apache POI
[ "", "java", "apache", "excel", "apache-poi", "" ]
What is the easiest way to strip all html/javascript from a string?
Django provides a utility function to remove HTML tags: ``` from django.utils.html import strip_tags my_string = '<div>Hello, world</div>' my_string = strip_tags(my_string) print(my_string) # Result will be "Hello, world" without the <div> elements ``` This function used to be unsafe on older Django versions (before 1.7) but nowadays it is **completely safe** to use it. [Here is](https://www.mehmetince.net/django-strip_tags-bypass-vulnerability-exploit/) an article that reviewed this issue when it was relevant.
The [striptags](http://docs.djangoproject.com/en/dev/ref/templates/builtins/#striptags) template filter. ``` {{ value|striptags }} ```
How to strip html/javascript from text input in django
[ "", "python", "django", "" ]
ok I give up, how do you do this in one line? ``` public object Convert(object[] values, Type targetType, object parameter, System.Globalization.CultureInfo culture) { //List<string> fields = values.ToList<string>(); //List<string> fields = values as List<string>; //List<string> fields = (List<string>)values; List<string> fields = new List<string>(); foreach (object value in values) { fields.Add(value.ToString()); } //process the fields here knowning they are strings ... } ```
Are you using C# 3.0 with LINQ? It's pretty easy then: ``` List<string> fields = values.Select(i => i.ToString()).ToList(); ```
If you have LINQ available (in .NET 3.5) and C# 3.0 (for extension methods), then there is quite a nice one liner: ``` var list = values.Cast<string>().ToList(); ``` You're not going to get anything much shorter than what you've posted for .NET 2.0/C# 2.0. **Caveat:** I just realised that your `object[]` isn't necessarily of type `string`. If that is in fact the case, go with Matt Hamilton's method, which does the job well. If the elements of your array are in fact of type `string`, then my method will of course work.
How to convert object[] to List<string> in one line of C# 3.0?
[ "", "c#", "generics", "list", "casting", "" ]
I have this code here, which is intended to allow any type of arguments: ``` public static void AreEqual(object expectedValue, object actualValue) { if (expectedValue == actualValue) { HttpContext.Current.Response.Write("Equal"); } else { HttpContext.Current.Response.Write("Not Equal"); } } ``` If I call it using a couple of ints it does not behave very well. ``` AreEqual(3, 3) // prints Not Equal ```
At the simplest level: ``` public static void AreEqual(object expectedValue, object actualValue) { if (object.Equals(expectedValue,actualValue)) { HttpContext.Current.Response.Write("Equal"); } else { HttpContext.Current.Response.Write("Not Equal"); } } ``` Or with generics (supports `IEquatable<T>`): ``` public static void AreEqual<T>(T expectedValue, T actualValue) { if (EqualityComparer<T>.Default.Equals(expectedValue,actualValue)) { HttpContext.Current.Response.Write("Equal"); } else { HttpContext.Current.Response.Write("Not Equal"); } } ```
Just to highlight the reason for the "strange" behavior is because when you cast an int to object [boxing](http://msdn.microsoft.com/en-us/magazine/cc301569.aspx) occurs. The two 3s are converted to objects and then you are not comparing numbers anymore, you are comparing references, which will not be the same.
How can I check if two values in c# are equal? (Given any type of value)
[ "", "c#", "types", "equals", "" ]
I have [CakePHP](https://en.wikipedia.org/wiki/CakePHP) (cake_1.2.2.8120) and [EasyPHP](https://de.wikipedia.org/wiki/EasyPHP) (3.0) installed on [Windows Vista Ultimate](https://en.wikipedia.org/wiki/Windows_Vista_editions#Editions_for_personal_computers). I followed the "baking" tutorials online and successfully set the database connection with the "cake bake" command from the CLI. I baked the controller for my "Users" table using the "php cake.php bake controller Users" command - worked fine. However, when I tried baking the view similarly, I got this message: > ``` > Fatal error: Call to undefined function mysql_connect() in C:\Web\EasyPHP\www\cake\cake\libs\model\datasources\dbo\dbo_mysql.php on line 374 > ``` Also, the MySQL module is enabled for PHP on the [phpMyAdmin](https://en.wikipedia.org/wiki/PhpMyAdmin) page (web), but I can't find it in the CLI (using "php -m" from the command line) even though I've uncommented the "extension=php_mysql.dll" line in the php.ini file. How can I fix this?
Grab [XAMPP](https://en.wikipedia.org/wiki/XAMPP) which has Apache with the MySQL and PHP modules set up and working together, as well as the MySQL and command-line versions. It just unzips to any directory, so it won't mess up any other Apache installs you have. Just be careful of any port conflicts between the different installations. *[Download XAMPP](http://www.apachefriends.org/en/xampp-windows.html)*
The root cause of the problem is that in EasyPHP 5.3 there isn't any *php.ini* file in the *php* folder. Copy `php.ini` from `EasyPHP5.3\apache` to `EasyPHP5.3\php` to solve this nasty issue.
Why can't CakePHP bake connect to MySQL running under EasyPHP on Windows Vista Ultimate?
[ "", "php", "mysql", "cakephp", "easyphp", "cakephp-1.2", "" ]
When using new Date, I get something like the following: Fri May 29 2009 22:39:02 GMT+0800 (China Standard Time), but what I want is a time string formatted like xxxx-xx-xx xx:xx:xx
Although it doesn't pad to two characters in some of the cases, it does what I expect you want ``` function getFormattedDate() { var date = new Date(); var str = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + " " + date.getHours() + ":" + date.getMinutes() + ":" + date.getSeconds(); return str; } ```
jonathan's answer lacks the leading zero. There is a simple solution to this: ``` function getFormattedDate(){ var d = new Date(); d = d.getFullYear() + "-" + ('0' + (d.getMonth() + 1)).slice(-2) + "-" + ('0' + d.getDate()).slice(-2) + " " + ('0' + d.getHours()).slice(-2) + ":" + ('0' + d.getMinutes()).slice(-2) + ":" + ('0' + d.getSeconds()).slice(-2); return d; } ``` basically add 0 and then just take the last 2 characters. So 031 will take 31. 01 will take 01... [jsfiddle](http://jsfiddle.net/aq5jywk5/)
how to get formatted date time like 2009-05-29 21:55:57 using javascript?
[ "", "javascript", "datetime", "" ]
I'm trying to scroll down 100px every time the user gets near the top of the document. I have the function executing when the user gets close to the top of the document, but the .scrollTo function isn't working. I put an alert after and before to check to see if it actually was the line or not that was stopping it and only the first alert goes off, here's the code: ``` alert("starting"); $.scrollTo({ top: '+=100px', left: '+=0px' }, 800); alert("finished"); ``` I know I have the jquery page linked properly because I'm using many other jquery functions throughout and they all work fine. I've also tried removing the 'px' from above and it doesn't seem to make a difference.
If it's not working why don't you try using jQuery's scrollTop method? ``` $("#id").scrollTop($("#id").scrollTop() + 100); ``` If you're looking to scroll smoothly you could use basic javascript setTimeout/setInterval function to make it scroll in increments of 1px over a set length of time.
``` $('html, body').animate({scrollTop: $("#page").offset().top}, 2000); ```
How to scroll the window using JQuery $.scrollTo() function
[ "", "javascript", "jquery", "scroll", "scrollto", "" ]
I have an alphanumeric string like below, ``` $string_1 = "a4nas60dj71wiena15sdl1131kg12b" ``` and would like to change it to something like below, ``` $string_2 = "a4NaS60dJ71wIeNa15Sdl1131Kg12B" ``` How would I go about doing this? I have tried the below code, but it doesn't work. ``` $lenght = strlen($string_1); for ( $i = 0; $i <= $length - 1; $i += 1) { if ( $i % 2) { $string_2[$i]=strtoupper($string_1[$i]); } else { $string_2[$i]=$string_1[$i]; } } echo $string_2; ``` The above code prints out "Array" so something is definitely not working.
By the way, you have a slight error in your capitalized string: ``` $string_1: a4nas60dj71wiena15sdl1131kg12b $string_2: a4NaS60dJ71wIeNa15Sdl1131Kg12B ^ should be capital so out of sync for rest of string ``` I'll give you two ways of doing it: ``` <?php header('Content-Type: text/plain'); $string_1 = "a4nas60dj71wiena15sdl1131kg12b"; $string_2 = "a4NaS60dJ71wIeNa15Sdl1131Kg12B"; $letter_count = 0; $result = ''; for ($i=0; $i<strlen($string_1); $i++) { if (!preg_match('![a-zA-Z]!', $string_1[$i])) { $result .= $string_1[$i]; } else if ($letter_count++ & 1) { $result .= strtoupper($string_1[$i]); } else { $result .= $string_1[$i]; } } $result2 = preg_replace_callback('!([a-zA-Z]\d*)([a-zA-Z])!', 'convert_to_upper', $string_1); function convert_to_upper($matches) { return strtolower($matches[1]) . strtoupper($matches[2]); } echo "$string_1\n"; echo "$string_2\n"; echo "$result\n"; echo "$result2\n"; ?> ``` **Note:** The above makes several assumptions: 1. Characters other than numbers and letters can be in the string; 2. You want to alternate case regardless of the original (eg "ASDF" becomes "aSdF"); 3. You're capitalizing every second letter, not every second lowercase letter. The above can be altered if these assumptions are incorrect.
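The letter-counting approach translates directly to other languages; here is a rough Python sketch for comparison (the function name `alternate_caps` is mine, and it keeps the same assumptions: non-letters pass through unchanged and every second letter is upper-cased):

```python
def alternate_caps(s):
    """Upper-case every second letter; non-letters pass through and are not counted."""
    letter_count = 0
    out = []
    for ch in s:
        if ch.isalpha():
            out.append(ch.upper() if letter_count % 2 else ch.lower())
            letter_count += 1
        else:
            out.append(ch)
    return "".join(out)

print(alternate_caps("a4nas60dj71wiena15sdl1131kg12b"))
```

Note that, as pointed out above, the question's expected string is itself out of sync partway through, so this output will not match it character for character.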
Strings are not arrays, and you're declaring $string\_2 as an array... Still, your code won't work quite as expected, since, you're alternating case of every other *letter*, while ignoring numbers try the following: ``` function altCaps($str) { $lower = true; $str2 = ""; for ($i=0;$len=strlen($str);$i<$len;++$i) { $char = substr($str,$i,1); if (is_numeric($char)) { $str2 .= $char; } else { if ($lower) { $str2 .= strtolower($char); } else { $str2 .= strtolower($char); } $lower = !$lower; } } return $str2; } ```
Alternating Capital Letters in Array Using PHP
[ "", "php", "arrays", "" ]
I have a desktop product which uses an embedded webserver which will use self-signed certs. Is there something that I can put in a web page that would detect that they haven't added the root CA to their trusted list, and display a link or DIV or something directing them how to do it? I'm thinking maybe a DIV that has instructions on install the CA, and a Javascript that runs some test (tries to access something without internal warnings??), and hides the DIV if the test succeeds. Or something like that... Any ideas from the brilliant SO community ? :)
Why do you want to do this? It is a bad idea to train users to indiscriminately install root CA certificates just because a web site tells them to. You are undermining the entire chain of trust. A security conscious user would ignore your advice to install the certificate, and might conclude that you are not taking security seriously since you did not bother to acquire a certificate from an existing CA. Do you really need HTTPS? If so, you should probably bite the bullet and make a deal with a CA to facilitate providing your customers with proper CA signed server certificates. If the web server is only used for local connections from the desktop app, you should either add the self-signed certificate to the trusted list as part of the installation process, or switch to HTTP instead.
Assuming you know C# and you want to install a pfx file: create an exe that will be run from a URL. [Follow this URL](http://blogs.microsoft.co.il/blogs/maxim/archive/2008/03/31/how-to-run-wpf-xbap-application-in-full-trust-mode-post-2-certificate-extraction.aspx) * [and this](http://blogs.microsoft.co.il/blogs/maxim/archive/2008/03/31/how-to-run-wpf-xbap-application-in-full-trust-mode-post-2-certificate-extraction.aspx#234066)
Self-signed certs -- helping users know they need to add root CA to trusted cert store
[ "", "javascript", "browser", "certificate", "ssl-certificate", "self-signed", "" ]
In Java, you can do the following: ``` public interface IEngine{} public interface ICoolEngine extends IEngine{} public interface ICar { IEngine getEngine(); } public interface ICoolCar extends ICar { @Override ICoolEngine getEngine(); } ``` While this nicely solves a problem I've been grappling with, something about it "feels" wrong. Am I committing some nasty design faux pas here?
No, you are doing the right thing. Covariant returns just specify that the class, and classes below it, must return a specific subclass of the original general class argument that the parent class returned. It also means that your subclasses are still compatible with the original interface that requires that it return an Engine, but if you *know* that it is an ICoolCar, that it has an ICoolEngine - because the more specific interface knows of more specific functionality. This applies to interfaces as well as classes - this is correct, proper and useful to boot.
No, that's fine. Since `ICoolEngine` extends `IEngine`, any object implementing `ICoolEngine` can be treated as if it's an `IEngine` (without all the `ICoolEngine`-specific methods of course). You'll just have to be aware of the type difference depending on which interface you are working with in each situation, and make sure not to use `ICoolEngine` methods that aren't defined in `IEngine` (assuming that, in your actual code, there are additional methods listed in the equivalent of `ICoolEngine`). It's not a bad practice to do this; you're simply using the power of polymorphism.
Overriding return type in extended interface - Bad idea?
[ "", "java", "oop", "" ]
We have written our own integration test harness where we can write a number of "operations" or tests, such as "GenerateOrders". We have a number of parameters we can use to configure the tests (such as the number of orders). We then write a second operation to confirm the test has passed/failed (i.e. there are(nt) orders). The tool is used for * Integration Testing * Data generation * End-to-end testing (By mixing and matching a number of tests) It seems to work well; however, it requires development experience to maintain and write new tests. Our test team, who have little C# development experience, would like to get involved. We are just about to start a new Greenfield project and I am doing some research into the optimum way to write and maintain integration tests. The questions are as follows: * How do you perform integration testing? * What tool do you use for it (FitNesse?, Custom?, NUnit)? I am looking forward to people's suggestions/comments. Thanks in advance, David
Integration testing may be done at a user interface level (via automated functional tests - *AFT*) or service/api interface level. There are several tools in both cases: I have worked on projects that successfully used [Sahi](http://sahi.co.in/) or [Selenium](http://seleniumhq.org/) for AFT of web apps, [white](http://www.codeplex.com/white) for AFT for .NET WPF or winforms apps, [swtBot](http://www.eclipse.org/swtbot/) for AFT of Eclipse Rich client apps and [frankenstein](http://frankenstein.openqa.org/) for AFT of Java swing apps. Fitnesse is useful for service/api level tests or for tests that run just below the UI. When done right, it has the advantage of having business-readable tests i.e. non-developers can read and understand the tests. Tools like NUnit are less useful for this purpose. [SOAPUI](http://www.soapui.org) is particularly suited for testing SOAP web services. Factors to consider: * **Duration**: Can you tolerate 8 hour test runs? * **Brittleness**: AFTs can be quite brittle against an evolving application (e.g. ids and positions of widgets may change). Adequate skill and effort is needed to not hard code the changing parts. * **Fidelity**: How close to real world do you want it to be? e.g. You may have to mock out interactions with a payment gateway unless the provider provides you a test environment that you can pummel with your tests. Some nuances are captured [here](http://fragmental.tw/2008/09/29/where-do-acceptance-tests-go-to-die/). Full disclosure: The author is associated with the organization behind most (not all) of the above free and open source tools.
You could try the [Concordion](http://www.concordion.org) framework for writing user acceptance tests in HTML files. It takes a BDD-style approach. There is a [.Net port](http://code.google.com/p/concordion-net/) as well
Whats a great way to perfom integration testing?
[ "", "c#", ".net", "integration-testing", "" ]
I want to build a list containing every possible permutation of capitalization of a word. so it would be ``` List<string> permutate(string word) { List<string> ret = new List<string>(); MAGIC HAPPENS HERE return ret; } ``` So say I put in `"happy"` I should get an array back of ``` {happy, Happy, hAppy, HAppy, haPpy, HaPpy ... haPPY, HaPPY, hAPPY, HAPPY} ``` I know of plenty of functions that will capitalize the first letter but how do I do any arbitrary letter in the word?
You can modify individual characters if you convert your string to an array of char. Something like this should do the trick... ``` public static List<string> Permute( string s ) { List<string> listPermutations = new List<string>(); char[] array = s.ToLower().ToCharArray(); int iterations = (1 << array.Length) - 1; for( int i = 0; i <= iterations; i++ ) { for( int j = 0; j < array.Length; j++ ) array[j] = (i & (1<<j)) != 0 ? char.ToUpper( array[j] ) : char.ToLower( array[j] ); listPermutations.Add( new string( array ) ); } return listPermutations; } ```
Keep in mind that while the accepted answer is the most straightforward way of capitalizing an arbitrary letter, if you are going to change the capitalization repeatedly on the same set of letters (e.g., 32 times in "happy" and growing exponentially for longer words), it will be more efficient to turn the string into a char[], set the appropriate letter(s), and construct the string from the array.
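The bitmask enumeration in the accepted answer is language-agnostic; here is a rough Python equivalent of the same idea (the function name `case_permutations` is my own, and I assume the input is all letters):

```python
def case_permutations(word):
    """Enumerate all 2**n capitalizations of an n-letter word via a bitmask."""
    word = word.lower()
    n = len(word)
    results = []
    for mask in range(1 << n):  # each bit decides whether to upper-case that position
        results.append(
            "".join(
                c.upper() if mask & (1 << i) else c
                for i, c in enumerate(word)
            )
        )
    return results

print(len(case_permutations("happy")))  # 32
```

A 5-letter word like "happy" yields 2**5 = 32 variants, matching the list in the question.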
Permutations of capitalization
[ "", "c#", ".net", "string", "capitalization", "" ]
Let's say I accidentally wrote this: ``` do { } while (true); ``` ...and then ran it. Apart from killing your browser, is there a way to stop javascript execution (the equivalent of Ctrl+Break in basic, or Ctrl+C)? Normally, after about 30 seconds your browser asks if you want to stop the long-running script, but this doesn't always happen (as I just found out)! FYI: A simple loop such as this: `for (i=1; i > 0; ++i);` will cause my browser to crash (Firefox 3.5b4). I don't feel much like testing to see if it's any of my add-ons. Continuously restarting my browser isn't my idea of a fun Monday night.
At least with Chrome, you may be able to kill off the individual tab and not the whole application. [Randolpho](https://stackoverflow.com/users/12716/randolpho) has also informed me that IE8 has similar functionality.
2018 update: In Chrome 67, if you have the DevTools open (`F12`), you can end the infinite loop without killing the whole tab: * Go to the **Sources** panel and click "Pause script execution". * Hold that same button and now select the "Stop" icon. [![enter image description here](https://i.stack.imgur.com/iYiF1.png)](https://i.stack.imgur.com/iYiF1.png) <https://developers.google.com/web/updates/2018/04/devtools#stop>
How do you stop an infinite loop in Javascript?
[ "", "javascript", "" ]
I can't see why this won't work. I am performing lstrip() on the string being passed to the function, and trying to see if it starts with """. For some reason, it gets caught in an infinite loop ``` def find_comment(infile, line): line_t = line.lstrip() if not line_t.startswith('"""') and not line_t.startswith('#'): print (line, end = '') return line elif line.lstrip().startswith('"""'): while True: if line.rstrip().endswith('"""'): line = infile.readline() find_comment(infile, line) else: line = infile.readline() else: line = infile.readline() find_comment(infile, line) ``` And my output: ``` Enter the file name: test.txt import re def count_loc(infile): ``` Here is the top of the file i am reading in for reference: ``` import re def count_loc(infile): """ Receives a file and then returns the amount of actual lines of code by not counting commented or blank lines """ loc = 0 func_records = {} for line in infile: (...) ```
You haven't provided an exit path from the recursive loop. A return statement should do the trick. ``` (...) while True: if line.rstrip().endswith('"""'): line = infile.readline() return find_comment(infile, line) else: line = infile.readline() ```
`while True` is an infinite loop. You need to `break` once you're done.
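To illustrate the point with a hypothetical loop (not the question's code): without a `break` or `return`, `while True` never terminates.

```python
def read_until_blank(lines):
    """Collect items until the first blank one; `break` is the loop's only exit."""
    it = iter(lines)
    collected = []
    while True:
        line = next(it, None)
        if line is None or line.strip() == "":
            break  # without this, the loop would spin forever
        collected.append(line)
    return collected

print(read_until_blank(["import re", "def f():", "", "x = 1"]))
```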
str.startswith() not working as I intended
[ "", "python", "string", "python-3.x", "" ]
What do I need to do in order to reference the double click event for a listview control?
I'm using something like this to only trigger on ListViewItem double-click and not for example when you double-click on the header of the ListView. ``` private void ListView_MouseDoubleClick(object sender, MouseButtonEventArgs e) { DependencyObject obj = (DependencyObject)e.OriginalSource; while (obj != null && obj != myListView) { if (obj.GetType() == typeof(ListViewItem)) { // Do something here MessageBox.Show("A ListViewItem was double clicked!"); break; } obj = VisualTreeHelper.GetParent(obj); } } ```
``` <ListView.ItemContainerStyle> <Style TargetType="ListViewItem"> <EventSetter Event="MouseDoubleClick" Handler="listViewItem_MouseDoubleClick" /> </Style> </ListView.ItemContainerStyle> ``` The only difficulty then is if you are interested in the underlying object the listviewitem maps to e.g. ``` private void listViewItem_MouseDoubleClick(object sender, MouseButtonEventArgs e) { ListViewItem item = sender as ListViewItem; object obj = item.Content; } ```
Get the item doubleclick event of listview
[ "", "c#", ".net", "wpf", "wpf-controls", "" ]
According to the cakebook [section on the Auth component](http://book.cakephp.org/view/172/Authentication), I can implement simple authentication by using the following Users controller: ``` class UsersController extends AppController { var $name = 'Users'; var $components = array('Auth'); // Not necessary if declared in your app controller /** * The AuthComponent provides the needed functionality * for login, so you can leave this function blank. */ function login() { } function logout() { $this->redirect($this->Auth->logout()); } } ``` I would like to be able to something like the following into my view: ``` <?php $username = $auth->user('username'); echo "Welcome " . $username; ?> ``` Is there a simple way to do this, or do I need to overwrite the login function and store the username to the session? **Update** [Alexander's answer](https://stackoverflow.com/questions/929186/is-there-an-easy-way-to-get-authcomponent-user-data-from-a-view-in-cakephp/930226#930226) is exactly what I wanted. However, I will add the following in case someone else gets confused like I did. It took me a while to understand that if you change the model that Auth uses (for example, you might have a 'persons' table instead of 'users'), then you need to use something like: ``` $persondata = $session->read('Auth.Person'); ```
Actually this information is easily available from the session. You use the session helper to grab it. I believe the correct syntax is : ``` $userdata = $session->read('Auth.User'); $username = $session->read('Auth.User.username'); ``` EDIT: For CakePHP 2.X and on the syntax is: ``` $userdata = $this->session->read('Auth.User'); $username = $this->session->read('Auth.User.username'); ```
Check out AuthComponent-Methods in the CakePHP manual.... You can access an user info after a user has logged in from the session via $this->Auth->User(). So if you want the username, just use this in the controller. $this->set('username', $this->Auth->User('username')); You can now use $username in the view.
Is there an easy way to get AuthComponent user data from a view in CakePHP?
[ "", "php", "authentication", "cakephp", "" ]
I want to convert a binary file into an array of ASCII characters. How can I do that? Thank you.
It depends on what you want to do with it. ASCII is supposed to be 7 bits (0-127 are well defined, the other characters are codepage dependent). So plain ASCII encoding can lead to nasty surprises (among which are non-printable characters such as nulls...) If you want to have something printable out of your byte array, you should not convert them with an ASCII encoding. You'd better encode it in Base64, which is a safe (albeit not too optimal size-wise) way to encode binary in strings. To encode your bytes in Base64, you can just go with: ``` string result = System.Convert.ToBase64String(yourByteArray); ```
Check out BASE64 or UUEncoding. I assume you're wanting to use only printable characters from the 256-char ASCII set. BASE64 uses only 64 characters (sometimes this is used when sending binary via email for example). This causes the output to grow in size -- something you have to consider in your situation.
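The Base64 idea is not C#-specific; a quick Python sketch of the round trip (the sample bytes are chosen arbitrarily):

```python
import base64

data = bytes([0, 255, 16, 32, 128])               # arbitrary binary, incl. non-printables
encoded = base64.b64encode(data).decode("ascii")  # printable ASCII, roughly 4/3 the size
decoded = base64.b64decode(encoded)               # round-trips back to the original bytes

print(encoded)
assert decoded == data
assert all(32 <= ord(c) < 127 for c in encoded)   # every output char is printable ASCII
```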
How can I convert a binary file into a set of ASCII characters
[ "", "c#", "encoding", "io", "ascii", "" ]
Has anyone been able to install ffmpeg-php on OS X? I was able to install ffmpeg via MacPorts, but ffmpeg-php won't install. Problems: it cannot find the shared directory because MacPorts installs it to /opt/local/include/ and /opt/local/lib, while ffmpeg-php looks in /usr/local/include/ffmpeg. phpize returns a whole bunch of warnings, not sure if this is a problem or not. When I get all the headers/libraries in the right locations, make is still throwing fits. Has anyone actually done this? Or... is there an alternative to ffmpeg? I want to be able to create pictures of video frames on the fly.
My "Formula" for ffmpeg-php is accepted. Install [homebrew](https://brew.sh/), and run: ``` brew install ffmpeg-php ``` And you're good to go.
I haven't been able to compile it either although ffmpeg can still be useful without that extension. Here's a simple example that grabs a jpg every 5th second: ``` $ffmpeg = '/opt/local/bin/ffmpeg'; $vid = 'test.mpg'; $cmd = $ffmpeg.' -i '.$vid.' -r 0.2 -sameq -f image2 '.$vid.'.%03d.jpg'; exec($cmd); ```
How to install ffmpeg-php Mac OS X leopard
[ "", "php", "macos", "ffmpeg", "" ]
How do you select all fields of two joined tables, without having conflicts with the common field? Suppose I have two tables, `Products` and `Services`. I would like to make a query like this: ``` SELECT Products.*, Services.* FROM Products INNER JOIN Services ON Products.IdService = Services.IdService ``` The problem with this query is that `IdService` will appear twice and lead to a bunch of problems. The alternative I found so far is to discriminate every field from `Products` except the `IdService` one. But this way I'll have to update the query every time I add a new field to `Products`. Is there a better way to do this?
[What are the most common SQL anti-patterns?](https://stackoverflow.com/questions/346659/what-are-the-most-common-sql-anti-patterns/346679#346679) You've hit anti-pattern #1. The better way is to provide a fieldlist. One way to get a quick field list is to ``` sp_help tablename ``` And if you want to create a view from this query - using select \* gets you in more trouble. SQL Server captures the column list at the time the view is created. If you edit the underlying tables and don't recreate the view - you're signing up for trouble (I had a production fire of this nature - view was against tables in a different database though).
You should NEVER have SELECT \* in production code (well, almost never, but the times where it is justified can be easily counted).
'SELECT *' from inner joined tables
[ "", "sql", "sql-server", "inner-join", "" ]
I just realized that I need to synchronize a significant amount of data collection code in an aspect, but performance is a real concern. If performance degrades too much my tool will be thrown out. I will be writing ints and longs individually and to various arrays, ArrayLists and Maps. There will be multiple threads of an application that will make function calls that will be picked up by my aspect. What kind of things should I look out for that will negatively affect performance? What code patterns are more efficient? In particular I have a method that calls many other data recording methods: ``` void foo() { bar(); woz(); ... } ``` The methods mostly do adding and incrementing of aspect fields ``` void bar() { f++; // f is a field of the aspect for (int i = 0; i < ary.length; i++) { // get some values from aspect point cut if (some condiction) { ary[i] += someValue; // ary a field of the aspect } } } ``` Should I synchronize foo, or bar, woz and others individually, or should I move all the code in bar, woz, etc into foo and just synchronize it? Should I synchronize on `this`, on a specifically created synchronization object: ``` private final Object syncObject = new Object(); ``` (see [this](https://stackoverflow.com/questions/416183/in-java-critical-sections-what-should-i-synchronize-on) post), or on individual data elements within the methods: ``` ArrayList<Integer> a = new ArrayList<Integer>(); void bar() { synchronized (a) { // synchronized code } } ```
Concurrency is extremely tricky. It's very easy to get it wrong, and very hard to get right. I wouldn't be too terribly worried about performance at this point. My first and foremost concern would be to get the concurrent code to work safely (no deadlocks or race conditions). But on the issue of performance: when in doubt, profile. It's hard to say just how different synchronization schemes will affect performance. It's even harder for us to give you suggestions. We'd need to see a lot more of your code and gain a much deeper understanding of what the application does to give you a truly useful answer. In contrast, profiling gives you hard evidence as to whether one approach is slower than another. It can even help you identify where the slowdown is. There are a lot of great profiling tools for Java these days. The Netbeans and Eclipse profilers are good. Also, I'd recommend staying away from raw synchronization altogether. Try using some of the classes in the `java.util.concurrent` package. They make writing concurrent code much easier, and much less error prone. Also, I recommend you read [Java Concurrency in Practice](http://www.javaconcurrencyinpractice.com/) by Brian Goetz, et al. It's very well written and covers a lot of ground.
Rule of thumb is not to synchronize on `this` - most of the time it is a performance hit - all methods are synchronized on one object. Consider using locks - they're a very nice abstraction with many fine features, like trying to lock for a time period and then giving up: ``` if(commandsLock.tryLock(100, TimeUnit.MILLISECONDS)){ try { //Do something }finally{ commandsLock.unlock(); } }else{ //couldn't acquire lock for 100 ms } ``` I second the opinion on using `java.util.concurrent`. I'd make two levels of synchronization * synchronize collection access (if it is needed) * synchronize field access ## Collection access If your collections are `read-only`, ie no elements get removed or inserted (but elements may change), I would say that you should use synchronized collections (but this may not be needed...) and don't synchronize iterations: *Read only:* ``` for (int i = 0; i < ary.length; i++) { // get some values from aspect point cut if (some condition) { ary[i] += someValue; // ary a field of the aspect } } ``` and ary is an instance obtained by `Collections.synchronizedList`. *Read-write* ``` synchronized(ary){ for (int i = 0; i < ary.length; i++) { // get some values from aspect point cut if (some condition) { ary[i] += someValue; // ary a field of the aspect } } } ``` Or use some concurrent collections (like [CopyOnWriteArrayList](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/CopyOnWriteArrayList.html)), which are inherently thread safe. The main difference is that in the first, read-only version any number of threads may iterate over these collections, while in the second only one at a time may iterate. In both cases only one thread at a time should increment any given field. ## Field access Synchronize incrementations on fields separately from synchronizing iterations, like: ``` Integer foo = ary.get(ii); synchronized(foo){ ary.set(ii, foo + 1); // write the incremented value back } ``` ## Get rid of synchronization 1. 
Use concurrent collections (from `java.util.concurrent` - not from `Collections.synchronizedXXX`; the latter still needs synchronizing on traversal). 2. Use `java.util.concurrent.atomic`, which enables you to atomically increment fields. ### Something you should watch: [Java memory model](https://www.youtube.com/watch?v=WTVooKLLVT8) - it's a talk that gives a very nice understanding of how synchronization and data alignment in Java work.
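The try-lock-with-a-timeout idiom above isn't Java-specific; purely as a point of comparison (nothing to do with AspectJ), the same pattern in Python uses `Lock.acquire`, which takes a timeout directly:

```python
import threading

commands_lock = threading.Lock()
counters = {}

def try_increment(key, timeout=0.1):
    # Mirror Java's tryLock(100, TimeUnit.MILLISECONDS): wait up to
    # `timeout` seconds for the lock, then give up and report failure.
    if commands_lock.acquire(timeout=timeout):
        try:
            counters[key] = counters.get(key, 0) + 1
            return True
        finally:
            commands_lock.release()
    return False  # couldn't acquire the lock in time

print(try_increment("f"))  # True when the lock is uncontended
```

The try/finally around the critical section plays the same role as Java's `finally { commandsLock.unlock(); }`.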
Java synchronization and performance in an aspect
[ "", "java", "performance", "synchronization", "aspectj", "" ]
Is it possible to create an environment to safely run arbitrary Python scripts under Linux? Those scripts are supposed to be received from untrusted people and may be too large to check them manually. A very brute-force solution is to create a virtual machine and restore its initial state after every launch of an untrusted script. (Too expensive.) I wonder if it's possible to restrict Python from accessing the file system and interacting with other programs and so on.
Consider using a chroot jail. Not only is this very secure, well-supported and tested but it also applies to external applications you run from python.
There are 4 things you may try: * As you already mentioned, using a virtual machine or some other form of virtualisation (perhaps Solaris zones are lightweight enough?). If the script breaks the OS there, then you don't care. * Using chroot, which puts a shell session into a virtual root directory, separate from the main OS root directory. * Using systrace. Think of this as a firewall for system calls. * Using a "jail", which builds upon systrace, giving each jail its own process table etc. Systrace has been compromised recently, so be aware of that.
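None of the above comes from Python itself, but one small building block the standard library does offer on Linux is the `resource` module: you can cap the CPU time and memory of a child process before handing it the untrusted script. This is a rough sketch of that single layer only — it is not a complete sandbox by itself (no filesystem or network isolation), and `preexec_fn` is Unix-only:

```python
import resource
import subprocess
import sys

def run_limited(code, cpu_seconds=2, mem_bytes=512 * 1024 * 1024):
    """Run untrusted Python source in a child process with rlimits applied."""
    def limit():
        # Runs in the child between fork() and exec(): cap CPU time
        # and total address space before the untrusted code starts.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limit,
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 5,  # wall-clock safety net for sleeping scripts
    )

result = run_limited("print(2 + 2)")
print(result.stdout.strip())
```

A runaway loop in the child gets killed by the kernel (SIGXCPU) rather than hanging your tool; you would still want chroot/jail on top of this for file-system isolation.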
Safe Python Environment in Linux
[ "", "python", "linux", "runtime", "sandbox", "restriction", "" ]
I have an array that has keys and values. For example: ``` Array ( [name] => aalaap [age] => 29 [location] => mumbai ) ``` I want to convert the keys from this into values, but I want the values to appear right after the keys. For example: ``` Array ( [0] => name [1] => aalaap [2] => age [3] => 29 [4] => location [5] => mumbai ) ``` I can easily write an iteration function that will do this... for example: ``` function array_flatten($arr) { foreach ($arr as $arrkey => $arrval) { $arr_new[] = $arrkey; $arr_new[] = $arrval; } return $arr_new; } ``` ...but I'm trying to find out if there's any way this can be accomplished using `array_combine`, `array_keys`, `array_values` and/or `array_merge`, preferably in one, so I don't need to use a custom function. Is there?
Your own solution is probably the cleanest solution, so converting it to a **"one-liner"**: ``` $array = array('name' => 'aalaap','age' => 29, 'location' => 'mumbai'); $answer = array(); array_walk($array, create_function('$val,$key', 'global $answer; $answer[]=$key; $answer[]=$val;')); var_dump($answer); ``` This avoids unnecessary and expensive array copies or sorting. **Alternatively, lose the global:** ``` array_walk($array, create_function('$val,$key,$result', '$result[]=$key; $result[]=$val;'), &$answer); ```
PHP 5.3+ version of Just Jules' answer, and a bit more readable: ``` array_walk($array, function($val, $key) use (&$answer) { $answer[] = $key; $answer[] = $val; }); ```
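As an aside, the same key-then-value flattening is a one-liner in some other languages; for comparison only (not PHP), a Python sketch of the identical transformation:

```python
from itertools import chain

record = {"name": "aalaap", "age": 29, "location": "mumbai"}

# Interleave keys and values: each (key, value) pair is chained in order,
# relying on dicts preserving insertion order (Python 3.7+).
flat = list(chain.from_iterable(record.items()))
print(flat)  # ['name', 'aalaap', 'age', 29, 'location', 'mumbai']
```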
How do I flatten an associative array into an array with only values in PHP?
[ "", "php", "arrays", "function", "" ]
I need to do something like this: ``` <input type="button" value="click" id="mybtn" onclick="myfunction('/myController/myAction', 'myfuncionOnOK('/myController2/myAction2', 'myParameter2');', 'myfuncionOnCancel('/myController3/myAction3', 'myParameter3');');" /> ``` The context of this question is that on the onClick I need to call a javascript function that will make an ajax call to the url I provide. It will open a modal div with OK and Cancel buttons. Up to here it's fine. But then I also need to tell the javascript function another function, with other parameters and urls, to be called when the OK button is clicked in the new div. And another one for the Cancel button. The problem is I can't manage to pass the second argument properly, as it doesn't escape properly and gives me javascript errors. I have done a search for other similar Javascript questions on SO, but none of them seem to cover what I need to do. Does anybody know how I could pass this kind of string parameters to the javascript function? Or maybe there's another, better way of passing these things that I didn't think of. Thanks in advance
One way would be to just escape the quotes properly: ``` <input type="button" value="click" id="mybtn" onclick="myfunction('/myController/myAction', 'myfuncionOnOK(\'/myController2/myAction2\', \'myParameter2\');', 'myfuncionOnCancel(\'/myController3/myAction3\', \'myParameter3\');');"> ``` In this case, though, I think a better way to handle this would be to wrap the two handlers in anonymous functions: ``` <input type="button" value="click" id="mybtn" onclick="myfunction('/myController/myAction', function() { myfuncionOnOK('/myController2/myAction2', 'myParameter2'); }, function() { myfuncionOnCancel('/myController3/myAction3', 'myParameter3'); });"> ``` And then, you could call them from within `myfunction` like this: ``` function myfunction(url, onOK, onCancel) { // Do whatever myfunction would normally do... if (okClicked) { onOK(); } if (cancelClicked) { onCancel(); } } ``` That's probably not what `myfunction` would actually look like, but you get the general idea. The point is, if you use anonymous functions, you have a lot more flexibility, and you keep your code a lot cleaner as well.
Try this: ``` onclick="myfunction( '/myController/myAction', function(){myfuncionOnOK('/myController2/myAction2','myParameter2');}, function(){myfuncionOnCancel('/myController3/myAction3','myParameter3');} );" ``` Then you just need to call these two functions passed to `myfunction`: ``` function myfunction(url, f1, f2) { // … f1(); f2(); } ```
Javascript: How to pass a function with string parameters as a parameter to another function
[ "", "javascript", "" ]
I am parsing through an uploaded excel files (xlsx) in asp.net with c#. I am using the following code (simplified): ``` string connString = string.Format("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + fileLocation + ";Extended Properties=\"Excel 12.0 Xml;HDR=YES\";"); OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", connString); DataSet ds = new DataSet(); adapter.Fill(ds); adapter.Dispose(); DataTable dt = ds.Tables[0]; var rows = from p in dt.AsEnumerable() select new { desc = p[2] }; ``` This works perfectly, *but* if there is anything longer than 255 characters in the cell, it will get cut off. Any idea what I am doing wrong? Thank you. EDIT: When viewing the excel sheet, it shows much more than 255 characters, so I don't believe the sheet itself is limited.
Just from a quick Googling of the subject, it appears that that's a limit of Excel. **EDIT**: [Possible workaround (unfortunately in VB)](http://support.microsoft.com/kb/213841)
## The Solution! I've been battling this today as well. I finally got it to work by modifying some registry keys before parsing the Excel spreadsheet. You must update this registry key before parsing the Excel spreadsheet: ``` // Excel 2010 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Excel\ or HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Excel\ // Excel 2007 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\12.0\Access Connectivity Engine\Engines\Excel\ // Excel 2003 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel\ ``` Change `TypeGuessRows` to `0` and `ImportMixedTypes` to `Text` under this key. You'll also need to update your connection string to include `IMEX=1` in the extended properties: ``` string connString = string.Format("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + fileLocation + ";Extended Properties=\"Excel 12.0 Xml;HDR=YES;IMEX=1\";"); ``` --- ## References <http://blogs.vertigo.com/personal/aanttila/Blog/archive/2008/03/28/excel-and-csv-reference.aspx> <http://msdn.microsoft.com/en-us/library/ms141683.aspx> > ...characters may be truncated. **To import > data from a memo column without > truncation, you must make sure that > the memo column in at least one of the > sampled rows contains a value longer > than 255 characters, or you must > increase the number of rows sampled by > the driver to include such a row.** You > can increase the number of rows > sampled by increasing the value of > TypeGuessRows under the > HKEY\_LOCAL\_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel > registry key....
Parsing an Excel file in C#, the cells seem to get cut off at 255 characters... how do I stop that?
[ "", "c#", "linq", "excel", "excel-2007", "xlsx", "" ]
I am writing some small games in Python with Pygame & Pyglet as hobby projects. A class for 2D arrays would be very handy. I use py2exe to send the games to relatives/friends, and numpy is just too big and most of its features are unnecessary for my requirements. Could you suggest a Python module/recipe I could use for this. -- Chirag [Edit]: A list of lists would be usable, as mentioned below by MatrixFrog and zvoase. But it is pretty primitive. A class with methods to insert/delete rows and columns as well as to rotate/flip the array would make it very easy and reusable too. dicts are good for sparse arrays only. Thank you for your ideas.
I wrote the class. Don't know if it is good or redundant, but... Posted it here <http://bitbucket.org/pieceofpeace/container2d/>
How about using a defaultdict? ``` >>> import collections >>> Matrix = lambda: collections.defaultdict(int) >>> m = Matrix() >>> m[3,2] = 6 >>> print m[3,4] # deliberate typo :-) 0 >>> m[3,2] += 4 >>> print m[3,2] 10 >>> print m defaultdict(<type 'int'>, {(3, 2): 10, (3, 4): 0}) ``` As the underlying dict uses tuples as keys, this supports 1D, 2D, 3D, ... matrices.
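If you do end up rolling your own wrapper class as the question's edit suggests, the core is small. A minimal sketch — the class and method names here are made up, and only row insertion and clockwise rotation are shown:

```python
class Grid2D:
    """A tiny list-of-lists wrapper for small games; no NumPy required."""

    def __init__(self, width, height, fill=0):
        self.rows = [[fill] * width for _ in range(height)]

    def get(self, x, y):
        return self.rows[y][x]

    def set(self, x, y, value):
        self.rows[y][x] = value

    def insert_row(self, index, fill=0):
        self.rows.insert(index, [fill] * len(self.rows[0]))

    def rotated_cw(self):
        # Reverse the rows, then transpose: a 90-degree clockwise rotation.
        new = Grid2D(1, 1)
        new.rows = [list(r) for r in zip(*self.rows[::-1])]
        return new

g = Grid2D(3, 2)
g.set(2, 0, 5)           # top-right cell
r = g.rotated_cw()
print(r.get(1, 2))       # 5: top-right moved to bottom-right
```

Flipping, column insertion, etc. follow the same pattern of manipulating `self.rows`.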
Is there a Python module/recipe (not numpy) for 2d arrays for small games
[ "", "python", "arrays", "multidimensional-array", "" ]
How to recursively list all the files in a directory and child directories in C#?
[This article](http://support.microsoft.com/kb/303974) covers all you need. Except as opposed to searching the files and comparing names, just print out the names. It can be modified like so: ``` static void DirSearch(string sDir) { try { foreach (string d in Directory.GetDirectories(sDir)) { foreach (string f in Directory.GetFiles(d)) { Console.WriteLine(f); } DirSearch(d); } } catch (System.Exception excpt) { Console.WriteLine(excpt.Message); } } ``` **Added by barlop** GONeale mentions that the above doesn't list the files in the current directory and suggests putting the file listing part outside the part that gets directories. The following would do that. It also includes a Writeline line that you can uncomment, which helps trace where you are in the recursion and makes it easier to see how the calls nest. ``` DirSearch_ex3("c:\\aaa"); static void DirSearch_ex3(string sDir) { //Console.WriteLine("DirSearch..(" + sDir + ")"); try { Console.WriteLine(sDir); foreach (string f in Directory.GetFiles(sDir)) { Console.WriteLine(f); } foreach (string d in Directory.GetDirectories(sDir)) { DirSearch_ex3(d); } } catch (System.Exception excpt) { Console.WriteLine(excpt.Message); } } ```
Note that in .NET 4.0 there are (supposedly) iterator-based (rather than array-based) file functions built in: ``` foreach (string file in Directory.EnumerateFiles(path, "*.*", SearchOption.AllDirectories)) { Console.WriteLine(file); } ``` At the moment I'd use something like below; the inbuilt recursive method breaks too easily if you don't have access to a single sub-dir...; the `Queue<string>` usage avoids too much call-stack recursion, and the iterator block avoids us having a huge array. ``` static void Main() { foreach (string file in GetFiles(SOME_PATH)) { Console.WriteLine(file); } } static IEnumerable<string> GetFiles(string path) { Queue<string> queue = new Queue<string>(); queue.Enqueue(path); while (queue.Count > 0) { path = queue.Dequeue(); try { foreach (string subDir in Directory.GetDirectories(path)) { queue.Enqueue(subDir); } } catch(Exception ex) { Console.Error.WriteLine(ex); } string[] files = null; try { files = Directory.GetFiles(path); } catch (Exception ex) { Console.Error.WriteLine(ex); } if (files != null) { for(int i = 0 ; i < files.Length ; i++) { yield return files[i]; } } } } ```
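The queue-based, non-recursive idea above translates directly to other languages; for comparison only, a Python sketch of the same traversal (in practice `os.walk` already does this for you):

```python
import os
from collections import deque

def iter_files(root):
    """Yield file paths under root, breadth-first, skipping unreadable dirs."""
    queue = deque([root])
    while queue:
        path = queue.popleft()
        try:
            entries = os.listdir(path)
        except OSError:
            continue  # no permission, broken mount, etc. -- keep going
        for name in sorted(entries):
            full = os.path.join(path, name)
            if os.path.isdir(full):
                queue.append(full)  # directories go back on the queue
            else:
                yield full          # files are yielded lazily
```

As in the C# version, the explicit queue avoids deep call stacks, and the generator avoids building one huge list up front.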
How to recursively list all the files in a directory in C#?
[ "", "c#", ".net", "" ]
I'm writing an application that utilizes JavaScript timeouts and intervals to update the page. Is there a way to see how many intervals are setup? I want to make sure that I'm not accidentally going to kill the browser by having hundreds of intervals setup. Is this even an issue?
I don't think there is a way to enumerate active timers, but you could override `window.setTimeout` and `window.clearTimeout` and replace them with your own implementations which do some tracking and then call the originals. ``` window.originalSetTimeout = window.setTimeout; window.originalClearTimeout = window.clearTimeout; window.activeTimers = 0; window.setTimeout = function(func, delay) { window.activeTimers++; return window.originalSetTimeout(func, delay); }; window.clearTimeout = function(timerID) { window.activeTimers--; window.originalClearTimeout(timerID); }; ``` Of course, you might not always call `clearTimeout`, but this would at least give you some way to track what is happening at runtime.
I made a Chrome DevTools extension that shows all intervals. Cleared ones are greyed out. ![Timers Chrome Devtool extension](https://i.stack.imgur.com/O0ZRi.png) **[setInterval-sniffer](https://github.com/NV/setInterval-sniffer)**
Viewing all the timeouts/intervals in javascript?
[ "", "javascript", "timeout", "setinterval", "" ]
In java, if a class implements Serializable but is abstract, should it have a serialVersionUID long declared, or do the subclasses only require that? In this case it is indeed the intention that all the sub classes deal with serialization as the purpose of the type is to be used in RMI calls.
The serialVersionUID is provided to determine compatibility between a deseralized object and the current version of the class. As such, it isn't really necessary in the first version of a class, or in this case, in an abstract base class. You'll never have an instance of that abstract class to serialize/deserialize, so it doesn't need a serialVersionUID. (Of course, it does generate a compiler warning, which you want to get rid of, right?) It turns out james' comment is correct. The serialVersionUID of an abstract base class *does* get propagated to subclasses. In light of that, you *do* need the serialVersionUID in your base class. The code to test: ``` import java.io.Serializable; public abstract class Base implements Serializable { private int x = 0; private int y = 0; private static final long serialVersionUID = 1L; public String toString() { return "Base X: " + x + ", Base Y: " + y; } } import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; public class Sub extends Base { private int z = 0; private static final long serialVersionUID = 1000L; public String toString() { return super.toString() + ", Sub Z: " + z; } public static void main(String[] args) { Sub s1 = new Sub(); System.out.println( s1.toString() ); // Serialize the object and save it to a file try { FileOutputStream fout = new FileOutputStream("object.dat"); ObjectOutputStream oos = new ObjectOutputStream(fout); oos.writeObject( s1 ); oos.close(); } catch (Exception e) { e.printStackTrace(); } Sub s2 = null; // Load the file and deserialize the object try { FileInputStream fin = new FileInputStream("object.dat"); ObjectInputStream ois = new ObjectInputStream(fin); s2 = (Sub) ois.readObject(); ois.close(); } catch (Exception e) { e.printStackTrace(); } System.out.println( s2.toString() ); } } ``` Run the main in Sub once to get it to create and save an object. 
Then change the serialVersionUID in the Base class, comment out the lines in main that save the object (so it doesn't save it again, you just want to load the old one), and run it again. This will result in an exception ``` java.io.InvalidClassException: Base; local class incompatible: stream classdesc serialVersionUID = 1, local class serialVersionUID = 2 ```
Yes, in general, for the same reason that any other class needs a serial id - to avoid one being generated for it. Basically any class (not interface) that implements serializable should define serial version id or you risk de-serialization errors when the same .class compile is not in the server and client JVMs. There are other options if you are trying to do something fancy. I'm not sure what you mean by "it is the intention of the sub classes...". Are you going to write custom serialization methods (eg. writeObject, readObject)? If so there are other options for dealing with a super class. see: <http://java.sun.com/javase/6/docs/api/java/io/Serializable.html> HTH Tom
Should an abstract class have a serialVersionUID
[ "", "java", "serialization", "abstract-class", "serialversionuid", "" ]
Generally speaking, the SQL queries that I write return unformatted data and I leave it to the presentation layer, a web page or a windows app, to format the data as required. Other people that I work with, including my boss, will insist that it is more efficient to have the database do it. I'm not sure that I buy that, and I believe that even if there were a measurable performance gain by having the database do it, there are more compelling reasons to generally avoid this. For example, I will place my queries in a Data Access layer with the intent of potentially reusing the queries whenever possible. Given this, I'd argue that the queries are more likely to be reusable if the data remains in its native types rather than being converted to strings with formatting functions applied, for example, formatting a date column to a DD-MMM-YYYY format for display. Sure, if the SQL was returning the dates as formatted strings, you could reverse the process to revert the value back to a date data type, but this seems awkward, for lack of a better word. Furthermore, when it comes to formatting other data, for example, a machine serial number made up of a prefix, base and suffix with separating dashes and leading zeros removed in each sub field, you risk the possibility that you may not be able to correctly revert back to the original serial number when going in the other direction. Maybe this is a bad example, but I hope you see the direction I am going with this... To take things a step further, I see people write VERY complex SQL because they are essentially writing what I would call presentation logic into a query instead of returning simple data and then applying this presentation logic in the presentation layer. In my mind, this results in very complex, difficult-to-maintain and more brittle SQL that is less adaptable to change. Take the following real-life example of what I found in our system and tell me what you think. 
The rationale I was given for this approach was that this made the web app very simple to render the page, as it used the following 1-line snippet of classic ADO logic in a Classic ASP web app to process the rows returned: ``` oRS.GetString ( , , "</td>" & vbCrLf & "<td style=""font-size:x-small"" nowrap>" ,"</td>" & vbCrLf & "</tr>" & vbCrLf & "<tr>" & vbCrLf & _ "<td style=""font-size:x-small"" nowrap>" ,"&nbsp;" ) & "</td>" & vbCrLf & "</tr>" & vbCrLf & _ ``` Here's the SQL itself. While I appreciate the author's ability to write a complex SQL, I feel like this is a maintenance nightmare. Am I nuts? The SQL is returning a list of programs that are currently running against our database and the status of each: Because the SQL did not display with CR/LFs when I pasted here, I decided to put the SQL on an otherwise empty personal Google site. Please feel free to comment. Thanks. By the way-This SQL was actually constructed using VB Script nested WITHIN a classic ASP page, not calling a stored procedure, so you have the additional complexity of embedded concatenations and quoted markup, if you know what I mean, not to mention lack of formatting. The first thing I did when I was asked to help to debug the SQL was to add a debug.print of the SQL output and throw it through a SQL formatter that I just found. 
Some of the formatting was lost in pasting at the following link: Edit(Andomar): copied inline: (external link removed, thanks-Chad) ``` SELECT Substring(Datename("dw",start_datetime),1,3) + ', ' + Cast(start_datetime AS VARCHAR) "Start Time (UTC/GMT)" ,program_name "Program Name" ,run_sequence "Run Sequence" ,CASE WHEN batchno = 0 THEN Char(160) WHEN batchno = NULL THEN Char(160) ELSE Cast(batchno AS VARCHAR) END "Batch #" /* ,Replace(Replace(detail_log ,'K:\' ,'file://servernamehere/DiskVolK/') ,'\' ,'/') "log"*/ /* */ ,Cast('<a href="GOIS_ViewLog.asp?Program_Name=' AS VARCHAR(99)) + Cast(program_name AS VARCHAR) + Cast('&Run_Sequence=' AS VARCHAR) + Cast(run_sequence AS VARCHAR) + Cast('&Page=1' AS VARCHAR) + '' + Cast('">' + CASE WHEN end_datetime >= start_datetime THEN CASE WHEN end_datetime <> 'Jan 1 1900 2:00 PM' THEN CASE WHEN (success_code = 10 OR success_code = 0) AND exit_code = 10 THEN CASE WHEN errorcount = 0 THEN 'Completed Successfully' ELSE 'Completed with Errors' END WHEN success_code = 100 AND exit_code = 10 THEN 'Completed with Errors' ELSE CASE WHEN program_name <> 'FileDepCheck' THEN 'Failed' ELSE 'File not found' END END ELSE CASE WHEN success_code = 10 AND exit_code = 0 THEN 'Failed; Entries for Input File Missing' ELSE 'Aborted' END END ELSE CASE WHEN ((Cast(Datediff(mi,start_datetime,Getdate()) AS INT) <= 240) OR ((SELECT Count(* ) FROM MASTER.dbo.sysprocesses a(nolock) INNER JOIN gcsdwdb.dbo.update_log b(nolock) ON a.program_name = b.program_name WHERE a.program_name = update_log.program_name AND (Abs(Datediff(n,b.start_datetime,a.login_time))) < 1) > 0)) THEN 'Processing...' 
ELSE 'Aborted without end date' END END + '</a>' AS VARCHAR) "Status / Log" ,Cast('<a href="' AS VARCHAR) + Replace(Replace(detail_log,'K:\','file://servernamehere/DiskVolK/'), '\','/') + Cast('" title="Click to view Detail log text file"' AS VARCHAR(99)) + Cast('style="font-family:comic sans ms; font-size:12; color:blue"><img src="images\DetailLog.bmp" border="0"></a>' AS VARCHAR(999)) + Char(160) + Cast('<a href="' AS VARCHAR) + Replace(Replace(summary_log,'K:\','file://servernamehere/DiskVolK/'), '\','/') + Cast('" title="Click to view Summary log text file"' AS VARCHAR(99)) + Cast('style="font-family:comic sans ms; font-size:12; color:blue"><img src="images\SummaryLog.bmp" border="0"></a>' AS VARCHAR(999)) "Text Logs" ,errorcount "Error Count" ,warningcount "Warning Count" ,(totmsgcount - errorcount - warningcount) "Information Message Count" ,CASE WHEN end_datetime > start_datetime THEN CASE WHEN Cast(Datepart("hh",(end_datetime - start_datetime)) AS INT) > 0 THEN Cast(Datepart("hh",(end_datetime - start_datetime)) AS VARCHAR) + ' hr ' ELSE ' ' END + CASE WHEN Cast(Datepart("mi",(end_datetime - start_datetime)) AS INT) > 0 THEN Cast(Datepart("mi",(end_datetime - start_datetime)) AS VARCHAR) + ' min ' ELSE ' ' END + CASE WHEN Cast(Datepart("ss",(end_datetime - start_datetime)) AS INT) > 0 THEN Cast(Datepart("ss",(end_datetime - start_datetime)) AS VARCHAR) + ' sec ' ELSE ' ' END ELSE CASE WHEN end_datetime = start_datetime THEN '< 1 sec' ELSE CASE WHEN ((Cast(Datediff(mi,start_datetime,Getdate()) AS INT) <= 240) OR ((SELECT Count(* ) FROM MASTER.dbo.sysprocesses a(nolock) INNER JOIN gcsdwdb.dbo.update_log b(nolock) ON a.program_name = b.program_name WHERE a.program_name = update_log.program_name AND (Abs(Datediff(n,b.start_datetime,a.login_time))) < 1) > 0)) THEN 'Running ' + Cast(Datediff(mi,start_datetime,Getdate()) AS VARCHAR) + ' min' ELSE '&nbsp;' END END END "Elapsed Time" /* ,end_datetime "End Time (UTC/GMT)" ,datepart("hh" , (end_datetime - 
start_datetime)) "Hr" ,datepart("mi" ,(end_datetime - start_datetime)) "Mins" ,datepart("ss" ,(end_datetime - start_datetime)) "Sec" ,datepart("ms" ,(end_datetime - start_datetime)) "mSecs" ,datepart("dw" ,start_datetime) "dp" ,case when datepart("dw" ,start_datetime) = 6 then ' Fri' when datepart("dw" ,start_datetime) = 5 then ' Thu' else '1' end */ ,totalrows "Total Rows" ,inserted "Rows Inserted" ,updated "Rows Updated" /* ,success_code "succ" ,exit_code "exit" */ FROM update_log WHERE start_datetime >= '5/29/2009 16:15' ORDER BY start_datetime DESC ```
The answer is obviously "just retrieve output". Formatting on the SQL server has the following problems: * it increases the network traffic from the SQL server * SQL has very poor string handling functionality * SQL servers are not optimised to perform string manipulation * you are using server CPU cycles which could better be used for query processing * it may make life difficult (or impossible) for the query optimiser * you have to write many more queries to support different formatting * you may have to write different queries to support formatting on different browsers * you can't re-use queries for different purposes I'm sure there are many more.
SQL should *not* be formatting, period. It's a relational algebra for extracting (when using `SELECT`) data from the database. Getting the DBMS to format the data for you is the wrong thing to do, and that should be left to your own code (outside the DBMS). The DBMS is generally under enough load as it is without having to do your presentation work for you. It's also optimized for data retrieval, not presentation. I know DBAs that would call for my immediate execution if I tried to do something like that :-)
Should SQL format the output or just retrieve the raw data?
[ "", "sql", "" ]
Trying to programmatically add options to a SELECT drop down in IE Windows Mobile. Because this is IE Windows Mobile 5, most solutions involving getElementById do not function, so I have had to resort to more plain-vanilla HTML/JavaScript. The example below works fine in IE 6 and FF, but fails with "Object doesn't support this property or method" in Windows Mobile 5 ``` function insertBarcodes() { var val = document.form1.barcode.value; var i = document.form1.blist.length; var myNewOption = new Option(val, val); document.form1.blist.options[i] = myNewOption; updateCount(); } ``` Any ideas?
Found the answer. First I looked at the official reference source here: <http://msdn.microsoft.com/en-us/library/bb159677.aspx> I noted that there is an add method on the select object, so I tried it and it worked. Here's the working code: ``` function AddSelectOption(selectObj, text, value, isSelected){ if(selectObj != null && selectObj.options != null){ var newOpt = new Option(text, value); //create the option object newOpt.selected = !!isSelected; selectObj.add(newOpt); //it's the .add(option) method } } ``` Thanks to all
There are 4 ways (that I know of) to set the options... (hopefully one of them works for you (let us know which))

```
//option 1
var newOpt = document.createElement('option');
newOpt.innerText = 'Hello';
mySelectObject.appendChild(newOpt);

//option 2
mySelectObject.innerHTML = '<option>Hello</option>'; //KNOWN TO FAIL IN IE6,7,8 (see url below)

//option 3
mySelectObject.outerHTML = '<select><option>Hello</option></select>'; //IE Only

//option 4
var newOpt = new Option('Hello','Hello');
mySelectObject.options[index] = newOpt;
```

IE bug with setting the [.innerHTML](http://webbugtrack.blogspot.com/2007/08/bug-274-dom-methods-on-select-lists.html)
How to Add options to <SELECT>, in IE Windows Mobile 5
[ "", "javascript", "windows-mobile", "ie-mobile", "" ]
Does a free general purpose ASN.1 Decode/Dump/Inspect program exist? I have a suspect ASN.1 block which may have failed decryption, and I would like to inspect it to see if it appears valid, and if so what elements it contains.
I also used `dumpasn1` with good success for a couple of years, then I decided that looking at 200-line-long nested tags was a bit difficult to follow onscreen and wanted something more dynamic, so that I could collapse parts of the tree and stuff like that. That's what I'm trying to create with my very own [asn1js](http://lapo.it/asn1js/) client-side JavaScript ASN.1 decoder. It's also open source and uses dumpasn1's huge "known OIDs" config file. It doesn't try to detect all ASN.1 format errors, only the impossible-to-decode ones... (e.g. it won't bother to differentiate DER from BER, such as an INTEGER with extra leading zeros). Yep, this is shameless self-promotion, but I hope you can find that software useful for the problem in your question. ;-)
My favorite tool for ASN.1 viewing is Peter Gutmann's [dumpasn1](http://www.cs.auckland.ac.nz/~pgut001/dumpasn1.c). Command-line only, but very flexible and gives diagnostics in case of errors.
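For a quick sanity check before reaching for a full viewer, the outer TLV (tag-length-value) layer of DER can be walked with a few lines of code. This hypothetical Python sketch handles only definite short-form lengths, which is enough to spot a blob that is not ASN.1 at all (e.g. failed decryption producing garbage):

```python
def walk_tlv(data):
    """Yield (tag, value) pairs from a DER blob (short-form lengths only)."""
    i = 0
    while i < len(data):
        tag = data[i]
        length = data[i + 1]
        if length & 0x80:
            raise ValueError("long-form length: use a real decoder")
        value = data[i + 2 : i + 2 + length]
        if len(value) != length:
            raise ValueError("truncated value: probably not valid DER")
        yield tag, value
        i += 2 + length

# SEQUENCE { INTEGER 5 } -- a tiny hand-built DER blob for illustration.
blob = bytes([0x30, 0x03, 0x02, 0x01, 0x05])
outer = list(walk_tlv(blob))         # one SEQUENCE (tag 0x30)
inner = list(walk_tlv(outer[0][1]))  # one INTEGER (tag 0x02)
```

If `walk_tlv` raises on the very first element, the block almost certainly didn't decrypt correctly; if it parses, a tool like `dumpasn1` or asn1js will show the full structure.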
Does a free general purpose ASN.1 Decode/Dump/Inspect program exist?
[ "", "javascript", "cryptography", "utility", "asn.1", "" ]
Is it possible to insert multiple sets of values to a SQLite table in one statement? I was trying: `INSERT INTO the_table VALUES (1,2,'hi'),(2,0,'foo');` with the different ()s representing different insert sets, but I get an error.
Are there only three columns in your table? If not, you could try defining the column names you are setting, like so:

```
INSERT INTO the_table
    (column1, column2, column3)
VALUES
    (1, 2, 'hi'),
    (2, 0, 'foo')
```

This convention was introduced in SQL Server 2008 as the [Table Value Constructor](http://msdn.microsoft.com/en-us/library/dd776382.aspx). See MSDN's [INSERT page](http://msdn.microsoft.com/en-us/library/ms174335.aspx) for a look at the overall syntax. Also, the `INSERT` statement can be easily formatted for better readability.
You can do

```
INSERT INTO the_table
SELECT 1, 2, 'hi'
UNION
SELECT 2, 0, 'foo';
```
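The `UNION SELECT` workaround is easy to verify from Python's stdlib `sqlite3` module; a small sketch (newer SQLite versions also accept the multi-row `VALUES` syntax from the question directly, and from application code, parameterized `executemany` is usually the better choice anyway):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE the_table (a INTEGER, b INTEGER, c TEXT)")

# The UNION SELECT workaround: one statement, several rows.
conn.execute(
    "INSERT INTO the_table SELECT 1, 2, 'hi' UNION SELECT 2, 0, 'foo'"
)

# From application code, executemany with placeholders is the idiomatic route.
conn.executemany(
    "INSERT INTO the_table VALUES (?, ?, ?)",
    [(3, 4, 'bar'), (5, 6, 'baz')],
)

rows = conn.execute("SELECT a, b, c FROM the_table ORDER BY a").fetchall()
```

Note that `UNION` also deduplicates rows; use `UNION ALL` if two of the rows being inserted may be identical.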
SQL: Insert multiple sets of values in one statement?
[ "", "sql", "mysql", "sqlite", "" ]
I am going to ask a question that might sound weird. Is there a way to build a new class at runtime? Or at least add a new property to an existing class? I mean creating a class that doesn't exist, not an instance of an existing class. I could later on use reflection to load and use this class.
Adding a property to an existing type is not possible, but you can create a new type at runtime using Reflection.Emit. It's pretty complicated stuff, and it goes something like this:

```
AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(
    assemblyName, AssemblyBuilderAccess.Run, assemblyAttributes);
ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule("ModuleName");
TypeBuilder typeBuilder = moduleBuilder.DefineType(
    "MyNamespace.TypeName", TypeAttributes.Public);
typeBuilder.DefineDefaultConstructor(MethodAttributes.Public);

// Add a method
MethodBuilder newMethod = typeBuilder.DefineMethod("MethodName", MethodAttributes.Public);
ILGenerator ilGen = newMethod.GetILGenerator();

// Create IL code for the method
ilGen.Emit(...);
// ...

// Create the type itself
Type newType = typeBuilder.CreateType();
```

This code is just a sample. It could contain errors. You can also generate classes by compiling C# source code at runtime using System.CodeDom, but I don't know a lot about that.
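Not a C# answer, but as a point of comparison for what Reflection.Emit is doing under the hood: dynamic languages expose runtime type creation directly. In Python, for instance, the three-argument form of the built-in `type()` builds a new class from a name, base classes, and a namespace dict (the names here are invented for illustration):

```python
# Build a class at runtime: type(name, bases, namespace).
Manager = type(
    "Manager",
    (object,),
    {
        # Methods are just callables in the namespace dict.
        "greet": lambda self: f"Hello from {type(self).__name__}",
        # Class-level attribute, the rough analogue of a defined field.
        "role": "manager",
    },
)

m = Manager()
result = m.greet()
```

The class can then be discovered and used by name like any other, which is the same pattern as loading an emitted .NET type via reflection.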
Take a look at the [System.Reflection.Emit](http://msdn.microsoft.com/en-us/library/system.reflection.emit.aspx) namespace. I've never used it myself but the classes in this namespace can be used to generate IL (intermediate language).
Is there a way to build a new type during Runtime?
[ "", "c#", ".net", "vb.net", "reflection", "" ]
I am trying to write a calendar in PHP. In week view, I want my events to be listed like iCal, where simultaneous events reduce their width to half size. I am having an extremely hard time figuring this one out, though, so I hope you can help me. What I want is that if one event overlaps another, it should set `[split] => true` on both event arrays - or something in that direction (read: I am unsure whether this is the most efficient solution). Then I can check for `split == true` in the foreach loop which prints out the events. Here is an example array containing two simultaneous events:

```
$events = array(
    array(
        "id" => 21,
        "start" => 1242219600,
        "end" => 1242237600,
        "title" => "foo",
        "split" => false
    ),
    array(
        "id" => 22,
        "start" => 1242223200,
        "end" => 1242234000,
        "title" => "foo",
        "split" => false
    )
);
$events = someFunctionToOffsetEvents($events);
```

How would you solve this one?
I've had to deal with date collision issues a lot lately, and the best I've been able to come up with is:

date1.start <= date2.end and date1.end >= date2.start = collision

This simple formula will account for all of the following situations:

```
--same for all situations:
date1.start = 1/1/2009
date1.end = 1/10/2009

--Start date is in first date's range:
date2.start = 1/9/2009
date2.end = 2/10/2009

--End date is in first date's range:
date2.start = 12/10/2008
date2.end = 1/3/2009

--Start & end date are both inside first date's range:
date2.start = 1/2/2009
date2.end = 1/3/2009

--First date is inside of second date's range:
date2.start = 12/1/2008
date2.end = 2/1/2009
```

```
$date1 = array('start' => '2009-01-05', 'end' => '2009-01-10');
$date2 = array('start' => '2009-01-01', 'end' => '2009-01-04'); // end inside one
$date3 = array('start' => '2009-01-04', 'end' => '2009-01-15'); // start inside one
$date4 = array('start' => '2009-01-01', 'end' => '2009-01-15'); // one inside me
$date5 = array('start' => '2009-01-04', 'end' => '2009-01-05'); // inside one

function datesCollide($date1, $date2) {
    $start1TS = strtotime($date1['start']);
    $end1TS = strtotime($date1['end']);
    $start2TS = strtotime($date2['start']);
    $end2TS = strtotime($date2['end']);

    if ($start1TS <= $end2TS && $end1TS >= $start2TS) {
        return true;
    }
    return false;
}
```

Based on your comment, this is probably the solution you are looking for. Note that it isn't very optimized and should only be used as a starting point for figuring out a better one.
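The same inclusive-overlap test can be sketched in a few lines of Python, with the scenarios listed above as checks (variable names here are mine, not from the answer):

```python
from datetime import date

def collides(a_start, a_end, b_start, b_end):
    """True when the two inclusive date ranges share at least one point."""
    return a_start <= b_end and a_end >= b_start

d1 = (date(2009, 1, 1), date(2009, 1, 10))

# The four overlap situations listed above, plus a non-overlapping pair.
assert collides(*d1, date(2009, 1, 9), date(2009, 2, 10))   # b starts inside a
assert collides(*d1, date(2008, 12, 10), date(2009, 1, 3))  # b ends inside a
assert collides(*d1, date(2009, 1, 2), date(2009, 1, 3))    # b inside a
assert collides(*d1, date(2008, 12, 1), date(2009, 2, 1))   # a inside b
no_overlap = collides(*d1, date(2009, 1, 11), date(2009, 1, 20))
```

The Unix timestamps in the question's `$events` array work the same way: compare `start`/`end` pairs directly, no date parsing needed.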
**Never trust code from the web.**

```
$events = array(
    array(
        "id" => 21,
        "start" => 1242219600,
        "end" => 1242237600,
        "title" => "foo",
        "split" => false
    ),
    array(
        "id" => 22,
        "start" => 1242223200,
        "end" => 1242234000,
        "title" => "foo",
        "split" => false
    )
);

foreach ($events as $key => $event) {
    $events[$key]['split'] = dateCollisionCheck($event, $events);
}

function dateCollisionCheck($event, $eventList) {
    foreach ($eventList as $checkEvent) {
        if ($event['id'] != $checkEvent['id']) {
            if ($event['start'] <= $checkEvent['end'] && $event['end'] >= $checkEvent['start']) {
                return true; // return as soon as we know there is a collision
            }
        }
    }
    return false;
}
```

\*Code has not been tested
I posted the question of what happens if three events overlap, which leads to a better solution: don't put your GUI and your data in the same structure. Save your data, and then figure out how to display it.
Calendar, offset overlapping events
[ "", "php", "arrays", "prototype", "calendar", "offset", "" ]
Here is the situation: in the web application, the user selects some options and submits a form, and a PDF file is dynamically generated on the server side and offered for download. The problem: the generation of the PDF file takes quite long (up to 1 minute). Some users think nothing is happening and keep clicking the submit button again and again, increasing server load and slowing it down even further. I thought about adding some busy indicator, which would show an animated picture and a message like "Please wait, your file is being generated", which seems very easy to do. But: how do I hide this indicator when the file is ready and the "file download" dialog pops up? Otherwise, the message stays on screen even after the user has downloaded the file! Seems like a very basic requirement, but I'm completely stuck. Thanks for any suggestions!
I've had to do this for something that took a lot longer (converting a video's format). It might seem like overkill to you, but it did work.

1. Move your PDF code out of process.
2. When a job is started, create a unique job ID and save it to the session. Tell the PDF creator this ID.
3. Make it write a status file (named as the job ID) to the server so that it writes "0% complete" and updates it every few seconds (this part depends on you being able to have a progress metric in your process... if you don't, then this won't work).
4. Use JS to pull that file down. You can display the progress to the user.
5. When the progress equals "100% complete", push them to the PDF's location. It might help if you name this as the job ID too.\*

\*We had another download script that renamed it to a nice name (based on the original video filename) using the Content-Disposition header, but that's really up to you!

Edit: I should add that the waiting page was the result of clicking submit. It was a new pageload, but there's no reason you couldn't AJAX the whole process; just make sure, as others have said, to disable the submit button on the first click to stop the user going ape on it.
I agree with the other respondents that your best bet would be to just disable the submit button on the form and give the user a busy indicator; however, for files that take a long time to generate it does make sense to give the user a more realistic indication that things didn't just hang. One way that you might be able to implement a status bar, or an updating waiting message, is to have the client-side JavaScript make an initial call to the server and have the server respond back as soon as it starts generation. Then you could have the client JavaScript poll the server every *n* seconds to see if the file is complete. If you have a way of determining the file generation status, then you could give a numeric response back for the progress bar percentage, or you could just reply back with a "still working" type of message. I believe that [this blog on asp.net](http://weblogs.asp.net/rchartier/archive/2005/08/15/422635.aspx) discusses a similar setup.
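The poll-until-complete pattern from both answers can be sketched independently of any server. A hypothetical helper (in Python, though the real client would be JavaScript with `setTimeout`) repeatedly asks for a status and stops at 100%:

```python
import itertools

def poll_until_done(get_status, max_polls=100):
    """Call get_status() until it reports completion; return the status history."""
    history = []
    for _ in range(max_polls):
        status = get_status()
        history.append(status)
        if status == "100% complete":
            return history
        # A real client would sleep n seconds here between polls.
    raise TimeoutError("file generation did not finish in time")

# Fake server-side status file: advances on every read, then stays complete.
statuses = itertools.chain(
    ["0% complete", "40% complete", "80% complete"],
    itertools.repeat("100% complete"),
)
history = poll_until_done(lambda: next(statuses))
```

Once the final status arrives, the client hides the busy indicator and redirects to the generated file's URL, which sidesteps the "can't detect the download dialog" problem in the question.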
Progress/busy indicator while waiting for file download in javascript?
[ "", "javascript", "html", "" ]
I have a bunch of generic code that is used a lot, which I'd like to poke into in order to deal with a bug in a certain specific case. So I'd like to break on a set of breakpoints only if some other breakpoint has been hit. Is there a way to do this in Visual Studio 2005? I'm using C++ code. Thanks!
If the trigger logic is complex enough, sometimes I find it easier to just add a [DebugBreak](http://msdn.microsoft.com/en-us/library/ms679297(VS.85).aspx)(); call into the source.
Please remember you can disable a breakpoint - it might be easier/more efficient/cleaner (than adding debug flags to your code and recompiling, for example) to just disable the second breakpoint, wait till the first one breaks, and then enable the second one in your breakpoints window - it takes just two mouse clicks each time you debug... :)
How do you add conditional breaking based on another breakpoint being hit? Visual C++
[ "", "c++", "debugging", "visual-c++", "visual-studio-2005", "breakpoints", "" ]
Hi, is there a way to prevent a particular DLL written in C# from being opened in a tool like Reflector? I can open many DLLs and get at their code using Reflector. But when trying to open some DLLs, it shows an error message stating that "The particular dll does not contain a CLI header." How can I make a DLL like this?
Are you sure that these DLLs are managed-code DLLs? I don't think so; if they don't contain a CLI header, they aren't written in C#. As for your question: you can't prevent a managed DLL from being opened in a decompiler; all you can do is obfuscate it.
If you want to protect your .NET DLL, you could obfuscate your assembly: [Free .NET Obfuscation Tools](http://twit88.com/blog/2007/09/15/free-net-obfuscation-tools/). There are commercial ones too... The "does not contain a CLI header" message appears in Reflector because those DLLs are not managed (.NET) assemblies.
Is there a way to prevent dll from being opened in a software like reflector?
[ "", "c#", "reflector", "" ]