So with the final releases of Python 3.0 (and now 3.1), a lot of people are worried about how to upgrade without losing half their codebase to backwards incompatibility. What are people's best tips for avoiding the many pitfalls that will almost inevitably result from switching to the next generation of Python? Probably a good place to start is "use 2to3 to convert your Python 2.x code to 3.x" :-)
First, this question is very similar to [How are you planning on handling the migration to Python 3?](https://stackoverflow.com/questions/172306/how-are-you-planning-on-handling-the-migration-to-python-3). Check the answers there. There is also a [section](http://wiki.python.org/moin/PortingToPy3k) in the Python Wiki about porting applications to Python 3.x. The [release notes for Python 3.0](http://docs.python.org/3.0/whatsnew/3.0.html) contain a section about porting. I'm quoting the tips there:

> 1. (Prerequisite:) Start with excellent test coverage.
> 2. Port to Python 2.6. This should be no more work than the average port from Python 2.x to Python 2.(x+1). Make sure all your tests pass.
> 3. (Still using 2.6:) Turn on the -3 command line switch. This enables warnings about features that will be removed (or change) in 3.0. Run your test suite again, and fix code that you get warnings about until there are no warnings left, and all your tests still pass.
> 4. Run the 2to3 source-to-source translator over your source code tree. (See 2to3 - Automated Python 2 to 3 code translation for more on this tool.) Run the result of the translation under Python 3.0. Manually fix up any remaining issues, fixing problems until all tests pass again.
>
> It is not recommended to try to write source code that runs unchanged under both Python 2.6 and 3.0; you'd have to use a very contorted coding style, e.g. avoiding print statements, metaclasses, and much more. If you are maintaining a library that needs to support both Python 2.6 and Python 3.0, the best approach is to modify step 3 above by editing the 2.6 version of the source code and running the 2to3 translator again, rather than editing the 3.0 version of the source code.
I wrote a free book about this. You can read it here: <http://python3porting.com/>

In short:

1. Make sure all your third-party libraries are available for Python 3.
2. Prepare your code by removing common ambiguities:
   * Use // if you really want integer division.
   * Make sure you flag binary files with the 'b' flag when you open them, to clearly indicate whether you mean the data to be binary or not.
3. The higher your test coverage is, the better.
4. Make sure your code runs without warnings under `python2.7 -3`.
5. Now run 2to3.
6. Fix any bugs.

That's it, more or less.
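The two ambiguities in step 2 can be shown in a short sketch (plain Python, nothing project-specific assumed): `//` keeps its floor-division meaning across 2 and 3, while `/` changed, and a `'b'` mode makes the binary intent of a file explicit. The temp-file name below is just for illustration.

```python
# Integer vs. true division: '/' changed meaning between Python 2 and 3,
# but '//' (floor division) behaves the same in both, so use '//' when
# you really want integer division.
print(7 // 2)   # 3 in both Python 2 and Python 3
print(7 / 2)    # 3 in Python 2, 3.5 in Python 3

# Flag binary files explicitly with 'b' so the intent survives the port:
# in Python 3 a 'b' mode reads and writes bytes, not str.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'blob.bin')   # hypothetical file
with open(path, 'wb') as f:
    f.write(b'\x89PNG')          # bytes literal works in 2.6+ and 3.x
with open(path, 'rb') as f:
    header = f.read(4)
print(header == b'\x89PNG')      # True
```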
Tips on upgrading to python 3.0?
[ "python", "python-3.x" ]
I'm trying to connect to a MS SQL Server 2005 Express database that is running on the local host from a Java program. I have tried the same connection URL (below) that I used on another system (same Java code) that was running MS SQL Server 2000, but that does not work. ``` jdbc:jtds:sqlserver://127.0.0.1:1433/Finance ``` Any ideas?
Are you sure it is the correct instance? SQL Server Express tends to install as a named instance, like "localhost\SQLExpress", instead of a default instance. So it would be something like: ``` jdbc:jtds:sqlserver://127.0.0.1:1433/Finance;instance=<instance_name> ``` If this doesn't work, try dropping the instance name and changing the port to the one used by the named instance: ``` jdbc:jtds:sqlserver://127.0.0.1:<instance_port>/Finance ``` Otherwise, check your connectivity through the OSQL.exe tool first. You can also check the [jTDS FAQ](http://jtds.sourceforge.net/faq.html) on this.
I would suggest MicSim's url: ``` jdbc:jtds:sqlserver://localhost/Finance;instance=sqlexpress ``` Check [this](http://jtds.sourceforge.net/faq.html#urlFormat) for jTDS Url Info. [This](http://softwaresalariman.blogspot.com/2007/04/jdbc-to-sql-server-express.html) also has some interesting information to help troubleshoot jtds to sql express sorts of problems. Good luck. Let us know how it goes.
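To make the URL pieces concrete, here is a minimal sketch that just assembles the jTDS URL from its parts. The host, database, and instance names are the values from this question; "SQLEXPRESS" is the usual default instance name for an Express install, so adjust it if yours differs:

```java
// Sketch: building a jTDS connection URL for a (possibly named) SQL Server
// instance. Pass a null/empty instance to get the default-instance form.
public class JtdsUrl {
    static String buildUrl(String host, String database, String instance) {
        StringBuilder url = new StringBuilder("jdbc:jtds:sqlserver://");
        url.append(host).append('/').append(database);
        if (instance != null && !instance.isEmpty()) {
            url.append(";instance=").append(instance);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("localhost", "Finance", "SQLEXPRESS"));
        // The result would then be handed to DriverManager.getConnection(...)
        // once the jtds jar is on the classpath.
    }
}
```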
What is the jTDS JDBC Connect URL to MS SQL Server 2005 Express
[ "java", "sql-server", "sql-server-2005", "jdbc", "jtds" ]
I am working on a Java app that uses Spring [IoC](https://en.wikipedia.org/wiki/Inversion_of_control) and JDBC Template classes. I have a DAO class that has 4 methods: m1() to m4(). m1 performs multiple inserts and updates on table t1, m2 on table t2, m3 on t3, etc. The DAO methods are used as follows:

```
while (true) {
    // process & generate data
    dao.m1(data1);
    dao.m2(data2);
    dao.m3(data3);
    dao.m4(data4);
    // sleep
}
```

I want the db operations under the 4 consecutive method calls to be atomic: either all 4 tables are updated successfully or none are. So, if there is an error while performing operations in m3(), I want to roll back all the changes (updates & inserts) performed in m1 & m2. Does Spring let you do it the following way?

```
while (true) {
    // process & generate data
    transaction = TransactionManager.createNewTransaction();
    transaction.start();
    try {
        dao.m1(data1);
        dao.m2(data2);
        dao.m3(data3);
        dao.m4(data4);
    } catch (DbUpdateException e) {
        transaction.rollBack();
    }
    transaction.end();
    // sleep
}
```

Or are there better ways to do it?
For completeness, the programmatic solution would be:

```
private TransactionTemplate transactionTemplate;

public void setTransactionManager(PlatformTransactionManager transactionManager) {
    this.transactionTemplate = new TransactionTemplate(transactionManager);
}

...

while (true) {
    transactionTemplate.execute(new TransactionCallbackWithoutResult() {
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            try {
                dao.m1(data1);
                dao.m2(data2);
                dao.m3(data3);
                dao.m4(data4);
            } catch (DbUpdateException e) {
                status.setRollbackOnly();
            }
        }
    });
}
```
Yes, Spring allows you to [programmatically control transactions](http://static.springframework.org/spring/docs/2.5.x/reference/transaction.html). Personally I prefer [declarative transactions](http://static.springframework.org/spring/docs/2.5.x/reference/transaction.html) using annotations, which goes like this:

```
public void runBatchJob() {
    while (true) {
        // generate work
        doWork(unitOfWork);
    }
}

@Transactional
public void doWork(UnitOfWork work) {
    dao.m1(data1);
    dao.m2(data2);
    dao.m3(data3);
    dao.m4(data4);
}
```

(Note that with Spring's default proxy-based AOP, `@Transactional` only takes effect on public methods invoked from outside the bean, so `doWork` should be a public method on a Spring-managed bean rather than a private method called internally.)

where the DAO functions are defined:

```
@Transactional
public void m1(Data data) {
    ...
}
```

This requires in your *applicationContext.xml*:

```
<tx:annotation-driven/>
```

Declarative transactions can be declared to require a transaction, require a new transaction, support transactions, etc. Rollback will occur when a block annotated with `@Transactional` throws a `RuntimeException`.
Easy transactions using Spring JDBC?
[ "java", "spring", "jdbc", "transactions", "rollback" ]
> **Possible Duplicate:**
> [In C# what is the difference between myInt++ and ++myInt?](https://stackoverflow.com/questions/437026/in-c-what-is-the-difference-between-myint-and-myint)

In .NET, please.

**Update**: can anyone post a sample scenario of which to use? Both look very similar to me.
* `i++` is a post-increment: the expression returns the original value of i, then increments it.
* `++i` is a pre-increment: the expression increments i, then returns the new value.

Many languages besides C# support this expression behaviour.
``` int i = 0; Console.WriteLine(++i); // prints 1 Console.WriteLine(i++); // prints 1 also Console.WriteLine(i); // prints 2 ```
What is the difference between i++ and ++i?
[ "c#", ".net" ]
Is there a nicer way to write in jUnit ``` String x = "foo bar"; Assert.assertTrue(x.contains("foo")); ```
If you add in Hamcrest and JUnit4, you could do: ``` String x = "foo bar"; Assert.assertThat(x, CoreMatchers.containsString("foo")); ``` With some static imports, it looks a lot better: ``` assertThat(x, containsString("foo")); ``` The static imports needed would be: ``` import static org.hamcrest.MatcherAssert.assertThat; import static org.hamcrest.CoreMatchers.containsString; ```
Use [FEST Assert 2.0](https://github.com/alexruiz/fest-assert-2.x/wiki/One-minute-starting-guide) whenever possible.

EDIT: [AssertJ](http://joel-costigliola.github.io/assertj/assertj-core.html) (a fork of FEST) may have more assertions.

```
assertThat(x).contains("foo");
```
AssertContains on strings in jUnit
[ "java", "string", "junit", "assert" ]
I'm trying to set a default value for a search field. The idea is that the search field has the value "Search" until the user clicks into it; then it should be empty. Also, as long as it is "blank" (with "Search" as the value) it should have the class ".blank". I tried this: ``` <input autocomplete="off" class="" id="searchq" name="searchq" onblur="if (this.value == '') { this.value='Search'; jQuery(this).addClass('blank'); };" onfocus="if (this.value == 'Search') { this.value=''; jQuery(this).removeClass('blank'); };" type="text" value="" /> ``` It works so far, but when I load the site, the field is just empty. I have to click inside the field first and then somewhere else on the page to make the effect work. I guess it has something to do with onblur. Any ideas? Thanks!
This is known as a watermark. See <http://digitalbush.com/projects/watermark-input-plugin/> for an example.
Another idea is to put a placeholder on your input: (Note this is HTML5.) `<input type=text placeholder="Default text here"/>` This way the text field shows "Default text here" in a grey text color. Once clicked, the placeholder text is removed and replaced with your current text, and when the field is empty again it comes back.
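The reason the field starts empty in the original markup is that the logic only lives in the onblur/onfocus handlers, so nothing runs until the first focus/blur cycle. One fix is to apply the same "blank" logic once on page load. A small sketch (the `blankState` helper is hypothetical; the commented jQuery wiring assumes the `#searchq` input from the question):

```javascript
// Decide what an unfocused search field should show: an empty field gets
// the "Search" placeholder text and the .blank class.
function blankState(value) {
  return value === ''
    ? { value: 'Search', blank: true }
    : { value: value, blank: false };
}

// Wiring it up on page load (assumes jQuery, as in the question):
// jQuery(function ($) {
//   var s = blankState($('#searchq').val());
//   $('#searchq').val(s.value).toggleClass('blank', s.blank);
// });
```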
Set default value for Search field - Please Help!
[ "javascript", "jquery", "textfield" ]
I have the following `List<int>` collection and I need to find the highest integer in the collection. It could have an arbitrary number of integers, and the same value can appear multiple times. ``` List<int> MyList = new List<int> { 3, 4, 6, 7, 9, 3, 4, 5, 5 }; ``` What is the simplest algorithm for finding the highest integer? I am using C# and the .NET 3.5 framework.
You can just do: ``` int max = MyList.Max(); ``` See [Enumerable.Max](http://msdn.microsoft.com/en-us/library/bb292667.aspx) for details.
If you need to retrieve the maximum value frequently, you might think about creating your own list class which keeps the maximum item in a cache. Such a class could look like this:

```
public class MaxList<T> : IList<T> where T : IComparable<T>
{
    private readonly List<T> _list = new List<T>();

    public T Maximum { get; private set; }

    public void Add(T item)
    {
        // Update the cached maximum as items are added.
        if (_list.Count == 0 || item.CompareTo(Maximum) > 0)
        {
            Maximum = item;
        }
        _list.Add(item);
    }

    // ... remaining IList<T>/ICollection<T>/IEnumerable<T> members delegate
    // to _list; members that remove or replace items must recompute Maximum.
}
```

(List<T>'s Add and Remove are not virtual, so wrap the list rather than deriving from it, and update the cache in every method that modifies items.) Whether such an approach is really a benefit depends on your scenario. It definitely is if you have a very large list which is rarely updated and you need to retrieve the maximum frequently. Otherwise go for the solutions already suggested because they are much simpler.
Find highest integer in a Generic List using C#?
[ "c#", ".net-3.5" ]
When creating a Socket in Java: ``` new Socket(host, port); ``` The Socket constructor will try to connect to *host:port* before returning. On Windows, this fails almost immediately for unreachable hosts but for Linux it can take up to 5 minutes for the Socket to timeout. I'm aware that if I have control over creating the Sockets, I can do: ``` Socket s = new Socket(); s.bind(..); s.connect(.., timeout); ``` but I'd rather have the OS use a reasonable default value. Is there a way to change this setting on Linux?
I think you want `/proc/sys/net/ipv4/tcp_syn_retries`. The default is usually 5 or 6, which comes out to around 3 minutes. Note that this setting is system-wide.
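For a rough sense of how the retry count maps to the observed timeout: each SYN retransmission doubles the previous wait, so with an initial retransmission timeout of `rto` seconds and `n` retries the connect attempt gives up after roughly `rto * (2**(n+1) - 1)` seconds. The sketch below just does that arithmetic; the 3-second initial RTO is an assumption matching older kernels (newer kernels start at 1 second, giving proportionally shorter totals):

```python
def syn_timeout(retries, rto=3.0):
    """Approximate total wait before a TCP connect gives up, given the
    number of SYN retries and the initial retransmission timeout (s)."""
    # Waits are rto, 2*rto, 4*rto, ... -- a geometric series.
    return rto * (2 ** (retries + 1) - 1)

print(syn_timeout(5))  # 189.0 -- about 3 minutes, matching the answer
print(syn_timeout(2))  # 21.0  -- what you'd see after lowering the setting
```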
I would advise against changing OS settings, as it might affect other applications unexpectedly. The `Socket.setSoTimeout()` method might help you too, though note that it sets the read timeout on an established socket, not the connection timeout.
How to view/change socket connection timeout on Linux?
[ "java", "linux", "sockets", "timeout" ]
Consider this class hierarchy: * `Book extends Goods` * `Book implements Taxable` As we know, there is a relationship between a subclass and its superclass (is-a). Q: Is there any relationship like "is-a" between `Book` and `Taxable`? Good answers, but you said that "is-a" is also a relationship between `Book` and `Taxable`. **However**, "is-a" is a relation between *classes*, and an interface is not a class!
Yes. The relationship is exactly the same: a Book is a Taxable too. **EDIT** An interface is an artifact that happens to match Java's (and probably C#'s, I don't know) `interface` keyword. In OO terms, an interface is the set of operations that a class is 'committed' to perform, and nothing more. It is like a contract between the object's class and its clients. OO programming languages that don't have an `interface` keyword still have the OO concept of a class interface.
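The substitutability behind "is-a" can be seen directly in code. A minimal sketch (the class bodies and the tax rate are hypothetical stand-ins for the ones in the question):

```java
interface Taxable {
    double taxRate();
}

class Goods { }

class Book extends Goods implements Taxable {
    public double taxRate() { return 0.05; }  // arbitrary illustrative rate
}

public class IsARelation {
    public static void main(String[] args) {
        Book b = new Book();
        // A Book can stand in wherever a Goods OR a Taxable is expected --
        // the same substitutability test "is-a" describes for classes:
        Goods asGoods = b;
        Taxable asTaxable = b;
        System.out.println(b instanceof Goods);    // true
        System.out.println(b instanceof Taxable);  // true
    }
}
```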
Well, there's "supports-the-operations-of". Personally I don't find the "is-a", "can-do" etc. mnemonics terribly useful. I prefer to think in terms of what the types allow, whether they're specialising existing behaviour or implementing the behaviour themselves, etc. Analogies, like abstractions, tend to be leaky. If you know what the difference between interface inheritance and implementation inheritance is, you probably don't need any extra phraseology to express it.
Is there any relation between the class that implements interface and that interface?
[ "java", "interface", "class-hierarchy" ]
Can .NET (managed code) read and write to CouchDB? I would like to build the part of my project that does document management using CouchDB.
Take a look at the [SharpCouch](http://code.google.com/p/couchbrowse/source/browse/trunk/SharpCouch/SharpCouch.cs) utility class. It is a simple wrapper class for the CouchDB HTTP API.
It's a late answer, but do check out [Hammock](http://code.google.com/p/relax-net/). It's active, going into production use on several projects soon, and receiving regular updates and fixes. Besides basic object persistence, Hammock gives you: * True POCO. You don't even need an 'id' property; Hammock tracks that internally. * Robust support for views, including an easy-to-use fluent API that both generates AND executes views, and support for creating custom map/reduce views. * Attachments support. * A generic `Repository<>` class that helps bring your queries/views (i.e. \_design doc) together with your C# application code. Repositories are responsible for maintaining \_design docs, and this helps keep CouchDB views from feeling like *stored procs*. * A full unit test suite. This is not prototype software.
Use CouchDB with .NET
[ "c#", ".net", "asp.net", "couchdb" ]
I am using currently the following code to populate a combobox: ``` combobox.DataSource = datatable; combobox.DisplayMember = "Auftragsnummer"; combobox.ValueMember = "ID"; ``` Is there a way to display multiple columns. I tried "Auftragsnummer, Kunde, Beschreibung" for DisplayMember but it did not work.
You can't have multiple columns, though you can use a concatenation of multiple fields as the DisplayMember. Check out: [How do I bind a Combo so the displaymember is concat of 2 fields of source datatable?](https://stackoverflow.com/questions/1006521/how-do-i-bind-a-combo-so-the-displaymember-is-concat-of-2-fields-of-source-datata)
There's an article on MSDN describing how a multi-column ComboBox can be created.

> How to create a multiple-column drop-down list for a combo box in
> Windows Forms <http://support.microsoft.com/kb/982498>

---

Source code from the VB download at the above Microsoft link, which can be easily adapted to work with a ListBox as well as a ComboBox:

```
'************************************* Module Header **************************************'
' Module Name:  MainForm.vb
' Project:      VBWinFormMultipleColumnComboBox
' Copyright (c) Microsoft Corporation.
'
' This sample demonstrates how to display multiple columns of data in the dropdown
' of a ComboBox.
'
' This source is subject to the Microsoft Public License.
' See http://www.microsoft.com/opensource/licenses.mspx#Ms-PL.
' All other rights reserved.
'
' THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND,
' EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED
' WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
'******************************************************************************************'

Imports System
Imports System.Collections.Generic
Imports System.ComponentModel
Imports System.Data
Imports System.Drawing
Imports System.Linq
Imports System.Text
Imports System.Windows.Forms
Imports System.Drawing.Drawing2D

Public Class MainForm

    Private Sub MainForm_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Dim dtTest As DataTable = New DataTable()
        dtTest.Columns.Add("ID", GetType(Integer))
        dtTest.Columns.Add("Name", GetType(String))
        dtTest.Rows.Add(1, "John")
        dtTest.Rows.Add(2, "Amy")
        dtTest.Rows.Add(3, "Tony")
        dtTest.Rows.Add(4, "Bruce")
        dtTest.Rows.Add(5, "Allen")

        ' Bind the ComboBox to the DataTable
        Me.comboBox1.DataSource = dtTest
        Me.comboBox1.DisplayMember = "Name"
        Me.comboBox1.ValueMember = "ID"

        ' Enable owner draw on the ComboBox; the DrawItem handler below draws the items.
        Me.comboBox1.DrawMode = DrawMode.OwnerDrawFixed
    End Sub

    Private Sub comboBox1_DrawItem(ByVal sender As System.Object, _
                                   ByVal e As System.Windows.Forms.DrawItemEventArgs) _
                                   Handles comboBox1.DrawItem
        ' Draw the default background
        e.DrawBackground()

        ' The ComboBox is bound to a DataTable, so the items are DataRowView objects.
        Dim drv As DataRowView = CType(comboBox1.Items(e.Index), DataRowView)

        ' Retrieve the value of each column.
        Dim id As String = drv("ID").ToString()
        Dim name As String = drv("Name").ToString()

        ' Get the bounds for the first column
        Dim r1 As Rectangle = e.Bounds
        r1.Width = r1.Width \ 2

        ' Draw the text on the first column
        Using sb As SolidBrush = New SolidBrush(e.ForeColor)
            e.Graphics.DrawString(id, e.Font, sb, r1)
        End Using

        ' Draw a line to separate the columns
        Using p As Pen = New Pen(Color.Black)
            e.Graphics.DrawLine(p, r1.Right, 0, r1.Right, r1.Bottom)
        End Using

        ' Get the bounds for the second column
        Dim r2 As Rectangle = e.Bounds
        r2.X = e.Bounds.Width \ 2
        r2.Width = r2.Width \ 2

        ' Draw the text on the second column
        Using sb As SolidBrush = New SolidBrush(e.ForeColor)
            e.Graphics.DrawString(name, e.Font, sb, r2)
        End Using
    End Sub

End Class
```
WinForms combobox with multiple columns (C#)?
[ "c#", "winforms", "combobox" ]
Is there a better way to wait for queued threads before executing another process? Currently I'm doing:

```
this.workerLocker = new object();           // Global variable
this.RunningWorkers = arrayStrings.Length;  // Global variable

// Initiate process
foreach (string someString in arrayStrings)
{
    ThreadPool.QueueUserWorkItem(this.DoSomething, someString);
    Thread.Sleep(100);
}

// Waiting execution for all queued threads
lock (this.workerLocker)                    // Global variable (object)
{
    while (this.RunningWorkers > 0)
    {
        Monitor.Wait(this.workerLocker);
    }
}

// Do anything else
Console.WriteLine("END");
```

---

```
// Method DoSomething() definition
public void DoSomething(object data)
{
    // Do a slow process...
    ...

    lock (this.workerLocker)
    {
        this.RunningWorkers--;
        Monitor.Pulse(this.workerLocker);
    }
}
```
You likely want to take a look at AutoResetEvent and ManualResetEvent. These are meant for exactly this situation (waiting for a ThreadPool thread to finish prior to doing "something"). You'd do something like this:

```
static void Main(string[] args)
{
    List<ManualResetEvent> resetEvents = new List<ManualResetEvent>();

    foreach (var x in Enumerable.Range(1, WORKER_COUNT))
    {
        // false = start unsignaled; the worker signals it when done
        ManualResetEvent resetEvent = new ManualResetEvent(false);
        ThreadPool.QueueUserWorkItem(DoSomething, resetEvent);
        resetEvents.Add(resetEvent);
    }

    // Wait for all ManualResetEvents. (You probably want to use an array
    // instead of a List; a List was just easier for the example :-)
    WaitHandle.WaitAll(resetEvents.ToArray());
}

public static void DoSomething(object data)
{
    ManualResetEvent resetEvent = data as ManualResetEvent;
    // Do something
    resetEvent.Set();
}
```

Edit: Forgot to mention you can wait for a single thread, any thread, and so forth as well. Also, depending on your situation, AutoResetEvent can simplify things a bit, since it (as the name implies) resets automatically :-)
How about a `Fork` and `Join` that uses just `Monitor`? ;-p ``` Forker p = new Forker(); foreach (var obj in collection) { var tmp = obj; p.Fork(delegate { DoSomeWork(tmp); }); } p.Join(); ``` Full code is shown in this [earlier answer](https://stackoverflow.com/questions/540078#540380). Or, for a producer/consumer queue of capped size (thread-safe etc.), see [here](https://stackoverflow.com/questions/530211#530228).
Is there a better way to wait for queued threads?
[ "c#", "multithreading", "queue" ]
First I'm going to show you an image of what I'm trying to recreate in pure .NET: [Image](http://img221.imageshack.us/img221/456/7sw5.jpg) I recreated the "Inventory" window; the only thing left is the blue, semi-transparent window that shows information. If I use the Opacity property, then everything on that form becomes transparent, but in the picture the text is not transparent. How do I apply the opacity only to the form itself?
The `Opacity` property only exists on the form, so there's no way it could be overridden on the controls contained therein. I did think that a slightly transparent background image might give the effect you wanted, but I've just tried it and it didn't seem to work.
Not sure if this helps, but the only thing I can think of, short of using WPF, would be to use the TransparencyKey and BackColor of the form. Just make sure your TransparencyKey is not set to the default control gray; make the BackColor of the form red or something. If you're looking for partial transparency, you might end up having to use WPF. Personally I've never tried opacity with WPF on an actual form, so you "may" get the same results, but... EDIT: WPF causes the same condition. All controls on the form become transparent with the form, probably because they inherit the form's properties.
Form-only opacity
[ "c#", ".net", "winforms" ]
On my machine (Quad core, 8gb ram), running Vista x64 Business, with Visual Studio 2008 SP1, I am trying to intersect two sets of numbers very quickly. I've implemented two approaches in C++, and one in C#. The C# approach is faster so far, I'd like to improve the C++ approach so its faster than C#, which I expect C++ can do. Here is the C# output: (Release build) ``` Found the intersection 1000 times, in 4741.407 ms ``` Here is the initial C++ output, for two different approaches (Release x64 build): ``` Found the intersection (using unordered_map) 1000 times, in 21580.7ms Found the intersection (using set_intersection) 1000 times, in 22366.6ms ``` Here is the latest C++ output, for three approaches (Release x64 build): Latest benchmark: ``` Found the intersection of 504 values (using unordered_map) 1000 times, in 28827.6ms Found the intersection of 495 values (using set_intersection) 1000 times, in 9817.69ms Found the intersection of 504 values (using unordered_set) 1000 times, in 24769.1ms ``` So, the set\_intersection approach is now approx 2x slower than C#, but 2x faster than the initial C++ approaches. Latest C++ code: ``` Code: // MapPerformance.cpp : Defines the entry point for the console application. 
// #include "stdafx.h" #include <hash_map> #include <vector> #include <iostream> #include <time.h> #include <algorithm> #include <set> #include <unordered_set> #include <boost\unordered\unordered_map.hpp> #include "timer.h" using namespace std; using namespace stdext; using namespace boost; using namespace tr1; int runIntersectionTest2(const vector<int>& set1, const vector<int>& set2) { // hash_map<int,int> theMap; // map<int,int> theMap; unordered_set<int> theSet; theSet.insert( set1.begin(), set1.end() ); int intersectionSize = 0; vector<int>::const_iterator set2_end = set2.end(); for ( vector<int>::const_iterator iterator = set2.begin(); iterator != set2_end; ++iterator ) { if ( theSet.find(*iterator) != theSet.end() ) { intersectionSize++; } } return intersectionSize; } int runIntersectionTest(const vector<int>& set1, const vector<int>& set2) { // hash_map<int,int> theMap; // map<int,int> theMap; unordered_map<int,int> theMap; vector<int>::const_iterator set1_end = set1.end(); // Now intersect the two sets by populating the map for ( vector<int>::const_iterator iterator = set1.begin(); iterator != set1_end; ++iterator ) { int value = *iterator; theMap[value] = 1; } int intersectionSize = 0; vector<int>::const_iterator set2_end = set2.end(); for ( vector<int>::const_iterator iterator = set2.begin(); iterator != set2_end; ++iterator ) { int value = *iterator; unordered_map<int,int>::iterator foundValue = theMap.find(value); if ( foundValue != theMap.end() ) { theMap[value] = 2; intersectionSize++; } } return intersectionSize; } int runSetIntersection(const vector<int>& set1_unsorted, const vector<int>& set2_unsorted) { // Create two vectors std::vector<int> set1(set1_unsorted.size()); std::vector<int> set2(set2_unsorted.size()); // Copy the unsorted data into them std::copy(set1_unsorted.begin(), set1_unsorted.end(), set1.begin()); std::copy(set2_unsorted.begin(), set2_unsorted.end(), set2.begin()); // Sort the data sort(set1.begin(),set1.end()); 
sort(set2.begin(),set2.end()); vector<int> intersection; intersection.reserve(1000); set_intersection(set1.begin(),set1.end(), set2.begin(), set2.end(), back_inserter(intersection)); return intersection.size(); } void createSets( vector<int>& set1, vector<int>& set2 ) { srand ( time(NULL) ); set1.reserve(100000); set2.reserve(1000); // Create 100,000 values for set1 for ( int i = 0; i < 100000; i++ ) { int value = 1000000000 + i; set1.push_back(value); } // Try to get half of our values intersecting float ratio = 200000.0f / RAND_MAX; // Create 1,000 values for set2 for ( int i = 0; i < 1000; i++ ) { int random = rand() * ratio + 1; int value = 1000000000 + random; set2.push_back(value); } // Make sure set1 is in random order (not sorted) random_shuffle(set1.begin(),set1.end()); } int _tmain(int argc, _TCHAR* argv[]) { int intersectionSize = 0; vector<int> set1, set2; createSets( set1, set2 ); Timer timer; for ( int i = 0; i < 1000; i++ ) { intersectionSize = runIntersectionTest(set1, set2); } timer.Stop(); cout << "Found the intersection of " << intersectionSize << " values (using unordered_map) 1000 times, in " << timer.GetMilliseconds() << "ms" << endl; timer.Reset(); for ( int i = 0; i < 1000; i++ ) { intersectionSize = runSetIntersection(set1,set2); } timer.Stop(); cout << "Found the intersection of " << intersectionSize << " values (using set_intersection) 1000 times, in " << timer.GetMilliseconds() << "ms" << endl; timer.Reset(); for ( int i = 0; i < 1000; i++ ) { intersectionSize = runIntersectionTest2(set1,set2); } timer.Stop(); cout << "Found the intersection of " << intersectionSize << " values (using unordered_set) 1000 times, in " << timer.GetMilliseconds() << "ms" << endl; getchar(); return 0; } ``` C# code: ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace DictionaryPerformance { class Program { static void Main(string[] args) { List<int> set1 = new List<int>(100000); List<int> set2 = new 
List<int>(1000); // Create 100,000 values for set1 for (int i = 0; i < 100000; i++) { int value = 1000000000 + i; set1.Add(value); } Random random = new Random(DateTime.Now.Millisecond); // Create 1,000 values for set2 for (int i = 0; i < 1000; i++) { int value = 1000000000 + (random.Next() % 200000 + 1); set2.Add(value); } long start = System.Diagnostics.Stopwatch.GetTimestamp(); for (int i = 0; i < 1000; i++) { runIntersectionTest(set1,set2); } long duration = System.Diagnostics.Stopwatch.GetTimestamp() - start; Console.WriteLine(String.Format("Found the intersection 1000 times, in {0} ms", ((float) duration * 1000.0f) / System.Diagnostics.Stopwatch.Frequency)); Console.ReadKey(); } static int runIntersectionTest(List<int> set1, List<int> set2) { Dictionary<int,int> theMap = new Dictionary<int,int>(100000); // Now intersect the two sets by populating the map foreach( int value in set1 ) { theMap[value] = 1; } int intersectionSize = 0; foreach ( int value in set2 ) { int count; if ( theMap.TryGetValue(value, out count ) ) { theMap[value] = 2; intersectionSize++; } } return intersectionSize; } } } ``` C++ code: ``` // MapPerformance.cpp : Defines the entry point for the console application. 
// #include "stdafx.h" #include <hash_map> #include <vector> #include <iostream> #include <time.h> #include <algorithm> #include <set> #include <boost\unordered\unordered_map.hpp> #include "timer.h" using namespace std; using namespace stdext; using namespace boost; int runIntersectionTest(vector<int> set1, vector<int> set2) { // hash_map<int,int> theMap; // map<int,int> theMap; unordered_map<int,int> theMap; // Now intersect the two sets by populating the map for ( vector<int>::iterator iterator = set1.begin(); iterator != set1.end(); iterator++ ) { int value = *iterator; theMap[value] = 1; } int intersectionSize = 0; for ( vector<int>::iterator iterator = set2.begin(); iterator != set2.end(); iterator++ ) { int value = *iterator; unordered_map<int,int>::iterator foundValue = theMap.find(value); if ( foundValue != theMap.end() ) { theMap[value] = 2; intersectionSize++; } } return intersectionSize; } int runSetIntersection(set<int> set1, set<int> set2) { set<int> intersection; set_intersection(set1.begin(),set1.end(), set2.begin(), set2.end(), inserter(intersection, intersection.end())); return intersection.size(); } int _tmain(int argc, _TCHAR* argv[]) { srand ( time(NULL) ); vector<int> set1; vector<int> set2; set1.reserve(10000); set2.reserve(1000); // Create 100,000 values for set1 for ( int i = 0; i < 100000; i++ ) { int value = 1000000000 + i; set1.push_back(value); } // Create 1,000 values for set2 for ( int i = 0; i < 1000; i++ ) { int random = rand() % 200000 + 1; random *= 10; int value = 1000000000 + random; set2.push_back(value); } Timer timer; for ( int i = 0; i < 1000; i++ ) { runIntersectionTest(set1, set2); } timer.Stop(); cout << "Found the intersection (using unordered_map) 1000 times, in " << timer.GetMilliseconds() << "ms" << endl; set<int> set21; set<int> set22; // Create 100,000 values for set1 for ( int i = 0; i < 100000; i++ ) { int value = 1000000000 + i; set21.insert(value); } // Create 1,000 values for set2 for ( int i = 0; i < 1000; i++ 
) { int random = rand() % 200000 + 1; random *= 10; int value = 1000000000 + random; set22.insert(value); } timer.Reset(); for ( int i = 0; i < 1000; i++ ) { runSetIntersection(set21,set22); } timer.Stop(); cout << "Found the intersection (using set_intersection) 1000 times, in " << timer.GetMilliseconds() << "ms" << endl; getchar(); return 0; } ``` --- Ok, here is the latest, with some changes: * The C++ sets are now properly setup so they have a 50% intersection (like the C#) * Set1 is shuffled so its not sorted, set2 was already not sorted * The set\_intersection implementation now uses vectors, and sorts them first C++ (Release, x64) Results: ``` Found the intersection of 503 values (using unordered_map) 1000 times, in 35131.1ms Found the intersection of 494 values (using set_intersection) 1000 times, in 10317ms ``` So its 2x slower than C#. @Jalf: You're getting some pretty fast numbers, is there something I'm doing wrong here? C++ Code: ``` // MapPerformance.cpp : Defines the entry point for the console application. 
// #include "stdafx.h" #include <hash_map> #include <vector> #include <iostream> #include <time.h> #include <algorithm> #include <set> #include <boost\unordered\unordered_map.hpp> #include "timer.h" using namespace std; using namespace stdext; using namespace boost; int runIntersectionTest(const vector<int>& set1, const vector<int>& set2) { // hash_map<int,int> theMap; // map<int,int> theMap; unordered_map<int,int> theMap; vector<int>::const_iterator set1_end = set1.end(); // Now intersect the two sets by populating the map for ( vector<int>::const_iterator iterator = set1.begin(); iterator != set1_end; ++iterator ) { int value = *iterator; theMap[value] = 1; } int intersectionSize = 0; vector<int>::const_iterator set2_end = set2.end(); for ( vector<int>::const_iterator iterator = set2.begin(); iterator != set2_end; ++iterator ) { int value = *iterator; unordered_map<int,int>::iterator foundValue = theMap.find(value); if ( foundValue != theMap.end() ) { theMap[value] = 2; intersectionSize++; } } return intersectionSize; } int runSetIntersection(const vector<int> set1_unsorted, const vector<int> set2_unsorted) { // Create two vectors std::vector<int> set1(set1_unsorted.size()); std::vector<int> set2(set2_unsorted.size()); // Copy the unsorted data into them std::copy(set1_unsorted.begin(), set1_unsorted.end(), set1.begin()); std::copy(set2_unsorted.begin(), set2_unsorted.end(), set2.begin()); // Sort the data sort(set1.begin(),set1.end()); sort(set2.begin(),set2.end()); vector<int> intersection; intersection.reserve(1000); set_intersection(set1.begin(),set1.end(), set2.begin(), set2.end(), inserter(intersection, intersection.end())); return intersection.size(); } void createSets( vector<int>& set1, vector<int>& set2 ) { srand ( time(NULL) ); set1.reserve(100000); set2.reserve(1000); // Create 100,000 values for set1 for ( int i = 0; i < 100000; i++ ) { int value = 1000000000 + i; set1.push_back(value); } // Try to get half of our values intersecting float ratio = 
200000.0f / RAND_MAX; // Create 1,000 values for set2 for ( int i = 0; i < 1000; i++ ) { int random = rand() * ratio + 1; int value = 1000000000 + random; set2.push_back(value); } // Make sure set1 is in random order (not sorted) random_shuffle(set1.begin(),set1.end()); } int _tmain(int argc, _TCHAR* argv[]) { int intersectionSize = 0; vector<int> set1, set2; createSets( set1, set2 ); Timer timer; for ( int i = 0; i < 1000; i++ ) { intersectionSize = runIntersectionTest(set1, set2); } timer.Stop(); cout << "Found the intersection of " << intersectionSize << " values (using unordered_map) 1000 times, in " << timer.GetMilliseconds() << "ms" << endl; timer.Reset(); for ( int i = 0; i < 1000; i++ ) { intersectionSize = runSetIntersection(set1,set2); } timer.Stop(); cout << "Found the intersection of " << intersectionSize << " values (using set_intersection) 1000 times, in " << timer.GetMilliseconds() << "ms" << endl; getchar(); return 0; } ```
There are several problems with your test. First, you are not testing set intersection, but "create a couple of arrays, fill them with random numbers, and then perform set intersection". You should only time the portion of the code you're actually interested in. Even if you're going to want to do those things, they should not be benchmarked here. Measure one thing at a time, to reduce uncertainty. If you want your C++ implementation to perform better, you first need to know which part of it is slower than expected, which means you have to separate the setup code from the intersection test. Second, you should run the test a large number of times to take possible caching effects and other uncertainties into account. (And probably output one total time for, say, 1000 runs, rather than an individual time for each. That way you reduce the uncertainty from the timer, which might have limited resolution and report inaccurate results when used in the 0-20ms range.) Further, as far as I can read from the docs, the input to set\_intersection should be sorted, which set2 won't be. And there seems to be no reason to use `unordered_map` when `unordered_set` would be a far better match for what you're doing. About the setup code being needed: note that you probably *don't* need to populate vectors in order to run the intersection. Both your own implementation and `set_intersection` work on iterators already, so you can simply pass them a pair of iterators to the data structures your inputs are in already. A few more specific comments on your code: * Use `++iterator` instead of `iterator++` * Rather than calling vector.end() at each loop iteration, call it once and cache the result * Experiment with using sorted vectors vs std::set vs `unordered_set` (not `unordered_map`) **Edit:** I haven't tried your C# version, so I can't compare the numbers properly, but here's my modified test. 
Each is run 1000 times, on a Core 2 Quad 2.5GHz with 4GB RAM: ``` std::set_intersection on std::set: 2606ms std::set_intersection on tr1::unordered_set: 1014ms std::set_intersection on sorted vectors: 171ms std::set_intersection on unsorted vectors: 10140ms ``` The last one is a bit unfair, because it has to both copy and sort the vectors. Ideally, only the sort should be part of the benchmark. I tried creating a version that used an array of 1000 unsorted vectors (so I woudln't have to copy the unsorted data in each iteration), but the performance was about the same, or a bit worse, because this would cause constant cache misses, so I reverted back to this version And my code: ``` #define _SECURE_SCL 0 #include <ctime> #include <vector> #include <set> #include <iostream> #include <algorithm> #include <unordered_set> #include <windows.h> template <typename T, typename OutIter> void stl_intersect(const T& set1, const T& set2, OutIter out){ std::set_intersection(set1.begin(), set1.end(), set2.begin(), set2.end(), out); } template <typename T, typename OutIter> void sort_stl_intersect(T& set1, T& set2, OutIter out){ std::sort(set1.begin(), set1.end()); std::sort(set2.begin(), set2.end()); std::set_intersection(set1.begin(), set1.end(), set2.begin(), set2.end(), out); } template <typename T> void init_sorted_vec(T first, T last){ for ( T cur = first; cur != last; ++cur) { int i = cur - first; int value = 1000000000 + i; *cur = value; } } template <typename T> void init_unsorted_vec(T first, T last){ for ( T cur = first; cur != last; ++cur) { int i = rand() % 200000 + 1; i *= 10; int value = 1000000000 + i; *cur = value; } } struct resize_and_shuffle { resize_and_shuffle(int size) : size(size) {} void operator()(std::vector<int>& vec){ vec.resize(size); } int size; }; int main() { srand ( time(NULL) ); std::vector<int> out(100000); std::vector<int> sortedvec1(100000); std::vector<int> sortedvec2(1000); init_sorted_vec(sortedvec1.begin(), sortedvec1.end()); 
init_unsorted_vec(sortedvec2.begin(), sortedvec2.end()); std::sort(sortedvec2.begin(), sortedvec2.end()); std::vector<int> unsortedvec1(sortedvec1.begin(), sortedvec1.end()); std::vector<int> unsortedvec2(sortedvec2.begin(), sortedvec2.end()); std::random_shuffle(unsortedvec1.begin(), unsortedvec1.end()); std::random_shuffle(unsortedvec2.begin(), unsortedvec2.end()); std::vector<int> vecs1[1000]; std::vector<int> vecs2[1000]; std::fill(vecs1, vecs1 + 1000, unsortedvec1); std::fill(vecs2, vecs2 + 1000, unsortedvec2); std::set<int> set1(sortedvec1.begin(), sortedvec1.end()); std::set<int> set2(sortedvec2.begin(), sortedvec2.end()); std::tr1::unordered_set<int> uset1(sortedvec1.begin(), sortedvec1.end()); std::tr1::unordered_set<int> uset2(sortedvec2.begin(), sortedvec2.end()); DWORD start, stop; DWORD delta[4]; start = GetTickCount(); for (int i = 0; i < 1000; ++i){ stl_intersect(set1, set2, out.begin()); } stop = GetTickCount(); delta[0] = stop - start; start = GetTickCount(); for (int i = 0; i < 1000; ++i){ stl_intersect(uset1, uset2, out.begin()); } stop = GetTickCount(); delta[1] = stop - start; start = GetTickCount(); for (int i = 0; i < 1000; ++i){ stl_intersect(sortedvec1, sortedvec2, out.begin()); } stop = GetTickCount(); delta[2] = stop - start; start = GetTickCount(); for (int i = 0; i < 1000; ++i){ sort_stl_intersect(vecs1[i], vecs1[i], out.begin()); } stop = GetTickCount(); delta[3] = stop - start; std::cout << "std::set_intersection on std::set: " << delta[0] << "ms\n"; std::cout << "std::set_intersection on tr1::unordered_set: " << delta[1] << "ms\n"; std::cout << "std::set_intersection on sorted vectors: " << delta[2] << "ms\n"; std::cout << "std::set_intersection on unsorted vectors: " << delta[3] << "ms\n"; return 0; } ``` There's no reason why C++ should always be faster than C#. C# has a few key advantages that require a lot of care to compete with in C++. 
The primary one I can think of is that dynamic allocations are ridiculously cheap in .NET-land. Every time a C++ vector, set or unordered\_set (or any other container) has to resize or expand, it is a very costly `malloc` operation. In .NET, a heap allocation is little more than adding an offset to a pointer. So if you want the C++ version to compete, you'll probably have to solve that, allowing your containers to resize without having to perform actual heap allocations, probably by using custom allocators for the containers (perhaps boost::pool might be a good bet, or you can try rolling your own). Another issue is that `set_intersection` only works on sorted input, and in order to reproduce test results that involve a sort, we have to make a fresh copy of the unsorted data in each iteration, which is costly (although again, using custom allocators will help a lot). I don't know what form your input takes, but it is possible that you can sort your input directly, without copying it, and then run `set_intersection` directly on that. (That would be easy to do if your input is an array or an STL container, at least.) One of the key advantages of the STL is that it is so flexible: it can work on pretty much any input sequence. In C#, you pretty much have to copy the input to a List or Dictionary or something, but in C++, you might be able to get away with running `std::sort` and `set_intersection` on the raw input. Finally, of course, try running the code through a profiler and see exactly where the time is being spent. You might also want to try running the code through GCC instead. It's my impression that STL performance in MSVC is sometimes a bit quirky. It might be worth testing under another compiler just to see if you get similar timings there. 
Finally, you might find this blog post relevant for the performance of C++ vs C#: <http://blogs.msdn.com/ricom/archive/2005/05/10/416151.aspx> The moral of it is essentially that yes, you can get better performance in C++, but it is a surprising amount of work.
One problem I see right away is that you're passing the sets in C++ by value and not by const reference, so you're copying them every time you pass them around! Also, I would not use a set for the target of `set_intersection`. I would use something like ``` int runSetIntersection(const set<int>& set1, const set<int>& set2) { vector<int> intersection; intersection.reserve(10000); // or whatever the max is set_intersection(set1.begin(), set1.end(), set2.begin(), set2.end(), back_inserter(intersection)); return intersection.size(); } ``` This code, however, still allocates inside the function. Even faster would be ``` int runSetIntersection(const set<int>& set1, const set<int>& set2, vector<int>& scratch) { scratch.reserve(10000); // or whatever the max is set_intersection(set1.begin(), set1.end(), set2.begin(), set2.end(), back_inserter(scratch)); return scratch.size(); } ``` And then allocate scratch before you start the timer. Though, if you're just looking for the size, a hand-written for loop combined with set::find might give even better results.
Fast intersection of sets: C++ vs C#
[ "c++", "performance", "stl", "intersection" ]
In light of the accepted answer pointing out that returning a non-const reference to a member from a const instance method won't compile (without a cast or making the member variable mutable), the question has become more a general best-practices discussion of const instance methods. For posterity, here's the original question: If I have an object with a getter that returns a non-const reference, for example: ``` SomeObject& SomeOtherObject::foo(){ return someObjectInstance; } ``` Should this be made const? Obviously the call itself doesn't modify the object, but the caller could then modify someObjectInstance which would mutate my instance of SomeOtherObject. I guess my question really boils down to "**what exactly does const on a member method mean?**" Is it A) the call itself won't mutate the object or B) no mutation of the object can occur during the call, or as a result of the returned references/pointers (with a caveat for people who do const\_casts). As an aside, I'm currently adding this where I need const calls. ``` const SomeObject& SomeOtherObject::constFoo() const{ return someObjectInstance; } ``` to be on the safe side, since I'm reluctant to do ``` SomeObject& SomeOtherObject::foo() const{ return someObjectInstance; } ``` even though it would make my life easier in some places.
**const** (when applied to a member function) is mainly useful as a means of self-documentation. It is a contract with the calling code that this function will not modify the external state (i.e. have no observable side effects). The compiler achieves this by making all members effectively const while inside a const member function. It is not uncommon to see code like: ``` const SomeObject& SomeOtherObject::constFoo() const; SomeObject& SomeOtherObject::constFoo(); ``` The following won't compile (MSVC 9 and gcc 3.4.4): ``` SomeObject& SomeOtherObject::constFoo() const{ return someObjectInstance; } ``` You could hack around the above error by casting away the const: ``` SomeObject& SomeOtherObject::constFoo() const{ return (SomeObject&)someObjectInstance; } ``` but this of course breaks the contract with the users of your code that you won't be changing the SomeOtherObject instance. You could also make someObjectInstance **mutable** to avoid the cast, but in the end it really isn't any better.
To answer your question: > "what exactly does const on a member > method mean?" It means that the method can safely and sensibly be applied to a const object. Of course, what "safely" and "sensibly" mean will depend on the problem domain your classes model. You might also want to consider having your accessors return values, rather than references. If they must return references, then I would reconsider my overall design.
What is the best practice regarding const instance methods?
[ "c++", "constants" ]
I have a function that's the main bottleneck of my application, because it's doing heavy string comparisons against a global list shared among the threads. My question is basically this: is it bad practice to lock the list (called List gList) multiple times in one function? That is, locking when doing the lookup, unlocking while getting a new item ready for insertion, then locking again and adding the new item. When I use a profiler I don't see any indication that I'm paying a heavy price for this, but could I be at a later point, or when the code is out in the wild? Does anyone have any best practices or personal experience with this?
It sounds like you don't want to be releasing the lock between the lookup and the insertion. Either that, or you don't need to lock during the lookup at all. Are you trying to add to the list only if the element is not already there? If so, then releasing the lock between the two steps allows another thread to add to the list while you are preparing your element. By the time you are ready to add, your lookup is out of date. If it is not a problem that the lookup might be out of date, then you probably don't need to lock during the lookup at all.
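The question is about C#, but the check-then-act race described above is language-independent. Here is a hypothetical sketch in Java (`synchronized` standing in for C#'s `lock`; the class and method names are mine) showing the lookup and the insertion held under a single lock, so no other thread can slip an insert in between the two steps.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: one lock spans both the lookup and the insertion,
// making the "check then add" sequence atomic.
public class GuardedList {
    private final List<String> items = new ArrayList<>();
    private final Object gate = new Object();

    // Returns true if the item was added, false if it was already present.
    public boolean addIfAbsent(String item) {
        synchronized (gate) {                 // acquired once for both steps
            if (items.contains(item)) {
                return false;                 // lookup result cannot go stale here
            }
            items.add(item);
            return true;
        }
    }
}
```

If preparing the new item is expensive, do that work *before* taking the lock, and keep only the contains/add pair inside it.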
How do you perform the locking? You may want to look into using [`ReaderWriterLockSlim`](http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.aspx), if that is not already the case. Here is a simple usage example: ``` class SomeData { private IList<string> _someStrings = new List<string>(); private ReaderWriterLockSlim _lock = new ReaderWriterLockSlim(); public void Add(string text) { _lock.EnterWriteLock(); try { _someStrings.Add(text); } finally { _lock.ExitWriteLock(); } } public bool Contains(string text) { _lock.EnterReadLock(); try { return _someStrings.Contains(text); } finally { _lock.ExitReadLock(); } } } ```
Multiple locks in the same function
[ "c#", "multithreading" ]
When you do: ``` MyClass.class.someMethod() ``` What exactly is the "class" field? I can't find it in the API docs. Is it an inherited static field? I thought reserved keywords were not allowed as entity names.
Please read: A class literal is an expression consisting of the name of a class, interface, array, or primitive type, or the pseudo-type void, followed by a '.' and the token class. The type of C.class, where C is the name of a class, interface, or array type, is Class<C>. If p is the name of a primitive type, let B be the type of an expression of type p after boxing conversion (§5.1.7). Then the type of p.class is Class<B>. The type of void.class is Class<Void>. [Java Language Specification: 15.8.2. Class Literals](https://docs.oracle.com/javase/specs/jls/se15/html/jls-15.html#jls-15.8.2)
`MyClass` is not the name of an object, it's a class name, so this is actually special syntax that retrieves the corresponding `Class<MyClass>` object for the named class. It is a language feature, not a real property of the `MyClass` class.
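A small demonstration of both answers (the class names here are illustrative): the literal evaluates to the same shared `Class` object that `getClass()` returns, and it is compiler syntax rather than an inherited static field.

```java
// MyClass.class is a class literal: special syntax, not a field lookup.
public class ClassLiteralDemo {
    static class MyClass {}

    public static void main(String[] args) {
        Class<MyClass> literal = MyClass.class;
        MyClass instance = new MyClass();

        // The literal and getClass() yield the very same Class object.
        if (literal != instance.getClass()) throw new AssertionError();

        // Primitive types get their own Class objects, distinct from the wrappers.
        if (int.class == Integer.class) throw new AssertionError();

        System.out.println(literal.getName()); // e.g. ClassLiteralDemo$MyClass
    }
}
```

Because there is one `Class` object per loaded class, `==` is a valid identity test between class literals and `getClass()` results.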
About the "class" property/field
[ "java" ]
I'm having trouble declaring a const field in an abstract class. Why is this? ### edit I should have clarified. My problem is that my child classes can't see the const field: ``` protected const string Prefix = "dynfrm_"; ``` If I remove the const keyword, I can get to it from a grandchild class.
As long as you initialize it in the declaration, there shouldn't be an issue. What is the error message you are receiving?
``` public abstract class Class1 { protected const string Prefix = "dynfrm_"; } public class Class2 : Class1 { public void GetConst() { Console.WriteLine(Prefix); } } ```
C# Const field in abstract class
[ "c#", "constants", "abstract" ]
I'm trying to access the Magento customer session in another part of my website. ``` domain.com/shop/ <- Magento domain.com/test.php ``` The shop itself works like a charm; however, I'm trying to determine within test.php if a customer is logged in, so I can display his name with a link to his cart. Contents of **test.php** so far: ``` <?php require_once dirname(__FILE__).'/shop/app/Mage.php'; umask(0); Mage::app('default'); Mage::getSingleton('core/session', array('name' => 'frontend')); var_dump(Mage::getSingleton('customer/session')->isLoggedIn()); ?> ``` I keep getting **bool(false)** returned. I'm logged into Magento at domain.com/shop/ so ‘true’ is expected. Am I forgetting something?
I would recommend checking the path of the cookie that Magento sets. Most likely, it is set to `/shop`, so the cookie will not be accessible to your file above it. You can modify the path that Magento uses to set its cookie in the control panel under `System -> Configuration -> Web (under the General heading) -> Session cookie management`
It is under `Configuration -> Web` on Magento ver. 1.10.1.1.
How to access Magento user's session from outside Magento?
[ "php", "oop", "session", "magento", "e-commerce" ]
I'm building a general purpose data translation tool for internal enterprise use, using Java 5. The various departments use differing formats for coordinate information (latitudes/longitudes), and they want to see the data in their own format. For example, the coordinates of the White House in DMS format are 38° 53' 55.133" N, 77° 02' 15.691" W But can also be expressed as: 385355.133 / -0770215.691 I want to represent the pattern required by each system as a string, and then use those patterns to parse instance data from the input system, and also use that pattern when formatting a string for consumption by the output system. So it is not unlike a date/time formatting problem, for which the JDK provides java.text.SimpleDateFormat that lets you convert among various date/time patterns, which are defined by strings such as "YYYY-MM-DD" or "MM/DD/YY". My question is, do I have to build this CoordinateFormat thing totally from scratch, or is there a good general tool or well-defined approach I can use to guide me in this endeavor?
If I read it right, you're talking about the problem addressed by the [Interpreter](http://en.wikipedia.org/wiki/Interpreter_pattern) pattern, but sort of going in both directions. There are some easy ways to get nice generic interfaces, so you can get the rest of the thing running. My recommendation on that is something like: ``` public interface Interpreter<OutputType> { public void setCode(String coding); public OutputType decode(String formattedData); public String encode(OutputType rawData); } ``` However, there are a couple of hurdles with concrete implementations. For your date example, you might need to deal with "9/9/09", "9 SEP 09", "September 9th, 2009". The first "kind" of date is straightforward - numbers and set divider symbols, but either of the other two is pretty nasty. Honestly, doing something totally generic (which could already be canned) probably isn't reasonable, so I recommend the following. I'd attack it on two levels, the first of which is pretty straightforward with regex and format string: chomping up the data string into the things that are going to become raw data. You'd supply something like "D\*/M\*/YY" (or "M\*/D\*") for the first one, "D\* MMM YY" for the second, and "Mm+ D\*e\*, YYYY" for the last, where you've defined in your data some reserved symbols (D, M, Y, obvious interpretations) and for all data types (\* multiple characters possible, + "full" output, e defined extraneous characters) - these symbols obviously being specific to your application. Then your regex stuff would chomp the string up, feeding everything associated with each reserved character to the individual data fields, and saving the decoration part (commas, etc) in some formatting string. This first level can all be fairly generic - each data type (e.g., date, coordinate, address) has reserved symbols (which don't overlap with any formatting characters), and all data types have some shared symbols. 
Perhaps the Interpreter interface would also have `public List<Character> reservedSymbols()` and `public void splitCode(List<String> splitcodes)` methods, or perhaps guaranteed fields, so that you can make the divider an external class and pass in the results. The second level is less easy, because it gets at the part that can't be generic. Based on the format of the reserved symbols, the individual fields need to know how to present themselves. To the date example, MM would tell the month to print as (01, 02, ... 12), M\* as (1, 2, ... 12), MMM as (JAN, FEB, ... DEC), Mmm as (Jan, Feb, ...Dec), etc. If your company has been somewhat consistent or doesn't venture too far from standard representations of stuff, then hand coding each of these shouldn't be too bad (and in fact, there are probably smart ways within each data type to reduce replicated code). But I don't think it's practical to generify all this stuff - I mean, practically representing that something that can be presented as a number or characters (like months) or whole data that can be inferred from partial data (e.g., century from year) or how to get truncated representations from the data (e.g., the truncation for year is to the last two digits vice most normal numbers truncating to two leading digits) is probably going to take as long as handwriting those cases, though I guess I can imagine cases of your application the trade-off might be worth it. Date is really tricky example, but I can certainly see equally tricky things coming up for other sorts of data. Summary: -there's an easy generic face you can put on your problem, so the rest of your app can be coded around it. -there's a fairly easy and generic first pass parsing, by having universal reserved symbols, and then reserved symbols for each data type; make sure these don't collide with symbols that will appear in formatting -there's a somewhat tedious final coding stage for individual data bits
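As one concrete instance of the parsing level described above, here is a hypothetical sketch (the class name `PackedDmsFormat` and its method are mine, not an existing API) for the question's packed format, `385355.133` / `-0770215.691`: a regex splits the string into sign, degrees, minutes, and seconds fields, which are then combined into signed decimal degrees.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative parser for ONE coding: packed [sign]DDMMSS.sss, with either
// 2-digit (latitude) or 3-digit (longitude) degrees.
public class PackedDmsFormat {
    private static final Pattern PACKED =
        Pattern.compile("([+-]?)(\\d{2,3})(\\d{2})(\\d{2}(?:\\.\\d+)?)");

    // Parse packed DMS into signed decimal degrees.
    public static double parse(String text) {
        Matcher m = PACKED.matcher(text.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("Not packed DMS: " + text);
        }
        double degrees = Integer.parseInt(m.group(2))        // DD or DDD
                       + Integer.parseInt(m.group(3)) / 60.0  // MM
                       + Double.parseDouble(m.group(4)) / 3600.0; // SS.sss
        return "-".equals(m.group(1)) ? -degrees : degrees;
    }
}
```

The matching `format` direction would be the inverse (split decimal degrees back into whole degrees, minutes, and seconds, then reassemble with `String.format`), with decimal degrees serving as the neutral intermediate representation between departmental codings.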
Take a look at [JScience](http://jscience.org/), particularly [this class](http://jscience.org/experimental/javadoc/org/jscience/astronomy/solarsystem/coordinates/CoordinateFormatter.html).
A means of specifying pattern strings that drive parsing and formatting for arbitrary objects?
[ "java", "parsing", "formatting" ]
I work in a LAMP environment. I'm looking for a script that can take an image a user uploads to my site and produce a new image containing the original with a small stamp in the top corner. Thanks
Your best bet would be to [look here](https://www.php.net/manual/en/image.examples.merged-watermark.php). It is a watermarking example provided in the PHP manual. Just make sure that you have the GD library installed.
You'd want a watermark which can be [accomplished with GD](http://www.sitepoint.com/article/watermark-images-php/)
How can I produce a small stamp on each image on my site
[ "php", "image", "watermark", "stamp" ]
I need a way to programmatically block and then later unblock specific websites based on their domain names. I only need to block browsers (so HTTP & HTTPS would be sufficient, I guess?), but not just Internet Explorer; it should also work for anyone trying to run Chrome or Firefox. This needs to work on Windows XP and be usable from a .NET program (VB.NET or C#). (P.S. I had found this question: [How to unblock website which is blocked, using C#?](https://stackoverflow.com/questions/868054/how-to-unblock-website-which-is-blocked-using-c) which seems to be asking the same thing; at the time I could not understand it, but now I see it. Thanks, all.)
This line in the hosts file will redirect to localhost. Though I have nothing against Nascar ;) ``` 127.0.0.1 www.nascar.com ``` [Block websites using a hosts file.](http://allthingsmarked.com/2006/08/28/howto-block-websites-using-the-hosts-file/)
A down and dirty way would be to dynamically update the hosts file. c:\Windows\System32\drivers\etc\hosts
Programmatic way to temporarily block specific Web Sites?
[ "c#", "vb.net", "windows-xp", "cross-browser" ]
I fully realize that what I am proposing does not follow the .NET guidelines, and, therefore, is probably a poor idea for this reason alone. However, I would like to consider this from two possible perspectives: (1) Should I consider using this for my own development work, which is 100% for internal purposes. (2) Is this a concept that the framework designers could consider changing or updating? I am thinking about using an event signature that utilizes a strong typed 'sender', instead of typing it as 'object', which is the current .NET design pattern. That is, instead of using a standard event signature that looks like this: ``` class Publisher { public event EventHandler<PublisherEventArgs> SomeEvent; } ``` I am considering using an event signature that utilizes a strong-typed 'sender' parameter, as follows: First, define a "StrongTypedEventHandler": ``` [SerializableAttribute] public delegate void StrongTypedEventHandler<TSender, TEventArgs>( TSender sender, TEventArgs e ) where TEventArgs : EventArgs; ``` This is not all that different from an Action<TSender, TEventArgs>, but by making use of the `StrongTypedEventHandler`, we enforce that the TEventArgs derives from `System.EventArgs`. Next, as an example, we can make use of the StrongTypedEventHandler in a publishing class as follows: ``` class Publisher { public event StrongTypedEventHandler<Publisher, PublisherEventArgs> SomeEvent; protected void OnSomeEvent() { if (SomeEvent != null) { SomeEvent(this, new PublisherEventArgs(...)); } } } ``` The above arrangement would enable subscribers to utilize a strong-typed event handler that did not require casting: ``` class Subscriber { void SomeEventHandler(Publisher sender, PublisherEventArgs e) { if (sender.Name == "John Smith") { // ... 
} } } ``` I do fully realize that this breaks with the standard .NET event-handling pattern; however, keep in mind that contravariance would enable a subscriber to use a traditional event handling signature if desired: ``` class Subscriber { void SomeEventHandler(object sender, PublisherEventArgs e) { if (((Publisher)sender).Name == "John Smith") { // ... } } } ``` That is, if an event handler needed to subscribe to events from disparate (or perhaps unknown) object types, the handler could type the 'sender' parameter as 'object' in order to handle the full breadth of potential sender objects. Other than breaking convention (which is something that I do not take lightly, believe me) I cannot think of any downsides to this. There may be some CLS compliance issues here. This does run in Visual Basic .NET 2008 100% fine (I've tested), but I believe that the older versions of Visual Basic .NET through 2005 do not have delegate covariance and contravariance. *[Edit: I have since tested this, and it is confirmed: VB.NET 2005 and below cannot handle this, but VB.NET 2008 is 100% fine. See "Edit #2", below.]* There may be other .NET languages that also have a problem with this, I can't be sure. But I do not see myself developing for any language other than C# or Visual Basic .NET, and I do not mind restricting it to C# and VB.NET for .NET Framework 3.0 and above. (I could not imagine going back to 2.0 at this point, to be honest.) Can anyone else think of a problem with this? Or does this simply break with convention so much that it makes people's stomachs turn? Here are some related links that I've found: (1) [Event Design Guidelines [MSDN 3.5]](http://msdn.microsoft.com/en-us/library/ms229011.aspx) (2) [C# simple Event Raising - using “sender” vs. 
custom EventArgs [StackOverflow 2009]](https://stackoverflow.com/questions/809609/c-simple-event-raising-using-sender-vs-custom-eventargs) (3) [Event signature pattern in .net [StackOverflow 2008]](https://stackoverflow.com/questions/247241/event-signature-pattern-in-net) I am interested in anyone's and everyone's opinion on this... Thanks in advance, Mike **Edit #1:** This is in response to [Tommy Carlier's post](https://stackoverflow.com/questions/1046016/event-signature-in-net-using-a-strong-typed-sender/1046104#1046104) : Here's a full working example that shows that both strong-typed event handlers and the current standard event handlers that use a 'object sender' parameter can co-exist with this approach. You can copy-paste in the code and give it a run: ``` namespace csScrap.GenericEventHandling { class PublisherEventArgs : EventArgs { // ... } [SerializableAttribute] public delegate void StrongTypedEventHandler<TSender, TEventArgs>( TSender sender, TEventArgs e ) where TEventArgs : EventArgs; class Publisher { public event StrongTypedEventHandler<Publisher, PublisherEventArgs> SomeEvent; public void OnSomeEvent() { if (SomeEvent != null) { SomeEvent(this, new PublisherEventArgs()); } } } class StrongTypedSubscriber { public void SomeEventHandler(Publisher sender, PublisherEventArgs e) { MessageBox.Show("StrongTypedSubscriber.SomeEventHandler called."); } } class TraditionalSubscriber { public void SomeEventHandler(object sender, PublisherEventArgs e) { MessageBox.Show("TraditionalSubscriber.SomeEventHandler called."); } } class Tester { public static void Main() { Publisher publisher = new Publisher(); StrongTypedSubscriber strongTypedSubscriber = new StrongTypedSubscriber(); TraditionalSubscriber traditionalSubscriber = new TraditionalSubscriber(); publisher.SomeEvent += strongTypedSubscriber.SomeEventHandler; publisher.SomeEvent += traditionalSubscriber.SomeEventHandler; publisher.OnSomeEvent(); } } } ``` **Edit #2:** This is in response to [Andrew Hare's 
statement](https://stackoverflow.com/questions/1046016/event-signature-in-net-using-a-strong-typed-sender/1046041#1046041) regarding covariance and contravariance and how it applies here. Delegates in the C# language have had covariance and contravariance for so long that it just feels "intrinsic", but it's not. It might even be something that is enabled in the CLR, I don't know, but Visual Basic .NET did not get covariance and contravariance capability for its delegates until the .NET Framework 3.0 (VB.NET 2008). And as a result, Visual Basic.NET for .NET 2.0 and below would not be able to utilize this approach. For example, the above example can be translated into VB.NET as follows: ``` Namespace GenericEventHandling Class PublisherEventArgs Inherits EventArgs ' ... ' ... End Class <SerializableAttribute()> _ Public Delegate Sub StrongTypedEventHandler(Of TSender, TEventArgs As EventArgs) _ (ByVal sender As TSender, ByVal e As TEventArgs) Class Publisher Public Event SomeEvent As StrongTypedEventHandler(Of Publisher, PublisherEventArgs) Public Sub OnSomeEvent() RaiseEvent SomeEvent(Me, New PublisherEventArgs) End Sub End Class Class StrongTypedSubscriber Public Sub SomeEventHandler(ByVal sender As Publisher, ByVal e As PublisherEventArgs) MessageBox.Show("StrongTypedSubscriber.SomeEventHandler called.") End Sub End Class Class TraditionalSubscriber Public Sub SomeEventHandler(ByVal sender As Object, ByVal e As PublisherEventArgs) MessageBox.Show("TraditionalSubscriber.SomeEventHandler called.") End Sub End Class Class Tester Public Shared Sub Main() Dim publisher As Publisher = New Publisher Dim strongTypedSubscriber As StrongTypedSubscriber = New StrongTypedSubscriber Dim traditionalSubscriber As TraditionalSubscriber = New TraditionalSubscriber AddHandler publisher.SomeEvent, AddressOf strongTypedSubscriber.SomeEventHandler AddHandler publisher.SomeEvent, AddressOf traditionalSubscriber.SomeEventHandler publisher.OnSomeEvent() End Sub End Class End Namespace 
``` VB.NET 2008 can run it 100% fine. But I've now tested it on VB.NET 2005, just to be sure, and it does not compile, stating: > Method 'Public Sub > SomeEventHandler(sender As Object, e > As > vbGenericEventHandling.GenericEventHandling.PublisherEventArgs)' > does not have the same signature as > delegate 'Delegate Sub > StrongTypedEventHandler(Of TSender, > TEventArgs As System.EventArgs)(sender > As Publisher, e As > PublisherEventArgs)' Basically, delegates are invariant in VB.NET versions 2005 and below. I actually thought of this idea a couple of years ago, but VB.NET's inability to deal with this bothered me... But I've now moved solidly to C#, and VB.NET can now handle it, so, well, hence this post. **Edit: Update #3** Ok, I have been using this quite successfully for a while now. It really is a nice system. I decided to name my "StrongTypedEventHandler" as "GenericEventHandler", defined as follows: ``` [SerializableAttribute] public delegate void GenericEventHandler<TSender, TEventArgs>( TSender sender, TEventArgs e ) where TEventArgs : EventArgs; ``` Other than this renaming, I implemented it exactly as discussed above. It does trip over FxCop rule CA1009, which states: > "By convention, .NET events have two > parameters that specify the event > sender and event data. Event handler > signatures should follow this form: > void MyEventHandler( object sender, > EventArgs e). The 'sender' parameter > is always of type System.Object, even > if it is possible to employ a more > specific type. The 'e' parameter is > always of type System.EventArgs. > Events that do not provide event data > should use the System.EventHandler > delegate type. Event handlers return > void so that they can send each event > to multiple target methods. Any value > returned by a target would be lost > after the first call." Of course, we know all this, and are breaking the rules anyway. 
(All event handlers can use the standard 'object Sender' in their signature if preferred in any case -- this is a non-breaking change.) So the use of a `SuppressMessageAttribute` does the trick: ``` [SuppressMessage("Microsoft.Design", "CA1009:DeclareEventHandlersCorrectly", Justification = "Using strong-typed GenericEventHandler<TSender, TEventArgs> event handler pattern.")] ``` I hope that this approach becomes the standard at some point in the future. It really works very nicely. Thanks for all your opinions guys, I really appreciate it... Mike
It seems Microsoft has picked up on this as a similar example is now on MSDN: [Generic Delegates](http://msdn.microsoft.com/en-us/library/sx2bwtw7.aspx "Generic Delegates")
The Windows Runtime (WinRT) introduces a `TypedEventHandler<TSender, TResult>` delegate, which does exactly what your `StrongTypedEventHandler<TSender, TResult>` does, but apparently without the constraint on the `TResult` type parameter: ``` public delegate void TypedEventHandler<TSender, TResult>(TSender sender, TResult args); ``` The MSDN documentation is [here](https://learn.microsoft.com/en-us/uwp/api/windows.foundation.typedeventhandler-2).
Event Signature in .NET -- Using a Strong Typed 'Sender'?
[ "", "c#", ".net", "vb.net", "events", "" ]
Can anyone tell me how I can convert a PDF file, in C#, to a format that does not have a copy/save option? I want to show some PDFs in the browser while restricting users from downloading the PDF file.
Any file that the user views is loaded onto their computer. This includes all images, PDF files, HTML, whatever. If you want to prevent downloading of the PDF, there are Flash- and Java-based PDF viewers. They show the content to the user, but the user doesn't get the PDF file.
PDFs are rendered on the client side using the reader app on the user's machine. You can't stop them from downloading it unless you want them not to view the PDF at all. You can convert the PDF into some other format and display it on your webpage as a non-PDF document, but that's a different story.
Convert PDF to ReadOnly in browsers
[ "", "c#", "pdf", "" ]
What's the correct way to prevent invoking (creating an instance of) a C type from Python? I've considered providing a `tp_init` that raises an exception, but as I understand it that would still allow `__new__` to be called directly on the type. A C function returns instances of this type -- that's the only way instances of this type are intended to be created. **Edit:** My intention is that users of my type will get an exception if they accidentally use it wrongly. The C code is such that calling a function on an object incorrectly created from Python would crash. I realise this is unusual: all of my C extension types so far have worked nicely when instantiated from Python. My question is whether there is a usual way to provide this restriction.
Simple: leave the tp\_new slot of the type empty. ``` >>> Foo() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: cannot create 'foo.Foo' instances >>> Foo.__new__(Foo) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object.__new__(foo.Foo) is not safe, use foo.Foo.__new__() ``` If you inherit from a type other than the base object type, you will have to set tp\_new to NULL after calling PyType\_Ready().
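For comparison, here is a rough pure-Python sketch of the same restriction (all names here are invented for illustration, not part of the C API): construction fails unless it goes through a designated factory function, which stands in for the C function that is the only sanctioned way to create instances.

```python
class Foo:
    _token = object()  # capability held only by the factory below

    def __new__(cls, token=None):
        # Plays the role of the empty tp_new slot: plain Foo() calls fail.
        if token is not cls._token:
            raise TypeError("cannot create 'Foo' instances")
        return super().__new__(cls)

    def __init__(self, token=None):
        # Accept the token so the factory's constructor call is well-formed.
        pass


def make_foo():
    # Stands in for the C function that creates instances of the type.
    return Foo(Foo._token)
```

With this, both `Foo()` and `Foo.__new__(Foo)` raise `TypeError`, while `make_foo()` succeeds.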
Don't prevent them from doing it. "[We're all consenting adults here](http://mail.python.org/pipermail/tutor/2003-October/025932.html)." Nobody is going to do it unless they have a reason, and if they have such a reason then you shouldn't stop them just because you didn't anticipate every possible use of your type.
Preventing invoking C types from Python
[ "", "python", "cpython", "" ]
Is it possible to assign an `out`/`ref` parameter using Moq (3.0+)? I've looked at using `Callback()`, but `Action<>` does not support ref parameters because it's based on generics. I'd also preferably like to put a constraint (`It.Is`) on the input of the `ref` parameter, though I can do that in the callback. I know that Rhino Mocks supports this functionality, but the project I'm working on is already using Moq.
Moq version 4.8 and later has much improved support for by-ref parameters: ``` public interface IGobbler { bool Gobble(ref int amount); } delegate void GobbleCallback(ref int amount); // needed for Callback delegate bool GobbleReturns(ref int amount); // needed for Returns var mock = new Mock<IGobbler>(); mock.Setup(m => m.Gobble(ref It.Ref<int>.IsAny)) // match any value passed by-ref .Callback(new GobbleCallback((ref int amount) => { if (amount > 0) { Console.WriteLine("Gobbling..."); amount -= 1; } })) .Returns(new GobbleReturns((ref int amount) => amount > 0)); int a = 5; bool gobbleSomeMore = true; while (gobbleSomeMore) { gobbleSomeMore = mock.Object.Gobble(ref a); } ``` The same pattern works for `out` parameters. `It.Ref<T>.IsAny` also works for C# 7 `in` parameters (since they are also by-ref).
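For what it's worth, the callback-that-mutates idea above can be sketched outside Moq as well; here it is with Python's `unittest.mock`, where a one-element list stands in for the by-ref `int` (Python has no real `ref`/`out` parameters, so this is only an analogy, and all names are made up):

```python
from unittest import mock

gobbler = mock.Mock()

def gobble(amount_box):
    # "Callback" part: mutate the caller-visible value.
    if amount_box[0] > 0:
        amount_box[0] -= 1
    # "Returns" part: report whether there is anything left to gobble.
    return amount_box[0] > 0

gobbler.gobble.side_effect = gobble

box = [3]
while gobbler.gobble(box):
    pass  # loop until the mock reports the amount is exhausted
```

A single `side_effect` function covers what Moq splits into `Callback` and `Returns`.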
For 'out', the following seems to work for me. ``` public interface IService { void DoSomething(out string a); } [TestMethod] public void Test() { var service = new Mock<IService>(); var expectedValue = "value"; service.Setup(s => s.DoSomething(out expectedValue)); string actualValue; service.Object.DoSomething(out actualValue); Assert.AreEqual(expectedValue, actualValue); } ``` I'm guessing that Moq looks at the value of 'expectedValue' when you call Setup and remembers it. For `ref`, I'm looking for an answer also. I found the following QuickStart guide useful: <https://github.com/Moq/moq4/wiki/Quickstart>
Assigning out/ref parameters in Moq
[ "", "c#", "parameters", "moq", "ref", "out", "" ]
I'm currently trying to build a more or less complete set of unit tests for a small library. Since we want to allow different implementations to exist, we want this set of tests to be (a) generic, so that we can re-use it to test the different implementations, and (b) as complete as possible. For the (b) part I'd like to know if there is any best practice out there for testing enum types. So for example I have an enum as follows: ``` public enum Month { January, February, ... December; } ``` Here I want to ensure that all enum types really exist. Is that even necessary? Currently I'm using Hamcrest's `assertThat` like in the following example: ``` assertThat(Month.January, is(notNullValue())); ``` A missing "January" enum would result in a compile-time error which one can fix by creating the missing enum type. I'm using Java here but I don't mind if your answer is for a different language.. **Edit:** As mkato and Mark Heath have both pointed out, testing enums may not be necessary since the compiler won't compile when you are using an enum type which isn't there. But I still want to test those enums since we want to build a separate TCK-like test.jar which will run the same test on different implementations. So my question was more meant to be: what is the best way to test enum types? After thinking about it a bit more I changed the Hamcrest statement above to: ``` assertThat(Month.valueOf("January"), is(notNullValue())); ``` This statement now throws an NPE when January is not there (yet). Is there anything wrong with this approach?
For enums, I test them only when they actually have methods in them. If it's a pure value-only enum like your example, I'd say don't bother. But since you're keen on testing it, going with your second option is much better than the first. The problem with the first is that if you use an IDE, any renaming on the enums would also rename the ones in your test class.
I agree with [aberrant80](https://stackoverflow.com/questions/1079700/how-to-test-enum-types/1103800#1103800). > For enums, I test them only when they actually have methods in them. > If it's a pure value-only enum like your example, I'd say don't > bother. > > But since you're keen on testing it, going with your second option is > much better than the first. The problem with the first is that if you > use an IDE, any renaming on the enums would also rename the ones in > your test class. I would expand on it by adding that unit testing an Enum can be very useful. If you work in a large code base, build times start to mount up and a unit test can be a faster way to verify functionality (tests only build their dependencies). Another really big advantage is that other developers cannot change the functionality of your code unintentionally (a huge problem with very large teams). And as with all Test-Driven Development, tests around an Enum's methods reduce the number of bugs in your code base. **Simple Example** ``` public enum Multiplier { DOUBLE(2.0), TRIPLE(3.0); private final double multiplier; Multiplier(double multiplier) { this.multiplier = multiplier; } Double applyMultiplier(Double value) { return multiplier * value; } } public class MultiplierTest { @Test public void should() { assertThat(Multiplier.DOUBLE.applyMultiplier(1.0), is(2.0)); assertThat(Multiplier.TRIPLE.applyMultiplier(1.0), is(3.0)); } } ```
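Since the question explicitly welcomes other languages: the same TCK-style existence check is easy to express in Python, where the lookup `Month[name]` plays the role of Java's `valueOf`. This `Month` enum is just a stand-in sketch for whatever the implementations under test actually provide:

```python
import enum

class Month(enum.Enum):
    JANUARY = 1
    FEBRUARY = 2
    MARCH = 3
    APRIL = 4
    MAY = 5
    JUNE = 6
    JULY = 7
    AUGUST = 8
    SEPTEMBER = 9
    OCTOBER = 10
    NOVEMBER = 11
    DECEMBER = 12

EXPECTED = [
    "JANUARY", "FEBRUARY", "MARCH", "APRIL", "MAY", "JUNE",
    "JULY", "AUGUST", "SEPTEMBER", "OCTOBER", "NOVEMBER", "DECEMBER",
]

def check_months():
    # Equivalent of assertThat(Month.valueOf(name), is(notNullValue()))
    # for every expected constant; a missing name raises KeyError here.
    for name in EXPECTED:
        assert Month[name] is not None, "missing enum constant: " + name
    # Also catch accidental extras, which a per-name lookup cannot:
    assert {m.name for m in Month} == set(EXPECTED)
```

The final set comparison is what makes this stricter than looking constants up one at a time.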
How to test enum types?
[ "", "java", "unit-testing", "enums", "hamcrest", "" ]
Let's assume a simple Spring MVC Controller that receives the ID of a domain object. The Controller should call a service that should do something with that domain object. Where do you "convert" the ID of the domain object into the domain object by loading it from the database? This should not be done by the Controller, so the service method interface has to accept the **ID** of the **domain object** instead of the domain object itself. But the interface of the service would be nicer if it took the domain object as a parameter. What are your thoughts about this common use case? How do you solve this?
The controller should pass the id down into the service layer and then get back whatever is needed to render the rest of the HTTP response. So - ``` Map<String,Object> doGet (@RequestParam("id") int id) { return serviceLayer.getStuffByDomainObjectId(id); } ``` Anything else is just going to be polluting the web layer, which shouldn't care at all about persistence. The entire purpose of the service layer is to get domain objects and tell them to perform their business logic. So, a database call should reside in the service layer as such - ``` public Map<String,Object> getStuffByDomainObjectId(int id) { DomainObject domainObject = dao.getDomainObjectById(id); domainObject.businessLogicMethod(); return domainObject.map(); } ```
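Language aside, the layering this answer describes can be sketched in a few lines (all class and method names below are invented for illustration; they are not Spring APIs):

```python
class DomainObject:
    def __init__(self, name):
        self.name = name

    def business_logic(self):
        # The domain behaviour the service tells the object to perform.
        return "processed " + self.name


class Dao:
    def __init__(self, store):
        self._store = store  # stand-in for the database

    def get_domain_object_by_id(self, obj_id):
        return self._store[obj_id]


class Service:
    def __init__(self, dao):
        self._dao = dao

    def get_stuff_by_domain_object_id(self, obj_id):
        # The id-to-domain-object conversion lives here, not in the controller.
        domain_object = self._dao.get_domain_object_by_id(obj_id)
        return {"id": obj_id, "result": domain_object.business_logic()}


def controller(service, obj_id):
    # Web layer: passes the id down and knows nothing about persistence.
    return service.get_stuff_by_domain_object_id(obj_id)
```

The controller only forwards the id; only the service layer touches the DAO.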
In a project of mine I used the service layer: ``` class ProductService { void removeById(long id); } ```
Spring MVC Domain Object handling Best Practice
[ "", "java", "model-view-controller", "hibernate", "spring", "dao", "" ]
My ListView is set up in the details view with the following column headers: Image Name || Image Location || Image Size || Image Preview I would like to know if there is a way to draw an image in the 4th column there. The only way I know is to set ``` this.listView1.OwnerDraw = true; this.listView1.DrawColumnHeader += new System.Windows.Forms.DrawListViewColumnHeaderEventHandler(listView1_DrawColumnHeader); this.listView1.DrawItem += new System.Windows.Forms.DrawListViewItemEventHandler(listView1_DrawItem); this.listView1.DrawSubItem += new System.Windows.Forms.DrawListViewSubItemEventHandler(listView1_DrawSubItem); ``` The problem with this is I have to handle ALL the ListView drawing myself... I was wondering if there is a better way to draw an image to a subItem, or if there is a way to only handle the DrawSubItem event?
In `listView1_DrawColumnHeader` and `listView1_DrawItem` event handlers you should put this ``` e.DrawDefault = true; ``` It will use default drawing implementation for columns and items, all you have to do is write your own implementation only for subitems.
I just ran into this. [@zidane's answer](https://stackoverflow.com/questions/1050185/drawing-an-image-to-a-subitem-in-the-listview/1259074#1259074) is nearly correct. I want to post what actually needs to be done so people reading back later don't have to figure this out themselves. --- Only handle `DrawColumnHeader` using `e.DrawDefault = true;` and the subitem drawing. In fact, if you set `e.DrawDefault = true;` in the `DrawItem` event, the `DrawSubItem` event never fires, presumably on the assumption that you want to draw the whole row and don't care about the subitems. The only real code is in `DrawSubItem`, using this basic construction: ``` if (/* condition to determine if you want to draw this subitem */) { // Draw it } else e.DrawDefault = true; ```
Drawing an Image to a subItem in the ListView
[ "", "c#", "winforms", "listview", "" ]
Currently I'm trying to install PHP 5.3.0 on some Linux testing server. As we've been urgently waiting for ext/intl, we want to check out the features it provides. I'm running `configure` successfully with the following arguments ``` ./configure --with-apxs2=/usr/local/apache2/bin/apxs --prefix=/usr/local/php --with-zlib-dir=/usr/local/zlib --with-imap=/.../imap-2006k --with-imap-ssl --with-openssl=shared --with-iconv=shared --with-zlib=shared --with-curl=shared --with-curlwrappers --enable-exif --with-ldap=shared,/usr/local/openldap --with-ldap-sasl --enable-mbstring=shared --with-mcrypt --enable-soap=shared --enable-sockets --enable-zip=shared --enable-pdo=shared --with-pdo-sqlite=shared --with-sqlite=shared --with-mysql=shared,/usr/local/mysql --with-pdo-mysql=shared,/usr/local/mysql --with-mysqli=shared,/usr/local/mysql/bin/mysql_config --with-mhash=shared,/usr/local/mhash --with-libxml-dir=/usr/local/libxml2 --with-xsl=shared,/usr/local/libxslt --enable-xmlreader=shared --enable-xmlwriter=shared --with-gmp=shared --with-icu-dir=/usr/local/icu --enable-intl ``` ICU 4.2 is located at `/usr/local/icu` and PHP 5.2.9 compiled flawlessly (without the intl- and icu-options). But when I compile the PHP 5.3.0 source I get a whole lot of error messages of the kind ``` ext/intl/grapheme/.libs/grapheme_util.o(.text+0xbab):/.../php-5.3.0/ext/intl/grapheme/grapheme_util.c:208: undefined reference to `ubrk_close_4_2' ``` I'm quite sure it has something to do with not finding the shared libraries. Setting ``` export LD_LIBRARY_PATH=/usr/local/icu/lib ``` doesn't help. Can anyone point me to some solution? I'm rather clueless - and I'm no real expert in these things...
**EDIT:** I just rechecked and made sure that the various icu-libraries and the respective soft links are all located in `/usr/local/icu/lib`: ``` lrwxrwxrwx 1 root root 20 Jul 1 09:56 libicudata.so -> libicudata.so.42.0.1 lrwxrwxrwx 1 root root 20 Jul 1 09:56 libicudata.so.42 -> libicudata.so.42.0.1 -rw-r--r-- 1 root root 16015140 Jul 1 09:56 libicudata.so.42.0.1 lrwxrwxrwx 1 root root 20 Jul 1 09:56 libicui18n.so -> libicui18n.so.42.0.1 lrwxrwxrwx 1 root root 20 Jul 1 09:56 libicui18n.so.42 -> libicui18n.so.42.0.1 -rwxr-xr-x 1 root root 2454770 Jul 1 09:56 libicui18n.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicuio.so -> libicuio.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicuio.so.42 -> libicuio.so.42.0.1 -rwxr-xr-x 1 root root 65299 Jul 1 09:56 libicuio.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicule.so -> libicule.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicule.so.42 -> libicule.so.42.0.1 -rwxr-xr-x 1 root root 356125 Jul 1 09:56 libicule.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libiculx.so -> libiculx.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libiculx.so.42 -> libiculx.so.42.0.1 -rwxr-xr-x 1 root root 75110 Jul 1 09:56 libiculx.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicutu.so -> libicutu.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicutu.so.42 -> libicutu.so.42.0.1 -rwxr-xr-x 1 root root 159330 Jul 1 09:56 libicutu.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicuuc.so -> libicuuc.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicuuc.so.42 -> libicuuc.so.42.0.1 -rwxr-xr-x 1 root root 1660769 Jul 1 09:56 libicuuc.so.42.0.1 ``` `make check` runs tons of tests - all of them successfully: ``` [All tests passed successfully...] 
Elapsed Time: 00:00:25.000 make[2]: Leaving directory `/.../icu-4.2/source/test/cintltst' --------------- ALL TESTS SUMMARY: All tests OK: testdata intltest iotest cintltst make[1]: Leaving directory `/.../icu-4.2/source/test' make[1]: Entering directory `/.../icu-4.2/source' verifying that icu-config --selfcheck can operate verifying that make -f Makefile.inc selfcheck can operate PASS: config selfcheck OK make[1]: Leaving directory `/.../icu-4.2/source' ``` **EDIT: answers to VolkerK's [questions](https://stackoverflow.com/questions/1078412/trouble-installing-php-5-3-0-with-intl-support/1100891#1100891)** I installed ICU 4.2 from source and as I wrote above the build process, the unit-tests and the installation all went fine. ``` /usr/local/icu/bin/icu-config --version 4.2.0.1 /usr/local/icu/bin/icu-config --prefix /usr/local/icu /usr/local/icu/bin/icu-config --cppflags-searchpath -I/usr/local/icu/include /usr/local/icu/bin/icu-config --ldflags --ldflags-icuio -lpthread -lm -L/usr/local/icu/lib -licui18n -licuuc -licudata -lpthread -lm -licuio objdump -C /usr/local/icu/lib/libicuuc.so.42.0.1 // doesn't work because of unrecognized argument -C ``` **EDIT regarding VolkerK's comment:** No, there has been no switch of the compiler involved - I ran both build processes directly one after the other. `objdump /usr/local/icu/lib/libicuuc.so.42.0.1` doesn't work either but I managed to run ``` objdump -t /usr/local/icu/lib/libicuuc.so.42.0.1 | grep ubrk_close 00000000000d2484 g F .text 000000000000002d ubrk_close_4_2 ``` Don't know if this information can help. **EDIT on VolkerK's [edit1 and edit2](https://stackoverflow.com/questions/1078412/trouble-installing-php-5-3-0-with-intl-support/1100891#1100891):** I think there's the rub - there is indeed another icu-version on the sytem; at least in parts (there is no other icu-config for example; only the one in `/usr/local/icu/bin`). 
`gcc -lpthread -lm -L/usr/local/icu/lib -licui18n -licuuc -licudata -lpthread -lm -licuio -print-file-name=libicuuc.so` returns ``` /usr/lib64/gcc-lib/x86_64-suse-linux/3.3.5/../../../../lib64/libicuuc.so ``` while `gcc -lpthread -lm -L/usr/local/icu/lib -licui18n -licuuc -licudata -lpthread -lm -licuio -print-file-name=libicuuc.so.42` returns ``` libicuuc.so.42 ``` So the problem seems to be: how do I get the new lib path into the build process? By the way, I learned a lot from your answers - thanks to all of you. I also tried to compile your simple test program - and it also fails with the same *undefined reference* error, most likely for the same reason PHP won't compile. **How can I get rid of the reference to the old icu-library in the lib-path, or how do I prioritize the new icu-library-path?**
The problem seems to be that the binary is linked against the wrong (shared) library files. First a long, boring explanation of what I think the problem is. Keep in mind that I'm not a linux expert. I really want you to understand my train of thought so that you can decide if it's feasible and/or where I'm wrong. The first (crude) solution is easily reversible. Run another ./configure and all changes are history. I think it's pretty safe. Why do you have icu 4.2-specific dependencies in the first place? Let's take a look at a source file of php's intl extension (ext/intl/grapheme/grapheme_string.c) ``` #include <unicode/ubrk.h> ... PHP_FUNCTION(grapheme_substr) { ... ubrk_close(bi); ... ``` Until now there's no version-specific code. grapheme_string.c looks the same whether you use icu 3.4 or icu 4.2. Where does the ubrk_close_4_2 come from? When you run the "./configure ... --with-icu-dir=/usr/local/icu" command the file ext/intl/config.m4 is executed. In this process icu-config is called to get the include path and library files required to build php. You provided a path to your icu installation, which boils down to ``` ICU_CONFIG="$PHP_ICU_DIR/bin/icu-config" ICU_INCS=`$ICU_CONFIG --cppflags-searchpath` ICU_LIBS=`$ICU_CONFIG --ldflags --ldflags-icuio` ``` being executed. You've tried icu-config yourself, so you know what it outputs and therefore what ICU_INCS and ICU_LIBS contain. ICU_INCS and ICU_LIBS are passed to gcc when the files are compiled/linked. gcc (apparently) didn't find unicode/ubrk.h in its default directory, so it searched for the file in the additional include directories provided by ICU_INCS, where it found the icu 4.2 include files. unicode/ubrk.h includes unicode/utypes.h which then includes unicode/urename.h - and again the icu 4.2 header files are included. In this case unicode/urename.h contains #define ubrk_close ubrk_close_4_2. When the preprocessor is done ubrk_close(bi) has been replaced by ubrk_close_4_2(bi).
``` PHP_FUNCTION(grapheme_substr) { ... ubrk_close_4_2(bi); ... ``` Now you have a version specific dependency, a reference to ubrk\_close\_4\_2 that some library has to resolve. So the include part did work. It did indeed find your icu 4.2 version and used its header files. So far so good. Now for the linker part. In your case ICU\_LIBS contains ``` -lpthread -lm -L/usr/local/icu/lib -licui18n -licuuc -licudata -lpthread -lm -licuio ``` -licuuc tells gcc "find me a library called 'icuuc' and use it". gcc then searches the LIB paths for files with a certain naming scheme that match "icuuc". In this case libicuuc.so. Note that it doesn't look for a version specific file name, just libicuuc.so. Once it has found such a file it won't look for another one. First gcc searches in its default paths. Then it searches the additional library paths - in the order they are provided to gcc. I.e. gcc -L/usr/lib -L/usr/local/lib -licuuc will find /usr/lib/libicuuc.so if there is such a file and not /usr/local/lib/libicuuc.so (anymore). Meaning that either the default path or the order of the library path directives may be the cause of your trouble. When your program is linked against shared objects a "special" loader is added to the code and the name of the shared object is stored in your program (at link time). Every time your program is executed, first the (runtime) loader searches for the shared object (by its name), loads the code and replaces some stub jump addresses. The shared object can "tell" the linker (i.e. at link time) the name of the shared object the loader should look for (SONAME property) at runtime. Take a look at the directory listing you provided in your question text ``` lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicuuc.so -> libicuuc.so.42.0.1 lrwxrwxrwx 1 root root 18 Jul 1 09:56 libicuuc.so.42 -> libicuuc.so.42.0.1 -rwxr-xr-x 1 root root 1660769 Jul 1 09:56 libicuuc.so.42.0.1 ``` libicuuc.so, that's the file gcc is looking for when -licuuc is provided. 
The linker follows the symlink and uses libicuuc.so.42.0.1. This file "tells" the linker that the (runtime) loader should look for libicuuc.so.42, see <http://userguide.icu-project.org/packaging#TOC-ICU-Versions>. The loader will follow the symlink and load libicuuc.so.42.0.1, or if there is another bugfix libicuuc.so.42.0.2, libicuuc.so.42.0.3, whatever libicuuc.so.42 is pointing to. libicuuc.so.42 will/should always point to an actual shared object that exports the icu 4.2 symbols. The code may have been changed/fixed but the exported symbols stay the same. Your problem now is that gcc doesn't find libicuuc.so->libicuuc.so.42.0.1 but (let's say) libicuuc.so->libicuuc.so.34.x.y. This libicuuc.so.34.x.y doesn't export the icu 4.2 symbols, it doesn't provide ubrk_close_4_2 but ubrk_close_3_4. So, no ubrk_close_4_2 -> unresolved reference error. First "solution" (crude): Let ./configure do its magic and then ... just edit the Makefile. Open the Makefile (in the source top directory) in a text editor, search for INTL_SHARED_LIBADD= and replace -licui18n -licuuc -licudata -licuio in that line by /usr/local/icu/lib/libicui18n.so.42 /usr/local/icu/lib/libicuuc.so.42 /usr/local/icu/lib/libicudata.so.42 /usr/local/icu/lib/libicuio.so.42 (leave any -lm -pthread ... as they are). Compile again. This "tells" the gcc/linker not to search for the .so files but to use the specific ones. The result should be the same as if your library path was working (because of SONAME). But every time you run ./configure you have to apply the "fix" again. Second solution: Remove the other libicuXY.so symlinks (that's where the word "backup" comes to mind), only keep the libicuXY.so->libicuXY.so.42.0.1 links. If there are no other libicuuc.so->libicuuc.so.34.x.y links the gcc/linker can't find them and won't link against the old versions.
Again because of the SONAME property binaries that have already been linked against the old version will still function because "their" loader will search for the (still existing) libicuXY.so.34 files. This will affect all subsequent linker runs, i.e. if you build another project that uses the older include files you will run into the same problem the other way around. The header files and the shared objects (at link time) must match.
When you call ``` export LD_LIBRARY_PATH=/usr/local/icu/lib ``` you are overwriting the currently set path. So it could be that it will find ICU but won't find any of the other libraries it needs. Try this instead: ``` export LD_LIBRARY_PATH=/usr/local/icu/lib:${LD_LIBRARY_PATH} ``` If that doesn't help I can think of two things to try: 1. Is the library in the right place? Perhaps your installation moved it somewhere else, like `/usr/local/lib/icu` instead? 2. Does ICU work? Try the "make check" target for ICU. Try compiling/running the test suite included with ICU, or try to compile and run a trivial ICU example. [This presentation (PPT)](http://icu-project.org/docs/papers/GettingStartedwithICU_iuc28.ppt) has a few trivial examples. **EDIT** I think I figured it out. It looks like php-intl only works with libicu 3.6 or 3.8. I have googled every Linux distro shipping php-intl and they all depend on libicu 3.8 even when they are also shipping libicu 4.0 or later. The [last changelog before intl became part of php itself](http://pecl.php.net/package-changelog.php?package=intl&release=1.0.2) indicates the same. I suggest installing libicu 3.8 and trying again.
Trouble installing PHP 5.3.0 with intl-support
[ "", "php", "linux", "installation", "intl", "" ]
I have an application (WinForms exe) that I run several times. Does this mean that the instances share assemblies, or does each instance have its own copy of the assemblies? When I run the app, it uses about 30MB (in Task Manager), and when I run another copy of the app it uses another 30MB. How do I work out how much memory it is really using, and can I reduce the overall memory usage when I run several instances? Regards JD.
It's complicated. Start by reading my recent article on virtual memory. <http://blogs.msdn.com/ericlippert/archive/2009/06/08/out-of-memory-does-not-refer-to-physical-memory.aspx> OK, now that you have some understanding of how virtual memory works, you can understand how the operating system loads DLLs into memory. Suppose you have two processes that both need a particular page of Foo.DLL. The operating system will load that page into *physical* memory ONCE and then map that physical page into the virtual space of both processes. So the amount of physical memory that is being used is, say, a 4KB page. But that 4KB shows up in BOTH processes. It is likely that most of your 30MB is shared physical memory. The way to find out is to be more sophisticated about your use of task manager. You want to add some columns there and look at both "Working Set" and "Private Working Set". "Working Set" is the total number of pages, private AND shared, currently in use by that process. "Private Working Set" is the number of those which are not shared. To lower your memory usage -- well, first off, start by understanding why you care. Machines have plenty of memory these days and 30MB is a relatively tiny amount of memory. Unless you can find a compelling customer-focused reason to work on this, then work on something else, like making your program faster or adding more features. Assuming that you do have a reason to care, get yourself some tools -- particularly, memory profilers. The .NET memory profiler can tell you where all your allocations are and how big they are.
If you don't have static resources you could, instead of using a second process, add a new thread whenever the application is started and an instance of it already exists. This is what Firefox does.
Shared memory and running multiple copies of an executable
[ "", "c#", "" ]
I'm trying to write a plugin system to provide some extensibility to an application of mine so someone can write plugins for the application without touching the main application's code (and risking breaking something). I've got the base "IPlugin" interface written (atm, nothing is implemented yet).

Here is how I'm loading:

```
public static void Load()
{
    // rawr: http://www.codeproject.com/KB/cs/c__plugin_architecture.aspx
    String[] pluginFiles = Directory.GetFiles(Plugins.PluginsDirectory, "*.dll");

    foreach (var plugin in pluginFiles)
    {
        Type objType = null;
        try
        {
            //Assembly.GetExecutingAssembly().GetName().Name
            MessageBox.Show(Directory.GetCurrentDirectory());
            Assembly asm = Assembly.Load(plugin);
            if (asm != null)
            {
                objType = asm.GetType(asm.FullName);
                if (objType != null)
                {
                    if (typeof(IPlugin).IsAssignableFrom(objType))
                    {
                        MessageBox.Show(Directory.GetCurrentDirectory());
                        IPlugin ipi = (IPlugin)Activator.CreateInstance(objType);
                        ipi.Host = Plugins.m_PluginsHost;
                        ipi.Assembly = asm;
                    }
                }
            }
        }
        catch (Exception e)
        {
            MessageBox.Show(e.ToString(), "Unhandled Exception! (Please Report!)",
                System.Windows.Forms.MessageBoxButtons.OK,
                System.Windows.Forms.MessageBoxIcon.Information);
        }
    }
}
```

A friend tried to help but I really didn't understand what was wrong.

The folder structure for plugins is the following:

\
\Plugins\

All plugins reference a .dll called "Lab.Core.dll" in the [root] directory, and it is not present in the Plugins directory because of duplicate references being loaded. The plugin system is loaded from Lab.Core.dll, which is also referenced by my executable. Type "IPlugin" is in Lab.Core.dll as well. Lab.Core.dll is, exactly as named, the core of my application.

**EDIT:** Question: Why/What is that exception I'm getting, and how could I go about fixing it?

**FINAL EDIT:** Ok so I decided to re-write it after looking at some source code a friend wrote for a TF2 regulator.
Here's what I got and it works:

```
public class TestPlugin : IPlugin
{
    #region Constructor

    public TestPlugin()
    {
        //
    }

    #endregion

    #region IPlugin Members

    public String Name
    {
        get { return "Test Plugin"; }
    }

    public String Version
    {
        get { return "1.0.0"; }
    }

    public String Author
    {
        get { return "Zack"; }
    }

    public Boolean OnLoad()
    {
        MessageBox.Show("Loaded!");
        return true;
    }

    public Boolean OnAllLoaded()
    {
        MessageBox.Show("All loaded!");
        return true;
    }

    #endregion
}

public static void Load(String file)
{
    if (!File.Exists(file) || !file.EndsWith(".dll", true, null))
        return;

    Assembly asm = null;
    try
    {
        asm = Assembly.LoadFile(file);
    }
    catch (Exception)
    {
        // unable to load
        return;
    }

    Type pluginInfo = null;
    try
    {
        Type[] types = asm.GetTypes();
        Assembly core = AppDomain.CurrentDomain.GetAssemblies().Single(x => x.GetName().Name.Equals("Lab.Core"));
        Type type = core.GetType("Lab.Core.IPlugin");

        foreach (var t in types)
            if (type.IsAssignableFrom((Type)t))
            {
                pluginInfo = t;
                break;
            }

        if (pluginInfo != null)
        {
            Object o = Activator.CreateInstance(pluginInfo);
            IPlugin plugin = (IPlugin)o;
            Plugins.Register(plugin);
        }
    }
    catch (Exception)
    {
    }
}

public static void LoadAll()
{
    String[] files = Directory.GetFiles("./Plugins/", "*.dll");
    foreach (var s in files)
        Load(Path.Combine(Environment.CurrentDirectory, s));

    for (Int32 i = 0; i < Plugins.List.Count; ++i)
    {
        IPlugin p = Plugins.List.ElementAt(i);
        try
        {
            if (!p.OnAllLoaded())
            {
                Plugins.List.RemoveAt(i);
                --i;
            }
        }
        catch (Exception)
        {
            Plugins.List.RemoveAt(i);
            --i;
        }
    }
}
```
It sounds like you have a circular reference. You said your plugins reference Lab.Core.DLL, but you also say the plugins are loaded from Lab.Core.DLL. Am I misunderstanding what is happening here?

EDIT: OK, now that you have added your question to the question...

You need to have Lab.Core.DLL accessible to the plugin being loaded since it is a dependency. Normally that would mean having it in the same directory or in the GAC. I suspect there are deeper design issues at play here, but this is your immediate problem.
> The Managed Extensibility Framework (MEF) is a new library in .NET that enables greater reuse of applications and components. Using MEF, .NET applications can make the shift from being statically compiled to dynamically composed. If you are building extensible applications, extensible frameworks and application extensions, then MEF is for you.

~~<http://www.codeplex.com/MEF>~~

Edit: CodePlex is going away - the code has been moved to Github for archival purposes only: <https://github.com/MicrosoftArchive/mef>

MEF is now a part of the Microsoft .NET Framework, with types primarily under the System.Composition namespaces.

There are two versions of MEF:

* System.ComponentModel.Composition, which has shipped with .NET 4.0 and higher. This provides the standard extension model that has been used in Visual Studio. The documentation for this version of MEF can be found [here](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.composition)
* System.Composition is a lightweight version of MEF, which has been optimized for static composition scenarios and provides faster compositions. It is also the only version of MEF that is available as a portable class library and can be used on phone, store, desktop and web applications. This version of MEF is available via [NuGet](https://www.nuget.org/packages/microsoft.composition) and its documentation is available [here](https://www.nuget.org/packages/microsoft.composition)
Writing C# Plugin System
[ "", "c#", "plugins", "extensibility", "" ]
I have a form. This form has a user control. This user control has a panel and a context menu. The context menu is not attached to the panel. There are other controls that are dynamically created and added to this panel. One of those controls is a button. When you click this button, I set the ContextMenuStrip property to my context menu.

My problem is that I need to read the items in that context menu prior to there being the opportunity to attach the context menu to the button. Each time a form is loaded, I iterate through all the child controls of the form. If a control has children, I iterate through those, and so on... I can't seem to get at the context menu that is unassigned, so to speak. It has not been attached to any control, so it does not appear to be a child control of any controls on the form. myContextMenu is never added to the user control like this.Controls.Add(myContextMenu). How can that context menu not be nested in the form's control collection? How can I get at that context menu?

Here is the designer code:

```
private System.Windows.Forms.ContextMenuStrip myContextMenu;

void InitializeComponent()
{
    this.myContextMenu = new System.Windows.Forms.ContextMenuStrip(this.components);
    this.myContextMenu.Items.AddRange(new System.Windows.Forms.ToolStripItem[] {
        this.myToolStripMenuItem1,
        this.myToolStripMenuItem2});
    this.myContextMenu.Name = "myContextMenu";
    this.myContextMenu.Size = new System.Drawing.Size(158, 92);
}
```

**Update**

The control iteration happens in a base class from which all forms in my application derive. There is a private components object that the myContextMenu is added to. I imagine this is there so you can see the context menu in design view when it's not attached to a control. Perhaps I could leverage this?

```
private System.ComponentModel.IContainer components = null;

this.myContextMenu = new System.Windows.Forms.ContextMenuStrip(this.components);
```
Create a custom context menu (SecureContextMenu in my case) that derives from ContextMenuStrip. Handle the Opening event and iterate through the Items collection, disabling the items that are not authorized. Be sure to create a HasBeenOpened property and set it to true the first time the event fires so that you don't have to keep checking the same items every time the context menu is opened. Use the SecureContextMenu everywhere you want context menu items checked against the list of authorized items.
As you correctly observed, myContextMenu is not added to the Controls collection. Control has a ContextMenuStrip property which you should check.

```
public void FindContextMenuStrip(Control input)
{
    foreach (Control control in input.Controls)
    {
        if (control.ContextMenuStrip != null)
            DoSomethingWithContextMenuStrip(control.ContextMenuStrip);

        if (control.Controls.Count > 0)
            FindContextMenuStrip(control);
    }
}
```

Put the relevant code in the DoSomethingWithContextMenuStrip method.

**EDIT:** I saw your comment where you specified what you wanted to do with the ContextMenuStrip. How about creating a method in the base class which takes user details and creates a context menu strip?

```
public ContextMenuStrip GetContextMenuStripForUser(User user)
{
    // code to create the context menu strip, with only those items enabled
    // for which the user has access.
}
```

In your final form, use this method to get the ContextMenuStrip.
Can't find context menu in control collection
[ "", "c#", "winforms", "" ]
```
var UI$Contract$ddlForm_change = function() {
    //'this' is currently the drop down that fires the event
    // My question is can I change the context so "this" represents another object?
    this = SomeObject;

    // then call methods on the new "this"
    this.someMethod(someParam);
};
```

is this possible?
No, it's not possible. You can call a method with a specified value for *this* (using **`method.apply()`**/**`method.call()`**) but you cannot re-assign the keyword, **`this`**.
You can't change what `this` refers to from *inside* the function. However, you can *call* a function in a specific context - so that `this` refers to a specific object - by using `call` or `apply`.
Can I change the context of javascript "this"?
[ "", "javascript", "this", "dom-events", "" ]
I am currently completing an application that was started by someone else. He is using the app.config for some settings, and a custom xml file for other parts. This drives me nuts, and I want to consolidate the configuration into one file. But I am not certain whether to move everything into the app.config and throw out the custom xml file, or move everything into the other file and forget about app.config.

I can see two arguments, one for each option:

1. Using the standard way provided by Visual Studio is easier to maintain for somebody else than me.
2. I can validate the custom file against my own xml schema, thus outsourcing a lot of tests if the config file has all required data etc.

But I'm certain that there are a lot more things than I can think of right now. That's why I'm asking: **What are the advantages or disadvantages of using app.config for storing your configuration versus the 'traditional' configuration file?**
The main benefit of using the app.config is that it is the default, supported way for a .NET app to store its config. Anyone using that app or inheriting it from you some day will thank you for using established standards instead of "rolling your own". Also, the .NET framework has support for using, writing, creating, modifying the app.config file - if you go with your own scheme, you'll have to re-invent the wheel many times over. So I would definitely recommend using app.config - it's **THE** way to do configuration in .NET and a widely accepted and well-supported standard. Marc
Splitting the configuration into different files is very useful if the lifecycle of your project has it moving from one environment to another.

For example, when the developers are working on the code, you may want the app to point to the 'devsql' box. When it's time to be QA'd, the code gets deployed to a staging server, and you want the app to point to 'stagingsql'.

If you keep all the configs in app.config, and a settings change has been made to the dev version of app.config, it will get copied up and clobber the staging version - now your QA people are pointing to the dev database. By keeping 'database.xml' separate from 'app.config', you can allow for differences between the various environments, but still allow changes to config files to flow from each environment to the next without worrying about overwriting a customization.
Why use app.config to store config data?
[ "", "c#", "configuration", "settings", "" ]
I am working through a bug. In recreating the bug for the following sample I was able to determine why the problem is happening. But I am stuck for a better solution. So given the following program:

```
public interface IFoo<T>
{
    T OutputType(T param);
}

class Foo : IFoo<Foo>
{
    public virtual Foo OutputType(Foo param)
    {
        Console.WriteLine("foo");
        return param;
    }
}

class Bar : Foo, IFoo<Bar>
{
    public virtual Bar OutputType(Bar param)
    {
        Console.WriteLine("bar");
        return param;
    }
}

class Program
{
    static void Main(string[] args)
    {
        Bar bar = new Bar();
        CallOutputType(bar);
        bar.OutputType(bar);
    }

    static void CallOutputType<T>(T t) where T : Foo
    {
        t.OutputType(t);
    }
}
```

I was expecting the output to be:

```
bar
bar
```

But what I am getting is:

```
foo
bar
```

Seeing the problem simplified like this, it is obvious that **Bar.OutputType** isn't overriding **Foo.OutputType**. What are my best options for improving this design? **Bar.OutputType** can't override **Foo.OutputType** because the signatures are different. Changing the signature of **Bar.OutputType** to match **Foo.OutputType** won't work because then **Bar** won't be implementing **IFoo**.
Uh, I'm not too familiar with this stuff but shouldn't it be:

```
static void CallOutputType<T>(T t) where T : IFoo<T>
{
    t.OutputType(t);
}
```

It worked when I compiled it.
How about adding this to the Bar class:

```
public override Foo OutputType(Foo param)
{
    return this.OutputType((Bar)param);
}
```
How to properly return T from a generic method while implementing an interface?
[ "", "c#", "generics", "polymorphism", "" ]
I am using C++Builder. I know that to go to the definition of a variable or class you must press Control and click on the method name, or any identifier where you want to go to a definition. However, as most of you would notice, this does not work all the time. Does anyone have any trick for doing this?
I actually used the Visual Studio keyboard emulation, and because of that I can now right-click and use the popup menu to go to a definition. Another benefit of emulating the Visual Studio keyboard setup is that multi-line Tab and Alt-Tab now work. Sadly, there is no more shortcut to compile (F6 in the RAD Studio 2007 default keyboard setup).
"Go to declaration" usually succeeds when invoked from a source file, but fails when invoked from a header. It's easy to understand why if you know how it works: When you perform Ctrl+Click on an identifier, the compiler kicks in, running in a special "Kibitz" mode, and basically compiles your source code up to the position of the caret. For a .cpp file, this is easy - those can usually be compiled standalone. A header file, however, often depends on other headers but doesn't include them explicitly - i.e. doing a standalone compilation on "sysmac.h" will fail with an error because that header expects "System.hpp" to be included beforehand. This is one of the reasons why Code Completion and Code Browsing often fail when invoked in header files.
Control Click to get definition in IDE does not work
[ "", "c++", "definition", "c++builder-2007", "" ]
I have a class with two constructors that look like this:

```
public MyClass(SomeOtherClass source) : this(source, source.Name)
{
}

public MyClass(SomeOtherClass source, string name)
{
    /* ... */
}
```

When I run FxCop, it correctly reports a violation of [CA1062: ValidateArgumentsOfPublicMethods](http://msdn.microsoft.com/en-us/library/ms182182.aspx), because if `source` is `null` in the first constructor, it will throw a `NullReferenceException` on `source.Name`.

Is there any way to fix this warning? I could make an extension method that checks for null and returns its argument, but it would be ugly. Also, as I understand, it wouldn't resolve the warning because FxCop wouldn't realize what it does.
Like this?

```
public MyClass(SomeOtherClass source)
    : this(source, source == null ? null : source.Name)
{
}

public MyClass(SomeOtherClass source, string name)
{
    /* ... */
}
```
There are legitimate times to turn off FxCop warnings and this could very well be one, but you can correct the problem by either a ternary expression that checks for null and throws an exception (or substitutes a default value), or a call to a static method that checks for null and throws the appropriate exception.
CA1062: ValidateArgumentsOfPublicMethods on co-constructor calls
[ "", "c#", ".net", "constructor", "fxcop", "ca1062", "" ]
In my code behind page, how do I access the connection string which is stored in my web.config file?
```
System.Web.Configuration.WebConfigurationManager.ConnectionStrings["YouConnStringName"].ConnectionString;
```

This requires references to System.Configuration.dll and System.Web.dll.
[How to: Read Connection Strings from the Web.config File](http://msdn.microsoft.com/en-us/library/ms178411.aspx)
C# connection string in web.config file
[ "", "c#", ".net", "visual-studio-2008", "web-config", "connection-string", "" ]
I am facing some issues while serializing objects (I am using JBoss Drools, and want to store an ArrayList of KnowledgePackage). When I serialize the list, store the result in a file, and deserialize it, no problem occurs, so it works fine. But when I serialize the list, store the result in a byte stream, then save it in a JarFile, I cannot then deserialize the result, because of this error:

```
IOException during package import : java.util.ArrayList; local class incompatible: stream classdesc serialVersionUID = 8664875232659988799, local class serialVersionUID = 8683452581122892189
```

So I think the issue is when I am saving the serialized object into a JarFile entry. I think I am doing this right, because other files saved the same way in the JarFile can correctly be read. And after using 'cmp' and 'hexdump', I have spotted that saving it in a jar causes a variation of one octet in the UID, while the rest of the content is the same. I am really disappointed and cannot tell where the problem is. What can modify the serialVersionUID between two classes, other than another VM version?

---

Adding source code: exportToJar -> writeRulesPackageEntry -> writeEntry

```
/**
 * Writes content provided from a reader into a file contained in a jar.
 *
 * @param output the output stream to write on
 * @param entryName the name of the file that will contain reader data
 * @param input the content to write
 *
 * @return the zip entry that has been created into the jar
 */
ZipEntry writeEntry(JarOutputStream output, String entryName, ByteArrayInputStream input) {
    if (output == null || entryName == null || entryName.trim().length() == 0 || input == null) {
        throw new NullPointerException("Null argument passed");
    }

    ZipEntry entry = new ZipEntry(entryName);
    byte[] buffer = new byte[BUFFER_LENGTH];

    try {
        output.putNextEntry(entry);
        int nRead;
        while ((nRead = input.read(buffer, 0, BUFFER_LENGTH)) > 0) {
            output.write(buffer, 0, nRead);
        }
        output.closeEntry();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return entry;
}

/**
 * Export rules files to a serialized object (ArrayList<KnowledgePackage>) into
 * an output stream, then write the output content as an entry of a jar.
 *
 * @param os the output jar to write in
 */
void writeRulesPackageEntry(JarOutputStream os) {
    // serialize objects and write them to the output stream
    ByteArrayOutputStream output = new ByteArrayOutputStream();
    RulesPackaging rulesPackaging = new RulesPackaging();
    rulesPackaging.exportResources(this.rules, output);

    // create a new input stream to read written objects from
    ByteArrayInputStream input = new ByteArrayInputStream(output.toByteArray());
    this.writeEntry(os, Product.ENTRY_RULES_PACKAGE, input);
}

/**
 * Creates a JarFile containing resources.
 *
 * @param filename the exported jar filename
 * @return the jar as an object, null if an error occured
 */
public JarFile exportToJar(String filename) {
    FileOutputStream fOs;
    JarOutputStream jOs;
    JarFile jar = null;

    try {
        fOs = new FileOutputStream(filename);
        jOs = new JarOutputStream(fOs);

        this.writeRulesPackageEntry(jOs);

        jOs.close();

        // construct a jar from the output jar
        jar = new JarFile(new File(filename));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return jar;
}
```
The `serialVersionUID` doesn't change. It is a `static final` assigned at compilation time (based on a hash of the source code, I think) unless a value is assigned explicitly in the source code. There's a bit more about it here: <http://mindprod.com/jgloss/serialization.html>.

With the exception you are seeing, the correct `serialVersionUID` for java.util.ArrayList is 8683452581122892189L, which is assigned explicitly in the source code and has remained the same since the class was introduced in 1.2.

As you've said, the error is most likely occurring when writing the byte stream to the JarFile - please post the code you're using to do that.

**Cont'd after the source code was posted**

I suspect the problem lies in the use of the `java.io.InputStreamReader`. From the JavaDoc:

> An InputStreamReader is a bridge from byte streams to character streams: It reads bytes and decodes them into characters using a specified charset. The charset that it uses may be specified by name or may be given explicitly, or the platform's default charset may be accepted.

As soon as I see character sets involved in non-text streams I always get suspicious, because it's possible for the stream to be modified during the decoding if a sequence of bytes doesn't correspond to a character in the character set (you've seen those little square characters that occur when encoding issues happen).

I would try reading the bytes straight off the `java.io.ByteArrayInputStream` that you are wrapping with the `java.io.InputStreamReader` in `writeRulesPackageEntry(JarOutputStream)`. The conversion to a `char[]` isn't necessary.
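The charset suspicion is easy to demonstrate in isolation, without any jar or Drools involved: round-tripping arbitrary binary bytes through a character decoder is lossy, while plain text survives. This is only a sketch (the class and method names are hypothetical, and UTF-8 stands in for whatever charset the decoder would pick):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharsetRoundTrip {
    // Returns true if the byte array survives a bytes -> String -> bytes
    // round trip through UTF-8 unchanged.
    static boolean survivesRoundTrip(byte[] data) {
        String decoded = new String(data, StandardCharsets.UTF_8);
        byte[] reEncoded = decoded.getBytes(StandardCharsets.UTF_8);
        return Arrays.equals(data, reEncoded);
    }

    public static void main(String[] args) {
        // Plain ASCII text survives the round trip...
        byte[] ascii = "hello".getBytes(StandardCharsets.US_ASCII);
        System.out.println("ascii survives: " + survivesRoundTrip(ascii));

        // ...but arbitrary binary data (like a serialized object stream) does not:
        // byte sequences that are not valid UTF-8 are replaced with U+FFFD on
        // decode, so the re-encoded bytes differ from the originals.
        byte[] binary = new byte[256];
        for (int i = 0; i < 256; i++) binary[i] = (byte) i;
        System.out.println("binary survives: " + survivesRoundTrip(binary));
    }
}
```

That one-or-more-octet difference after a decode/encode round trip is exactly the kind of corruption that would change the `serialVersionUID` bytes inside the stream.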
Like Nick proposes, the issue is most likely that you're not treating the stream as bytes (which are never altered), but as characters (which can be). Having said that, another decent resource on Serialization is a dedicated chapter from a book I wrote a million years ago (1997), "Mastering JavaBeans". Luckily chapter 11, Serialization, is as relevant today as it was then. Download the free PDFs from <http://ccd.uab.es/~srobles/manuals/JavaBeans>
what could modify the SerialVersionUID while serializing and storing in a Jarfile?
[ "", "java", "serialization", "jar", "drools", "" ]
I'm working on migration of data from a legacy system into our new app (running on an Oracle database, 10gR2). As part of the migration, I'm working on a script which inserts the data into tables that are used by the app.

The number of rows of data that are imported runs into thousands, and the source data is not clean (unexpected nulls in NOT NULL columns, etc). So while inserting data through the scripts, whenever such an exception occurs, the script ends abruptly, and the whole transaction is rolled back.

Is there a way by which I can continue inserts of data for which the rows are clean? Using `NVL()` or `COALESCE()` is not an option, as I'd like to log the rows causing the errors so that the data can be corrected for the next pass.

EDIT: My current procedure has an exception handler, and I am logging the first row which causes the error. Would it be possible for inserts to continue without termination? Because right now, on the first handled exception, the procedure terminates execution.
Using PLSQL you can perform each insert in its own transaction (COMMIT after each) and log or ignore errors with an exception handler that keeps going.
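The keep-going shape of that loop can be sketched outside the database too. The following is plain Java with a simulated insert standing in for the real table and its NOT NULL constraint (all names are hypothetical, and no JDBC is involved); the point is only the control flow - handle each row's failure locally so the loop continues:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RowByRowLoad {
    // Simulated "insert" standing in for a NOT NULL constraint on the target table.
    static void insert(List<String> table, String value) {
        if (value == null) throw new IllegalArgumentException("NULL in NOT NULL column");
        table.add(value);
    }

    // Each row is tried on its own; a failure is logged instead of
    // aborting the whole load. In PL/SQL the body of the try would be
    // the INSERT (and COMMIT), and the catch the exception handler.
    static int load(List<String> source, List<String> target, List<String> errorLog) {
        for (String row : source) {
            try {
                insert(target, row);
            } catch (IllegalArgumentException e) {
                errorLog.add(e.getMessage()); // record the bad row, keep going
            }
        }
        return target.size();
    }

    public static void main(String[] args) {
        List<String> target = new ArrayList<>();
        List<String> errors = new ArrayList<>();
        int loaded = load(Arrays.asList("alice", null, "bob"), target, errors);
        System.out.println(loaded + " loaded, " + errors.size() + " rejected");
    }
}
```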
If the data volumes were higher, row-by-row processing in PL/SQL would probably be too slow. In those circumstances, you can use DML error logging, described [here](http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_9014.htm#BGBEIACB)

```
CREATE TABLE raises (emp_id NUMBER, sal NUMBER
   CONSTRAINT check_sal CHECK(sal > 8000));

EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('raises', 'errlog');

INSERT INTO raises
   SELECT employee_id, salary*1.1 FROM employees
   WHERE commission_pct > .2
   LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;

SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;

ORA_ERR_MESG$               ORA_ERR_TAG$         EMP_ID     SAL
--------------------------- -------------------- ------ -------
ORA-02290: check constraint my_bad                  161    7700
(HR.SYS_C004266) violated
```
Continuing Inserts in Oracle when exception is raised
[ "", "sql", "oracle", "exception", "plsql", "" ]
We've got an application which needs to be able to use bluetooth for the following requirements:

1. Receive files from bluetooth devices (up to 2 devices at the same time)
2. Display all bluetooth devices in range
3. Send files to bluetooth devices
4. Scan for bluetooth devices and transfer files at the same time

We're running on Windows XP. I've done some looking around and there seems to be 3 main stacks:

**BlueSoleil**

On the BlueSoleil website, in their SDK section, it seems to mention only 1 connection is supported, which is obviously no good.

**Windows**

Only seems to support 1 bluetooth dongle, which will probably mean we can't meet all our requirements.

**Widcomm**

Expensive and potentially overkill? More complex API?

Thoughts?

In terms of SDK for C#, was looking at Franson Bluetools, anyone used this API?

Thanks
Firstly the disclaimer, I'm the maintainer of the 32feet.NET library. :-)

I've just checked, and on XP with the Microsoft stack (using one dongle) I can concurrently be receiving two OBEX PUTs and also discovering devices. That's using 32feet.NET's ObexListener class and the BluetoothClient.DiscoverDevices method. To send the OBEX PUTs one can use its ObexWebRequest class. To do multiple parallel connections with ObexListener I just had multiple threads calling its GetContext() method. So that's maybe simpler than we thought...

I've also tested it with Andy Hume's OBEX Server using his Brecham.Obex library and the concurrent receive works fine there too. Its available from <http://32feet.net/files/folders/objectexchange/entry6511.aspx>.

On our Widcomm support. Hopefully it doesn't seem too "incomplete" on the client side... Inquiry (device discovery) and connections all work. The server-side still needs a little work however and there are some things the Widcomm API simply doesn't support eg. (programmatic authentication handling). What was the issue with the samples? Compile-time or run-time? On MSFT stack or Widcomm? Follow-up at <http://32feet.net/forums/37.aspx> if you prefer.
Time to explain exactly what we ended up doing...

**2 dongles why?**

1. If a dongle is doing a scan, the transfer rate is massively slowed down.
2. A dongle can only support 7 concurrent transfers; if you are doing a scan, this drops to 6.

If you want to send, receive and scan all at the same time, everything slows down, badly, and you are very limited in channels. So, the idea is to run one dongle continuously scanning (so devices appear as quickly as possible) and the other dongle reserved for transfers, and since it's not scanning, transfers are nice and quick.

**Library we used**

After much testing and thought, we ended up opting for [WirelessCommunicationLibrary from BT framework](http://www.btframework.com/). It supports Widcomm, Windows, BlueSoleil and the Toshiba stack. It supports all the server side stuff we need, and is a well supported commercial product which works perfectly without error.

**Which stack?**

Well, this is a complex one. NONE of the stacks support 2 dongles at the same time. So the only option is to run one dongle on one stack and the other dongle on another. This is where the WCL library comes in handy!

*Microsoft* - If an error occurs during a scan, it's common for the whole stack to crash out. This is not ideal! You have to close and restart the radio device; it takes time and is fault prone. But... the Microsoft stack does handle file transfers very nicely.

*Widcomm* - The Widcomm stack isn't great for file transfers. There are pesky little apps which install with Widcomm and keep trying to take control from your app. You can kill bttray.exe, which helps, but you still get some strange behaviour from the stack during transfers. I'm sure this can be resolved, but since Windows is poor for scans, it makes sense to use Widcomm for scans.

So... we've got one dongle set to Widcomm to scan over and over, and one dongle set to Microsoft to handle only file transfers (in and out).
**Getting 2 dongles to work**

We went for using 2 of the same dongles: we can order them in bulk and stock them all the same, reducing confusion. Each device shipped just needs 2 bluetooth dongles, simple. The only problem is, these are Widcomm dongles and we need one dongle on the Windows stack. Windows doesn't recognise these as Windows dongles, so it won't register them for the Windows stack.

So... there is a hack you can make to the bt.inf file to make it recognise the dongle for Windows. Then you need to switch the drivers for one of the dongles to run on the Windows drivers and you're all done.

**Summary**

So... we've got one dongle scanning all the time and one handling transfers, each on separate stacks, and it all works nicely. This is the only way I have found to get 2 dongles working smoothly on Windows. If you've got a better suggestion, please post it!
Bluetooth in C#, Which stack, Which SDK?
[ "", "c#", ".net", "bluetooth", "" ]
Probably something stupid I'm doing. I want to populate a hidden DIV with values of a form on submit. The DIV does open with correct data, but then resets after the page is finished loading. What am I doing wrong? Here's my test:

```
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=us-ascii" />
    <title>Test</title>
    <script type="text/javascript">
        function test() {
            var usr = document.getElementById('user').value;
            var pwd = document.getElementById('passwd').value;
            document.getElementById('out').innerHTML = usr + " " + pwd;
            document.getElementById('out').style.display = "block";
            return true;
        }
    </script>
</head>
<body>
    <form action="" onsubmit="return test()">
        <input type="text" id="user" name="user" />
        <input id="passwd" type="text" name="passwd" />
        <p><input type="submit" value="Go" /></p>
    </form>
    <div id="out" style="display:none;">
    </div>
</body>
</html>
```
Short answer:

Change this

```
return true;
```

to this

```
return false;
```

Long answer:

Forms are designed to load a new page when they are submitted. However, with scripting we can prevent this behavior by stopping the submit event. This can be achieved in many ways, but for your example, simply returning "false" from your handler will cancel the event, but *only* if the **onsubmit** attribute also has a return statement (which you already had).
The onsubmit function is submitting the form back to the page. You need to cancel the event to prevent it from submitting the data and reloading the page. The easy way to do this is to have your test() function return false. If you still want the form to submit and display the data in a div you'll want to submit the form via AJAX or in an iFrame.
Show a DIV with form data on submit
[ "", "javascript", "html", "" ]
I need to simulate a test scenario in which I call the `getBytes()` method of a String object and I get an UnsupportedEncodingException. I have tried to achieve that using the following code:

```
String nonEncodedString = mock(String.class);
when(nonEncodedString.getBytes(anyString())).thenThrow(new UnsupportedEncodingException("Parsing error."));
```

The problem is that when I run my test case I get a MockitoException that says that I can't mock a java.lang.String class.

Is there a way to mock a String object using mockito or, alternatively, a way to make my String object throw an UnsupportedEncodingException when I call the getBytes method?

---

Here are more details to illustrate the problem. This is the class that I want to test:

```
public final class A {
    public static String f(String str) {
        try {
            return new String(str.getBytes("UTF-8"));
        } catch (UnsupportedEncodingException e) {
            // This is the catch block that I want to exercise.
            ...
        }
    }
}
```

This is my testing class (I'm using JUnit 4 and mockito):

```
public class TestA {
    @Test(expected = UnsupportedEncodingException.class)
    public void test() {
        String aString = mock(String.class);
        when(aString.getBytes(anyString())).thenThrow(new UnsupportedEncodingException("Parsing error."));
        A.f(aString);
    }
}
```
The problem is that the `String` class in Java is marked `final`, so you cannot mock it using traditional mocking frameworks. According to the [Mockito FAQ](https://github.com/mockito/mockito/wiki/FAQ), this is a limitation of that framework as well.
How about just creating a `String` with a bad encoding name? See ``` public String(byte bytes[], int offset, int length, String charsetName) ``` Mocking `String` is almost certainly a bad idea.
How to mock a String using mockito?
[ "", "java", "unit-testing", "mocking", "mockito", "" ]
Is the following free function implicitly inlined in C++, similar to how member functions are implicitly inlined if defined in the class definition? ``` void func() { ... } ``` Do template functions behave the same way?
No, it's not implicitly inlined. The compiler has no way of knowing if another module will use this function, so it has to generate code for it. This means, for instance, that if you define the function like that in a header and include the header from two different source files, you will get linker errors about multiple definitions. Explicit `inline` fixes that. Of course, the compiler may still inline the function if it thinks that will be efficient, but it's not the same as an explicit inlining. Template functions are implicitly inlined in the sense that they don't require an `inline` to prevent multiple definition errors. I don't think the compiler is forced to inline those either, but I'm not sure.
It depends what you mean by inlined. A compiler can optimise any function by placing its emitted code inline at the call site. However, if you mean does the code you ask about behave as if it was declared: ``` inline void func() { ... } ``` then the answer is no. If you place your code in two different compilation units and build the executable, you will get multiple definition errors. If you explicitly mark the function as "inline", you will not. Regarding template functions, then some part of the compilation system will see to it that multiple instantiations of the same template do not cause multiple definition errors.
Are free functions implicitly inlined if defined without a previous declaration in C++?
[ "", "c++", "function", "templates", "inline", "implicit", "" ]
Suppose Table 1 has 1,000,000 rows and Table 2 has 50,000 rows. **INPUT** Table 1 ``` Id User InternetAmountDue 1 joe NULL ``` Table 2 ``` InternetUserId UserName AmountDue 21 kay 21.00 10091 joe 21.00 ``` I want to merge data from Table 2 into Table 1 as follows: 1. If the user exists in Table 1, update the InternetAmountDue column 2. Else, insert the new user **OUTPUT** Table 1 ``` Id User InternetAmountDue 1 joe 21.00 2 kay 21.00 ``` How can this be done fast given the large volume of data involved?
`SQL Server 2008` provides the special construct [`MERGE`](http://technet.microsoft.com/en-us/library/bb510625.aspx) just for your case: ``` MERGE INTO table1 AS t1 USING table2 AS t2 ON t2.UserName = t1.[User] WHEN MATCHED THEN UPDATE SET t1.InternetAmountDue = t2.AmountDue WHEN NOT MATCHED THEN INSERT ([User], InternetAmountDue) VALUES (t2.UserName, t2.AmountDue); ```
``` INSERT INTO Table1 (User) SELECT UserName FROM Table2 WHERE UserName not in (SELECT User FROM Table1) -- UPDATE t1 SET t1.InternetAmountDue = t2.AmountDue FROM Table1 t1 JOIN Table2 t2 ON t1.User = t2.UserName ``` Make sure that Table2.UserName is indexed. Make sure that Table1.User is indexed.
How to Make this SQL Task Faster to Complete
[ "", "sql", "sql-server", "sql-server-2008", "" ]
How would one switch a public bool to true from a child form in a mdi type program? I have a child form called logon that if everything checks out i want to set a "authenticated" bool to true in the form1 (main) form
The proper, true OO way of doing things would be to expose an event on your child form that the parent can attach to. You're violating your separation of concerns if you have the child form make assumptions about its `MdiParent`. For example, a very simple method of doing what you describe would be to have this on your child form: ``` public event EventHandler Authenticated; ``` Then when the parent opens it... ``` YourForm newForm = new YourForm(); newForm.Authenticated += new EventHandler(newForm_Authenticated); newForm.MdiParent = this; // and so on ``` You could also go slightly more sophisticated (and I do mean slightly) by adding an `Authenticated` boolean property to your child form, and rename the event to `AuthenticatedChanged`. You could then use the same event handler to inspect the value of the property to determine if the user has authenticated. In either scenario, you simply raise your event from the child form when you want the parent to update.
You could make a globally accessible variable that holds the main form, then use that variable within the child to call methods on the main form. Or, you could cast the appropriate Parent or Owner property of the child window to the proper type of the main form, and work from there.
set variable of parent form from child form in mdi
[ "", "c#", "variables", "scope", "mdi", "" ]
Let's say I have two entities: Group and User. Every user can be member of many groups and every group can have many users. ``` @Entity public class User { @ManyToMany Set<Group> groups; //... } @Entity public class Group { @ManyToMany(mappedBy="groups") Set<User> users; //... } ``` Now I want to remove a group (let's say it has many members). Problem is that when I call EntityManager.remove() on some Group, JPA provider (in my case Hibernate) **does not remove rows from join table** and delete operation fails due to foreign key constrains. Calling remove() on User works fine (I guess this has something to do with owning side of relationship). So how can I remove a group in this case? Only way I could come up with is to load all users in the group, then for every user remove current group from his groups and update user. But it seems ridiculous to me to call update() on every user from the group just to be able to delete this group.
* The ownership of the relation is determined by where you place the 'mappedBy' attribute on the annotation. The entity where you put 'mappedBy' is the one which is NOT the owner. There's no chance for both sides to be owners. If you don't have a 'delete user' use-case you could simply move the ownership to the `Group` entity, as currently the `User` is the owner. * On the other hand, you haven't been asking about it, but one thing is worth knowing. The `groups` and `users` collections are not kept in sync with each other. I mean, after deleting the User1 instance from Group1.users, the User1.groups collection is not changed automatically (which was quite surprising to me). * All in all, I would suggest you decide who is the owner. Let's say the `User` is the owner. Then when deleting a user the user-group relation will be updated automatically. But when deleting a group you have to take care of deleting the relation yourself, like this: --- ``` entityManager.remove(group); for (User user : group.users) { user.groups.remove(group); } ... // then merge() and flush() ```
The following works for me. Add the following method to the entity that is not the owner of the relationship (Group) ``` @PreRemove private void removeGroupsFromUsers() { for (User u : users) { u.getGroups().remove(this); } } ``` Keep in mind that for this to work, the Group must have an updated list of Users (which is not done automatically), so every time you add a Group to the group list in the User entity, you should also add the User to the user list in the Group entity.
How to remove entity with ManyToMany relationship in JPA (and corresponding join table rows)?
[ "", "java", "hibernate", "jpa", "orm", "" ]
For example I now created a this tiny class: ``` public static class FileSystemInfoComparers<T> where T : FileSystemInfo { public static IEqualityComparer<T> FullName { get { return new FullNameComparer(); } } private class FullNameComparer : IEqualityComparer<T> { public bool Equals(T x, T y) { return x.FullName == y.FullName; } public int GetHashCode(T obj) { return obj.FullName.GetHashCode(); } } } ``` I would like it if I could just do ``` var comparer = FileSystemInfoComparers.FullName; ``` and have an instance of `IEqualityComparer<FileSystemInfo>`, since I didn't specify any type and FileSystemInfo is the most generic type T can be. With no type constraint the default type could for example be object or something. Maybe not the best example, but anyways just got curious here :p
Sounds like a recipe for trouble to me. In particular, you could easily create a non-generic class called `FileSystemInfoComparers` with a static property of the same name, and suddenly your code would mean something *completely* different. I'd rather keep things simple. (Generics are complicated enough already, and type inference in particular is pretty hairy.)
That is an interesting idea and it could definitely work but consider that it would only work in cases where the generic type argument is constrained with a concrete type. The .NET compilers are very good at *type inference* but tend to shy away from making any assumptions. I don't see any reason why this couldn't be done except that it would only work in a small number of highly-specific instances. Since it has no *general* purpose I would imagine that Microsoft would be less inclined to make a change to support it.
C#: Why don't generics use the most generic type it can when we don't specify one?
[ "", "c#", "generics", "" ]
Is there a reason I am missing that a block within a case statement isn't considered a block level declaration space? I keep getting an error (variable has already been declared) when I try ``` case x: var someVariable = 42; break; case y: var someVariable = 40; break; ``` but I can do ``` case x: try{var someVariable = 42;}catch{} break; case y: try{var someVariable = 40;}catch{} break; ``` If C# allowed fall through statements, that would make sense, but it doesn't, and I can't think of a scenario where you can declare a variable in a case statement and use it outside of that block.
UPDATE: This question was used as the inspiration for this blog post; see it for further details. <http://ericlippert.com/2009/08/13/four-switch-oddities/> Thanks for the interesting question. --- There are a number of confusions and mis-statements in the various other answers, none of which actually explain why this is illegal. I shall attempt to be definitive. First off, to be strictly correct, "scope" is the wrong word to use to describe the problem. Coincidentally, I wrote a blog post last week about this exact mis-use of "scope"; that will be published after my series on iterator blocks, which will run throughout July. The correct term to use is "*declaration space*". A declaration space is *a region of code in which no two different things may be declared to have the same name*. The scenario described here is symptomatic of the fact that **a switch *section* does not define a declaration space, though a switch *block* does.** Since the OP's two declarations are in the same declaration space and have the same name, they are illegal. (Yes, the switch block *also* defines a scope but that fact is not relevant to the question because the question is about the **legality of a declaration**, not the **semantics of an identifier lookup**.) A reasonable question is "why is this not legal?" A reasonable answer is "well, why should it be"? You can have it one of two ways. Either this is legal: ``` switch(y) { case 1: int x = 123; ... break; case 2: int x = 456; ... break; } ``` or this is legal: ``` switch(y) { case 1: int x = 123; ... break; case 2: x = 456; ... break; } ``` but you can't have it *both* ways. The designers of C# chose the second way as seeming to be the more natural way to do it. This decision was made on July 7th, 1999, just shy of ten years ago. 
The comments in the notes from that day are extremely brief, simply stating "*A switch-case does not create its own declaration space*" and then giving some sample code that shows what works and what does not. To find out more about what was in the designers minds on this particular day, I'd have to bug a lot of people about what they were thinking ten years ago -- and bug them about what is ultimately a trivial issue; I'm not going to do that. In short, there is no *particularly* compelling reason to choose one way or the other; both have merits. The language design team chose one way because they had to pick one; the one they picked seems reasonable to me.
Ah - you don't have fall through, but you can use goto to jump to another labelled case block. Therefore the blocks have to be within the same scope.
Case Statement Block Level Declaration Space in C#
[ "", "c#", "switch-statement", "language-design", "" ]
From this code I can call bmwCars.CopyToDataTable() as I expected. ``` var bmwCars = from car in dataTable.AsEnumerable() where car.Field<string>("Make").ToLower().Equals("bmw") select car; ``` But when I have change some statement of code to below, I can't call CopyToDataTable(), why? ``` var bmwCars = from car in dataTable.AsEnumerable() where car.Field<string>("Make").ToLower().Equals("bmw") select new { Make = car.Field<string>("Make"), Color = car.Field<string>("Color"), PetName = car.Field<string>("PetName") }; ```
Based on your use of `Field<T>`, the objects in `dataTable` (which I am assuming are of type `Car`) inherit `DataRow`. This is necessary to call the [`CopyToDataTable`](http://msdn.microsoft.com/en-us/library/bb396189.aspx) extension method. As written, however, you are returning an enumeration of an anonymous type which can not inherit `DataRow`. So, probably your ``` select new ``` should be ``` select new Car ``` so that you're returning an `IEnumerable<Car>` instead of an `IEnumerable<>` of anonymous type. Depending on the exact structure of your `Car` class, it might be necessary to make some minor syntactical changes. If `Car` has public properties, `Make`, `Color`, and `PetName` then it will work as I suggested. If, instead, `Car` has a constructor with method signature approximately equal to ``` public Car(string make, string color, string petName) ``` then you will have to alter the LINQ statement to be ``` var bmwCars = from car in dataTable.AsEnumerable() where car.Field<string>("Make").ToLower().Equals("bmw") select new Car( car.Field<string>("Make"), car.Field<string>("Color"), car.Field<string>("PetName") ); ```
You could build your own [CopyToDataTable](http://msdn.microsoft.com/en-us/library/bb396189.aspx) that takes any kind of IEnumerable(not only `DataRow`)and returns a new `DataTable`: ``` // following would not compile by default // because input is not an IEnumerable<DataRow> but an anonymous type var tblResult = bmwCars.CopyToDataTable(); ``` Here is the implementation (with help of [MSDN](http://msdn.microsoft.com/en-us/library/bb669096.aspx#Y105)): ``` public class ObjectShredder<T> { private System.Reflection.FieldInfo[] _fi; private System.Reflection.PropertyInfo[] _pi; private System.Collections.Generic.Dictionary<string, int> _ordinalMap; private System.Type _type; // ObjectShredder constructor. public ObjectShredder() { _type = typeof(T); _fi = _type.GetFields(); _pi = _type.GetProperties(); _ordinalMap = new Dictionary<string, int>(); } /// <summary> /// Loads a DataTable from a sequence of objects. /// </summary> /// <param name="source">The sequence of objects to load into the DataTable.</param> /// <param name="table">The input table. The schema of the table must match that /// the type T. If the table is null, a new table is created with a schema /// created from the public properties and fields of the type T.</param> /// <param name="options">Specifies how values from the source sequence will be applied to /// existing rows in the table.</param> /// <returns>A DataTable created from the source sequence.</returns> public DataTable Shred(IEnumerable<T> source, DataTable table, LoadOption? options) { // Load the table from the scalar sequence if T is a primitive type. if (typeof(T).IsPrimitive) { return ShredPrimitive(source, table, options); } // Create a new table if the input table is null. if (table == null) { table = new DataTable(typeof(T).Name); } // Initialize the ordinal map and extend the table schema based on type T. table = ExtendTable(table, typeof(T)); // Enumerate the source sequence and load the object values into rows. 
table.BeginLoadData(); using (IEnumerator<T> e = source.GetEnumerator()) { while (e.MoveNext()) { if (options != null) { table.LoadDataRow(ShredObject(table, e.Current), (LoadOption)options); } else { table.LoadDataRow(ShredObject(table, e.Current), true); } } } table.EndLoadData(); // Return the table. return table; } public DataTable ShredPrimitive(IEnumerable<T> source, DataTable table, LoadOption? options) { // Create a new table if the input table is null. if (table == null) { table = new DataTable(typeof(T).Name); } if (!table.Columns.Contains("Value")) { table.Columns.Add("Value", typeof(T)); } // Enumerate the source sequence and load the scalar values into rows. table.BeginLoadData(); using (IEnumerator<T> e = source.GetEnumerator()) { Object[] values = new object[table.Columns.Count]; while (e.MoveNext()) { values[table.Columns["Value"].Ordinal] = e.Current; if (options != null) { table.LoadDataRow(values, (LoadOption)options); } else { table.LoadDataRow(values, true); } } } table.EndLoadData(); // Return the table. return table; } public object[] ShredObject(DataTable table, T instance) { FieldInfo[] fi = _fi; PropertyInfo[] pi = _pi; if (instance.GetType() != typeof(T)) { // If the instance is derived from T, extend the table schema // and get the properties and fields. ExtendTable(table, instance.GetType()); fi = instance.GetType().GetFields(); pi = instance.GetType().GetProperties(); } // Add the property and field values of the instance to an array. Object[] values = new object[table.Columns.Count]; foreach (FieldInfo f in fi) { values[_ordinalMap[f.Name]] = f.GetValue(instance); } foreach (PropertyInfo p in pi) { values[_ordinalMap[p.Name]] = p.GetValue(instance, null); } // Return the property and field values of the instance. return values; } public DataTable ExtendTable(DataTable table, Type type) { // Extend the table schema if the input table was null or if the value // in the sequence is derived from type T. 
foreach (FieldInfo f in type.GetFields()) { if (!_ordinalMap.ContainsKey(f.Name)) { // Add the field as a column in the table if it doesn't exist // already. DataColumn dc = table.Columns.Contains(f.Name) ? table.Columns[f.Name] : table.Columns.Add(f.Name, f.FieldType); // Add the field to the ordinal map. _ordinalMap.Add(f.Name, dc.Ordinal); } } foreach (PropertyInfo p in type.GetProperties()) { if (!_ordinalMap.ContainsKey(p.Name)) { // Add the property as a column in the table if it doesn't exist // already. DataColumn dc = table.Columns.Contains(p.Name) ? table.Columns[p.Name] : table.Columns.Add(p.Name, p.PropertyType); // Add the property to the ordinal map. _ordinalMap.Add(p.Name, dc.Ordinal); } } // Return the table. return table; } } ``` Now you can add these extensions: ``` public static class CustomLINQtoDataSetMethods { public static DataTable CopyToDataTable<T>(this IEnumerable<T> source) { return new ObjectShredder<T>().Shred(source, null, null); } public static DataTable CopyToDataTable<T>(this IEnumerable<T> source, DataTable table, LoadOption? options) { return new ObjectShredder<T>().Shred(source, table, options); } } ``` Voilà! Now `CopyToDataTable` works with any kind of `IEnumerable<T>` :)
Exception using CopyToDataTable with "new {..}" LINQ query
[ "", "c#", "linq", "datatable", "" ]
I've been struggling to get a Java program to connect to MS SQL Server, and I'm starting to wonder if MySQL would be a better choice for my (learning) project. Sun's tutorials refer to Java DB, but I've never heard of that in any other context, so it seems not the most useful database to learn about. I appreciate any insight into the most natural way to connect Java to a commonly used database.
The kind or the name of the database won't affect your learning process, as you will be working through JDBC. I think you can go with any of them. Just set it up in the proper way on your machine and connect with the appropriate connection string.
Perhaps you could describe the problems you've been having with connecting to MS SQL. Of course it's possible, so it's likely something small that you have or have not done that's preventing the connection from working. There are many open source database servers with JDBC drivers. One that you might consider is [HSQLDB](http://hsqldb.org/) which has a completely in-memory mode so you don't even have to think about setting up a server. This is probably a great way to learn the basics of SQL.
What is the best database to use with a java program?
[ "", "java", "jdbc", "database-connection", "" ]
I use a php form processor script that works fine. Except when users submit text in a multi-line text field, any line breaks or new lines are stripped out of the resulting string variable that is passed on. This often makes it unreadable by whoever receives the form results. I'm no php expert but am sure the answer lies in the code that is stripping characters. What I'm unsure of is if I stop it stripping characters will this result in a security risk? The strip array reads: ``` array('*', '|', '>', '<', '/', '\\\', '\"', 'Bcc', 'BCC', 'bcc'); ``` What can I change here to retain the line breaks? Thanks in advance for any help.
If your problem is with the submitted string, then this means the submitted string did not contain any line breaks or newline chars. On one occasion I looked up the wrap="(hard|physical)" attribute on the textarea. Some values of this attribute force the textarea to maintain line breaks in the user text. Did you try using nl2br($text) on the submitted text?
I believe you have an issue at the rendering phase. Have you tried: ``` echo nl2br($text); ``` Where $text is the text you're talking about.
PHP form removing line breaks
[ "", "php", "forms", "newline", "line", "break", "" ]
If I've overloaded operator+ and operator= do I still need to overload operator+= for something like this to work: ``` MyClass mc1, mc2; mc1 += mc2; ```
`operator+=` is not a composite of `+` and `=`, so you do need to overload it explicitly; the compiler does not know how to assemble it for you. You can still benefit from the already defined/overloaded operators by using them inside `operator+=`.
Yes, you need to define that as well. A common trick, however, is to define `operator+=`, and then implement `operator+` in terms of it, something like this: ``` MyClass operator+ (MyClass lhs, const MyClass& rhs){ return lhs += rhs; } ``` If you do it the other way around (use + to implement +=), you get an unnecessary copy operation in the += operator which may be a problem in performance-sensitive code.
Overloading += in c++
[ "", "c++", "operators", "operator-overloading", "operator-keyword", "" ]
Attempted translation of the above question from non-native English to English: This is a question about the fastest method to use when inserting a value into a database when the value may already be in the database. INSERT will fail when the value is there and UPDATE will fail when it is not. Which of these choices is the most desirable? * Do SELECT first and choose INSERT or UPDATE. * INSERT first and use UPDATE if it had a duplicate error. If there are other good choices besides the above, please teach me. The environment uses MySQL 4.1.
If you want to do this in a single statement (sounds like what you want), I would recommend using the `INSERT ... ON DUPLICATE KEY UPDATE` syntax, as follows: ``` INSERT INTO table (id, someothervalue) VALUES (1, 'hi mom') ON DUPLICATE KEY UPDATE someothervalue = 'hi mom'; ``` The initial `INSERT` statement will execute if there is no existing record with the specified key value (either primary key or unique). If a record already exists, the following `UPDATE` statement (`someothervalue = 'hi mom'`) is executed. This is supported in MySQL 4.1 and later. For more info, see the [MySQL Reference Manual page for `INSERT ... ON DUPLICATE KEY UPDATE`](http://dev.mysql.com/doc/refman/4.1/en/insert-on-duplicate.html)
**Update:** *before I get a bunch more downvotes... it should be noted this was the first reply, before the question was edited and understood.* In all honesty it sounds like you want to know what to do if the record already exists or is new. Therefore what you want to do (in pseudo code): ``` $count = SELECT COUNT FROM TABLE WHERE CONDITION; if($count == 1){ UPDATE... } else { INSERT... } ```
I want to do INSERT without record in DB
[ "", "sql", "mysql", "" ]
Why would anyone declare a constructor protected? I know that constructors are declared private for the purpose of not allowing their creation on stack.
When a class is (intended as) an abstract class, a protected constructor is exactly right. In that situation you don't want objects to be instantiated from the class, but only to inherit from it. There are other use cases, like when a certain set of construction parameters should be limited to derived classes.
Non-public constructors are useful when there are construction requirements that cannot be guaranteed solely by the constructor. For instance, if an initialization method needs to be called right after the constructor, or if the object needs to register itself with some container/manager object, this must be done outside the constructor. By limiting access to the constructor and providing only a factory method, you can ensure that any instance a user receives will fulfill all of its guarantees. This is also commonly used to implement a Singleton, which is really just another guarantee the class makes (that there will only be a single instance). The reason for making the constructor protected, rather than private, is the same as for making any other method or field protected instead of private: so that it can be inherited by children. Perhaps you want a public, non-virtual factory method in the base class, which returns references to instances of the derived classes; the derived classes obviously want access to the parent constructors, but you still don't want to be creating them outside of your factory.
What are practical uses of a protected constructor?
[ "", "c++", "constructor", "protected", "" ]
I'm looking for a elaborate list comparing different operations in PHP. For example: echo vs. printf, ++$i vs $i++, a direct function call vs. object function call, array access vs. direct data access, global vs. local variables, mysql\_fetch\_assoc vs. mysql\_fetch\_row etc. Of course these figures probably highly depend on the used version, the OS, hardware and many other factors, but I saw lists like this for c/c++ and maybe there is something similar for PHP. **Update** Will leave this question open for two more days. Maybe we can get a few more of these usefull links!
Perhaps this can help you with some of the functions/operators: <http://www.phpbench.com/>
Be careful! Some days ago, [this article at google code](http://code.google.com/intl/es/speed/articles/optimizing-php.html) appeared on just this subject. It was refuted almost immediately by [this refutation](http://groups.google.com/group/make-the-web-faster/browse_thread/thread/ddfbe82dd80408cc). So watch out and **test everytime**.
What are the absolute and relative costs of different operations in PHP?
[ "", "php", "optimization", "architecture", "" ]
I have a function that pulls rows from a database; the content->id and content->type are then used to dynamically call a method in an already loaded model to get and format the object's details. Once the object is returned it is added to an array. All is well except that when I come to use the array, although it has the correct number of items in it, they all contain the same object, even though I know that different objects are returned. I'm at a complete loss with this one; any help/ideas would be great! The code is below: ``` foreach($query->result() as $content) { $item = $this->{'mod_'.$content->type}->get($content->id); print_r($item); $items[] = $item; } print_r($items); ``` And the print\_r statements produce this: ``` stdClass Object ( [id] => 30 [type] => page ) stdClass Object ( [id] => 29 [type] => page ) Array ( [0] => stdClass Object ( [id] => 29 [type] => page ) [1] => stdClass Object ( [id] => 29 [type] => page ) ) ```
I would guess that the problem is that you get to the same object every time *by reference* from the `get` function and then add it *by reference* to the array, resulting in all items in the array being modified when the item gets modified in the `get` function. If that is the case, the following should work: ``` foreach($query->result() as $content) { $item = $this->{'mod_'.$content->type}->get($content->id); print_r($item); $items[] = clone $item; } print_r($items); ```
When you push $item to $items, it doesn't push the value $item points to but rather the reference itself. You'll need to initialize $item each time: ``` foreach($query->result() as $content) { $item = new stdClass(); $item = $this->{'mod_'.$content->type}->get($content->id); print_r($item); $items[] = $item; } print_r($items); ```
Storing objects in an array with php
[ "", "php", "arrays", "codeigniter", "object", "" ]
I need to embed the Flash player in a native application (C++) in a cross platform way (at least Windows and Mac OSX). I need to allow the Flash gui to make calls back to the native application to do things that Flash normally can’t do (e.g. write to the file system, talk to devices, loading native image processing libraries, etc). The Adobe AIR runtime is too restrictive so it is unfortunately not an option. I’ve used ActiveX hosting in Windows previously, but is there a cross platform gui toolkit that solves this problem for both Windows and OSX? If not what are my options for embedding Flash on OSX? EDIT: Must support Actionscript 3.0
Another option is [MDM Zinc](http://multidmedia.com). Win and OSX aren't 100% equal, and you should make sure it will do everything you need, but it may work for you.
It's not free, but [Scaleform GFx](https://www.scaleform.com/products/gfx) does exactly what you want -- it's cross-platform, and can make calls back and forth between native code and Flash. It also supports a number of different rendering engines (DirectX, OpenGL, etc.).
Cross Platform Flash Player Embedding
[ "", "c++", "flash", "cross-platform", "" ]
Is there a thin driver for Oracle available to be used with PHP (for example as an extension)? I cannot install the OCI driver/client, but need to be able to access an Oracle database.
What is the problem with installing the OCI driver? The [InstantClient](http://www.oracle.com/technology/pub/notes/technote_php_instant.html) version simply requires an unzip and setting a couple of environment variables. The other option is exposing the database functionality through web services. 11g includes an inbuilt HTTP server.
There is no "thin" Oracle driver in PHP. You always need an external client library such as Oracle Instant Client - which is really easy to install. Check <http://www.oracle.com/technology/tech/php/pdf/underground-php-oracle-manual.pdf> (Also, Oracle has RPMs of PHP and Instant Client that can be used on RH Linux.)
PHP and Oracle using a thin driver
[ "", "php", "oracle", "driver", "thin", "" ]
is there a good example of a *source file* containing [Javadoc](http://www.oracle.com/technetwork/articles/java/index-137868.html)? I can find lots of good examples of Javadoc on the internet, I would just like to find out the particular syntax used to create them, and assume I can pore through the source of some library somewhere but that seems like a lot of work.
How about the JDK source code, but accessed through a 3rd party like docjar? For example, the [Collections source](https://web.archive.org/web/20190325071840/http://www.docjar.net/html/api/java/util/Collections.java.html). That way, there's no big download.
The page [How to Write Doc Comments for the Javadoc Tool](http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html) contains a good number of examples. One section is called [Examples of Doc Comments](http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html#examples) and contains quite a few usages. Also, the [Javadoc FAQ](http://www.oracle.com/technetwork/java/javase/documentation/index-137483.html) contains some more examples to illustrate the answers.
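To save a trip to those pages, here is a minimal, self-contained source file showing the common tag syntax (the `Temperature` class and its members are made up purely for illustration):

```java
/**
 * A tiny example class whose only purpose is to show Javadoc syntax.
 * The first sentence becomes the summary line in the generated docs.
 *
 * @author  Jane Doe (hypothetical)
 * @version 1.0
 */
public class Temperature {

    /** The temperature in degrees Celsius. */
    private final double celsius;

    /**
     * Creates a temperature from a Celsius value.
     *
     * @param celsius the temperature in degrees Celsius
     */
    public Temperature(double celsius) {
        this.celsius = celsius;
    }

    /**
     * Converts this temperature to Fahrenheit.
     *
     * @return the equivalent value in degrees Fahrenheit
     * @see #Temperature(double)
     */
    public double toFahrenheit() {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```

Running `javadoc Temperature.java` over a file like this produces the familiar HTML pages; note that only the `/** ... */` form (two asterisks) is picked up, not ordinary `/* ... */` comments.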
good example of Javadoc
[ "", "java", "javadoc", "" ]
What exactly does putting `extern "C"` into C++ code do? For example: ``` extern "C" { void foo(); } ```
`extern "C"` makes a function-name in C++ have C linkage (compiler does not mangle the name) so that client C code can link to (use) your function using a C compatible header file that contains just the declaration of your function. Your function definition is contained in a binary format (that was compiled by your C++ compiler) that the client C linker will then link to using the C name. Since C++ has overloading of function names and C does not, the C++ compiler cannot just use the function name as a unique id to link to, so it mangles the name by adding information about the arguments. A C compiler does not need to mangle the name since you can not overload function names in C. When you state that a function has `extern "C"` linkage in C++, the C++ compiler does not add argument/parameter type information to the name used for linkage. Just so you know, you can specify `extern "C"` linkage to each individual declaration/definition explicitly or use a block to group a sequence of declarations/definitions to have a certain linkage: ``` extern "C" void foo(int); extern "C" { void g(char); int i; } ``` If you care about the technicalities, they are listed in section 7.5 of the C++03 standard, here is a brief summary (with emphasis on `extern "C"`): * `extern "C"` is a linkage-specification * Every compiler is *required* to provide "C" linkage * A linkage specification shall occur only in namespace scope * All function types, function names and variable names have a language linkage **[See Richard's Comment:](https://stackoverflow.com/questions/1041866/in-c-source-what-is-the-effect-of-extern-c#comment20842899_1041880)** Only function names and variable names with external linkage have a language linkage * Two function types with distinct language linkages are distinct types even if otherwise identical * Linkage specs nest, inner one determines the final linkage * `extern "C"` is ignored for class members * At most one function with a particular name can have "C" 
linkage (regardless of namespace) * `extern "C"` forces a function to have external linkage (cannot make it static) **[See Richard's comment:](https://stackoverflow.com/questions/1041866/what-is-the-effect-of-extern-c-in-c?rq=1#comment20842893_1041880)** `static` inside `extern "C"` is valid; an entity so declared has internal linkage, and so does not have a language linkage * Linkage from C++ to objects defined in other languages and to objects defined in C++ from other languages is implementation-defined and language-dependent. Only where the object layout strategies of two language implementations are similar enough can such linkage be achieved
Just wanted to add a bit of info, since I haven't seen it posted yet. You'll very often see code in C headers like so: ``` #ifdef __cplusplus extern "C" { #endif // all of your legacy C code here #ifdef __cplusplus } #endif ``` What this accomplishes is that it allows you to use that C header file with your C++ code, because the macro `__cplusplus` will be defined. But you can *also* still use it with your legacy C code, where the macro is *NOT* defined, so it won't see the uniquely C++ construct. Although, I have also seen C++ code such as: ``` extern "C" { #include "legacy_C_header.h" } ``` which I imagine accomplishes much the same thing. Not sure which way is better, but I have seen both.
What is the effect of extern "C" in C++?
[ "", "c++", "c", "linkage", "name-mangling", "extern-c", "" ]
I have a question: I have two MSSQL tables, items and states, that are linked together via a stateid: ``` STATES ITEMS ------------- --------------------------- | id | name | | id | name | ... | stateid V ^ |_____________________________________| ``` So Items.StateID is related to State.ID. In my current situation I have a GridView that is databound to a LinqDataSource, which references the Items table. The GridView has two columns, one is Name and the other is StateID. I want to be able to pull the name of the state associated with the StateID out from the state table so that is displayed instead of the StateID. Thanks in advance! # EDIT Here is the grid/datasource: ``` <asp:LinqDataSource ID="ItemViewDataSource" runat="server" ContextTypeName="GSFyi.GSFyiDataClassesDataContext" EnableDelete="true" TableName="FYI_Items" /> <h2 class="gridTitle">All Items</h2> <telerik:RadGrid ID="ItemViewRadGrid" runat="server" AutoGenerateColumns="False" DataSourceID="ItemViewDataSource" GridLines="None" AllowAutomaticDeletes="True" EnableEmbeddedSkins="False" OnItemDataBound="itemsGrid_ItemDataBound"> <HeaderContextMenu> <CollapseAnimation Type="OutQuint" Duration="200"></CollapseAnimation> </HeaderContextMenu> <MasterTableView DataKeyNames="id" DataSourceID="ItemViewDataSource" CommandItemDisplay="None" CssClass="listItems" Width="98%"> <RowIndicatorColumn> <HeaderStyle Width="20px" /> </RowIndicatorColumn> <ExpandCollapseColumn> <HeaderStyle Width="20px" /> </ExpandCollapseColumn> <Columns> <telerik:GridTemplateColumn ItemStyle-CssClass="gridActions edit" UniqueName="Edit"> <ItemTemplate> <asp:HyperLink ID="edit" runat="server" Text="Edit"></asp:HyperLink> </ItemTemplate> <ItemStyle CssClass="gridActions edit"></ItemStyle> </telerik:GridTemplateColumn> <telerik:GridButtonColumn ConfirmText="Are you sure you want to delete this item?" 
ConfirmDialogType="RadWindow" ButtonType="LinkButton" ItemStyle-CssClass="gridActions delete" CommandName="Delete" > <ItemStyle CssClass="gridActions delete"></ItemStyle> </telerik:GridButtonColumn> <telerik:GridBoundColumn DataField="name" HeaderText="Item Name" SortExpression="name" UniqueName="name"> </telerik:GridBoundColumn> <telerik:GridTemplateColumn HeaderText="State" UniqueName="state"> <ItemTemplate> <asp:Label ID="stateLbl" runat="server" Text='<%# Eval("stateid") %>' /> </ItemTemplate> </telerik:GridTemplateColumn> </Columns> </telerik:RadGrid> ``` And the current code-behind: ``` using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using Telerik.Cms.Web.UI; using Telerik.Web.UI; public partial class Custom_Modules_GSFyi_Backend_Views_ItemsListView : ViewModeUserControl<ItemsView> { protected void Page_Load(object sender, EventArgs e) { addNewItem.NavigateUrl = CreateHostViewCommand("ItemsInsertView",null,null); } protected void itemsGrid_ItemDataBound(object sender, Telerik.Web.UI.GridItemEventArgs e) { if (e.Item.ItemType == GridItemType.Item || e.Item.ItemType == GridItemType.AlternatingItem) { var item = (GSFyi.FYI_Item)e.Item.DataItem; HyperLink edit = (HyperLink)e.Item.FindControl("edit"); edit.NavigateUrl = CreateHostViewCommand("ItemsEditView", item.id.ToString(), null); } } } ``` Does this help at all?
Assuming you've defined the relationship in your LINQtoSQL data classes and the LINQDataSource control is pointing at your Items entity you should be able to use the following databinding syntax directly in your GridView markup: ``` <asp:GridView ID="ItemsGridView" runat="server" DataKeyNames="id" DataSourceID="ItemsLinqDataSource"> <Columns> <asp:TemplateField> <ItemTemplate> <%# Eval("states.name") %> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> <asp:LinqDataSource ID="ItemsLinqDataSource" runat="server" ContextTypeName="DataClassesDataContext" TableName="items"></asp:LinqDataSource> ```
Try something like this: ``` var query = from s in States join i in Items on s.ID equals i.StateID select i.Name; ``` The LINQ to SQL provider will take this query and convert it to a SQL query that will look something like this: ``` select i.Name from Items i inner join States s on i.StateID = s.ID; ``` > **Suggested reading**: *[SQL INNER JOIN Keyword](http://www.w3schools.com/SQl/sql_join_inner.asp)*
Get value from relationed database
[ "", "c#", "database", "linq-to-sql", "" ]
What method would you use to determine if the bit that represents 2^x is a 1 or 0?
I'd use: ``` if ((value & (1L << x)) != 0) { // The bit was set } ``` (You may be able to get away with fewer brackets, but I never remember the precedence of bitwise operations.)
Another alternative: ``` if (BigInteger.valueOf(value).testBit(x)) { // ... } ```
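Putting both answers together, a self-contained sketch might look like this (the class and method names are just for illustration):

```java
import java.math.BigInteger;

public class BitCheck {

    /** Returns true if bit x (counting from 0 at the least significant end) is 1. */
    static boolean isBitSet(long value, int x) {
        return (value & (1L << x)) != 0;
    }

    public static void main(String[] args) {
        long value = 5L; // binary 101: bits 0 and 2 are set
        System.out.println(isBitSet(value, 0)); // true
        System.out.println(isBitSet(value, 1)); // false
        System.out.println(isBitSet(value, 2)); // true
        // The BigInteger variant gives the same answer:
        System.out.println(BigInteger.valueOf(value).testBit(2)); // true
    }
}
```

The `1L` literal matters: Java uses only the low 5 bits of the shift count for an `int` and the low 6 bits for a `long`, so with a plain `int` literal `1`, a shift of 32 or more silently wraps around (`1 << 40` is really `1 << 8`).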
Java: Checking if a bit is 0 or 1 in a long
[ "", "java", "long-integer", "bit-shift", "" ]
What is the regular expression for the `replaceAll()` function to replace "N/A" with "0"? input : `N/A` output : `0`
Assuming s is a `String`. ``` s.replaceAll("N/A", "0"); ``` You don't even need regular expressions for that. This will suffice: ``` s.replace("N/A", "0"); ```
Why use a regular expression at all? If you don't need a pattern, just use `replace`: ``` String output = input.replace("N/A", "0"); ```
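A quick runnable illustration of the difference: `replace` treats the target literally, while `replaceAll` interprets it as a regex. That is harmless here, since `N/A` contains no regex metacharacters, but the two methods diverge as soon as one appears:

```java
public class ReplaceDemo {
    public static void main(String[] args) {
        String input = "value: N/A";
        // "N/A" has no regex metacharacters, so both calls behave the same:
        System.out.println(input.replace("N/A", "0"));    // value: 0
        System.out.println(input.replaceAll("N/A", "0")); // value: 0
        // With metacharacters the two differ: "." as a regex matches any character.
        System.out.println("a.b".replace(".", "-"));      // a-b
        System.out.println("a.b".replaceAll(".", "-"));   // ---
    }
}
```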
java replaceAll()
[ "", "java", "string", "" ]
This question is sort of a poll. We are trying to identify the best architecture when using an ORM like LINQ to SQL. The architecture we are defining is for a sort of framework that other applications will access, either through directly referencing the DLL or through a web service. We have .NET apps and PHP apps. The possibilities are: Multiple data contexts: Separating the database into units of work and creating separate contexts for each one. Pros: * Ease of use * Classes will be broken into different namespaces * Smaller domain to maintain Cons: * Objects have to be duplicated if related, creating maintenance hell * Objects cannot be passed between contexts, creating the need for another hit on the database Single data context: All tables, views, and procedures reside in the same huge context. Pros: * No duplication * Relationships are easy to manage; basically LINQ takes care of it. * Better performance, fewer hits on the DB. Cons: * All tables are in the same namespace, so code completion becomes nuts * Not the best for the designer (at least in VS2008) * Cannot be selective in what to save and what not. Save-all or delete-all mode. Well, these are the things that came to my mind, so if you have any other pros or cons, please tell me and I will include them in the post. Also pick your best one. Thanks all
I understand your doubts. I had the same ones when I started to use LinqToSql. To help me find the better way I started a personal project where I could test all the approaches without worry and without preconceptions. During this exercise I found that the single-context approach is the most useful. This solution appears to be easier to maintain, and if you need to recreate the domain you will manage only one file in only one project. Another thing I realized during the exercise is that using LinqToSql directly is not efficient in terms of organization. If you have a project where a team will do the development, instead of only one person, you should “shield” the LinqToSql from them. There should be a “sheriff” who handles the domain, and you should also use some abstraction mechanism to protect the model from abuse (I implemented a Repository pattern and it worked well, but you could find different approaches). I also faced the problem of creating some logical groups inside the domain. What I did was use some DDD (Domain Driven Design) techniques to create what are called aggregates. Aggregates are a logical arrangement of entities inside a domain where you have a root entity (that works as an aggregator) and several other satellite entities related to it. You can do this by creating some new entities in the LinqToSql domain. These new entities will be disconnected from the database and will work as aggregators. This approach will enable you to create “sub-domains” inside your domain and help you achieve a better design. In the end I realized that the best way to use LinqToSql is to treat the context as a simple DAL. Reuse its domain, with some extensions (where we can use T4 to help us generate the code), and transform the entities into DTOs (Data Transfer Objects) to expose the data to the other layers. 
I am publishing (it’s not finished yet) the steps I took during the exercise in my blog: <http://developmentnirvana.blogspot.com/>
In my view, the data-context hides squarely behind a repository interface - allowing us to swap the implementation if we like (LINQ-to-SQL / EF / NHibernate / LLBLGen / etc). As such, the specifics of the data-context(s) are *largely* an implementation detail. As long as it passes the unit tests ;-p Huge is rarely a good idea; tiny is rarely useful... I tend to break the system down into related chunks (normally related to different repository interfaces), and think of it at that level. I have some other thoughts here: [Pragmatic LINQ](http://marcgravell.blogspot.com/2009/02/pragmatic-linq.html) - although I'd happily defer to any wisdom from [Frans](https://stackoverflow.com/users/44991/frans-bouma) etc.
LINQ to SQL architecture. What is best?
[ "", "c#", ".net", "vb.net", "linq-to-sql", "architecture", "" ]
I have been using CakePHP for a few weeks now and it's been a great experience. I've managed to port a site surprisingly quickly and I've even added a bunch of new features which I had planned but never got around to implementing. Take a look at the following two controllers; they allow a user to add premium status to one of the sites linked to their account. They don't feel very 'cakey'; could they be improved in any way? The PremiumSites controller handles the signup process and will eventually have other related things such as history. ``` class PremiumSitesController extends AppController { var $name = 'PremiumSites'; function index() { $cost = 5; //TODO: Add no site check if (!empty($this->data)) { if($this->data['PremiumSite']['type'] == "1") { $length = (int) $this->data['PremiumSite']['length']; $length++; $this->data['PremiumSite']['upfront_weeks'] = $length; $this->data['PremiumSite']['upfront_expiration'] = date('Y-m-d H:i:s', strtotime(sprintf('+%s weeks', $length))); $this->data['PremiumSite']['cost'] = $cost * $length; } else { $this->data['PremiumSite']['cost'] = $cost; } $this->PremiumSite->create(); if ($this->PremiumSite->save($this->data)) { $this->redirect(array('controller' => 'paypal_notifications', 'action' => 'send', $this->PremiumSite->getLastInsertID())); } else { $this->Session->setFlash('Please fix the problems below', true, array('class' => 'error')); } } $this->set('sites',$this->PremiumSite->Site->find('list',array('conditions' => array('User.id' => $this->Auth->user('id'), 'Site.is_deleted' => 0), 'recursive' => 0))); } } ``` The PaypalNotifications controller handles the interaction with Paypal. 
``` class PaypalNotificationsController extends AppController { var $name = 'PaypalNotifications'; function beforeFilter() { parent::beforeFilter(); $this->Auth->allow('process'); } /** * Compiles premium info and send the user to Paypal * * @param integer $premiumID an id from PremiumSite * @return null */ function send($premiumID) { if(empty($premiumID)) { $this->Session->setFlash('There was a problem, please try again.', true, array('class' => 'error')); $this->redirect(array('controller' => 'premium_sites', 'action' => 'index')); } $data = $this->PaypalNotification->PremiumSite->find('first', array('conditions' => array('PremiumSite.id' => $premiumID), 'recursive' => 0)); if($data['PremiumSite']['type'] == '0') { //Subscription $paypalData = array( 'cmd' => '_xclick-subscriptions', 'business'=> '', 'notify_url' => '', 'return' => '', 'cancel_return' => '', 'item_name' => '', 'item_number' => $premiumID, 'currency_code' => 'USD', 'no_note' => '1', 'no_shipping' => '1', 'a3' => $data['PremiumSite']['cost'], 'p3' => '1', 't3' => 'W', 'src' => '1', 'sra' => '1' ); if($data['Site']['is_premium_used'] == '0') { //Apply two week trial if unused $trialData = array( 'a1' => '0', 'p1' => '2', 't1' => 'W', ); $paypalData = array_merge($paypalData, $trialData); } } else { //Upfront payment $paypalData = array( 'cmd' => '_xclick', 'business'=> '', 'notify_url' => '', 'return' => '', 'cancel_return' => '', 'item_name' => '', 'item_number' => $premiumID, 'currency_code' => 'USD', 'no_note' => '1', 'no_shipping' => '1', 'amount' => $data['PremiumSite']['cost'], ); } $this->layout = null; $this->set('data', $paypalData); } /** * IPN Callback from Paypal. Validates data, inserts it * into the db and triggers __processTransaction() * * @return null */ function process() { //Original code from http://www.studiocanaria.com/articles/paypal_ipn_controller_for_cakephp //Have we been sent an IPN here... 
if (!empty($_POST)) { //...we have so add 'cmd' 'notify-validate' to a transaction variable $transaction = 'cmd=_notify-validate'; //and add everything paypal has sent to the transaction foreach ($_POST as $key => $value) { $value = urlencode(stripslashes($value)); $transaction .= "&$key=$value"; } //create headers for post back $header = "POST /cgi-bin/webscr HTTP/1.0\r\n"; $header .= "Content-Type: application/x-www-form-urlencoded\r\n"; $header .= "Content-Length: " . strlen($transaction) . "\r\n\r\n"; //If this is a sandbox transaction then 'test_ipn' will be set to '1' if (isset($_POST['test_ipn'])) { $server = 'www.sandbox.paypal.com'; } else { $server = 'www.paypal.com'; } //and post the transaction back for validation $fp = fsockopen('ssl://' . $server, 443, $errno, $errstr, 30); //Check we got a connection and response... if (!$fp) { //...didn't get a response so log error in error logs $this->log('HTTP Error in PaypalNotifications::process while posting back to PayPal: Transaction=' . $transaction); } else { //...got a response, so we'll through the response looking for VERIFIED or INVALID fputs($fp, $header . $transaction); while (!feof($fp)) { $response = fgets($fp, 1024); if (strcmp($response, "VERIFIED") == 0) { //The response is VERIFIED so format the $_POST for processing $notification = array(); //Minor change to use item_id as premium_site_id $notification['PaypalNotification'] = array_merge($_POST, array('premium_site_id' => $_POST['item_number'])); $this->PaypalNotification->save($notification); $this->__processTransaction($this->PaypalNotification->id); } else if (strcmp($response, "INVALID") == 0) { //The response is INVALID so log it for investigation $this->log('Found Invalid:' . 
$transaction); } } fclose($fp); } } //Redirect $this->redirect('/'); } /** * Enables premium site after payment * * @param integer $id uses id from PaypalNotification * @return null */ function __processTransaction($id) { $transaction = $this->PaypalNotification->find('first', array('conditions' => array('PaypalNotification.id' => $id), 'recursive' => 0)); $txn_type = $transaction['PaypalNotification']['txn_type']; if($txn_type == 'subscr_signup' || $transaction['PaypalNotification']['payment_status'] == 'Completed') { //New subscription or payment $data = array( 'PremiumSite' => array( 'id' => $transaction['PremiumSite']['id'], 'is_active' => '1', 'is_paid' => '1' ), 'Site' => array( 'id' => $transaction['PremiumSite']['site_id'], 'is_premium' => '1' ) ); //Mark trial used only on subscriptions if($txn_type == 'subscr_signup') $data['Site']['is_premium_used'] = '1'; $this->PaypalNotification->PremiumSite->saveAll($data); } elseif($txn_type == 'subscr-cancel' || $txn_type == 'subscr-eot') { //Subscription cancellation or other problem $data = array( 'PremiumSite' => array( 'id' => $transaction['PremiumSite']['id'], 'is_active' => '0', ), 'Site' => array( 'id' => $transaction['PremiumSite']['site_id'], 'is_premium' => '0' ) ); $this->PaypalNotification->PremiumSite->saveAll($data); } } /** * Used for testing * * @return null */ function index() { $this->__processTransaction('3'); } } ``` /views/paypal\_notifications/send.ctp Sends the user to Paypal along with all the necessary data ``` echo "<html>\n"; echo "<head><title>Processing Payment...</title></head>\n"; echo "<body onLoad=\"document.form.submit();\">\n"; echo "<center><h3>Redirecting to paypal, please wait...</h3></center>\n"; echo $form->create(null, array('url' => 'https://www.sandbox.paypal.com/cgi-bin/webscr', 'type' => 'post', 'name' => 'form')); foreach ($data as $field => $value) { //Using $form->hidden sends in the cake style, data[PremiumSite][whatever] echo "<input type=\"hidden\" name=\"$field\" 
value=\"$value\">"; } echo $form->end(); echo "</form>\n"; echo "</body></html>\n"; ```
**Lesson 1**: Don't use PHP's superglobals * `$_POST` = `$this->params['form'];` * `$_GET` = `$this->params['url'];` * `$_GLOBALS` = `Configure::write('App.category.variable', 'value');` * `$_SESSION` (view) = `$session->read();` (helper) * `$_SESSION` (controller) = `$this->Session->read();` (component) * `$_SESSION['Auth']['User']` = `$this->Auth->user();` Replacements for `$_POST`: ``` <?php ... //foreach ($_POST as $key => $value) { foreach ($this->params['form'] as $key => $value) { ... //if (isset($_POST['test_ipn'])) { if (isset($this->params['form']['test_ipn'])) { ... ?> ``` **Lesson 2**: Views are for sharing (with the user) Code documented "Compiles premium info and send the user to Paypal" doesn't send the user to PayPal. Are you redirecting in the view? ``` <?php function redirect($premiumId) { ... $this->redirect($url . '?' . http_build_query($paypalData), 303); } ``` Redirect at the end of your controller and delete the view. :) **Lesson 3**: Data manipulation belongs in model layer ``` <?php class PremiumSite extends AppModel { ... function beforeSave() { if ($this->data['PremiumSite']['type'] == "1") { $cost = Configure::read('App.costs.premium'); $numberOfWeeks = ((int) $this->data['PremiumSite']['length']) + 1; $timestring = String::insert('+:number weeks', array( 'number' => $numberOfWeeks, )); $expiration = date('Y-m-d H:i:s', strtotime($timestring)); $this->data['PremiumSite']['upfront_weeks'] = $weeks; $this->data['PremiumSite']['upfront_expiration'] = $expiration; $this->data['PremiumSite']['cost'] = $cost * $numberOfWeeks; } else { $this->data['PremiumSite']['cost'] = $cost; } return true; } ... } ?> ``` **Lesson 4**: Models aren't just for database access Move code documented "Enables premium site after payment" to PremiumSite model, and call it after payment: ``` <?php class PremiumSite extends AppModel { ... 
function enable($id) { $transaction = $this->find('first', array( 'conditions' => array('PaypalNotification.id' => $id), 'recursive' => 0, )); $transactionType = $transaction['PaypalNotification']['txn_type']; if ($transactionType == 'subscr_signup' || $transaction['PaypalNotification']['payment_status'] == 'Completed') { //New subscription or payment ... } elseif ($transactionType == 'subscr-cancel' || $transactionType == 'subscr-eot') { //Subscription cancellation or other problem ... } return $this->saveAll($data); } ... } ?> ``` You would call from controller using `$this->PaypalNotification->PremiumSite->enable(...);` but we aren't going to do that, so let's mix it all together... **Lesson 5**: Datasources are cool Abstract your PayPal IPN interactions into a datasource which is used by a model. Configuration goes in `app/config/database.php` ``` <?php class DATABASE_CONFIG { ... var $paypal = array( 'datasource' => 'paypal_ipn', 'sandbox' => true, 'api_key' => 'w0u1dnty0ul1k3t0kn0w', } ... } ?> ``` Datasource deals with web service requests (`app/models/datasources/paypal_ipn_source.php`) ``` <?php class PaypalIpnSource extends DataSource { ... var $endpoint = 'http://www.paypal.com/'; var $Http = null; var $_baseConfig = array( 'sandbox' => true, 'api_key' => null, ); function _construct() { if (!$this->config['api_key']) { trigger_error('No API key specified'); } if ($this->config['sandbox']) { $this->endpoint = 'http://www.sandbox.paypal.com/'; } $this->Http = App::import('Core', 'HttpSocket'); // use HttpSocket utility lib } function validate($data) { ... $reponse = $this->Http->post($this->endpoint, $data); .. return $valid; // boolean } ... } ?> ``` Let the model do the work (`app/models/paypal_notification.php`) Notifications are only saved if they are valid, sites are only enabled if the notification is saved ``` <?php class PaypalNotification extends AppModel { ... 
function beforeSave() { $valid = $this->validate($this->data); if (!$valid) { return false; } //Minor change to use item_id as premium_site_id $this->data['PaypalNotification']['premium_site_id'] = $this->data['PaypalNotification']['item_number']; /* $this->data['PaypalNotification'] = am($this->data, // use shorthand functions array('premium_site_id' => $this->data['item_number'])); */ return true; } ... function afterSave() { return $this->PremiumSite->enable($this->id); } ... function validate($data) { $paypal = ConnectionManager::getDataSource('paypal'); return $paypal->validate($data); } ... ?> ``` Controllers are dumb. (`app/controllers/paypal_notifications_controller.php`) "Are you a post? No? .. then I don't even exist." Now this action just shouts, "I save posted PayPal notifications!" ``` <?php class PaypalNotificationsController extends AppModel { ... var $components = array('RequestHandler', ...); ... function callback() { if (!$this->RequestHandler->isPost()) { // use RequestHandler component $this->cakeError('error404'); } $processed = $this->PaypalNotification->save($notification); if (!$processed) { $this->cakeError('paypal_error'); } } ... } ?> ``` **Bonus Round**: Use provided libraries instead of native PHP Refer to previous lessons for examples of the following: * `String` instead of `sprintf` * `HttpSocket` instead of `fsock` functions * `RequestHandler` instead of manual checks * `am` instead of `array_merge` These can prevent coding errors, reduce amount of code and/or increase readability.
Except for all the stuff noted by deizel (great post btw), remember one of the basic cake principles: **fat models, skinny controllers**. You can check [this example](http://www.littlehart.net/atthekeyboard/2007/04/27/fat-models-skinny-controllers/), but the basic idea is to put all your data-mangling stuff in your models. Your controller should (mostly) be just a link between your models and views. Your PremiumSitesController::index() is a perfect example of something that should be somewhere in your model (as pointed out by deizel). [Chris Hartjes](http://www.littlehart.net/atthekeyboard/) has also written a [book about refactoring](http://littlehart.net/book/), you might want to take a look at it if you really want to learn (it's not free, but it's cheap though). Also, [Matt Curry](http://www.pseudocoder.com/) has one, with a cool name: [Super Awesome Advanced CakePHP Tips](http://www.pseudocoder.com/free-cakephp-book/), and it's completely free for download. Both make for a good read. I'd also like to plug my own article about cake which I like to believe is important for code quality in cake: [Code formatting and readability](http://lecterror.com/articles/view/code-formatting-and-readability). Though I understand if people disagree.. :-)
Improving quality of code in CakePHP
[ "", "php", "cakephp", "refactoring", "" ]
I have a MSSQL Server table like this: ``` id (auto-increment) amount date account_id ``` Data are inserted throughout the day. I now need a view to get the opening and closing amounts for each account for each day. My trouble is creating a fast query to access both minimum and maximum values. Creating a view to access just the minimum is fine using an in statement, however getting both minimum and maximum is tricky. I've tried using a with clause, but the query is incredibly slow. BTW I am mapping the view to hibernate, so stored procedures and functions won't work the same way (that I know of). **Update** I guess my question wasn't clear from the responses I received. I want to get the opening and closing balances for each account. Maximum and minimum referred to getting the max and min (id) when grouped by date and account\_id. I want to get the amount when the id is equal to the maximum id (closing balance) and the amount when the id is equal to the minimum id (opening balance) for each account for each day.
Essentially I need the following query, but the with statement causes it to run slowly (ab2 is joined to the minimum id for the opening balance, ab to the maximum id for the closing balance): ``` with x as ( select MAX(ab.id) as maxId, MIN(ab.id) as minId from Balance ab group by ab.account_id, dbo.Get_PeriodDateFromDatetime(ab.StatementDate) ) select ab2.Amount as openingBalance, ab.Amount as closingBalance from Balance ab, Balance ab2, x where ab.id = x.maxId and ab2.id = x.minId ```
This does the work, don't have enough data to evaluate performance: ``` create table #accounts ( id integer identity, account_id integer, amount decimal(18,3), tran_date datetime ) go insert into #accounts values (1,124.56,'06/01/2009 09:34:56'); insert into #accounts values (1,125.56,'06/01/2009 10:34:56'); insert into #accounts values (1,126.56,'06/01/2009 11:34:56'); insert into #accounts values (2,124.56,'06/01/2009 09:34:56'); insert into #accounts values (2,125.56,'06/01/2009 10:34:56'); insert into #accounts values (2,126.56,'06/01/2009 11:34:56'); insert into #accounts values (3,124.56,'06/01/2009 09:34:56'); insert into #accounts values (3,125.56,'06/01/2009 10:34:56'); insert into #accounts values (3,126.56,'06/01/2009 11:34:56'); insert into #accounts values (4,124.56,'06/01/2009 09:34:56'); insert into #accounts values (4,125.56,'06/01/2009 10:34:56'); insert into #accounts values (4,126.56,'06/01/2009 11:34:56'); insert into #accounts values (1,124.56,'06/02/2009 09:34:56'); insert into #accounts values (1,125.56,'06/02/2009 10:34:56'); insert into #accounts values (1,126.56,'06/02/2009 11:34:56'); insert into #accounts values (2,124.56,'06/02/2009 09:34:56'); insert into #accounts values (2,125.56,'06/02/2009 10:34:56'); insert into #accounts values (2,126.56,'06/02/2009 11:34:56'); insert into #accounts values (3,124.56,'06/02/2009 09:34:56'); insert into #accounts values (3,125.56,'06/02/2009 10:34:56'); insert into #accounts values (3,126.56,'06/02/2009 11:34:56'); insert into #accounts values (4,124.56,'06/02/2009 09:34:56'); insert into #accounts values (4,125.56,'06/02/2009 10:34:56'); insert into #accounts values (4,126.56,'06/02/2009 11:34:56'); go select ranges.tran_day transaction_day, ranges.account_id account_id, bod.amount bod_bal, eod.amount eod_bal from -- Subquery to define min/max records per account per day ( select account_id, cast(convert(varchar(10),tran_date,101) as datetime) tran_day, max(id) max_id, min(id) min_id from 
#accounts group by account_id, cast(convert(varchar(10),tran_date,101) as datetime) ) ranges -- Beginning of day balance JOIN #accounts bod on (bod.id = ranges.min_id) -- End of day balance JOIN #accounts eod on (eod.id = ranges.max_id) go ``` If you need better performance, store the subquery to a temp table first and put an index on it for the joins ... that might speed it up a bit.
SQL SERVER: View to get minimum and maximum values from a table
[ "", "sql", "" ]
I noticed a strange behaviour in my Import Service today when I tried to import multiple data records. When I do it like this, all data records are imported and the auto-incremented value is correct ([see screenshot](http://i40.tinypic.com/j7x3xw.gif)): ``` public void Create(List<Property> properties) { foreach (Property prop in properties) { dbc.Property.InsertOnSubmit(prop); dbc.SubmitChanges(); } } ``` When I try it like this, only the first data record gets a correct auto-incremented value ([see screenshot](http://i41.tinypic.com/wvzhgw.gif)): ``` foreach (Property prop in properties) { dbc.Property.InsertOnSubmit(prop); } dbc.SubmitChanges(); ``` Same here: ``` dbc.Property.InsertAllOnSubmit(properties); dbc.SubmitChanges(); ``` Does anybody have an idea why it's like that? All three variants should import all data records according to my understanding, but the missing auto-incremented values indicate it's not that way. **[EDIT]** Added two screenshots.
I had the same problem and it turned out the issue was due to overriding Equals on the mapped class. My Equals method was only comparing the primary key field which was an identity field. Of course when the objects are new, all identities are 0. So when InsertAllOnSubmit was called, it thought that all new objects were the same and basically ignored every one but the first.
Not quite sure why the 2nd variation doesn't work, however, shouldn't the last one be: ``` dbc.Property.InsertAllOnSubmit(properties); dbc.SubmitChanges(); ``` **Edit** For the second loop try: ``` foreach (Property prop in properties) { var newProp = new Property(); newProp = prop; dbc.Property.InsertOnSubmit(newProp); } dbc.SubmitChanges(); ``` For the last solution try: ``` dbc.Property.InsertAllOnSubmit(properties.ToList()); dbc.SubmitChanges(); ```
InsertAllOnSubmit only inserts first data record
[ "c#", "sql-server", "linq-to-sql", "insert" ]
There are a lot of tutorials that teach how to validate XML against a schema. But now I want to validate XML syntax only, not against a schema. Meaning I just want to check whether the XML is well-formed, that is, whether there are any opening or closing tags that are not matched properly. Is there any way I can do that in .NET?
Or if you're on .NET 3.5, you can use [`XElement.Load()`](http://msdn.microsoft.com/en-us/library/system.xml.linq.xelement.load.aspx). > LINQ to XML's loading functionality is > built upon [XmlReader](http://msdn.microsoft.com/en-us/library/system.xml.xmlreader.aspx). Therefore, you > might catch any exceptions that are > thrown by the [XmlReader.Create](http://msdn.microsoft.com/en-us/library/system.xml.xmlreader.create.aspx) > overload methods and the [XmlReader](http://msdn.microsoft.com/en-us/library/system.xml.xmlreader.aspx) > methods that read and parse the > document.
Just open it in an XmlReader and read to the end. If it makes it without throwing an exception, it's well formed.
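The read-to-the-end approach is easy to demonstrate outside .NET as well; here is the same idea sketched in Python with the stdlib expat parser (the C# version would loop `while (reader.Read()) { }` inside a try/catch for `XmlException`):

```python
import xml.parsers.expat

def is_well_formed(xml_text: str) -> bool:
    """Parse the whole document; any structural error raises ExpatError."""
    parser = xml.parsers.expat.ParserCreate()
    try:
        parser.Parse(xml_text, True)  # True: this is the final (only) chunk
        return True
    except xml.parsers.expat.ExpatError:
        return False
```

Well-formed input such as `<a><b/></a>` parses to the end and returns `True`; a mismatched tag like `<a><b></a>` raises inside the parser and returns `False`, with no schema involved at any point.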
Validate XML Syntax Only in C#
[ "c#", "xml" ]
In HTML and JavaScript, I can use keyup, focus, and blur to detect most of the content changes in a text input. However, if the user does a copy and paste into the text input, how do I capture this change? The issue here is that the input is already in focus when the user pastes into it.
You could capture the paste event (<http://www.quirksmode.org/dom/events/cutcopypaste.html>) ``` $("#myinput").bind("paste",function(){ //code here }) ```
``` $("#myinput").change(function(){ // whatever you need to be done on change of the input field }); // Trigger change if the user type or paste the text in the field $("#myinput").keyup(function(){ $(this).change(); }); // if you're using a virtual keyboard, you can do : $(".key").live('click',function(){ $("#myinput").val($("#myinput").val()+$(this).val()); $("#myinput").change(); // Trigger change when the value changes }); ```
detecting change in a text input box using jquery/javascript
[ "javascript", "jquery" ]
I'm currently building a toy assembler in C# (going through [The Elements Of Computing Systems](http://www1.idc.ac.il/tecs/) book). I need to match a very simple pattern, and I thought this would be a good time to learn some regex, but I'm struggling! In the following examples I'd just like to match the letters before the '=' M=A D=M MD=A A=D AD=M AMD=A I've come up with the following: ``` ([A-Z]{1,3})= ``` However, this also matches the '=' which I don't want. I also tried: ``` ([A-Z^\=]{1,3})= ``` But I still have the same problem - it matches the '=' sign as well. I'm using [this site](http://www.nregex.com/nregex/default.aspx) to test my regexes. Any help would be really appreciated. Thank you in advance.
You need a [positive lookahead assertion](http://msdn.microsoft.com/en-us/library/bs2twtah.aspx#zerowidth_positive_lookahead_assertion): ``` ([A-Z]{1,3})(?==) ```
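The lookahead syntax is the same in most PCRE-style engines, so you can verify the behaviour outside .NET too; for example, in Python:

```python
import re

# Capture 1-3 uppercase letters, but only when immediately followed by '='.
# The (?==) lookahead checks for '=' without consuming it.
pattern = re.compile(r"([A-Z]{1,3})(?==)")

for line in ["M=A", "MD=A", "AMD=A"]:
    print(line, "->", pattern.match(line).group(1))
```

Because the lookahead is zero-width, the match itself ends before the `=`, which is exactly the behaviour wanted here.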
What you want is called a zero-width lookahead assertion. You do: ``` (Match this and capture)(?=before this) ``` In your case, this would be: ``` ([A-Z]{1,3})(?==) ```
Regex - Match a Pattern Before a Character
[ "c#", "regex" ]
I want to encrypt a string and embed it in a URL, so I want to make sure the encrypted output isn't bigger than the input. Is AES the way to go?
It's impossible to create any algorithm which will always create a smaller output than the input, but can reverse any output back to the input. If you allow "no bigger than the input" then basically you're just talking about isomorphic algorithms where they're *always* the same size as the input. This is due to the [pigeonhole principle](http://en.wikipedia.org/wiki/Pigeonhole_principle). Added to that, encryption usually has a little bit of padding (e.g. "to the nearest 8 bytes, rounded up" - in AES, that's *16* bytes). Oh, and on top of that you've got the issue of converting between text and binary. Encryption algorithms usually work in binary, but URLs are in text. Even if you assume ASCII, you could end up with an encrypted binary value which isn't ASCII. The simplest way of representing arbitrary binary data in text is to use base64. There are other alternatives which would be highly fiddly, but the general "convert text to binary, encrypt, convert binary to text" pattern is the simplest one.
Simple answer is no. Any symmetric encryption algorithm (AES included) will produce an output that is at minimum the same size, but often slightly larger. As Jon Skeet points out, usually because of padding or alignment. Of course you could compress your string using zlib and encrypt, but you'd need to decompress after decrypting. Disclaimer - compressing the string with zlib will not **guarantee** it comes out smaller though
AES output, is it smaller than input?
[ "c#", "encryption" ]
What are the advanced features in SQL Server 2008 over SQL Server 2005, particularly in T-SQL?
The big one for me, although it's not really T-SQL related, is intellisense. About time too :) As for the language... T-SQL finally got shortcut assignment in 2008: ``` SET @var *= 1.18 ``` The `MERGE` statement allows all sorts of modification goodness, based on the results of joining tables together. There are a bunch of GROUP BY enhancements, like GROUPING SETS, and operations on cubes. There are new datatypes to play with * hierarchyid, useful in self-referencing datasets * date and time can be treated separately * geography and geometry, for GIS systems and other geographical applications There are a few others too. See [the official new features page](http://msdn.microsoft.com/en-us/library/cc645577.aspx) for more.
There are new features like- * Compound Assignment Operators i.e. +=, -= etc. * Increased size support for user defined data types. * Four new date and time data types. It is all covered here - <http://technet.microsoft.com/en-us/library/cc721270.aspx> cheers
TSQL Features in SQL 2008 Vs SQL 2005
[ "sql", "sql-server-2005", "t-sql", "sql-server-2008" ]
I have a multi-project ASP.NET + C# solution. I was planning on writing a quick (separate) VB app that yanks out all DLLs, ASPX, Config, etc files and slaps them into a 7zip file for deployment to the test server. **Is there a more elegant solution than this?** My development environment is VWD2008Express.
I suggest that you look at CruiseControl.NET and NAnt, which will allow you to take care of all of this type of stuff easily. No need for zipping to move the files (though you can zip and create a backup!). You can use msbuild to do the pre-compile. Rick Strahl wrote a great tool that you can tap into that does this for you...tells you what the appropriate msbuild commands are (<http://www.west-wind.com/presentations/AspNetCompilation/AspNetCompilation.asp>).
If your version lacks setup projects, you can still set up the constituent projects to output their binaries to the same directory.
ASP.NET + C# -- Deploying a multi-project solution
[ "c#", ".net", "asp.net", "deployment" ]
I have a web page in PHP which displays all records in a table. I want to add checkboxes to all rows so that a user can check a box to select a row and then submit the page. When the page is submitted I want to enumerate all checkboxes and check whether they are checked or not. How can I do this?
**Creating the form** You can generate the HTML as follows: ``` <form [action, method etc]> <table> <?php foreach($dataSet as $dataRow) : ?> <tr> <td> <input type="checkbox" name="dataRow[]" value="<?=$dataRow['id']?>"/> </td> [Additional details about datarow here] </tr> <?php endforeach; ?> </table> </form> ``` **AFTER POST** look into *$\_POST['dataRow']* : this will be an array whose values are the IDs of the checked rows, so using [array\_values](https://www.php.net/manual/en/function.array-values.php) on *$\_POST['dataRow']* will give you all the IDs of the selected rows: ``` <?php $checkedRows = array_values($_POST['dataRow']); foreach($checkedRows as $row) { // Do whatever you want to do with the selected row } ```
You'll create your checkboxes like this: ``` <input name="rows[]" value="uniqueIdForThisRow" type="checkbox" /> <input name="rows[]" value="anotherId" type="checkbox" /> ``` Then you can loop through them like this: ``` <?php // $_POST['rows'] contains the values of all checked checkboxes, like: // array('uniqueIdForThisRow', 'anotherId', ...) foreach ($_POST['rows'] as $row) { if ($row == 'uniqueIdForThisRow') { // do something } } ?> ``` [PHP docs on dealing with forms](http://jp.php.net/manual/en/language.variables.external.php), see especially Example #3.
Enumerate all Check Box in PHP
[ "php" ]
A moment ago I saw a request for finding the MAC address of a remote host. An answer was that the MAC address is always sent as part of the TCP/IP protocol. How would I go about retrieving this information from an ASP.NET C# application? See: [Reference to sister-post](https://stackoverflow.com/questions/1092379/want-to-get-mac-address-of-remote-pc)
Any such answer is false. The MAC address of an adapter is only available on the same network segment. Not on the other side of a router.
If your remote device is [SNMP](http://en.wikipedia.org/wiki/Simple_Network_Management_Protocol)-enabled you can query it for its [ARP](http://en.wikipedia.org/wiki/Address_Resolution_Protocol) cache. That will have the MAC address in it. See [this FAQ entry](http://www.cisco.com/en/US/tech/tk648/tk362/technologies_q_and_a_item09186a0080094bc0.shtml#q14b) for more info.
Getting the MAC address of the remote host
[ "c#", ".net", "asp.net", "mac-address" ]
I have this mail script I have to run a few times. To start the script I will use cron, but the script has to run 2 or 3 more times (an hour apart). What's the best way to do this? Use the sleep command for an hour, or, at the end of the script, place some code so that the script will create a new cron job to run itself after an hour? Thanks
Unless there's some cost savings in keeping the script running in memory, you're better off using cron to invoke it every hour, as needed. ``` 0 0-2 * * * /usr/local/bin/mail-script.php ``` You can choose multiple hours using the - syntax, or the comma syntax: ``` 0 0,1,2,3 * * * /usr/local/bin/mail-script.php ``` If it needs to maintain some form of state, use a temporary file to keep saved state. Do: ``` > man 5 crontab ``` To see if your \*nix handles the above cases. Finally, unless you know the script has to run *only* 2-3 times, you're better off putting the logic about whether to "run or not to run" in the PHP script itself, and then just run it every hour.
One advantage of using sleep() is that it could be more portable. For example, on many systems I work with, users are not allowed to have their own cron jobs - so writing your program to take care of its own timer-ness might be an advantage. An alternative to sleep() might be using [SIGALRM](https://www.php.net/manual/en/function.pcntl-alarm.php) (so your script catches an interrupt and executes code at a certain interval - when that interrupt is thrown.) I mean, I'd recommend using cron - but here are some alternatives!
To use sleep() or cron job
[ "php", "performance", "cron" ]
My machine has two audio inputs: a mic in that I use for gaming, and a line in that I use for guitar. When using one it's important that the other be muted to remove hiss/static, so I was hoping to write a small script that would toggle which one was muted (it's fairly inconvenient to click through the tray icon, switch to my input device, mute and unmute). I thought perhaps I could do this with [pywin32](http://python.net/crew/mhammond/), but [everything](https://stackoverflow.com/questions/255419/how-can-i-mute-unmute-my-sound-from-powershell) I could [find](http://www.geekpedia.com/tutorial176_Get-and-set-the-wave-sound-volume.html) seemed specific to setting the output volume rather than input, and I'm not familiar enough with win32 to even know where to look for better info. Could anybody point me in the right direction?
I had a similar problem and couldn't figure out how to use Windows API's to do what I wanted. I ended up just automating the GUI with AutoIt. I think that will be the fastest and easiest solution (albeit a "hacky" one). As I [answered](https://stackoverflow.com/questions/1084514/make-your-program-use-a-gui/1092401#1092401) earlier today, you can use AutoIT from within Python.
***Disclaimer:*** I'm not a windows programming guru by any means...but here's my best guess Per the [pywin32 FAQ](http://python.net/crew/mhammond/win32/FAQ.html): > **How do I use the exposed Win32 functions to do xyz?** > > In general, the trick is to not > consider it a Python/PyWin32 question > at all, but to search for > documentation or examples of your > problem, regardless of the language. > This will generally give you the > information you need to perform the > same operations using these > extensions. The included > documentation will tell you the > arguments and return types of the > functions so you can easily determine > the correct way to "spell" things in > Python. Sounds like you're looking to control the "endpoint device" volumes (i.e. your sound card / line-in). Here's the [API reference](http://msdn.microsoft.com/en-us/library/dd370832(VS.85).aspx) in that direction. [Here](http://msdn.microsoft.com/en-us/library/dd370825(VS.85).aspx)'s a slightly broader look at controlling audio devices in windows if the previous wasn't what you're looking for. [Here](http://blog.xploiter.com/c-and-aspnet/muting-audio-channels-mixer-control-api/)'s a blog entry from someone who did what you're trying to do in C# (I know you specified python, but you might be able to extract the correct API calls from the code). Good luck! And if you do get working code, I'm interested to see it.
Windows XP - mute/unmute audio programmatically in Python
[ "python", "windows", "winapi", "audio", "pywin32" ]
Is it possible to generate a random number between 2 doubles? Example: ``` public double GetRandomNumber(double minimum, double maximum) { return Random.NextDouble(minimum, maximum) } ``` Then I call it with the following: ``` double result = GetRandomNumber(1.23, 5.34); ``` Any thoughts would be appreciated.
Yes. Random.NextDouble returns a double between 0 and 1. You then multiply that by the range you need to go into (difference between maximum and minimum) and then add that to the base (minimum). ``` public double GetRandomNumber(double minimum, double maximum) { Random random = new Random(); return random.NextDouble() * (maximum - minimum) + minimum; } ``` Real code should have random be a static member. This will save the cost of creating the random number generator, and will enable you to call GetRandomNumber very frequently. Since we are initializing a new RNG with every call, if you call quick enough that the system time doesn't change between calls the RNG will get seeded with the exact same timestamp, and generate the same stream of random numbers.
Johnny5 suggested creating an extension method. Here's a more complete code example showing how you could do this: ``` public static class RandomExtensions { public static double NextDouble( this Random random, double minValue, double maxValue) { return random.NextDouble() * (maxValue - minValue) + minValue; } } ``` Now you can call it as if it were a method on the `Random` class: ``` Random random = new Random(); double value = random.NextDouble(1.23, 5.34); ``` Note that you should not create lots of new `Random` objects in a loop because this will make it likely that you get the same value many times in a row. If you need lots of random numbers then create one instance of `Random` and re-use it.
Random Number Between 2 Double Numbers
[ "c#", "random" ]
I'm sure this is an easy question, but I don't have an answer for it. Here's the scenario and the question. I have an array that was stored in a particular format. The format contains a Header record with multiple detail records following it. The header of the record tells me what TypeCode was used to store the data, for instance Int32. I have a routine that takes a byte[] array and converts the byte data back to its proper format in C#. This routine needs the proper number of bytes to make the conversion successful. Q. So how can I get the length in bytes from the given TypeCode for passing to the Convert function without having to hardcode the length for every type?
Given that `TypeCode` is just an enumeration of a fixed set of values, you could easily hard-code a dictionary of them, e.g. ``` private static readonly Dictionary<TypeCode,int> TypeCodeLength = new Dictionary<TypeCode,int> { { TypeCode.Int32, 4 }, { TypeCode.Int64, 8 }, { TypeCode.Char, 2 }, // etc }; ``` (An equivalent solution would be to use a switch/case statement - they really are equivalent if you're just including values.) Some, like `string` and `object`, will be variable though - and others will depend on how you're encoding things like `DateTime`. Unless this is using a standard conversion, nothing in the framework is going to be able to give you the lengths. I know hard-coding is generally frowned upon, but in this case it's only in one place and is going to be the simplest option.
According to [How to create Type from TypeCode...](http://www.pcreview.co.uk/forums/thread-2148838.php), you can't (without a exhaustive switch). Of course the other direction (Type->TypeCode) is trivial.
How to Get the length of a TypeCode
[ "c#", ".net", "type-conversion" ]
We're deploying a WAR file into Tomcat 5.5 and it works fine if WEB-INF\classes contains .class files, but if we move the .jar file containing those .class files into WEB-INF\lib, we get a java.lang.NoSuchMethodError at runtime, even though the class file inside the .jar contains the class, and the method does exist! Any help on this would be appreciated.
This could be caused by a class conflict. Make sure that there isn't an older version of the class somewhere (Tomcat's shared folder, WEB-INF/classes, WEB-INF/lib). If this is the case, you practically can't know which class Tomcat will load. If it picks one without the method, the exception you are experiencing will occur.
Since you are getting a NoSuchMethodError, and not a ClassNotFoundError, it means that you have an old version of the class somewhere (outside of the jar file). You need to find and remove it.
Tomcat deployment problem using jar file instead of classes
[ "java", "tomcat", "jar", "war" ]
I have an IRepository interface that inherits from `IRepository<TObject>`. I also have a SqlRepository class that inherits from `SqlRepository<TObject>`, which in turn implements `IRepository<TObject>`. Why can't I instantiate an instance of SqlRepository as an IRepository? ``` public class MyObject : IObject { ... } public interface IRepository<TObject> where TObject : IObject, new() { ... } public interface IRepository : IRepository<MyObject> { } public class SqlRepository<TObject> : IRepository<TObject> where TObject : IObject, new() { ... } public class SqlRepository : SqlRepository<MyObject> { } public class Controller { private IRepository _repository; public Controller() { _repository = new SqlRepository(); } } ``` The example above fails when trying to assign a new SqlRepository to \_repository in the Controller class, with the following error message. ``` Argument '1': cannot convert from 'SqlRepository' to 'IRepository' ``` Which basic principle of inheritance have I failed to grasp? Please help.
Because `IRepository` is an `IRepository<MyObject>` and not the other way around. Basically: `SqlRepository` is a `SqlRepository<MyObject>` which is an `IRepository<MyObject>`. To make that work, you should either inherit `IRepository<TObject>` from `IRepository` or make `SqlRepository` implement `IRepository` depending in your intention. This issue is not generics-specific at all. Assume: ``` interface IMovable { } interface IDrivable : IMovable { } class Ball : IMovable { } ``` It's obvious that `Ball` is not an `IDrivable` while it is an `IMovable`.
`SqlRepository` implements `IRepository<T>`, not `IRepository`.
Cannot convert Generic Class to Generic Interface, why?
[ "c#", "inheritance", "repository" ]
Just a quick question. Say I call a method like so ``` mysql_pconnect("server","tator_w","password") or die("Unable to connect to SQL server"); ``` Can I have the 'die' call a method rather than display a text message? If so, how?
You would be better off using an if statement rather than relying on short-circuit evaluation if you want to do anything more complicated, e.g.: ``` if (!mysql_pconnect("server","tator_w","password")) { call_a_function(); //some other stuff die(); //if you still want to die } ```
[`register_shutdown_function()`](http://www.php.net/manual/en/function.register-shutdown-function.php) It lets you register a function that will be called when the system exits. Then you can simply `die()` or `exit()` without a parameter, which will call your method. (you may also find [set\_error\_handler()](http://php.net/manual/en/function.set-error-handler.php) interesting, if slightly unrelated)
PHP Die question
[ "php", "die" ]
Has anyone used a Java-based library for generating Excel documents? Preferably with support for Excel 2003.
I'm currently working with Apache POI, ( <http://poi.apache.org/index.html> ) which is very comprehensive. The 2003 file format version is still in beta, but seems to work well enough. I'm not exercising it's power very much, just straightforward reads and writes of Excel, but it seems reliable.
Whenever I have to do this I ask myself if one big HTML table would be enough. Much of the time it is. You can simply write HTML tags and label the file as .xls. Excel will open it correctly.
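That trick is language-independent, so here is a minimal sketch in Python for illustration (a Java version would build the same string and write it with a `FileWriter`). The file name and columns are made up; note that recent Excel versions show an extension-mismatch warning before opening such a file:

```python
from pathlib import Path

def rows_to_xls_html(rows, header):
    """Build a bare HTML table; Excel will open it if saved under a .xls name."""
    cells = lambda tag, values: "".join(f"<{tag}>{v}</{tag}>" for v in values)
    body = "".join(f"<tr>{cells('td', row)}</tr>" for row in rows)
    return f"<table><tr>{cells('th', header)}</tr>{body}</table>"

html = rows_to_xls_html([("Alice", 30), ("Bob", 25)], ("Name", "Age"))
Path("report.xls").write_text(html)  # hypothetical output file
```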
Generating excel documents programmatically
[ "java", "excel" ]
Right now I've hard-coded the whole XML file in my Python script and I'm just doing out.write(), but now it's getting harder to manage because I have multiple types of XML files. What is the easiest and quickest way to set up templating so that I can just give the variable names and filename?
You asked for the easiest and quickest, so see this post: <http://blog.simonwillison.net/post/58096201893/simpletemplates> If you want something smarter, take a look [here](https://wiki.python.org/moin/Templating#Templating_Engines).
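If the templates really are as simple as filling a few variables into otherwise fixed XML, the standard library's `string.Template` is enough; no third-party package needed:

```python
from string import Template

tmpl = Template("""<?xml version="1.0"?>
<config>
  <host>$host</host>
  <port>$port</port>
</config>""")

# substitute() fills in the $placeholders, converting values with str()
xml = tmpl.substitute(host="example.com", port=8080)
print(xml)
```

`substitute()` raises `KeyError` if a placeholder is missing, which is handy for catching typos; use `safe_substitute()` if you'd rather leave unknown placeholders untouched.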
**Short answer is:** You should be focusing on, and dealing with, the data (i.e., a Python object) and not the raw XML. **Basic story:** XML is supposed to be a representation of some data, or data set. You don't have a lot of detail in your question about the type of data, what it represents, etc. -- so I'll give you some basic answers. **Python choices:** BeautifulSoup, lxml, and other Python libraries (ElementTree, etc.) make dealing with XML easier. They let me read in, or write out, XML data much more easily than if I'd tried to work directly with the XML in raw form. In the middle of those 2 (input, output) activities, my Python program is dealing with a nice Python object or some kind of parse tree I can walk. You can read data in, create an object from that string, manipulate it and write out XML. **Other choice, Templates:** OK -- maybe you like XML and just want to "template" it so you can populate it with the data. You might be more comfortable with this if you aren't really manipulating the data -- but just representing it for output. And, this is similar to the XML strings you are currently using -- so may be more familiar. Use Cheetah, Jinja, or other template libraries to help. Make a template for the XML file, using that template language. For example, say you just read a list of books from a file or database table. You would pass this list of book objects to the template engine, with a template, and then tell it to write out your XML output.
Example template for these book objects: ``` <?xml version="1.0"?> <catalog> {% for object in object_list %} <book id="{{ object.bookID }}"> <author>{{ object.author_name }}</author> <title>{{ object.title }}</title> <genre>{{ object.genre }}</genre> <price>{{ object.price }}</price> <publish_date>{{ object.pub_date }}</publish_date> <description>{{ object.description }}</description> </book> {% endfor %} </catalog> ``` The template engine would loop through the "object\_list" and output a long XML file with all your books. That would be **much** better than storing raw XML strings, as you currently are. This makes the update & modification of the display of XML separate from the data, data storage, and data manipulation -- making your life easier.
fast and easy way to template xml files in python
[ "python", "xml" ]
I am aware of [pydispatcher](http://pydispatcher.sourceforge.net/), but there must be other event-related packages around for Python. Which libraries are available? I'm not interested in event managers that are part of large frameworks, I'd rather use a small bare-bones solution that I can easily extend.
# PyPI packages As of October 2022, these are the event-related packages available on PyPI, ordered by most recent release date. * [PyDispatcher](https://pypi.org/project/PyDispatcher/) `2.0.6`: Aug 2022 * [blinker](https://pypi.org/project/blinker/) `1.5`: Jun 2022 * [pymitter](https://pypi.org/project/pymitter/) `0.4.0`: June 2022 * [python-dispatch](https://pypi.org/project/python-dispatch/) `0.2.0`: Apr 2022 * [pluggy](https://pypi.org/project/pluggy/) `1.0.0`: August 2021 * [Events](https://pypi.org/project/Events/) `0.4`: October 2020 * [zope.event](https://pypi.org/project/zope.event/) `4.5.0`: Sept 2020 * [RxPy3](https://pypi.org/project/RxPy3/) `1.0.1`: June 2020 * [Louie](https://pypi.org/project/Louie/) `2.0`: Sept 2019 * [PyPubSub](https://pypi.org/project/PyPubSub/) `4.0.3`: Jan 2019 * [pyeventdispatcher](https://pypi.org/project/pyeventdispatcher/) `0.2.3a0`: 2018 * [buslane](https://pypi.org/project/buslane/) `0.0.5`: 2018 * [PyPyDispatcher](https://pypi.org/project/PyPyDispatcher/) `2.1.2`: 2017 * [axel](https://pypi.org/project/axel/) `0.0.7`: 2016 * [dispatcher](https://pypi.org/project/dispatcher/) `1.0`: 2012 * [py-notify](https://pypi.org/project/py-notify/) `0.3.1`: 2008 # There's more That's a lot of libraries to choose from, using very different terminology (events, signals, handlers, method dispatch, hooks, ...). I'm trying to keep an overview of the above packages, plus the techniques mentioned in the answers here. First, some terminology... ## Observer pattern The most basic style of event system is the 'bag of handler methods', which is a simple implementation of the [Observer pattern](http://en.wikipedia.org/wiki/Observer_pattern). Basically, the handler methods (callables) are stored in an array and are each called when the event 'fires'. ## Publish-Subscribe The disadvantage of Observer event systems is that you can only register the handlers on the actual Event object (or handlers list). 
So at registration time the event already needs to exist. That's why the second style of event systems exists: the [publish-subscribe pattern](http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern). Here, the handlers don't register on an event object (or handler list), but on a central dispatcher. Also the notifiers only talk to the dispatcher. What to listen for, or what to publish is determined by 'signal', which is nothing more than a name (string). ## Mediator pattern Might be of interest as well: the [Mediator pattern](https://en.wikipedia.org/wiki/Mediator_pattern). ## Hooks A 'hook' system is usually used in the context of application plugins. The application contains fixed integration points (hooks), and each plugin may connect to that hook and perform certain actions. ## Other 'events' Note: [threading.Event](https://docs.python.org/3.5/library/threading.html#event-objects) is not an 'event system' in the above sense. It's a thread synchronization system where one thread waits until another thread 'signals' the Event object. Network messaging libraries often use the term 'events' too; sometimes these are similar in concept; sometimes not. They can of course traverse thread-, process- and computer boundaries. See e.g. [pyzmq](https://pypi.org/project/pyzmq/), [pymq](https://github.com/thrau/pymq), [Twisted](https://twistedmatrix.com/trac/), [Tornado](https://www.tornadoweb.org/), [gevent](http://www.gevent.org/), [eventlet](http://eventlet.net/). ## Weak references In Python, holding a reference to a method or object ensures that it won't get deleted by the garbage collector. This can be desirable, but it can also lead to memory leaks: the linked handlers are never cleaned up. Some event systems use weak references instead of regular ones to solve this.
# Some words about the various libraries Observer-style event systems: * [zope.event](https://pypi.python.org/pypi/zope.event) shows the bare bones of how this works (see [Lennart's answer](https://stackoverflow.com/a/1092617/1075152)). Note: this example does not even support handler arguments. * [LongPoke's 'callable list'](https://stackoverflow.com/a/2022629/1075152) implementation shows that such an event system can be implemented very minimalistically by subclassing `list`. * Felk's variation [EventHook](https://stackoverflow.com/questions/1092531/event-system-in-python/35957226#35957226) also ensures the signatures of callees and callers. * [spassig's EventHook](https://stackoverflow.com/a/1094423/1075152) (Michael Foord's Event Pattern) is a straightforward implementation. * [Josip's Valued Lessons Event class](https://stackoverflow.com/a/1096614/1075152) is basically the same, but uses a `set` instead of a `list` to store the bag, and implements `__call__` which are both reasonable additions. * [PyNotify](https://pypi.org/project/py-notify/) is similar in concept and also provides additional concepts of variables and conditions ('variable changed event'). Homepage is not functional. * [axel](https://pypi.python.org/pypi/axel) is basically a bag-of-handlers with more features related to threading, error handling, ... * [python-dispatch](https://pypi.org/project/python-dispatch/) requires the even source classes to derive from `pydispatch.Dispatcher`. * [buslane](https://pypi.org/project/buslane/) is class-based, supports single- or multiple handlers and facilitates extensive type hints. * Pithikos' [Observer/Event](https://stackoverflow.com/questions/1092531/event-system-in-python/28479007#28479007) is a lightweight design. Publish-subscribe libraries: * [blinker](https://pypi.org/project/blinker/) has some nifty features such as automatic disconnection and filtering based on sender. 
* [PyPubSub](http://pypubsub.readthedocs.io/en/stable/) is a stable package, and promises "advanced features that facilitate debugging and maintaining topics and messages". * [pymitter](https://github.com/riga/pymitter) is a Python port of Node.js EventEmitter2 and offers namespaces, wildcards and TTL. * [PyDispatcher](http://pydispatcher.sourceforge.net/) seems to emphasize flexibility with regards to many-to-many publication etc. Supports weak references. * [louie](https://github.com/11craft/louie) is a reworked PyDispatcher and should work "in a wide variety of contexts". * [pypydispatcher](https://github.com/scrapy/pypydispatcher) is based on (you guessed it...) PyDispatcher and also works in PyPy. * [django.dispatch](https://code.djangoproject.com/browser/django/trunk/django/dispatch) is a rewritten PyDispatcher "with a more limited interface, but higher performance". * [pyeventdispatcher](https://github.com/whisller/pyeventdispatcher) is based on PHP's Symfony framework's event-dispatcher. * [dispatcher](https://pypi.org/project/dispatcher/) was extracted from django.dispatch but is getting fairly old. * Cristian Garcia's [EventManger](https://stackoverflow.com/questions/1092531/event-system-in-python/20807692#20807692) is a really short implementation. Others: * [pluggy](https://pypi.org/project/pluggy/) contains a hook system which is used by `pytest` plugins. * [RxPy3](https://pypi.org/project/RxPy3/) implements the Observable pattern and allows merging events, retry etc. * Qt's Signals and Slots are available from [PyQt](http://pyqt.sourceforge.net/Docs/PyQt4/new_style_signals_slots.html) or [PySide2](https://wiki.qt.io/Qt_for_Python_Signals_and_Slots). They work as callback when used in the same thread, or as events (using an event loop) between two different threads. Signals and Slots have the limitation that they only work in objects of classes that derive from `QObject`.
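For a feel of how small the publish-subscribe style described above can be, here is a bare-bones sketch (the names are illustrative; it has none of the weak-reference or threading machinery the listed libraries add):

```python
from collections import defaultdict

class Dispatcher:
    """Minimal publish-subscribe hub: handlers register on a signal name
    at the central dispatcher, not on the emitting object itself."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, signal, handler):
        self._handlers[signal].append(handler)

    def publish(self, signal, *args, **kwargs):
        # Copy the list so handlers can (un)subscribe during dispatch.
        for handler in list(self._handlers[signal]):
            handler(*args, **kwargs)

bus = Dispatcher()
seen = []
bus.subscribe("user.created", seen.append)  # subscribe before any emitter exists
bus.publish("user.created", "alice")
```

Because subscription keys on a plain string, a handler can register before the code that will eventually publish the signal even exists, which is exactly the property that distinguishes this style from the observer pattern.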
I've been doing it this way:

```
class Event(list):
    """Event subscription.

    A list of callable objects. Calling an instance of this will cause a
    call to each item in the list in ascending order by index.

    Example Usage:
    >>> def f(x):
    ...     print 'f(%s)' % x
    >>> def g(x):
    ...     print 'g(%s)' % x
    >>> e = Event()
    >>> e()
    >>> e.append(f)
    >>> e(123)
    f(123)
    >>> e.remove(f)
    >>> e()
    >>> e += (f, g)
    >>> e(10)
    f(10)
    g(10)
    >>> del e[0]
    >>> e(2)
    g(2)

    """
    def __call__(self, *args, **kwargs):
        for f in self:
            f(*args, **kwargs)

    def __repr__(self):
        return "Event(%s)" % list.__repr__(self)
```

However, like with everything else I've seen, there is no auto generated pydoc for this, and no signatures, which really sucks.
Which Python packages offer a stand-alone event system?
[ "", "python", "events", "event-handling", "dispatcher", "" ]
I know that questions of this type exist on SF, but they are very specific; I need a generic suggestion. I need a feature for uploading user files which could be larger than 1 GB. This feature will be an add-on to the existing file-upload feature present in the application, which caters to smaller files. Now, here are some of the options:

1. Use HTTP and a Java applet. Send the files in chunks and join them at the server. But how to throttle the n/w?
2. Use HTTP and a Flex application. Is it better than an applet wrt browser compatibility & any other environment issues?
3. Use FTP, or rather SFTP, rather than HTTP as the protocol for a faster upload process.

Please suggest. Moreover, I have to make sure that this upload process doesn't hamper the tasks of other users, or in other words doesn't eat up other users' b/w. Are there any mechanisms at the n/w level to throttle such processes?

Ultimately the customer wanted to have FTP as an option. But I think the answer about handling files programmatically is also cool.
For sending files to a server, unless you *have* to use HTTP, FTP is the way to go. Throttling, I am not completely sure of, at least not programmatically. Personally, it seems like limitations of the upload speed would be better accomplished on the server side though.
Use whatever client side language you want (a Java App, Flex, etc.), and push to the server with `HTTP PUT` (no Flex) or `POST`. In the server side Java code, regulate the flow of bytes in your input stream loop. A crude, simple, sample snippet that limits bandwidth to no faster than an average <= 10KB/second:

```
InputStream is = request.getInputStream();
OutputStream os = new FileOutputStream(new File("myfile.bin"));
int bytesRead = 0;
byte[] payload = new byte[10240];
while (bytesRead >= 0) {
    bytesRead = is.read(payload);
    if (bytesRead > 0)
        os.write(payload, 0, bytesRead);
    Thread.sleep(1000);   // sleep() is static; no need to call it via currentThread()
}
os.close();
```

*(With more complexity one could more accurately regulate the single stream bandwidth, but it gets complex when considering socket buffers and such. "Good enough" is usually good enough.)*

My application does something similar to the above--we regulate both up (`POST` and `PUT`) and (`GET`) down stream bandwidth. We accept files in the 100s of MB every day and have tested up to 2GB. (Beyond 2GB there are the pesky Java int primitive issues to deal with.) Our clients are both Flex and `curl`. It works for me, it can work for you. While FTP is great and all, you can avoid many (but not all) firewall issues by using HTTP.
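The same "read a chunk, write it, then sleep so the average rate stays under a budget" idea, sketched in Python rather than Java (a hypothetical helper, not part of any framework):

```python
import io
import time

def throttled_copy(src, dst, chunk_size=10240, max_bytes_per_sec=10240):
    """Copy src to dst, sleeping after each chunk so the average transfer
    rate stays at or below max_bytes_per_sec. Illustrative sketch only; a
    real server would also handle timeouts and partial uploads."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        # Sleep long enough that this chunk's bytes fit the rate budget.
        time.sleep(len(chunk) / max_bytes_per_sec)

# Demo with in-memory streams (generous budget so it finishes quickly):
data = b"x" * 30000
dst = io.BytesIO()
throttled_copy(io.BytesIO(data), dst, max_bytes_per_sec=10_000_000)
```

As with the Java snippet, this regulates only the average rate per stream; burstiness within a chunk is governed by socket buffers.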
Suggestions for uploading very large (> 1GB) files
[ "", "java", "networking", "file-upload", "large-files", "throttling", "" ]
I am developing a custom API for a web solution and I am using the MVC design pattern. I have a modules folder so that I can swap modules in and out, and also work on sections without disrupting working, tested code. My only issue now is that I want to load CSS anywhere and have my application properly import the CSS file in the head tag. I know CodeIgniter does this but I'm not sure how. Using PHP, how do I load a CSS file anywhere and then have the code properly import the CSS within the head tags like CodeIgniter does? Thanks in advance.
You can load several views at once, or views inside other views. So in this case I recommend you create one header view where **you load all CSS and JS files**, for example:

```
<html>
<head>
    <meta http-equiv="content-type" content="text/html;charset=UTF-8">
    <link rel="stylesheet" href="<?php echo base_url();?>css/moorainbow.css" type="text/css" media="screen"/>
</head>
<body>
```

And call it like:

```
$this->load->view('header');
$this->load->view('view1');
$this->load->view('view2');
```

This way you can control the files (CSS + JS + etc.) you load in just one file.

Regards, Pedro @pcamacho
Your question is a little unclear to me, but I'll do my best to help. Are you simply wondering how to include a CSS file in your Views? If so, simply use the following:

```
<style>
@import url('/css/styles.css');
</style>
```

If your CSS folder is at the root of your CodeIgniter project, you could do something like this using CodeIgniter's base\_url() function:

```
<style>
@import url('<?=base_url()?>/css/styles.css');
</style>
```

It will ensure your pages stay portable and have the correct absolute URL. Hope this helps! If not, try being a little more specific in your question.
CodeIgniter - Loading CSS
[ "", "php", "codeigniter", "" ]
We are in the process of refactoring some code. There is a feature that we have developed in one project that we would like to now use in other projects. We are extracting the foundation of this feature and making it a full-fledged project which can then be imported by its current project and others. This effort has been relatively straightforward, but we have one headache.

When the framework in question was originally developed, we chose to keep a variety of constant values defined as static fields in a single class. Over time this list of static members grew. The class is used in very many places in our code. In our current refactoring, we will be elevating some of the members of this class to our new framework, but leaving others in place.

Our headache is in extracting the foundation members of this class to be used in our new project, and more specifically, how we should address those extracted members in our existing code. We know that we can have our existing Constants class subclass this new project's Constants class and it would inherit all of the parent's static members. This would allow us to effect the change without touching the code that uses these members to change the class name on the static reference. However, the tight coupling inherent in this choice doesn't feel right.

before:

```
public class ConstantsA {
    public static final String CONSTANT1 = "constant.1";
    public static final String CONSTANT2 = "constant.2";
    public static final String CONSTANT3 = "constant.3";
}
```

after:

```
public class ConstantsA extends ConstantsB {
    public static final String CONSTANT1 = "constant.1";
}

public class ConstantsB {
    public static final String CONSTANT2 = "constant.2";
    public static final String CONSTANT3 = "constant.3";
}
```

In our existing code branch, all of the above would be accessible in this manner:

```
ConstantsA.CONSTANT2
```

I would like to solicit arguments about whether this is 'acceptable' and/or what the best practices are.
* A class with only static fields is a code smell. It's not a class.
* Some people use interfaces, so they can implement them to use the constants more easily. But an interface should be used only to model the behaviour of a class. (<http://pmd.sourceforge.net/rules/design.html#AvoidConstantsInterface>) Using static imports from Java 5 removes the need for simple constant usage at all.
* Are your constants really Strings, or just used as Strings? If they are different options for some type (so-called enumerations), you should use [typesafe enumerations](http://www.javacamp.org/designPattern/enum.html), using enum in Java 5 or the Enum provided by [Commons Lang](http://commons.apache.org/lang/). Of course, converting your code to use enums might be a little work.
* You should at least split the constants into groups of related constants, in files with proper business names. Moving the final members is easy in an IDE and will update all usages.
* If you can afford it, convert them to enums then. (Think about using a script to do that; often it's possible.) Class hierarchies are only useful if there is a relation between the constants/enums. You can keep the Strings if you have to, but still think about them as entities; then extends might make sense for some (describing an is-a relation). At first, enums can be simple classes you write yourself, if serializing is not a problem. Enums are always favourable due to their type-safe nature and the extra name showing intent or business/domain specific things.
* If the constants are really String constants, use a [Properties](http://java.sun.com/j2se/1.4.2/docs/api/java/util/Properties.html) or [ResourceBundle](http://www.j2ee.me/j2se/1.4.2/docs/api/java/util/ResourceBundle.html), which can be configured by plain text files. Again, you can script the refactoring, using the constant names as resource bundle keys, and generate both files automatically.
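The typesafe-enumeration idea recommended above looks like this when sketched with Python's standard `enum` module (shown in Python for brevity; Java's `enum` keyword gives the same guarantees):

```python
from enum import Enum

class Color(Enum):
    # Hypothetical members reusing the question's constant values.
    RED = "constant.1"
    GREEN = "constant.2"
    BLUE = "constant.3"

# Members are real objects, not bare strings: the set of legal values
# is closed, and comparisons are type-safe.
assert Color("constant.1") is Color.RED   # lookup by value
assert Color.RED != "constant.1"          # a member never equals a raw string
assert Color.RED.value == "constant.1"    # the raw value is still available
```

The key property is the same one the answer argues for: a `Color` parameter can only ever hold one of three known values, whereas a `String` parameter can hold anything.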
I don't like it, but it's probably the best you can do right now. The right answer would be to break up the constants into coherent groups, fixing the code breaks as you go along. In C#, I'd use enums.
Java: refactoring static constants
[ "", "java", "refactoring", "" ]
I'm creating a macro in C++ that declares a variable and assigns some value to it. Depending on how the macro is used, the second occurrence of the macro can override the value of the first variable. For instance:

```
#define MY_MACRO int my_variable_[random-number-here] = getCurrentTime();
```

The other motivation is to avoid choosing a specific name for the variable, so that it won't collide with a name eventually chosen by the developer using the macro. Is there a way to generate random variable names inside a macro in C++?

-- Edit --

I mean unique but also random, since I can use my macro twice in a block, and in that case it will generate something like:

```
int unique_variable_name;
...
int unique_variable_name;
```

In this case, to be unique, both variable names have to be randomly generated.
Add M4 to your build flow? This macro language has some stateful capabilities, and can successfully be intermingled with CPP macros. This is probably not a standard way to generate unique names in a C environment, though I've been able to successfully use it in such a manner.

You probably do not want random, BTW, based on the way you posed your question. You want *unique*. You could use `__FILE__` and `__LINE__` in the macro expansion to get you the uniqueness you seem to be going for... those metavariables get defined within the source file context, so be careful to make sure you get what you are looking for (e.g., perils of more than one macro on the same line).
Try the following:

```
#include <iostream>

// One level of macro indirection is required in order to resolve __COUNTER__,
// and get varname1 instead of varname__COUNTER__.
#define CONCAT(a, b) CONCAT_INNER(a, b)
#define CONCAT_INNER(a, b) a ## b

#define UNIQUE_NAME(base) CONCAT(base, __COUNTER__)

int main() {
    int UNIQUE_NAME(foo) = 123;  // int foo0 = 123;
    std::cout << foo0;           // prints "123"
}
```

(Note that `main` must return `int` in standard C++, and `<iostream>` is needed for `std::cout`.)

`__COUNTER__` may have portability issues. If this is a problem, you can use `__LINE__` instead, and as long as you aren't calling the macro more than once per line or sharing the names across compilation units, you will be just fine.
How to generate random variable names in C++ using macros?
[ "", "c++", "variables", "random", "macros", "naming", "" ]
Please check this code out; it compiles and runs absolutely fine. The question is that when I started learning C++ (Turbo C++), I was never able to declare an array of any type as:

```
datatype var[variable_set_at_runtime];
```

and I took it for granted that this can't be possible in the latest GCC compilers... but surprisingly this is possible!

So my related question is: what's the need for the new operator then? I know that the new operator does a lot of things, including dynamically allocating memory at runtime on the heap, returning an address to that resource, etc.

The difference I see is that my code dynamically allocates the memory on the stack while new does it on the heap. Is this the only difference? I am really astonished after writing something like this, as I could not do this when I started learning C++, and... to add to it, I can do this for custom data types too... :-O

```
#include<iostream>
using namespace std;

struct A
{
    int a;
    int b;
};

int main()
{
    int a;
    cin>>a;
    int ch[a];
    for(int i=0;i<a;i++)
        ch[i]=i+1;
    for(int i=0;i<a;i++)
        cout<<"\n\nch=="<<ch[i];
    A obj[a];
    cin>>obj[a-1].a;
    cin>>obj[a-1].b;
    cout<<"\n\n obj.a,obj.b=="<<obj[a-1].a<<" , "<<obj[a-1].b;
}
```

Please shed some light. Thanks.
> and i took it for granted that this cant be possible in latest gcc compilers...but surprisingly this is possible...

It is legal in C, but not C++. GCC might allow it depending on compiler flags, but if you compile your code as strict C++ (which you should), dynamic-length arrays are not allowed, and you have to use `new`. (I'm surprised no one has mentioned this little detail yet)

Apart from this, the two other big differences are that:

* data on the stack is automatically cleaned up when it goes out of scope
* only something like 1MB is typically allocated for the stack. Large datastructures should go on the heap instead.

But really, the single most important point is the first one -- it's not valid C++. (And as Neil pointed out, it is not valid in C++0x either. There are no plans of adding this to C++)
You have to allocate it on the heap, using **new** : ``` int* ch = new int[ a ]; ``` but don't forget to deallocate it after usage : ``` delete [] ch; ``` A better way would be to use a [std::vector](http://www.cplusplus.com/reference/stl/vector/) that does exactly what you want.
how am i able to declare an array with variable length determined at runtime in C++?
[ "", "c++", "new-operator", "" ]
Does this smell? I have a few properties you can only set once. They can be set at any time during the object's existence and cannot be undone. I implement them like this:

```
private FooThingy _foo;
public FooThingy Foo {
    set {
        if (null == _foo)
            _foo = value;
    }
    get { return _foo; }
}
```

But I don't like it. I feel like I have missed something. Have I?

**Edit**: Why I don't have it in the constructor: I can't go into the details, but setting this results in **`<`bad analogy`>`** melting the wax statue **`<`/bad analogy`>`**. And I don't want my constructor to create already-melted objects.
**Set-once properties violate the principle of least surprise** - a caller expects that a property which can be set once can be set again (barring range and compatibility checks, of course - but those are bound to specific values or value combinations).

Initialize them in the constructor. Alternatively, if there are too many of them, or they are too complex, to write all the constructors, use a factory/builder class:

```
ThingieBuilder tb = new ThingieBuilder();
tb.FooThingy = 17.23;                    // r/w properties
tb.BarThingy = 42;
tb.UseExtendedThingamagicAdapter = true;

Thingie t = tb.Create();

if (t.Bar == 42)                         // r/o property
   ...
```

Or, separate the settings in a configuration object that can be replaced or passed during construction.
I think the design would be more clear to the caller if such values were passed to the constructor and exposed as read-only properties instead. If the value can't be set at construction time, perhaps it would be appropriate to throw an exception when trying to set the value again:

```
private FooThingy _foo;
public FooThingy Foo {
    set {
        if (null == _foo) {
            _foo = value;
        } else {
            throw new WhatEverThatFitsException();
        }
    }
    get { return _foo; }
}
```

Just to be very clear: **I do not** in any way **promote the use of set-once properties**; the code sample only shows the approach that I *might* use, *should the value not be available at construction time* of the object, for whatever reason. That said, I have never come across that situation in any of the projects I have been involved in.
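The throwing variant of the set-once guard translates directly to other languages; here is the same pattern sketched in Python (hypothetical class and property names, shown for illustration rather than as C# code):

```python
class Thingie:
    """A property that may be assigned exactly once; any later assignment
    raises instead of being silently ignored (sketch of the pattern above)."""

    def __init__(self):
        self._foo = None

    @property
    def foo(self):
        return self._foo

    @foo.setter
    def foo(self, value):
        if self._foo is not None:
            raise AttributeError("foo has already been set")
        self._foo = value


t = Thingie()
t.foo = "melted wax"
```

Raising on the second assignment, rather than ignoring it, keeps the surprise visible at the call site that misused the property.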
Should I use set once variables?
[ "", "c#", "" ]
I have a very large list. Suppose I do this (yeah, I know the code is very unpythonic, but for the example's sake..):

```
n = (2**32)**2
for i in xrange(10**7):
    li[i] = n
```

works fine. However:

```
for i in xrange(10**7):
    li[i] = i**2
```

consumes a significantly larger amount of memory. I don't understand why that is - storing the big number takes more bits, and in Java, the second option is indeed more memory-efficient... Does anyone have an explanation for this?
Java special-cases a few value types (including integers) so that they're stored by value (instead of by object reference, like everything else). Python doesn't special-case such types, so that assigning n to many entries *in a list* (or other normal Python container) doesn't have to make copies.

Edit: note that the references are always to *objects*, not "to variables" -- there's no such thing as "a reference to a variable" in Python (or Java). For example:

```
>>> n = 23
>>> a = [n,n]
>>> print id(n), id(a[0]), id(a[1])
8402048 8402048 8402048
>>> n = 45
>>> print id(n), id(a[0]), id(a[1])
8401784 8402048 8402048
```

We see from the first print that both entries in list `a` refer to exactly the same object as `n` refers to -- but when `n` is reassigned, **it** now refers to a different object, while both entries in `a` still refer to the previous one.

An `array.array` (from the Python standard library module [array](http://docs.python.org/library/array.html)) is very different from a list: it keeps compact copies of a homogeneous type, taking as few bits per item as are needed to store copies of values of that type. All normal containers keep references (internally implemented in the C-coded Python runtime as pointers to PyObject structures: each pointer, on a 32-bit build, takes 4 bytes, each PyObject at least 16 or so [including pointer to type, reference count, actual value, and malloc rounding up]), arrays don't (so they can't be heterogeneous, can't have items except from a few basic types, etc). For example, a 1000-item container, with all items being different small integers (ones whose values can fit in 2 bytes each), would take about 2,000 bytes of data as an `array.array('h')`, but about 20,000 as a `list`.
But if all items were the same number, the array would still take 2,000 bytes of data, the list would take only 20 or so [[in every one of these cases you have to add about another 16 or 32 bytes for the container-object proper, in addition to the memory for the data]]. However, although the question says "array" (even in a tag), I doubt its `arr` is actually an array -- if it were, it could not store (2\*\*32)\*2 (largest int values in an array are 32 bits) and the memory behavior reported in the question would not actually be observed. So, the question is probably in fact about a list, not an array. **Edit**: a comment by @ooboo asks lots of reasonable followup questions, and rather than trying to squish the detailed explanation in a comment I'm moving it here. > It's weird, though - after all, how is > the reference to the integer stored? > id(variable) gives an integer, the > reference is an integer itself, isn't > it cheaper to use the integer? CPython stores references as pointers to PyObject (Jython and IronPython, written in Java and C#, use those language's implicit references; PyPy, written in Python, has a very flexible back-end and can use lots of different strategies) `id(v)` gives (on CPython only) the numeric value of the pointer (just as a handy way to uniquely identify the object). A list can be heterogeneous (some items may be integers, others objects of different types) so it's just not a sensible option to store some items as pointers to PyObject and others differently (each object also needs a type indication and, in CPython, a reference count, at least) -- `array.array` is homogeneous and limited so it can (and does) indeed store a copy of the items' values rather than references (this is often cheaper, but not for collections where the same item appears a LOT, such as a sparse array where the vast majority of items are 0). 
A Python implementation would be fully allowed by the language specs to try subtler tricks for optimization, as long as it preserves semantics untouched, but as far as I know none currently does for this specific issue (you could try hacking a PyPy backend, but don't be surprised if the overhead of checking for int vs non-int overwhelms the hoped-for gains). > Also, would it make a difference if I > assigned `2**64` to every slot instead > of assigning n, when n holds a > reference to `2**64`? What happens when > I just write 1? These are examples of implementation choices that every implementation is fully allowed to make, as it's not hard to preserve the semantics (so hypothetically even, say, 3.1 and 3.2 could behave differently in this regard). When you use an int literal (or any other literal of an immutable type), or other expression producing a result of such a type, it's up to the implementation to decide whether to make a new object of that type unconditionally, or spend some time checking among such objects to see if there's an existing one it can reuse. In practice, CPython (and I believe the other implementations, but I'm less familiar with their internals) uses a single copy of sufficiently *small* integers (keeps a predefined C array of a few small integer values in PyObject form, ready to use or reuse at need) but doesn't go out of its way in general to look for other existing reusable objects. But for example identical literal constants within the same function are easily and readily compiled as references to a single constant object in the function's table of constants, so that's an optimization that's very easily done, and I believe every current Python implementation does perform it. 
It can sometimes be hard to remember that Python is *a language* and it has several implementations that may (legitimately and correctly) differ in a lot of such details -- everybody, including pedants like me, tends to say just "Python" rather than "CPython" when talking about the popular C-coded implementation (except in contexts like this one where drawing the distinction between language and implementation is paramount;-). Nevertheless, the distinction *is* quite important, and well worth repeating once in a while.
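The reference-sharing and `array.array` behaviour described above is easy to observe directly (the exact byte counts are CPython implementation details, not language guarantees):

```python
import array

# A list of the same object stores one int object plus len(list) references.
n = 2 ** 64
lst = [n] * 1000
assert all(x is n for x in lst)  # every slot references the *same* object

# array.array stores compact copies of values instead of references:
# type code 'h' is a signed 2-byte integer, so 1000 items occupy about
# 2000 bytes of payload, regardless of whether the values repeat.
arr = array.array('h', range(1000))
assert arr.itemsize == 2
assert arr.buffer_info()[1] * arr.itemsize == 2000  # items * bytes-per-item
```

This is exactly the trade-off in the answer: the list of one repeated big integer is cheap (many pointers, one object), while a list of distinct integers pays for one full object per slot, and only `array.array` gets the compact by-value layout.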
In your first example you are storing the same integer len(arr) times. So Python needs to store the integer just once in memory, and refers to it len(arr) times.

In your second example, you are storing len(arr) different integers. Now Python must allocate storage for len(arr) integers and refer to them in each of the len(arr) slots.
Python Memory Model
[ "", "python", "arrays", "memory", "model", "" ]
Hi, I am trying to set the visibility of a label based on a textbox string being empty. I have the following code:

```
MyLabel.Visible = String.IsNullOrEmpty(MyTextBox.Text);
```

Why does MyLabel not appear when the textbox is left empty?

**Update**

I have tried placing this code in the Text\_Changed event of the textbox and it still doesn't work. This was an update issue; it does work on the Text\_Changed event. However, the issue is that it does not work when triggered during the processing of the form. Here is the code, triggered from my controller class, to give everyone a better insight as to what is going on:

```
using (var frm = new frmAdd(PersonType.Carer))
{
    var res = frm.ShowDialog();
    if (res == System.Windows.Forms.DialogResult.OK)
    {
        if (frm.ValidateInformation()) // the above code is called in here
        {
            // process the information here...
        }
    }
}
```

Also I forgot to mention that this form is in a Class Library project (dll).
I would add a Trim, to be more consistent for the user in case spaces are left:

```
MyLabel.Visible = String.IsNullOrEmpty(MyTextBox.Text.Trim());
```

For the rest, it is a matter of triggering the code at the right time. TextChanged should cover everything but the initial state, as addressed by JaredPar. Although I would use Form\_Load, not the constructor.

Edit, after the clarification: If your Label and TextBox are on frmAdd, then the question is moot; the form itself is no longer shown after ShowDialog returns.
Depends on where the code is being run. If you need interactivity, i.e. the label disappears when a character is typed into the textbox, you need to run it on the Keyup event of the textbox. You may also need to repaint the label.
C# Set the visibility of a label using the result of String.IsNullOrEmpty
[ "", "c#", "winforms", "" ]
I've been trying for a while on this, and have so far been failing miserably. My most recent attempt was lifted from this Stack code here: [Sending email through Gmail SMTP server with C#](https://stackoverflow.com/questions/704636/sending-email-through-gmail-smtp-server-with-c), but I've tried all the syntax I could find here on Stack and elsewhere. My code currently is:

```
var client = new SmtpClient("smtp.gmail.com", 587)
{
    Credentials = new NetworkCredential("me@gmail.com", "mypass"),
    EnableSsl = true
};
client.Send("me@gmail.com", "me@gmail.com", "Test", "test message");
```

Running that code gives me an immediate exception "Failure sending mail" that has an InnerException "unable to connect to the remote server". If I change the port to 465 (as the Gmail docs suggest), I get a timeout every time. I've read that 465 isn't a good port to use, so I'm wondering what the deal is w/ 587 giving me failure to connect. My user and pass are right. I've read that I have to have the POP service set up on my Gmail account, so I did that. No avail. I was originally trying to get this working for my branded Gmail account, but after running into the same problems w/ that I *thought* going w/ my regular Gmail account would be easier... so far that's not the case.
I tried your code, and it works perfectly with port 587, but not with 465. Have you checked the firewall? Try from the command line: "telnet smtp.gmail.com 587". If you get "220 mx.google.com ESMTP...." back, then the port is open. If not, something is blocking your call.

Daniel
I ran into this problem a while ago as well. The problem is that SmtpClient does not support implicit SSL connections, but does support explicit connections ([System.Net.Mail with SSL to authenticate against port 465](http://blogs.msdn.com/b/webdav_101/archive/2008/06/02/system-net-mail-with-ssl-to-authenticate-against-port-465.aspx)). The previous class of MailMessage (I believe .Net 1.0) did support this but has long been obsolete. My answer was to call the CDO (Collaborative Data Objects) (<http://support.microsoft.com/kb/310212>) directly through COM using something like the following:

```
/// <summary>
/// Send an electronic message using the Collaboration Data Objects (CDO).
/// </summary>
/// <remarks>http://support.microsoft.com/kb/310212</remarks>
private void SendTestCDOMessage()
{
    try
    {
        string yourEmail = "YourUserName@gmail.com";

        CDO.Message message = new CDO.Message();
        CDO.IConfiguration configuration = message.Configuration;
        ADODB.Fields fields = configuration.Fields;

        Console.WriteLine(String.Format("Configuring CDO settings..."));

        // Set configuration.
        // sendusing: cdoSendUsingPort, value 2, for sending the message using the network.
        // smtpauthenticate: Specifies the mechanism used when authenticating to an SMTP service over the network.
        // Possible values are:
        // - cdoAnonymous, value 0. Do not authenticate.
        // - cdoBasic, value 1. Use basic clear-text authentication. (Hint: This requires the use of "sendusername" and "sendpassword" fields)
        // - cdoNTLM, value 2. The current process security context is used to authenticate with the service.
        ADODB.Field field = fields["http://schemas.microsoft.com/cdo/configuration/smtpserver"];
        field.Value = "smtp.gmail.com";

        field = fields["http://schemas.microsoft.com/cdo/configuration/smtpserverport"];
        field.Value = 465;

        field = fields["http://schemas.microsoft.com/cdo/configuration/sendusing"];
        field.Value = CDO.CdoSendUsing.cdoSendUsingPort;

        field = fields["http://schemas.microsoft.com/cdo/configuration/smtpauthenticate"];
        field.Value = CDO.CdoProtocolsAuthentication.cdoBasic;

        field = fields["http://schemas.microsoft.com/cdo/configuration/sendusername"];
        field.Value = yourEmail;

        field = fields["http://schemas.microsoft.com/cdo/configuration/sendpassword"];
        field.Value = "YourPassword";

        field = fields["http://schemas.microsoft.com/cdo/configuration/smtpusessl"];
        field.Value = "true";

        fields.Update();

        Console.WriteLine(String.Format("Building CDO Message..."));

        message.From = yourEmail;
        message.To = yourEmail;
        message.Subject = "Test message.";
        message.TextBody = "This is a test message. Please disregard.";

        Console.WriteLine(String.Format("Attempting to connect to remote server..."));

        // Send message.
        message.Send();
        Console.WriteLine("Message sent.");
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
```

Do not forget to browse through your COM references and add the "Microsoft CDO for Windows 2000 Library", which should add two references: ADODB, and CDO.
GMail SMTP via C# .Net errors on all ports
[ "", "c#", ".net", "gmail", "" ]
How do I prevent the user from changing the value in an input field (which contains a certain value to be copied into the clipboard) without using disabled="true"? The text should be selected once the user clicks in the field (that's working already), but entering anything should have no effect. Thanks.

```
jQuery('input.autoselect[value]').focus(function() {
    jQuery(this).select();
});
```
**readonly** in the html of the input is all you need to prevent the user from editing the input. ``` <input readonly="readonly" type="text" value="You can't edit me!"/> ```
I think this should work:

```
var val = jQuery('input.autoselect[value]').val();
jQuery('input.autoselect[value]').change(function() {
    jQuery(this).val(val);
});
```

Or possibly (not sure):

```
jQuery('input.autoselect[value]').keydown(function() {
    return false;
});
```
Javascript / jQuery: How to prevent active input field from being changed?
[ "", "javascript", "jquery", "" ]
I was wondering if returning a list, instead of returning a pointer to one, is costly in terms of performance, because if I recall, a list doesn't have a lot of attributes (isn't it something like 3 pointers? One for the current position, one for the beginning and one for the end?).
If you return a `std::list` by value it won't just copy the list head, it will copy one list node per item in the list. So yes, for a large list it is costly.

If the list is built in the function which is returning it, then you might be able to benefit from the named return value optimisation, to avoid an unnecessary copy. That's specific to your compiler, though. It never applies if for example the list already existed before the function was called (for example if it's a member variable of an object).

A common idiom in C++, to avoid returning containers by value, is to take an output iterator as a parameter. So instead of:

```
std::list<int> getListOfInts() {
    std::list<int> l;
    for (int i = 0; i < 10; ++i) {
        l.push_back(i);
    }
    return l;
}
```

You do:

```
template<typename OutputIterator>
void getInts(OutputIterator out) {
    for (int i = 0; i < 10; ++i) {
        *(out++) = i;
    }
}
```

Then the caller does:

```
std::list<int> l;
getInts(std::back_inserter(l));
```

Often once the compiler has finished inlining and optimising, the code is more or less identical. The advantage of this is that the caller isn't tied to a particular collection - for instance he can have the items added to a vector instead of a list if that is more useful for the particular circumstances. If he only needs to see each item once, instead of all of them together, then he can save memory by processing them in streaming mode using an output iterator of his own devising.

The disadvantages are the same as with any template code: the implementation must be available to the caller at compile time, and you can end up with a lot of "duplicate" object code for multiple instantiations of the template. Of course you can use the same pattern without using templates, by taking a function pointer (plus a user data pointer if desired) as a parameter and calling it once with each item, or by defining an IntVisitor abstract class, with a pure virtual member function, and having the caller provide an instance of it.
[Edit: T.E.D. points out in a comment that another way to avoid the copy without using templates is for the caller to pass in a list by reference. This certainly works; it just gives the caller less flexibility than the template, and hence is not the idiom used by the STL. It's a good option if you don't want the "advantage of this" that I describe above. One of the original intentions behind the STL, though, is to separate "algorithms" (in this case, whatever determines the values) from "containers" (in this case, the fact that the values happen to be stored in a list, as opposed to a vector or an array or a self-sorting set, or just printed out without storing them at all).]
It (as always) depends. The copy constructor may or may not be invoked by `return` in the following code:

```
std::list<int> foo() {
    std::list<int> bar;
    // ...
    return bar;
}
```

It may not be invoked if the compiler applies [return value optimization](http://en.wikipedia.org/wiki/Return_value_optimization). If the copy constructor is called, then it is probably more expensive than returning a pointer for larger lists, and if it isn't called, then returning the list directly is faster (because it avoids a dynamic allocation).

Personally, I don't worry about it and return the list directly. Then, only when my profiler says this is a problem do I consider optimizations.
Is returning a std::list costly?
[ "", "c++", "performance", "list", "stl", "return-value", "" ]
There are a lot of questions on SO already about left joins in Linq, and the ones I've looked at all use the `join` keyword to achieve the desired end. This does not make sense to me. Let's say I have the tables `Customer` and `Invoice`, linked by a foreign key `CustomerID` on `Invoice`. Now I want to run a report containing customer info, plus any invoices. SQL would be:

```
select c.*, i.*
from Customer c
left join Invoice i on c.ID = i.CustomerID
```

From what I've seen of the answers on SO, people are mostly suggesting:

```
var q = from c in Customers
        join i in Invoices.DefaultIfEmpty() on c.ID equals i.CustomerID
        select new { c, i };
```

I really don't understand how this can be the only way. The relationship between `Customer` and `Invoice` is already defined by the LinqToSQL classes; why should I have to repeat it for the `join` clause? If I wanted an inner join it would simply have been:

```
var q = from c in Customers
        from i in c.Invoices
        select new { c, i };
```

without specifying the joined fields! I tried:

```
var q = from c in Customers
        from i in c.Invoices.DefaultIfEmpty()
        select new { c, i };
```

but that just gave me the same result as if it were an inner join. Is there not a better way of doing this?
What are you talking about? That `from i in c.Invoices.DefaultIfEmpty()` is exactly a left join.

```
List<string> strings = new List<string>() { "Foo", "" };

var q = from s in strings
        from c in s.DefaultIfEmpty()
        select new { s, c };

foreach (var x in q)
{
    Console.WriteLine("ValueOfStringIs|{0}| ValueOfCharIs|{1}|", x.s, (int)x.c);
}
```

This test produces:

```
ValueOfStringIs|Foo| ValueOfCharIs|70|
ValueOfStringIs|Foo| ValueOfCharIs|111|
ValueOfStringIs|Foo| ValueOfCharIs|111|
ValueOfStringIs|| ValueOfCharIs|0|
```
While the relationship is already defined (both in the database and in the .dbml markup), the runtime cannot automatically determine if it should use that relationship. What if there are two relationships in the object model (`Person` has `Parents` and `Children`, both relationships to other `Person` instances)? While such cases could be special-cased, this would make the system more complex (and so more buggy). Remember that in SQL you would also repeat the specification of the relationship. Indexes and keys are an implementation detail and not part of the relational algebra that underlies the relational model.

If you want a LEFT OUTER JOIN then you need to use "`into`":

```
from c in Customers
join i in Invoices on c.CustomerId equals i.CustomerId into inv
...
```

and `inv` will have type `IEnumerable<Invoice>`, possibly with no instances. (To flatten that into one row per invoice, with a null for customers that have none, follow it with `from i in inv.DefaultIfEmpty()`.)
Left join in Linq?
[ "", "c#", "linq", "linq-to-sql", "" ]
```
var query1 = urlencode($('input[name="searchTerm1"]').val()); //user1
var query2 = urlencode($('input[name="searchTerm2"]').val()); //user2
var rpp = 20; //number of tweets to retrieve
var c = 0;
var f1 = new Array();
var f2 = new Array();
var common = new Array();

$.getJSON('http://twitter.com/followers/ids.json?screen_name=' + query1 + '&callback=?', function(data) {
    f1 = data;
    $('#content').append('p' + f1[0] + 'p'); // this one is coming
});

$.getJSON('http://twitter.com/followers/ids.json?screen_name=' + query2 + '&callback=?', function(data1) {
    f2 = data1;
});

$('#content').append('p' + f1[0] + 'p'); // this one is not coming... it's showing Undefined
```

In this code, if you look at the two `append` statements I have marked with comments, one of them works and outputs the number in the array, but the other one outputs Undefined. I have defined the arrays outside the callbacks, so they should take the values, but what actually happens is that the arrays become inaccessible outside the `$.getJSON` function. Any help will be appreciated. Thank you.
@anand, `$.getJSON()` retrieves JSON data in an asynchronous manner. The purpose of your callback functions is to perform work once the JSON has been received from the asynchronous request. I'll simplify your example some:

```
var query1 = urlencode($('input[name="searchTerm1"]').val()); //user1
var query2 = urlencode($('input[name="searchTerm2"]').val()); //user2
var rpp = 20; //number of tweets to retrieve
var c = 0;
var f1 = new Array();
var f2 = new Array();
var common = new Array();

$.getJSON('http://twitter.com/followers/ids.json?screen_name=' + query1 + '&callback=?',
    // 1st callback
    function(data) {
        f1 = data;
        // We know that f1 has been assigned, so unless the Twitter API
        // returns an empty Array, f1[0] will not be Undefined.
        $('#content').append('p' + f1[0] + 'p'); // this one is coming
    });

$.getJSON('http://twitter.com/followers/ids.json?screen_name=' + query2 + '&callback=?',
    // 2nd callback
    function(data1) {
        f2 = data1;
    });

// This statement may be executed before the 1st callback and therefore
// f1 may still be an empty Array, causing f1[0] to return Undefined.
$('#content').append('p' + f1[0] + 'p'); // this one is not coming... it's showing Undefined
```

Please check out the comments regarding your calls to `append()`. Hope this helps!
You need to append your values within the callback function. Right now, the line of code after the two `$.getJSON` calls is firing before the JSON is finished downloading. That's why only the first append is working. You have a timing issue. To illustrate the timing, use alert messages like this...

```
$.getJSON('http://twitter.com/followers/ids.json?screen_name=' + query1 + '&callback=?', function(data) {
    f1 = data;
    alert('Two');
    $('#content').append('p' + f1[0] + 'p'); // this one is coming
});

$.getJSON('http://twitter.com/followers/ids.json?screen_name=' + query2 + '&callback=?', function(data1) {
    f2 = data1;
    alert('Three');
});

alert('One');
$('#content').append('p' + f1[0] + 'p'); // this one is not coming... it's showing Undefined
```
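The ordering those alerts demonstrate can also be sketched without jQuery or a browser. Here `fakeGetJSON` is a hypothetical stand-in of mine for `$.getJSON`: it delivers its result asynchronously, just like a real network request:

```javascript
// Hypothetical stand-in for $.getJSON: the callback fires later,
// after the current run of synchronous code has finished.
function fakeGetJSON(result, callback) {
    setTimeout(function () { callback(result); }, 0);
}

var f1 = [];

fakeGetJSON([12345, 67890], function (data) {
    f1 = data;
    // By the time this runs, f1 is populated.
    console.log('callback runs second: f1[0] = ' + f1[0]);
});

// This line runs before the callback, while f1 is still empty,
// which is exactly why the original code printed Undefined.
console.log('this runs first: f1[0] = ' + f1[0]);
```

The "this runs first" line prints `undefined` even though it appears after the request in the source, which is why any work that depends on the data has to live inside the callback.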
Scope of variable in Javascript problem
[ "", "javascript", "arrays", "json", "twitter", "scope", "" ]