Q: OAuth Performance I'm a newbie to OAuth - I have a high-volume customer using OAuth: a load balancer with 12 servers, but only 1 server storing the OAuth tokens. Today, when testing, I can only get 1,000 concurrent users on the site and I need to support an SLA of 10,000. I'm looking at the following alternatives: 1) Look for a more robust OAuth library - must be Java based 2) Store the tokens in a database - it will be slower, but all servers will have access Is there anything else I'm missing? Any recommendations from more experienced OAuth developers/architects? Much appreciated! Steve A: You're not missing anything; solving this isn't what OAuth itself is for, so the second alternative sounds good to me. That said, skip COTS clustering solutions and plain database storage if you want to reach that level of scalability easily and at low cost. Instead, scale your token repository horizontally using a distributed caching system on its own tier of servers. If you're on Java, investigate spymemcached or an equivalent. A: You can store your OAuth access tokens in any distributed persistent cache (like MongoDB with replica sets). With this setup your OAuth access tokens will be available on all 12 boxes and you will be able to scale horizontally. Tokens created on any box will be automatically replicated, and it should be very fast compared to a regular database. More info on MongoDB and replica sets.
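Both answers amount to the same design move: hide the token repository behind its own interface so the backing store can be swapped from a single box to a distributed cache tier without touching the OAuth code. A minimal sketch (the `TokenStore` name and methods are hypothetical, not from any library; a real deployment would back the interface with memcached or MongoDB rather than an in-memory map):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical abstraction: the OAuth layer talks only to this interface,
// so the backing store can move to a distributed cache tier later.
interface TokenStore {
    void put(String token, String principal);
    String get(String token); // null if unknown or expired
}

// In-memory stand-in for illustration only; a production version would
// delegate to memcached, MongoDB, etc. through the same interface.
class InMemoryTokenStore implements TokenStore {
    private final Map<String, String> tokens = new ConcurrentHashMap<>();
    public void put(String token, String principal) { tokens.put(token, principal); }
    public String get(String token) { return tokens.get(token); }
}

public class TokenStoreDemo {
    public static void main(String[] args) {
        TokenStore store = new InMemoryTokenStore();
        store.put("abc123", "steve");
        System.out.println(store.get("abc123")); // prints "steve"
    }
}
```

With this shape in place, moving from one token server to a cache tier is a new `TokenStore` implementation, not a rewrite.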
{ "language": "en", "url": "https://stackoverflow.com/questions/7573756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to interpret /***/ in Java? What I am doing: I am building a comment segregator as part of my simple IDE simulator (which will detect all the comments in Java code). My task is to note down the starting and ending positions of comments and documentation in all their forms... 1. // 2. /*......*/ 3. /**.......*/ (I am doing this using a deterministic finite automaton.) And I will give separate colors to comments and documentation. My doubt: Though it is uncommon, when code has a statement like /***/, how should I interpret it? Must I treat it as a comment or as documentation? A: Treat it as a comment, since obviously there is no documentation to be communicated to anyone. Edit: Eclipse, for example, will treat /***/ as documentation. Taking a cue from this site where the Java grammar is explained, /**"documentation"*/ also formally specifies that documentation is between /** and */, even when the length of its content is zero. Practically, I'd say: treat it as a comment. Formally, treat it as documentation. Pick one. A: Color it however you want. As a line by itself, you cannot determine whether it should be interpreted as /** */ (documentation) or /* * */ (a single commented asterisk) or /* **/ (an oddball comment). You could try to infer whether it's documentation by looking at the previous and next lines. If either of those is documentation, then most likely this little /***/ is documentation as well. A: The javadoc comment style was not an extension of the language; it is not part of the actual Java syntax. So essentially every javadoc comment is a comment first, and javadoc second. For that reason, I would use "normal comment" as the default. A: If you are treating /** */ as documentation then you should also treat /***/ as documentation - there isn't much practical difference between zero-length documentation and whitespace-only documentation.
I think it's also easier to implement - otherwise you have to treat /***/ as a special case (probably involving some kind of look-ahead).
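The last answer's rule (zero-length documentation is still documentation) keeps the scanner simple: after /* an extra asterisk opens documentation, unless that asterisk is actually part of the closer, as in the empty comment /**/. A rough sketch of just the classification step, not the asker's full DFA (`classify` is a hypothetical helper):

```java
// Sketch of the "no special case" rule: a comment opener "/**" means
// documentation, which makes /***/ documentation with no look-ahead.
// The only subtlety is /**/, where the third character belongs to the
// closing "*/", so it is a plain empty comment.
public class CommentClassifier {
    /** Returns "line", "doc", or "comment" for a comment starting at index i. */
    static String classify(String src, int i) {
        if (src.startsWith("//", i)) return "line";
        if (src.startsWith("/**", i) && !src.startsWith("/**/", i)) return "doc";
        if (src.startsWith("/*", i)) return "comment"; // includes the empty /**/
        throw new IllegalArgumentException("no comment at index " + i);
    }

    public static void main(String[] args) {
        System.out.println(classify("/***/", 0));   // prints "doc"
        System.out.println(classify("/* x */", 0)); // prints "comment"
        System.out.println(classify("// x", 0));    // prints "line"
    }
}
```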
{ "language": "en", "url": "https://stackoverflow.com/questions/7573761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I restructure a sprawling WCF service? I am new to Windows Communication Foundation and I am working on a system that serves data to a front end. The WCF portion of the system consists of hundreds of queries that retrieve specific filtered datasets. These datasets are sent back to the client via over a hundred different classes. It almost seems like there is a separate class for each service operation. A snapshot of the code would look like [OperationContract] IList<A> LoadAdata(); [OperationContract] IList<B> LoadBdata(); [OperationContract] IList<C> LoadCDdata(); . . In addition, a lot of time and code is spent converting from the dataset into the IList<> objects. My questions are: Is this how WCF is supposed to work? Is there a better way to structure this service? A: 1. The structure you describe is not an absolute necessity for WCF to work. It may simply be your company's standard way of dealing with service and data contracts. For example: ServiceResponse ServiceOperation (ServiceRequest request); is a common pattern to see. This allows you to flexibly maintain the input and output parameters of a service operation without changing the externally visible signature of the operation. This might seem like overhead, but it can serve a purpose. 2. If the operations are standard CRUD operations that all look the same and have no specific business logic behind them, take a look at WCF Data Services, which exposes your data model as a standardized OData interface. The client can then create custom queries, which prevents the service from having to expose a large set of interface operations. It is all handled for you in that case.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the right way to put objects into boost::property_tree? Consider the following example: #include <boost/property_tree/ptree.hpp> #include <boost/any.hpp> typedef boost::property_tree::ptree PT; struct Foo { int bar; int egg; Foo(): bar(), egg() {} }; int main() { Foo foo; foo.bar = 5; PT pt; pt.put<Foo>("foo", foo); return 0; } I'm new to boost and I want to put a Foo object into a property tree. The example above will not compile, giving this error: c:\mingw\bin\../lib/gcc/mingw32/4.5.2/../../../../include/boost/property_tree/stream_translator.hpp:33:13: error: no match for 'operator<<' in 's << e' Can anyone suggest the right way to do it? A: Simply create an overloaded operator<< for your Foo type: a function that takes your Foo object and passes its members via operator<< to an ostream. Here is a very simple example: std::ostream& operator<<(std::ostream& out, const Foo& output_object) { out << output_object.egg << " " << output_object.bar; return out; } This works because the int members of your Foo type already have an overloaded operator<< for ostream. If Foo contained member types without such an overload, you would also have to write operator<< functions for those types. Once this is done, your code can be called anywhere like so: Foo test; cout << test; //will print out whatever the values of "egg" and "bar" are Additionally, any other code that attempts to use operator<< with an ostream object and your Foo type as operands will function correctly as well. Finally, you can either inline the overloaded function and place it in a header file, or declare the function in a header and then define it in a source file elsewhere.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Eclipse: How can I find out what package a class belongs to? I am using the Eclipse IDE. How can I find out what package a class belongs to? Thank you A: Hover over the class name. Information about the class should appear in a tooltip. A: Look at the package statement at the top.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Accessing Validator collection through base page protected override void OnLoadComplete(EventArgs e) { foreach (var validator in Page.Validators) { //do something } base.OnLoadComplete(e); } Why does var validator2 = Page.Validators[1].ControlToValidate not work? It inherits the property but I can't get access to it. See this image - http://tinypic.com/r/14v5r0y/7 Also, is this the right place in the page cycle to get access to the validation errors? A: The ControlToValidate property returns a string, pertaining to the ID of the control that is being validated. Is that what you're looking for? To get the actual validator, I believe you'd want something like this: var validator = (BaseValidator)Page.Validators[0]; string controlToValidate = validator.ControlToValidate;
{ "language": "en", "url": "https://stackoverflow.com/questions/7573778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to detect if the user pressed the cancel button or selected the root (primary disk) with java.awt.FileDialog on Mac OS? Does somebody know how to detect whether the user pressed the cancel button or selected the root disk in java.awt.FileDialog on Mac OS (10.6 - Snow Leopard)? I have the code below: System.setProperty("apple.awt.fileDialogForDirectories", "true"); FileDialog fd = new FileDialog(this); fd.setDirectory(_projectsBaseDir.getPath()); fd.setLocation(50,50); fd.setVisible(true); File selectedFile = new File(fd.getFile()); System.setProperty("apple.awt.fileDialogForDirectories", "false"); But if the user selects the primary disk in the left panel (below Devices), the selection returns null, so I cannot differentiate whether the user selected the primary disk or pressed the cancel button (both actions return null). A: If it's possible to use Swing, I'd highly recommend using JFileChooser. Then your code would look like this: JFileChooser fc = new JFileChooser(); fc.setCurrentDirectory(_projectsBaseDir); // setCurrentDirectory takes a File, not a String fc.setLocation(50,50); int ret = fc.showOpenDialog(this); // Use .showSaveDialog(this) for a save dialog if(ret == JFileChooser.APPROVE_OPTION) { File selectedFile = fc.getSelectedFile(); } Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unable to get FastMM4 to work with Delphi 7 application protected using ASProtect I'm getting this error: "FastMM4 cannot install since memory has already been allocated through the default memory manager". I'm using ASProtect and EurekaLog for my Delphi 7 application (there's no problem with EurekaLog though). I've already placed FastMM4 as the first unit as required. Later I found out that ASProtect can execute a DLL before running the application: "External User Code: Since this version, ASProtect implements an external dynamic library usage feature. This might be very useful if you want your own code to be executed by ASProtect befor main application starting. You should provide ASProtect with the full path to the selected DLL. This library will be added to ASProtect code at the protection step. There is only one function which will be executed by ASProtect at run-time before running the main application. Function declaration: Delphi: function RunApplication() : Boolean; export; If function result is TRUE, ASProtect will start main application, otherwise error message occures. Warning: If you want to get access to the resources of your DLL use DialogBoxIndirect or CreateDialogBoxInderect class APIs. All other WinAPI functions (such as FindResorceA, LoadResource, etc) might not working correctly and returns error results." So, I created a DLL with the following, but this doesn't help: library fastmem; uses FastMM4 in 'FastMM4.pas', SysUtils, dialogs, Classes; {$R *.res} begin showmessage('ok!'); end. After that, I run my application, and it displays the 'OK' message box before showing the "FastMM4 cannot install since memory has already been allocated through the default memory manager" error. Any thoughts on how I can solve this problem? Can I disable Delphi's default memory manager? Thanks. :) Note: ASProtect no longer provides forum support.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Request Dialog - JavaScript Example - Does not work in internet explorer I am following the example in the facebook Javascript SDK It works fine in Chrome, but for some reason, when i run the same code in Internet explorer, I get a Javascript error. I want to allow users of my app to send invites to use the application. Anyone else have this problem or have a workaround? Here are my error details (well what i could get anyway): Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E; FDM) Timestamp: Tue, 27 Sep 2011 18:09:30 UTC Message: Expected identifier, string or number Line: 38 Char: 9 Code: 0 URI: http://someplace:5000/InviteFriends2.aspx Message: Object expected Line: 20 Char: 1 Code: 0 URI: (same as above URI) A: They have trailing commas in the examples which is not a good thing to do. Remove them. function sendRequestToOneRecipient() { var user_id = document.getElementsByName("user_id")[0].value; FB.ui({method: 'apprequests', message: 'My Great Request', to: user_id, <-- Trailing comma }, requestCallback); } function sendRequestToManyRecipients() { FB.ui({method: 'apprequests', message: 'My Great Request', <-- Trailing comma }, requestCallback); }
{ "language": "en", "url": "https://stackoverflow.com/questions/7573791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails 3 routes handling multiple, non-required parameters I am porting a Zend Framework / PHP app to Rails, and there is a set of parameters for something like a query. For example: /locations/get-all-locations/lat/xxx.xxx/lng/xxx.xxx/radius/5/query/lemonade/updated_prev_hours/72 but variations of this could be provided, like leaving out the query or distance parameters (i.e. basically some could be required in different sets - another question). It would seem like all parameters would need to be named. Would this be best handled by a static segment like this: match 'locations/get-all-locations/lat/:lat/lng/:lng/radius/:radius/query/:query/updated_prev_hours/:updated_prev_hours' => 'locations#get_all_locations' Is that all I need, and will the values be available in the params hash? Or is there a better strategy for handling complex urls like this? thx A: This looks like you are going too deep in the URL, and simply using the query string would be more appropriate. However, if you are porting and have no other option, then you can do this with route globbing. Something like: match "locations/get-all-locations/*options" => "locations#get_all_locations" You can then match these up by doing: Hash[*params[:options].split("/")] Or merge them by doing: params.merge!(Hash[*params[:options].split("/")])
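For readers unfamiliar with the Ruby idiom, Hash[*params[:options].split("/")] simply zips alternating path segments into key/value pairs. The same pairing, sketched in Java purely for illustration (`GlobParams` is a made-up name; a trailing unpaired segment is silently dropped here, which may or may not be what you want):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the key/value pairing behind Hash[*options.split("/")]:
// alternating path segments become map keys and values.
public class GlobParams {
    static Map<String, String> parse(String glob) {
        String[] parts = glob.split("/");
        Map<String, String> params = new LinkedHashMap<>();
        for (int i = 0; i + 1 < parts.length; i += 2) {
            params.put(parts[i], parts[i + 1]);
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, String> p = parse("lat/40.7/lng/-74.0/radius/5");
        System.out.println(p.get("radius")); // prints "5"
    }
}
```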
{ "language": "en", "url": "https://stackoverflow.com/questions/7573795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to replicate visual studio setup project's "detected dependencies" functionality? Question Is there a canned solution that will detect (recursively) all dependencies of a given visual studio project file? This is basically what a visual studio setup project's "detected dependencies" functionality does -- but I need programmatic access to the list of dependencies (ideally from MSBuild). Background (read if you want to know about the actual problem I'm trying to solve) I am working on the automated build for a WinForms application that utilizes third party UI libraries. In order to successfully build the application, the libraries must be "installed" (via the vendor's installer) on build/dev machines (this puts the assemblies in the GAC and installs licensing components). The vendor's dlls must then be referenced via the GAC in our project files. Unfortunately, there is no way to avoid this "installation" requirement (I would love to use local references that are fetched from my source control system, as I do with nearly all of our other 3rd party references, but this is not possible in this case). We utilize a "plugin" architecture, so the executable project does not directly reference any of these components -- they are all indirectly referenced via the "plugin" projects (which are in turn referenced by the executable project). Therefore, setting the GAC references to "copy local = true" in the "plugin" projects only copies the vendor dlls into the output directory of the plugin project; they are not recursively copied to the output directory of the executable project. Hence, an xcopy deployment of the executable project's output directory does not work on a machine without these vendor dlls installed, as they are not present in the output directory. We currently use msi deployment (via a visual studio setup project) and would like to switch to xcopy deployment. 
The vendor dlls are currently getting packaged into our msi because the dependency is "detected" by whatever magic happens inside the setup project's "detected dependencies" functionality. My solution to make xcopy deployment work is to just require that the executable project directly reference the vendor dlls used by its referenced plugins with "copy local" set to true. To ensure that all required vendor dlls have been referenced in this manner, I would like to generate a list of the "detected dependencies" of the executable project, assert that all those dependencies are present in the executable project's output directory, and fail the build if they are not all present. I could do this myself by analyzing the csproj file, compiling a list of all "Reference" entries and recursively following project references. However, I am hoping that there is canned functionality somewhere that does this - especially considering that "detected dependencies" is smart enough to filter out the framework dlls but include my GAC-referenced vendor dlls. Thanks! A: You can most likely solve this with reflection - your plugins and their host probably share a common interface contract. Your code would scan all assemblies for usage of a set of known interface contracts after they are built, and add/update these in the installer project via IDE extensibility automation. Writing an IDE plugin that does this refresh manually may be one option; writing and registering an MSBuild task is the better one. I know this is not a 'canned solution', but the amount of work required should be less than one KLOC.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Scaling Out Socket Servers Assuming live chat clients (Skype, Windows Live Messenger) use sockets to stay connected to their respective services, what are some strategies the developers implement to scale out their servers? Even a system like Xbox LIVE, where users are able to chat and send out game invites to their online friends. The main problem is that each of these connections has to share state; some of this state needs to be queried by other clients (who could be connected to a different server behind a load balancer on the other side of the world). The most obvious piece of state is online status. Do these services use giant RAM-based caches (maybe something like memcached) or NoSQL databases (like Cassandra) which all servers around the world connect to in order to update and retrieve the required state information? I was wondering if this sort of solution would be fast (or reasonable) enough for real-time services like the ones I described above. My main problem is with memory. Distributing load is fairly straightforward (I hope) with a combination of load balancers and round-robin DNS balancing. A: Here is one way, though it's not necessarily concerned with memory-based caching.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to change the From address in Django Email? As noted in the docs, the SERVER_EMAIL setting is supposed to change the 'From' address in crash emails sent to ADMINS from the Django (1.3.1) server. But it's not. Does this work for you in 1.3.1? (Or any Django version) Django insists on just using my EMAIL_HOST_USER - my email login/actual address - as the from address. I'm using Gmail as an SMTP server, so I wonder if that could have something to do with it. Does Gmail block this sort of thing? I swear I've gotten this to work before. It's a little annoying because we have multiple projects that all seem to be emailing from the same address, and we have to dig through the traceback to see which project it is. A: The problem is Gmail. All the way down to the smtplib library, the correct 'from' address is specified, and this library sends the right address to Gmail. This Gmail Support page implies (especially near the bottom under "Note for POP/IMAP" users) that you need to add an address as an 'additional email address' under Gmail's settings to be able to send mail from it over Gmail's SMTP servers. This of course requires verification; since my 'from' address doesn't have an inbox (it's fake!) its not currently possible for me. But at least it's not a Django bug! : ) (Note: this is a pretty obvious way for Gmail to stop you from spamming, I'm sure that's why they do it.) A: From what I can see in the code (1.3.1) the stack trace emails are sent using the mail_admins method with SERVER_EMAIL as the specified from address: mail = EmailMultiAlternatives(u'%s%s' % (settings.EMAIL_SUBJECT_PREFIX, subject), message, settings.SERVER_EMAIL, [a[1] for a in settings.ADMINS], connection=connection) Which is defined as: class EmailMultiAlternatives(EmailMessage): [...] 
def __init__(self, subject='', body='', from_email=None, to=None, bcc=None, connection=None, attachments=None, headers=None, alternatives=None, cc=None): I would suggest putting trace output in EmailMultiAlternatives to verify that the proper email address is being used.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Stackoverflow error in BackgroundWorker ProgressChanged I have a search function in my program that uses a background worker in order to get the results. The Progress changed event is used to update the listview with the new item. Private Sub SearchWorker_ProgressChanged(ByVal sender As Object, ByVal e As System.ComponentModel.ProgressChangedEventArgs) Handles SearchWorker.ProgressChanged Dim itmX As ListViewItem Dim tmpCustomer As CustomerItem If e.UserState.ToString = "New" Then lstResults.Items.Clear() Else Try tmpCustomer = e.UserState itmX = lstResults.Items.Add(tmpCustomer.CustomerName) ' <-- Error here itmX.Tag = tmpCustomer.CustomerID itmX.Name = tmpCustomer.CustomerID itmX.SubItems.Add(tmpCustomer.City) itmX.SubItems.Add(tmpCustomer.State) itmX.SubItems.Add(tmpCustomer.Zip) Catch ex As Exception MsgBox(ex.Message) End Try End If progBar.Value = e.ProgressPercentage Application.DoEvents() End Sub And I get this error An unhandled exception of type 'System.StackOverflowException' occurred in System.Windows.Forms.dll I've tried this, but it doesn't make a difference Private Sub SearchWorker_ProgressChanged(ByVal sender As Object, ByVal e As System.ComponentModel.ProgressChangedEventArgs) Handles SearchWorker.ProgressChanged If e.UserState.ToString = "New" Then lstResults.Items.Clear() Else Try itmX = lstResults.Items.Add("Test") Catch ex As Exception MsgBox(ex.Message) End Try End If progBar.Value = e.ProgressPercentage Application.DoEvents() End Sub Edit: Oh, and if I just step through the code, it doesn't have any problems. 
Edit 2: Here is the backgroundworker DoWork event: Sub doSearch(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles SearchWorker.DoWork canceled = False If curSearch = doSearchText Then canceled = True Exit Sub End If curSearch = doSearchText SearchWorker.ReportProgress(0, "New") Dim rSelect As New ADODB.Recordset Dim CustomerID As Integer = MakeNumeric(doSearchText) Dim sSql As String = "SELECT DISTINCT CustomerID, CustomerName, City, State, Zip FROM qrySearchFieldsQuick WHERE " Dim sWhere As String = "CustomerID = " & CustomerID & " OR CustomerName Like '" & doSearchText & "%'" If Not doSearchText.Contains(" ") Then sWhere &= " OR FirstName Like '" & doSearchText & "%' OR LastName Like '" & doSearchText & "%'" Else Dim str() As String = doSearchText.Split(" ") sWhere &= " OR (FirstName Like '" & str(0) & "%' AND LastName Like '" & str(1) & "%')" End If Dim i As Integer = 0 Dim tmpCustomer As CustomerItem With rSelect .Open(sSql & sWhere & " ORDER BY CustomerName", MyCn, ADODB.CursorTypeEnum.adOpenStatic, ADODB.LockTypeEnum.adLockReadOnly) Do While Not .EOF If SearchWorker.CancellationPending Then canceled = True Exit Do End If Do While IsDBNull(.Fields("CustomerID").Value) .MoveNext() Loop tmpCustomer.CustomerID = "c" & .Fields("CustomerID").Value tmpCustomer.CustomerName = NZ(.Fields("CustomerName").Value, "").ToString.Trim tmpCustomer.City = Trim(NZ(.Fields("City").Value, "")) tmpCustomer.State = Replace(Trim(NZ(.Fields("State").Value, "")), ",", "") tmpCustomer.Zip = Trim(NZ(.Fields("Zip").Value, "")) SearchWorker.ReportProgress((i / .RecordCount) * 100, tmpCustomer) i += 1 Application.DoEvents() aMoveNext: .MoveNext() Loop .Close() End With End Sub A: I think the problem is likely this line: Application.DoEvents() If your BackgroundWorker is queueing up ProgressChanged events fast enough, each call to Application.DoEvents() will work through the message queue, come to a ProgressChanged event, update the progress, call 
Application.DoEvents(), work through the message queue, come to a ProgressChanged event, etc., essentially causing recursive behavior in your code. Try removing that call and see if the problem goes away. A: Application.DoEvents() That's the trouble maker. You added it because you noticed that the user interface still froze, even though you used a BGW. The problem is, when it pumps the events, your BGW has called ReportProgress again, causing ProgressChanged to run again, causing DoEvents to get called again. That works for maybe a few seconds, until the UI thread runs out of stack space. Kaboom. You'll have to delete the DoEvents() call and solve the real problem: your BGW is calling ReportProgress way too often, causing the UI thread to be flooded with invoke requests to call ProgressChanged, and causing it to no longer take care of its regular duties, including painting and responding to user input. Call ReportProgress no more often than 20 times per second. That looks smooth to the human eye. Collect calculation results so you'll have a batch of work ready to process. If your worker produces results faster than the UI thread can display them, then you have no option but to slow it down forcefully.
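The "no more than 20 times per second" advice can be enforced with a tiny rate gate that the worker consults before each ReportProgress call, skipping the call when the gate says no. A sketch (in Java rather than VB.NET, purely to show the shape; `ProgressGate` is a made-up name, and 50 ms is the 20-per-second figure from the answer):

```java
// Rate gate for progress reporting: allows at most one report per interval.
// The worker would call shouldReport(now) before ReportProgress and skip
// the report when it returns false.
public class ProgressGate {
    private final long intervalMillis;
    private boolean started = false;
    private long lastReport;

    public ProgressGate(long intervalMillis) { this.intervalMillis = intervalMillis; }

    public boolean shouldReport(long nowMillis) {
        if (!started || nowMillis - lastReport >= intervalMillis) {
            started = true;
            lastReport = nowMillis;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ProgressGate gate = new ProgressGate(50); // ~20 updates per second
        System.out.println(gate.shouldReport(0));  // prints "true"
        System.out.println(gate.shouldReport(10)); // prints "false" (10 ms elapsed)
        System.out.println(gate.shouldReport(60)); // prints "true"  (60 ms elapsed)
    }
}
```

Batch up the rows found since the last allowed report and pass the whole batch through ReportProgress, so no results are lost between updates.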
{ "language": "en", "url": "https://stackoverflow.com/questions/7573803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Do SQL constraints cause slowness? I have a few constraints (to default the value of a column) on a table that is being updated. The update is really slow, and I was wondering if it could be the constraints' fault. The constraint in question is: ALTER TABLE [dbo].[OrderCustomers] ADD CONSTRAINT [DF_OrderCustomers_AmountTotal] DEFAULT ((0.00)) FOR [AmountTotal] The update statement is just changing a few columns, one of which is the column in the constraint above, plus a few other columns that don't have FKs on them. FYI: I disabled all triggers to isolate the problem. A: It's highly unlikely that a default constraint on a column is going to even be noticeable. There are so many things that could cause a slow update. However, the first place I would look is any triggers on the table being updated. These could cause a whole slew of performance issues. One of the best ways to diagnose this is to fire up SQL Profiler and see what's happening on your SQL Server when you do an update. You might be quite surprised at what's happening. A: Unless you've coded them really badly, then no. By badly, I mean things like a UDF that accesses a table to generate a value, or sending an email RBAR in a trigger. A slow write can be caused by many things; I doubt constraints. See Why does an UPDATE take much longer than a SELECT?
{ "language": "en", "url": "https://stackoverflow.com/questions/7573812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Trying to resolve an InvalidAuthenticityToken issue and discovered NO SESSION is even being passed I'm debugging with IETester under Parallels on Mac OS. Every time I send out an AJAX request I consistently get an InvalidAuthenticityToken response. I've covered every possible issue. Then I threw in a debugger and compared session[:_csrf_token] to the form_authenticity_token. On my Mac they matched. But in my IETester/Parallels setup I get this: (rdb:5664) pp session {} NO SESSION AT ALL! QUESTION: IS THIS BECAUSE IETESTER IS A B#@CH? And is it possible that it will work with any other Windows operating system? A: YES. IETester doesn't store session data and is completely inadequate for testing your websites.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SH script to move files from one dir to another depending on the filename I am trying to write a sh script that will run when one of my downloads is completed. It should look for a specific filename on ~/Downloads and move it to a different dir depending on the filename. I.e. I have downloaded the last episode of Glee, the filename is: glee_some_trash_files_always_have.mkv It should be moved to ~/TVshows/Glee/ This is what I was able to do: #!/bin/bash if filename in ~/Downoads; then result= if filename = *glee*; then result= mv $filename ~/TVshows/Glee/ else if filename = *pokemon*; then result= mv $filename ~/TVshows/pokemon/ endif done Is my approach correct? Please note I am very new to sh. Thanks in advance. ############################################################################### Edit: Here is my script, I hope someone else could find it useful: #!/bin/bash cd "$HOME/Downloads" # for filename in *; do find . -type f | while IFS= read filename; do # Look for files in all ~/Download sub-dirs case "${filename,,*}" in # this syntax emits the value in lowercase: ${var,,*} (bash version 4) *.part) : ;; # Excludes *.part files from being moved move.sh) : ;; # *test*) mv "$filename" "$HOME/TVshows/Glee/" ;; # Using move there is no need to {&& rm "$filename"} *test*) scp "$filename" "imac@imac.local:/users/imac/Desktop/" && rm "$filename" ;; *american*dad*) scp "$filename" "imac@imac.local:/users/imac/Movies/Series/American\ Dad/" && rm "$filename" ;; *) echo "Don't know where to put $filename" ;; esac done A: The mv command can move multiple files at a time. The last argument is treated as a directory name. The trailing / is important; if there's one matching file name, and the target directory doesn't exist (say, because you misspelled it), it will create it as a file. mv ~/Downloads/*glee* ~/TVshows/Glee/ mv ~/Downloads/*pokemon* ~/TVshows/pokemon/ A: This is my script for serial sorting. 
#!/bin/bash PATH_FROM=/your/download/dir PATH_TO=/path/serial/directory cd $PATH_FROM ls -1 *{mkv,avi,srt,mp4} | sed -e 's/\.[s|S][0-9].*$//g' | uniq | while read -r serial do folder=$(echo $serial | tr A-Z a-z) folder=${folder/the./} folder=`echo ${folder//_/.}` folder=`echo ${folder//./ }` folder=( $folder ) folder=`echo "${folder[@]^}"` ls -1 ${serial// /.}.* | sed -e 's/'$serial'\.[s|S]//g' | sed -e 's/\..*$//g' | uniq | while read -r s do season=s$(echo "$s" | sed -e 's/[e|E].*$//g' | sed -e 's/^0//g') mkdir -p "$PATH_TO/$folder/$season" mv -f $serial.?$s* "$PATH_TO/$folder/$season/" log=`date +"[%d/%m/%Y %X]"` echo $log" "$serial" success sync with "$PATH_TO"/"$folder"/"$season >> /path/to/logfiledir/log.txt done done A: This is where the shell's case statement comes in handy: #!/bin/bash cd "$HOME/Downloads" for filename in *; do # this syntax emits the value in lowercase: ${var,,*} (bash version 4) case "${filename,,*}" in glee*) mv "$filename" "$HOME/TVshows/Glee/" ;; pokemon*) mv "$filename" "$HOME/TVshows/pokemon/" ;; *) echo "don't know where to put $filename";; esac done
{ "language": "en", "url": "https://stackoverflow.com/questions/7573815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Get the first letter of each word in a string using regex I'm trying to get the first letter of each word in a string using regex, here is what I have tried: public class Test { public static void main(String[] args) { String name = "First Middle Last"; for(String s : name.split("(?<=[\\S])[\\S]+")) System.out.println(s); } } The output is as follows: F M L How can I fix the regex to get the correct output? A: Edit Took some suggestions in the comments, but kept the \S because \w is only alpha-numeric and might break unexpectedly on any other symbols. Fixing the regex and still using split: name.split("(?<=[\\S])[\\S]*\\s*") A: Why not simply: public static void main(String[] args) { String name = "First Middle Last"; for(String s : name.split("\\s+")) System.out.println(s.charAt(0)); } A: (Disclaimer: I have no experience with Java, so if it handles regexes in ways that render this unhelpful, I apologize.) If you mean getting rid of the spaces preceding the M and L, try adding optional whitespace at the end (?<=[\\S])[\\S]+\\s* However, this may add an extra space in the case of single-letter words. This may fix that: (?<=[\\S])[\\S]*\\s* A: Sometimes it is easier to use a different technique. In particular, there's no convenient method for “get all matching regions” (you could build your own I suppose, but that feels like a lot of effort). So we transform to something we can handle: String name = "First Middle Last"; for (String s : name.replaceAll("\\W*(\\w)\\w*\\W*","$1").split("\\B")) System.out.println(s); We could simplify somewhat if we were allowed to assume there were no leading or trailing non-word characters: String name = "First Middle Last"; for (String s : name.replaceAll("(\\w)\\w*","$1").split("\\W+")) System.out.println(s); A: I recently had this question in an interview and came up with this solution after looking here. 
String input = "First Middle Last"; Pattern p = Pattern.compile("(?<=\\s+|^)\\w"); Matcher m = p.matcher(input); while (m.find()) { System.out.println(m.group()); } This regex won't pick up non-word characters at the start of strings. So if someone enters "Mike !sis Strawberry", the return will be M, S. This is not the case with the selected answer, which returns M, !, S. The regex works by searching for single word characters (\w) that are preceded by one or more space characters (\s+) or are at the start of a line (^). To modify what is being searched for, the \w can be changed to other valid regex entries. To modify what precedes the search character, modify (\s+|^). In this example \s+ is used to look for one or more whitespace characters and the ^ is used to determine if the character is at the start of the string being searched. To add additional criteria, add a pipe character followed by a valid regex search entry. A: It's not fixing the regex, but adding a .trim() to the output string still works: String name = "First Middle Last"; for(String s : name.split("(?<=[\\S])[\\S]+")) System.out.println(s.trim()); output: F M L
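The interview answer's idea — take the word character that sits at the start of the string or right after whitespace — can be prototyped quickly in other regex engines too. As an aside (not part of the original answers), here is the same logic in Python, where a capturing group stands in for the lookbehind:

```python
import re

def first_letters(text):
    # Capture a word character at the start of the string or
    # immediately after whitespace, mirroring (?<=\s+|^)\w in Java.
    return re.findall(r"(?:^|\s)(\w)", text)

print(first_letters("First Middle Last"))     # ['F', 'M', 'L']
print(first_letters("Mike !sis Strawberry"))  # ['M', 'S']
```

As in the Java answer, "!sis" yields nothing because '!' is not a word character, so only M and S come back.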
{ "language": "en", "url": "https://stackoverflow.com/questions/7573817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Convert Unicode Object to Python Dict A request object that I'm dealing with has the following value for the key "address": u"{u'city': u'new-york', u'name': u'Home', u'display_value': u'2 Main Street'}" I need to operate on this unicode object as a dictionary. Unfortunately, json.loads() fails because it is not a json compatible object. Is there any way to deal with this? Do I have to work with the the json.JSONDecoder object? A: >>> ast.literal_eval(u"{u'city': u'new-york', u'name': u'Home', u'display_value': u'2 Main Street'}") {u'city': u'new-york', u'name': u'Home', u'display_value': u'2 Main Street'}
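For completeness, a runnable version of the accepted approach: ast.literal_eval safely evaluates the Python-literal string that json.loads rejects (JSON requires double quotes and has no u prefixes).

```python
import ast

raw = u"{u'city': u'new-york', u'name': u'Home', u'display_value': u'2 Main Street'}"
# literal_eval only evaluates Python literals (dicts, lists, strings,
# numbers, ...), so unlike eval() it cannot run arbitrary code.
address = ast.literal_eval(raw)

print(address["city"])  # new-york
```

The u'...' string prefix is accepted by Python 2 and by Python 3.3+, so the snippet works on modern interpreters as well.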
{ "language": "en", "url": "https://stackoverflow.com/questions/7573822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Can't style a td element I'm not sure why, but I can't seem to style a td element in my script. I'm trying to remove a border from all the td elements in the table-subtitles, and I guess my CSS is wrong somehow. PS: I was able to remove the border by inline CSS only. HTML: <table> <thead> <tr> <th width="100%" colspan="5">My Recent Activity</th> </tr> </thead> <tbody> <tr class="table-subtitles"> <td style="border: 0"><h4>Date</h4></td> <td><h4>Description</h4></td> <td><h4>Amount</h4></td> <td><h4>Due</h4></td> <td><h4>Status</h4></td> </tr> <?php foreach($this->payments as $payment): ?> <tr> </tr> <?php endforeach; ?> <?php if ((count($this->payments)==0)) : ?> <tr> <td style="border: 0; text-align: center" colspan="6" class="info">No payments made yet</td> </tr> <?php endif; ?> </tbody> </table> CSS: #payments-content { width: 480px; float: left; margin-right: 20px; } #payments-sidebar { width: 270px; float: left; } .ui-button-large .ui-button-large-text { font-size: 20px; width: 270px; } // new tables design .table-subtitles td{ border-bottom: 0 !important; } A: // is not a CSS comment. You must use /* */ instead. A: You might want to make friends with border-collapse and th. A: When I put just this table into a test HTML file I'm getting no borders by default. Therefore this is probably an inheritance issue with some previous table styling you have or are importing from your CSS. Normally, the solution to such problems is overriding whatever style it's inheriting by making a more specific selector in CSS. So for example, give your table an ID and have your CSS style be: #mytable .table-subtitles td { border-bottom: 0; } Another great way to see exactly what's happening is to use Firebug in Firefox or the Chrome inspector to see exactly what styles your table rows are picking up, inheriting or being overridden. A: Did you try border-bottom: 0 !important? A: Your problem is with your selector.
Doing .table-subtitles td means: The TD element inside the element with the "table-subtitles" class. What you need is: td.table-subtitles { border-bottom:0; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7573827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it possible to call a web service with InDesign javascript? I'm an in-house developer for a print company. We use Adobe InDesign CS3 and CS5 to create documents for printing. I created a script in Adobe ExtendScript that creates an InDesign document and handles some basic conversions when the client fails to do so themselves. I used Javascript to write this script. Is it possible to call a web service through such a script? If so, how? If not, what would be the best way to call a web service from the desktop? Thank you. A: As of 2022 I would point to * *Marc Autret's IdExtenso https://github.com/indiscripts/IdExtenso *My solution restix https://github.com/grefel/restix Extendables was already mentioned (it does not exist anymore): it is not jQuery; instead it is a library for InDesign scripting. The most complete discussion is found at Rorohiko's blog, with a nice, straightforward example. A: No and Yes. No, there is no way (afaik) to make InDesign call a web service from a script. It's very possible and often done from InDesign plugins (you can execute arbitrary C++ code, so you can do whatever). However, that's an entirely different beast to learn. Yes, it's possible to do from ExtendScript using a library. So basically your script would call the web service to get data (maybe using parameters gotten from InDesign or the document) and then send the returned values into other InDesign script functions to perform the operations. A basic sample can be found here that uses 'Extendables'. EDIT: Since there seems to be some confusion: The documents aren't the ones running the script and very rarely even contain them. The scripts are saved in an InDesign-specific Javascript format (.jsx) and interpreted by the InDesign scripting engine.
A: Besides Extendables, there are two alternative options: Adobe Bridge/BridgeTalk Can't say for specific versions of the Adobe suite, but if you can use or have Adobe Bridge/BridgeTalk, you can make use of Adobe's cross-app communication and the HttpConnection class available to Bridge (as per the SDK doc), and have InDesign call Bridge to make the HTTP request and pass results back to InDesign. I don't have a specific example for InDesign, but here are some meant for Illustrator; I would assume they would port to InDesign easily. https://gist.github.com/daluu/2d9dec72d0863f9ff5a7 https://gist.github.com/mericson/6509997 Make web service calls externally and interface to ExtendScript Adobe's scripting API engine is not strictly ExtendScript/Javascript. You can also use the script API from COM/VBScript (on Windows) or AppleScript (on Mac), which execute external to InDesign but interact with InDesign via the API. For Windows, by COM, I mean any language that supports COM, so it's not just the default VBScript (it can be Python, Perl, PHP, Java, .NET, even Microsoft JScript - their version of Javascript for command line/desktop/etc.). Using the script API on a different engine, you make the web service call externally from another language (VBScript, AppleScript, etc.), then pass the results into the ExtendScript via the script API call (in COM/AppleScript) of application.doScript('ExtendScript code snippet here') (or doJavascript), where the ExtendScript snippet could be a short one that uses ExtendScript includes to pull in the actual JSX file, then call an ExtendScript function/method, passing it the web service results as arguments. An example of this technique (not covering the web service call portion) is described here in some of the solutions: Is it possible to execute JSX scripts from outside ExtendScript? A: You can also call AppleScript or VB depending on the OS and use some command-line utility like curl to call your webservice.
Also, you can give getUrl a try - a free script from Rorohiko that eases web communication inside ExtendScript. A: ... probably if you use InDesign to create a PDF out of the doc. In the PDF you probably can. But from the raw InDesign doc, probably not. I'd also vote that you won't be able to run JS from the document before it's open. I'd suggest taking it up with InDesign experts. I'm curious, however, what you'll come up with, since I remember that ID does let you include interactivity in the document. Please post back if you find your answer somewhere else.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: C# / LINQ - having trouble querying database with LINQ I have a database with two entities, A and B. A and B have a many-to-many relationship between them (there's also an AB table that is created automatically to realize this). A has A_prop (key) and B has B_prop (key). I want to, given a specific B_prop, find all the A's that have, in their relationship of B's (call them Ayes and Bees for the respective Navigation Properties), the B with the specific B_prop. So I have this code: public class Repository { private ABEntities entities = new ABEntities(); public IQueryable<A> FindAllA(string b_prop) { return from b in entities.Bs where b.B_prop == b_prop select b.Ayes; } } The return types here don't match. What I would really like is to have a List of A's, or something similar that I can work with in the following manner: List<A> listofa = repository.FindAllA("some string"); foreach (A a in listofa) { // Do my stuff here. } EDIT: Thanks for the replies. This is a solution (tested) to my problem: public List<A> FindAllA(string b_prop) { return (from b in entities.Bs where b.B_prop == b_prop select b.Ayes).First().ToList(); } A: public class Repository { private ABEntities entities = new ABEntities(); public IList<A> FindAllA(string b_prop) { return (from b in entities.Bs where b.B_prop == b_prop select b.Ayes).ToList(); } } I highly recommend you don't return queryables from a repository since each queryable implementation is different (LINQ/Entities/SQL/etc.) A: Rather than calling .First on your initial subset, consider using SelectMany to flatten your result set: public List<A> FindAllA(string b_prop) { return (from b in entities.Bs where b.B_prop == b_prop from a in b.Ayes select a).ToList(); } This way if for some reason you had multiple records that matched in your Bs table, all of the associated As would be returned rather than just the first set matching up to your first B result.
The problem with your first try is that you were returning IQueryable<EntitySet<Ayes>> rather than IQueryable<Ayes>. SelectMany flattens this relationship out. A: A LINQ query usually returns an IQueryable<type>. You can convert this to a list by doing: return (from b in entities.Bs where b.B_prop == b_prop select b.Ayes).ToList(); in your function. A: You have one IQueryable and the other one is therefore not matching. Just return List<A>; that should work.
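The difference between .First().ToList() and the SelectMany-style flattening is easy to see outside C#. As an illustrative aside, here is a rough Python sketch of the same select-and-flatten shape; the entity names and data are placeholders, not the poster's real model:

```python
# Hypothetical stand-ins for the B entities and their related A collections.
bees = [
    {"B_prop": "x", "Ayes": ["a1", "a2"]},
    {"B_prop": "x", "Ayes": ["a3"]},
    {"B_prop": "y", "Ayes": ["a4"]},
]

def find_all_a(b_prop):
    # The nested iteration flattens the per-B collections into one list,
    # mirroring LINQ's SelectMany rather than taking only the first match.
    return [a for b in bees if b["B_prop"] == b_prop for a in b["Ayes"]]

print(find_all_a("x"))  # ['a1', 'a2', 'a3']
```

A .First()-style version would return only ['a1', 'a2'] here, which is exactly why the SelectMany answer is the safer shape when several B rows can match.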
{ "language": "en", "url": "https://stackoverflow.com/questions/7573835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Send customized response after custom certificate validation with Jetty 7 failed we are currently using a certificate based login for our webapp (running Jetty 7.4) With JSSLUtils I configured a custom org.jsslutils.sslcontext.X509SSLContextFactory that basically inspects a certificate, validates it against the backend and also checks if the user associated with this certificate is authorized to use a particular servlet. This works fine, the only problem is that if the user is not authorized or the certificate is not valid anymore etc. all I can do is throw new CertificateException("Not allowed to access ...."); The question now is, can I intercept this error somewhere/somehow server side and send back a user friendly page with some text indicating why this failed? All I get at the moment is the default browser page for SSL errors, for example in Chrome Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error. I tried specifying an ErrorHandler for our servletholder (in the spring configuration file), but it is not called as it seems to be "higher" up the stack. Any suggestions? Thanks, Joey
{ "language": "en", "url": "https://stackoverflow.com/questions/7573837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best Practices for java IO for creating a large CSV file Hi, I need to create a few large CSV files; the number of entries could be 2 million, so I was wondering how to do it efficiently. A few questions crop up in my mind: 1. When we write a file via a BufferedWriter, how often should we flush? I think that BufferedWriter maintains its own buffer and flushes it automatically once the buffer is full; if that is the case, then why is the flush method there at all? * *As the file I am going to create would be big, when I start writing will the file automatically be committed to disk (before calling writer.close()), or does the whole file remain in main memory till I close the writer? * *By committing I mean that no part of the already-written portion is in main memory, i.e., it is ready for GC. A: * *The BufferedWriter implementation should do a pretty good job of flushing when appropriate. In your case, you should never need to call flush. As for why there is a flush method, this is because sometimes you will want output written immediately rather than waiting for BufferedWriter's buffer to become full. BufferedWriter isn't just for files; it can also be used for writing to the console or a socket. For example, you may want to send some data over a network but not quite enough data to cause BufferedWriter to automatically flush. In order to send this data immediately, you would use flush. *All the data you have written to the BufferedWriter will not remain in memory all at the same time. It is written out in pieces (flushed) as BufferedWriter's buffer fills up. Once you call close at the end, BufferedWriter will do one more final flush for everything remaining in its buffer that it hasn't already written to disk and close the file. A: If you wrap your writer in a BufferedWriter, you specify a number of bytes to be saved in memory before a physical write to disk happens. (If you don't specify, there's a default.
I think it's 8k but please don't quote that as gospel.) If you use a PrintWriter, I think it writes to disk with each line. Other writers write to disk with each i/o call. There is no buffering. Which usually makes for sucky performance. That's why all disk writers should be wrapped in a BufferedWriter. A: My inclination would be to work in segments, flushing to disk after every 1k or 2k lines. With that much data, it would seem to be pushing a memory limit. Since this operation is likely to be slow already, fail on the safe side and write to disk often. That's my $0.02 anyways :) A: BufferedWriter uses a fixed-size buffer, and will flush automatically when the buffer gets full. Hence any big file will be written in chunks. The flush method exists because sometimes you might wish to write something to disk before the buffer is full. A typical example is a BufferedWriter wrapping a SocketOutputStream. If you do: writer.write(request); reader.read(response); your thread is likely to block indefinitely, because the request will not be sent until the buffer gets full. You'd therefore do: writer.write(request); writer.flush(); // make sure the request is sent now reader.read(response); instead.
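The buffering behavior described in these answers is not Java-specific. As an aside, a small Python sketch (an analogy, not Java's BufferedWriter) makes the point observable: nothing reaches the underlying sink until the buffer fills or flush() is called.

```python
import io

sink = io.BytesIO()
writer = io.BufferedWriter(sink, buffer_size=16)  # deliberately tiny buffer

writer.write(b"small write")   # fits inside the buffer
assert sink.getvalue() == b""  # nothing has reached the sink yet

writer.flush()                 # explicit flush, like Java's flush()
assert sink.getvalue() == b"small write"
```

This is also why close() matters at the end of a big CSV write: it performs that final flush for whatever is still sitting in the buffer.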
{ "language": "en", "url": "https://stackoverflow.com/questions/7573838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: jQuery class selector -- what scope does it default to? I have a question about the class selector in jQuery. I'm looking at a page which uses a jQuery plugin called slidedeck, and the page author has two <div>s showing two different slidedeck settings. Along these lines: <div id="slidedeck_frame" class="skin-slidedeck"><dl class="slidedeck"> <!-...HTML in here--> </div> <script type="text/javascript"> $('.slidedeck').slidedeck({ autoPlay: true, cycle: true, autoPlayInterval: 2500, // 2.5 seconds hideSpines: true }); </script> <div id="slidedeck_frame" class="skin-slidedeck"><dl class="slidedeck"> <!-...HTML in here--> </div> <script type="text/javascript"> $('.slidedeck').slidedeck(); </script> So you have two <div>s sharing the same ids and CSS classes for their children, but with different slidedeck settings. I would have thought the jQuery class selector would have applied the last slidedeck setting to both <dl>s, but in fact they each use the slidedeck settings directly below them. I must not be understanding the jQuery selector scope (quite likely), or is there something else at play here possibly? A: I must not be understanding the jQuery selector scope (quite likely) You are misunderstanding the concept of IDs. ID attributes are meant to be unique across the elements in DOM. I hope that clears it up :) A: Duplicate IDs are not valid in HTML. The behavior is not defined. EDIT In this case, as you are not using the ID as a selector, your jquery selector should return both of the tags with the class in the selector. http://jsfiddle.net/cJ4wp/
{ "language": "en", "url": "https://stackoverflow.com/questions/7573845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ListView obscuring sibling views? I can't seem to get the footer navigation bar to show up in this layout. It's obscured by the ListView no matter how I set layout_weight on the navigation bar or change the layout_height of the ListView to one of FILL_PARENT or WRAP_CONTENT. Any ideas how to get the correct result? Essentially, I want the footer to be fixed at the bottom of the screen. (BTW, I need to keep the nested LinearLayouts. However, I can change them to other ViewGroups.) <LinearLayout android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical"> <LinearLayout android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical"> <ListView android:layout_width="fill_parent" android:layout_height="fill_parent"/> <include layout="@layout/wizard_navbar_last" /> </LinearLayout> </LinearLayout> Update: and I should point out that I'm actually adding the ListView via code like so: ViewGroup page3 = (ViewGroup) findViewById(R.id.wizard_page3_container); //parent linearlayout page3 = (ViewGroup) page3.getChildAt(0); //linearlayout LayoutInflater inflater= (LayoutInflater) LayoutInflater.from(getApplicationContext()); folderList = (ListView) inflater.inflate(R.layout.wizard_dropbox_list, null); page3.addView(folderList, 0); Update2: the XML for the navbar: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/wizard_navbar_last" android:paddingTop="20dp" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="horizontal"> <include layout="@layout/wizard_previous_button"/> <View android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" /> <Button android:id="@+id/wizard_finish" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Done" android:textSize="20dp"/> </LinearLayout> And 
previous_button is ... you guessed it, just a button. As for wizard_dropbox_list, it's the ListView as shown here. A: Try wrapping the ListView and NavBar in a RelativeLayout. Set the ListView to alignParentTop="true" and layout_height="wrap_content", and set the NavBar to alignParentBottom="true" and layout_height="wrap_content". That should give you the effect you're looking for.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Google Go: Why does the http server package not serve more than 5 simultaneous requests? I'm trying to code a small http server for later extension in Google's Go language. I am using Go on Windows (MinGw compiled version). This is quite easy in this language since it already has the necessary package: package main import ( "http" "io" "os" "fmt" "strconv" ) func FileTest(w http.ResponseWriter, req *http.Request) { w.Header().Add("Content-Type", "image/jpeg") w.Header().Add("Content-Disposition", "inline; filename=image.jpg") inName := "d:\\googlego\\somepic.jpg"; inFile, inErr := os.OpenFile(inName, os.O_RDONLY, 0666); if inErr == nil { inBufLen := 16; inBuf := make([]byte, inBufLen); _, inErr := inFile.Read(inBuf); for inErr == nil { w.Write(inBuf) _, inErr = inFile.Read(inBuf); } } inErr = inFile.Close(); } func MainPage(w http.ResponseWriter, req *http.Request) { io.WriteString(w, "Hi, download here: <a href=\"/FileTest\">HERE</a>") } func main() { fmt.Print("Port: ") var hi int fmt.Scanf("%d", &hi) http.HandleFunc("/FileTest", FileTest) http.HandleFunc("/", MainPage) err := http.ListenAndServe("0.0.0.0:" + strconv.Itoa(hi), nil) if err != nil { fmt.Print(err) fmt.Print((hi)) } } This starts a server that serves a main page and a download from an image. 
Both work very well and I get very good results from ab (Apache benchmark) up to 6 concurrent threads: > ab -n 10000 -c 6 http://localhost:8080/ Concurrency Level: 6 Time taken for tests: 1.678096 seconds Complete requests: 10000 Percentage of the requests served within a certain time (ms) 50% 1 66% 1 75% 1 80% 1 90% 2 95% 2 98% 2 99% 2 100% 3 (longest request) When the concurrency level is set higher, this happens: >ab -n 1000 -c 7 http://localhost:8080/ Concurrency Level: 7 Time taken for tests: 10.239586 seconds Complete requests: 1000 Percentage of the requests served within a certain time (ms) 50% 1 66% 2 75% 2 80% 3 90% 499 95% 505 98% 507 99% 507 100% 510 (longest request) Note that I only made 1'000 requests this time and it still took almost 6 times as much time. Both benchmarks don't even request the file yet. I don't know a lot about Go yet, but it seems that the Go runtime doesn't create enough OS threads to put the goroutines on, or something like that? EDIT: I downloaded the new r60.2 from 07.10.2011. Now it went even worse: >ab -c 7 -n 1000 http://localhost:8080/ Concurrency Level: 7 Time taken for tests: 12.622722 seconds Complete requests: 1000 Percentage of the requests served within a certain time (ms) 50% 1 66% 1 75% 2 80% 2 90% 496 95% 503 98% 506 99% 506 100% 507 (longest request) A: As of today (Sep 2011) the Windows port of Go is a work-in-progress. It lags behind the other supported platforms (Linux, etc.) in some important measures including stability and performance (though it is improving every day). I would suggest that you try your test on a 64-bit Linux platform, and see how it differs, then maybe you can start deconstructing what's going wrong under Windows. 
A: I just tried this benchmark with Go 64-bit at tip, and got the following results (on a Core 2 Duo 2GHz, Windows 7 x64): C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin>ab -c 7 -n 1000 http://localhost:8080/ Concurrency Level: 7 Time taken for tests: 0.458 seconds Complete requests: 1000 Percentage of the requests served within a certain time (ms) 50% 3 66% 3 75% 3 80% 3 90% 4 95% 5 98% 7 99% 8 100% 9 (longest request)
{ "language": "en", "url": "https://stackoverflow.com/questions/7573850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I create a birthday form for registration in Zend Framework? I am trying to create a drop down for birthday. I want 3 different drop downs (1 for month, 1 for date, 1 for year). I understand how to do it separately, but I don't know what's the best way to combine them, so I can store it in 1 field in MySQL. A: You should look at Composite Elements which are multiple form elements that are rendered, and validated together as one. The example there is a birthday element similar to what you want except they use text fields instead of dropdowns to simplify things. If you look at that example you should be able to create one using select elements instead of text elements. Also, check out this blog post from Matthew Weier O'Phinney (ZF project lead) on creating composite elements. He does the same birthday example from the ZF reference guide, but may be helpful as well. Some of the user comments on there may be helpful as well. If it all seems like too much work for now, you can render them as separate elements and "put them together" in your controller/form validation routines, and insert it into the database as a single value (YYYY-mm-dd), and then when you read back from the database, you can split that up and populate each individual select element with their respective date parts. This wouldn't be the best way, but if you are beginning with Zend Framework, creating composite elements, decorators and validators can be a daunting task at first.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Ruby on Rails Form Lookup with search and filters Basically I have a users table and a companies table. When a new user is created, they are assigned to a company. I am trying to find a way so that when a new user is created, they can click a magnifying glass icon next to the company name and it brings up a smaller window that shows a list of the available companies. From this list they are able to filter and sort the companies and click on one of them to have this part of the new user form filled out. What is the best way to approach this in Ruby on Rails (v3.1.x)? UPDATE: In an effort to find the solution, I've started with a drop-down box. I can settle for this for now. However, I do want to make sure that I can reference back to this information in the User view index to display the company name instead of the company code. <% label = content_tag("label", "Owner Company", :for => "companies_name") %> <% form_field = collection_select("user", "ownercode", Company.all, "companycode", "name") %> <%= content_tag(:div, "#{label} #{form_field}".html_safe,:class => "field") %> In my User index view I have <td><%= user.ownercode.company.name %></td> to try and display the name of the company that this user has been assigned to. The top part works now when I edit the user. It will show the name of the company that they are assigned to. However, my mind is slipping on how to show the company name in the users index.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I delete a custom UITableViewCell without using commitEditingStyle? I have a custom UITableViewCell that has a delete button on it at all times. When the delete button is pressed, the current design is to flash an alert confirming the delete with Yes/No. So far, all of this is working. The problem is, actually pressing 'Yes' does not update the UITableView. It will delete the data from the model, but the row will still be there. [table beginUpdates]; //modify model code goes here [table deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationLeft]; [table endUpdates]; [table reloadData]; So the above code will update the model, but not the view. I can tell that the model is being updated, because: (A) attempting to delete the same cell again results in a crash (B) moving to another screen and coming back results in the cell being deleted I would like the result of (B) without having to leave the screen. I would not like to use commitEditingStyle unless there is a way to do this without the user knowing they are editing. I certainly do not want the standard delete button or the swipe-to-delete functionality. A: As I understand it, you delete your model. In situation B, the view will appear again, which reloads your data and builds up the UITableView. When you remove your model, do you remove it from the collection? I think not. Your application will rebuild the view with the same data (your model is not removed from memory when you reload your data). Your flow should be as follows: View loads/appears Load Data Load table Model Delete Action Delete Data (Re)Load Data Load table I'm not sure, but I think you forgot to reload the data. Another tip: you should use swipe to delete. It is the standard defined in the Human Interface Guidelines (HIG).
{ "language": "en", "url": "https://stackoverflow.com/questions/7573856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Passing C++ Lambda Functions I've been searching everywhere for this, and I don't seem to be able to find a straight answer. Some sources say this isn't possible, but that only raises more questions for me, which I'll explain further below. So here's the situation. Suppose I have a custom container class with a selection function like below (this is just an example): template <typename T> class Container { public: // ... Container<T> select(bool (*condition)(const T&)) const; // ... }; So as you can see, the select function takes a pointer to a condition function. This is a function that defines which items should be selected. So, an example use of this would be something similar to: bool zero_selector(const int& element) { return (element == 0); // Selects all elements that are zero } Now if I have a container filled with, say s = { 1, 1, 0, 0, 1, 0, 1, 0 }, I could select a subset of these that would only contain zeroes using: t = s.select(&zero_selector); // t = { 0, 0, 0, 0 } As you can see, this is a bit clunky. Lambda functions would make this much more elegant, so then I could use (I'm not sure if this is the correct syntax for it), for example: t = s.select([&] (int x) -> bool { return (x == 0); }); My question is, is this possible? If so, what should my function prototype be for Container::select() to accept a lambda as one of its parameters? If it isn't possible, then how is something like std::for_each implemented that can use a lambda expression as one of its arguments? Any resources that would clearly explain this would be much appreciated. Everything I've found just gives examples of lambda functions and using std::function<> to pass them as parameters, but nothing explains how std::for_each works with lambda functions. I'd like to note that this code isn't compiled/tested as-is. It's for demonstration purposes only. I have tried implementing the same principles in the actual project and it doesn't work. 
A: There's no need to add the knee-jerk [&]-capture. Your lambda doesn't need it: [] (int x) -> bool { return (x == 0); } Captureless lambdas are convertible to the corresponding function pointer, so this should work out of the box. That said, you should probably declare the select function to accept std::function, to which all lambdas are convertible, capturing or not: Container<T> select(std::function<bool(const T&)> predicate) const; A: You need to declare your lambda as stateless (that is, with an empty capture specification [](int x)-> bool {...}) for it to be convertable to a function pointer.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: Jstree - precheck checkboxes I'm using JsTree 1.0 and having trouble pre-checking checkboxes using the checkbox plugin. Here's my full code: $(".tree").bind("loaded.jstree", function (event, data) { $('.tree li.checked').each(function () { $(this).prop("checked", true); }) }).jstree({ "core" : { "animation" : 0}, "json_data" : { "ajax" : { "url" : "/admin/posts/get_taxonomy_tree", "data" : function (n) { return { id : n.attr ? n.attr("id") : 0 }; } }, "progressive_render" : true }, "checkbox" : { "real_checkboxes" : true, "real_checkboxes_names" : function(n){ return [("term_taxonomy_id_" + (n[0].id || Math.ceil(Math.random() * 10000))), 1]; } }, "themes" : { "url" : "/assets/admin/js/jstree/themes/default/style.css", "icons": false }, "plugins" : [ "themes", "json_data", "checkbox" ] }).delegate("a", "click", function (event, data) { event.preventDefault(); }); I've added the bind handler for loaded.jstree, but this isn't correct and doesn't work. Any ideas? Thank you! EDIT: The solution is to add the class jstree-checked; this will pre-check the box by default. A: The solution is to add the class 'jstree-checked'; this will pre-check the box by default. A: Make sure to add the "checked" class, because it is used to pre-check boxes on load =) A: The following code may resolve your issue. $('.tree').jstree("check_all");
{ "language": "en", "url": "https://stackoverflow.com/questions/7573859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Wordpress Dispersing Multisites I have 15-plus sites locked into a Wordpress multisite installation by a previous developer, and the owner of the company decided that it would be in everyone's best interests to break these up and segregate them to different servers. I am having issues trying to do this with plugin data not following. Is there a better way to do this? My attempts so far are outlined like this. * *I attempted to log in to one site and export its XML, then merge it and the theme into a new installation. This allowed the site to carry over, but I lost all the data for plugins and users. *I dumped the entire database for the MU install, then tried to weed out all irrelevant tables and files. This achieved what I wanted, but left around a lot of bad table structures and many files that aren't in use, making it overly cluttered and slow. *Recreate each site by hand. This also works, but adds weeks to creating each site individually. My database structure weighs in at around 49.8MB and looks like this at the moment. 
*DB: multisite (200) *Tables: wp_13_posts wp_ulc_2_commentmeta wp_ulc_2_comments wp_ulc_2_links wp_ulc_2_options wp_ulc_2_postmeta wp_ulc_2_posts wp_ulc_2_terms wp_ulc_2_term_relationships wp_ulc_2_term_taxonomy wp_ulc_3_commentmeta wp_ulc_3_comments wp_ulc_3_links wp_ulc_3_options wp_ulc_3_postmeta wp_ulc_3_posts wp_ulc_3_terms wp_ulc_3_term_relationships wp_ulc_3_term_taxonomy wp_ulc_4_commentmeta wp_ulc_4_comments wp_ulc_4_links wp_ulc_4_options wp_ulc_4_postmeta wp_ulc_4_posts wp_ulc_4_terms wp_ulc_4_term_relationships wp_ulc_4_term_taxonomy wp_ulc_6_commentmeta wp_ulc_6_comments wp_ulc_6_links wp_ulc_6_options wp_ulc_6_postmeta wp_ulc_6_posts wp_ulc_6_terms wp_ulc_6_term_relationships wp_ulc_6_term_taxonomy wp_ulc_7_commentmeta wp_ulc_7_comments wp_ulc_7_links wp_ulc_7_options wp_ulc_7_postmeta wp_ulc_7_posts wp_ulc_7_terms wp_ulc_7_term_relationships wp_ulc_7_term_taxonomy wp_ulc_8_commentmeta wp_ulc_8_comments wp_ulc_8_links wp_ulc_8_options wp_ulc_8_postmeta wp_ulc_8_posts wp_ulc_8_terms wp_ulc_8_term_relationships wp_ulc_8_term_taxonomy wp_ulc_9_commentmeta wp_ulc_9_comments wp_ulc_9_links wp_ulc_9_options wp_ulc_9_postmeta wp_ulc_9_posts wp_ulc_9_terms wp_ulc_9_term_relationships wp_ulc_9_term_taxonomy wp_ulc_10_commentmeta wp_ulc_10_comments wp_ulc_10_links wp_ulc_10_options wp_ulc_10_postmeta wp_ulc_10_posts wp_ulc_10_terms wp_ulc_10_term_relationships wp_ulc_10_term_taxonomy wp_ulc_11_commentmeta wp_ulc_11_comments wp_ulc_11_links wp_ulc_11_options wp_ulc_11_postmeta wp_ulc_11_posts wp_ulc_11_terms wp_ulc_11_term_relationships wp_ulc_11_term_taxonomy wp_ulc_13_commentmeta wp_ulc_13_comments wp_ulc_13_links wp_ulc_13_options wp_ulc_13_postmeta wp_ulc_13_posts wp_ulc_13_role_scope_rs wp_ulc_13_terms wp_ulc_13_term_relationships wp_ulc_13_term_taxonomy wp_ulc_13_user2role2object_rs wp_ulc_13_yarpp_keyword_cache wp_ulc_13_yarpp_related_cache wp_ulc_14_commentmeta wp_ulc_14_comments wp_ulc_14_links wp_ulc_14_options wp_ulc_14_postmeta wp_ulc_14_posts 
wp_ulc_14_terms wp_ulc_14_term_relationships wp_ulc_14_term_taxonomy wp_ulc_15_commentmeta wp_ulc_15_comments wp_ulc_15_links wp_ulc_15_options wp_ulc_15_postmeta wp_ulc_15_posts wp_ulc_15_terms wp_ulc_15_term_relationships wp_ulc_15_term_taxonomy wp_ulc_15_yarpp_keyword_cache wp_ulc_15_yarpp_related_cache wp_ulc_15 (4) wp_ulc_16_commentmeta wp_ulc_16_comments wp_ulc_16_links wp_ulc_16_options wp_ulc_16_postmeta wp_ulc_16_posts wp_ulc_16_terms wp_ulc_16_term_relationships wp_ulc_16_term_taxonomy wp_ulc_17_commentmeta wp_ulc_17_comments wp_ulc_17_links wp_ulc_17_options wp_ulc_17_postmeta wp_ulc_17_posts wp_ulc_17_terms wp_ulc_17_term_relationships wp_ulc_17_term_taxonomy wp_ulc_18_commentmeta wp_ulc_18_comments wp_ulc_18_links wp_ulc_18_options wp_ulc_18_postmeta wp_ulc_18_posts wp_ulc_18_terms wp_ulc_18_term_relationships wp_ulc_18_term_taxonomy wp_ulc_19_commentmeta wp_ulc_19_comments wp_ulc_19_links wp_ulc_19_options wp_ulc_19_postmeta wp_ulc_19_posts wp_ulc_19_terms wp_ulc_19_term_relationships wp_ulc_19_term_taxonomy wp_ulc_20_commentmeta wp_ulc_20_comments wp_ulc_20_links wp_ulc_20_options wp_ulc_20_postmeta wp_ulc_20_posts wp_ulc_20_terms wp_ulc_20_term_relationships wp_ulc_20_term_taxonomy wp_ulc_21_commentmeta wp_ulc_21_comments wp_ulc_21_links wp_ulc_21_options wp_ulc_21_postmeta wp_ulc_21_posts wp_ulc_21_terms wp_ulc_21_term_relationships wp_ulc_21_term_taxonomy wp_ulc_22_commentmeta wp_ulc_22_comments wp_ulc_22_links wp_ulc_22_options wp_ulc_22_postmeta wp_ulc_22_posts wp_ulc_22_terms wp_ulc_22_term_relationships wp_ulc_22_term_taxonomy wp_ulc_blogs wp_ulc_blog_versions wp_ulc_commentmeta wp_ulc_comments wp_ulc_fb_friends wp_ulc_fb_lastlogin wp_ulc_groups_rs wp_ulc_links wp_ulc_options wp_ulc_postmeta wp_ulc_posts wp_ulc_registration_log wp_ulc_signups wp_ulc_site wp_ulc_sitemeta wp_ulc_terms wp_ulc_term_relationships wp_ulc_term_taxonomy wp_ulc_user2group_rs wp_ulc_usermeta wp_ulc_users I'm curious if I remove wp_ulc_ from the head of all tables (wp_13 
being the exception), and replace all instances of wp_ulc_ inside the tables, delete all tables except one site at a time, and load those on top of a fresh install if I will be able to keep plugin data and users? This is extremely confusing to untangle and I'm not sure the best way to proceed in separating each site from the MU install. Advice would be (extremely) appreciated. A: From your description, combine the best parts of #1 and #2. Import the previous site's XML/theme, and do a SQL INSERT for all data in your 'users'/'plugin' WordPress Tables. There might also be already-developed tool to clean out old versions of posts and other WP content that are filling your databases.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: using nsmutabledata to initialize string I'm trying to get my data retrieved from a socket into an NSMutableArray. However, the examples and tutorials I found showed it going into a string first, which is fine, I can parse it out from there, but I can't even get this string thing working. case NSStreamEventHasBytesAvailable: { if(!rawData) { rawData = [[NSMutableData data] retain]; } uint8_t buf[1024]; unsigned int len = 0; len = [(NSInputStream *)theStream read:buf maxLength:1024]; if(len) { [rawData initWithBytes:buf length:len]; int bytesRead; bytesRead += len; [self messageReceived:rawData]; } else { NSLog(@"no buffer!"); } NSString *str = [[NSString alloc] initWithData:rawData encoding:NSUTF8StringEncoding]; NSLog(@"data buffer: %@ |~|string buffer%@",rawData,str); [str release]; break; } But as you will see from the output below, the string never gets any of the data (well, actually I think it's an encoding problem, and so I think it just looks empty): 2011-09-27 13:14:06.356 Cameleon[30095:207] data buffer: <0f000102> |~|string buffer 2011-09-27 13:14:06.359 Cameleon[30095:207] data buffer: <02000400 000003> |~|string buffer 2011-09-27 13:14:06.458 Cameleon[30095:207] data buffer: <05000500 00020300> |~|string buffer 2011-09-27 13:14:06.659 Cameleon[30095:207] data buffer: <05000b00 0008080e 13163809 2711> |~|string buffer 2011-09-27 13:14:06.663 Cameleon[30095:207] data buffer: <05000700 00040101 005a> |~|string buffer I want the string buffer to mirror the values of the data buffer, or an array with each byte of the data buffer. ANSWER: case NSStreamEventHasBytesAvailable: { if(!rawData) { rawData = [[NSMutableData data] retain]; } uint8_t buf[1024]; unsigned int len = 0; len = [(NSInputStream *)theStream read:buf maxLength:1024]; if(len) { [rawData initWithBytes:buf length:len]; } else { NSLog(@"no buffer!"); } const uint8_t *bytes = [rawData bytes]; NSMutableArray *mutableBuffer = [[NSMutableArray alloc] initWithCapacity:len]; for (int i =0; i < [rawData length]; 
i++) { [mutableBuffer addObject:[NSString stringWithFormat:@"%02X", bytes[i]]]; } [self gateKeeper:mutableBuffer]; [mutableBuffer release]; break; A: Your code has several problems. First off, this is not the usual pattern for allocating an NSData object: rawData = [[NSMutableData data] retain]; Although technically correct in terms of memory management, it's non-idiomatic and results in an unnecessary pair of autorelease and retain messages getting sent. It should instead be this: rawData = [[NSMutableData alloc] init]; Secondly, this code is useless: int bytesRead; bytesRead += len; You're declaring a variable, failing to initialize it, adding len to it (which is technically Undefined Behavior, but on x86 this will be harmless), and then doing nothing with it. You probably want to use a longer-lived variable declared outside of this block and initialize it properly. Finally, the actual cause of your problem is that the data you're receiving is not UTF-8 text. It's some kind of binary data with embedded NUL characters (the zero bytes). When these get converted to strings, the NULs signal termination of the string, so nothing after them gets printed. Just keep the data you have as an NSData, don't bother trying to convert it to a string if it's not actually textual data. What sort of data are you dealing with? Where is it coming from?
{ "language": "en", "url": "https://stackoverflow.com/questions/7573862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Change the Header Text of Autogenerated Databound Column in RadGrid How can I change the header text of autogenerated data-bound columns in RadGrid? I am loading a dataset whose columns are autogenerated (Wgt1,Wgt2,Wgt3.......). I want these column headers as Wgt | abc | Wgt | edg | Wgt |....... which at the moment is coming out as Wgt1 | abc | Wgt2 | edg | Wgt3 |....... I tried If (TypeOf e.Item Is GridDataItem) Then For Each column1 As GridColumn In e.Item.OwnerTableView.RenderColumns Dim dataItem As GridDataItem = DirectCast(e.Item, GridDataItem) If column1.HeaderText = "Wgt1" Then dataItem("Wgt1").Text = "Wgt" End If Next End If But this is changing the column data and not the header text. A: protected void RadGrid2_NeedDataSource(object sender, Telerik.Web.UI.GridNeedDataSourceEventArgs e) { dynamic data = new[] { new { ID = 1, Name = "Name1"}, new { ID = 2, Name = "Name2"}, new { ID = 3, Name = "Name3"}, new { ID = 4, Name = "Name4"}, new { ID = 5, Name = "Name5"} }; RadGrid2.DataSource = data; } protected void RadGrid2_ColumnCreated(object sender, Telerik.Web.UI.GridColumnCreatedEventArgs e) { if (e.Column.UniqueName == "Name") { e.Column.HeaderText = "Jayesh"; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7573863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Adobe Flex PopUpManager -- multiple instances of a TitleWindow opened Setup: My Flex application is one consisting of several "subapps". Basically, the main application area is an ApplicationControlBar with buttons for each of the subapps. The rest of the area is a canvas where the subapps are displayed. Only one subapp is visible at a time. When switching between subapps, we do a canvas.removeAllChildren(), then canvas.addChild(subAppSwitchedTo). It's essentially a manual implementation of a ViewStack (the pros and cons of which are not the topic of this, so refrain from commenting on this). Problem: In one of my subapps (let's say subapp "A"), I have a search function where results are displayed in a TitleWindow that gets popped up. Workflow is like enter search criteria, click search button, TitleWindow pops up with results (multiple selection datagrid), choose desired result(s), click OK, popup goes away (PopUpManager.removePopUp), and continue working. This all works fine. The problem is if I switch to a different subapp (say "B" -- where A gets removeAllChildren()'d and B gets added), then switch back to A and search again, when the results TitleWindow pops open, there will be TWO stacked on top of each other. If I continue to navigate away and back to A, every time I search, there will be an additional popup in the "stack" of popups (one for each time A gets addChild()'d). Has anyone else experienced this? I'm not sure what to do about it and it's causing a serious usability bug in my application. Does this ring any bells to anyone? It's like I somehow need to flush the PopUpManager or something (even though I'm correctly calling removePopUp() to remove the TitleWindow). Please help! 
EDIT Flex SDK = 4.5.1 // Subapp "A" if (!certificateSearchTitleWindow) { certificateSearchTitleWindow = new CertificateSearchTitleWindow; certificateSearchTitleWindow.addEventListener("searchAccept", searchOKPopupHandler); certificateSearchTitleWindow.addEventListener("searchCancel", searchClosePopupHandler); } PopUpManager.addPopUp(certificateSearchTitleWindow, this, true); A: My guess is that the popup is removed from the main display list when you remove its parent (this in the PopUpManager.addPopUp() method), but not from its parent display list. Why don't you listen, in your subapps, to the Event.REMOVED event, and then remove your popup? That would be: private var pp:CertificateSearchTitleWindow; private function onCreationComplete():void { addEventListener(Event.REMOVED, onRemoved); } private function addPopUp():void { if (!pp) { pp = new CertificateSearchTitleWindow(); PopUpManager.addPopUp(pp, this, true); } } private function onRemoved(event:Event):void { if (pp) { PopUpManager.removePopUp(pp); pp = null; } } A: Thank you to those who gave suggestions. It turned out I was re-registering an event listener over and over. I am using a singleton to act as "shared memory" between the subapps. I was setting singleton.addEventListener(someType, listener) in subapp A's creationComplete callback. So every time I navigated back to A, the creationComplete was running and re-adding this listener. After the search, the listener method (that opened the popup) was being called multiple times, i.e., as many times as the event had been added. xref: http://forums.adobe.com/message/3941163
{ "language": "en", "url": "https://stackoverflow.com/questions/7573864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: facebook: how to grant a website permanent access to your wall? I am creating a website and I would like it to display the posts present on the wall of my own Facebook profile page. These posts must be visible to any user visiting my site. Having read various articles online regarding the topic of the Facebook API, I had thought I had found the correct solution. The following steps are what I have done thus far: * *Signed in as a developer on Facebook (I am only allowed a max of 2 URLs in the post, hence I have omitted the URL) *Set up a new app, which gave me an App ID and an App Secret. *Generated my access token at the following URL: https://graph.facebook.com/oauth/access_token?grant_type=client_credentials&scope=offline_access&client_id=APP_ID&client_secret=APP_SECRET Where APP_ID and APP_SECRET are the App ID and App Secret from step 2. I also added the 'scope' parameter and set it to 'offline_access', as I believe this is what would allow my website to access my Facebook wall information without needing me to be currently logged into Facebook. When I access my test page at [http://pauldailly.javaprovider.net/danceclass][1] (I set this URL to be my 'site URL' when creating my app in step 2. Similarly, I set my 'Site Domain' to be the same as my site URL, without the '/danceclass'), I get the information message 'Paul Dailly has not shared any information' and it displays my Facebook profile image. I should point out that this test page is using the Facebook Wall jQuery plugin to make the call to Facebook. I have supplied the plugin my Facebook page's profile ID and the access token that I generated above. My question is, do I need to take some extra step in order to allow my site to now access specific pieces of data from my Facebook account, such as my wall posts, now that I appear to have a valid access token? A: First, an access token can expire in many ways: 1. Time. 2. When you log out. 3. When you change permissions, etc. 
Second, yes, you need different permissions from the user (even if the user is the app admin) to retrieve different bits of information from his Facebook profile. Also, make sure you construct your meta tags correctly in your website, like putting og:admins, og:app_id, etc. You have to supply the app_id when initiating the Facebook (php/js)-sdk in your website too.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: pass the code of java to work for java/android I have 3 .java files: main.java, separetdat.java and token.java main.java import java.util.*; public class Main { public static void main(String[] args) { Tokenizer ob1=new Tokenizer(); LinkedList listaDeCoord=new LinkedList(); SepararDatos oSepararDatos=new SepararDatos(); ob1.leerPath(); ob1.getDataFromFile(); listaDeCoord=ob1.listaTokens; listaDeCoord=oSepararDatos.getLinkedList(listaDeCoord); double[] vectorDeDatos; //= new double[listaDeCoord.size()]; double[] vectorIndex; vectorDeDatos=oSepararDatos.getArrayData(listaDeCoord); vectorIndex=oSepararDatos.getArrayIndex(vectorDeDatos); } } separetdat.java: import java.util.*; public class SepararDatos { public void SepararDatos() { } public double[] getArrayIndex(double[] vector) { double[] vectorIndex=new double[vector.length/3]; int index=0,i=0; while (index<vector.length) { vectorIndex[i]=vector[index]; index=index+3; i++; } return vectorIndex; } public LinkedList getLinkedList(LinkedList lista) { String linea=""; StringTokenizer st; String palabra=""; LinkedList palabras=new LinkedList(); //lista=new LinkedList(); int j=0; char c; int indice=0; for (int i=0;i<lista.size();i++) { linea=lista.get(i).toString(); st=new StringTokenizer(linea); while (st.hasMoreTokens()) { palabras.add((st.nextToken())); } } return palabras; } public double[] getArrayData(LinkedList lista) { double[] vectorPalabras=new double[lista.size()]; for (int i=0;i<vectorPalabras.length;i++) { vectorPalabras[i]=Double.parseDouble(lista.get(i).toString()); } return vectorPalabras; } } token.java import java.io.*; import java.util.*; public class Tokenizer { String strFile; BufferedReader br; String strLine; StringTokenizer st = null; int lineNumber = 0, tokenNumber = 0; LinkedList lista=new LinkedList(); LinkedList listaTokens=new LinkedList(); public void Tokenizer() { } public void leerPath () { strFile = "path of the txt file.txt"; try { br = new BufferedReader( new FileReader(strFile)); } 
catch(Exception e) { System.out.println("Exception while reading csv file: " + e); } } public double[] getDataFromFile() { double[] vector=new double[lista.size()]; try { while( (strLine = br.readLine()) != null) { lineNumber++; //break comma separated line using "," //st = new StringTokenizer(strLine, ","); st = new StringTokenizer(strLine); while(st.hasMoreTokens()) { //display csv values System.out.println("Line # " + lineNumber + ", Token # " + tokenNumber + ", Token : "+ st.nextToken()); tokenNumber++; } //reset token number tokenNumber = 0; lista.add(strLine); } } catch(Exception e) { System.out.println("Exception while reading csv file: " + e); } listaTokens=lista; for (int i=0;i<vector.length;i++) { vector[i]=Double.parseDouble(lista.get(i).toString()); } return vector; } } When I create the Java project, it works perfectly, but when I create an Android project, it does not work. What do I need to change in the original Java code so that it works on Android? I'm sorry if my question sounds a bit dumb; basically, what I'd like to know is how to call the methods from the other 2 classes in an Activity, since the Activity has to call into the other 2 classes. A simple example would be: say activity.java has to call from calculate.java the method that prints out the result, so activity.java calls the result from a sum, calls the result from a multiply, and prints the results; and say name.java does a sum in one function and returns the sum result, while a second function multiplies 2 numbers and returns the multiply result. A: I've never liked answers on here that are simply "Read the documentation" followed by links, but I think that's the best answer in this case. I strongly recommend reading this doc and this doc on application fundamentals to get an idea of how the Android framework works, as well as following the tutorials on the Android dev site. I'm sure there is nothing wrong with your code, but the Android framework doesn't work like plain Java does. 
A: The code you posted is plain old Java SE code that has "System.out.println()" statements to print to the console. How were you expecting to run this on Android? Android apps are written in Java, but they have to be built as an Android application. You can't just take random Java code and expect it to magically run on an Android device. StackOverflow is also for providing information to help you program... it's not for people to "give you the code", and it doesn't seem like you've done enough research to even know how to use advice people could give you... There are tons of resources to start learning Android, just search for them on the web. But at least to start you off, here's the Android Dev Guide. A: The structure of an Android project is entirely different from a regular Java project. Android has no main, instead it has a main Activity. There are also many other differences; basically too many to cover in a single answer. You should check out Google's Android Developer's Guide. Then check out the Notepad tutorial. This will give you an idea as to how you can convert your project over to Android. The Notepad tutorial especially, will give you an idea as to how an Android project is structured, and how the various components communicate with each other.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Animation when changing textview I currently use a major workaround and have two activities switching each time I change the text on a TextView. I am using this code: Weeklytext.this.overridePendingTransition( R.anim.slide_in_left, R.anim.slide_out_right ); Is it possible to do this in one Activity? It's kind of annoying having two Activities with the exact same content just so that I can use animations ;) Thanks! Please ask if you don't understand my Question! A: You can use a TextSwitcher to have animations when changing the text in a TextView. A TextSwitcher is just a special kind of ViewSwitcher, and as such, it lets you provide two Views from which to animate between. When you call setText(), it updates the text of the next TextView and then animates that one into the screen, and the current one out. The old TextView is then designated as the 'next' TextView and the process repeats. You can specify the Views using setFactory(...) or just simply add two TextViews to it with addView(...). // get a TextSwitcher view; instantiate in code or resolve from a layout/XML TextSwitcher textSwitcher = new TextSwitcher(context); // specify the in/out animations you wish to use textSwitcher.setInAnimation(context, R.anim.slide_in_left); textSwitcher.setOutAnimation(context, R.anim.slide_out_right); // provide two TextViews for the TextSwitcher to use // you can apply styles to these Views before adding textSwitcher.addView(new TextView(context)); textSwitcher.addView(new TextView(context)); // you are now ready to use the TextSwitcher // it will animate between calls to setText textSwitcher.setText("hello"); ... textSwitcher.setText("goodbye");
{ "language": "en", "url": "https://stackoverflow.com/questions/7573870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Is it possible to add a .java file to my eclipse folder and have it appear in the output directory without being compiled? Let's say I have a folder that contains various types of files. Some of them are regular .java files that are to be compiled, others are in their own format, and others are .java files that are not to be compiled (but I want them to appear in the /bin/ folder). Is it possible to accomplish that in Eclipse? I've tried taking it out of the build path, but then it won't appear in the output folder :( The following screenshot depicts an example situation: I want Tests.java, X.java and Value.java to be compiled to the output folder bin/creates_java_contracts_file. In that same folder, I'll want to have rfn.rfn, Value.spc and X.spc, plus ValueContractsClass.java (uncompiled). A: Another alternative is to either copy the files over to bin yourself, or change the file extension from .java to whatever you want (say, .njava or .foo). I'm guessing you want to retain the .java files for some specific reason? UPDATE * *Exclude them from the build path, as you tried *Manually copy the .java files to your output folder This prevents the files from being compiled, and they will survive a "Clean..." build, by my testing, meaning they don't get deleted when Eclipse scrubs the output folder. A: If you are using ANT, you can do a simple file copy.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MySQL implementation of ray-casting Algorithm? We need to figure out a quick and fairly accurate method for point-in-polygon for lat/long values and polygons over google maps. After some research - came across some posts about mysql geometric extensions, and did implement that too - SELECT id, Contains( PolyFromText( 'POLYGON(".$polygonpath.")' ) , PointFromText( concat( \"POINT(\", latitude, \" \", longitude, \")\" ) ) ) AS CONTAINS FROM tbl_points That did not, however, work with polygons made up of a large number of points :( After some more research - came across a standard algorithm called the ray-casting algorithm, but before trying to develop a query for that in MySQL, wanted to check whether someone had already been through that or came across a useful link which shows how to implement the algorithm in MySQL / SQL-server. So, cutting it short - the question is: Can anyone please provide a MySQL/SQL-server implementation of the ray-casting algorithm? Additional detail: * *Polygons may be concave, convex, or complex. *Targeting quick execution over 100% accuracy. A: I would write a custom UDF that implements the ray-casting algorithm in C or Delphi or whatever high level tool you use: Links for writing a UDF Here's source code for a MySQL GIS implementation that looks up points on a sphere (use it as a template to see how to interact with the spatial datatypes in MySQL). 
http://www.lenzg.net/archives/220-New-UDF-for-MySQL-5.1-provides-GIS-functions-distance_sphere-and-distance_spheroid.html From the MySQL manual: http://dev.mysql.com/doc/refman/5.0/en/adding-functions.html UDF tutorial for MS Visual C++ http://rpbouman.blogspot.com/2007/09/creating-mysql-udfs-with-microsoft.html UDF tutorial in Delphi: Creating a UDF for MySQL in Delphi Source-code regarding the ray-casting algorithm Pseudo-code: http://rosettacode.org/wiki/Ray-casting_algorithm Article in drDobbs (note the link to code at the top of the article): http://drdobbs.com/cpp/184409586 Delphi (actually FreePascal): http://www.cabiatl.com/mricro/raycast/ A: Just in case, one MySQL function which accepts MULTIPOLYGON as an input: http://forums.mysql.com/read.php?23,286574,286574 DELIMITER $$ CREATE DEFINER=`root`@`localhost` FUNCTION `GISWithin`(pt POINT, mp MULTIPOLYGON) RETURNS int(1) DETERMINISTIC BEGIN DECLARE str, xy TEXT; DECLARE x, y, p1x, p1y, p2x, p2y, m, xinters DECIMAL(16, 13) DEFAULT 0; DECLARE counter INT DEFAULT 0; DECLARE p, pb, pe INT DEFAULT 0; SELECT MBRWithin(pt, mp) INTO p; IF p != 1 OR ISNULL(p) THEN RETURN p; END IF; SELECT X(pt), Y(pt), ASTEXT(mp) INTO x, y, str; SET str = REPLACE(str, 'POLYGON((',''); SET str = REPLACE(str, '))', ''); SET str = CONCAT(str, ','); SET pb = 1; SET pe = LOCATE(',', str); SET xy = SUBSTRING(str, pb, pe - pb); SET p = INSTR(xy, ' '); SET p1x = SUBSTRING(xy, 1, p - 1); SET p1y = SUBSTRING(xy, p + 1); SET str = CONCAT(str, xy, ','); WHILE pe > 0 DO SET xy = SUBSTRING(str, pb, pe - pb); SET p = INSTR(xy, ' '); SET p2x = SUBSTRING(xy, 1, p - 1); SET p2y = SUBSTRING(xy, p + 1); IF p1y < p2y THEN SET m = p1y; ELSE SET m = p2y; END IF; IF y > m THEN IF p1y > p2y THEN SET m = p1y; ELSE SET m = p2y; END IF; IF y <= m THEN IF p1x > p2x THEN SET m = p1x; ELSE SET m = p2x; END IF; IF x <= m THEN IF p1y != p2y THEN SET xinters = (y - p1y) * (p2x - p1x) / (p2y - p1y) + p1x; END IF; IF p1x = p2x OR x <= xinters THEN SET counter = 
counter + 1; END IF; END IF; END IF; END IF; SET p1x = p2x; SET p1y = p2y; SET pb = pe + 1; SET pe = LOCATE(',', str, pb); END WHILE; RETURN counter % 2; END A: In reply to zarun function for finding lat/long within polygon. I had a property table having lat/long information. So I had to get the records whose lat/long lies within polygon lats/longs (which I got from Google API). At first I was dumb how to use the Zarun function. So here is the solution query for it. * *Table: properties *Fields: id, latitude, longitude, beds etc... *Query: SELECT id FROM properties WHERE myWithin( PointFromText(concat( "POINT(", latitude, " ", longitude, ")")), PolyFromText('POLYGON((37.628134 -77.458334,37.629867 -77.449021,37.62324 -77.445416,37.622424 -77.457819,37.628134 -77.458334))' ) ) = 1 limit 0,50; Hope it will save time for dumbs like me ;) A: The following function (MYSQL version of Raycasting algorithm) rocked my world : CREATE FUNCTION myWithin(p POINT, poly POLYGON) RETURNS INT(1) DETERMINISTIC BEGIN DECLARE n INT DEFAULT 0; DECLARE pX DECIMAL(9,6); DECLARE pY DECIMAL(9,6); DECLARE ls LINESTRING; DECLARE poly1 POINT; DECLARE poly1X DECIMAL(9,6); DECLARE poly1Y DECIMAL(9,6); DECLARE poly2 POINT; DECLARE poly2X DECIMAL(9,6); DECLARE poly2Y DECIMAL(9,6); DECLARE i INT DEFAULT 0; DECLARE result INT(1) DEFAULT 0; SET pX = X(p); SET pY = Y(p); SET ls = ExteriorRing(poly); SET poly2 = EndPoint(ls); SET poly2X = X(poly2); SET poly2Y = Y(poly2); SET n = NumPoints(ls); WHILE i<n DO SET poly1 = PointN(ls, (i+1)); SET poly1X = X(poly1); SET poly1Y = Y(poly1); IF ( ( ( ( poly1X <= pX ) && ( pX < poly2X ) ) || ( ( poly2X <= pX ) && ( pX < poly1X ) ) ) && ( pY > ( poly2Y - poly1Y ) * ( pX - poly1X ) / ( poly2X - poly1X ) + poly1Y ) ) THEN SET result = !result; END IF; SET poly2X = poly1X; SET poly2Y = poly1Y; SET i = i + 1; END WHILE; RETURN result; End; Add DELIMITER ;; before the function as required. 
The usage for the function is:

SELECT myWithin(point, polygon) AS result;

where

point = Point(lat, lng)
polygon = Polygon(lat1 lng1, lat2 lng2, lat3 lng3, .... latn lngn, lat1 lng1)

Please note that the polygon ought to be closed (normally it is closed if you're retrieving standard KML or Google Maps data, but just make sure it is - note that the lat1 lng1 pair is repeated at the end). I did not have points and polygons in my database as geometric fields, so I had to do something like:

SELECT myWithin(
    PointFromText(concat("POINT(", latitude, " ", longitude, ")")),
    PolyFromText('POLYGON((lat1 lng1, ..... latn lngn, lat1 lng1))')
) AS result

I hope this might help someone.
A: I wanted to use the above myWithin stored function on a table of polygons, so here are the commands to do just that. After importing a shapefile containing polygons into MySQL using ogr2ogr as follows:

ogr2ogr -f "mysql" MYSQL:"mydbname,host=localhost,user=root,password=mypassword,port=3306" -nln "mytablename" -a_srs "EPSG:4326" /path/to/shapefile.shp

you can then use MBRWithin to prefilter your table and myWithin to finish, as follows:

DROP TEMPORARY TABLE IF EXISTS POSSIBLE_POLYS;
CREATE TEMPORARY TABLE POSSIBLE_POLYS(OGR_FID INT, SHAPE POLYGON);
INSERT INTO POSSIBLE_POLYS (OGR_FID, SHAPE)
SELECT mytablename.OGR_FID, mytablename.SHAPE
FROM mytablename
WHERE MBRWithin(@testpoint, mytablename.SHAPE);

DROP TEMPORARY TABLE IF EXISTS DEFINITE_POLY;
CREATE TEMPORARY TABLE DEFINITE_POLY(OGR_FID INT, SHAPE POLYGON);
INSERT INTO DEFINITE_POLY (OGR_FID, SHAPE)
SELECT POSSIBLE_POLYS.OGR_FID, POSSIBLE_POLYS.SHAPE
FROM POSSIBLE_POLYS
WHERE myWithin(@testpoint, POSSIBLE_POLYS.SHAPE);

where @testpoint is created, for example, from:

SET @longitude = 120;
SET @latitude = -30;
SET @testpoint = (PointFromText(concat("POINT(", @longitude, " ", @latitude, ")")));

A: It is now a Spatial Extension as of MySQL 5.6.1 and above. See function_st-contains in the Docs.
A: Here is a version that works with MULTIPOLYGONs (an adaptation of Zarun's one, which only works for POLYGONs).

CREATE FUNCTION GISWithin(p POINT, multipoly MULTIPOLYGON) RETURNS INT(1)
    DETERMINISTIC
BEGIN
  DECLARE n, i, m, x INT DEFAULT 0;
  DECLARE pX, pY, poly1X, poly1Y, poly2X, poly2Y DECIMAL(13,10);
  DECLARE ls LINESTRING;
  DECLARE poly MULTIPOLYGON;
  DECLARE poly1, poly2 POINT;
  DECLARE result INT(1) DEFAULT 0;

  SET pX = X(p);
  SET pY = Y(p);
  SET m = NumGeometries(multipoly);

  WHILE x < m DO
    SET poly = GeometryN(multipoly, x + 1);  -- GeometryN is 1-based in MySQL
    SET ls = ExteriorRing(poly);
    SET poly2 = EndPoint(ls);
    SET poly2X = X(poly2);
    SET poly2Y = Y(poly2);
    SET n = NumPoints(ls);
    SET i = 0;  -- restart the vertex counter for each polygon in the collection
    WHILE i < n DO
      SET poly1 = PointN(ls, (i + 1));
      SET poly1X = X(poly1);
      SET poly1Y = Y(poly1);
      IF ( ( ( ( poly1X <= pX ) && ( pX < poly2X ) ) || ( ( poly2X <= pX ) && ( pX < poly1X ) ) )
           && ( pY > ( poly2Y - poly1Y ) * ( pX - poly1X ) / ( poly2X - poly1X ) + poly1Y ) ) THEN
        SET result = !result;
      END IF;
      SET poly2X = poly1X;
      SET poly2Y = poly1Y;
      SET i = i + 1;
    END WHILE;
    SET x = x + 1;
  END WHILE;

  RETURN result;
END;
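Before wiring the ray-casting test into a MySQL stored function, it can help to prototype it in a general-purpose language. Here is a minimal, self-contained sketch in Python; the function name and the toy square polygon are illustrative and do not come from any of the answers above:

```python
def point_in_polygon(x, y, ring):
    """Ray-casting test: ring is a list of (x, y) vertex pairs (open ring is fine)."""
    inside = False
    n = len(ring)
    j = n - 1  # previous vertex index; pairs (j, i) walk every edge once
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[j]
        # Edge crosses the horizontal ray from (x, y) iff the endpoints
        # straddle y; horizontal edges fail this test, avoiding a zero division.
        if (y1 <= y) != (y2 <= y):
            xinters = (y - y1) * (x2 - x1) / (y2 - y1) + x1
            if x < xinters:
                inside = not inside
        j = i
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

An odd number of edge crossings means the point is inside, which is exactly the `counter % 2` / `SET result = !result` logic in the SQL versions above.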
{ "language": "en", "url": "https://stackoverflow.com/questions/7573881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Un-Nesting List Iterations for Performance I have several Lists that I need to iterate through in order to perform a calculation. In summary, List1 is a List of roadway start and end points (ids) and List2 is a List of individual speed samples for those endpoints (there are multiple speed samples for each set of endpoints). List1 is defined like this:

class RoadwaySegment
{
    public int StartId { get; set; }
    public int EndId { get; set; }
}

List2 is defined like this:

class IndividualSpeeds
{
    public int StartHour { get; set; }
    public int StartMin { get; set; }  // either 0, 15, 30, or 45
    public int Speed { get; set; }
    public int StartId { get; set; }
    public int EndId { get; set; }
}

List3 is the result of my calculation and will contain the average speeds for the roadway segments in List1 for each 15-minute period of the day. List3 looks like this:

class SummaryData
{
    public string SummaryHour { get; set; }
    public string SummaryMin { get; set; }
    public int StartId { get; set; }
    public int EndId { get; set; }
    public int AvgSpeed { get; set; }
}

Currently, to generate List3, I iterate over List1, then over each of the 24 hours of the day, then over each 15-minute interval of the hour. For each of these iterations, I check to see if the individual speed sample in List2 should be included in the average speed calculation for my roadway segment.
So, it looks something like this:

var summaryList = new List<SummaryData>();
foreach (var segment in RoadwaySegments)
{
    for (int startHour = 0; startHour < 24; startHour++)
    {
        for (int startMin = 0; startMin < 60; startMin += 15)
        {
            int totalSpeeds = 0;
            int numSamples = 0;
            foreach (var speedSample in IndividualSpeeds)
            {
                if ((segment.StartId == speedSample.StartId) && (segment.EndId == speedSample.EndId) &&
                    (speedSample.StartHour == startHour) && (speedSample.StartMin == startMin))
                {
                    if (speedSample.Speed > 0)
                    {
                        totalSpeeds += speedSample.Speed;
                        numSamples += 1;
                    }
                }
            }
            SummaryData summaryItem = new SummaryData
            {
                SummaryHour = startHour.ToString(),  // SummaryHour/SummaryMin are strings
                SummaryMin = startMin.ToString(),
                StartId = segment.StartId,
                EndId = segment.EndId,
                AvgSpeed = numSamples > 0 ? totalSpeeds / numSamples : 0  // guard against divide-by-zero
            };
            summaryList.Add(summaryItem);
        }
    }
}

The issue with this code is that List1 might have a hundred roadway segments but List2 can contain a million or more speed sample records, so sub-iterations of the list are very time consuming. Is there a way to use GroupBy/LINQ to improve the performance and readability of this code? Note the condition for including a speed in the average--it has to be greater than 0.
A: This is untested, but I think it will work:

from segment in RoadwaySegments
join sample in IndividualSpeeds
    on new { segment.StartId, segment.EndId }
    equals new { sample.StartId, sample.EndId }
    into segmentSamples
from startHour in Enumerable.Range(0, 24)
from startMin in new[] { 0, 15, 30, 45 }
let startMinSamples = segmentSamples
    .Where(sample => sample.Speed > 0)
    .Where(sample => sample.StartHour == startHour)
    .Where(sample => sample.StartMin == startMin)
    .Select(sample => sample.Speed)
    .ToList()
select new SummaryItem
{
    StartId = segment.StartId,
    EndId = segment.EndId,
    SummaryHour = startHour,
    SummaryMin = startMin,
    AvgSpeed = startMinSamples.Count <= 2 ? 0 : startMinSamples.Average()
};

The main idea is to iterate the segment and sample lists once, producing a group of samples for each segment.
Then, for each of those groups, you generate the hours and minutes and a summary item for each combination. Finally, you calculate the average speed of all non-zero samples in the hour/minute combination. This isn't quite ideal because you still iterate the segment's samples 24 * 4 times, but it is much better than iterating the entire sample list. This should get you on the right path, and hopefully you can optimize that last bit further.
A: If you are using .NET 4, I would suggest parallelizing this with Parallel LINQ. This is embarrassingly parallel over RoadwaySegments. Secondly, instead of your nested iteration over the child lists, I would recommend iterating that list once and creating a Dictionary of List<IndividualSpeeds> with a composite key of StartId, EndId, StartHour, and StartMin. Doing a lookup in this Dictionary will be much faster than re-iterating over the list for each RoadwaySegment.
A: The following is supplementary to Chris's and Bryan's answers: since you're saying there could be millions of speed sample records, you don't want to iterate them more than the minimum possible - i.e. you should group them (using the GroupBy or Join operators) according to their start hour and minute. Then you could just iterate each group, add each sample record into some dictionary of RoadwaySegments (something like Dictionary<RoadwaySegment, IEnumerable<IndividualSpeeds>>), and create your summary items from this dictionary.
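The composite-key grouping idea from the answers above can be sketched in a language-agnostic way; here is a minimal illustration in Python (the field layout mirrors IndividualSpeeds, but all names and sample values are made up for the example):

```python
from collections import defaultdict

# Each sample: (start_id, end_id, hour, minute, speed)
samples = [
    (1, 2, 0, 0, 50), (1, 2, 0, 0, 60), (1, 2, 0, 15, 40), (3, 4, 0, 0, 0),
]

# One pass over the big list: bucket speeds by the composite key
groups = defaultdict(list)
for start_id, end_id, hour, minute, speed in samples:
    if speed > 0:  # only positive speeds count toward the average
        groups[(start_id, end_id, hour, minute)].append(speed)

# Constant-time lookup per (segment, quarter-hour) instead of a rescan
averages = {key: sum(v) / len(v) for key, v in groups.items()}
print(averages[(1, 2, 0, 0)])  # 55.0
```

Groups with only zero-speed samples simply never appear, which matches the "speed must be greater than 0" condition in the question.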
{ "language": "en", "url": "https://stackoverflow.com/questions/7573884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How does Android test network connectivity (system level) This is not about Java-level code. What I'm looking for is how Android tests connectivity at a low level. For example, when we call getActiveNetwork(), which low-level (maybe C++ or even C) code is being called, and how does it work? Does it ping an external address (which is highly unlikely, just guessing)? Please try to be specific. Thanks,
A: You need to research the TCP/IP and OSI models for networking. In these models it is the lowest level in the network stack that is responsible for maintaining the data link, and I believe this is possibly outside the scope of the operating system, i.e. it is all done at the hardware level. I would assume that the Android OS merely requests a network interface to 'connect' or 'disconnect' and probably provides hooks for the lower data-link layer to call should the network status change. You really have to consider what you mean by 'network connectivity'. Do you mean being able to access websites, or merely that a network interface has a data link... they are not necessarily the same thing.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Executing Sql Command only once I have a Database DB with a table name population but with no Primary Key. Means it can have duplication of data. For example: I have 5 families (f1,f2,f3,f4,f5) with different members inside it (and members may have the same name). So I can have exactly the same type of record in more than 1 row. Now I want to edit just 1 member of the family, but it is editing all the duplicate records. What I want to do is, I just want to update my database once and only once. In other words, I want the UPDATE command to execute just once. How to do it? I am using Sql Server Express, VS2010, C# 4.0 (if this info matters). I am pretty sure my problem may sound stupid to some people (probably many). But it is just a dummy of my problem. Any help or suggestion will be greatly appreciated. Thanks
A: I know it's not exactly what you're asking but seriously, the easiest option is to alter the database to have a primary key and use that. Perhaps an Identity key.... Without that, you could update just one record, but you have no guarantee of which record. This is why primary keys are such a fundamental concept. I suppose this doesn't really matter if they are all the same, so.... If you really want to proceed without a primary key, you need to use the TOP keyword as shown here: How do I update n rows in a table? Set it to UPDATE TOP 1 ....
A: Add an identity column, ID int with auto increment on. Then update using the ids.

CREATE TABLE dbo.Family (
    Id int NOT NULL IDENTITY (1, 1),
    FamilyName varchar(50) NULL
)

UPDATE dbo.Family
SET FamilyName = 'xxx'
WHERE Id = y

A: In case you can't add an identity column for some reason:

UPDATE TOP (1) Families
SET Whatever = 'Your Value', ...
WHERE <Your where clause>

A: The real answer is: Fix your database design. Every record should have some unique identifier. Add an auto-increment field or something. To directly answer your question, you can say "update top (1) ..." to only update one record.
But without some unique identifier, how do you know which record to update? The program will essentially update a random record. Which takes me back to point 1. Edit: Whoops, my original answer was for a different engine. Corrected above.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Core Data NSPredicate casting key value I have a data model which has values of type id that I was planning on casting appropriately when needed. Is it possible for me to cast these as strings and compare them to strings from a UISearchBar using NSPredicate or do I have to use another method? Maybe something like this: NSPredicate * predicate; predicate = [NSPredicate predicateWithFormat:@"CAST(%K) contains[cd] %@", employeeID , theSearchBar.text]; A: No. The CAST() function doesn't work that way. I think you just have to assume that the id returned from -employeeID is comparable to a string.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: With SharePoint/InfoPath: Is it possible to switch default views, and then save this setting with a button? So let's say that I have two views, one that is the default and another that can be triggered with a button. Is it possible to switch views, then have the view that you switched to become the default view? So that if the form were opened again it would still be on the view you switched to? If not, is there a way to have a part of a form read-only to a certain group in SharePoint and editable to another group? Or even better, could I have an email sent out with different views to different people? Thanks!
A: No, it's not possible to set the default view with a button/code. You'd want to use a different approach (something similar to the State Machine Pattern). Create a "State" field that represents the actual state of the form (usually the same as the views). So when the button is pressed, it sets the State field to "View2" and switches the view to View2. In the form load rules (Data - Form Load) you create a new Rule that changes the view based on the value of the State field. Yes, setting different sections of the form to read-only for specific groups is also possible; however, it requires custom code. For each section create a new field (like "Section1Enabled"). Then create a new conditional formatting rule that disables Section1 if Section1Enabled is false. In your form's load event, you add code that decides whether the current user is in the specific group or not, and based on that you set the value of Section1Enabled. You can do this with SharePoint's UserGroups.asmx or with the SharePoint Server Object Model (google should help you out with that).
{ "language": "en", "url": "https://stackoverflow.com/questions/7573889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Copy text from listview into another winform I am really stuck with this so I hope someone can help. I have 2 WinForms: one has a ListView and the other has a TextBox. I want to check which item is checked in the ListView and copy its text into the second form. I have tried this code but it won't work; any help is greatly appreciated!

// 1st form
private void button5_Click(object sender, EventArgs e) // Brings up the second form
{
    Form4 editItem = new Form4();
    editItem.Show();
}

public string GetItemValue()
{
    for (int i = 0; i < listView1.Items.Count; i++)
    {
        if (listView1.Items[i].Checked == true)
        {
            return listView1.Items[i].Text;
        }
    }
    return "Error";
}

// 2nd form
private void Form4_Load(object sender, EventArgs e)
{
    Form1 main = new Form1();
    textBox1.Text = main.GetItemValue();
}

A: You are creating a new Form1 inside of Form4 after it has been loaded. You need a reference to the original Form1. This can be accomplished in several ways; probably the easiest is passing a reference into the Form4 constructor.

// Form 1
// This button creates a new "Form4" and shows it
private void button5_Click(object sender, EventArgs e)
{
    Form4 editItem = new Form4(this);
    editItem.Show();
}

public string GetItemValue()
{
    for (int i = 0; i < listView1.Items.Count; i++)
    {
        if (listView1.Items[i].Checked == true)
        {
            return listView1.Items[i].Text;
        }
    }
    return "Error";
}

// Form 2 (Form4)
// Private member variable / reference to a Form1
private Form1 _form;

// Form4 Constructor: Assign the passed-in "Form1" to the member field
public Form4(Form1 form)
{
    InitializeComponent(); // the designer-generated setup still has to run
    this._form = form;
}

// Take the member "Form1," get the item value, and write it in the text box
private void Form4_Load(object sender, EventArgs e)
{
    textBox1.Text = this._form.GetItemValue();
}

A: Try this way.

FORM 1

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace WindowsFormsApplication2
{
    public partial class Form1 : Form
    {
        // Fields
        public List<string> itemTexts = new List<string>(); // must be initialized before Add

        public Form1()
        {
            InitializeComponent();
            // Generate some items
            for (int i = 0; i < 10; i++)
            {
                ListViewItem item = new ListViewItem();
                item.Text = "item number #" + i;
                listView1.Items.Add(item);
            }
        }

        private void button1_Click(object sender, EventArgs e)
        {
            foreach (ListViewItem item in listView1.Items)
            {
                if (item.Checked)
                {
                    itemTexts.Add(item.Text);
                }
            }
            Form2 TextBoxForm = new Form2(itemTexts);
            TextBoxForm.Show();
        }
    }
}

FORM 2

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace WindowsFormsApplication2
{
    public partial class Form2 : Form
    {
        // Fields
        List<string> itemTexts;

        public Form2(List<string> itemTexts)
        {
            InitializeComponent();
            this.itemTexts = itemTexts;
            foreach (string text in itemTexts)
            {
                textBox1.Text += text + Environment.NewLine;
            }
        }
    }
}

A: You only need a couple of changes. No need to add your own way of storing the owner form as this functionality already exists.

private void button5_Click(object sender, EventArgs e) // Brings up the second form
{
    Form4 editItem = new Form4();
    editItem.Show(this); // passes a reference to this form, stored in Owner
}

Then reference it on the other form.

private void Form4_Load(object sender, EventArgs e)
{
    textBox1.Text = ((Form1)Owner).GetItemValue();
}
{ "language": "en", "url": "https://stackoverflow.com/questions/7573890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Entity Framework TypeUsage Object I am running a memory profiler on my application to find a possible memory leak. The number of System.Data.Metadata.Edm.TypeUsage objects is consistently growing, and it looks like this may be the cause of my memory issues. Does anyone know a way of releasing these TypeUsage objects from memory? They look to be internal Entity Framework objects, since I do not have any reference to them in my code. I have confirmed that I have wrapped the context object within a using block, and the memory is being released, but these TypeUsage objects don't want to go away. Any help you can provide would be greatly appreciated.
A: You are probably looking at the 1st-level cache (Change Tracker) that Entity Framework uses underneath. To read more about it, check this out. I'd be surprised if there is a memory leak here; more likely this is just normal behaviour. How much memory do you see leaking? To release the memory, try using another merge option (like NoTracking). The default is AppendOnly, which will hold on to types in memory that you might use again. The NoTracking merge option will go to the database every time and hold nothing in memory. Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Retrieve Spring Security's Authentication, even on public pages with filter="none" Let's say I have a simple page called faq.html. I want this page to be publicly accessible, so I apply the usual Spring Security configuration: <sec:intercept-url pattern="/faq.html" filters="none" /> Let's also say that if the user reaches this page after authenticating, I want to print "Hi Firstname Lastname" on the page. For pages that require authentication, I simply put the result of the following into my ModelMap, and then the names are accessible in my view later: SecurityContextHolder.getContext().getAuthentication().getPrincipal() This doesn't work for faq.html, presumably because when you specify filters="none", then the call to getPrincipal() returns null. (This behavior makes sense since the configuration causes no filters to be applied.) So, instead it seems that I have to do a bunch of the Spring Security stuff manually: public static Authentication authenticate(HttpServletRequest request, HttpServletResponse response, SecurityContextRepository repo, RememberMeServices rememberMeServices) { Authentication auth = SecurityContextHolder.getContext().getAuthentication(); // try to load a previous Authentication from the repository if (auth == null) { SecurityContext context = repo.loadContext( new HttpRequestResponseHolder(request, response)); auth = context.getAuthentication(); } // check for remember-me token if (auth == null) { auth = rememberMeServices.autoLogin(request, response); } return auth; } Is there a better way to do this? For example, it seems like Spring should provide some facility for hooking their API calls in via the original <sec:intercept-url /> config. A: That's the reason not to use filters = "none" for public pages. Use access = "permitAll" instead (or access = "IS_AUTHENTICATED_ANONYMOUSLY, IS_AUTHENTICATED_FULLY, IS_AUTHENTICATED_REMEMBERED" if you don't have use-expressions = "true").
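The accepted answer's point can be written out as namespace configuration. Here is a minimal sketch; the URL patterns and the login/remember-me elements are illustrative placeholders, and the attribute names assume Spring Security's 3.x XML namespace:

```xml
<http use-expressions="true">
    <!-- Public page: the filter chain still runs here, so the
         SecurityContext (and thus getPrincipal()) is populated
         for users who happen to be logged in -->
    <intercept-url pattern="/faq.html" access="permitAll" />
    <!-- Everything else requires an authenticated user -->
    <intercept-url pattern="/**" access="isAuthenticated()" />
    <form-login />
    <remember-me />
</http>
```

With filters="none" the chain is skipped entirely, which is exactly why the question's manual re-authentication code became necessary; permitAll keeps the chain (and the remember-me handling) in place while still allowing anonymous access.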
{ "language": "en", "url": "https://stackoverflow.com/questions/7573899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Counting occurrences of numbers in a CUDA array I have an array of unsigned integers stored on the GPU with CUDA (typically 1000000 elements). I would like to count the occurrence of every number in the array. There are only a few distinct numbers (about 10), but these numbers can span from 1 to 1000000. About 9/10th of the numbers are 0, I don't need the count of them. The result looks something like this: 58458 -> 1000 occurrences 15 -> 412 occurrences I have an implementation using atomicAdds, but it is too slow (a lot of threads write to the same address). Does someone know of a fast/efficient method? A: You can implement a histogram by first sorting the numbers, and then doing a keyed reduction. The most straightforward method would be to use thrust::sort and then thrust::reduce_by_key. It's also often much faster than ad hoc binning based on atomics. Here's an example. A: I suppose you can find help in the CUDA examples, specifically the histogram examples. They are part of the GPU computing SDK. You can find it here http://developer.nvidia.com/cuda-cc-sdk-code-samples#histogram. They even have a whitepaper explaining the algorithms. A: I'm comparing two approaches suggested at the duplicate question thrust count occurence, namely, * *Using thrust::counting_iterator and thrust::upper_bound, following the histogram Thrust example; *Using thrust::unique_copy and thrust::upper_bound. Below, please find a fully worked example. 
#include <time.h>    // --- time
#include <stdlib.h>  // --- srand, rand
#include <iostream>

#include <thrust\host_vector.h>
#include <thrust\device_vector.h>
#include <thrust\sort.h>
#include <thrust\iterator\zip_iterator.h>
#include <thrust\unique.h>
#include <thrust/binary_search.h>
#include <thrust\adjacent_difference.h>

#include "Utilities.cuh"
#include "TimingGPU.cuh"

//#define VERBOSE
#define NO_HISTOGRAM

/********/
/* MAIN */
/********/
int main() {

    const int N = 1048576;
    //const int N = 20;
    //const int N = 128;

    TimingGPU timerGPU;

    // --- Initialize random seed
    srand(time(NULL));

    thrust::host_vector<int> h_code(N);

    for (int k = 0; k < N; k++) {
        // --- Generate random numbers between 0 and 9
        h_code[k] = (rand() % 10);
    }

    thrust::device_vector<int> d_code(h_code);
    //thrust::device_vector<unsigned int> d_counting(N);

    thrust::sort(d_code.begin(), d_code.end());

    h_code = d_code;

    timerGPU.StartCounter();

#ifdef NO_HISTOGRAM
    // --- The number of d_cumsum bins is equal to the maximum value plus one
    int num_bins = d_code.back() + 1;

    thrust::device_vector<int> d_code_unique(num_bins);
    thrust::unique_copy(d_code.begin(), d_code.end(), d_code_unique.begin());

    thrust::device_vector<int> d_counting(num_bins);
    thrust::upper_bound(d_code.begin(), d_code.end(), d_code_unique.begin(), d_code_unique.end(), d_counting.begin());
#else
    thrust::device_vector<int> d_cumsum;

    // --- The number of d_cumsum bins is equal to the maximum value plus one
    int num_bins = d_code.back() + 1;

    // --- Resize d_cumsum storage
    d_cumsum.resize(num_bins);

    // --- Find the end of each bin of values - Cumulative d_cumsum
    thrust::counting_iterator<int> search_begin(0);
    thrust::upper_bound(d_code.begin(), d_code.end(), search_begin, search_begin + num_bins, d_cumsum.begin());

    // --- Compute the histogram by taking differences of the cumulative d_cumsum
    //thrust::device_vector<int> d_counting(num_bins);
    //thrust::adjacent_difference(d_cumsum.begin(), d_cumsum.end(), d_counting.begin());
#endif

    printf("Timing GPU = %f\n", timerGPU.GetCounter());

#ifdef VERBOSE
    thrust::host_vector<int> h_counting(d_counting);
    printf("After\n");
    for (int k = 0; k < N; k++) printf("code = %i\n", h_code[k]);
#ifndef NO_HISTOGRAM
    thrust::host_vector<int> h_cumsum(d_cumsum);
    printf("\nCounting\n");
    for (int k = 0; k < num_bins; k++) printf("element = %i; counting = %i; cumsum = %i\n", k, h_counting[k], h_cumsum[k]);
#else
    thrust::host_vector<int> h_code_unique(d_code_unique);
    printf("\nCounting\n");
    for (int k = 0; k < N; k++) printf("element = %i; counting = %i\n", h_code_unique[k], h_counting[k]);
#endif
#endif
}

The first approach has shown to be the fastest. On an NVIDIA GTX 960 card, I have had the following timings for N = 1048576 array elements:

* First approach: 2.35 ms
* First approach without thrust::adjacent_difference: 1.52 ms
* Second approach: 4.67 ms

Please note that there is no strict need to calculate the adjacent difference explicitly, since this operation can be done manually during kernel processing, if needed.
A: As others have said, you can use the sort & reduce_by_key approach to count frequencies. In my case, I needed to get the mode of an array (the maximum frequency/occurrence), so here is my solution:
1 - First, we create two new arrays, one containing a copy of the input data and another filled with ones to later reduce (sum):

// Input:            [1 3 3 3 2 2 3]
// *(Temp) dev_keys: [1 3 3 3 2 2 3]
// *(Temp) dev_ones: [1 1 1 1 1 1 1]

// Copy input data
thrust::device_vector<int> dev_keys(myptr, myptr + size);
// Fill an array with ones (dev_ones must be declared with the same length)
thrust::device_vector<int> dev_ones(size);
thrust::fill(dev_ones.begin(), dev_ones.end(), 1);

2 - Then, we sort the keys since the reduce_by_key function needs the array to be sorted.
// Sort keys (see below why)
thrust::sort(dev_keys.begin(), dev_keys.end());

3 - Later, we create two output vectors, for the (unique) keys and their frequencies:

thrust::device_vector<int> output_keys(N);
thrust::device_vector<int> output_freqs(N);

4 - Finally, we perform the reduction by key:

// Reduce contiguous keys: [1 3 3 3 2 2 3] => [1 3 2 1] Vs. [1 3 3 3 3 2 2] => [1 4 2]
thrust::pair<thrust::device_vector<int>::iterator,
             thrust::device_vector<int>::iterator> new_end;
new_end = thrust::reduce_by_key(dev_keys.begin(), dev_keys.end(), dev_ones.begin(),
                                output_keys.begin(), output_freqs.begin());

5 - ...and if we want, we can get the most frequent element:

// Get most frequent element
// Get index of the maximum frequency
int num_keys = new_end.first - output_keys.begin();
thrust::device_vector<int>::iterator iter =
    thrust::max_element(output_freqs.begin(), output_freqs.begin() + num_keys);
unsigned int index = iter - output_freqs.begin();
int most_frequent_key = output_keys[index];
int most_frequent_val = output_freqs[index]; // Frequencies
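As a quick CPU-side sanity check for the GPU result, the same count-and-mode computation can be sketched in a few lines of Python (purely illustrative - this is not part of Thrust):

```python
from collections import Counter

# Same toy input as the Thrust example: [1 3 3 3 2 2 3]
data = [1, 3, 3, 3, 2, 2, 3]
freqs = Counter(data)  # value -> occurrence count
most_frequent_key, most_frequent_val = freqs.most_common(1)[0]
print(dict(freqs))                           # {1: 1, 3: 4, 2: 2}
print(most_frequent_key, most_frequent_val)  # 3 4
```

Comparing such a reference result against the GPU's output_keys/output_freqs on a small input is an easy way to validate the kernel pipeline before running it on millions of elements.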
{ "language": "en", "url": "https://stackoverflow.com/questions/7573900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Complex YouTube search query I have to search in a subset of YouTube results. For example, the query should search for some song title that belongs to a set of artists. Is it possible to group expressions? For example, I would like to find all videos titled Vogue by either Madonna or Rihanna. So the required query should like something like this: (Madonna|Rihanna)+Vogue. The problem with this query is that the results returned will include all the songs by Rihanna and Madonna and all the videos that have Vogue in the title. And this I don't want. Is there any way to specify complex logical expression to YouTube search API? A: Just be more precise with your logic and use: madonna+vogue|rihanna+vogue
{ "language": "en", "url": "https://stackoverflow.com/questions/7573902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: append a DIV and animate it

$(document).ready(function(){
    function anima() {
        $(".box").stop().animate({bottom:'0px'},{queue:false,duration:160});
    }
    $('ul#aa img').hover(function(){
        $(this).parent().append("<div class='box'>Artist<br/>More</div>", anima());
    }, function() {
        $(".box").stop().animate({bottom:'-100px'},{queue:false,duration:160,
            complete: function(){ $('.box').remove(); }
        });
    });
});

<ul id="aa">
  <li id="bb">
    <img src="delete.jpg" title="one|date|location|detail"/>
  </li>
</ul>

I am trying to add a div box on hover, then make it slide up (this part works), but on mouse-out I would like to animate it down and then remove the div (doesn't work - it just removes, without animating). Also: $('#caption', this)? What does this call? Setting the caption inside this element?
A:

$(document).ready(function(){
    var flag = false;
    $('ul#aa img').hover(
        function() {
            if (($(this).next().length) == 0) {
                $(this).parent().append("<div class='box'>Artist<br/>More</div>");
                $(".box").stop().animate({bottom:'0px'},{queue:false,duration:160});
            }
        },
        function() {
            $(".box").stop().animate({bottom:'-100px'},{
                queue:false, duration:1000,
                complete: function() { $(this).remove(); }
            });
        }
    );
});

I figured it out: I had to use flags as well, because it was creating a new div every time on hover before the older one was deleted. Not sure if you can use slideUp/slideToggle here with the queue attribute? This does not work for more than one li item though; I need it for an infinite number of items - how can I have flags per item?
edit: Instead of flags you can just use if(($(this).next().length)==0) to check if the div is there or not. I updated the code.
A: (-1 for code formatting.)

$(document).ready(function(){
    function anima() {
        $(".box").stop().animate({bottom:'0px'},{queue:false,duration:160});
    }
    $('ul#aa img').hover(
        function(){
            $(this).parent()
                .append("<div class='box'>Artist<br/>More</div>", anima());
        },
        function() {
            $(".box").stop()
                .animate({bottom:'-100px'},{queue:false,duration:160,
                    complete: function() { $('.box').remove(); }
                });
        });
});

<ul id="aa">
  <li id="bb">
    <img src="delete.jpg" title="one|date|location|detail"/>
  </li>
</ul>

That second function(queue) looks strange to me.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-12" }
Q: How to place code snippets within Visual Studio 2010 Toolbox window? I've really enjoyed Anders Hejlsberg's presentation at BUILD 2011, and it's not the first time that I've noticed someone having a collection of code snippets available within Visual Studio's Toolbox window. Given that all the searches I've performed so far pointed me to how to deal with IntelliSense snippets, I was wondering if anyone knows how to achieve this?
A: You just need to copy the code to the toolbox. A simple selection of the code, and a drag and drop onto the toolbox, makes it available. It will not be deleted until you delete it (at least that never happens to me by itself). Is this what you need?
{ "language": "en", "url": "https://stackoverflow.com/questions/7573912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: TypeDescriptor.GetProperties vs. Type.GetProperties I'm looking at some code where an MSDN author uses the following in different methods of the same class: if ( TypeDescriptor.GetProperties(ModelInstance)[propertyName] != null ) return; var property = ModelInstance.GetType().GetProperty(propertyName); Would you use the former because its faster and you only need to query a property and the latter if you need to manipulate it? Something else? A: The first method should generally not be faster since internally per default it actually uses the second method. The TypeDescriptor architecture adds functionality on top of the normal reflection (which instance.GetType().GetProperty(...) represents. See http://msdn.microsoft.com/en-us/library/ms171819.aspx for more information about the TypeDescriptor architecture. In general using reflection directly is faster (i.e. your second line above), but there may be a reason for using the TypeDescriptor if some custom type provider is in use which may return other results than the standard reflection.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to prevent exception in parallel threads from killing the application? I'm running an HtmlUnit web automation app. It usually works correctly; however, sometimes it goes overboard with StackOverflowError. That usually happens somewhere within its JS thread, and, hence, I can't catch it by surrounding the statement with try..catch. As it stands, each time I get a StackOverflowError, the app crashes. I've tried to do this with Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() { @Override public void uncaughtException(Thread t, Throwable e) { System.out.println("Uncaught exception in thread: " + t.getName()); e.printStackTrace(); scr = new HtmlUnitWrapper(); } }); but the app keeps crashing. Is there anything else I can do to catch and process exceptions? A: A stack overflow is always going to be a fatal error in the JVM; it means the JVM stack is out of memory, and there is nothing you can do to fix that. It means that some method is recursing and blowing out the stack. Since you say this is happening when there is JavaScript on the page, I would assume that the JavaScript is causing some kind of recursion. Try changing the JavaScript logic and see if that fixes the StackOverflowError; that is your root cause and the only way to "fix" this problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: 500.19 Error from custom MembershipProvider I have written a custom Membership Provider and Role Provider, and locally these work great. They are pulling all the correct data and writing correctly. However, when I deploy this project to the web server, I receive a 500.19 error pointing to the web.config file. I have narrowed the issue to the declaration of the membership provider <connectionStrings> <add name="ProjectConnectionString" connectionString="blahblahblah" providerName="System.Data.SqlClient" /> </connectionStrings> <membership defaultProvider="CustomMembership"> <providers> <clear/> <add name="CustomMembership" type="CustomMembership.CustomMembershipProvider" connectionStringName="ProjectConnectionString" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="60" applicationName="/" /> </providers> </membership> Has anyone seen this issue before? Or have an idea what could be causing it? Technology asp.net 4.0 with mvc3 locally - VS 2010 server - Server 08 A: As far as I can see from Googling, the main reason for this is insufficient permissions, as the error message says. Your file permissions do not allow the IIS_IUSRS user (or, if your application pool is running as a custom user, that user) to access web.config (or probably any of the files). One easy way to test this is to block remote access to the website and grant Everyone full rights to that folder, just to see that it is a permission issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Playing short self-made sound with immediate start in iOS I have a self-generated DTMF sound (with a wav header), generated programmatically, that I want to be able to play quickly, in fact as soon as the user touches a button. This DTMF sound must play/loop infinitely, until I stop it. Some other sounds must be able to be played at the same time. I'm very new to audio programming, and I have tested many ways of doing this and I'm lost now. How can I achieve that? Needs: * *very quick playback start (including the first time) *many sounds at the same time (short sounds +- 2-6 seconds) *infinite DTMF sound without gaps *having control over the different sounds that are playing / being able to stop just one played sound A: AVAudioPlayer if you can live with some latency, OpenAL (for example Finch) if you really need to have the latency as low as possible. A: I use an already existing .wav file, and I can easily play it. To run the following code, include the AudioToolbox framework. Write this into the .h file: #import <AudioToolbox/AudioToolbox.h> Write this into the .m file: -(IBAction)startSound{ //Get the filename of the sound file: NSString *path = [NSString stringWithFormat:@"%@%@", [[NSBundle mainBundle] resourcePath], @"/sound1.wav"]; //Declare a system sound SystemSoundID soundID; //Get a URL for the sound file NSURL *filePath = [NSURL fileURLWithPath:path isDirectory:NO]; //Use Audio Services to create the sound AudioServicesCreateSystemSoundID((CFURLRef)filePath, &soundID); //Call the sound method (not shown here) to play the sound [self sound]; timer = [[NSTimer scheduledTimerWithTimeInterval:2.1 target:self selector:@selector(sound) userInfo:nil repeats:YES] retain]; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7573922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Href Links Dynamically I have some links in a json file <code> "links": [ {"link": "http://www.google.com/", "id": "1" }, {"link": "http://www.poogle.com/", "id": "2" }, {"link": "http://www.foogle.com/", "id": "3" } ] </code> On a webpage, I would like a js function or script that will write the URLs into the href attributes dynamically. So if I'm on a page and the href is <a href id=”1”> </a> it should be able to write google.com into that a href. Also, is this the best approach to take? Should I use id? Or something else? Your insight will be helpful UPDATED JSON FILE: {"links": [ {"link": "google.com", "id": "1" }, {"link": "yahoo.com", "id": "2" }, {"link": "msn.com", "id": "3" }, {"link": "mash.com", "id": "4" }, {"link": "facebook.com", "id": "5" } ] } JS for (var i = 0; i < linksObj.links.length; i++) { var linkObj = linksObj.links[i]; var elem = document.getElementById(linkObj.id); if (elem) { elem.href = linkObj.link; elem.innerHTML = linkObj.link; } } HTML <a id='1'></a><br> <a id='2'></a> A: First off, IDs beginning with a number are not valid - change them to have a letter in front :) Other than that though, this would do: for (var i = 0; i < links.length; i++) { var link = links[i]; $('#' + link.id).attr('href', link.link); } EDIT Also, as John Hartsock mentions above, make sure you use a standard double quote to surround your attribute values, not the curly one shown in your original code. A: here's the way to do it if you don't want to use jQuery: for (var i = 0; i < links.length; i++) { var link = links[i]; document.getElementById(link.id).setAttribute('href', link.link); } A: var linksObj = {"links": [ {"link": "http://www.google.com/", "id": "1" }, {"link": "http://www.poogle.com/", "id": "2" }, {"link": "http://www.foogle.com/", "id": "3" } ]}; for (var i = 0; i < linksObj.links.length; i++) { var linkObj = linksObj.links[i]; var elem = document.getElementById(linkObj.id); if (elem) { elem.href = linkObj.link; elem.innerHTML = linkObj.link; } } Example
{ "language": "en", "url": "https://stackoverflow.com/questions/7573925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: iOS Memory Management & NSString Initialisation Still learning iOS development with Objective-C and iOS, and trying to really understand memory management! I'd appreciate any advice on the snippet below, e.g.: 1) The Analyser says there are potential memory leaks, but I can't solve them. 2) Should I keep alloc'ing and init'ing the NSStrings in the for loop and when appending to them? Thanks - (NSString *) lookUpCharNameForID: (NSString *) inCharID { debugPrint ("TRACE", [[@"Lookup Char Name for = " stringByAppendingString: inCharID] UTF8String]); NSString *tempName = [[NSString alloc] initWithFormat: @""]; if (![inCharID isEqualToString: @""]) { // Potentially lookup multiple values // NSString *newName = [[NSString alloc] initWithFormat: @""]; NSArray *idList = [inCharID componentsSeparatedByString: @","]; for (NSString *nextID in idList) { NSLog( @"Lookup %i : %@", [idList count], nextID); newName = [[NSString alloc] initWithFormat: @"C%@", nextID]; // Append strings if ([tempName isEqualToString: @""]) tempName = [[NSString alloc] initWithFormat: @"%@", newName]; else tempName = [[NSString alloc] initWithFormat: @"%@+%@", tempName, newName]; } [newName release]; } return [tempName autorelease]; } A: You don't need any of the calls to alloc, release, or autorelease. Instead, use [NSString stringWithFormat:] to create instances of NSString that you don't own, and therefore don't need to manage. Also, consider using NSMutableString to simplify your code a bit, for example along the lines of the following (untested) version: - (NSString *) lookUpCharNameForID: (NSString *) inCharID { NSMutableString *tempName = nil; if (![inCharID isEqualToString: @""]) { NSArray *idList = [inCharID componentsSeparatedByString: @","]; for (NSString *nextID in idList) { [tempName appendString:@"+"]; // Does nothing if tempName is nil. if (tempName == nil) tempName = [NSMutableString string]; [tempName appendFormat:@"C%@", nextID]; } } return tempName; } A: You have 2 alloc initWithFormat calls for tempName. 
One before the loop and one within the loop. A: Use ARC (Automatic Reference Counting) for new projects. For older projects it may be easy to convert them; if not, ARC can be disabled on a file-by-file basis where necessary. Using a mutable string, autoreleased convenience methods and a little refactoring: - (NSString *) lookUpCharNameForID: (NSString *) inCharID { NSMutableString *tempName = [NSMutableString string]; if (inCharID.length) { NSArray *idList = [inCharID componentsSeparatedByString: @","]; for (NSString *nextID in idList) { if (tempName.length == 0) [tempName appendFormat: @"C%@", nextID]; else [tempName appendFormat: @"+C%@", nextID]; } } return tempName; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7573926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Do Foreign Key constraints get checked on an SQL update statement that doesn't update the columns with the Constraint? Do Foreign Key constraints get checked on an SQL update statement that doesn't update the columns with the Constraint? (In MS SQL Server) Say I have a couple of tables with the following columns: OrderItems - OrderItemID - OrderItemTypeID (FK to a OrderItemTypeID column on another table called OrderItemTypes) - ItemName If I just update update [dbo].[OrderItems] set [ItemName] = 'Product 3' where [OrderItemID] = 2508 Will the FK constraint do it's lookup/check with the update statement above? (even thought the update is not change the value of that column?) A: No, the foreign key is not checked. This is pretty easy to see by examining the execution plans of two different updates. create table a ( id int primary key ) create table b ( id int, fkid int ) alter table b add foreign key (fkid) references a(id) insert into a values (1) insert into a values (2) insert into b values (5,1) -- Seek on table a's PK update b set id = 6 where id = 5 -- No seek on table a's PK update b set fkid = 2 where id = 6 -- Seek on table a's PK drop table b drop table a A: No. Since the SQL update isn't updating a column containing a constraint, what exactly would SQL Server be checking in this case? This is similar to asking, "does an insert trigger get fired if I only do an update?" Answer is no. A: There is a case when the FK not existing will prevent updates to other columns even though the FK is not changed and that is when the FK is created WITH NOCHECK and thus not checked at the time of creation. Per Books Online: If you do not want to verify new CHECK or FOREIGN KEY constraints against existing data, use WITH NOCHECK. We do not recommend doing this, except in rare cases. The new constraint will be evaluated in all later data updates. 
Any constraint violations that are suppressed by WITH NOCHECK when the constraint is added may cause future updates to fail if they update rows with data that does not comply with the constraint.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: lowest possible timeuuid in php (phpcassa) pycassa has pycassa.util.convert_time_to_uuid(time_arg, lowest_val=True, randomize=False) phpcassa has static string uuid1 ([string $node = null], [int $time = null]) Can phpcassa's uuid1 be used to get lowest/highest uuids like in pycassa? If not, what's the best approach to ensure you get everything between two given timestamps? A: I believe that if you have a column with a type of UUID version 1, Cassandra will ignore the 'unique' component of the UUID and just use the time part for the range. A: Strictly speaking, Cassandra sorts primarily by the timestamp component of a v1 UUID, and in the case of a tie, it sorts by the remaining bytes: int res = compareTimestampBytes(o1, o2); if (res != 0) return res; return o1.compareTo(o2); phpcassa should offer something similar to pycassa here. As a workaround in the meantime, you can set the last 8 bytes of the return value to 0x00.
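To illustrate what pycassa's convert_time_to_uuid does under the hood, and what the suggested phpcassa workaround amounts to: only the timestamp fields of a version-1 UUID matter for Cassandra's primary sort order, so the remaining clock-seq and node bytes can be pinned to their extremes to mark the start or end of a range. Below is a rough Python sketch of that idea (the helper name and exact boundary constants are my own illustration, not pycassa's or phpcassa's API):

```python
import uuid

def time_to_uuid(ts, lowest=True):
    """Build a v1 UUID for a Unix timestamp with extreme non-time bytes.

    Mimics the idea behind pycassa's convert_time_to_uuid with
    randomize=False: the clock-seq and node bytes are pinned to all-zeros
    (for a range start) or all-ones (for a range end), keeping the RFC 4122
    variant bits intact.
    """
    # 100-ns intervals since the UUID epoch (1582-10-15); the constant is
    # the offset between the UUID epoch and the Unix epoch.
    ns100 = int(ts * 1e7) + 0x01B21DD213814000
    time_low = ns100 & 0xFFFFFFFF
    time_mid = (ns100 >> 32) & 0xFFFF
    time_hi_version = ((ns100 >> 48) & 0x0FFF) | 0x1000  # version 1
    if lowest:
        clock_seq_hi, clock_seq_low, node = 0x80, 0x00, 0x000000000000
    else:
        clock_seq_hi, clock_seq_low, node = 0xBF, 0xFF, 0xFFFFFFFFFFFF
    return uuid.UUID(fields=(time_low, time_mid, time_hi_version,
                             clock_seq_hi, clock_seq_low, node))
```

In phpcassa you would do the analogous thing by building the 16-byte value yourself and zeroing (or maxing) the last 8 bytes, which is exactly the workaround the second answer suggests.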
{ "language": "en", "url": "https://stackoverflow.com/questions/7573938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Make Up/Down arrow in input boxes not do anything How can I make <input> elements not react to pressing the Up arrow (keyCode 38) or the Down arrow (keyCode 40), while they are focused? I'm using jQuery for the project, but have no qualms against writing it in raw JS if that's easier. A: Like this (note that arrow keys fire keydown but not keypress in most browsers, so bind keydown): $('.yourinputclass').keydown(function(e) { if (e.which == 38 || e.which == 40) return false; // or you can use e.preventDefault(); like it was mentioned in the comments }); Documentation here
{ "language": "en", "url": "https://stackoverflow.com/questions/7573948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Comparing strings in python to find errors I have a string that is the correct spelling of a word: FOO I would allow someone to mistype the word in such ways: FO, F00, F0O, FO0 Is there a nice way to check for this? Lower case should also be seen as correct, or converted to upper case. Whatever would be the prettiest. A: One approach is to calculate the edit distance between the strings. You can for example use the Levenshtein distance, or invent your own distance function that considers 0 and O closer than 0 and P, for example. Another is to transform each word into a canonical form, and compare canonical forms. You can for example convert the string to uppercase, replace all 0s with Os, 1s with Is, etc., then remove duplicated letters. >>> import itertools >>> def canonical_form(s): s = s.upper() s = s.replace('0', 'O') s = s.replace('1', 'I') s = ''.join(k for k, g in itertools.groupby(s)) return s >>> canonical_form('FO') 'FO' >>> canonical_form('F00') 'FO' >>> canonical_form('F0O') 'FO' A: The builtin module difflib has a get_close_matches function. You can use it like this: >>> import difflib >>> difflib.get_close_matches('FO', ['FOO', 'BAR', 'BAZ']) ['FOO'] >>> difflib.get_close_matches('F00', ['FOO', 'BAR', 'BAZ']) [] >>> difflib.get_close_matches('F0O', ['FOO', 'BAR', 'BAZ']) ['FOO'] >>> difflib.get_close_matches('FO0', ['FOO', 'BAR', 'BAZ']) ['FOO'] Notice that it doesn't match one of your cases. You could lower the cutoff parameter to get a match: >>> difflib.get_close_matches('F00', ['FOO', 'BAR', 'BAZ'], cutoff=0.3) ['FOO'] A: You can use the 're' module: re.compile(r'f(o|0)+', re.I) # ignore case You can use curly braces to limit the number of occurrences too. You can also get 'fancy' and define your 'leet' sets and add them in with %s, as in: ay = '(a|4|$)' oh = '(o|0|\))' re.compile(r'f%s+' % (oh), re.I)
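The edit-distance idea from the first answer can be made concrete with a small implementation. This is a plain two-row dynamic-programming Levenshtein distance (a generic sketch, not tied to any particular library); accepting, say, a distance of 1 or less would treat FO, F0O and FO0 as close enough to FOO:

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    if len(a) < len(b):
        a, b = b, a  # keep the shorter string in the inner loop
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]
```

Combined with the canonical-form idea above, a tolerant check could be levenshtein(canonical_form(guess), canonical_form('FOO')) <= 1.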
{ "language": "en", "url": "https://stackoverflow.com/questions/7573952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Decoding UTF-8 email subject? I have a string in this form: =?utf-8?B?zr... And I want to get the name of the file in proper UTF-8 encoding. Is there a library method somewhere in maven central that will do this decoding for me, or will I need to test the pattern and decode base64 manually? A: MimeUtility.decodeText is working for me, eg, MimeUtility.decodeText("=?UTF-8?B?4K6q4K+N4K6q4K+K4K604K6/4K614K+BIQ==?="); A: javax.mail.internet.MimeUtility.decodeWord() On the other hand, if you use JavaMail for decoding your emails, you don't have to care about either subject parsing or MIME body (attachments) parsing at all. BTW it does not need to be Base64 (common with Apple's clients), it can also be Quoted-Printable (common with MS Outlook client). Thunderbird uses whichever format is shorter (Base64 for Japanese, QP for most European languages). If you really want to implement it yourself, have a look at RFC2047 and RFC2184 (you have to, there are a few subtleties like split encoding in two different character sets or merging adjacent encoded words only separated by folding white space) A: In MIME terminology, those encoded chunks are called encoded-words. Check out javax.mail.internet.MimeUtility.decodeText in JavaMail. The decodeText method will decode all the encoded-words in a string. You can grab it from maven with <groupId>javax.mail</groupId> <artifactId>mail</artifactId> <version>1.4.4</version>
{ "language": "en", "url": "https://stackoverflow.com/questions/7573957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do I create an Entity with Multiple Navigation Properties to the same Table? This question has been asked in a few variations, but I believe that my issue is somewhat unique. I'm using the database-first approach with Entity Framework 4, and I am trying to map an Account to multiple Addresses, as well as map that same account to BillingAddress and ShippingAddress. Here is my db schema: Account * *ID *BillingAddressID *ShippingAddressID Address * *ID *AccountID I have 3 foreign keys. * *Account.BillingAddressID to Address.ID *Account.ShippingAddressID to Address.ID *Address.ID to Account.ID I would like to have the following POCO setup: public class Account { public int ID { get; set; } public virtual Address BillingAddress { get; set; } public virtual Address ShippingAddress { get; set; } public virtual ICollection<Address> Addresses { get; set; } } public class Address { public int ID { get; set; } public int AccountID { get; set; } } Yet when I do this and try to create a new Account with an address, I get the following error: "An error occurred while saving entities that do not expose foreign key properties for their relationships." One thought I had was to shift the schema to a full many-to-many relationship between Account and Address. Then I guess I could put the Billing vs. Shipping vs. Other in the AccountAddresses table and use that in the foreign keys, but I am really curious if I can get it to work as is. Account a = new Account(); // snip: add some account properties a.Addresses.Add(new Address { // snip: some properties here }); context.SaveChanges(); ==> ERROR (as mentioned above) Any thoughts? Thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/7573958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Modal-dialog won't hide on page load I am trying to create a modal dialog to just show content (html of some sort or other): <script> $.fx.speeds._default = 1000; $(function() { $( "#dialog" ).dialog({ autoOpen: false, closeOnEscape: true, modal: true, position: 'center', width: 800, height: 600, show: "blind", hide: "explode" }); $( "#opener" ).click(function() { $( "#dialog" ).dialog( "open" ); return false; }); }); </script> When I view the page, the dialog is inline and not hidden. Here is my html: <div id="dialog">This is my dialog that should be hidden until called</div> <button id="opener">I Open the Dialog</button> What am I doing wrong? A: You should set the autoOpen property to false, below is some reference http://jqueryui.com/demos/dialog/#option-autoOpen Here is an example $(function() { $( "#dialog" ).dialog({ closeOnEscape: true, modal: true, position: 'top', width: 800, height: 600, show: "blind", hide: "explode", autoOpen: false ///added this line }); $( "#opener" ).click(function() { $( "#dialog" ).dialog( "open" ); return false; }); }); A: Hide the div using css like such: <div id="dialog" style="display:none;">This is my dialog that should be hidden until called</div> Now it will only show when called upon.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Disable jsessionid via http header (cookie) in Tomcat 7 I'm looking to disable jsessionid from being used in the HTTP headers. Is there a way to turn this off or disable it being set as a cookie in Tomcat 7? I either want the jsessionid to arrive embedded in a GET request's URL name-value pairs or to be part of a POST request's name-value pairs. I know all the advantages and disadvantages of cookie-based sessioning and URL rewriting, but I have specific needs for a specific implementation of RESTful web services. I need Tomcat 7 to accept the jsessionid without using the http header: jsessionid. Thanks. UPDATE: so I looked around some more and found this, which is implemented using the web.xml conf. However the following doesn't seem to work with Tomcat 7. <session-config> <tracking-mode>URL</tracking-mode> </session-config> Is it a case of TC7 not fully implementing the servlet 3.0 spec? A: The web.xml setting works for me with Tomcat 7.0.20. Log and check the effective (and maybe the default) session tracking modes: logger.info("default STM: {}" , servletContext.getDefaultSessionTrackingModes()); logger.info("effective STM: {}" , servletContext.getEffectiveSessionTrackingModes()); Maybe your app overrides the session tracking modes somewhere in the code. An example: final Set<SessionTrackingMode> trackingModes = Collections.singleton(SessionTrackingMode.COOKIE); servletContext.setSessionTrackingModes(trackingModes); Check ServletContext.setSessionTrackingModes() calls in your code. It's also possible to set default session tracking modes in Tomcat's context settings, but I found that the web.xml settings override them.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to check Stack Usage when Calculating Ackermann I'm learning about my system's ability to calculate Ackermann's algorithm, both the two- and three-parameter versions. For very small values of m and n, my system will calculate and print results returning from A0 and A1 method calls. However, anything higher than 3 or 4 does not return and freezes the terminal I'm using atm. My problem is that I want to determine for what values of m and n my machine can compute it. I have tried a few things to catch a stack overflow; for all I know, C++ doesn't have a StackOverflowException I can catch. try-catch blocks don't work. In the below code, I use getrlimit() to find the stack limit and create an address location in main, gStackRef. I call checkStack recursively, checking the local variable pointer against gStackLimit. Is there a better way of checking my stack usage in relation to recursive methods? Also, how do I check for segmentation faults? I'll let you know I'm running on a unix terminal. #include <cstdlib> #include <cstdio> #include <iostream> #define _XOPEN_SOURCE_EXTENDED 1 #include <sys/resource.h> int getrlimit(int resource, struct rlimit *rlp); using namespace std; int * gStackRef; int gStackLimit; void checkStack(void); int main(int argc, char *argv[]) { int temp = 0; gStackRef = &temp; rlimit myl; getrlimit(RLIMIT_STACK, &myl); gStackLimit = (myl.rlim_cur / 3 * 8 / 10) ;/* modified for segment fault */ cout << gStackLimit << "\n"; checkStack(); } void checkStack() { int temp = 0; int* pVariableHere = &temp; size_t stackUsage = gStackRef - pVariableHere; printf("Stack usage: %d / %d \n", stackUsage, gStackLimit); if(stackUsage > gStackLimit) return; else checkStack(); } A: However, anything higher than 3 or 4 does not return and freezes the terminal I'm using atm. That's kind of the point of the Ackermann function. It grows extremely rapidly. For m >= 4 and n >= 3, if you're calculating A(m, n) recursively, I doubt your function will return before you're dead. 
I have tried a few things to catch a stack overflow; for all I know, C++ doesn't have a StackOverflowException I can catch. try-catch blocks don't work. In the below code, I use getrlimit() to find the stack limit and create an address location in main, gStackRef. I call checkStack recursively, checking the local variable pointer against gStackLimit. POSIX does not have a "safe" way of detecting a stack overflow. Stack overflows result in SIGSEGV signals, which you (generally) should not catch because they are also indicative of general segmentation faults, which should crash your program. Windows environments can deal with stack overflows safely, using EXCEPTION_STACK_OVERFLOW -- but in such cases what Windows is doing is merely putting a guard page at the end of the stack and notifying with SEH. If you use up the guard page (after ignoring the SEH exception), then your program gets terminated (just as it would in POSIX-land). Is there a better way of checking my stack usage in relation to recursive methods? Also, how do I check for segmentation faults? I'll let you know I'm running on a unix terminal. No. Even what you're doing has undefined behavior. On some machines the stack grows up. On some machines the stack grows down. The compiler may insert any amount of slop space in between two methods. 
Technically, the compiler could implement things such that there were two separate stacks, located in two completely different memory segments, and still be conformant. If you want to calculate Ackermann in a stack safe manner, either use an explicit stack structure allocated from the heap, or use dynamic programming.
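To make the "explicit stack allocated from the heap" suggestion concrete, here is a minimal sketch (shown in Python for brevity; the same transformation works in C++ with a std::vector standing in for the list). A heap-backed stack replaces the call stack, so the recursion depth is bounded by available memory rather than by the RLIMIT_STACK value the question is probing:

```python
def ackermann(m, n):
    """Two-argument Ackermann function using an explicit heap stack.

    Each entry on the stack is a pending outer argument m; n carries
    the current inner value through the loop.
    """
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                  # A(0, n) = n + 1
        elif n == 0:
            n = 1
            stack.append(m - 1)     # A(m, 0) = A(m - 1, 1)
        else:
            stack.append(m - 1)     # the outer call waits for the inner result
            stack.append(m)         # A(m, n) = A(m - 1, A(m, n - 1))
            n -= 1
    return n
```

This removes the crash, not the cost: A(4, 2) already has 19,729 decimal digits, so the freeze for m >= 4 is inherent to the function, as the first answer points out.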
{ "language": "en", "url": "https://stackoverflow.com/questions/7573962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Cookie name length, uniqueness I want to know what the maximum length of a cookie name is. Is the cookie name unique per domain, and/or path? A: All of this information is specified in RFC 2965 - HTTP State Management Mechanism. A cookie name must be, like Jay said, unique within a path. The RFC also specifies that there should be no fixed maximum length for a cookie's name or value: From chapter 5.3 - Implementation Limits Practical user agent implementations have limits on the number and size of cookies that they can store. In general, user agents' cookie support should have no fixed limits. They should strive to store as many frequently-used cookies as possible. Furthermore, general-use user agents SHOULD provide each of the following minimum capabilities individually, although not necessarily simultaneously: * *at least 300 cookies *at least 4096 bytes per cookie (as measured by the characters that comprise the cookie non-terminal in the syntax description of the Set-Cookie2 header, and as received in the Set-Cookie2 header) *at least 20 cookies per unique host or domain name User agents created for specific purposes or for limited-capacity devices SHOULD provide at least 20 cookies of 4096 bytes, to ensure that the user can interact with a session-based origin server... In practice, each browser defines its own maximum length. For more concrete data on the subject, you can consult the following stackoverflow question: What is the maximum size of a web browser's cookie's key?. A: It must be unique within a path. A: I don't know about the max size, but each cookie should not be more than 4,000 characters, and in all practicality it should not be more than 2,000 characters
{ "language": "en", "url": "https://stackoverflow.com/questions/7573965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to remove files/folders older than a certain time I already use a function to delete all files and folders within a certain folder. function rrmdir($dir) { if (is_dir($dir)) { $objects = scandir($dir); foreach ($objects as $object) { if ($object != "." && $object != "..") { if (filetype($dir."/".$object) == "dir") rrmdir($dir."/".$object); else unlink($dir."/".$object); } } reset($objects); rmdir($dir); } } What I want to do now is adapt that function to only delete files and folders older than 60 minutes (for instance). There's a php function 'filemtime' that I believe gives the file/folder age, but I don't know how to delete specifically files older than "x" minutes. A: This construct will delete files older than 60 minutes (3600 seconds) using the filemtime() function: if (filemtime($object) < time() - 3600) { // Remove empty directories... if (is_dir($object)) rmdir($object); // Or delete files... else unlink($object); } Note that for rmdir() to work, the directory must be empty.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Retrieving form value with jQuery Sadly, this isn't as cut and dried as I had hoped. Over the past few weeks I had been researching the use of jQuery with CRM. While it's nice and dandy for style alterations, I couldn't find any examples that are closer to business logic. For example, today I needed to alert the browser if one of 4 fields was empty. Two were date fields, one a picklist and one a checkbox (bit). I thought that calling $("#formElement").val() would have gotten the value, and in some cases it did, such as the picklist after I parsed it as an int. However, the date fields always returned an empty string. Looking through the CRM form HTML, I see that "#formElement" isn't always the ID of an input for a CRM form element. Case in point, the date fields had ID="DateTime" (or something similar). At this point, I had thought that I would need to create a filter that takes the table that contains #formElement as its ID and looks for the value of the first input in that table, but at that point using crmForm.all.formElement.DataValue just seemed easier. I'm sure someone here has a solution for this (and maybe some explanation of how CRM Forms are written to help with a filter), and it really stinks not being able to install add-ons for Internet Explorer here at work. Thanks for any and all help. A: Use jQuery to select the form itself (either by its ID or just by $(form)) and then iterate over its children that are input text fields. I haven't done this for a form before, but it might work for you.
If you'd like to use a CRM 4 attribute with jQuery, it looks like this: $(crmForm.all.new_attribute).bind("click", function() { ClickFunction(); }); What I was really gunning for was chaining, because there are plenty of times when I need to null a field, disable it, and then force it to submit. A little bit of magic and this: crmForm.all.new_attribute.DataValue = null; crmForm.all.new_attribute.Disable = true; crmForm.all.new_attribute.ForceSubmit = true; Becomes: crmForm.all.new_attribute.dataValue().disable().forceSubmit(); I hope this helps some of you guys out!
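The chaining at the end works because each wrapper method returns the object it was called on. As a stand-alone illustration of that fluent pattern (this wrap helper and the property names are a hypothetical sketch, not the poster's actual CRM code):

```javascript
// Minimal fluent wrapper: each method mutates the wrapped field
// and returns `this`, which is what makes the chained call possible.
function wrap(field) {
  return {
    dataValue: function (v) { field.DataValue = v === undefined ? null : v; return this; },
    disable: function () { field.Disabled = true; return this; },
    forceSubmit: function () { field.ForceSubmit = true; return this; }
  };
}

// Usage, analogous to the chained call above:
// wrap(field).dataValue().disable().forceSubmit();
```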
{ "language": "en", "url": "https://stackoverflow.com/questions/7573972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can IQueryable<> only contain instructions (in the form of expression tree) on how to get the initial sequence, but not 1) public class Query<T> : IQueryable<T> ... { ... public IEnumerator<T> GetEnumerator() { return((IEnumerable<T>)this.provider.Execute(this.expression)).GetEnumerator(); } } Query<string> someQuery = new Query<string>(); someQuery.Expression holds the expression tree associated with this particular instance of IQueryable and thus describes how to retrieve the initial sequence of items. In the following example initialSet variable actually contains a sequence of strings: string[] initialSet = { }; var results1 = from x in initialSet where ... select ...; a) Can someQuery also contain a sequence of strings, or can it only contain instructions ( in the form of expression tree ) on how to get this initial sequence of strings from some DB? b) I assume that even if someQuery actually contains an initial sequence of strings, it is of no use to Where and Select operators, since they won't ever operate on this sequence of strings, but instead their job is only to build queries or request queries to be executed ( by calling IQueryProvider.Execute)? And for this reason someQuery must always contain an expression tree describing how too get the initial sequence of strings, even if someQuery already contains this initial sequence? Thank you EDIT: c) The way I understood your post is that query provider may contain information about describing the table or at least describing particular DB rows which initial query needs to retrieve. But I didn't interpret your answer as saying that query provider may also contain actual elements required by this initial query ( someQuery in our example )? d) Regardless, I assume even if query provider maintains actual elements, it can only maintain them for initial query? Thus if we apply Linq-to-entity or Linq-to-Sql operators on that initial query, I assume provider will have to query the database. 
As such, if my assumptions are correct, then the answer to b) would be: even if the query does contain actual elements, when we call Where on someQuery ( someQuery.Where ), the query provider will have to retrieve results from a DB, even if this query provider already contains all the elements of someQuery? e) I only started learning Linq-to-entities, so my question may be too general, but how does EF handle all of this? In other words, when does ObjectSet<T> returned by some EF API ( such as ObjectContext ) contain actual elements, and when does it ( if ever ) contain only logic for retrieving elements from some data source (such as a DB)? f) Also, even if ObjectSet<T> ( returned by, say, ObjectContext ) does contain actual elements, I assume that if we apply the Where operator on it ( ObjectSet<T>.Where ), the query provider will always have to retrieve results from the DB?
It's unclear what you're really trying to get out of these questions - but if you're after a full implementation to study, there are plenty of open source LINQ providers around. You might want to look at NHibernate for example.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Edge-Side-Includes Module for nginx? Does anyone know about an ESI 1.0 implementation for nginx?
{ "language": "en", "url": "https://stackoverflow.com/questions/7573974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What does =+ (equals-plus) mean in C? I came across =+ as opposed to the standard += today in some C code; I'm not quite sure what's going on here. I also couldn't find it in the documentation. A: It's an ancient defunct variant of +=. In modern compilers, this is equivalent to an assignment operator followed by a unary +. A: In ancient versions of C, =+ was equivalent to +=. Remnants of it have been found alongside the earliest dinosaur bones. For example, B introduced generalized assignment operators, using x+=y to add y to x. The notation came from Algol 68 via McIlroy, who incorporated it in his version of TMG. (In B and early C, the operator was spelled =+ instead of +=; this mistake, repaired in 1976, was induced by a seductively easy way of handling the first form in B's lexical analyzer.) [The Development of the C Language, Dennis Ritchie. Copyright ACM, 1993. Internal citations omitted.] Since the mid-1970's, it has no special meaning -- it's just a = followed by a +. A: I think a =+ 5; should be equivalent to a = (+5); and therefore be code of very bad style. I tried the following code and it printed "5": #include <iostream> using namespace std; int main() { int a=2; a =+ 5; cout << a; } A: You can find evidence of the old notation in the 7th Edition UNIX Manual (Vol 2a) dated January 1979, available online at http://cm.bell-labs.com/7thEdMan/ (unavailable since approximately July 2015; the June 2015 version is now available via the WayBack Machine at http://cm.bell-labs.com/7thEdMan/ — or at https://9p.io/7thEdMan/). The chapter is titled 'C Reference Manual' by Dennis M. Ritchie, and is in the PDF version of the manual, but not in the HTML version. In the relevant part, it says: 7.14.1 lvalue = expression The value of the expression replaces that of the object referred to by the lvalue. The operands need not have the same type, but both must be int, char, float, double, or pointer. 
If neither operand is a pointer, the assignment takes place as expected, possibly preceded by conversion of the expression on the right. When both operands are int or pointers of any kind, no conversion ever takes place; the value of the expression is simply stored into the object referred to by the lvalue. Thus it is possible to generate pointers which will cause addressing exceptions when used. 7.14.2 lvalue =+ expression 7.14.3 lvalue =- expression 7.14.4 lvalue =* expression 7.14.5 lvalue =/ expression 7.14.6 lvalue =% expression 7.14.7 lvalue =>> expression 7.14.8 lvalue =<< expression 7.14.9 lvalue =& expression 7.14.10 lvalue =^ expression 7.14.11 lvalue = | expression The behavior of an expression of the form ‘‘E1 =op E2’’ may be inferred by taking it as equivalent to ‘‘E1 = E1 op E2’’; however, E1 is evaluated only once. Moreover, expressions like ‘‘i =+ p’’ in which a pointer is added to an integer, are forbidden. Separately, there is a paper 'Evolution of C' by L Rosler in the 'UNIX® SYSTEM: Readings and Applications, Volume II', originally published by AT&T as their Technical Journal for October 1984, later published in 1987 by Prentice-Hall (ISBN 0-13-939845-7). One section of that is: III. Managing Incompatible Changes Inevitably, some of the changes that were made alter the semantics of existing valid programs. Those who maintain the various compilers used internally try to ensure that programmers have adequate warning that such changes are to take effect, and that the introduction of a new compiler release does not force all programs to be recompiled immediately. For example, in the earliest implementations the ambiguous expression x=-1 was interpreted to mean "decrement x by 1". It is now interpreted to mean "assign the value -1 to x". This change took place over the course of three annual major releases. 
First, the compiler and the lint program verifier were changed to generate a message warning about the presence of an "old-fashioned" assignment operation such as =-. Next, the parsers were changed to the new semantics, and the compilers warned about an ambiguous assignment operation. Finally, the warning messages were eliminated. Support for the use of an "old-fashioned initialization" int x 1; (without an equals sign) was dropped by a similar strategy. This helps the parser produce more intelligent syntax-error diagnostics. Predictably, some C users ignored the warnings until introduction of the incompatible compilers forced them to choose between changing their obsolete source code or assuming maintenance of their own versions of the compiler. But on the whole the strategy of phased change was successful. Also, in Brian W Kernighan and Dennis M Ritchie The C Programming Language, 1st Edn (1978), on p212 in Appendix A, §17 Anachronisms, it says: Earlier versions of C used the form =op instead of op= for assignment operators. This leads to ambiguities, typified by: x=-1 which actually decrements x since the = and the - are adjacent, but which might easily be meant to assign -1 to x. A: It's just assignment followed by unary plus. #include <stdio.h> int main() { int a; a =+ 5; printf("%d\n",a); return 0; } Prints "5". Change a =+ 5 to a =- 5 and it prints "-5". An easier way to read a =+ 5 is probably a = +5. A: After reading your question I just investigated on these. Let me tell you what I have found. Tried it on gcc and turboc. 
I didn't verify it on Visual Studio as I have not installed it on my PC. int main() { int a=6; a =+ 2; printf("%d",a); } o/p: a's value is 2 int main() { int a=6; a =- 2; printf("%d",a); } o/p: a's value is -2 I don't know about the other answers, which said it's an ancient version of C. But modern compilers treat these as a value to be assigned ( that's positive or negative, nothing more than that), and the code below makes me more sure about it. int main() { int a=6; a =* 2; // Reports an error: invalid type of argument of unary * printf("%d",a); } If *= were equal to =* then it should not report an error, but it's throwing an error. A: Using "=+" you are just assigning; the operand is positive, for example int a = +10; the same goes for a negative number, int a = -10;
{ "language": "en", "url": "https://stackoverflow.com/questions/7573978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Can iOS file system changes be observed from files being uploaded through iTunes? I have an application that displays a list of files to the user. The application has file sharing enabled, so the user can add or remove files through iTunes. Is it possible to observe file system changes from the user doing this? I'd like to automatically update the display of files available. A: Unfortunately there is no observer or notification for such event, but instead you can rescan the files in the applicationDidBecomeActive: method of your application's delegate. Workflow: when the user adds files in your app's Document directory thru iTunes, iTunes briefly synchronize the files, making your app become inactive (applicationWillResignActive:) during this small duration, and then make it active again (applicationDidBecomeActive:). Thus (and even if it is not the only time this method is called) scanning the contents of your Documents folder in this method guaranties that it will be up to date when the user adds files thru iTunes. For more info about UIApplication's Delegate Messaging workflow, I encourage you to read this excellent article from cocoanetics.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Something confusing about IQueryable.GetEnumerator public class Query<T> : IQueryable<T> ... { ... public IEnumerator<T> GetEnumerator() { return((IEnumerable<T>)this.provider.Execute(this.expression)).GetEnumerator(); } } Query<string> someQuery = new Query<string>(); var results1 = someQuery.Select(...).Where(...); string[] initialSet = { }; var results2 = initialSet.Select(...).Where(...); * *When operating on initialSet, Linq-to-object's Where<T> returns WhereEnumerableIterator<T> and thus results2 is of type WhereEnumerableIterator<T>. But when operating on someQuery, does the Where<T> operator assign to results1 an instance retrieved by calling someQuery.GetEnumerator or does it also return some custom class? *If the latter, when exactly is someQuery.GetEnumerator called by Where and Select operators? A: The type of results2 is just IEnumerable<T> - the type of the implementation that the value of results2 actually refers to at execution time happens to be WhereEnumerableIterator<T>, that's all. When operating on someQuery, it depends what you do with it - the type of the results1 variable is IQueryable<T>, so you can use more Queryable calls on it. someQuery.GetEnumerator() may never be called - it's up to the query provider implementation to work out exactly how to represent the query; it doesn't need to call GetEnumerator all the way up the chain like LINQ to Objects typically does. As for the type of object returned by Queryable.Where - again, that's up to the query provider implementation - the difference is that whereas the knowledge is baked into Enumerable.Where and can't be replaced, Queryable.Where will chain the call through to the query provider. A: If the latter, when exactly is someQuery.GetEnumerator called by Where and Select operators? When the query is enumerated. Hence comes the name. initialSet.Select(...).Where(...); This looks wrong. You use the Where to filter, and the Select to project the result. You appear to have it backwards.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: TCPDF: Mixed orientation in one pdf Is it possible to have mixed orientations in one PDF, e.g. some pages portrait and some landscape? Or is it possible to rotate content? I can see that the overall orientation can be set in the constructor, but didn't see anything else. MS A: It's actually pretty easy; all you have to do is pass the orientation when you add a page: // portrait $tcpdf->addPage( 'P', 'LETTER' ); // landscape $tcpdf->addPage( 'L', 'LETTER' ); A: For rotating the content you can use the following code. $pdf->StartTransform(); $pdf->Rotate(-90); $pdf->Cell(0,0,'This is a sample data',1,1,'L',0,''); $pdf->StopTransform();
{ "language": "en", "url": "https://stackoverflow.com/questions/7573994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Must IQueryProvider.Execute be called from within IQueryable.GetEnumerator? 1) IQueryable essentially represents a query which when executed will yield a sequence of results. a) I assume we execute a query by either directly calling IQueryProvider.Execute ( immediate execution ) and passing in an expression tree or by calling IQueryable.GetEnumerator ( deferred execution )? b) Is it IQueryProvider.Execute that actually converts the expression tree into target language ( say SQL ) and then retrieves the results from a DB? 2) public class Query<T> : IQueryable<T> ... { ... public IEnumerator<T> GetEnumerator() { return((IEnumerable<T>)this.provider.Execute(this.expression)).GetEnumerator(); } } Query<string> someQuery = new Query<string>(); foreach (var item in someQuery) { ... } a) In the above example the query is executed by foreach calling someQuery.GetEnumerator. I assume that in order for the query to actually get executed, the IQueryProvider.Execute must be called from within someQuery.GetEnumerator ( via this.provider.Execute )? Thank you EDIT 1) If the query is returned from a Queryable method, then it is deferred until GetEnumerator() or Execute() is called. I realize that queries returned from IQueryable are deferred, but it seems that you're implying that queries ( those implementing IQueryable ) may also be returned from non-Queryable methods ( in which case they are not deferred )? 2) This is how LINQ-to-SQL does it. LINQ-to-Entities, however, seems to bypass IQueryProvider when GetEnumerator() is called since its ObjectQuery provider already has the target implementation-specific command tree ready to be executed. I've just begun learning Linq to entities, but it seems that ObjectQuery<> represents both a query and a specific provider, while with Linq-to-sql ( don't know any Linq-to-sql, so I'm just guessing ) a query is represented with IQueryable<> and the provider is represented with IQueryProvider?
A: 1.a) An IQueryable<T> query is usually executed by calling IEnumerable<T>.GetEnumerator() (more often than not via a foreach loop). You can use IQueryProvider.Execute() as well, which will also execute the query, but you'll have to cast the result from the IExecuteResult object that's returned. If the query is returned from a Queryable method, then it is deferred until GetEnumerator() or Execute() is called. 1.b) Yes, IQueryProvider.Execute takes an implementation-specific Expression and converts it to the implementation-specific target form, and executes that code on the implementation backing (e.g., database or web service). 2.a) This is how LINQ-to-SQL does it. LINQ-to-Entities, however, seems to bypass IQueryProvider when GetEnumerator() is called since its ObjectQuery<T> provider already has the target implementation-specific command tree ready to be executed.
{ "language": "en", "url": "https://stackoverflow.com/questions/7573996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Silverlight 4 Transition Animations Between Adding/Removing Grid Child Elements I'm using the Silverlight Wizard control provided by this blog: http://weblogs.asp.net/bryansampica/archive/2010/07/21/silverlight-4-0-wizard-custom-control.aspx And I would like to add a transition between ActivePage changes...the way changes are handled in the codebehind is like so: public void manager_PropertyChanged(object sender, System.ComponentModel.PropertyChangedEventArgs e) { ContentHost.Children.Clear(); ContentHost.Children.Add(manager.ActiveStep); HeaderText = manager.ActiveStep.StepHeaderText; } Is there any way to add an animation between the Clear & Add? My apologies if this is a silly question! Thanks! A: One way to get the desired effect would be to launch a Storyboard which handles the visual transition, then listen on the Completed event to update the ContentHost.Children. * *In a storyboard animate ContentHost.Opacity to 0 *When the Storyboard.Completed event fires, execute the code in your manager_PropertyChanged() code block *Launch a second Storyboard to animate ContentHost.Opacity back to 1
{ "language": "en", "url": "https://stackoverflow.com/questions/7573999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to get All users in web application by setting role as search parameter - asp.net mvc In my MVC application I am using the membership service. I need a page to list the users, but there are thousands of users in my application, so I don't want to display all of them on one page. I am planning to provide a search option: the admin user can search by specifying a user role and how many users to show per page. How can I do this? Any ideas? Current code: Model public MembershipUserCollection Users { get; set; } Controller model.Users = Membership.GetAllUsers(); But I am getting all users in the application. A: You probably want to query your role provider: public ActionResult Foo() { string[] usernamesInRole = Roles.GetUsersInRole("some_role"); ... }
{ "language": "en", "url": "https://stackoverflow.com/questions/7574001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Generic Interface, IEnumerable I have the following code: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace QQQ.Mappings { interface IExcess<T> { IEnumerable<string, T> getExcessByMaterialGroup(T[] data); void Sort<TKey>(T[] data, Func<T, TKey> selector); } } But I'm getting this error, "Using the generic type 'System.Collections.Generic.IEnumerable' requires '1' type arguments" A: There is no standard IEnumerable<T, K> generic type interface, only IEnumerable<T> (MSDN). I believe you need IDictionary<string, T> (MSDN) instead A: IEnumerable<T> is the only form; there is no IEnumerable<T,T>, but you can use IDictionary<T,T> A: This is your problem, IEnumerable has only 1 generic argument. IEnumerable<string, T> What exactly are you trying to accomplish? A: IEnumerable only accepts a single type argument. You should be declaring that as IEnumerable<T>. A: IEnumerable only has one type argument, yet you have specified two (string, T). You probably want something like: IEnumerable<string> getExcessByMaterialGroup(T[] data); if the method is supposed to return an enumerable of strings. A: You are attempting to return IEnumerable<string, T> from getExcessByMaterialGroup. IEnumerable<T> only takes one type parameter, not two (String and T). My guess is that you want to return something like IEnumerable<KeyValuePair<String, T>> A: IEnumerable<T> exists, there is no dual dictionary style IEnumerable<T, U>. If you're looking for a KeyValue like relationship, consider IEnumerable<KeyValuePair<string, T>>
{ "language": "en", "url": "https://stackoverflow.com/questions/7574009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: java.lang.NullPointerException at com.sun.faces.renderkit.RenderKitImpl.createResponseWriter I have a JSF 2.0 project with PrimeFaces 3.0.0.M3 on Glassfish. When I run it, I get the following exception: java.lang.NullPointerException at com.sun.faces.renderkit.RenderKitImpl.createResponseWriter(RenderKitImpl.java:228) at com.sun.faces.application.view.JspViewHandlingStrategy.renderView(JspViewHandlingStrategy.java:214) at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:131) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:121) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:594) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1539) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:281) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:98) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:91) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:162) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:330) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:174) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:828) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:725) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1019) at 
com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:662) A: Given the following line, at com.sun.faces.application.view.JspViewHandlingStrategy.renderView(JspViewHandlingStrategy.java:214) I assume that you're actually using the legacy JSP instead of its successor Facelets as view technology (if you were using Facelets, you would have seen FaceletViewHandlingStrategy here). JSF 2.x on its own works fine on JSP, but the PrimeFaces component library does not support JSP anymore since PrimeFaces 2.2 for various reasons (it boils down to "not worth the maintenance effort"). If you want to use PrimeFaces 3.0, you need to upgrade to Facelets. Facelets is an XML-based view technology which offers a great many advantages over JSP. It's already bundled as the default view technology in JSF 2.x; you do not need to install or configure anything separately. See also: * *Our Facelets wiki page *Disadvantages of JSF 2.0 (a bit of history) *Java EE 6 tutorial - Introduction to Facelets *JSF 2.0 tutorial with Eclipse and Glassfish
{ "language": "en", "url": "https://stackoverflow.com/questions/7574013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Apple.com product animation http://www.apple.com/mac/ (the products animating into position) Anyone know how apple does this? I don't need useable code. Just an idea of how to accomplish it. I use the jQuery framework. EDIT: Thanks to Jordan for pointing this out. Apple is using css3 animations for this, not javascript. If anyone has a good idea on doing this with JS please post. A: Apple is using CSS3 animations for this. Check out the CSS file and scroll down to /* animations. A: Here I made a version in jQuery, which works in all browsers. Using this technique, you have many ways to do it using different CSS approaches, like absolute divs inside a relative one, etc. and then changing that values with the jQuery's animate function. I made it as simple as possible. http://jsfiddle.net/sanbor/SggMG/ HTML <div class="box">one</div> <div class="box">two</div> <div class="box">three</div> <div class="clearFloat"></div> <a id="resetAnimation" href="#">Run animation again</a> CSS .box { background: red; width: 100px; height: 50px; margin: 10px; float: left; margin-left: 100%; } .clearFloat { clear: both; } JS function animateBoxes() { $('.box').each(function(index, element) { $(element).animate({ 'marginLeft': '10px' }, { duration: 500, specialEasing: { marginLeft: 'easeOutBounce' } }, function() { // Animation complete. 
}); }); } $('#resetAnimation').click(function() { $('.box').css('marginLeft', '100%'); animateBoxes(); }); animateBoxes(); Alternate way, with css3 (http://jsfiddle.net/sanbor/SggMG/6/) This can also be done with CSS3 transitions, which simply add a smooth effect between property changes, whereas CSS3 animations allow applying full keyframe sequences: HTML <div class="box">one</div> <div class="box">two</div> <div class="box">three</div> <div class="clearFloat"></div> <a id="resetAnimation" href="#">Click twice</a> CSS .clearFloat { clear: both; } .box { background: red; width: 100px; height: 50px; margin: 10px; float: left; } .box.moveit{ -webkit-animation-name: moveit; -webkit-animation-duration: 1s; -moz-animation-name: moveit; -moz-animation-duration: 1s; -ms-animation-name: moveit; -ms-animation-duration: 1s; animation-name: moveit; animation-duration: 1s; } @-webkit-keyframes moveit { from { margin-left: 100%; } to { margin-left: 0%; } } @-moz-keyframes moveit { from { margin-left: 100%; } to { margin-left: 0%; } } @-ms-keyframes moveit { from { margin-left: 100%; } to { margin-left: 0%; } } @keyframes moveit { from { margin-left: 100%; } to { margin-left: 0%; } } JS $('#resetAnimation').click(function() { $('.box').toggleClass('moveit'); });
{ "language": "en", "url": "https://stackoverflow.com/questions/7574016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How is an MD5 or SHA-X hash different from an encryption? I've read a couple of times that MD5 is not an encryption, e.g. on MD5 ... Encryption? or Command Line Message Digest Utility. Well, I get that it's a hash/message digest, and the explanation in the links above says an encryption has to have a key, while hash/md is a cryptographic hash function that produces just a signature. I don't really understand the difference. Couldn't you see the cryptographic hash function / algorithm as a key? Also, what is the difference between something that's cryptographic and something that's encryption? A: You can't "decrypt" an MD5 hash, and you chose a bad algorithm if you want to transmit information and the receiver can't read it. So encryption must be decryptable. MD5 is a "cryptographic" hash function, because it's very difficult to produce a block of information that has a specific given hash value. So if you want to sign a message, it is enough to sign the hash. This uses less computing power and the receiver can regardless be sure that the original message is untouched. A: A hash algorithm causes information about the original data to be lost irretrievably, whereas an encryption algorithm has a corresponding decryption algorithm which restores the original data. This can be shown in that hash algorithm results have a uniform size (128, 160, 256, etc. bits) regardless of the input, whereas encryption algorithm results have a variable size depending on the size of the input. A: I don't think you can regard the function itself as a key. Because a key is something you pass to the function in order to encrypt or decrypt (<- impossible with md5) a message.
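To make the fixed-size point concrete, here is a quick Python check using the standard-library hashlib module; the digest length never changes with the input, which is one visible symptom of the information loss described above:

```python
import hashlib

# An MD5 digest is always 128 bits (32 hex characters),
# no matter how large the input is - so information must be lost.
short = hashlib.md5(b"a").hexdigest()
long_ = hashlib.md5(b"a" * 1_000_000).hexdigest()
print(len(short), len(long_))  # → 32 32
```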
{ "language": "en", "url": "https://stackoverflow.com/questions/7574023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Inserting complex values to a map in C++ I am having trouble inserting data into this map. I honestly can not figure out the way to do this, but the last line of the code I gave is the part that I need fixed. map<string, vector<vector<Obj*>* >* > the_map; vector<vector<Obj*> *>* vectors = new vector<vector<Obj*> *>; vector<Obj*> Obj_vector; vectors->push_back(&Obj_vector); the_map.insert(make_pair(string("field1", &vectors)); //error on this line only A: Try this: the_map.insert(make_pair(string("field1"), vectors)); //you forgot this ^ ^ // | // & is not needed here By the way, I suspect the usage of so many pointers in your code, and especially these two lines: vector<Obj*> Obj_vector; //this is local variable vectors->push_back(&Obj_vector); //inserting address of the local variable Inserting address of a local variable into vector? Beware that the local variable wouldn't exist after it goes out of scope, which in turn, means that the address which you just inserted into the vector, points to the destroyed object, and using it would invoke undefined behaviour.
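As the answer hints, the pointer-heavy declaration invites lifetime bugs. A hedged alternative sketch (names are illustrative, and it assumes the inner Obj pointers are owned elsewhere): let the map store the nested vectors by value, so the map owns them and no new/delete or address-of-local is needed.

```cpp
#include <map>
#include <string>
#include <vector>

struct Obj {};  // stand-in for the original Obj type

using Grid = std::vector<std::vector<Obj*>>;

// Build the map with value semantics: the map owns the Grid,
// so nothing dangles when local variables go out of scope.
std::map<std::string, Grid> build_map() {
    std::map<std::string, Grid> the_map;
    std::vector<Obj*> obj_vector;    // inner vector (pointers owned elsewhere)
    Grid vectors;
    vectors.push_back(obj_vector);   // copied in, no address-of-local taken
    the_map.insert(std::make_pair(std::string("field1"), vectors));
    return the_map;
}
```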
{ "language": "en", "url": "https://stackoverflow.com/questions/7574025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: .htaccess "Ignore all rules if link starts with..." In my root folder I have installed wordpress and there is also my submenu.php that can not be loaded with ajax if I use rules for /%postname%/ (in default ) So this is what WP gave me # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase /wordpress/ RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /wordpress/index.php [L] </IfModule> # END WordPress What do I need to add so that calling $('#submenu').load('submenu.php?cat=4'); works again? A: This is not the way you should be performing AJAX within WordPress. I suggest you read up on Using AJAX within WordPress from the codex. A: I am not really good with htaccess, but this RewriteRule !^media/ index.php [L] Will redirect everything except media/* to index, so something like this should work RewriteRule !^yourscript.php index.php [L] Note: I agree with Jason there, using it without htaccess is better.
{ "language": "en", "url": "https://stackoverflow.com/questions/7574027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to give a Facebook App Tab access to the fan page data without having users log in? I wanted to build a fan page tab that would pull that page's photo albums. At first I thought I would need to build app with extended permissions to ask for "manage_pages" and "offline_access" so that the app can use my access as page owner to access page data. However, random users that go to the tab will now be asked for permissions also, which I don't want. What's best way to build a tab that can access fan page data via the api? A: If you don't have any fancy privacy settings for your page, you can just use the Graph API to pull all albums from the page - means creating a tab and link to a php file on your server where you do just this. Downside of this is that it's quite slow, but I can't tell whether the javascript sdk would be any faster (don't think so) If you need some help pulling and printing them out, leave a comment!
{ "language": "en", "url": "https://stackoverflow.com/questions/7574033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Select box not floating right I'm trying to create a basic title bar div that contains an h1 and a select list. I want the select list to be on the far right of the div, but floating it right is not working. Does anyone have any ideas? The code is very simple, but I can't see where the mistake is. Thanks! <style type="text/css"> #select { float: right; } h1 { display: inline; } #titleBar { width: 800px; } </style> <body> <div id="titleBar"><h1>Select Your Car </h1> <select name="categories"> <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option value="audi">Audi</option> </select> </div> </body> Here's a link to the jsfiddle: http://jsfiddle.net/qhvDG/1/ A: Your style is not correct; it should be as shown below, because the # represents an element's id and select is the tag name, not the id. select { float: right; } Or better yet, a little more descriptive, like this: div#titleBar > select { float: right; } Here is an example fiddle http://jsfiddle.net/qhvDG/3/ A: Your "select" in the CSS is an ID, not an element name. Just remove the # sign from #select. A: Try using "select" instead of #select in your style. select { float: right; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7574039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: NSString search whole text for another string I would like to search for an NSString in another NSString, such that the result is found even if the second one does not start with the first one, for example: eg: I have a search string "st". I look in the following records to see if any of the below contains this search string, all of them should return a good result, because all of them have "st". Restaurant stable Kirsten At the moment I am doing the following: NSComparisonResult result = [selectedString compare:searchText options:(NSCaseInsensitiveSearch|NSDiacriticInsensitiveSearch) range:NSMakeRange(0, [searchText length])]; This works only for "stable" in the above example, because it starts with "st" and fails for the other 2. How can I modify this search so that it returns ok for all the 3? Thanks!!! A: Why not google first? String contains string in objective-c NSString *string = @"hello bla bla"; if ([string rangeOfString:@"bla"].location == NSNotFound) { NSLog(@"string does not contain bla"); } else { NSLog(@"string contains bla!"); } A: Compare is used for testing less than/equal/greater than. You should instead use -rangeOfString: or one of its sibling methods like -rangeOfString:options:range:locale:. A: I know this is an old thread thought it might help someone. The - rangeOfString:options:range: method will allow for case insensitive searches on a string and replace letters like ‘ö’ to ‘o’ in your search. NSString *string = @"Hello Bla Bla"; NSString *searchText = @"bla"; NSUInteger searchOptions = NSCaseInsensitiveSearch | NSDiacriticInsensitiveSearch; NSRange searchRange = NSMakeRange(0, string.length); NSRange foundRange = [string rangeOfString:searchText options:searchOptions range:searchRange]; if (foundRange.length > 0) { NSLog(@"Text Found."); } For more comparison options NSString Class Reference Documentation on the method - rangeOfString:options:range: can be found on the NSString Class Reference
{ "language": "en", "url": "https://stackoverflow.com/questions/7574041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: JavaScript: How to pass object by value? * *When passing objects as parameters, JavaScript passes them by reference and makes it hard to create local copies of the objects. var o = {}; (function(x){ var obj = x; obj.foo = 'foo'; obj.bar = 'bar'; })(o) o will have .foo and .bar. *It's possible to get around this by cloning; simple example: var o = {}; function Clone(x) { for(p in x) this[p] = (typeof(x[p]) == 'object')? new Clone(x[p]) : x[p]; } (function(x){ var obj = new Clone(x); obj.foo = 'foo'; obj.bar = 'bar'; })(o) o will not have .foo or .bar. Question * *Is there a better way to pass objects by value, other than creating a local copy/clone? A: ES6 Using the spread operator like obj2 = { ...obj1 } Will have same values but different references ES5 Use Object.assign obj2 = Object.assign({}, obj1) A: Javascript always passes by value. In this case it's passing a copy of the reference o into the anonymous function. The code is using a copy of the reference but it's mutating the single object. There is no way to make javascript pass by anything other than value. In this case what you want is to pass a copy of the underlying object. Cloning the object is the only recourse. Your clone method needs a bit of an update though function ShallowCopy(o) { var copy = Object.create(o); for (prop in o) { if (o.hasOwnProperty(prop)) { copy[prop] = o[prop]; } } return copy; } A: As a consideration to jQuery users, there is also a way to do this in a simple way using the framework. Just another way jQuery makes our lives a little easier. var oShallowCopy = jQuery.extend({}, o); var oDeepCopy = jQuery.extend(true, {}, o); references : * *http://api.jquery.com/jquery.extend/ *https://stackoverflow.com/a/122704/1257652 *and to dig into the source.. http://james.padolsey.com/jquery/#v=1.8.3&fn=jQuery.extend A: Actually, Javascript is always pass by value. But because object references are values, objects will behave like they are passed by reference. 
So in order to work around this, stringify the object and parse it back, both using JSON. See the example code below: var person = { Name: 'John', Age: '21', Gender: 'Male' }; var holder = JSON.stringify(person); // value of holder is "{"Name":"John","Age":"21","Gender":"Male"}" // note that holder is a new string object var person_copy = JSON.parse(holder); // value of person_copy is { Name: 'John', Age: '21', Gender: 'Male' }; // person and person_copy now have the same properties and data // but are referencing two different objects A: Not really. Depending on what you actually need, one possibility may be to set o as the prototype of a new object. var o = {}; (function(x){ var obj = Object.create( x ); obj.foo = 'foo'; obj.bar = 'bar'; })(o); alert( o.foo ); // undefined So any properties you add to obj will not be added to o. Any properties added to obj with the same property name as a property in o will shadow the o property. Of course, any properties added to o will be available from obj if they're not shadowed, and all objects that have o in the prototype chain will see the same updates to o. Also, if obj has a property that references another object, like an Array, you'll need to be sure to shadow that object before adding members to the object, otherwise, those members will be added to obj, and will be shared among all objects that have obj in the prototype chain. var o = { baz: [] }; (function(x){ var obj = Object.create( x ); obj.baz.push( 'new value' ); })(o); alert( o.baz[0] ); // 'new value' Here you can see that because you didn't shadow the Array at baz on o with a baz property on obj, the o.baz Array gets modified. So instead, you'd need to shadow it first: var o = { baz: [] }; (function(x){ var obj = Object.create( x ); obj.baz = []; obj.baz.push( 'new value' ); })(o); alert( o.baz[0] ); // undefined A: Check out this answer https://stackoverflow.com/a/5344074/746491 .
In short, JSON.parse(JSON.stringify(obj)) is a fast way to copy your objects, if your objects can be serialized to JSON. A: Here is a clone function that will perform a deep copy of the object: function clone(obj){ if(obj == null || typeof(obj) != 'object') return obj; var temp = new obj.constructor(); for(var key in obj) temp[key] = clone(obj[key]); return temp; } Now you can use it like this: (function(x){ var obj = clone(x); obj.foo = 'foo'; obj.bar = 'bar'; })(o) A: Use Object.assign() Example: var a = {some: object}; var b = new Object; Object.assign(b, a); // b now equals a, but not by association. A cleaner example that does the same thing: var a = {some: object}; var b = Object.assign({}, a); // Once again, b now equals a. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign A: I needed to copy an object by value (not reference) and I found this page helpful: What is the most efficient way to deep clone an object in JavaScript?. In particular, cloning an object with the following code by John Resig: //Shallow copy var newObject = jQuery.extend({}, oldObject); // Deep copy var newObject = jQuery.extend(true, {}, oldObject); A: With the ES6 syntax: let obj = Object.assign({}, o); A: Use this x = Object.create(x1); x and x1 will be two different objects; a change in x will not change x1 A: You're a little confused about how objects work in JavaScript. The object's reference is the value of the variable. There is no unserialized value. When you create an object, its structure is stored in memory and the variable it was assigned to holds a reference to that structure. Even if what you're asking was provided in some sort of easy, native language construct it would still technically be cloning. JavaScript is really just pass-by-value... it's just that the value passed might be a reference to something. A: When you boil down to it, it's just a fancy overly-complicated proxy, but maybe Catch-All Proxies could do it?
var o = { a: 'a', b: 'b', func: function() { return 'func'; } }; var proxy = Proxy.create(handlerMaker(o), o); (function(x){ var obj = x; console.log(x.a); console.log(x.b); obj.foo = 'foo'; obj.bar = 'bar'; })(proxy); console.log(o.foo); function handlerMaker(obj) { return { getOwnPropertyDescriptor: function(name) { var desc = Object.getOwnPropertyDescriptor(obj, name); // a trapping proxy's properties must always be configurable if (desc !== undefined) { desc.configurable = true; } return desc; }, getPropertyDescriptor: function(name) { var desc = Object.getOwnPropertyDescriptor(obj, name); // not in ES5 // a trapping proxy's properties must always be configurable if (desc !== undefined) { desc.configurable = true; } return desc; }, getOwnPropertyNames: function() { return Object.getOwnPropertyNames(obj); }, getPropertyNames: function() { return Object.getPropertyNames(obj); // not in ES5 }, defineProperty: function(name, desc) { }, delete: function(name) { return delete obj[name]; }, fix: function() {} }; } A: If you are using lodash or npm, use lodash's merge function to deep copy all of the object's properties to a new empty object like so: var objectCopy = lodash.merge({}, originalObject); https://lodash.com/docs#merge https://www.npmjs.com/package/lodash.merge
{ "language": "en", "url": "https://stackoverflow.com/questions/7574054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "105" }
Q: How to get an optimized paginated list from a query that has a UNION ALL? I have a query formed by a UNION ALL from two tables. The results have to be ordered and paginated (like the typical list of a web application). The original query (simplified) is: SELECT name, id FROM _test1 -- conditions WHERE UNION ALL SELECT name, id FROM _test2 -- conditions WHERE ORDER BY name DESC LIMIT 10,20 The problem is that the 2 tables have more than 1 million rows each, and the query is very slow. How can I get an optimized paginated list from a UNION ALL? Postdata: I've searched Stack Overflow and found some similar questions to this, but the answer was incorrect or the question wasn't exactly the same. Two examples: Optimize a UNION mysql query Combining UNION and LIMIT operations in MySQL query I'm surprised that nobody on Stack Overflow could answer this question. Maybe it is impossible to make this query more efficient? What could be a solution to this problem? A: I would think that you could use something similar to the solution in your second link to at least help performance, but I doubt that you'll be able to get great performance on later pages. For example: ( SELECT name, id FROM _test1 -- conditions WHERE ORDER BY name DESC LIMIT 0, 30 ) UNION ALL ( SELECT name, id FROM _test2 -- conditions WHERE ORDER BY name DESC LIMIT 0, 30 ) ORDER BY name DESC LIMIT 10, 20 You're basically limiting each subquery to the subset of possible rows that might be on the given page. In this way you only need to retrieve and merge 30 rows from each table before determining which 20 to return. Otherwise the server will potentially grab all of the rows from each table, order and merge them, then start trying to find the correct rows. I don't use MySQL a lot though, so I can't guarantee that the engine will behave how I think it should :) In any event, once you get to later pages you're still going to be merging larger and larger datasets.
HOWEVER, I am of the strong opinion that a UI should NEVER allow a user to retrieve a set of records that let them go to (for example) page 5000. That's simply too much data for a human mind to find useful all at once and should require further filtering. Maybe let them see the first 100 pages (or some other number), but otherwise they have to constrain the results better. Just my opinion though.
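The per-branch LIMIT idea in the answer above can be sanity-checked with a self-contained sketch. This uses SQLite via Python's standard sqlite3 module (the table contents are made up for illustration, and SQLite's LIMIT 10 OFFSET 10 corresponds to MySQL's LIMIT 10,20 shape with different numbers): each branch only needs its first offset+limit rows, and the paged result matches the naive query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE _test1 (id INTEGER, name TEXT);
    CREATE TABLE _test2 (id INTEGER, name TEXT);
""")
conn.executemany("INSERT INTO _test1 VALUES (?, ?)",
                 [(i, "a%04d" % i) for i in range(100)])
conn.executemany("INSERT INTO _test2 VALUES (?, ?)",
                 [(i, "b%04d" % i) for i in range(100)])

# Naive version: merge everything from both tables, then page.
naive = conn.execute("""
    SELECT name, id FROM _test1
    UNION ALL
    SELECT name, id FROM _test2
    ORDER BY name DESC LIMIT 10 OFFSET 10
""").fetchall()

# Optimized version: each branch only keeps its first offset+limit (20) rows.
pushed = conn.execute("""
    SELECT name, id FROM (
        SELECT name, id FROM _test1 ORDER BY name DESC LIMIT 20)
    UNION ALL
    SELECT name, id FROM (
        SELECT name, id FROM _test2 ORDER BY name DESC LIMIT 20)
    ORDER BY name DESC LIMIT 10 OFFSET 10
""").fetchall()

print(naive == pushed)  # True
```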
{ "language": "en", "url": "https://stackoverflow.com/questions/7574059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: XML parse error I have a valid XML document. It asked me to treat the entire metadata record as the value of the metadata element "dataxml", with newlines (and percent signs) indicated by percent signs (i.e. “percent-escaped”), so I have done the following. Note: it asked me to percent-escape only the following: \n, \r, : and %, so I only str_replaced those $input .= 'dataxml: ' . str_replace(array(chr(hexdec('3A')),chr(hexdec('25')),chr(hexdec('0A')),chr(hexdec('0D'))),array('%3A', '%25', '%0A', '%0D'), $xmlfile) . "\n"; But it pops out the following error: 400 error:'dataxml': XML parse error: xmlns: URI http%253A//dataxml.org/schema/kernel-2.1 is not absolute Can anyone point out what I have done wrong? A: The colon in http:// is being replaced twice: first the colon is replaced by %3A, then the percent in that replacement is replaced by %25. You can use the function strtr() to avoid replacing already-replaced parts of the string. For example, $input .= 'dataxml: ' . strtr($xmlfile, array(":" => "%3A", "%" => "%25", "\n" => "%0A", "\r" => "%0D")) . "\n";
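The double-replacement bug is easy to reproduce outside PHP. A sketch in Python: chained single-character replaces re-escape the % that the first replace introduced (producing exactly the %253A from the error message), while a single pass over the string (the equivalent of PHP's strtr()) does not.

```python
url = "http://dataxml.org/schema/kernel-2.1"

# Buggy: sequential replaces; the second pass re-escapes the % from the first.
buggy = url.replace(":", "%3A").replace("%", "%25")
print(buggy)   # http%253A//dataxml.org/schema/kernel-2.1  (double-escaped)

# Correct: one pass over the string, like PHP's strtr().
table = str.maketrans({":": "%3A", "%": "%25", "\n": "%0A", "\r": "%0D"})
fixed = url.translate(table)
print(fixed)   # http%3A//dataxml.org/schema/kernel-2.1
```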
{ "language": "en", "url": "https://stackoverflow.com/questions/7574066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: About Redirect as POST I want to redirect the user to some URL on another website, but to send some POST variables along with the redirect.. is this possible? And if yes, how? Thanks. A: It is not. :( You can however submit a hidden form using Javascript. EDIT: shame upon me. It seems it can be achieved w/o Javascript. Try to post some data to a PHP page you write yourself, which basically tells the browser to do a 303 See Other redirect. It shall work, in the sense that the browser should re-POST the data to the redirection target, but someone reports this causes the browser to show a "really repost the data?" message, like the one you see if you refresh a web page you loaded with a POST. However, even if it works, I think nobody does it.
{ "language": "en", "url": "https://stackoverflow.com/questions/7574068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: jQuery canvas addImage dynamically I have a canvas element and I need to add images dynamically. function draw(){ var ctx = document.getElementById('myCanvas').getContext('2d'); ctx.drawImage("img/image1.jpg",0,0,200,200); } The HTML code is the following: <div id="divCanvas"> <canvas id="myCanvas" width="322px" height="450px">Canvas not supported</canvas> </div> A: Something like this? Live Demo var startX = 0, startY = 0; $('#clicker').click(function(){ draw($('#testImage')); }); function draw(image){ image = image.get(0); var ctx = document.getElementById('myCanvas').getContext('2d'); ctx.drawImage(image,startX,startY,20,20); startY+=20; } Used jQuery because you have it tagged as such. Not sure what issue you're running into exactly, but to get the actual DOM element to draw onto the canvas you have to use .get().
{ "language": "en", "url": "https://stackoverflow.com/questions/7574073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: IsDayLightSavingTime changed? Before, this code would return True; now it returns False. Have any of you heard of an update to this function? d2 = New DateTime(2010, 11, 7, 1, 0, 0) Console.WriteLine("D2: " & System.TimeZone.CurrentTimeZone.IsDaylightSavingTime(d2)) We parse files and put the data into a database. If I parse the exact same file with the same code (it was never changed) I get different results. Update This is EST/EDT
{ "language": "en", "url": "https://stackoverflow.com/questions/7574074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it possible to find average color of an image from its histogram? I took the average of each color by this method and wrote the average of red, green and blue to database. Here are the images sorted by "-blue". As you can see the 5th image has the most blue. Am I doing something wrong, or is it not possible to get average color from the histogram? This is the handler where I create the histogram: class ImageSave(webapp.RequestHandler): def post(self): homepage = HomePage() original_image = self.request.get("img") url = self.request.get("url") firm_name = self.request.get("firm_name") original_image = db.Blob(original_image) thumbnail = images.resize(original_image, 250, 250) img = images.Image(thumbnail) hist = img.histogram() rgb_weighed_average = hist_weighed_average(hist) #update database homepage.original_image = original_image homepage.thumbnail = thumbnail homepage.firm_name = firm_name homepage.url = url homepage.red = rgb_weighed_average[0] homepage.green = rgb_weighed_average[1] homepage.blue = rgb_weighed_average[2] homepage.put() self.redirect("/imageupload") Thanks! A: Actually, the fifth image doesn't have the most blue. Note that white is (255, 255, 255) as rgb, so an image that is completely white has just as much blue as an image that is completely blue. A darker blue has a smaller blue component than white.
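For reference, a weighted channel average really can be computed from the histogram alone; the hist_weighed_average helper in the question is not shown, but a plausible implementation is mean = sum(value * count) / sum(count) per channel. A sketch in plain Python showing why an all-white image scores the maximum possible blue, as the answer points out:

```python
def weighed_mean(channel_hist):
    """Mean channel value implied by a 256-bucket intensity histogram."""
    total = sum(channel_hist)
    if total == 0:
        return 0.0
    return sum(value * count for value, count in enumerate(channel_hist)) / total

# All-white image: every pixel is (255, 255, 255), so each channel's
# histogram puts everything in bucket 255 -- including blue.
white = [0] * 255 + [100]
# Dark blue image: the blue channel is clustered around intensity 120.
dark_blue = [0] * 120 + [100] + [0] * 135

print(weighed_mean(white))      # 255.0
print(weighed_mean(dark_blue))  # 120.0
```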
{ "language": "en", "url": "https://stackoverflow.com/questions/7574078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Problem with pasting I'm trying to write an AppleScript to search Sparrow (a mail client for Mac). Here is the script: on run argv tell application "Sparrow" activate end tell tell application "System Events" key code 3 using {option down, command down} keystroke argv end tell end run The problem is that I want the script to take an argument on run so that I can supply it with what to search for, but I can't get it to paste it out. A: * *argv is always initialized to a list. *You cannot keystroke a list (you have to coerce each item to a string first). *You can never tell the exact number of parameters that will be sent to the script, so a better route would be to iterate through the list and do whatever needs to be done, as shown below: tell application "System Events" tell process "Sparrow" key code 3 using {command down, option down} repeat with this_item in argv keystroke (this_item as string) end repeat end tell end tell @Runar * *The script is implying that Sparrow is already activated. *You can't do this as written (the result of every text item of argv is still a list). However, if you coerce the result into a string, this will work, but it will squash everything together (assuming AppleScript's text item delimiters is ""). If you set AppleScript's text item delimiters to space, then this would actually be better than the previous script... on run argv tell application "Sparrow" to activate tell application "System Events" tell process "Sparrow" --implying Sparrow is already activated set prevTIDs to AppleScript's text item delimiters key code 3 using {command down, option down} set AppleScript's text item delimiters to space keystroke (every text item of argv) as string set AppleScript's text item delimiters to prevTIDs end tell end tell end run
{ "language": "en", "url": "https://stackoverflow.com/questions/7574081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Java XML parser Possible Duplicate: Java:XML Parser I have a XML file, in which i want to get the text only within the specified tags(lets say, only the text between "<HERE> ... </HERE>. Each file have multiple "<HERE>" blocks. How can i get that? I was using this for normal text files: Scanner scanner = new Scanner(file); while (scanner.hasNextLine()) { String line = scanner.nextLine(); .. } I want to be able to get only the multiple blocks of text inside the tag. A: I would type a long response about XML parsing in Java, but one of the best quick reads on it which I cannot beat is this Dzone article: http://refcardz.dzone.com/refcardz/using-xml-java Explains all you need to know in just a few pages. Definitely worth a read. A: While there's better answers, without the fundamentals you'll not appreciate them. Learn SAX parsing. Basically the parser will call your class when entering and exiting tags. You just need to keep track of the depth, or where you are in the document, check the tag names, and capture the text you want in a StringBuilder buffer. After the parser is complete, you do a toString() on the buffer and get your combined text. Later on, learn DOM parsing. Then learn XPath. However, without learning how to parse XML using an XML parser you will burn through way too much time and brainpower attempting to solve a problem badly. Building a parser from scratch isn't impossible; however, it is stealing away from your time solving the problem at hand (and odds are you don't know enough about XML yet to parse it correctly).
{ "language": "en", "url": "https://stackoverflow.com/questions/7574087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Python Scapy wrpcap - How do you append packets to a pcap file? I have some software that can emulate things like BER and delays on the network. I need a way to test the BER module of the software to make sure it actually works correctly. My solution is to create a program that sends out raw Ethernet frames with the type field set to an unused type. Inside the Ethernet frame is just random bits. For each frame sent out I need to log the frame to a pcap file. On the other side of the network link will be a receiving application that simply writes every packet it sees to its own pcap log. After the test is done running the two pcap logs will be compared to get the BER. I'm using the python module Scapy and so far its done everything that I need. I can send out raw Ethernet frames with random data and see them in Wireshark. However, I don't know how to get the wrpcap() method to append to the pcap file, instead of overwriting. I know I can write a list of packets to wrpcap, but this application needs to be able to run for an indefinite amount of time and I don't want to have to wait until the application quits to write all of packets sent to the hard drive. As that would be a lot to store in memory, and if something happened I would have to start the test all over from scratch. My question is: How do I append to a pcap file using scapy instead of overwriting the pcap file? Is it even possible? If not then what module can do what I need? While looking for something with Scapy's capabilities I ran into dpkt, but I didn't find a lot of documentation for it. Can dpkt do what I'm asking and if so where can I get some good documentation for it? A: There is a way to do what you want, but it means either: * *[Memory hog with one big pcap]: Read the existing pcap from disk with rdpcap() into a scapy PacketList() and then writing frames to the PacketList as they are received. 
You can selectively save intermediate PacketLists to the pcap at will, but I don't think there is anything like an append capability in scapy's wrpcap(). As you mentioned, this technique also means that you are keeping the entire PacketList in memory until completion. *[Glue individual pcap files together]: Only keep small snapshots of packets in memory... you should save pcap snapshots on a per-X-minute basis to disk, and then aggregate those individual files together when the script finishes. You can combine pcap files in linux with mergecap from the wireshark package... The following command will combine pak1.pcap and pak2.pcap into all_paks.pcap: mergecap -w all_paks.pcap pak1.pcap pak2.pcap As for dpkt, I looked through their source, and it might be able to incrementally write packets, but I can't speak for how stable or maintained their code base is... it looks a bit neglected from the commit logs (last commit was January 9th 2011). A: For posterity, PcapWriter or RawPcapWriter looks to be the easier way to deal with this in scapy 2.2.0. Couldn't find much documentation other than browsing the source, though. A brief example: from scapy.utils import PcapWriter pktdump = PcapWriter("banana.pcap", append=True, sync=True) ... pktdump.write(pkt) ... A: The wrpcap() function can be used to append if you include the keyword argument append=True. For example: pkt = IP() wrpcap('/path/to/filename.pcap', pkt, append=True) pkt2 = IP() wrpcap('/path/to/filename.pcap', pkt2, append=True) rdpcap('/path/to/filename.pcap') <filename.pcap: TCP:0 UDP:0 ICMP:0 Other:2> Side note: wrpcap opens and closes the file handle with each call. If you have an open file handle to the pcap file, it will be closed after a call to wrpcap(). A: I think I am following you here: as packets are sniffed, you would like to have them all written to a single pcap file? While you cannot append to a pcap, you can append the packets to a list and then write them all at once to the pcap.
I'm not sure if this answers your question or helps at all; if not, let me know and I can tweak it to meet your needs. In this example I set the threshold to create a new pcap for every 500 packets sniffed. Be careful if you run this twice, as your pcaps may get overwritten on the second go. #!/usr/bin/python -tt from scapy.all import * pkts = [] iter = 0 pcapnum = 0 def makecap(x): global pkts global iter global pcapnum pkts.append(x) iter += 1 if iter == 500: pcapnum += 1 pname = "pcap%d.pcap" % pcapnum wrpcap(pname, pkts) pkts = [] iter = 0 while 1: sniff(prn=makecap) This should give you a little bit of leverage; however, the last few packets may get lost (lower the value in the if statement to mitigate this). Suggest using it on both sides at the same time so each pcap should line up. Later on you can use mergecap as Mike suggests if you like. Let me know if this works for you.
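The reason appending to a pcap is possible at all is the file layout: one 24-byte global header followed by independent per-packet records, so new records can simply be written at the end. A minimal sketch in plain Python (no scapy; the link-type and snaplen values are illustrative, and the helper name is made up):

```python
import struct

# pcap global header, little-endian with microsecond timestamps.
GLOBAL_HDR = struct.pack("<IHHiIII",
    0xA1B2C3D4,  # magic number
    2, 4,        # file format version 2.4
    0,           # thiszone (GMT offset)
    0,           # sigfigs
    65535,       # snaplen
    1)           # linktype 1 = Ethernet (illustrative)

def append_packet(path, payload, ts_sec=0, ts_usec=0):
    """Append one packet record, writing the global header only for a new file."""
    try:
        with open(path, "rb") as f:
            has_header = len(f.read(24)) == 24
    except FileNotFoundError:
        has_header = False
    with open(path, "ab") as f:
        if not has_header:
            f.write(GLOBAL_HDR)
        # Per-record header: ts_sec, ts_usec, captured length, original length.
        f.write(struct.pack("<IIII", ts_sec, ts_usec, len(payload), len(payload)))
        f.write(payload)

append_packet("demo.pcap", b"\x00" * 60)
append_packet("demo.pcap", b"\x01" * 60)  # second call appends, no new header
```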
{ "language": "en", "url": "https://stackoverflow.com/questions/7574092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How to add row margin while inflating a LinearLayout? I have a LinearLayout inside a TableRow of a TableLayout. Below is a sample description: <!-- Master Layout--> <?xml version="1.0" encoding="utf-8"?> <TableLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_height="match_parent" android:layout_width="match_parent" android:stretchColumns="0" android:baselineAligned="false"> <TableRow> <Button></Button> </TableRow> <TableRow> <LinearLayout android:id="@+id/listinfo"></LinearLayout> </TableRow> </TableLayout> Then I wrote another layout to create rows of LinearLayouts, which I inflate and add to the LinearLayout of the master layout. <!-- listrow layout--> <?xml version="1.0" encoding="utf-8"?> <LinearLayout android:orientation="horizontal" xmlns:android="http://schemas.android.com/apk/res/android"> <ImageView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="left" android:src="@drawable/icon" /> <TextView android:layout_weight="1" android:id="@+id/textView1" /> <TextView android:layout_weight="1" android:id="@+id/textView1" /> </LinearLayout> Everything is working perfectly fine, but I am not getting row demarcators. How do I do that? Java code LinearLayout placeHolderLinearLayout = (LinearLayout)findViewById(R.id.listinfo); for (int i = 0; i < myarray.size(); i++) { final Employee eobj = myarray.get(i); LayoutInflater vi = (LayoutInflater)getApplicationContext().getSystemService(Context.LAYOUT_INFLATER_SERVICE); LinearLayout lrow = (LinearLayout)vi.inflate(R.layout.listrow, null); lrow.setBackgroundColor(android.graphics.Color.WHITE); lrow.setPadding(2, 2, 2, 2); ((TextView)lrow.getChildAt(1)).setText(eobj.getName()); //some more settings placeHolderLinearLayout.addView(lrow); } The view doesn't show any demarcator between subsequent LinearLayouts. How can I achieve that?
|---------------------|
|        lrow1        |
|_____demarcator______|
|        lrow2        |   => The demarcator is missing in my view
|_____________________|
A: LinearLayout lrow = (LinearLayout)vi.inflate(R.layout.listrow, null); Try to pass placeHolderLinearLayout instead of null. If you pass true as the third parameter, the inflated layout will be added automatically to the given ViewGroup; false will avoid that.
{ "language": "en", "url": "https://stackoverflow.com/questions/7574095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }