C function float argument takes different value inside the function
I am sure I'm missing something obvious here but just can't figure out what it is. In my repository, I have a file called ddpoly_2.c which contains a function called "ddpoly2". I call this function from the main function in tst5_2.c. If you look at the code in tst5_2.c, I assign x a value of 2, immediately print it, and then pass it to "ddpoly2" as the third argument (which matches the expected calling pattern as far as I can tell). I then immediately print x from within the function "ddpoly2". What I see is the following:
x outside: 2.000000
x inside : 0.000000
nc: 2
nd: 3
You can see that x was 2.000000 just before the function was called, but it became 0.000000 once inside the function. I must be missing something overtly obvious here but can't figure out what it is.
P.S. I compile the code using the makefile in the same directory.
EDIT: Including relevant parts of code here.
Calling the function in tst5_2.c:
int main(int argc, char *argv[])
{
//Used to determine which section gets its results printed.
float c[] = {1,1,1,0,0};
int nc = 2;
float x = 2.0;
float pd[] = {0,0,0,0,0};
int nd = 3;
printf("x outside: %f\n",x);
ddpoly2(c,nc,x,pd,nd);
}
printing it inside the function ddpoly2:
#include <stdio.h>
void ddpoly2(float c[], int nc, float x, float pd[], int nd)
{
int nnd, j, i;
float cnst = 1.0;
printf("x inside : %f\n",x);
printf("nc: %d\n",nc);
printf("nd: %d\n",nd);
}
Please include the actual piece of code you are referring to in the question itself.
Please provide a [mcve]
Can you also please show the actual call to the function?
@JoachimPileborg: Added - sorry for missing that.
You have to create a new file, build your MCVE inside, check it gives the same error as in your project (thus, it must compile and link) and then post it. That is what we call a MCVE.
Maybe I'm blind but I can't find the point where you declare your ddpoly function. How does your project even compile?
Are ddpoly and main defined in separate source files? If yes, recompile everything. Because, if they are in separate files, it is possible that: 1) at some point you created ddpoly function with another list of arguments; 2) you compiled source code with ddpoly into object file; 3) you modified definition/declaration of ddpoly but didn't recompile its source file; 4) you compiled main source file; 5) you linked ddpoly object file (which uses old declaration) with new main object file.
@jdarthenay: Created new files for everything (edited question as well) and still see the repro.
@gudok: Deleted all .o files, created new versions of the tst5.c and ddpoly.c files, rebuilt using make and still see the repro.
Do you get any compiler warnings, like "implicit declaration of ddpoly2"? I don't see where main is given the prototype for ddpoly2.
I am looking at tst5_2.c. It uses ddpoly which must be declared either in nrutils.h or fileio.h or in their descendants in order to compile. But I do not see any declaration there...
Thanks @gudok. I added void ddpoly2(float x); to the top of the tst5_2.c file (I had in the meantime trimmed the minimal repro still further) and now the outside and inside values of x match up. Still, it is concerning that the other int variables nc and nd got the correct values even though I hadn't added the prototype, and only x, which was a float, was getting the wrong value. Anyway, thanks for solving the problem. If you add that as an answer, I will accept it.
You're calling a function without a prototype. This has been invalid since C99, but your compiler is being helpful and allows it for compatibility with older C standards.
The C standard says:
If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double.
The correct solution is to:
Always have correct function prototypes.
Always enable all warnings in compilation and treat them as errors.
-Wall -Wextra -Werror -pedantic should always be on your command line.
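To make the fix concrete, here is a cut-down sketch of the code from the question with the prototype added (the global `x_received` and the trimmed body are only there so the received value can be inspected; they are not from the original code):

```c
#include <stdio.h>

/* Records the value ddpoly2 actually receives, for inspection. */
float x_received;

/* The prototype is the fix: with it in scope before any call, the
 * compiler knows the third argument is a float, so the caller no
 * longer applies the old default promotion of float to double. */
void ddpoly2(float c[], int nc, float x, float pd[], int nd);

void ddpoly2(float c[], int nc, float x, float pd[], int nd)
{
    (void)c; (void)pd;                 /* unused in this cut-down probe */
    x_received = x;
    printf("x inside : %f (nc=%d, nd=%d)\n", x, nc, nd);
}

void demo(void)
{
    float c[]  = {1, 1, 1, 0, 0};
    float pd[] = {0, 0, 0, 0, 0};
    float x = 2.0f;

    printf("x outside: %f\n", x);
    ddpoly2(c, 2, x, pd, 3);           /* x now arrives as 2.000000 */
}
```

In a multi-file build, the prototype would normally live in a header included by both tst5_2.c and ddpoly_2.c, so the declaration and the definition can never drift apart.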
I heard somewhere that programming in C is like swordfighting and ice skating at the same time. I now see why.
Take Case/Somebody To Trial
I have a question about the usage of the noun "trial". Most dictionaries say the following usages:
The case went to trial.
The case came to trial.
He was brought to trial.
are standard English. But I found on the web the following usages:
The case was taken to trial.
He was taken to trial.
which I cannot find in dictionaries. Are "take a case to trial" and "take somebody to trial" wrong? Or are they standard English that should have been included in dictionaries?
I'm sorry, but why would you expect a dictionary's definition of a noun to list every possible verb that can be used with the noun? That would be virtually impossible. Dictionaries can give examples of verbs you can use, but that doesn't mean no other verbs are permissible.
The two uses are essentially the same; the difference is minimal. The main difference is not the verb "take" but the use of the passive. "The case went to trial" is more of a neutral statement describing the process that the case went through. "The case was taken to trial" is in the passive, which implies an agent: "Somebody took the case to trial." Here, we might infer that there were other options available besides trial: a settlement, a mediation, etc., but someone (plaintiff, defendant, lawyer) decided that trial was their preferred option. Context would make it clear. Without context, there isn't a significant difference among these examples.
Mapping a single entity to multiple tables of same schema in hibernate using annotations
I have a class CustomerProfile which is mapped to a table CUST_PROFILE. We have requirement to maintain the closed profiles in a separate table which will have the same schema.
I have read many questions on SO, especially the one below (which has an answer summarizing many other similar questions):
hibernate two tables per one entity
from which I understand that it is difficult to achieve this using annotations other than with a MappedSuperclass, though it is possible using XML mapping.
The reason I am hesitant to use MappedSuperclass is that CustomerProfile has 17 other tables with one-to-many mappings, and we have the same set of tables for closed customer profiles as well. So I would end up with 18 mapped superclasses (17 + 1 for the customer profile), 18 active profile classes, and 18 closed profile classes, which is 54 classes.
Is there any other way that this can be achieved without MappedSuperclass when using annotations?
I have achieved the same using the MappedSuperClass itself.
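For reference, the MappedSuperclass approach the poster settled on looks roughly like this. This is an illustrative sketch only: the field names and the closed-table name `CUST_PROFILE_CLOSED` are assumptions, each entity would live in its own file, and the JPA annotations require `javax.persistence` on the classpath:

```java
// Shared mapping lives once in the superclass.
@MappedSuperclass
public abstract class CustomerProfileBase {
    @Id
    private Long id;

    private String name;
    // ... other shared columns ...
}

// Active profiles: same columns, table CUST_PROFILE.
@Entity
@Table(name = "CUST_PROFILE")
public class CustomerProfile extends CustomerProfileBase { }

// Closed profiles: same columns, separate table (name assumed here).
@Entity
@Table(name = "CUST_PROFILE_CLOSED")
public class ClosedCustomerProfile extends CustomerProfileBase { }
```

The cost is the class explosion described in the question; the benefit is that each table still maps to a plain entity that Hibernate can manage without XML.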
What does "2-place real function" mean?
What does "2-place real function" mean?
This comes up in the context of copulas, as here.
I'd guess, it's a function of two variables. e.g. $f:\Bbb R^2\rightarrow\Bbb R$.
You're going to have to include more context. It probably means a function mapping two real number inputs to a single real number output, though.
Is this in a logic text?
Thanks for your responses. I see this in the copula context. https://books.google.com/books?id=EqbzBwAAQBAJ&pg=PA6&lpg=PA6&dq=%222-+place+real+function%22&source=bl&ots=Ev_XtlcLml&sig=qw42Y8ie7v3pm-oPPizBbDSZKsY&hl=en&sa=X&ei=uAJiVdbiGcmS7AbJt4CIDg&ved=0CCcQ6AEwAQ#v=onepage&q=%222-%20place%20real%20function%22&f=false
In JSP, how do we compare string values in an if condition?
We can write
if (usermailid != null)
It is working but
if (usermailid == "Ram")
is not working.
You can use expression language to compare strings (yes it works with strings!), just like this: ${usermailid == "Ram"}
As Brijesh pointed out, == checks equality and you should use equals as outlined extensively in this answer.
More efficiently, you could combine null check and check against value in one expression, like so
if ("Ram".equals(usermailid)) { /* ... */ }
The equals check fails if the value is null too, so you don't have to check both in separate conditions and don't run the risk of NullPointerExceptions.
Also consider extracting the string being tested against as a constant, if it's used elsewhere.
if (usermailid.equals("Ram"))
This will check equality.
== compares the references of the two objects, which are not the same.
For a reference on Java string functions, refer to this.
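To make the difference concrete, here is a minimal sketch (the variable name usermailid follows the question):

```java
public class StringCompareDemo {
    public static void main(String[] args) {
        // Built at runtime, so this is a distinct object from the
        // literal "Ram" even though the contents are identical.
        String usermailid = new String("Ram");

        System.out.println(usermailid == "Ram");      // false: compares references
        System.out.println(usermailid.equals("Ram")); // true: compares contents
        System.out.println("Ram".equals(null));       // false, with no NullPointerException
    }
}
```

The last line is why `"Ram".equals(usermailid)` is the null-safe direction: a null argument simply yields false.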
Is function $f:\{1,2\}\rightarrow\{1\}$ continuous?
Under "regular" topology in $\mathbb{R}$, is function $f:\{1,2\}\rightarrow\{1\}$ continuous? Here $f$ is a function defined on $\{1,2\}$ and $f(1)=f(2)=1$.
I think it is. According to the definition of continuity, the pre-image of any open set is open. The only open set in this case is $\emptyset$, the empty set, and its pre-image is also $\emptyset$. Thus, $f$ is continuous. Please correct me if I am wrong.
What have you tried so far? For instance, what can you say about the topology of the set $\{1\}$, and what does this imply for this function?
@Martin R The domain here isn't a singleton
@JeanMarie: The question was changed after my closing vote.
@Martin R I understand. Always a disturbing situation...
You missed the open set $\{1\}$ whose pre-image is the entire domain which is open. So the function is indeed continuous.
Why is $\{1\}$ open? An open set means it contains an open ball in the real line. But $\{1\}$ does not contain an open ball... right?
No. I am assuming you don't have experience in topology. For a function $f:X\to Y$ to be continuous, we talk about open sets in $Y$, not some superset of $Y$. So we look at subsets of $\{1\}$ which are open with respect to $\{1\}$, not necessarily with respect to $\Bbb R$. Note that there is only one open ball in $\{1\}$, and that is $\{1\}$ itself, no matter which metric you use. This is because for any $\varepsilon>0$, $B(1,\varepsilon)=\{x\in\{1\}:d(x,1)<\varepsilon\}=\{1\}$. Likewise $\{1,2\}$ is open in $\{1,2\}$.
Perfect. My understanding here is that continuity of a function $f$ is with respect to the topology of the image of $f$.
A slight correction: we use the topology on the codomain of $f$.
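Spelling out the check that the discussion converged on:

```latex
% Open sets of the codomain \{1\} (subspace topology from \mathbb{R}):
%   \emptyset and \{1\}.
% Their preimages under f(1) = f(2) = 1:
\[
  f^{-1}(\emptyset) = \emptyset, \qquad f^{-1}(\{1\}) = \{1,2\},
\]
% and both are open in the subspace topology on \{1,2\}, so f is
% continuous. In fact the subspace topology on \{1,2\} \subset \mathbb{R}
% is discrete (e.g. \{1\} = (0.5, 1.5) \cap \{1,2\}), so any map out of
% \{1,2\} is continuous.
```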
How to put folders to different branches?
I haven't used git for 2 years and it is giving me a hard time now when I look back at my old work. :(
Basically, I have one folder (the main folder) which I want to push to a branch called source, and there is a folder inside the main folder named _deploy which I want to push to the master branch of the same repo. When I do the following:
$ git checkout -b source (on main folder)
$ git push -u origin source
$ cd _deploy
$ git checkout -b master
$ git push -u origin master
I get something weird. On my GitHub repo, I see the same folders in both branches, with _deploy in black, which means I am not able to see inside _deploy. However, this should only happen when I view the source branch, not master; the master branch should show me the content inside _deploy.
Background:
This is a jekyll blog.
I have done git init in the main folder and have added remote origin to the main folder as well. Please help me understand and solve if I am doing something wrong.
What is the purpose of your cd _deploy command?
I just checked your Github page and saw that your _deploy folder is now accessible from both branches. It seems like you would like to clean up the branch now? I have an assumption what happened, please let me know if it is correct.
You have initialised your repository using git init and then proceeded by adding your files and folders to Git, probably by running git add . followed by git commit -m "Init Message". This puts all your files (except the ones mentioned in .gitignore) under version control.
Then you used git checkout -b source, which branched off of your last commit, which included all files. That's the reason why you got the same files in master and source.
To remove the folders only from the version control, but not on your disk, you can run the following commands.
git rm -r --cached _deploy
git commit -m "Remove deploy"
A little side-note: Changing the directory in your terminal (cd _deploy) will not affect your git branch / git status.
Hi! Yes, it is visible from both branches now. However, I wanted to see only the content inside _deploy in my master branch, like I have done here: https://github.com/talkstwor/talkstwor.github.io . You can check the build.sh which I used to commit after changes. However, I don't know how to set it up initially so that my main folder pushes only to the source branch and _deploy pushes to the master branch.
Within your shell script, you enter the folder (cd _deploy) and run git add . assuming that it will only add files within this folder. This is not the case: git add . adds all files within the main folder. What you would like to do is git add ./_deploy instead.
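One way to set this up from scratch is with git subtree, which can publish a subfolder's contents as the root of another branch. This is a sketch with placeholder file contents; git subtree ships with git's contrib tools and is included in most installs:

```shell
set -e

# Demo repo: the root holds the site sources, _deploy holds the built site.
git init -q -b source site && cd site
git config user.email "you@example.com"
git config user.name  "you"
mkdir _deploy
echo '<h1>built</h1>' > _deploy/index.html
echo 'source files'   > main.txt
git add . && git commit -qm 'site sources'

# Create a master branch whose root is the *contents* of _deploy.
git subtree split --prefix=_deploy -b master

git ls-tree --name-only master   # expected: index.html (no main.txt, no _deploy/)
```

After adding a remote, `git push -u origin source` and `git push origin master` publish the two branches. To refresh master after each build, re-split and force the branch, e.g. `git branch -f master "$(git subtree split --prefix=_deploy)"`, then force-push.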
Firestore security rule doesn't detect phone number from token when on real device
I have an issue where the Firestore security rules are unable to filter out users based on their phone number. The rule is set up correctly and works well in the simulator, but not when I test it on a real device.
I have a collection of admins where the document IDs are the phone numbers. My security rule checks whether the phone number of the user is present in the collection and, if it is, returns true. I have set it up according to the answer here as follows:
rules_version = '2';
service cloud.firestore {
match /databases/{database}/documents {
function isAdmin() {
return exists(path("/databases/" + database + "/documents/admins/" + request.auth.token.phone_number));
}
match /tasks/{task=**} {
allow read: if request.auth.uid != null;
allow create: if isAdmin();
allow delete: if isAdmin();
allow update: if request.auth.uid != null;
}
}
}
The create and delete rules in the above snippet are getting approved when I try on a simulator but fails when I test it out on a real device. I have to also mention that if I change the filter from phone number to uid and change the document ids in the collection to the uids, it works out fine with the real device in this manner-
return exists(path("/databases/" + database + "/documents/admins/" + request.auth.uid));
It feels like it is unable to read the token value from auth. I am using the flutter package for firebase auth and firestore to send requests
were you able to resolve this?
Hey, I have posted the answer to my question. Let me know if this helps you as well.
Solved:
The problem is that Firestore rules are unable to resolve the "+" symbol from the country code, so they cannot verify that a document with an ID of "+919876543210" exists, for example (for the Indian country code +91). All I had to do was get rid of the "+" symbol when creating the documents in the admin collection, like so:
and also manually replace the symbol while checking it in the rules like this:
function isAdmin() {
return exists(path("/databases/" + database + "/documents/admins/" + request.auth.token.phone_number.replace('\\+', '')));
}
Make random curvy lines or segments for paths on Canvas
Here is the code that I tried:
var myPath = new Path();
myPath.strokeColor = 'red';
myPath.strokeWidth = 5;
myPath.dashArray = [20, 5];
myPath.add(new Point(1800, 120));
function onFrame(event) {
if(event.count%10 == 0) {
var angle = Math.random(0, 1)*Math.PI;
var all_points = myPath.segments.length;
var last_point = myPath.segments[all_points - 1];
myPath.add(new Point(last_point.point.x + 40*Math.cos(angle*(180 / Math.PI)), last_point.point.y + 40*Math.sin(angle*(180 / Math.PI))));
myPath.smooth({type: 'geometric', factor: 2});
}
}
I was thinking this would choose a random angle between 0 and Math.PI and then plot the points from the last point. However, it just draws a line at 45 degrees going back and forth. I did a console.log() for the angle and it is always something random between 0 and Math.PI.
I want to achieve something similar to this: https://codepen.io/speaud/pen/ExxzrL
Here is what I have so far. My idea was to get a random angle and then gets sin and cosine values for that angle and add them to the last point in the path. This would move the path randomly in a certain direction.
I was hoping that it would create nice curvy non-intersecting paths, but the path goes over the same place multiple times. Also, making it smooth with the smooth() function changes its previous course a little.
Could you try to describe the algorithm that you have in mind and are trying to achieve, in simple words, maybe even illustrating it?
Hi @sasensi I have created a sketch and posted link to it in the question. :)
Ok, now I think that I better understand what you're trying to achieve. If you really want to have a smooth output, I think that Paper.js is maybe not the way to go as it deals with vector graphics and you would have to create a path with a lot of point to achieve your desired result (which would be memory consuming). I think that using the Canvas API directly and switch from raw random number to something like Perlin noise can greatly help you: https://www.youtube.com/watch?v=ikwNrFvnL3g
Math.cos and Math.sin take their angle in radians, but in your code you convert the angle to degrees before passing it in.
Thanks but I don't understand how that would affect anything. :)
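To illustrate that last comment in plain JavaScript (in the Paper.js sketch, dx and dy would feed the new Point(...) that extends the path):

```javascript
// Math.cos and Math.sin already expect radians, so a random angle in
// [0, PI) can be used directly; multiplying by 180/Math.PI feeds them a
// degree value and distorts the direction of every step.
const angle = Math.random() * Math.PI;  // note: Math.random() takes no arguments
const step = 40;

const dx = step * Math.cos(angle);      // no (180 / Math.PI) conversion
const dy = step * Math.sin(angle);      // dy >= 0 here, so the path drifts one way

console.log(dx.toFixed(2), dy.toFixed(2));
```

With the conversion removed, each new segment heads in a genuinely random direction of length 40, instead of collapsing onto a repeating diagonal.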
Trouble with Jasper and getCallerClassLoader()
I'm trying to get a runtime reference to a compiled subreport that lies in the same directory as my main report. After hours of googling, I've tried to get the file reference as a URL using the following:
new String(
ClassLoader.getCallerClassLoader().toString().substring(
ClassLoader.getCallerClassLoader().toString().indexOf("=") + 1,
ClassLoader.getCallerClassLoader().toString().lastIndexOf("/") ).toString() ) +
"/some.jar/com/foo/reports/ThatDamnedReport/ThatDamnedReport_subreport1.jasper"
When I debug, I can change the value of my string with the above statement and it works! Yay!
Problem 1
We're using pre-compiled jasper files and I can only compile up to version 3.1.4 (otherwise the rest of the ancient code breaks). The "standard" way of accessing my subreport doesn't work because I can't find the relative directory to my subreport. We're not using JasperServer.
Problem 2
When I compile via iReport, I get the error "the method getCallerClassLoader from the type ClassLoader is not visible".
Since I'm trying to compile from a JRXML file, subclassing is not an option here.
Question
How can I get my file to compile or how do I find the relative path to ThatDamnedReport_subreport1.jasper?
Provide some more details about what you are trying to do, and the environmental constraints. Listing the environments that JasperReports is running would be helpful. (For example, IIS, Servlet, etc.)
JasperReports uses an absolute path to find files. To work around this issue do the following:
Create a DIR_ROOT parameter.
Assign a default value for DIR_ROOT of /path/to/com/foo/reports/ (not inside a JAR file; the trailing slash is important).
Create a DIR_SUBREPORT parameter.
Define DIR_SUBREPORT relative to $P{DIR_ROOT}. For example: $P{DIR_ROOT} + "ThatDamnedReport/" (keep the trailing slash).
Reference subreports: $P{DIR_SUBREPORT} + "ThatDamnedReport_subreport1.jasper".
When you run the report, if the root directory is different, pass the DIR_ROOT parameter to the report. This allows you to keep the subreports relative to the root directory and have the root directory vary between environments.
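A sketch of how those parameters might look in the JRXML (the element structure follows JasperReports' schema, but the default values and report names here are placeholders from the steps above):

```xml
<parameter name="DIR_ROOT" class="java.lang.String">
    <defaultValueExpression><![CDATA["/path/to/com/foo/reports/"]]></defaultValueExpression>
</parameter>
<parameter name="DIR_SUBREPORT" class="java.lang.String">
    <defaultValueExpression><![CDATA[$P{DIR_ROOT} + "ThatDamnedReport/"]]></defaultValueExpression>
</parameter>

<!-- ... later, inside the band that embeds the subreport: -->
<subreport>
    <subreportExpression class="java.lang.String">
        <![CDATA[$P{DIR_SUBREPORT} + "ThatDamnedReport_subreport1.jasper"]]>
    </subreportExpression>
</subreport>
```

At runtime, overriding only DIR_ROOT repoints every subreport reference at once.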
Thanks for the reply, Dave. The trouble is that the code must be portable across platforms too. I can't assign a default dir (at least, I can't see how that will work). I've also tried using the WEB-INF/classes dir. It found the subreport then, but the versions "conflicted". I don't have the message with me now, but it was apparently a problem in how the jasper binaries were created. I even recompiled both main and subreport with the same compiler, but still that error persisted. I could call it as a File and not a URL from WEB-INF.
@Rick: I've used this technique on Windows and Unix platforms. For Servlet-based applications you can use the Servlet API to acquire the true path to the report file. Another mechanism would be to create a symbolic link. (Symbolic links are possible on some Windows platforms; you need a special tool.)
Hi Dave, I'm sorry that I missed your message. The thing is, it found the file with no problem. Jasper then threw an error saying that the binary being called was created with a different version of iReport (although I fail to see how: I recompiled with iReport 3.1.4 and regenerated the jasper file, and it still threw the error).
@Rick: Post the full stack trace of the errors in your question. Also post a minimal JRXML file that shows how to recreate the error. Also, since "Jasper" could mean "JasperReports Server" or "JasperReports", please be explicit. I'd also like to see your CLASSPATH settings and directory list of all the JAR files you are using. My guess is that you think one version of JasperReports is being used but the Java Virtual Machine is picking up a different version.
Chromedriver TypeError: 'NoneType' object is not subscriptable
I am using Chrome 76.0.3809.132 and chromedriver 76.0.1.
I am getting an error while converting from HTML to PDF.
Chromedriver TypeError: 'NoneType' object is not subscriptable
Sometimes it works and sometimes it doesn't work.
Is there any solution for this?
I once had this issue and what I referred to was this.
I looked into that, but I am using the correct versions; shouldn't it work with this setup?
MySQL to PostgreSQL and Named Scope
I've got a named scope for one of my models that works fine. The code is:
named_scope :inbox_threads, lambda { |user|
{
:include => [:deletion_flags, :recipiences],
:conditions => ["recipiences.user_id = ? AND deletion_flags.user_id IS NULL", user.id],
:group => "msg_threads.id"
}}
This works fine on my local copy of the app with a MySQL database, but when I push my app to Heroku (which only uses PostgreSQL), I get the following error:
ActiveRecord::StatementInvalid (PGError: ERROR: column "msg_threads.subject" must appear in the GROUP BY clause or be used in an aggregate function:
SELECT "msg_threads"."id" AS t0_r0, "msg_threads"."subject" AS t0_r1, "msg_threads"."originator_id" AS t0_r2, "msg_thr
eads"."created_at" AS t0_r3, "msg_threads"."updated_at" AS t0_r4, "msg_threads"."url_key" AS t0_r5, "deletion_flags"."id" AS t1_r0, "deletion_flags"."user_id" AS t1_r1, "deletion_flags"."msg_thread_id" AS t1_r2, "deletion_flags"."confirmed"
AS t1_r3, "deletion_flags"."created_at" AS t1_r4, "deletion_flags"."updated_at" AS t1_r5, "recipiences"."id" AS t2_r0, "recipiences"."user_id" AS t2_r1, "recipiences"."msg_thread_id" AS t2_r2, "recipiences"."created_at" AS t2_r3, "recipien
ces"."updated_at" AS t2_r4 FROM "msg_threads" LEFT OUTER JOIN "deletion_flags" ON deletion_flags.msg_thread_id = msg_threads.id LEFT OUTER JOIN "recipiences" ON recipiences.msg_thread_id = msg_threads.id WHERE (recipiences.user_id = 1 AND
deletion_flags.user_id IS NULL) GROUP BY msg_threads.id)
I'm not as familiar with the working of Postgres, so what would I need to add here to get this working? Thanks!
... must appear in the GROUP BY clause ...
MySQL flagrantly violates the standard in favor of simplicity and allows you to not specify every single column you're looking for in GROUP BY, instead returning an arbitrary value from each group for the unlisted columns.
Postgres (and most other RDBMSes), on the other hand, requires that you name every single column that you're selecting in the GROUP BY clause. If you've manually assembled that query, you're going to need to, one by one, add each column into GROUP BY. If an ORM generated that query (which is what it looks like based on all the double quotes), you're going to need to check up on their Postgres support and/or file a bug with them.
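One hedged way to adapt the scope itself, assuming the GROUP BY existed only to de-duplicate msg_threads rows produced by the joins: swap eager loading (:include) for plain SQL joins so that only msg_threads columns are selected, then DISTINCT replaces GROUP BY entirely. This sketch needs the app's Rails 2.x environment to run and mirrors the join shape from the error output:

```ruby
named_scope :inbox_threads, lambda { |user|
  {
    # Select only msg_threads columns, de-duplicated, so no GROUP BY
    # column list is needed at all under PostgreSQL's stricter rules.
    :select     => "DISTINCT msg_threads.*",
    :joins      => "LEFT OUTER JOIN deletion_flags " +
                   "ON deletion_flags.msg_thread_id = msg_threads.id " +
                   "LEFT OUTER JOIN recipiences " +
                   "ON recipiences.msg_thread_id = msg_threads.id",
    :conditions => ["recipiences.user_id = ? AND deletion_flags.user_id IS NULL",
                    user.id]
  }
}
```

If the associations still need to be eager-loaded, the strictly standard alternative is to keep :include and list every selected column of every joined table in :group, which is exactly the tedium the error message is demanding.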
Audit Trail Design using Table Storage
I'm considering implementing an audit trail for my application using Table Storage.
I need to be able to log all actions for a specific customer and all actions for entities from that customer.
My first guess was to create a table for each customer (Audits_CustomerXXX), using the entity ID as the partition key and the (DateTime.Max.Ticks - DateTime.Now.Ticks).ToString("D19") value as the row key. This works great when my question is "what happened to a certain entity?" For instance, the audit of a purchase would have PartitionKey = "Purchases/12345" and the timestamp as the RowKey.
But when I want a bird's-eye view of the entire customer, can I just query the table sorting by row key across partitions? Or is it better to create a secondary table to hold the data with different partition keys? Also, when using (DateTime.Max.Ticks - DateTime.Now.Ticks).ToString("D19"), is there a way to prevent errors when two actions in the same partition happen in the same tick (unlikely, but who knows...)?
Thanks
Can you describe what you mean by entity id?
Yeah, it's a string id like "pictures/1234" or "purchase/5221"
One more question: Regarding the birds eye view, you would want to see all the actions between certain date/time ranges. Is that correct?
Exactly. Regardless of what entity
You could certainly create a separate table for the birds eye view but you really don't have to. Considering Azure Tables are schema-less, you can keep this data in the same table as well. You would keep the PartitionKey as reverse ticks and RowKey as entity id. Because you would be querying only on PartitionKey, you can also keep RowKey as GUID as well. This will ensure that all entities are unique. Or you could append a GUID to your entity id and use that as RowKey.
However, do keep in mind that because you're inserting two entities with different PartitionKey values, you will have to safeguard your code against possible network failures, as each entry will be a separate request to the Table service. The way we're handling this in our application is to write this payload to a queue message and then process that message through a background process.
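The question's app is .NET, but the reverse-tick scheme plus a uniqueness suffix for same-tick collisions can be sketched in Python; the constant below is .NET's DateTime.MaxValue.Ticks, and the function names are illustrative:

```python
import uuid
from datetime import datetime, timezone

# .NET ticks are 100 ns units counted from 0001-01-01.
MAX_TICKS = 3_155_378_975_999_999_999   # DateTime.MaxValue.Ticks
EPOCH = datetime(1, 1, 1, tzinfo=timezone.utc)

def reverse_tick_key(dt):
    """19-digit zero-padded reverse tick: later events sort first."""
    delta = dt - EPOCH
    # Integer arithmetic on the timedelta parts avoids float precision loss.
    ticks = (delta.days * 86_400 + delta.seconds) * 10_000_000 \
            + delta.microseconds * 10
    return "%019d" % (MAX_TICKS - ticks)

def row_key(dt):
    """Append a GUID so two actions in the same tick never collide."""
    return "%s_%s" % (reverse_tick_key(dt), uuid.uuid4())
```

The padded width keeps the lexicographic order of RowKeys equal to reverse chronological order, and the GUID suffix answers the same-tick concern at the cost of slightly longer keys.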
Rails 4: unable to view the show page in heroku
My restaurant review app works on localhost, but when I try it on Heroku and click the link to show the reviews, I get "We're sorry, but something went wrong." heroku logs shows the following information, with the error ActionView::Template::Error (undefined method `capitalize' for nil:NilClass):
2014-08-20T09:32:49.175203+00:00 app[web.1]: 46: <h4>
2014-08-20T09:32:49.175221+00:00 app[web.1]: 49: <p><%= review.created_at.strftime("%-m/%-d/%y") %></p>
2014-08-20T09:32:49.175223+00:00 app[web.1]: 50: </td>
2014-08-20T09:32:49.175225+00:00 app[web.1]: app/views/restaurants/show.html.erb:47:in `block in _app_views_restaurants_show_html_erb__3540437068917616550_70094864076860'
2014-08-20T09:32:49.175226+00:00 app[web.1]: app/views/restaurants/show.html.erb:43:in `_app_views_restaurants_show_html_erb__3540437068917616550_70094864076860'
2014-08-20T09:32:49.175303+00:00 app[web.1]: 48: </h4>
2014-08-20T09:32:49.175229+00:00 app[web.1]:
2014-08-20T09:32:49.175284+00:00 app[web.1]: ActionView::Template::Error (undefined method `capitalize' for nil:NilClass):
2014-08-20T09:32:49.175307+00:00 app[web.1]: app/views/restaurants/show.html.erb:47:in `block in _app_views_restaurants_show_html_erb__3540437068917616550_70094864076860'
2014-08-20T09:32:49.175298+00:00 app[web.1]: 45: <td>
2014-08-20T09:32:49.175199+00:00 app[web.1]: ActionView::Template::Error (undefined method `capitalize' for nil:NilClass):
2014-08-20T09:32:49.175282+00:00 app[web.1]:
2014-08-20T09:32:49.175304+00:00 app[web.1]: 49: <p><%= review.created_at.strftime("%-m/%-d/%y") %></p>
2014-08-20T09:32:49.175309+00:00 app[web.1]: app/views/restaurants/show.html.erb:43:in `_app_views_restaurants_show_html_erb__3540437068917616550_70094864076860'
2014-08-20T09:32:49.175300+00:00 app[web.1]: 46: <h4>
2014-08-20T09:32:49.175218+00:00 app[web.1]: 47: <%= "#{review.user.first_name.capitalize} #{review.user.last_name.capitalize[0]}." %>
2014-08-20T09:32:49.175301+00:00 app[web.1]: 47: <%= "#{review.user.first_name.capitalize} #{review.user.last_name.capitalize[0]}." %>
2014-08-20T09:32:49.175310+00:00 app[web.1]:
2014-08-20T09:32:49.175305+00:00 app[web.1]: 50: </td>
2014-08-20T09:32:49.175312+00:00 app[web.1]:
2014-08-20T09:32:49.626763+00:00 heroku[router]: at=info method=GET path="/restaurants/5" host=yelpdemo2014.herokuapp.com request_id=76387447-eb34-431b-9fcf-785a901e95ee fwd="<IP_ADDRESS>" dyno=web.1 connect=1ms service=58ms status=500 bytes=1030
2014-08-20T09:32:49.580600+00:00 app[web.1]: Parameters: {"id"=>"5"}
2014-08-20T09:32:49.580595+00:00 app[web.1]: Parameters: {"id"=>"5"}
2014-08-20T09:32:49.617957+00:00 app[web.1]: Rendered restaurants/show.html.erb within layouts/application (20.7ms)
2014-08-20T09:32:49.617970+00:00 app[web.1]: Rendered restaurants/show.html.erb within layouts/application (20.7ms)
2014-08-20T09:32:49.620207+00:00 app[web.1]: Completed 500 Internal Server Error in 37ms
2014-08-20T09:32:49.622252+00:00 app[web.1]: ActionView::Template::Error (undefined method `capitalize' for nil:NilClass):
2014-08-20T09:32:49.580536+00:00 app[web.1]: Processing by RestaurantsController#show as HTML
2014-08-20T09:32:49.620214+00:00 app[web.1]: Completed 500 Internal Server Error in 37ms
2014-0
show.html.erb
<div class="row">
<div class="col-md-3">
<%= image_tag @restaurant.image_url %>
<h2>
<%= @restaurant.name %>
</h2>
<div class="star-rating" data-score= <%= @avg_rating %> ></div>
<p><%=<EMAIL_ADDRESS>reviews" %></p>
<p>
<strong>Address:</strong>
<%= @restaurant.address %>
</p>
<p>
<strong>Phone:</strong>
<%= @restaurant.phone %>
</p>
<p>
<strong>Website:</strong>
<%= link_to @restaurant.website, @restaurant.website %>
</p>
<%= link_to 'Write a review', new_restaurant_review_path(@restaurant), class: "btn btn-primary" %>
</div>
<div class="col-md-9">
<% if @reviews.blank? %>
<h3>No reviews yet, be the first to write one!</h3>
<% else %>
<table class="table">
<thead>
<tr>
<th class="col-md-3"></th>
<th class="col-md-9"></th>
</tr>
</thead>
<tbody>
<% @reviews.each do |review| %>
<tr>
<td>
<h4>
<%= "#{review.user.first_name.capitalize} #{review.user.last_name.capitalize[0]}." %>
</h4>
<p><%= review.created_at.strftime("%-m/%-d/%y") %></p>
</td>
<td>
<div class="star-rating" data-score= <%= review.rating %> ></div>
<p><%= h(review.comment).gsub(/\n/, '<br/>').html_safe %></p>
</td>
</tr>
<% end %>
</tbody>
</table>
<% end %>
</div>
</div>
<%= link_to 'Edit', edit_restaurant_path(@restaurant), class: "btn btn-link" %> |
<%= link_to 'Back', restaurants_path, class: "btn btn-link" %>
<script>
$('.star-rating').raty({
path: 'https://s3-eu-west-1.amazonaws.com/yelpdemoneil/stars',
readOnly: true,
score: function() {
return $(this).attr('data-score');
}
});
</script>
restaurant controller
class RestaurantsController < ApplicationController
before_action :set_restaurant, only: [:show, :edit, :update, :destroy]
# GET /restaurants
# GET /restaurants.json
def index
@restaurants = Restaurant.all
end
# GET /restaurants/1
# GET /restaurants/1.json
def show
@reviews = Review.where(restaurant_id: @restaurant.id).order("created_at DESC")
if @reviews.blank?
@avg_rating = 0
else
@avg_rating = @reviews.average(:rating).round(2)
end
end
# GET /restaurants/new
def new
@restaurant = Restaurant.new
end
# GET /restaurants/1/edit
def edit
end
# POST /restaurants
# POST /restaurants.json
def create
@restaurant = Restaurant.new(restaurant_params)
respond_to do |format|
if @restaurant.save
format.html { redirect_to @restaurant, notice: 'Restaurant was successfully created.' }
format.json { render action: 'show', status: :created, location: @restaurant }
else
format.html { render action: 'new' }
format.json { render json: @restaurant.errors, status: :unprocessable_entity }
end
end
end
# PATCH/PUT /restaurants/1
# PATCH/PUT /restaurants/1.json
def update
respond_to do |format|
if @restaurant.update(restaurant_params)
format.html { redirect_to @restaurant, notice: 'Restaurant was successfully updated.' }
format.json { head :no_content }
else
format.html { render action: 'edit' }
format.json { render json: @restaurant.errors, status: :unprocessable_entity }
end
end
end
# DELETE /restaurants/1
# DELETE /restaurants/1.json
def destroy
@restaurant.destroy
respond_to do |format|
format.html { redirect_to restaurants_url }
format.json { head :no_content }
end
end
private
# Use callbacks to share common setup or constraints between actions.
def set_restaurant
@restaurant = Restaurant.find(params[:id])
end
# Never trust parameters from the scary internet, only allow the white list through.
def restaurant_params
params.require(:restaurant).permit(:name, :address, :phone, :website, :image)
end
end
Nil
The error is here:
ActionView::Template::Error (undefined method `capitalize' for nil:NilClass)
The problem is that you're calling a method on a variable / object which isn't set. As you mention it's only showing on Heroku, I'd say that it's caused by you not having the required records in your database
The fix for this is rather simple:
<% if @reviews.any? %>
<% @reviews.each do |review| %>
...
<% end %>
<% end %>
--
Data
The bottom line is that you need to have data in your database in order to populate the various variables / objects you wish to display in your application
The problem for a lot of people is that Heroku doesn't maintain the same database as your local system, hence if you expect data to be there, and there isn't, your system will raise the exception you're seeing
The bottom line is that you'll either need some validation (to determine if the data you require is present), or you'll need to seed your production database with the required records for your application
thanks for taking the time to explain your solution in detail, cheers Neil
As a bit of an aside, when iterating (or doing anything with) arrays that you suspect might have nils, empty strings, etc, in them, you can ignore the blank values with my_array.reject(&:blank?). In this case this wouldn't try to iterate on the values of @reviews that were equal to nil, and if there were no non-blank values in @reviews then it wouldn't iterate at all.
Also, my_array.reject!(&:blank?) (note the !) will permanently remove the blank values from the array.
I agree with Rich and here is an alternative workaround:
I use try for optional attributes like "first name" because I don't want to force users through validations to submit them:
<%= "#{review.user.first_name.try(:capitalize)} #{review.user.last_name.try(:capitalize)}." %>
This way it shouldn't throw exceptions whether the attribute is there or not
Delphi XE2: firemonkey playing video with libvlc?
I am still trying to play video in FireMonkey using the DirectX API with libvlc! I have already played video in different ways, but I want to play video on a DirectX surface.
Please look at this link first: http://forum.videolan.org/viewtopic.php?f=32&t=82618
Someone there wrote Delphi code showing how to use libvlc with DirectX, and that code works well, but it is pure DirectX code. I want to integrate this code into FireMonkey! How can I do that with the FireMonkey library? (I know that on Windows FireMonkey uses the DirectX library too, but the naming and usage of the FireMonkey libraries (interfaces, classes, objects) are very different from DirectX.)
Where I am now:
I have almost integrated this pure DirectX code into FireMonkey using the "Winapi.Direct3D9, Winapi.D3DX9, FMX.Context.DX9" units.
I can access the IDirect3DDevice9 object in the FireMonkey Context!
var
Device: IDirect3DDevice9;
begin
Device := TCustomDirectXContext(TCustomForm3D(ParentForm).Context).Device;
Device.CreateTexture(video_width, video_height, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, vlcVideoTexture, nil);
Device.CreateTexture(video_width, video_height, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, vlcMemoryTexture, nil);
end;
This code works completely in FireMonkey, but as you can see the result of this code is an object of type IDirect3DTexture9. This object contains the video's frame buffer; I just need to render it onto some control on a FireMonkey form.
How can I draw this buffer onto a FireMonkey canvas?
I am waiting for good Delphi developers' solutions.
Thanks
I do not think using Direct3D with libvlc this way is possible, since libvlc requests a handle to a window to play video; playing on any form is OK, since forms have valid handles.
I succeeded in rendering a video stream with Direct3D a long time ago.
Entity Framework and string as NCLOB on oracle Db
I have this model:
public class Teacher
{
public int TeacherID { get; set; }
public string Name { get; set; }
public string Surname{ get; set; }
}
and when Model First runs, it creates my Teachers table and DbSet, but for Name and Surname (which are strings) it assigns the NCLOB type to the columns.
Now, with the NCLOB type I cannot do some operations like equals or distinct on my table...
How can I force MF to set the column type to varchar?
I've managed to solve the issue by setting the maximum string length in the model:
public class Teacher
{
public int TeacherID { get; set; }
[StringLength(255, MinimumLength = 3, ErrorMessage = "My Error Message")]
public string Name { get; set; }
[StringLength(255, MinimumLength = 3, ErrorMessage = "My Error Message")]
public string Surname{ get; set; }
}
Without StringLength, Oracle creates an NCLOB field that can contain up to 4 GB of data.
Note: the maximum length for varchar is 4000 bytes, so we cannot set more than 2000 as the maximum length (2 bytes per character with Unicode)
I have an int with a .ToString() in a LINQ query and it is using TO_NCLOB in the query that is created. Is there any way of preventing that?
@JonathanPeel If the int is available outside of the LINQ query, you could convert the int to a local string variable outside of LINQ, and use the string in the LINQ query.
I found a way to map extentions methods to Oracle functions, using [Function(FunctionType.BuiltInFunction, "TO_CHAR")]. I then created an int.ToChar(), and it works wonderfully.
@JonathanPeel How did you get this work? I can't seem to. I am using database first and have a generated edmx file. This is the error: "The specified method 'System.String ToChar(Int64)' on the type 'Acorn.Data.OracleFunctions' cannot be translated into a LINQ to Entities store expression.""
I am using code first, on an existing DB. OracleFunctions is a static class, with members like [Function(FunctionType.BuiltInFunction, "TO_CHAR")] public static string ToChar(this int value) => Function.CallNotSupported<string>(); In the DbContext class There is a method called OnModelCreating. There is a line of code in there: modelBuilder.Conventions.Add(new FunctionConvention(typeof(OracleFunctions)));
With DB first, the code files are generated by a tool, so it is not a good idea to change them. The DbContext class might be partial. If it is, you can create another partial class with the same name in a different file. This then won't be overridden if you update the edmx.
If you don't come right, let me know, and I will send you a sample. It is difficult explaining in comments.
@JonathanPeel I didn't realize that library only worked with Code First. Converted to Code First and added the FunctionConvention in OnModelCreating and it now works. Thanks.
Try to configure it explicitly:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.Entity<Teacher>().Property(x => x.Name).HasColumnType("varchar");
modelBuilder.Entity<Teacher>().Property(x => x.Surname).HasColumnType("varchar");
}
See Documentation
I've managed to solve it the 'right way'. I'll post the answer, but thanks anyway for the effort!
Shouldn't that be varchar2?
How to republish SharePoint 2013 workflows via Powershell farm-wide
I have a SharePoint farm with a dedicated WFM farm which is working fine. Now I am splitting the SharePoint farm into two farms (Farm A and Farm B), but the Workflow Manager farm will remain the same. Everything is working fine after the split except SharePoint 2013 workflows. I am getting the "scope not found" error when I try to run the workflows in Farm B (it is expected that the scope of the WFM changed in Farm B).
getInstances threw exception:
Microsoft.Workflow.Client.ScopeNotFoundException: Scope
'/SharePoint/default/ad2f730d-7d0c-417d-972f-df9a2d992506/c352cde2-28b2-4b54-a7ee-3ad215a6b726'
was not found. HTTP headers received from the server - ActivityId:
dsdd9142-62af-4df0-ad2b-297f6713e70. NodeId: KFWFM. Scope:
/SharePoint/default/0d-7d0c-417d-972f-df9a2d992506/cde2-28b2-4b54-a7ee-3ad215a6b726.
Client ActivityId : aebf9e-b173-a0e9-9225-ccda760a9ba5. --->
System.Net.WebException: The remote server returned an error: (404)
Not Found. at
Microsoft.Workflow.Common.AsyncResult.End[TAsyncResult](IAsyncResult
result) at
Microsoft.Workflow.Client.HttpGetResponseAsyncResult`1.End(IAsyncResult
result) at
Microsoft.Workflow.Client.ClientHelpers.SendRequest[T](HttpWebRequest
r... 64aebf9e-b173-a0e9-9225-ccda760a9ba5
Now the question: is there a way to identify and republish the SharePoint 2013-only workflows in Farm B using PowerShell?
Or, what is the way to migrate the SharePoint 2013 workflows from one scope to another?
how to set nutch to extract content of only urls present on seed file
I am using Nutch 2.3 and I am trying to get the HTML content of some URLs present in the seed.txt file, which I pass to Nutch, into HBase.
So the problem is as below---
First crawl:
Everything runs fine and I get the data into HBase with url as the row key.
Second Run:
When I run the crawl a second time with different URLs, I see that there are many URLs in the fetching job, while I have only one URL in my seed file.
So my question is: how can I make sure that Nutch only crawls and gets the HTML content of the URLs present in seed.txt, and not the outlinks present in the HTML content of the seed.txt URLs?
I think you want to fetch only the domains that are given in the seed file. For that, update nutch-site.xml as follows:
<property>
<name>db.ignore.external.links</name>
<value>true</value>
</property>
This only ignores links to external hosts. It does not prevent fetching of pages that are within the same domain.
Yes, you are right. I will have to customize Nutch to implement the above functionality.
You might keep the iteration count of the crawl command at "1"; then Nutch will crawl only the URLs present in the seed.txt file.
e.g.
bin/crawl -i -D solr.server.url=<solrUrl> <seed-dir> <crawl-dir> 1
Also, you can restrict the outer links by configuring the regex-urlfilter.txt present in the conf directory.
#accept anything else
+http://domain.com
Thanks for replying, rocksta. First I would like to ask: can you please tell me the meaning of the text you want me to insert in regex-urlfilter.txt? And can you please elaborate on the meaning of the crawl command? I don't think my crawl command handles any -i switches.
Continuous Function and use of IVP
Let $\;f:[0,1]\to[0,1]\;$ be continuous function such that $f(0)=f(1)$ and
let $A =\{(t,s) \in [0,1]\times[0,1]\; |\; t \neq s ,\;f(t) =f(s) \}$,
then prove that the number of elements in A is infinite.
I feel I have to prove it is a constant function using the intermediate value property, but I am not finding an articulate way of writing the proof.
There is no question here...Write clearer: what is the given information, what is the question.
@DonAntonio I corrected the question. I have to find the number of elements in A.
Can you use Weierstrass Theorem: "a continuous function on a closed bounded subset of $\;\Bbb R\;$ is bounded there and attains its maximum/minimum there" ?
Hint : The argument goes as follows.
Let
$$
M=\sup_{t\in [0,1]}f(t)\ ;\ m=\inf_{t\in [0,1]}f(t)
$$
then $m\leq f(0)\leq M$ and discuss the cases
$m=f(0)=f(1)=M$
$m<f(0)=f(1)$
$f(0)=f(1)<M$
Further : In case (1), $A=([0,1]\times [0,1])\setminus \Delta$ ($\Delta$ is the diagonal i.e. $\Delta=\{(x,x)\}_{x\in [0,1]}$) so it is infinite. In case (2), note $x_m$ a point such that $f(x_m)=m$, now, by IVP for each $m<c<f(0)$, the equation $f(x)=c$ has at least one solution in $]0,x_m[$ (say $a_c$) and one in $]x_m,1[$ (say $b_c$). One has $\{(a_c,b_c)\}_{m<c<f(0)}\subset A$ which is, therefore, infinite. Try to write case (3).
Hope it helps. Do not hesitate.
PS : I do not agree with the downvote(s) as this question, even trivial to visualize, needs some care to write down.
Sir, I will try. But can you elaborate a bit more? It will be very helpful.
I don't agree with the downvote either so I gave the Q an upvote.
Ionic to signed apk with neo4j driver
I'm currently working on an Ionic project, and I'm using the neo4j driver to use my neo4j database.
I wanted to know whether the connection with my database will keep working after I create my APK.
Thanks :)
Welcome! Can you add more details about what you're trying to do? What's the relation between creating an APK and the database connectivity?
I'll try to add more detail:
I have to create an app with Ionic through Capacitor, and I'm using the neo4j driver to connect to my database. I wanted to know whether the database driver will "transfer" into the native code created with Capacitor or not.
How to add legend in WFS OpenLayers
How can I publish a WFS layer in OpenLayers with a legend? I have tried different examples such as http://api.geoext.org/1.0/examples/vector-legend.js, but when I tried this, Firebug shows an error => "GeoExt.MapPanel is not a constructor", and the polygons for which the legend is defined in the rules (rule_highlight1 and rule_highlight2) have disappeared from the WFS layers.
var india = new OpenLayers.Layer.WMS(
"cite:india_state - Tiled", "http://localhost:8080/geoserver/cite/wms",
{
LAYERS: 'cite:india_state',
EPSG:4326,
tiled: true,
tilesOrigin : map.maxExtent.left + ',' + map.maxExtent.bottom
},
{
buffer: 0,
displayOutsideMaxExtent: true,
isBaseLayer: true,
opacity: 0.8,
}
);
var style = new OpenLayers.Style();
var rule_highlight1 = new OpenLayers.Rule({
title: "> 2000m",
maxScaleDenominator: 3000000,
filter: new OpenLayers.Filter.Comparison({
type: OpenLayers.Filter.Comparison.EQUAL_TO,
property: "classification",
value: "1",
}),
symbolizer: {
graphicName: "star",
pointRadius: 8,
fillColor: "#FE2EF7",
fillOpacity: 0.6,
strokeColor: "#FF0000",
strokeWidth: 2,
strokeDashstyle: "solid",
label: " ${classification}",
labelAlign: "cc",
fontColor: "#000000",
fontOpacity: 1,
fontFamily: "Arial",
fontSize: 16,
fontWeight: "600"}
});
var rule_highlight2 = new OpenLayers.Rule({
title: "< 2000m",
maxScaleDenominator: 3000000,
filter: new OpenLayers.Filter.Comparison({
type: OpenLayers.Filter.Comparison.EQUAL_TO,
property: "classification",
value: "2",
}),
symbolizer: {
fillColor: "#0040FF",
graphicName: "star",
pointRadius: 5,
fillOpacity: 0.6,
strokeColor: "#FF0000",
strokeWidth: 2,
strokeDashstyle: "solid",
label: " ${classification}",
labelAlign: "cc",
fontColor: "#000000",
fontOpacity: 1,
fontFamily: "Arial",
fontSize: 16,
fontWeight: "600"}
});
var rule_highlight3 = new OpenLayers.Rule({
filter: new OpenLayers.Filter.Comparison({
type: OpenLayers.Filter.Comparison.EQUAL_TO,
property: "classification",
value: "2",
}),
symbolizer: {
fillColor: "#0040FF",
fillOpacity: 0.6,
strokeColor: "#FF0000",
strokeWidth: 2,
strokeDashstyle: "solid",
label: " ${classification}",
labelAlign: "cc",
fontColor: "#000000",
fontOpacity: 1,
fontFamily: "Arial",
fontSize: 16,
fontWeight: "600"}
});
style.addRules([rule_highlight1,rule_highlight2,rule_highlight3]);
var wfs_layer = new OpenLayers.Layer.Vector("fsa",{
strategies: [new OpenLayers.Strategy.BBOX()],
styleMap: style,
visibility: true,
protocol: new OpenLayers.Protocol.WFS({
version: "1.1.0",
srsName: "EPSG:4326",
url: "http://localhost:8080/geoserver/cite/wms",
featureNS : "http://www.opengeospatial.net/cite",
featureType: "india_state",
extractAttributes: true,
transparent:true
})
});
map.addLayers([india,wfs_layer]);
map.zoomToMaxExtent();
mapPanel = new GeoExt.MapPanel({
renderTo: "map",
border: false,
layers: map,
center: [6.3, 45.6],
height: 256, // IE6 wants this
zoom: 8
});
legendPanel = new GeoExt.LegendPanel({
layerStore: mapPanel.layers,
renderTo: "legend",
border: false
});
});
</script>
Are you using GeoExt? Have you referred to the GeoExt Js library along with the ExtJS library?
yes, I have added GeoExt Js library along with the ExtJS library
Was this issue solved? If so, how?
Have you checked that the versions of Openlayers and GeoExt you are using support the controls you want to use?
Most of the code seems to be for WMS not WFS. You should really be posting code to replicate your WFS issue only
Reloading a bound entity causes "An object with the same key already exists in the ObjectStateManager"
This has been asked before, but none of the available answers seem to fit to my case.
In order to perform some validation, I have to reload from the DB the same entity that is already bound to the model. The following causes the error. I'm almost losing my mind.
[HttpPost]
public ActionResult Edit(Tekes tekes, FormCollection fc)
{
...
Tekes myTekes = db.Tkasim.Find(tekes.TeksID);
<some validation here>
if (ModelState.IsValid)
{
db.Entry(tekes).State = EntityState.Modified;
db.SaveChanges();
return RedirectToAction("Details", new { id = tekes.TekesID });
}
}
Not sure why you have to grab the same entity from the database. Anyway, the problem is that the Find statement attaches the Tekes object to the context, and subsequently the statement db.Entry(tekes).State = EntityState.Modified tries to do the same. It's not clear what the validation is all about, but I think you can solve this by replacing the Find with
var myTekes = db.Tkasim.AsNoTracking().Single(x => x.TeksId == tekes.TeksID);
AsNoTracking tells EF to fetch entities without adding them to the cache and the state manager.
Getting an unexpected result while adding two numbers in a program written in assembly
I wrote a program in assembly that asks the user to enter two numbers one by one and prints the result on the console after performing addition on the two numbers.
After compiling the program for the x86 architecture, when I run it, the program asks for two numbers.
But the problem is that if I enter the two numbers one by one and the result of the addition is greater than 9, it produces an unexpected result on the screen. Below I mention the steps I go through and the problem I face.
below is a simple program, written in assembly code:
; firstProgram.asm
section .data
msg1 db "please enter the first number: ", 0xA,0xD
len1 equ $- msg1
msg2 db "please enter the second number: ", 0xA,0xD
len2 equ $- msg2
msg3 db "the result is: "
len3 equ $- msg3
section .bss
num1 resb 2
num2 resb 2
result resb 2
section .code
global _start
_start:
; ask the user to enter the first number
mov eax, 4
mov ebx, 1
mov ecx, msg1
mov edx, len1
int 0x80
; store the number in num1 variable
mov eax, 3
mov ebx, 0
mov ecx, num1
mov edx, 2
int 0x80
; print the first number
mov eax, 4
mov ebx, 1
mov ecx, num1
mov edx, 2
int 0x80
; ask the user to enter the second number
mov eax, 4
mov ebx, 1
mov ecx, msg2
mov edx, len2
int 0x80
; store the number, enter by the user in num2 variable
mov eax, 3
mov ebx, 0
mov ecx, num2
mov edx, 2
int 0x80
; print the second number, enter by user
mov eax, 4
mov ebx, 1
mov ecx, num2
mov edx, 2
int 0x80
; move the two numbers to eax and ebx register
; and subtract zero to convert it into decimal
mov eax, [num1]
sub eax, '0'
mov ebx, [num2]
sub ebx, '0'
;add two numbers
; and add zero to convert back into ascii
add eax, ebx
add eax, '0'
; store the number in result variable
mov [result], eax
; print a message to the user before printing the result
mov eax, 4
mov ebx, 1
mov ecx, msg3
mov edx, len3
int 0x80
; now print the result
mov eax, 4
mov ebx, 1
mov ecx, result
mov edx, 2
int 0x80
; exit the program
mov eax, 1
mov ebx, 0
int 0x80
after written the code, I compiled and execute it on the terminal as follows:
nasm -f elf32 firstProgram.asm -o firstProgram.o
ld -m elf_i386 -s -o first firstProgram.o
./first
<blockquote>
please enter the first number:
5
please enter the second number:
3
the result is: 8please enter the first number:
6
please enter the second number:
4
the result is: :please enter the first number:
4
please enter the second number:
67the result is: :Aplease enter the first number:
25please enter the second number:
the result is: 5
</blockquote>
can anyone explain the reason with example?
num1 resb 2 so 2 bytes are reserved for num1, then mov eax, [num1] which loads 4 bytes from there? Also, your scheme of subtracting 0 is only correct for 1-digit numbers. If you think for a little bit, you'll see that for multi-digit numbers you need something with actual division.
For other [tag:assembly] stalkers: is there a canonical question somewhere about operand size mismatches? I'm wondering if there is somewhere to point people with this kind of bug, instead of re-explaining it again and again. If not, maybe we should make one.
@NateEldredge: I don't know of a good one. But yeah it comes up a lot, in two main form: dword store to resb 1 or 2 overwriting other stuff, or dword load instead of movzx eax, byte [mem], and then cmp eax, '0' but the upper bytes have non-0 garbage. I've gone digging for duplicates multiple times, but I don't remember whether I've eventually found a viable duplicate or not. I think not a clean one, at best a Q&A where it was one of multiple problems, so it's just one of multiple things an answer mentions.
For the byte-load version of the problem, there's Why can't I move directly a byte to a 64 bit register?. Or How to load a single byte from address in assembly with an answer that unfortunately starts off suggesting loading into AL, not zero-extending. But neither of those explain that resb 2 isn't 4 bytes, and don't cover the store problem.
@Eldredge oh my dear, this assembly language is giving me a lot of tension. Assume that I have no programming experience; please help me understand it. The reason I am worried about it is that I am trying to understand the computer more deeply. Assembly teaches us something new that I have not learnt in other languages such as Java, C/C++ and Python. Currently I have a lot of confusion about registers: al, ah, ax, eax, rax. I understand them, but you know, I'm new to assembly, so I need a good answer. I hope you can help me.
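To see where the ':' in the sample output comes from: the subtract-'0' / add-'0' scheme only produces a digit character while the sum stays in 0..9, and a sum of 10 lands on the character right after '9' in the ASCII table. A quick sketch (Python used here purely to illustrate the ASCII arithmetic, not the assembly):

```python
# The read syscall gives digit *characters*; subtracting '0' turns one
# character into its numeric value. Adding '0' back only yields a valid
# digit character while the sum is in 0..9.
a = ord('6') - ord('0')          # 6
b = ord('4') - ord('0')          # 4
print(chr(a + b + ord('0')))     # ':' is the ASCII character right after '9'

# Same story for 4 + 7 = 11, which lands on ';', and so on up the table.
print(chr(ord('4') - ord('0') + ord('7')))
```

Multi-digit results need an actual binary-to-decimal conversion (repeated division by 10 producing one digit character per remainder), not a single character adjustment.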
What about if you clean the registers after you print the output?
You can do an xor of the registers, that will clean all the garbage that you stored from last sum, ie:
; Clean registers.
xor eax, eax
xor ebx, ebx
xor edx, edx
xor ecx, ecx
I don't see how that can possibly help. It looks to me like all relevant registers are overwritten after the output anyway.
Sqlite: How to insert text entered by the user into a sqlite table using Node/Express
I am trying to find out how to pass client (HTML form) data to a Node/Express/sqlite back-end - not sure how to package up the request on the client side and/or extract the request on the server side.
Client code:
// listen for the form to be submitted and add a new dream when it is
dreamsForm.onsubmit = function(event) {
// stop our form submission from refreshing the page
console.log("SUBMIT clicked!!!");
event.preventDefault();
// jeng: save the value to post
var data = {dream: dreamInput.value}; // jeng: JSON
const dreamRequest = new XMLHttpRequest();
dreamRequest.open('post', '/putDreams', true);
dreamRequest.setRequestHeader('Content-Type', "application/json;charset=UTF-8"); // jeng - after open but before send
//dreamRequest.send(JSON.stringify(data));
dreamRequest.send(data);
};
Sever code:
// jeng: added insert a new row to the Dreams table
app.get("/putDreams", function (request, response) {
console.log("insertRows called!!");
var stmt = db.prepare("INSERT INTO Dreams VALUES (?)");
var val = request.body.dream;
//var s = JSON.parse("test")
//request.json();
console.log("1 "+request);
console.log("2 "+request.body);
console.log("3 "+request.body.dream);
//console.log("4 "+s);
//stmt.run("test"); // this works!! test is inserted
stmt.run(val); // this doesn't work - val is undefined - null is inserted
stmt.finalize();
response.redirect("/");
});
My specific need is to simply take a piece of text entered by the user and insert it into a sqlite table using Node/Express. I've searched everywhere to no avail so far. Any suggestions/pointers would be greatly appreciated. Thanks in advance.
The server-side should use .post to listen for post method that you are sending from the client.
Could you include the code itself in the question rather than screenshots of it, please?
Why is sympy.arg() function not returning the expected output?
I am trying to extract the phase of a complex number in sympy, but the arg() function is not returning useful or expected results even for simple cases.
import sympy as s
A, theta = s.symbols('A theta', real = True)
expr = A*s.exp(theta*s.I)
angle = s.arg(expr)
print(angle) #Arg(A*exp(theta*I))
I expect the output to be just theta, but it returns Arg(A*exp(theta*I)), which is true but not all that helpful. Why might this be happening for such a simple case?
It looks like the argument to the function arg has to be an actual number, with no symbols. You can see this from the following:
from sympy import *
arg(1+2*I)
Out[5]: atan(2)
A=symbols('A',real=True)
arg(A+2*I)
Out[8]: arg(A + 2*I)
Maybe you can implement your own arg:
from sympy import *
A, theta = symbols('A theta', real = True)
expr = A*exp(theta*I)
simplify(simplify(atan(im(expr)/re(expr))),inverse=True)
Gives
theta
P.S. It is strange that this does not work:
simplify(atan(im(expr)/re(expr)),inverse=True)
---> atan(tan(theta))
but I had to call simplify twice.
This is the method that I ended up using. It still seems strange that the arg function in a symbolic library can't handle symbols as inputs.
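Building on the atan workaround in the answer above: sympy's two-argument atan2 avoids the quadrant sign loss of plain atan(im/re), and a numeric substitution confirms it recovers the phase. (The positive=True assumption on the symbols is mine, added so the real/imaginary split comes out clean; it is not in the original question.)

```python
import sympy as s

A, theta = s.symbols('A theta', positive=True)
expr = A * s.exp(theta * s.I)

# atan2(im, re) recovers the phase for A > 0, up to the usual
# (-pi, pi] branch, without dividing the imaginary by the real part.
phase = s.atan2(s.im(expr), s.re(expr))

# Numeric spot-check: with A = 2 and theta = 1/2, the phase is 0.5.
val = phase.subs({A: 2, theta: s.Rational(1, 2)}).evalf()
print(val)
```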
Uniform Distribution: finding the probability between two variables
Q: In a uniform density $\mathcal{U}(a,b)$ with $a=-0.025$ and $b=0.025$, what is the probability that an error will be between 0.010 and 0.015?
A: From the density function, I didn't know how the constants $c$ and $d$ were related to 0.010 and 0.015 in $P(0.010 < X < 0.015)$.
I think that the answer is $\frac{b-a}{d-c}=\frac{.015 - .01}{.025+.025}= 0.1$
Would this be correct?
Yes, it is correct. The density function is $\frac{1}{0.025-(-0.025)}$ on our interval, and our probability is $0.015-0.010$ times the density.
ok thank you for the clarification. Should I remove this post?
One more question: what if it was between -.012 and .012? This would add up to 0; how can this be?
I do not see why any reason to remove the post: you had a legitimate question, and showed your work.
okay sounds good
If $\alpha=-0.012$ and $\beta=0.012$, then the density function as usual is $\frac{1}{\beta-\alpha}$. But there is a complication with your $0.010$ to $0.015$, since we can't be beyond $0.012$. So the probability would be $\frac{0.012-0.010}{\beta-\alpha}$.
Please note that there is a typo in the post. You calculated, correctly, $\frac{d-c}{b-a}$. However, you called it $\frac{b-a}{d-c}$.
Suppose that a random variable $X$ has (continuous) uniform distribution on the interval $[a,b]$. Then $X$ has density function $\frac{1}{b-a}$ on our interval, and $0$ elsewhere.
So if $a\le c\lt d\le b$, then $\Pr(c\le X\le d)=\dfrac{d-c}{b-a}$.
For the example in the post, we have $b-a=0.025-(-0.025)=0.05$ and $d-c=0.015-0.010=0.005$, so the probability is indeed $0.1$.
We would have to be a little careful if we were asked to compute, for example, $\Pr(0.01\le X\le 0.035)$. Since the density is $0$ past $0.025$, the probability would be the same as $\Pr(0.01\le X\le 0.025)$. This turns out to be $0.3$.
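The rule in the answer, including the clipping needed in the last paragraph, fits in a few lines: the probability is the length of the overlap of $[c,d]$ with $[a,b]$, divided by $b-a$. A small sketch:

```python
def uniform_prob(a, b, c, d):
    """P(c <= X <= d) for X ~ Uniform(a, b): length of the overlap of
    [c, d] with [a, b], divided by b - a (zero if they don't overlap)."""
    overlap = min(d, b) - max(c, a)
    return max(overlap, 0) / (b - a)

print(uniform_prob(-0.025, 0.025, 0.010, 0.015))  # ~0.1, the posted answer
print(uniform_prob(-0.025, 0.025, 0.010, 0.035))  # ~0.3, the clipped case
```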
rm - command not found
I recently wanted to use the rm command in Terminal on my Mac. To my surprise, it just reponds with:
-bash: rm: command not found
I can use mv, cp, cd, ls etc. However rm somehow doesn't work. I can't use /bin/rm either.
Any ideas what is wrong?
Does the file /bin/rm acually exist?
You didn't happen to have run sudo rm rm from the /bin folder by any chance? ;)
I don't know. How do I check, and how do I repair it? :S
your system could be compromised...
I had a similar problem, trying to delete a "file in use". Initially, it recognised the command, giving a warning and asking for a password. However, the file wasn't deleted, and the rm command did not exist anymore. This file has strange behaviour, as it changes its name when I try to delete it with Secure Empty Trash.
Check the Trash for the /bin/rm file?
You can also try the following to see if its anywhere on the machine: sudo find / -name rm. It will start looking for the rm command in /. If you find it, then verify its location/path and check permissions on it. If not, then you should look for ways to restore it.
Can you give us the output of the following commands:
stat /bin/rm
and
echo $PATH
Does the problem persists when you open a new terminal?
Did you do anything particular before trying to use the rm command?
Update: The output of your first command confirms that your rm command is missing. Why, I have no idea. Maybe, by mistake, while being sudo-ed, you did something like /bin/rm /bin/rm.
To fix it, can you try copying the executable file from another Mac?
stat: /bin/rm: stat: No such file or directory
and
/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin:/usr/texbin:/usr/X11/bin
And no, I've tried to restart and repair with Disk Utility and so on; it just doesn't work. Maybe my .bashrc file is messed up, but I don't know how to fix it :S
@s0mmer: Your .bashrc is in no way related to missing system commands. Read the output of the stat command again.
If you're taking TimeMachine backups of your system, use that to travel back in time on the /bin directory and restore the rm command from the TM backup.
I had a similar occurrence, however, mine was
sudo: rm: command not found
This was caused by incorrectly altering the PATH in ~/.bash_profile.
Methods of modern web development
I have just finished my first web-based application using Ruby on Rails. It went fine and I feel like I have a good general understanding of RoR now.
I want to try a different approach to web development next, but I don't want to start learning outdated methods; I want to stay current.
I understand that different technologies are used to achieve different things, but just want make sure I'm investing my time in something useful.
What are the latest wave of developers using to create their applications? Java? PHP?
Assembly --- it's the next big thing. Assembly on Air Curtains is a good framework to look into.
Very subjective question. There are a ton of things happening, so there's no way this could give you a useful answer. If you're looking at just web programming languages, you'll find that PHP is the most popular.
Take a look @ node.js. It allows you to code JavaScript on the browser and the server. JavaScript is becoming the defacto web language so it allows you to extend your skills their and share code (very powerful for the browser form and server REST service to share validation and other logic).
Also, many cloud services (Azure, CloudFoundry, Joyent etc...) are hosting it. A lot of folks are onto this web technology (including Microsoft https://github.com/tjanczuk/iisnode - disclaimer: I work for them).
Besides it being new, it also encourages an async non-blocking I/O method of creating network servers which is a good pattern to get your head around.
Some videos to watch:
http://www.youtube.com/watch?v=jo_B4LTHi3I
http://www.youtube.com/watch?v=F6k8lTrAE2g
http://www.youtube.com/watch?v=SAc0vQCC6UQ
Start @ nodebeginner.org then look @
http://expressjs.com and
http://socket.io
Edit: Recently created this full sample pulling together modern concepts: https://github.com/bryanmacfarlane/quotes-feed
Can you leave a comment if you down vote? node.js is potentially something to look at if you're looking at new web tech.
Looking at this post after 8 years. Such an accurate prediction :)
@krk - recently added this example specifically showing how to share code (and views) between server and client: https://github.com/bryanmacfarlane/quotes-feed
Try Django. It's a web framework built completely in Python (a language similar to Ruby) and has great documentation, if you're willing to learn Python, which is relatively easy.
There's a great and free tutorial on it here.
Are the flutter third party plugins safe?
I am new to Flutter development, so I am using plugins from pub.dev. I don't know whether those third-party plugins are safe or not. Can someone hack or access the user data or database transactions through Flutter plugins, or can a plugin act as malware? Please help.
Here is something you could do to check how genuine a pub.dev package is:
Visit their Github repository
Check the number of collaborators and the number of stars. Usually popular packages have a considerable number of stars and / or contributors.
Number of issues and if the issues were addressed by any of the contributors or the repo owner.
Usually legitimate package devs spend a considerable amount of time working on the package and making it easy for use by new users thus also adding documentation.
P.S. - Even if a pub package doesn't qualify any of the above checks, it might still be a genuine trustworthy package. If it's not too extensive and you have some time on your hands, you could quickly scour through the code to confirm if it's safe to use.
If you do find a package to be malicious, I recommend reporting it to the flutter team so such packages are removed from pub.dev and the portal becomes more safe for other users.
You can see more information at this link: https://medium.com/flutter-community/how-to-make-a-flutter-app-with-high-security-880ef0aa54da#:~:text=Flutter%20provides%20a%20secure%20data,including%20passwords%20and%20PIN%20numbers.
Thanks for the reply. My question is, for example, can someone develop a plugin which can track all user interactions and data without anyone noticing?
I personally look at a couple of things before using any plugin:
Number of likes and Popularity percentage.
Number of open & closed issues.
If the package is from the Flutter or Dart team, you need not check anything. Just start using it.
The formula of ScoreDoc.score in Lucene
I want to create a search engine using Lucene. From the Lucene documentation, I noticed that ScoreDoc.score gives the similarity score between the document and query.
I want to know how the similarity score is calculated?
Please help me..
The similarity score is calculated based on the similarity model being used on the field which the user is querying. There are two I am aware of: tf-idf and BM25.
Both of those use document characteristics like doc length, word frequency, idf, etc. So you could go through this link if it helps.
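For reference, this is the standard BM25 scoring formula, where $f(q_i,D)$ is the frequency of query term $q_i$ in document $D$, $|D|$ is the document length in words, avgdl is the average document length in the collection, and $k_1$ and $b$ are free parameters (Lucene's BM25Similarity defaults to $k_1 = 1.2$ and $b = 0.75$):

```latex
\operatorname{score}(D, Q) \;=\; \sum_{i=1}^{n} \operatorname{IDF}(q_i) \cdot
\frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b\,\frac{|D|}{\operatorname{avgdl}}\right)}
```

Roughly speaking, $k_1$ controls how quickly repeated occurrences of a term saturate, and $b$ controls how strongly long documents are penalized.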
That link doesn't really explain much about how BM25 works - a much better explanation can be found at BM25 - The Next Generation of Lucene Relevation. BM25 is the default similarity in Solr these days.
@MatsLindh The page is not found.
@AmanTandon I would like to normalize the scores in Lucene, Do you know how to do this?
@Noran Please refer the https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.4.0/lucene/core/src/java/org/apache/lucene/search/TopDocs.java#L41
You can divide the score given by scoring algo (BM25) by the max score returned by the getMaxScore to normalize the score, however could you explain why you want to normalize those values?
@Noran Here is the corrected link which Mats provided and gives the good explanation of BM25. https://opensourceconnections.com/blog/2015/10/16/bm25-the-next-generation-of-lucene-relevation/
@Noran also adding the Lucene search package summary, which has a little explanation of the classes/interfaces, if it helps. https://lucene.apache.org/core/7_5_0/core/org/apache/lucene/search/package-summary.html
@AmanTandon Thanks for replying. I want to create a fact checking program. The program determines if the entered text is a fact or not based on the score. But I do not know how I can determine the threshold of the score.
Could you give us an example?
PyCharm cannot "Check out from Version Control"
I'm trying to check-out a public project from GitHub into Windows 10/PyCharm, but getting the error message: "Unable to create temporary file ... No such file or directory. Index pack failed". See attached screenshot.
I think it has something to do with permissions, but not sure how to resolve it so it doesn't pop-up every time I work on a GitHub project.
I found a workaround:
Download the package to C:\temp
Then copy/paste it to its final destination.
Not sure why this works, or why I was getting the original error message.
How to retrieve all charges for a customer in Stripe and dump them into Datatables using ajax?
I am using Stripe for my payment gateway, my application is in Classic ASP, and I am using DataTables as a table to display all of the charges that a customer has had. What I need to do is retrieve all of the customer metadata and display it in the DataTables table. I know that Stripe will send a response in JSON, which is what DataTables uses to populate its table; however, there is virtually no documentation for Classic ASP to show how to implement this. Here is how DataTables works:
$('#sortable').dataTable({
autoWidth: false,
paging: true,
order: [[ 1, 'asc' ]],
ajax: {
url: '<ASP PAGE WHERE JSON WILL RETURN>',
type: 'POST'
},
deferRender: true,
columnDefs: [
{ targets: [0], visible: false, searchable: false },
{ targets: [1], title: 'Office' },
{ targets: [2], title: 'Client' },
{ targets: [3], title: 'Charge Date' },
{ targets: [4], title: 'Charge Amount' },
{ targets: [5], title: 'Last 4 of Card' }
],
pagingType: 'full_numbers'
});
So, if anyone out there has done this, I would really appreciate some kind of insight. Thanks in advance.
You can't make requests to Stripe's API directly from your frontend code, as you need to use your secret API key in the request.
You need to send the AJAX request to your backend server, which itself will send the request to retrieve all charges with the ID of the customer you want in the customer parameter, then relay the response back to your frontend code.
Stripe does not have an official library for Classic ASP, but if you can use .NET, the Stripe.net community library is well-maintained and should be able to do what you want.
Thank you Ywain, that helped me figure it out. I decided to go with PHP.
gnuplot xvalue of maximum of y
I have a file made of three columns, x, y1 and y2. I need to know the value at which y2 has a maximum. To find the maximum of y2 is easy:
stats 'test2-EDB.dat' u 3
from which I know that the y2 has a maximum on the 6779th line of the file
STATS_index_max = 6779.0
However, what I need is the x value at the 6779th line of the file. Do you have any suggestions? Optimally ones which are platform independent?
The solution which I have found here (Reading dataset value into a gnuplot variable (start of X series)) was:
at(file, row, col) = system( sprintf("awk -v row=%d -v col=%d 'NR == row {print $col}' %s", row, col, file) )
file="test2-EDB.dat" ; row=STATS_index_max ; col=1
c=at(file,row,col)
However, I doubt that this solution works without any problems also on Windows (no idea, I'm not using it).
With best regards,
Leonardo
You can use every for the stats command to get the x-value:
stats 'test2-EDB.dat' u 3
stats 'test2-EDB.dat' u 1 every ::STATS_index_max::STATS_index_max
print sprintf("x-value is %e", STATS_max)
I think you can do it shorter and easier:
stats 'test2-EDB.dat' u 1:3 nooutput
print sprintf("Maximum at x=%g y=%g", STATS_pos_max_y, STATS_max_y)
Should work with gnuplot>=4.6.0 (March 2012)
How long can I power my device using a battery?
Let's say I have a battery that has 100mAh, and I am using it to power an IC that, according to its datasheet, consumes 1mA.
I understand that if I were to connect my IC to a bench power supply I would see it consumes 1mA (if it has a display), and theoretically I could leave it running constantly for months, but what about connecting it to a battery? How can I calculate the minutes/hours I can leave the device running?
Well, 100mAH/1mA = 100 hours. Depending on the battery, its output may droop to an unusable level before the 100 hours. NiMH or Li-based batteries hold their output level better before dropping off.
100 hours/24 hrs/day = just over 4 days.
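The arithmetic above can be sketched as a tiny program (the class and method names are just for illustration, and this is the idealized model; real batteries derate as described):

```java
public class BatteryRuntime {
    // Ideal runtime: capacity (mA·h) divided by continuous draw (mA) gives hours.
    static double runtimeHours(double capacityMilliampHours, double drawMilliamps) {
        return capacityMilliampHours / drawMilliamps;
    }

    public static void main(String[] args) {
        double hours = runtimeHours(100, 1); // 100 mAh battery, 1 mA load
        System.out.println(hours + " hours");     // 100.0 hours
        System.out.println(hours / 24 + " days"); // ~4.17 days
    }
}
```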
What I don't understand is, that current consumption of 1mA that is specified on the datasheet - is it per minute? per hour?
It isn't "per" anything -- it's continuous. Current is defined as the amount of charge (coulombs) that pass in a given amount of time: 1A = 1C per second. That's why battery capacity (charge) is given as current * time. The units of mA and hours are the most convenient, but it could just as well have been specified as 360 Coulombs.
@Eran Current is a rate quantity. One ampere is one coulomb per second, if you must have a per in there, so 1mA is 1mC/s.
If I have a battery of 100mAh, and a load that, according to its datasheet, draws 100mA, does it mean that after connecting the load, the battery will be able to supply 100mA continuously for exactly 1 hour (theoretically)?
With an ideal battery, sure. With a real battery, doubtful. The 100mA will likely cause the battery output voltage to droop pretty quickly, and once the voltage drops it is not clear how much current the load will continue to draw, or if it will work correctly. For example, an Atmega328P clocked at 16 MHz needs 3.8V to 5.5V to work correctly. Once 3.8V is reached, it may continue to work correctly for whatever the code is doing, or some parts may appear to have errors (such as the serial interface). An example is this: CR2032 battery, 235mAH - but only designed for 0.2mA output current.
I've been thinking about this a bit more, and I'll try to explain why I can't seem to understand this. As I've said, the datasheet says 1mA is the current consumption. Like I knew already, and @DaveTweed said, it is continuous. What I can't understand from it being continuous is - does it draw 1mA every 1nsec? every 1psec? every 1msec? From your answers and a guide I've found, I've learned to think about it in terms of coulombs, but it's still confusing.
@Eran You're fundamentally misunderstanding what current is. Imagine you're talking about the speed of a car driving (very slowly) at 1 m/s. Does it go 1 m/s every nanosecond? every picosecond? every millisecond? No, it's just always going 1 m/s. It doesn't make sense to talk about the car's speed in m/s per second, because m/s/s (or m/s²) is fundamentally a unit of acceleration.
@Eran If you want another way of looking at it, 1mA = 1mC/s (one millicoulomb per second), as noted above. A millicoulomb is about 6,240,000,000,000,000 electrons. So 1mA is 6,240,000,000,000,000 electrons going past any given point per second.
and 6,240,000,000,000 electrons every ms, and 6,240,000,000 electrons every $\mu$s, etc. Things may get dicey at one ps (6240 electrons), and at only 6.24 electrons per femtosecond you'll definitely want to revisit your definition of "continuous". But for nearly all practical purposes, a current is a rate of how fast electrons are going by, not a unit of charge.
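The per-interval counts above can be checked numerically. This small sketch (names are illustrative) divides the charge passed in an interval by the elementary charge, about 1.602×10⁻¹⁹ C per electron:

```java
public class ElectronCount {
    static final double ELEMENTARY_CHARGE = 1.602176634e-19; // coulombs per electron

    // Electrons passing a point during an interval, for a given current in amps.
    static double electrons(double currentAmps, double intervalSeconds) {
        return currentAmps * intervalSeconds / ELEMENTARY_CHARGE;
    }

    public static void main(String[] args) {
        System.out.printf("per second: %.3g%n", electrons(1e-3, 1));     // ~6.24e15
        System.out.printf("per ms:     %.3g%n", electrons(1e-3, 1e-3));  // ~6.24e12
        System.out.printf("per ps:     %.3g%n", electrons(1e-3, 1e-12)); // ~6240
        System.out.printf("per fs:     %.3g%n", electrons(1e-3, 1e-15)); // ~6.24
    }
}
```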
Could/Should I use static classes in asp.net/c# for shared data?
Here's the situation I have: I'm building an online system to be used by school groups. Only one school can log into the system at any one time, and from that school you'll get about 13 users. They then proceed into an educational application in which they have to co-operate to complete tasks and, from a code point of view, share variables all over the place.
I was thinking, if I set up a static class with static properties that hold the variables that are required to be shared, this could save me having to store/access the variables in/from a database, as long as the static variables are all properly initialized when the application starts and cleaned up at the end. Of course I would also have to put locks on the get and set methods to make the variables thread safe.
Something in the back of my mind is telling me this might be a terrible way of going about things, but I'm not sure exactly why, so if people could give me their thoughts for or against using a static class in this situation, I would be quite appreciative.
Why the aversion to using a database? I would seriously recommend using a database and just let it go. It's not like it's going to bring your server to its knees. It takes a lot of requests to a database to do that, even if it's just on a desktop. SQL Server Express is free...
I'm not really against using a database as such, it just struck me as unnecessary to store data in a database that could quite easily be stored in the server's memory and disposed of later.
I realize with so few people using the system at once, there's not really a performance concern here, I was just looking at alternatives.
How to convert the Timestamp in millis to Date with TimeStamp in Rest ful service?
I am trying to convert the date in millis to a date with timestamp by using Jersey converters, but I am not able to convert. Please help me.
Entity class:
@JsonSerialize(using=JsonDateSerializer.class)
@Column(name="TSTAMP")
private Timestamp timeStamp;
**JsonDateSerializer:**
@Component
public class JsonDateSerializer extends JsonSerializer<Timestamp>{
private static final SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-mm-dd HH:MM:SS");
@Override
public void serialize(Timestamp t, JsonGenerator gen,
SerializerProvider arg2) throws IOException, JsonProcessingException {
String formattedDate = dateFormat.format(t);
gen.writeString(formattedDate);
}
}
You can use this to convert date to timestamp
//use your formatted date here
Timestamp timestamp = new Timestamp(date.getTime());
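As a side note, the serializer in the question shares a static SimpleDateFormat, which is not thread-safe, and its pattern mixes up field letters (in SimpleDateFormat, mm is minutes, MM is months, and SS is milliseconds). A thread-safe sketch using java.time (class and method names here are just for illustration) could look like:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class MillisFormatter {
    // DateTimeFormatter is immutable and thread-safe, unlike SimpleDateFormat.
    static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss").withZone(ZoneId.of("UTC"));

    static String format(long epochMillis) {
        return FORMAT.format(Instant.ofEpochMilli(epochMillis));
    }

    public static void main(String[] args) {
        System.out.println(format(0L)); // 1970-01-01 00:00:00
    }
}
```

A single DateTimeFormatter instance like this can safely be shared across serializer invocations on different request threads.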
I am getting the List from the DB as I expected, but when I add the list to the Response in build(), it changes the Timestamp to millis.
That is not it. List ActivityDTOList = collarService.findMostRecentActivity(tagId, maxResults); ActivityDTOList contains the correct format, but in RestUtil.getOkResponse(dogActivityDTOList) the format is changing.
I didn't get you; what do you mean by "adding the list to the Response in build() changes the Timestamp to millis"?
PHP : How to set timer to every night 12:00am
I have a PHP function that sets a timer for 24 hours from the current time; I'm using it in my website as time() + 24 * 60 * 60. But I want the timer to reset every night at 12:00am. How can I do this?
If you are running Linux/*nix, create a crontab entry that calls your PHP script and runs at a specific hour and minute.
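For example, a crontab entry like the following (the script path here is just a placeholder) runs a PHP script every night at 12:00am:

```
# m h dom mon dow command
0 0 * * * /usr/bin/php /path/to/reset_timer.php
```

The first two fields are minute and hour, so `0 0` means midnight every day.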
How to properly stop all the threads in Android?
In the code below, I maintain the "isFound" boolean variable to determine whether a thread should run or not. However, when I set "isFound" to true, my threads will run exactly one more time. In other words, they will not stop properly, which also causes the UI to not update the TextView properly. I guess this is due to some synchronization problems, but I am not sure how to deal with it, and I have tried using a synchronized block but it still does not work.
import android.app.Notification;
import android.app.NotificationManager;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Bundle;
import android.os.Handler;
import android.os.Message;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.View;
import android.widget.ProgressBar;
import android.widget.TextView;
public class MainActivity extends AppCompatActivity {
private ProgressBar progressBar;
private TextView message;
private TextView magicNumDisplay;
private boolean isFound = false;
private NotificationManager notificationManager;
private Notification notifyDetail;
private final Handler handler = new Handler(){
public void handleMessage(Message msg) {
if (isMagic(msg.what)) {
isFound = true;
progressBar.setVisibility(View.INVISIBLE);
magicNumDisplay.setText("The magic number is " + msg.what);
handler.removeMessages(0);
message.setText("A magic number s found");
Intent intent = new Intent("MAGIC_NUMBER");
intent.putExtra("THREAD_NAME", msg.obj.toString());
intent.putExtra("MAGIC_NUMBER", String.valueOf(msg.what));
sendBroadcast(intent);
} else {
message.setText("Finding a magic number...");
}
}
};
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
message = (TextView) findViewById(R.id.message);
progressBar = (ProgressBar) findViewById(R.id.progressbar);
magicNumDisplay = (TextView) findViewById(R.id.magic_num_display);
progressBar.setVisibility(View.VISIBLE);
IntentFilter filter = new IntentFilter("MAGIC_NUMBER");
BroadcastReceiver receiver = new MagicNumberReceiver();
registerReceiver(receiver, filter);
notificationManager = (NotificationManager) getSystemService(NOTIFICATION_SERVICE);
}
public void onStart(){
super.onStart();
Thread firstThread = new Thread(backgroundTask, "First");
Thread secondThread = new Thread(backgroundTask, "Second");
firstThread.start();
secondThread.start();
}
public boolean isMagic(int num) {
return (num % 7 == 0 && isLastDigitEqualTwo(num));
}
public boolean isLastDigitEqualTwo(int num) {
String numString = String.valueOf(num);
return Integer.parseInt(numString.substring(numString.length() - 1))==2;
}
Runnable backgroundTask = new Runnable() {
@Override
public void run() {
try {
while (!isFound) {
Thread.sleep(1000);
int number = (int) (Math.random() * 9999);
Message msg = handler.obtainMessage(number, Thread.currentThread().getName());
handler.sendMessage(msg);
Log.e(Thread.currentThread().getName(), "The number is :" + number);
}
} catch(InterruptedException e){
Log.e("Exception", e.getMessage());
}
}
};
public class MagicNumberReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context localContext, Intent callerIntent) {
String threadName = callerIntent.getStringExtra("THREAD_NAME");
String magicNum = callerIntent.getStringExtra("MAGIC_NUMBER");
Log.e("Magic", "Thread name:"+threadName+" Magic Number:"+magicNum);
int MAGIC_NUMBER_NOTIFICATION_ID=200;
notifyDetail = new Notification.Builder(getApplicationContext())
.setContentTitle("Magic Number")
.setContentText("Thread Name: " + threadName + ". Magic Number: " + magicNum+".")
.setSmallIcon(R.drawable.droid)
.setVibrate(new long[] {1000, 1000, 1000, 1000})
.setLights(Integer.MAX_VALUE, 500, 500)
.build();
notificationManager.notify(MAGIC_NUMBER_NOTIFICATION_ID,notifyDetail);
}
}
}
Is it because you sleep at the start of the while loop? If it's found while it's sleeping it'll finish that loop? What if you sleep at the end of the while block?
The isFound variable is used by three different threads at the same time. You should synchronize its access, or else your application may break or worse.
@nandsito what would be the preferred way to do the synchronization? Thanks in advance.
there isn't a preferred way to sync threads, but there are several ways to do it, and you can pick the one you feel most comfortable with. For example, synchronized statements, synchronized methods, serialized access, guarded blocks... i suggest you to spend some time studying concurrency https://docs.oracle.com/javase/tutorial/essential/concurrency/index.html
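As a minimal standalone sketch of the shared stop-flag idea (plain Java, not the full Activity; all names here are illustrative): using an AtomicBoolean (or a volatile boolean) makes the flag update visible to the worker threads, and re-checking the flag after the sleep avoids the one extra iteration described in the question:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class StopFlagDemo {
    // AtomicBoolean (or a volatile boolean) makes writes visible across threads.
    static final AtomicBoolean found = new AtomicBoolean(false);

    public static void main(String[] args) {
        Runnable worker = () -> {
            while (!found.get()) {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    return; // interrupted: stop promptly
                }
                if (found.get()) {
                    break; // re-check after sleeping so we don't do one extra round
                }
                // ... generate a number and post it to the handler here ...
            }
        };
        Thread first = new Thread(worker, "First");
        Thread second = new Thread(worker, "Second");
        first.start();
        second.start();
        try {
            Thread.sleep(50);
            found.set(true); // signal both threads to stop
            first.join();
            second.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("both threads stopped");
    }
}
```

Without the visibility guarantee, a plain non-volatile boolean written by the UI thread may not be seen by the workers for an arbitrarily long time.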
River red gum seedlings developing new leaves at stem sign of stress?
I am growing river red gums (eucalyptus camaldulensis) from seeds. They are now about two months old and approximately 10cm (4 inches) tall.
After only growing in one direction, they have suddenly both started developing new leaves at the stems of the existing leaves. Is this a sign of stress? Should I change something?
Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking.
How to replace two propellers on the same aircraft reaching the time limit?
When both propellers of the aircraft reach the time limit (60 month) and need to be replaced, can both propellers be replaced at the same time and perform only one flight test to reduce time on ground or should they be replaced one at a time just for safety or to investigate any discrepancy that may arise?
It's a useful idea to include a details on the country and legislation, since these can change from area to area and helps to narrow the down possible answers for the type of aircraft you are interested in :)
I can't name a source now since it is quite a while since I've seen it, but I believe I remember that at least in the EU and Switzerland this does not work. Testing a propeller implies a high probability of it failing, thus you need the other one to already be tested to ensure that at least one is most likely working with no problems. Therefore you would have to replace one, test it, then replace the other one and test it. If I can find the source again, I'll make it a full-scale answer; till then I hope somebody else can point you in the right direction.
Why is a flight test needed for a prop change?
@SteveH That sounds like an interesting thing to ask as a new question. :)
Any paralells to a brand new plane with two untested props? It is probably flight tested at only one test event.
Can you please elaborate if you are asking "can" under regulations? Or out of an abundance of safety
Since replacing a propeller with the same propeller that is in the Type Certificate is not a major repair, the FAR(s) (CFR Title 14) that most apply are 43.7 and (assuming the airplane is under Part 91) 91.407.
43.7 states:
(a) Each person performing maintenance, alteration, or preventive maintenance on an aircraft, engine, propeller, or appliance shall use the methods, techniques, and practices prescribed in the current manufacturer's maintenance manual or Instructions for Continued Airworthiness prepared by its manufacturer, or other methods, techniques, and practices acceptable to the Administrator, except as noted in §43.16. He shall use the tools, equipment, and test apparatus necessary to assure completion of the work in accordance with accepted industry practices. If special equipment or test apparatus is recommended by the manufacturer involved, he must use that equipment or apparatus or its equivalent acceptable to the Administrator.
(b) Each person maintaining or altering, or performing preventive maintenance, shall do that work in such a manner and use materials of such a quality, that the condition of the aircraft, airframe, aircraft engine, propeller, or appliance worked on will be at least equal to its original or properly altered condition (with regard to aerodynamic function, structural strength, resistance to vibration and deterioration, and other qualities affecting airworthiness).
So you must follow the maintenance manual and put the airplane back in a state at least as good as original. If you think that putting them both on does not qualify to satisfy this requirement, then you should do them one at a time. If it does satisfy this, then in that case, 91.407 comes into play, which states:
(a) No person may operate any aircraft that has undergone maintenance, preventive maintenance, rebuilding, or alteration unless—
(1) It has been approved for return to service by a person authorized under §43.7 of this chapter; and
(2) The maintenance record entry required by §43.9 or §43.11, as applicable, of this chapter has been made.
(b) No person may carry any person (other than crewmembers) in an aircraft that has been maintained, rebuilt, or altered in a manner that may have appreciably changed its flight characteristics or substantially affected its operation in flight until an appropriately rated pilot with at least a private pilot certificate flies the aircraft, makes an operational check of the maintenance performed or alteration made, and logs the flight in the aircraft records.
(c) The aircraft does not have to be flown as required by paragraph (b) of this section if, prior to flight, ground tests, inspection, or both show conclusively that the maintenance, preventive maintenance, rebuilding, or alteration has not appreciably changed the flight characteristics or substantially affected the flight operation of the aircraft.
I would argue that a propeller change cannot be satisfied by (c) above and does require a flight test. However, there is nothing that requires separate flight tests UNLESS the maintenance manual requires it, in which case part 43.7(a) from above comes into play. But as a "fail safe" a good run-up on the ground at high throttle with the brakes applied, followed by an inspection of the work can give good confidence in the work prior to the flight test.
Welcome, nice answer for a new guy, well done! The only thing you're missing is a source reference, and a good link to the source. Thanks for finding an old question in desperate need of a decent answer.
Sure thing. FARs are the Federal Aviation Regulations, which are in CFR (Code of Federal Regulations) Title 14. Pretty much any websearch will give them, I linked to one of them.
@Paul, Truly a quality answer! Welcome to Aviation.SE!
Excellent, @Paul, well done! That now nicely fits the StackExchange model, and should garner you more up votes. Since your first answer was this good, I'm certain you'll grace us with many other excellent answers, as well.
It would probably be safer to replace one at a time in case the propellers were defective. Better safe on one good prop than sorry on two bad ones.
Just because you say so? Or are there any requirements to do so?
Check your country's rules on that, but even then it is one of the unwritten rules of flying that keeps almost everything with a fail-safe backup.
replacing line in a document with Python
I have the following file and I would like to replace #sys.path.insert(0, os.path.abspath('.'))
with sys.path.extend(['path1', 'path2'])
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
However, the following code does not change the line.
with open(os.path.join(conf_py_path, "conf.py"), 'r+') as cnfpy:
for line in cnfpy:
line.replace("#sys.path.insert(0, os.path.abspath('.')))",
"sys.path.extend(%s)\n" %src_paths)
cnfpy.write(line)
How is it possible to replace the line?
http://stackoverflow.com/questions/9189172/python-string-replace
The problem can be demonstrated without opening a file. Please strive to create a minimal example.
We have two problems here: 1. Not saving the result of line.replace(), 2. Not reading and writing a file opened in r+ mode correctly.
Try fileinput to change a string in-place within a file:
import fileinput
for line in fileinput.input(filename, inplace=True):
    print(line.replace(string_to_replace, new_string), end='')
Thank you, I had to do print line.replace(...,...),
@user977828, right. I've edited the answer.
Django rest auth email instead of username
I have a django project in which I am using Django-rest-auth to do authentication. I want to use email with password to authenticate the user and not the username+password.
I have following settings in my settings.py but it didn't do anything for me:
REST_SESSION_LOGIN = True
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = 'EMAIL'
ACCOUNT_EMAIL_VERIFICATION = 'optional'
How can I achieve it?
Do you get any errors? 2. Do you have a custom user model? 3. Did you try the following settings? http://django-allauth.readthedocs.org/en/latest/advanced.html#custom-user-models
The following settings worked:
#This is required otherwise it asks for email server
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# ACCOUNT_EMAIL_REQUIRED = True
# AUTHENTICATION_METHOD = 'EMAIL'
# ACCOUNT_EMAIL_VERIFICATION = 'optional'
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_USERNAME_REQUIRED = False
#Following is added to enable registration with email instead of username
AUTHENTICATION_BACKENDS = (
# Needed to login by username in Django admin, regardless of `allauth`
"django.contrib.auth.backends.ModelBackend",
# `allauth` specific authentication methods, such as login by e-mail
"allauth.account.auth_backends.AuthenticationBackend",
)
I'm using this package too, and by setting this config it worked for me:
ACCOUNT_AUTHENTICATION_METHOD = 'email'
Be careful about this config; it belongs to django-allauth. See this:
class AuthenticationMethod:
USERNAME = 'username'
EMAIL = 'email'
USERNAME_EMAIL = 'username_email'
The above class defines the settings in allauth, so you should write 'email' in lower case, not 'EMAIL'.
Django maintains a list of "authentication backends" that checks for authentication. If one authentication method fails, Django tries the other methods.
The default django.contrib.auth.backends.ModelBackend requires you to provide the username. If you want to add customization to it, you need to add another authentication backend. For your use case, allauth.account.auth_backends.AuthenticationBackend
So, modify the AUTHENTICATION_BACKENDS setting like following:
AUTHENTICATION_BACKENDS = (
"django.contrib.auth.backends.ModelBackend",
"allauth.account.auth_backends.AuthenticationBackend",
)
You can read more about customizing authentication backends here:
https://docs.djangoproject.com/en/5.1/topics/auth/customizing/
Rails: how to output pretty html?
I use Slim as the view engine, with its built-in prettifying option set in config/application.rb:
Slim::Engine.set_default_options :pretty => true
Nevertheless, not only does usage of rdiscount for rendering posts break all that beauty, but typical commands do too, e.g.:
title
= "#{t "title.main"} - #{(yield :title) || "#{t "title.default"}"}"
== stylesheet_link_tag "application", :media => "all"
turns into
<title>Some title</title><link href="/assets/application.css?body=1" media="all" rel="stylesheet" type="text/css" />
In addition, some examples of tag indentation:
<body>
<header>
<h1>...
</header>
<div class="content"><div class="posts"><article>
<div class="title">
<h3>...</h3>
<h2>...</h2>
</div>
<div class="entry"><p>...</p>
<p>...</p>
</div>
</article><article>
<div class="title">
<h3>...</h3>
<h2>...</h2>
</div>
<div class="entry"><p>...</p>
<p>...</p>
</div>
</article><article>
Maybe there is some after_filter or similar that exists to prettify response.body after all?
That :pretty => true option could then be disabled, because it only does half of the work.
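To illustrate the after_filter idea, here is a toy sketch of what such a filter could do. Note that the prettify helper below is a naive, hypothetical illustration (it only splits tags onto separate lines and indents by nesting depth), not a production HTML formatter:

```ruby
# Hypothetical helper: put each tag on its own line, indent by nesting depth.
# A real app would use an HTML parser instead of regexes.
def prettify(html)
  depth = 0
  html.gsub(/>\s*</, ">\n<").split("\n").map do |tok|
    depth -= 1 if tok.start_with?("</")
    line = ("  " * [depth, 0].max) + tok
    # Open an indent level only for a bare opening tag like <div>
    depth += 1 if tok =~ /\A<[a-z][^>]*>\z/i && !tok.include?("</")
    line
  end.join("\n")
end

# Wiring it up in ApplicationController (Rails 3 era syntax):
#   after_filter { response.body = prettify(response.body) }

puts prettify("<div><p>hi</p></div>")
```

This only demonstrates the idea; a Rack middleware would be another reasonable place to hook in.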
Possible duplicate of Can you get ERB to properly indent when rendered?
Combining multiple database table and converting rows to column - SQL Server
I have these 3 tables and I need to combine them together.
tblSites
| Site_Code | Site_Name |
AA AaaaaA
BB BaaaaB
CC CaaaaC
tblWeb
| WebID | AppName | AppUrl | ServerName |
1 aWeb www.aWeb.com ServerA
2 bWeb www.bWeb.com ServerA
3 cWeb www.cWeb.com ServerB
4 dWeb www.dWeb.com ServerA
5 eWeb www.eWeb.com ServerC
6 fWeb www.fWeb.com ServerC
7 gWeb www.gWeb.com ServerD
8 hWeb www.hWeb.com ServerD
tblWebServices
| Sites | WebID | SummaryState | Last_Check |
A 1 OK 02/01/2016 10:00:00.000
A 1 Critical 02/01/2016 10:00:04.000
A 2 OK 02/01/2016 10:00:04.000
A 2 Critical 02/01/2016 10:00:06.000
A 3 OK 02/01/2016 10:00:07.000
A 3 OK 02/01/2016 10:00:09.000
A 4 OK 02/01/2016 10:00:10.000
A 4 OK 02/01/2016 10:00:12.000
A 5 Critical 02/01/2016 10:00:14.000
A 5 OK 02/01/2016 10:00:17.000
A 6 OK 02/01/2016 10:00:20.000
A 6 OK 02/01/2016 10:00:23.000
A 7 OK 02/01/2016 10:00:25.000
A 7 Critical 02/01/2016 10:00:36.000
A 8 OK 02/01/2016 10:00:39.000
A 8 OK 02/01/2016 10:00:40.000
B 1 Critical 02/02/2016 10:00:00.000
B 1 OK 02/02/2016 10:00:04.000
B 2 Critical 02/02/2016 10:00:04.000
B 2 OK 02/02/2016 10:00:06.000
B 3 Critical 02/02/2016 10:00:07.000
B 3 Critical 02/02/2016 10:00:09.000
B 4 Critical 02/02/2016 10:00:10.000
B 4 Critical 02/02/2016 10:00:12.000
B 5 OK 02/02/2016 10:00:14.000
B 5 Critical 02/02/2016 10:00:17.000
B 6 Critical 02/02/2016 10:00:20.000
B 6 Critical 02/02/2016 10:00:23.000
B 7 Critical 02/02/2016 10:00:25.000
B 7 OK 02/02/2016 10:00:36.000
B 8 Critical 02/02/2016 10:00:39.000
B 8 Critical 02/02/2016 10:00:40.000
These are the 3 tables. I need to get the following kind of output: the corresponding AppName per ServerName, and also the "LATEST" summary state per site.
Can you help me create this output?
I'm okay with 1 ServerName per code.
**Expected Output**
| ServerA | Site-AA | Site-BB | Site-CC | Site-DD |
aWeb Critical OK No Data Found No Data Found
bWeb Critical OK No Data Found No Data Found
dWeb OK Critical No Data Found No Data Found
| ServerB | Site-AA | Site-BB | Site-CC | Site-DD |
cWeb OK Critical No Data Found No Data Found
| ServerC | Site-AA | Site-BB | Site-CC | Site-DD |
eWeb OK Critical No Data Found No Data Found
fWeb OK Critical No Data Found No Data Found
| ServerD | Site-AA | Site-BB | Site-CC | Site-DD |
gWeb Critical OK No Data Found No Data Found
hWeb OK Critical No Data Found No Data Found
This is the code that was created by one of the experts here.
DECLARE @sql NVARCHAR(MAX)
SET @sql = ''
SELECT @sql = 'SELECT tblWeb.AppName ' + CHAR(10)
SELECT @sql = @sql + ' , ISNULL(MAX(CASE WHEN Site_Code = ''' + Site_Code + '''THEN SummaryState END), ''No Data Found'') AS ' + QUOTENAME('Site-'+Site_Code) + CHAR(10)
FROM tblSites
ORDER BY Site_Code
SELECT @sql = @sql + 'FROM ( SELECT *, rn = ROW_NUMBER() OVER(PARTITION BY Site_Code , WebID ORDER BY Last_Check DESC) FROM tblWebServices ) t
LEFT JOIN tblWeb ON t.WebID = tblWeb.WebID
WHERE t.rn = 1 GROUP BY tblWeb.AppName ORDER BY tblWeb.AppName '
PRINT @sql
EXEC sp_executesql @sql
The output of that code is like this:
| AppName | Site-AA | Site-BB | Site-CC | Site-DD |
aWeb Critical OK No Data Found No Data Found
bWeb Critical OK No Data Found No Data Found
cWeb OK Critical No Data Found No Data Found
dWeb OK Critical No Data Found No Data Found
eWeb OK Critical No Data Found No Data Found
fWeb OK Critical No Data Found No Data Found
gWeb Critical OK No Data Found No Data Found
hWeb OK Critical No Data Found No Data Found
I need to get the AppName per Server like the one I've shown on the Expected Output.
You have some very bad table design here. Your keys don't match up.
Yeah, but it's from the database, and we cannot edit the database, so we need to work around it ourselves. @JoelCoehoorn Thanks for reminding :)
You are doing crosstabbing in a SQL query. It's almost always a better idea to do this in a reporting tool instead. Are you using a reporting tool or just exporting to a file that someone opens?
For example it would be far easier to produce your output if it just had an extra column 'Server Name' which contained the server. If you MUST have it the way you've described then you are describing a report and you should use a proper reporting tool rather than jumping through all sorts of SQL hoops to format the data in a certain way.
Oh I see. Okay. I'll try your suggestion.. reporting tool like?
Like SSRS (free with SQL Server). How is this output used? Is someone looking at it or is it sent to another system?
It is sent to another system
Hash Passwords php
I have a very basic logon system that authenticates users by means of a user table in a MySQL database with PHP.
Can someone explain what the point of hashing passwords is, how to do it with PHP, and what is actually stored in the database?
Thanks
see http://stackoverflow.com/questions/3505346/php-md5-explained/3505368#3505368
possible duplicate of Secure hash and salt for PHP passwords
can someone explain what the point of hashing passwords is,
The point of hashing passwords is for security purposes. If inserted as plain text, anyone that gets into your database will now have all of your users passwords. Another huge problem that stems with this is that it more than likely compromises the user everywhere, not just your site, as most people tend to use the same password everywhere.
how to do it with php, and what is actually stored in the database.
To use it in PHP you simply take a string, in this example $password = 'password';, and run it through sha1();. This will return something like d0be2dc421be4fcd0172e5afceea3970e2f3d940. It is also good practice to 'salt' passwords in your PHP script, so that your PHP login script is required to successfully log in. Example:
<?php
$salt1 = '2348SDasdf!^*__';
$salt2 = '_a35j@*#(lsdf_';
$password = sha1($salt1.$_POST['password'].$salt2); // d0be2dc421be4fcd0172e5afceea3970e2f3d940
?>
Then insert $password into your database. Upon logging in, you would need to salt the given password and run it through sha1() in order for it to match the password in the database. You insert it into the database just like any other string; just make sure you have granted sufficient length to the column you're inserting into.
This is not a really good answer, since the salt should be random. If the salting is constant, a brute force attack can crack passwords in parallel (i.e. per generated dictionary word, you calculate one hash and see if it matches one of the N users). But if you take a random salt, the hash needs to be calculated for every word and every users, adding a factor of N to the complexity.
Regarding them learning the salt by getting the database: both ways have their flaws. If it's random you'll have to store the salt somewhere, and if someone gets access to it, it will be just as easy to brute force.
Say someone breaks into your system (or finds a loophole in your sql queries) then you don't want them to know all passwords.
So you hash them before storing them. So you can check if the password is ok, but not deduce the password from the hash.
Unless you use a weak hash. If you would only sha1($password) then you will find putting the hash of often-used passwords into google gives the password in under 0.1 sec.* (but otherwise you could also find rainbow tables for all kinds of hashes)
So you want to add a "salt", that means, you generate some garbage value:
$salt = rand().rand().rand();
and then store
$hash = $salt."-".sha1($salt.$password);
on checking, you know the salt and you can check if the password is right, but knowing the hash and salt makes it still hard to recover the password. (unless you have a rainbow table which includes the salt, of course)
* this needs some explanation: I once took a large user table and found some hashes appearing multiple times. I googled the most-occurring one and it reversed to "computer"
"MD5 should be considered cryptographically broken and unsuitable for further use." http://www.kb.cert.org/vuls/id/836068
rand() is not cryptographically secure. also the salt should be a separate column, thats bad db design.
So if the hacker knows what the salt is, doesn't that completely destroy the purpose of salting?
@The Rook: Even if I would agree, it's a little beside the point here. (and frankly, I don't agree, because I want to keep the password salt+hash a nice, "atomic" chunk of data wherever I take it. The salt and hash are worthless when separate, I only pack/unpack them when needed, the rest of the system just sees it as one string)
@TechplexEngineer it doesn't destroy the purpose. You salt your passwords to make it more difficult. You lock your doors at home to deter petty theft, but determined thieves won't be foiled by something like a door being locked
@Techplex: no, because if he would know the salt is, say, 7236817 then he effectively has to reverse the hash of 7236817your_password, which will be less likely to occur in a rainbow table. Besides, a brute-force on all your user accounts will mean that the attacker has to hash every password he tries for every single salt encountered. So just make the salt long and put more than just 0-9 in there!
@mvds I guess that means your salt is alpha-numeric or maybe even just base16. A salt should be base256 (a full byte per char), you want the most entropy to string length ratio.
@Rock well, I would keep it with alphanumeric (or the OP will come back later with mysql binary data issues), but anyway the example with rand() was easiest to get the message across.
A hash is a "one-way function". You feed in a password and get an approximately unique output that cannot be (computationally feasibly) converted back into the real password. Depending on the hash, it will look different. For instance, with sha1 (http://php.net/manual/en/function.sha1.php) you will always get a 40-character hex string representing 20 bytes of hash.
The benefit is that the real password is never stored in plaintext. To verify the user's password, you just compute the hash on the supposed password and compare it to the stored hash. If somebody gets a hold of your password database, they still don't have the actual passwords.
Well, you can easily find a big list of common passwords. It's trivial to write a program to compute a hash on all of them, and then compare those hashes against the password database. This is only relevant, of course, when somebody has access to the hashed passwords. Also, it can be effectively prevented by using a salt as mentioned in another post.
yes, if I tell you I hash my passwords using md5($password), and my hash is 08b5411f848a2581a41672a759c87380, can you tell me what my password is?
@Jonathan: you don't have to do these things yourself anymore, they are out there and indexed for free.
That hash returned some unexpected results by the way: user tables!
Correct, but you don't need md5decryption.com for that, google will do just fine.
No one has said why yet, so let me: most users are idiots and use the same password everywhere. You don't want a hacker to break into your system, grab the passwords, then go and hack your users' accounts everywhere else.
This is so not the point. The point is that if your database was stolen, you have to tell all your 1 million users that their passwords were stolen and that they must change it. They won't, so there will be some lunatic having access to accounts of 900k users on your system. So you have the biggest problem.
kinda a good point @mvds, except that you can force your users to change passwords to get round that problem - it won't make you popular but it will work. But mine is still a good point too. Sure, if a hacker hacks other people's sites using passwords stolen from yours it's not really your problem - but do you think your (probably ex-)customers are going to see it that way?
Of course it depends. If you are a company, you can force 100% of your employees to change passwords, and they will not leave your company for that, true. If you are gmail/hotmail, you can force maybe 80% (assuming there's a lot of dead accounts), and some customers will leave you. If you are in a realm of less important stuff (social networking, gaming, forums, ...) you will most probably scare away 50% of your hard-earned users, since you will have to block accounts at some point, if the users don't respond to your request. Besides there is the problem that for that last case, you -- cont'd
-- probably don't have the proper mechanisms in place to really authenticate the user (secret question, actual birth date, ...). So the real risk is that you will just go out of business, not only because of the bad press.
ValueError: Must pass DataFrame with boolean values only
Question
In this datafile, the United States is broken up into four regions using the "REGION" column.
Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.
This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).
CODE
def answer_eight():
counties=census_df[census_df['SUMLEV']==50]
regions = counties[(counties[counties['REGION']==1]) | (counties[counties['REGION']==2])]
washingtons = regions[regions[regions['COUNTY']].str.startswith("Washington")]
grew = washingtons[washingtons[washingtons['POPESTIMATE2015']]>washingtons[washingtons['POPESTIMATES2014']]]
return grew[grew['STNAME'],grew['COUNTY']]
outcome = answer_eight()
assert outcome.shape == (5,2)
assert list (outcome.columns)== ['STNAME','CTYNAME']
print(tabulate(outcome, headers=["index"]+list(outcome.columns),tablefmt="orgtbl"))
ERROR
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-77-546e58ae1c85> in <module>()
6 return grew[grew['STNAME'],grew['COUNTY']]
7
----> 8 outcome = answer_eight()
9 assert outcome.shape == (5,2)
10 assert list (outcome.columns)== ['STNAME','CTYNAME']
<ipython-input-77-546e58ae1c85> in answer_eight()
1 def answer_eight():
2 counties=census_df[census_df['SUMLEV']==50]
----> 3 regions = counties[(counties[counties['REGION']==1]) | (counties[counties['REGION']==2])]
4 washingtons = regions[regions[regions['COUNTY']].str.startswith("Washington")]
5 grew = washingtons[washingtons[washingtons['POPESTIMATE2015']]>washingtons[washingtons['POPESTIMATES2014']]]
/opt/conda/lib/python3.5/site-packages/pandas/core/frame.py in __getitem__(self, key)
1991 return self._getitem_array(key)
1992 elif isinstance(key, DataFrame):
-> 1993 return self._getitem_frame(key)
1994 elif is_mi_columns:
1995 return self._getitem_multilevel(key)
/opt/conda/lib/python3.5/site-packages/pandas/core/frame.py in _getitem_frame(self, key)
2066 def _getitem_frame(self, key):
2067 if key.values.size and not com.is_bool_dtype(key.values):
-> 2068 raise ValueError('Must pass DataFrame with boolean values only')
2069 return self.where(key)
2070
ValueError: Must pass DataFrame with boolean values only
I am clueless. Where am I going wrong?
Thanks
You're trying to use a differently-shaped df to mask your df, which is wrong; additionally, you're applying the conditions incorrectly. When you compare a column or series in a df with a scalar to produce a boolean mask, you should pass just the condition, not apply it successively.
def answer_eight():
counties=census_df[census_df['SUMLEV']==50]
# this is wrong you're passing the df here multiple times
regions = counties[(counties[counties['REGION']==1]) | (counties[counties['REGION']==2])]
# here you're doing it again
washingtons = regions[regions[regions['COUNTY']].str.startswith("Washington")]
    # here you're doing it again also
grew = washingtons[washingtons[washingtons['POPESTIMATE2015']]>washingtons[washingtons['POPESTIMATES2014']]]
return grew[grew['STNAME'],grew['COUNTY']]
you want:
def answer_eight():
counties=census_df[census_df['SUMLEV']==50]
    regions = counties[(counties['REGION']==1) | (counties['REGION']==2)]
    washingtons = regions[regions['CTYNAME'].str.startswith("Washington")]
    grew = washingtons[washingtons['POPESTIMATE2015']>washingtons['POPESTIMATE2014']]
    return grew[['STNAME','CTYNAME']]
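As a runnable illustration of the masking rule above, with a tiny made-up frame rather than the real census data:

```python
import pandas as pd

# Miniature stand-in for the census frame (made-up values).
df = pd.DataFrame({
    "REGION": [1, 2, 3, 1],
    "STNAME": ["Vermont", "Louisiana", "Ohio", "Oregon"],
    "CTYNAME": ["Washington County", "Washington Parish",
                "Adams County", "Baker County"],
    "POPESTIMATE2014": [100, 200, 300, 400],
    "POPESTIMATE2015": [110, 190, 310, 410],
})

# Each comparison yields a boolean Series; combine them with & and |
# and index the frame with the combined mask exactly once.
mask = (
    df["REGION"].isin([1, 2])
    & df["CTYNAME"].str.startswith("Washington")
    & (df["POPESTIMATE2015"] > df["POPESTIMATE2014"])
)
result = df.loc[mask, ["STNAME", "CTYNAME"]]
print(result)
```

Only the first row satisfies all three conditions, so `result` holds the single Vermont / Washington County row.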
def answer_eight():
df=census_df[census_df['SUMLEV']==50]
#df=census_df
df=df[(df['REGION']==1) | (df['REGION']==2)]
df=df[df['CTYNAME'].str.startswith('Washington')]
df=df[df['POPESTIMATE2015'] > df['POPESTIMATE2014']]
df=df[['STNAME','CTYNAME']]
print(df.shape)
return df.head(5)
Please add an explanation to your answer to assist learners in understanding it. Code-only answers are not always clear and are considered impolite. Use the [edit] function for enhancement.
def answer_eight():
county = census_df[census_df['SUMLEV']==50]
req_col = ['STNAME','CTYNAME']
region = county[(county['REGION']<3) & (county['POPESTIMATE2015']>county['POPESTIMATE2014']) & (county['CTYNAME'].str.startswith('Washington'))]
region = region[req_col]
return region
answer_eight()
def answer_eight():
df=census_df
region1=df[ df['REGION']==1 ]
region2=df[ df['REGION']==2 ]
yes_1=region1[ region1['POPESTIMATE2015'] > region1['POPESTIMATE2014']]
yes_2=region2[ region2['POPESTIMATE2015'] > region2['POPESTIMATE2014']]
yes_1=yes_1[ yes_1['CTYNAME']=='Washington County' ]
yes_2=yes_2[ yes_2['CTYNAME']=='Washington County' ]
ans=yes_1[ ['STNAME','CTYNAME'] ]
ans=ans.append(yes_2[ ['STNAME','CTYNAME'] ])
    return ans.sort_index()
Simple and Easy
I solved the problem at Coursera like this.
def answer_eight():
df8 = census_df.copy()
washington = df8['CTYNAME'].str[0:10] == 'Washington'
    popincrease = df8['POPESTIMATE2015'] > df8['POPESTIMATE2014']
region = (df8['REGION'] == 1) | (df8['REGION'] == 2)
df8 = df8[region & popincrease & washington]
    return df8[['STNAME','CTYNAME']]
answer_eight()
I was a beginner in Pandas back then and it took me almost 20 LOLs.
I solved it this way, without any local variables, accessing census_df directly in a single expression.
The solution is pretty much the same as the other solutions, but they use local variables and mine does not.
def answer_eight():
return census_df[
(census_df['SUMLEV'] == 50) &
((census_df["REGION"] == 1) | (census_df["REGION"] == 2)) &
(census_df["CTYNAME"].str.lower()).str.startswith('washington') &
(census_df["POPESTIMATE2015"] > census_df["POPESTIMATE2014"])
][["STNAME","CTYNAME"]]
If you could provide some explanation for your answer, it would be helpful.
Is this explanation fine?
Copying cells with Range.Copy
I'm writing an Excel workbook to take data for a research study, analyze each day's worth, then send the entered data and its 7 summary computations into a workbook (named test subject 1, and worksheet test med 1) for archive and analysis.
The range.copy command correctly copies the range of the data entered directly (or by a Userform) and the cells with the descriptive titles.
The column P, which contains data referenced from another sheet in the workbook, is copied with the wrong cell reference. Cell AB16 is copied with the data from ='[FAVor Study Medication Daily Calculator8.xlsm]Calculations'!AB7 (not ...AB16). This is true for all 7 summary cells in that column (AB16-AB22), but cells AB15 and AB23, simple references to other cells on the same worksheet, are correctly copied.
I tried using the PasteSpecial command but got another set of problems that I'll try to solve next!
I've labelled the problem line of code as ProblemLine:
Dim wsCopyfrom As Worksheet
Dim wsCopyto As Worksheet
'Dim CopyfromLastRow As Long
'Dim CopytoLastRow As Long
'set variables for copy and dest sheets
Set wsCopyfrom = ThisWorkbook.Sheets("EnterData")
Set wsCopyto = ActiveWorkbook.Sheets(medname)
'Find last used row in the copyfrom based on data in column E and the copyto Col A
CopyfromLastRow = wsCopyfrom.Range("E200").End(xlUp).Row
CopytoLastRow = wsCopyto.Range("A200").End(xlUp).Row
'make copy of range, but ensure that all of the block rows are included, to row 10
If CopyfromLastRow > 19 Then
wsCopyfrom.Range("A7:P" & CopyfromLastRow).Copy wsCopyto.Range("A" & CopytoLastRow + 1)
Else
'ProblemLine
wsCopyfrom.Range("A7:q20").Copy wsCopyto.Range("A" & CopytoLastRow + 1)
End If 'if to copy at least to row 19
This is expected "Relative Reference" behaviour. The same thing would happen if you manually copied those ranges in Excel yourself.
Two options to solve this:
Change your formula on the sheet to "Absolute References" eg ='[FAVor Study Medication Daily Calculator8.xlsm]Calculations'!$AB$16
Change your code to not use Copy, eg
wsCopyto.Range("A" & CopytoLastRow + 1).Formula = wsCopyfrom.Range("A7:Q20").Formula
@JSurow please read this (since you are new here...)
How to get rows exactly like they are displayed in react mui datagrid? (filtered and sorted in the same way?)
I'm making the selected row move up and down on arrow clicks, and for that I need to get the rows from the MUI DataGrid, preferably using the useGridApiRef hook. It's crucial that the rows are filtered and sorted so they match what's visible in the DataGrid.
I got to the point where I can get sorted, but not filtered, rows.
const apiRef = useGridApiRef();
function arrowsEventListener(event) {
if (event.key === "ArrowUp" || event.key === "ArrowDown") {
const sortedData = apiRef.current.getSortedRows();
}
}
You can do this using the functions intended for row exporting. Here's a minimal example:
import {
  useGridApiRef,
  gridFilteredSortedRowIdsSelector,
  gridFilteredSortedRowEntriesSelector,
} from "@mui/x-data-grid";

const apiRef = useGridApiRef();
// if you just need the ids
const filteredSortedIds = gridFilteredSortedRowIdsSelector(apiRef);
// if you also need the row models
const filteredSortedRowsAndIds = gridFilteredSortedRowEntriesSelector(apiRef);
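With the visible-order ids in hand, the arrow-key movement itself is plain index arithmetic. Here is a sketch outside React, using made-up ids (the moveSelection helper and its wiring to the grid's selection state are hypothetical, just to show the idea):

```javascript
// Given row ids in their visible (filtered + sorted) order, return the id
// that should be selected after an arrow-key press, clamped to the ends.
function moveSelection(visibleIds, currentId, key) {
  const i = visibleIds.indexOf(currentId);
  if (i === -1) return currentId; // current selection not visible
  const delta = key === "ArrowDown" ? 1 : key === "ArrowUp" ? -1 : 0;
  const j = Math.min(Math.max(i + delta, 0), visibleIds.length - 1);
  return visibleIds[j];
}

console.log(moveSelection([3, 1, 2], 1, "ArrowDown")); // 2
console.log(moveSelection([3, 1, 2], 3, "ArrowUp"));   // 3 (clamped at the top)
```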
Trying to read a specific line in c++ but nothing is displaying anything
So I'm writing some test code for a bigger project that will display specific data when a code and address have been entered, but currently nothing is running and I'm not getting any output at all.
#include <iostream>
#include <string>
#include <fstream>
#include <sstream>
int main(){
std::ifstream testdata; testdata.open("testdata.csv");
std::string line, csvitem;
int linenum = 0;
int linenumsought = 2;
if(testdata.is_open()){
while (testdata){
linenum++;
if(linenum == linenumsought){
std::cout << line << '\n';
std::istringstream myline(line);
while(getline(myline, csvitem, ',')){
std::cout << csvitem << '\n';
}
}
}
testdata.close();
}
return 0;
}
I'm really tired and most likely have just missed something small, but if someone could help with why I'm not getting any display at all from my csv file, that'd be very much appreciated.
House,111111,2046
House2,222222,2222
csv file ^^
Whenever I have managed to get an output, it runs infinitely creating lines in the terminal. That or it will simply say "permission denied"
Your code misses reading line from the file. while (testdata){ probably should be while (std::getline(testdata,line)) {
Side note: get in the habit of initializing objects with meaningful values rather than default initializing them and immediately overwriting the default values. In this case that means changing std::ifstream testdata; testdata.open("testdata.csv"); to std::ifstream testdata("testdata.csv");. Also, you don’t have to call testdata.close(); the destructor will do that.
Style note: you may wish to test if the file is not open, print an error message and immediately return with an error code. You can then proceed under the knowledge that the file is open and unindent the remainder of your code a level.
@drescherjm tysm that fixed it lol, genuinely tysm
Your code misses reading the file into the line variable.
A simple correction would be to replace:
while (testdata){
With the following:
while (std::getline(testdata,line)) {
This change not only reads the line one at a time but ends the loop when the read failed which will happen at the end of the file or if there was some type of error.
You may also remove this line:
testdata.close();
The destructor for testdata will close the file for you so you need not close it yourself.
Taking the example one step further: instead of reading the lines, we can skip ahead n lines:
#include <iostream>
#include <string>
#include <fstream>
#include <sstream>
#include <limits>
bool readCSVLine(std::ifstream &inputFile, size_t nLine)
{
if (!inputFile.is_open())
{
std::cerr << "ERROR: File not open!\n";
return false;
}
// Skip the next n lines using 0 based indexing.
for (size_t i = 0; i < nLine; ++i)
{
// Ignore the next line. Here we ignore the maximum amount of characters
// up until the newline delimiter is found.
if (!inputFile.ignore(std::numeric_limits<std::streamsize>::max(), '\n'))
{
// If this returns false we could not read the line.
std::cerr << "ERROR: Could not read line " << i << '\n';
return false;
}
}
// We skipped ahead now read the line.
std::string line;
if (std::getline(inputFile, line))
{
std::cout << line << '\n';
std::istringstream myline(line);
std::string csvitem;
while (std::getline(myline, csvitem, ','))
{
std::cout << csvitem << '\n';
}
return true;
}
std::cerr << "ERROR: Could not read line " << nLine << '\n';
return false;
}
int main()
{
std::ifstream testdata("testdata.csv");
unsigned int linenumsought = 2;
return readCSVLine(testdata, linenumsought) ? 0 : 1;
}
Note that the above code uses 0 based indexing so the third line is what would be printed in the example. For 1 based indexing you could change this line: for (size_t i = 0; i < nLine; ++i) to for (size_t i = 1; i < nLine; ++i)
To take this a step further, you can use testdata.ignore() to skip unwanted lines until the desired line is reached, and then call std::getline() just 1 time to read that line. No point in wasting memory reading lines that will just get discarded. Also, you should terminate the loop once the desired line is reached, since subsequent lines are being ignored.
How to override property "sites.email.membership.reply.body" in liferay
I want override the following properties in my portal-ext.properties:
sites.email.membership.reply.subject=com/liferay/portlet/sites/dependencies/email_membership_reply_subject.tmpl
sites.email.membership.reply.body=com/liferay/portlet/sites/dependencies/email_membership_reply_body.tmpl
sites.email.membership.request.subject=com/liferay/portlet/sites/dependencies/email_membership_request_subject.tmpl
sites.email.membership.request.body=com/liferay/portlet/sites/dependencies/email_membership_request_body.tmpl
to something like this:
sites.email.membership.reply.subject=com/krishna/email_membership_reply_subject.tmpl
sites.email.membership.reply.body=com/krishna/email_membership_reply_body.tmpl
sites.email.membership.request.subject=com/krishna/email_membership_request_subject.tmpl
sites.email.membership.request.body=com/krishna/email_membership_request_body.tmpl
I have done this in EXT, i.e. I have created the package ext-impl/src/com/krishna/ in the EXT plugin and it works fine, but I am not able to do this in a hook or portlet. Why? Because it's giving me an exception:
java.io.IOException: Unable to open resource in class loader com/krishna/email_membership_request_subject.tmpl
So, my question: Is there a way to do it in hook or portlet or only EXT can be used?
Thanks
This can be done only in an EXT plugin. Because of the following two reasons:
Hooks can be used to override a few properties/services, but not all. This particular property is not supported by hooks.
This is definitely not possible with portlets, as you are already facing class loading issues: portal-impl.jar is located inside Liferay's ROOT/WEB-INF/lib and your portlet doesn't have access to it.
So EXT plugin is the only way.
Do you have a list of the properties supported by hooks? Also, overriding this property doesn't mean I am trying to access portal-impl.jar. I understand that the code that loads these *.tmpl files is in portal-impl.jar, but is there no way I can ask Liferay to look into my specific plugin?
You can find out through Liferay IDE, it will allow you to see most of the properties that you can override when you are creating hooks. As far as I know there is no other method to include templates from your plugin.
How do I preserve colors in the console when running a script from a spawned child process?
I have a mocha file full of Selenium tests. When I run mocha from the command line like normal, I get this nice formatted and colorized output thanks to the colors module.
This looks great and works wonderfully, but running manually only runs the tests against a single environment. To run the tests in multiple environments in parallel, Sauce Labs (Selenium cloud hosting) recommends spawning mocha child processes. I built this into a Gulp task in our project.
gulp.task('test', function () {
var targets = [
'windows.chrome',
'windows.firefox',
'windows.ie',
'mac.chrome',
'mac.firefox',
'mac.safari',
'linux.chrome',
'linux.firefox'
];
function run_mocha(target, done) {
var env = Object.assign({}, process.env);
env.TARGET = target;
var mocha = exec('mocha ./test/test-runner.js', {env: env}, done);
['stdout', 'stderr'].forEach((stream) =>
mocha[stream].on('data', (data) => process[stream].write(`${target}:${data}`))
);
}
var jobs = targets.map(
(target) => new Promise(
(resolve, reject) => run_mocha(target, resolve)
)
);
return Promise.all(jobs).then(() => {
console.log('ALL SUCCESSFUL');
});
});
This works great but the output completely loses the colorization. It was also injecting superfluous newlines but I was able to fix that by swapping out console.log and console.error for process.stdout.write and process.stderr.write.
You can see that lines printed from gulp are colorized and work fine, but the minute it spawns the child processes any lines printed from there lose their color. It's not the end of the world but now I'm curious what is going on and how to fix it. I've learned a bit about ANSI escape codes but I'm still very confused about what's going on. Any suggestions?
have you tried any of the answers mentioned here? http://stackoverflow.com/questions/9135579/node-js-spawn-with-colors
I did mess around with spawn and the stdio property as per that question. I was able to supply inherit to stdio which forced the child process to adopt the same output streams as the parent, which did also enable colors. The issue I ran into was that I wanted to prepend each line with the target environment when the tests are running in parallel so that I know what lines go to which test runs. There was no way to do that from the parent as far as I could tell. I prefer to pipe the child streams so I can transform them in the parent. Dan's answer below solved the color issue :)
So I found this question:
Node.js spawn with colors?
It looks like it's an issue with Mocha, where it's detecting that its output is not going to be stdout. You'll need to specifically enable colors:
exec('mocha ./test/test-runner.js --colors')
Thank you. This fixed mocha's own output to properly include the colors within the child process. It also put me on the right track to finding out that the colors module also has a flag to explicitly enable colors, except it's singular instead of plural: --color. Adding both flags to the spawned mocha process enabled all the colors in both mocha and the Selenium web driver. Thanks again!
| common-pile/stackexchange_filtered |
Alternative way to "preload" attribute
Hi everyone, and sorry for my bad English. I'm writing code for an exam at my university. The exam is about coding with HTML 4 and JavaScript. The point is: I have a set of sounds to play when certain events occur, but the machines where my project will be examined are VERY old. I thought of preloading all the tracks before launching the JavaScript code, so that when I have to play multiple sounds or several instances of the same sound I'll get no lag. My problem is that we have to use HTML 4. I know that in HTML 5 there is the "preload" attribute that can be set on the audio tag, but even with an hour of research on the internet I didn't find anything. Could someone give me some advice, a trick or a link to solve my problem?
My solution was: the little game starts with a menu, and if the player clicks start, the audio load starts (here is the problem, because when I create the object nodes with the DOM all the audios auto-play) and then the game starts. I'd like to only load the audio without the collateral auto-play, because I have already scripted the function that plays the audio when needed (or, if you have a better solution, I can rewrite my code following your advice). If you want some parts of my code, just tell me and I'll edit this question.
For now I'll post just this (it's the part where I load the sound):
function create_audio_set()
{
var new_element_node = null ;
var body = document.getElementById( "game_body" ) ;
// Olaf death sounds
for ( var i = 1 ; i <= 4 ; i++ )
{
var new_element_node = document.createElement ( "object" ) ;
new_element_node.setAttribute ( "data", "Media/audio/Sample_dead_olaf_" + i + ".wav" ) ;
new_element_node.setAttribute ( "id", "Sample_dead_olaf_" + i ) ;
new_element_node.style.visibility = "hidden" ;
body.appendChild ( new_element_node ) ;
set_element_style ( "Sample_dead_olaf_" + i, "absolute", 1, 1, 1, 1, -5 ) ;
}
....
....
}
set_element_style is a function that sets the style, catching the element by its "id" attribute and setting the positioning, left, top, height, width and z-index.
KEEP IN MIND THAT I CANNOT USE DEPRECATED ELEMENTS OR ATTRIBUTES.
Thanks everybody!
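One commonly suggested workaround is to give each `<object>` a `<param name="autostart" value="false">` child so the plugin fetches the clip without playing it. Whether this works depends entirely on which media plugin the old machines use (autostart is a plugin parameter, not part of HTML 4), so treat this as a hedged sketch of the question's loop:

```javascript
// Build the list of sample URLs; the base path and count mirror the question.
function audioSources(base, count) {
  var urls = [];
  for (var i = 1; i <= count; i++) {
    urls.push(base + i + ".wav");
  }
  return urls;
}

// Browser-only part, guarded so the sketch also runs outside a browser.
if (typeof document !== "undefined") {
  var body = document.getElementById("game_body");
  var urls = audioSources("Media/audio/Sample_dead_olaf_", 4);
  for (var i = 0; i < urls.length; i++) {
    var obj = document.createElement("object");
    obj.setAttribute("data", urls[i]);
    obj.setAttribute("id", "Sample_dead_olaf_" + (i + 1));
    // Plugin parameter, not an HTML 4 attribute: ask the media
    // player to fetch the clip without starting playback.
    var param = document.createElement("param");
    param.setAttribute("name", "autostart");
    param.setAttribute("value", "false");
    obj.appendChild(param);
    obj.style.visibility = "hidden";
    body.appendChild(obj);
  }
}
```

If the examination machines' plugin ignores `autostart`, the fallback is usually to create the objects muted or off-screen once at startup so the browser cache, rather than the plugin, absorbs the load time.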
PERSISTENT messages have much slower performance than NON_PERSISTENT messages
I found that PERSISTENT messages have much slower performance than NON_PERSISTENT messages.
I sent and received non_persistent messages and the performance is as follows.
Method Number of Msg Elapsed Time
Sending - 500 messages - 00:00:0332
Receiving - 500 messages - 00:00:0281
I sent and received persistent messages and the performance is as follows.
Sending - 500 messages - 00:07:0688
Receiving - 500 messages - 00:06:0934
This behavior happens in both MQMessage and JMSMessage.
Thank all people helping me out the problem.
Special thanks to Shashi, T.Rob and Pangea.
500 messages in 7 seconds = 71 messages in 1 second. That is not fast enough.
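To make the comparison concrete, the quoted elapsed times can be converted to throughput. This assumes the `00:07:0688`-style stamps mean roughly 0.332 s and 7 s for the two send runs (my reading, not stated in the post):

```java
// Throughput check for the figures quoted above.
public class Throughput {
    static long msgsPerSecond(int messages, double seconds) {
        return Math.round(messages / seconds);
    }

    public static void main(String[] args) {
        // Non-persistent send: 500 messages in ~0.332 s
        System.out.println(msgsPerSecond(500, 0.332)); // ~1506 msg/s
        // Persistent send: 500 messages in ~7 s
        System.out.println(msgsPerSecond(500, 7.0));   // ~71 msg/s
    }
}
```

So the reported gap is roughly a factor of twenty, which is far larger than typical persistence overhead and points at the tuning issues discussed below.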
What version of Java and JMS classes are you using?
Java version is [ 1.6.0 ]. I think JMS version is [ 1.1 ]. I used Java5 compiler.
WebSphere MQ Java and JMS classes would be v5.3.x.x, v6.0.x.x, 7.0.x.x, 7.1.x.x or <IP_ADDRESS>. As a rule, these get more performant in later versions and against a later version queue manager. If you use v7.1, the Performance Report shows hundreds of messages per second up to thousands of messages per second using JMS. See: http://bit.ly/SupptPacMP0B There is nowhere enough info in your post to provide an answer as to the difference you are seeing other than that it is not typical.
WebSphere Version is <IP_ADDRESS>.
I will edit my post to add producer and consumer classes. Thanks for the help.
This is an old question but this link clarifies the matter a bit.
Given the new title, I find I now have a response to this question.
Yes, persistent messages will always take longer than non-persistent messages with all other aspects being equal. The degree to which they are slower is highly tunable, though. In order to get the results that you are seeing it is likely that the queue manager has many of the worst-case tunings. Here are some of the factors that apply:
Whether the disk is local or networked. For 100 Mbps and slower connections talking to NFS mounted over spinning disks, a local drive is almost always much faster. (Mounts to SAN with fibre channel and battery-backed cache controllers are nearly always faster than local spinning drives, however.) A common example is the use of consumer-grade NAS drives. As great as home NAS units are, throughput is always slower than local disk.
For local drives, the speed of the drive. Newer 'green' drives vary rotational speed to conserve power. Even 7200 RPM disks can exhibit performance degradation compared to a 10k RPM drive.
For local drives, the degree of fragmentation. Even though the messages are small, they are placed into pre-allocated log files that may be highly fragmented.
Putting queue files and log files on the same local volume. This causes head contention because a single message is written to both files before control returns to the application.
Linear versus circular logs. Linear logs are slower because the log formatter must allocate them anew each time.
Whether syncpoint is used or not. If many messages are written in a single unit of work, WMQ can use cached writes and optimize the sector put operations. Only the COMMIT need ever be a blocking call.
Since non-persistent messages are handled entirely in memory, or overflowed to at most one disk file, then they are not affected by most of these issues.
@Rob please help me her: https://stackoverflow.com/questions/48404838/load-balancing-issue-while-connecting-to-ibm-mq-using-jms-ccdt-file
I believe your micro-benchmarking is flawed (you are using the wrong benchmarking mechanism, and you are also including I/O, the System.out.println between the start and end time calculation, in the picture). Use a tool like Google's Caliper first to come up with the correct numbers, and then seek answers. Last time I checked (2003, I think), the MQ JMS implementation was a wrapper around the Java MQI classes and we had to stick with Java MQI to meet our optimal throughput.
FYI - The Java/JMS implementation has changed considerably since 2003. Both Java and JMS now inherit from the same base classes and WMQ now supports properties natively so much less mapping going on.
@T.Rob I agree and that is why I mentioned the year explicitly. But the measurement itself is flawed.
I probably should have clarified I agree with the comments regarding the measurements. :-)
WebSphere MQ JMS messages carry additional headers (known as JMS headers) to support JMS style of messaging. JMSDestination, JMSReplyTo, JMSDeliveryMode, JMSType etc and provider specific JMS headers are part of the JMS headers. These headers are processed every time a message is sent or received.
On the other hand WebSphere MQ Java classes send/receive pure WMQ messages only. WMQ messages will have MQMD followed by message body.
So you can see the difference. The advantage of JMS message is that it is standards based and JMS messages are interoperable.
I think the two performances should not be that different, although the JMS structure has more headers. Could mapping the JMS message to a WMQ message be what causes the delay?
Agreed. While Shashi's point is legit, the difference seems extraordinary.
@Shashi, this answer suggests that the difference OP reports is legitimate when it is not. Headers do not account for two orders of magnitude difference in performance.
@LwinHtooKo No, the mapping of the messages could not cause a difference of two orders of magnitude. If I had to guess, it would be that you do not have a clean install of the WMQ client (i.e. you just grabbed the jars instead of running the installer) and the jar files are mismatched or you have a CLASSPATH problem.
Sagepay - making the billing address visible in VPS form
The billing address being sent by our system used to be visible when the system reached the VPS Form - so that it can be adjusted by the user at the time of entering their card details.
It is no longer visible since Sagepay updated their UI a few weeks ago.
This is leading to a lot of payments being declined which would never have been declined in the past.
How can I force it to remain visible?
TIA
You need to set the template back to Default in My Sage Pay. This will give you the old page, which isn't so great (OK, it's horrible) on a mobile device.
Dynamic Enabled = true in C# chart
I have a database which contains values of different tracks.
For example:
Track 1:
Sensor 1
Sensor 2
Sensor 3
Sensor 4
Sensor 5
Sensor 6
Sensor 7
Sensor 8
Track 2:
Sensor 1
Sensor 2
Sensor 3
Sensor 4
Sensor 5
Sensor 6
Sensor 7
Sensor 8
Sensor 9
Sensor 10
Sensor 11
Sensor 12
As you can see, Track 2 has 12 sensors (which is the max!).
Now I want to display the values of the sensors in a graph, which is working.
However, right now there are 12 items hardcoded. So when there are 8 sensors, the legend will still show 12.
Now what I did was add Enabled = False so you won't see it. (See example below.)
<chart:DataSeries x:Name="dsSensor1" Enabled="False" RenderAs="Line" LineThickness="3" LegendText="1" XValueType="DateTime" XValueFormatString="dd-MM HH:mm" YValueFormatString="#0.##'V'" MarkerEnabled="False">
<chart:DataSeries.DataPoints>
<chart:DataPoint XValue="2001-01-01" YValue="3.2" Enabled="False"/>
</chart:DataSeries.DataPoints>
</chart:DataSeries>
<chart:DataSeries x:Name="dsSensor2" Enabled="False" RenderAs="Line" LineThickness="3" LegendText="2" XValueType="DateTime" XValueFormatString="dd-MM HH:mm" YValueFormatString="#0.##'V'" MarkerEnabled="False">
<chart:DataSeries.DataPoints>
<chart:DataPoint XValue="2001-01-01" YValue="3.2" Enabled="False"/>
</chart:DataSeries.DataPoints>
</chart:DataSeries>
Right now you won't see the legend.
Now in the code I made a new loop like this:
foreach (DCHistory item in loadOperation.Entities.OrderByDescending(t => t.SensorNumber).Take(1))
This will take the highest sensor number of a specific track. For example, this query will result in 8.
What I do right now then is:
foreach (DCHistory item in loadOperation.Entities.OrderByDescending(t => t.SensorNumber).Take(1))
{
DataSeries series1 = chart.Series.First(s => s.Name == string.Format("dsSensor1"));
DataSeries series2 = chart.Series.First(s => s.Name == string.Format("dsSensor2"));
DataSeries series3 = chart.Series.First(s => s.Name == string.Format("dsSensor3"));
DataSeries series4 = chart.Series.First(s => s.Name == string.Format("dsSensor4"));
DataSeries series5 = chart.Series.First(s => s.Name == string.Format("dsSensor5"));
DataSeries series6 = chart.Series.First(s => s.Name == string.Format("dsSensor6"));
DataSeries series7 = chart.Series.First(s => s.Name == string.Format("dsSensor7"));
DataSeries series8 = chart.Series.First(s => s.Name == string.Format("dsSensor8"));
DataSeries series9 = chart.Series.First(s => s.Name == string.Format("dsSensor9"));
DataSeries series10 = chart.Series.First(s => s.Name == string.Format("dsSensor10"));
DataSeries series11 = chart.Series.First(s => s.Name == string.Format("dsSensor11"));
DataSeries series12 = chart.Series.First(s => s.Name == string.Format("dsSensor12"));
int sensor = item.SensorNumber;
if (sensor == 8)
{
series1.Enabled = true;
series2.Enabled = true;
series3.Enabled = true;
series4.Enabled = true;
series5.Enabled = true;
series6.Enabled = true;
series7.Enabled = true;
series8.Enabled = true;
}
}
This code will check whether the result is 8. If it is, enable the first 8 items in the legend.
Now this is working. However, I also need to make one for 12 sensors, or 6.
This will result in a lot of code and it's still hardcoded!
My question:
Is it possible to make this in a for loop?
I have tried it already but was unsuccessful...
What I tried:
foreach (DCHistory item in loadOperation.Entities.OrderByDescending(t => t.SensorNumber).Take(1))
{
int sensor = item.SensorNumber;
int sensor2 = sensor + 1; //set +1 because if number = 0, it gives error.
if (sensor >= 1)
{
for (int number = 1; number < sensor2; number++)
{
DataSeries series = chart.Series.First(s => s.Name == string.Format("dsSensor{0}", number));
series.Enabled = true;
}
}
}
The result of this code is that the legend shows me 12 sensors even when there are 8...
Alright, I have got the answer!
I think that once I visited a track with 12 sensors, it kept the sensors on Enabled = true.
So whenever I visited a track with 8 sensors, the old ones stayed enabled.
What I did now:
foreach (DCHistory item in loadOperation.Entities.OrderByDescending(t => t.SensorNumber).Take(1))//Gets highest sensor number of the track
{
var sensor = item.SensorNumber + 1;//Gets the number we just asked from the loop. For example 8 or 12.
var start = 1;
var max = 12;
while (start < sensor)
{
var test = chart.Series.First(s => s.Name == string.Format("dsSensor{0}", start.ToString()));
test.Enabled = true;
start++;
}
if (sensor < max)
{
while (sensor <= max)
{
var test2 = chart.Series.First(s => s.Name == string.Format("dsSensor{0}", sensor.ToString()));
test2.Enabled = false;
sensor++;
}
}
}
Basically I made a check: if the sensor count is 8, for example, it checks whether that is lower than the maximum. If it is lower, it disables the rest.
Sure, I will transform your code to use some LINQ instead.
var series = Enumerable.Range(1, item.SensorNumber)
.Select(i => chart.Series.First(s => s.Name == ("dsSensor" + i)))
.ToArray();
for(var i = 0; i < series.Length; ++i){
series[i].Enabled = true;
}
When I put this in the loop I have (the one where I take 1 result), I get the exact same results as my last example: I get 12 sensors while I only have 8.
@Mitch Try with this latest version (I was still on grace period), now it only retrieves 8 instances of the series
I put it inside this loop:
foreach (DCHistory item in loadOperation.Entities.OrderByDescending(t => t.SensorNumber).Take(1))
However still always returns 12 :/
@Mitch Seems to me that you will need to debug it yourself then, or include details in your question that you didn't.
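The accepted fix boils down to a single pass over all `max` series: enable entry `i` exactly when `i <= sensorCount`, disable it otherwise, so stale entries from a previous track can never stay enabled. A language-neutral sketch of that predicate (shown here in JavaScript; in the C# version the flag would be assigned to the matching `chart.Series` member instead of collected in an array):

```javascript
// Compute which of the `max` legend entries should be enabled
// for a track with `sensorCount` sensors.
function legendFlags(sensorCount, max) {
  var flags = [];
  for (var i = 1; i <= max; i++) {
    flags.push(i <= sensorCount); // enable existing sensors, disable the rest
  }
  return flags;
}
```

Because every entry is assigned on every pass, no separate "disable the leftovers" loop is needed.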
Does France have addressing products similar to British AddressBase?
In Britain people can license the Ordnance Survey's AddressBase product, which provides a list of almost all the country's addresses, with good building location accuracy even for remote country buildings. This is obviously useful for address suggestion and geocoding applications!
Like all geodata products it's got some errors and omissions, but it's generally good (if expensive to license)
I'd like to broaden my horizon beyond just the UK. What's the best data available for France, and how good is it?
Welcome to GIS SE! Unfortunately, I think your question may be too broad for our Q&A format so I think you should edit it to perhaps focus on just the country that is most important to you first.
Just to help out - avoid Australia! The national address file (G-NAF) is fiendishly expensive - even for the tertiary education sector it's a premium charged for product.
yards is imperial - metres are metric based on the Ordnance Survey Grid.
UK (England, Scotland, Wales +Northern Ireland) OS is only GB (England, Scotland, Wales)
@PolyGeo OK. Is it bad form to submit several similar questions? If it is permitted, should there be a certain delay between posts?
I would avoid swamping by perhaps focusing this one on France for which it looks like you may have a ready to Accept answer then perhaps put out at a rate of 2-3 per day and see if answers start and continue to flow.
@PolyGeo How about now?
I tweaked your title, added a new geographic tag and voted to re-open. I think that now serves as a good model for subsequent questions about the addressing products of other countries. As you do each, I suggest not making them exact clones with just the country name changed, i.e. show that you have tried to research each new country before asking about what you may have missed. Thanks for taking my initial advice on board.
IGN France has a good equivalent to the Ordnance Survey MasterMap, formerly AddressPoint, now the Address Layer 2 database.
http://www.ordnancesurvey.co.uk/business-and-government/products/address-layer-2.html
Based on many years of working very closely with OS addressing: it is produced in conjunction with Royal Mail (privatised in October 2013). Accuracy is not 100%, more like 92%-95%, and with 28 million residential and commercial properties that leaves quite a number of incorrect locations. GB coverage cost, with the MasterMap ITN and Topography layers, runs into many £10,000s.
BD Adresse:
http://professionnels.ign.fr/bdadresse
For European coverage, ViaMichelin have a good commercial product.
http://business.viamichelin.com/
Is Myhill-Nerode equivalence class of a language which contains all palindrome pairwise distinct?
In my formal language class, we defined a language called PAL over the alphabet $\Sigma = \{0,1\}$: $PAL = \{w \in \{0,1\}^* : w = w^R\}$. We have proved that all strings are pairwise inequivalent with respect to this language, i.e. every string is in its own Myhill-Nerode equivalence class.
However, what if we extend this definition to an alphabet $\Sigma$ that contains more than two characters? Can I still claim that any two distinct strings in $\Sigma^*$ are pairwise inequivalent?
Are you using in your proof explicitly that $|\Sigma|<3$? I guess not.
Actually, the way we prove it is to take $z = 10^{|x|+|y|+2}x^{rev}$, so that unless $y = x$, $yz \notin PAL$. The same logic seems to apply to alphabets with more than two letters, but I'm not sure it does. The Myhill-Nerode relation requires that for all $z \in \Sigma^*$, $xz \in L$ if and only if $yz\in L$. With the number of elements in the alphabet growing, it becomes harder to define this $z$. That is what hinders me.
@jinxuanWu You should put that line of thought in the question.
Your proof also goes through with a larger alphabet. Let $x,y\in \Sigma^*$ with $x\neq y$. We use as $z$ the word $10^{|x|+|y|+2}1x^{rev}$. Clearly, $xz$ is a palindrome and therefore the word is in the language. The word $yz$, however, is not a palindrome. Note that the middle of $yz$ has to be a $0$ coming from $z$; hence for $yz$ to be a palindrome, its middle has to be centered between the two $1$s of $z$. As a consequence, $yz$ is a palindrome only if $x=y$. Thus $yz$ is not in the language.
For the proof you only need $|\Sigma|\ge 2$. I would say having more letters makes it even easier, since you have more possibilities for choosing the separating word $z$.
I got the impression that @jinxuanWu wanted to adapt the proof to show that every word has its own equivalence class; leaving the proof as is does not satisfy this.
@Raphael: What do you mean? This is the proof that every word forms its own class in the MN relation.
@Raphael If that is indeed the question, it has been answered here: What is one method used to prove each palindrome is in its own Myhill-Nerode equivalence class? But I was under the impression the question was on a detail in that proof?
Yaw problem in my wall following algorithm
Hi,
I'm trying to create a wall-following algorithm, but I'm having a problem. Essentially my code uses the laser to extract a line and calculates its slope (local to the robot, so from 0 to 2*pi), but for the control I need the robot yaw (0 to 3.14 and -3.14 to 0). So when I have a negative yaw, for example -1.59, and a slope of 1.61, my error, which is error = (slope - yaw), causes the robot to turn in the wrong direction :(
Has anybody already had the same problem with angles in ROS?
Gabriel Oliveira
Originally posted by leivas_gabriel on ROS Answers with karma: 1 on 2012-10-02
Post score: 0
If you're using an error in an angle to feed a controller, you generally need to wrap the error to be between -pi and pi as well. The way I normally do this is with the mod function in whatever language you're using. For example, the following will give an error that is always between -pi and pi:
error = mod(slope - yaw + PI, 2*PI) - PI
Originally posted by david.hodo with karma: 395 on 2012-10-02
Post score: 0
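A sketch of that wrap in JavaScript. Note that pi, not 2*pi, is added before the mod, so that a zero difference maps to a zero error; the function and argument names are illustrative. The double-mod guards against JavaScript's `%` returning negative values for arbitrary inputs:

```javascript
// Wrap the difference (slope - yaw) into [-PI, PI).
// Assumes slope in [0, 2*PI) and yaw in [-PI, PI], as in the question.
function wrapError(slope, yaw) {
  var TWO_PI = 2 * Math.PI;
  // JavaScript's % keeps the sign of the dividend, so shift into
  // the positive range before subtracting PI back out.
  return (((slope - yaw + Math.PI) % TWO_PI) + TWO_PI) % TWO_PI - Math.PI;
}
```

With the question's numbers (slope 1.61, yaw -1.59) the raw difference 3.2 wraps to about -3.08, which turns the robot the short way around instead of the long way.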
Group commutator relations and the neutral element.
In Serge Lang's Algebra, it is stated on page 69 that, for a group $G$, if $x,y,z\in G$ satisfy $y=[x,y]:=xyx^{-1}y^{-1}$, $z=[y,z]$ and $x=[z,x]$, then $x=y=z=e$.
I see that $y=[x,y]$ directly implies $y=x^{-n}y^{2^n}x^n$ for $n>0$, but I need a hint why it is true for $n=0$.
Edit: As pointed out in the comments: the relation should be $y^n=x^{-1}y^{2^n}x$.
"but I need a hint why it is true for $n=0$" For $n = 0$, we get $x^{-0}y^{2^0}x^0 = y^1 = y$. Is that really what you were looking for? That doesn't seem to be very helpful, as it is true regardless of any commutator relations.
I cannot deduce that $y=x^{-n}y^{2^n}x^n$ for $n=0$.
I just deduced it for you in my previous comment. It's almost trivial. And I am not sure how it would help.
Ah, okay, you are right. I mixed it up with the relation $y^n=x^{-1}y^{2^n}x$, which also holds.
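Lang's claim is purely group-theoretic, so it can be sanity-checked by brute force in any small concrete group. A sketch in Python over $S_3$, representing permutations as tuples (the helper names are mine):

```python
from itertools import product

def compose(p, q):
    """Composition of permutations given as tuples: (p . q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def comm(a, b):
    """Commutator [a, b] = a b a^{-1} b^{-1}."""
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

# All 6 permutations of {0, 1, 2}, i.e. the group S3.
S3 = [p for p in product(range(3), repeat=3) if len(set(p)) == 3]
e = (0, 1, 2)

# Triples satisfying y = [x, y], z = [y, z], x = [z, x].
solutions = [(x, y, z) for x, y, z in product(S3, repeat=3)
             if y == comm(x, y) and z == comm(y, z) and x == comm(z, x)]
# Per Lang's claim, only the trivial triple (e, e, e) should survive.
```

The same brute force works in any small group; the claim itself of course still needs the general proof.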
If statement seems to be ignored
I have an if statement; as soon as it is true I want to redirect to another page. This does not seem to work for me.
<?php
require 'includes/config.php';
session_start();
if ( !empty($_POST['username'] && $_POST['password']) ){
$gebruikersnaam = $_POST['username'];
$wachtwoord = $_POST['password'];
}
try{
$conn = new PDO('mysql:host=localhost;dbname=project_sync', $config['DB_USERNAME'], $config['DB_PASSWORD']);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$stmt = $conn->prepare('SELECT * FROM employee WHERE gebruikersnaam = :gebruikersnaam');
// Bind and Execute
$stmt->execute(array(
'gebruikersnaam' => $gebruikersnaam
));
// Fetch Result
while($result = $stmt->fetch()){
if ($gebruikersnaam == $result['gebruikersnaam'] && $wachtwoord == $result['wachtwoord']){
header('Location: http://localhost/project_sync2/dashboard.php');
exit();
}else{
header('Location: http://localhost/project_sync2/index.php?set=loginerror');
exit();
}
}
}catch (PDOException $e) {
echo 'ERROR: ' . $e->getMessage();
}
?>
Can anyone tell me what I am doing wrong?
You should be getting a syntax error from the first line. You can't have an expression as the argument to empty(); its argument has to be a single variable.
It should be:
if ( !empty($_POST['username']) && !empty($_POST['password']) ){
From the documentation:
Prior to PHP 5.5, empty() only supports variables; anything else will result in a parse error.
Gosh, I can't believe I overlooked this. Cheers.
Replace you condition with this:
if ( !empty($_POST['username']) && !empty($_POST['password']) ){
[...]
Numerically solving a large dimensional non-standard quadratic matrix equation
I need to solve a system of two coupled matrix equations in $X$ and $P$,
$$
X = A_1 + A_2P \\
P = A_3X + A_4PX.
$$
In the above equations, $A_1$-$A_4$ are known matrices and $X,P$ are unknown. All matrices are squares of dimension $n\times n$, where $n = 1400$.
I have tried several methods for solving these equations, including the following:
Guess $P^{(0)}$.
Use the first equation to produce a guess for $X$ by solving $X^{(0)} = A_1 + A_2 P^{(0)}$.
Update the guess for $P$ by solving the Sylvester equation $-A_4P^{(1)}X^{(0)} + P^{(1)} = A_3 X^{(0)}$.
If $\lVert P^{(1)} - P^{(0)}\rVert$ is small enough, I stop, otherwise I set $P^{(0)} = P^{(1)}$ and start over again.
This algorithm, however, keeps diverging. I tried a slightly different algorithm by substituting the first equation in the second to get the following quadratic equation:
$$0 = A_4^{-1}A_3A_1 + A_4^{-1}(A_3A_2 - \mathbf I)P + PA_1 + PA_2P \\
\implies 0 = B_1 + B_2 P + PB_3 + PB_4P,$$
where $B_1 = A_4^{-1}A_3A_1, B_2 = A_4^{-1}(A_3A_2 - \mathbf I), B_3 = A_1, B_4 = A_2$. I tried solving this system using the following iterative scheme:
Guess a value $P^{(0)}$.
Solve the Sylvester equation for $P^{(1)}$: $$0 = B_1 + B_2 P^{(1)} + P^{(1)}(B_3 + B_4 P^{(0)})$$.
If $\lVert P^{(1)} - P^{(0)}\rVert$ is small enough, I stop, otherwise I set $P^{(0)} = P^{(1)}$ and start over again.
This algorithm unfortunately diverges as well.
I would greatly appreciate any suggestions on how I may approach this problem. For context, these equations come from a macroeconomic model, in particular a control-theory problem in which typically there are both stable and unstable paths and the desired solution is the stable saddle path.
Are there any symmetries or other structures in the involved matrices? Namely, maybe those can exploited to formulate your problem to something similar as an algebraic Riccati equation.
Unfortunately no. Apart from being symmetric and invertible, there are no other useful properties of matrices $A_1$-$A_4$ that I can think of.
Which variables are symmetric? All, only $P$ or also others? Please update this also in the problem description of your question.
It seems that if there are some additional unstated conditions, this might
be just the standard control theory, but perhaps you are trying to do
something nonstandard, as hinted at in the title. I would have asked for
clarification but I lack the reputation so am giving this as an answer.
Edit: After clarification by the poster, have added a solution for the more general conditions required in the update below.
In your second algorithm, you derive and seek to solve $0=B_{1}+B_{2}P+PB_{3}+PB_{4}P$. This looks very like the continuous-time case of the algebraic matrix Riccati equation (CARE), which would be written (Wikipedia) $A^{T}P+PA-PBR^{-1}B^{T}P+Q=0$. Comparing with yours, $Q=B_{1}=A_{4}^{-1}A_{3}A_{1}$, $-BR^{-1}B^{T}=B_{4}=A_{2}$, and $A=B_{3}=A_{1}$. This is consistent with your first equation being for the closed-loop transfer matrix. But then $B_{2}$ and $B_{3}$ would need to be transposes, which is not obvious.
Under the assumption that your equations can be cast into a standard form, there are standard solvers. If you have access to Maple, it solves the slightly more general $A^{T}P+PA-(S+PB)R^{-1}(S+PB)^{T}+Q=0$ by calling the SLICOT library. SLICOT routines are also accessible via Matlab and (perhaps?) Mathematica. These algorithms find stable solutions with $P$ symmetric. An algorithm that uses an eigendecomposition of a matrix of double the dimensions is given on the Wikipedia page, though $Q$ and $BR^{-1}B^{T}$ need to be symmetric.
Update with solution
To solve:
$$
0 = B_{1}+B_{2}P+PB_{3}+PB_{4}P \\
X = B_{3}+B_{4}P
$$
for $P$ and $X$, where $X$ is stable (eigenvalues have negative real parts).
($B_{3}=A_{1}$ and $B_{4}=A_{2}$ are symmetric, but the others are not.)
Given that the usual assumptions don't apply, we need a slight modification of the Hamiltonian matrix method on the Wikipedia page. The algorithm is as follows.
Construct the $2n\times 2n$ matrix $Z$, which in block form is
$$
Z=
\begin{bmatrix}
B_{3} & B_{4} \\
-B_{1} & -B_{2}
\end{bmatrix}
$$
Find its eigenvalues and eigenvectors.
Sort the eigenvalues in order of increasing real part. If there are at
least $n$ with negative real part, there is a solution in which $X$ is
stable. Select the $n$ with the most negative real parts. Let the $2n\times n$ matrix of corresponding (column) eigenvectors in block form be
$$
U=
\begin{bmatrix}
U_{11} \\
U_{21}
\end{bmatrix}
$$
Find the solutions for $P$ and $X$ as
$$
P = U_{21}U_{11}^{-1} \\
X = B_{3}+B_{4}P
$$
Notes:
The matrix $Z$ is not Hamiltonian unless $B_{2} = B_{3}^{T}$ and so in the
general case we are not guaranteed to find $n$ eigenvalues with negative
real parts.
The eigenvalues of $X$ are the $n$ eigenvalues of $Z$ with most negative
real parts.
The matrix $P$ is not symmetric (as it is if $Z$ is Hamiltonian).
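The four steps above can be sketched numerically. A minimal illustration with NumPy on small random matrices (for the real problem, $B_1\ldots B_4$ would be built from $A_1\ldots A_4$ as in the question, and one would also check that at least $n$ eigenvalues have negative real part):

```python
import numpy as np

# Illustrative stand-ins for B1..B4, not the matrices from the question.
rng = np.random.default_rng(0)
n = 3
B1, B2, B3, B4 = (rng.standard_normal((n, n)) for _ in range(4))

# Step 1: the 2n x 2n block matrix Z.
Z = np.block([[B3, B4], [-B1, -B2]])

# Steps 2-3: eigendecomposition; take the n eigenvalues
# with the most negative real parts.
w, V = np.linalg.eig(Z)
idx = np.argsort(w.real)[:n]
U = V[:, idx]
U11, U21 = U[:n, :], U[n:, :]

# Step 4: recover P and X (they may be complex if a conjugate pair is split).
P = U21 @ np.linalg.inv(U11)
X = B3 + B4 @ P

# The quadratic equation is satisfied (up to rounding) by construction.
residual = B1 + B2 @ P + P @ B3 + P @ B4 @ P
```

The residual vanishes for any choice of $n$ eigenvectors with invertible $U_{11}$; stability of $X$ additionally requires the selected eigenvalues to have negative real parts, which, as noted, is not guaranteed here since $Z$ need not be Hamiltonian.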
Justification
The matrix $U$ of the eigenvectors of $Z$ means it can be diagonalized with
a similarity as follows
$$
Z=
\begin{bmatrix}
U_{11} & U_{12} \\
U_{21} & U_{22}
\end{bmatrix}
\begin{bmatrix}
\Lambda _{1} & 0 \\
0 & \Lambda _{2}
\end{bmatrix}
\begin{bmatrix}
U_{11} & U_{12} \\
U_{21} & U_{22}
\end{bmatrix}
^{-1}
$$
where we have permuted as necessary so $\Lambda _{1}$ has the most negative
eigenvalues. That is
$$
Z
\begin{bmatrix}
U_{11} \\
U_{21}
\end{bmatrix}
=
\begin{bmatrix}
U_{11} \\
U_{21}
\end{bmatrix}
\Lambda _{1}
$$
$$
\begin{bmatrix}
B_{3} & B_{4}
\end{bmatrix}
\begin{bmatrix}
U_{11} \\
U_{21}
\end{bmatrix}
=U_{11}\Lambda _{1}
$$
$$
\begin{bmatrix}
B_{3} & B_{4}
\end{bmatrix}
\begin{bmatrix}
U_{11} \\
PU_{11}
\end{bmatrix}
=U_{11}\Lambda _{1}
$$
$$
\left( B_{3}+B_{4}P\right) U_{11}=U_{11}\Lambda _{1}
$$
$$
B_{3}+B_{4}P=U_{11}\Lambda _{1}U_{11}^{-1}
$$
and $X=B_{3}+B_{4}P$ is similar to $\Lambda _{1}$ with the same eigenvalues.
Now consider
$$
\begin{bmatrix}
P & -I
\end{bmatrix}
\begin{bmatrix}
B_{3} & B_{4} \\
-B_{1} & -B_{2}
\end{bmatrix}
\begin{bmatrix}
I \\
P
\end{bmatrix}
=
\begin{bmatrix}
P & -I
\end{bmatrix}
\begin{bmatrix}
B_{3}+B_{4}P \\
-B_{1}-B_{2}P
\end{bmatrix}
$$
$$
=
\begin{bmatrix}
P & -I
\end{bmatrix}
\begin{bmatrix}
B_{3}+B_{4}P \\
-B_{1}-B_{2}P
\end{bmatrix}
=B_{1}+B_{2}P+PB_{3}+PB_{4}P
$$
which we require to be zero. If we choose $P=U_{21}U_{11}^{-1}$ then
$$
\begin{bmatrix}
P & -I
\end{bmatrix}
\begin{bmatrix}
U_{11} & U_{12} \\
U_{21} & U_{22}
\end{bmatrix}
\begin{bmatrix}
\Lambda _{1} & 0 \\
0 & \Lambda _{2}
\end{bmatrix}
U^{-1}
\begin{bmatrix}
I \\
P
\end{bmatrix}
=
\begin{bmatrix}
0 & \ast
\end{bmatrix}
\begin{bmatrix}
\Lambda _{1} & 0 \\
0 & \Lambda _{2}
\end{bmatrix}
U^{-1}
\begin{bmatrix}
I \\
P
\end{bmatrix}
=
\begin{bmatrix}
0 & \ast
\end{bmatrix}
U^{-1}
\begin{bmatrix}
I \\
P
\end{bmatrix}
$$
where $\ast $ is some noncritical entry. Writing the inverse $U^{-1}=V$ in
partitioned form
$$
U^{-1}
\begin{bmatrix}
I \\
P
\end{bmatrix}
=
\begin{bmatrix}
V_{11} & V_{12} \\
V_{21} & V_{22}
\end{bmatrix}
\begin{bmatrix}
I \\
U_{21}U_{11}^{-1}
\end{bmatrix}
=
\begin{bmatrix}
V_{11}+V_{12}U_{21}U_{11}^{-1} \\
V_{21}+V_{22}U_{21}U_{11}^{-1}
\end{bmatrix}
$$
But we know $V_{21}U_{11}+V_{22}U_{21}=0$ or $
V_{21}+V_{22}U_{21}U_{11}^{-1}=0$ so the bottom entry is zero and we
conclude the whole expression is zero as required.
Thanks for your answer. Unfortunately, $B_2$ and $B_3$ are not transposes of each other, which, as you correctly point out, makes my problem different from the algebraic matrix Riccati equation.
Have added a solution under the more general conditions. It is always a solution to the two equations but is not guaranteed to give a stable solution, in the sense that $X$ has eigenvalues with negative real part, as I assume is required. Perhaps there are details of $A_1\ldots A_4$ that make that the case. For some random vectors I tried there were close to but not quite $n$ eigenvalues with negative real part.
$
\def\red#1{\color{red}{#1}}
\def\green#1{\color{green}{#1}}
\def\blue#1{\color{blue}{#1}}
$Rearrange the equations
$$\small\eqalign{
(1)\;\;X &= A_1 + A_2P &\quad\implies\quad P &= A_2^{-1}(X-A_1) \\
(2)\;\;P &= A_3X + A_4PX &\quad\implies\quad X &= (A_3+A_4P)^{-1}\:P \\
&&&= \Big(A_3+A_4A_2^{-1}\:(X-A_1)\Big)^{-1}A_2^{-1}\;(X-A_1) \\
&&&= \Big(\red{(A_3-A_4A_2^{-1}A_1)}+\blue{A_4A_2^{-1}}\:X\Big)^{-1}A_2^{-1}\:(X-A_1) \\
&&&= \Big(\red{B}+\blue{C}X\Big)^{-1}A_2^{-1}\:(X-A_1) \\
}$$
Construct a nonlinear matrix equation $\:F(X) = 0,\,$ where
$$\small\eqalign{
F &= X - \Big(B+CX\Big)^{-1}A_2^{-1}\:(X-A_1) \\
}$$
Calculate the differential of $F$
$$\small\eqalign{
dF
&= dX + \Big(B+CX\Big)^{-1} \Big(C\:dX\Big)
\Big(X-F\Big) - \Big(B+CX\Big)^{-1}A_2^{-1}\;dX \\
&= dX + \Big(X+C^{-1}B\Big)^{-1}\,dX\,
\Big(X-F\Big) - \Big(A_2B+A_2CX\Big)^{-1}\:dX \\
}$$
Then vectorize it and recover the Jacobian
$$\small\eqalign{
\def\l{\alpha}
\def\p{\partial}
\def\k{\otimes}
df
&= \Big[I\k I\Big]\,dx
+ \Big[(X-F)^T\k(X+C^{-1}B)^{-1}\,\Big]\,dx
- \Big[I\k(A_2B+A_2CX)^{-1}\,\Big]\,dx \\
J\doteq\frac{\p f}{\p x}
&= \Big[I\k I\Big]
+ \Big[(X-F)^T\k(X+C^{-1}B)^{-1}\,\Big]
- \Big[I\k(A_2B+A_2CX)^{-1}\,\Big] \\
}$$
The Jacobian can be used either in a gradient descent iteration
$$\eqalign{
x_{k+1} &= x_k - \l_k J_k^Tf_k \\
k &= k+1 \\
}$$
or (in the unlikely event you are able to invert it) a Newton-Raphson iteration
$$\eqalign{
x_{k+1} &= x_k - J_k^{-1}f_k \\
k &= k+1 \\
}$$
Since $J\,$ is $\,1400^2\times1400^2\:$ even a gradient step is not feasible in the vectorized form.
Here is the gradient step in a form using mere $\,1400\times1400\:$ matrices
$$\small\eqalign{
X_{k+1}
\:&=\: X_k \:-\: \l_k\left[F_k
+ \Big(X_k+\red{C^{-1}B}\Big)^{-T}F_k\,\Big(X_k-F_k\Big)^T
- \Big(\blue{A_2B}+\red{A_2C}X_k\Big)^{-T}F_k\,\right]
\\
\:&=\: X_k \:-\: \l_k\left[F_k
+ \Big(X_k^T+\red{P}\Big)^{-1}\,F_k\,\Big(X_k-F_k\Big)^T
- \Big(\blue{Q}+X_k^T\red{R}\Big)^{-1}\,F_k\,\right]
\\
}$$
To accelerate convergence, use one of the Barzilai-Borwein steplengths.
They can be modified to accommodate matrix variables like so
$$\small\eqalign{
\def\D{\Delta}
\D X &= X_k-X_{k-1},\qquad \D F = F_k-F_{k-1} \\
\\
\l_k^{LONG} &= \frac{\D X:\D X}{\D X:\D F}
, \qquad
\l_k^{SHORT} = \frac{\D X:\D F}{\D F:\D F}
, \qquad
\l_k^{MEAN} = \sqrt{\frac{\D X:\D X}{\D F:\D F}}
}$$
where a colon denotes the matrix inner product
$$\eqalign{
A:B \;=\; \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij}
}$$
In Julia, this would be implemented as an elementwise product followed by a sum over all elements, e.g.
α = sum(ΔX .* ΔF) / sum(ΔF .* ΔF)
You could avoid inverting the Jacobian by using a finite difference to approximate a Jacobian-vector product and a Krylov method for the linear solve. The gains in efficiency will depend on whether the spectrum of $J$ is clustered around a few points.
X11 Forwarding request failed
I'm trying to use ssh -Y/X ManjaroHost from my Mac, but get "X11 Forwarding request failed". I've searched for the solution for two weeks and have tried many methods suggested by similar posts. It would be a great help if someone could point out my mistakes!
Here are some experiments I've done. To make everything clear, I always ssh from HostA to HostB. HostA is the X server and ssh client, while HostB is the ssh server.
Experiment 1
HostA: My Macbook.
HostB: Another Linux cluster.
It works perfectly; GUI windows pop up on my Mac.
In HostA, echo $DISPLAY --> /private/tmp/com.apple.launchd.6AxM1TJrRh/org.xquartz:0
In HostB, echo $DISPLAY --> localhost:10.0
So I think my Mac end works fine.
Experiment 2
HostA: My Macbook. HostB: Manjaro Linux Lenovo.
HostA: DISPLAY is /private/tmp/com.apple.launchd.6AxM1TJrRh/org.xquartz:0
HostB: DISPLAY is empty.
Here is the debug information from ssh -Yvvv
...
...
...
debug1: Requesting X11 forwarding with authentication spoofing.
debug1: Requesting authentication agent forwarding.
debug1: Sending environment.
debug1: channel 2: setting env LC_TERMINAL_VERSION = "3.4.15"
debug1: channel 2: setting env LANG = "en_US.UTF-8"
debug1: channel 2: setting env LC_TERMINAL = "iTerm2"
debug1: mux_client_request_session: master session id: 2
Last login: Wed Nov 9 13:55:34 2022 from <IP_ADDRESS>
X11 forwarding request failed
Experiment 3
In case this is because of some network setup, I tried ssh -Y <IP_ADDRESS> in Manjaro Linux Lenovo
HostA = HostB = Manjaro Linux Lenovo
HostA: DISPLAY = :0
HostB (after ssh): DISPLAY is empty.
The debug information from ssh -Yvvv is
ssh -Yvvv <IP_ADDRESS>
...
...
...
debug1: client_input_hostkeys: no new or deprecated keys from server
debug3: receive packet: type 91
debug2: channel_input_open_confirmation: channel 0: callback start
debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
debug1: Requesting X11 forwarding with authentication spoofing.
debug2: channel 0: request x11-req confirm 1
debug3: send packet: type 98
debug2: fd 3 setting TCP_NODELAY
debug3: set_sock_tos: set socket 3 IP_TOS 0x48
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug3: send packet: type 98
debug2: channel 0: request shell confirm 1
debug3: send packet: type 98
debug2: channel_input_open_confirmation: channel 0: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug3: receive packet: type 100
debug2: channel_input_status_confirm: type 100 id 0
**X11 forwarding request failed on channel 0**
debug3: receive packet: type 99
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug3: receive packet: type 99
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Last login: Wed Nov 9 14:43:39 2022 from <IP_ADDRESS>
It still shows "X11 forwarding request failed on channel 0"
Here is my Manjaro sshd_config file
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
AllowTcpForwarding yes
AllowAgentForwarding yes
PermitRootLogin yes
ssh_config file
HOST *
#ServerAliveInterval 60
#ServerAliveCountMax 5
ForwardAgent yes
ForwardX11 yes
#ControlPersist yes
ControlMaster auto
ForwardX11Trusted yes
My Mac ssh_config file
HOST *
ServerAliveInterval 60
ServerAliveCountMax 5
ForwardAgent yes
ForwardX11 yes
ControlPersist yes
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
ForwardX11Trusted yes
identityfile ~/.ssh/id_rsa_gmail
I think I've tried everything I can, but I still fail to figure out the issue. I'd appreciate it if someone could help!
The most common cause is that the xauth package, and thus the executable, is missing on the server. Run which xauth to find out. Usually you can install it with apt install xauth or similar.
If it is installed and X11 forwarding still fails: you can enable sshd debugging on the server side too.
Assuming you have root access on the server, find the script starting sshd. On a modern system using systemd, you can do sudo systemctl status ssh to find it. Mine is called /lib/systemd/system/ssh.service. Look for a line starting with ExecStart=. The debugging option for sshd is -d.
You can check the server logfile with journalctl -b -a -u ssh.
Good luck.
Thanks very much for the suggestion of debugging! It helps!
Thanks @Jdehan for your great help! Now it works!
With sudo, I set up debug mode for sshd on the remote server. From the debug information, I found that the sshd service only loads the /etc/ssh/sshd_config file, without loading the .ssh/sshd_config file! So all my edits to ~/.ssh/sshd_config had never really been loaded into the system.
In the ~/.ssh folder, only the keys, the known_hosts file, and config work. The ssh_config and sshd_config files in .ssh do not, unless you explicitly load them with -f.
In my case the cause for the ssh client message "X11 forwarding request failed" was that the ssh daemon (on the server) was not able to bind to any port 6000+ (the X11 forwarding port range), because it tried only IPv6, and that wasn't working (it was in a Docker container and probably switched off).
The solution was to change a line in /etc/sshd_config: from AddressFamily any (the default) to AddressFamily inet (i.e., no inet6).
Thanks man - recently disabled ipv6 for security reasons and this was my case
Glad it helped.
The /etc/ssh/sshd_config is meant just for the server process sshd. Your ~/.ssh/config is used by the outgoing ssh client process. There is no overlap in usage either way, unless you force them, which is definitely not recommended.
Query using geom_bar() of ggplot2 - R
I have a similar data frame as follows:
mapDF <- structure(list(var = c(11L, 3L, 4L, 15L, 19L, 17L, 1L), approvals = c(10.5233545765422,
67.9809421770218, 9.66394835013545, 2.93736399165075, 3.36787205222721,
4.0168261757783, 1.50969267664431)), .Names = c("var", "approvals"
), row.names = c(NA, -7L), class = "data.frame")
When I try creating a bar graph using the data frame above using:
gplot <- ggplot(mapDF, aes(x= mapDF[1], y= mapDF[2])) + geom_bar()
... I get the following messages, with nothing showing up in the 'Plots' section of RStudio:
Don't know how to automatically pick scale for object of type data.frame. Defaulting to continuous
Don't know how to automatically pick scale for object of type data.frame. Defaulting to continuous
Error: stat_bin requires the following missing aesthetics: x
Can anyone please point out my error?
Ever notice how in all the ggplot code you've ever seen people map aesthetics inside aes using the name of the column...? :)
(And you will want stat = "identity" inside geom_bar.)
Rolling @joran's comments into an answer:
ggplot(mapDF, aes(x=var, y=approvals)) + geom_bar(stat="identity")
Getting an Error trying to add a user to mongodb 2.6.5
I have spent the majority of my day trying to add a user to MongoDB 2.6.5. I installed & re-installed Mongo via MacPorts.
Try #1 ( addUser(); ):
1. # switching to the admin user
> use admin
2. # db.addUser("admin", "password");
When use the addUser(); I get this error:
WARNING: The 'addUser' shell helper is DEPRECATED. Please use 'createUser' instead
2014-11-03T15:19:16.193-0500 Error: couldn't add user: User and role management
commands require auth data to have schema version 3 but found 1 at src/mongo/shell/db.js:1004
Try #2 ( createUser(); ):
1. # switching to the admin user
> use admin
2. # db.createUser("admin", "password");
When use the createUser(); I get this error:
2014-11-03T15:25:38.243-0500 Error: couldn't add user: no such cmd: 0 at src/mongo/shell/db.js:1004
I have spent a lot of time looking up other people's questions and answers. Does anyone know how to fix this?
createUser expects a document, not a string literal. For example:
db.createUser( { "user" : "accountAdmin01",
"pwd": "cleartext password",
"customData" : { employeeId: 12345 },
"roles" : [ { role: "clusterAdmin", db: "admin" },
{ role: "readAnyDatabase", db: "admin" },
"readWrite"
] },
{ w: "majority" , wtimeout: 5000 } )
Or
db.createUser(
{
user: "accountUser",
pwd: "password",
roles: [ "dbAdmin", "userAdmin" ]
}
)
Refer db.createUser() for more information
However, before you add the user, check whether the auth schema exists by executing:
db.system.users.find()
If it does not return anything, then you need to create the schema by executing:
db.system.users.insert({
"roles" : [ "userAdmin", "dbAdmin"],
"userSource":"$external",
"user" : "dbadmin"
})
You should not get an error while adding the user once the schema is generated.
Or if you have upgraded to MongoDB 2.6, then you need to upgrade the auth schema by executing:
db.getSiblingDB("admin").runCommand({authSchemaUpgrade: 1 })
Thanks for responding so quickly. I'm new to Mongo, and maybe you might see something I am doing wrong. I have posted a gist of my terminal window: https://gist.github.com/rvirvo/622891629c946535b23f
What roles do you want to assign to this user?
I would like to assign the role of admin, or userAdmin.
Thanks ARS! I ended up getting it. I had to update the schema with db.getSiblingDB("admin").runCommand({authSchemaUpgrade: 1 }); then I could create the user.
Convert pressure cooker recipes for 8 psi
I have a new electric multi cooker; it works at 8 PSI, and I am having difficulty finding out how to convert stovetop pressure cooker times at 15 PSI down to 8 PSI. Hoping for your help.
Regards
Joan
Straight or Drop handlebars?
Can someone explain to me the pros/cons of drop or straight? I've never used drops before and have never owned a good road bike (just cheap mountain bikes) - I am buying a decent single speed soon and unsure if I am better to go with straight or drop bars.
I am not an experienced cyclist and the bike will be used mainly for fun/commuting (no on-road riding where I am weaving between cars or anything; mostly cycleways).
Drop bars
The main advantage of drop bars is that you have a variety of positions for your hands, giving you options to make yourself more aerodynamic or just have a change if you get a bit sore in one spot. Additionally, having your hands 'pointing forward' is a more neutral position for your shoulders, whereas flat bars cause them to rotate outwards.
Related questions:
What are the positions on drop bars called?
How to use drop bars properly?
Flat bars
The main advantage of flat bars is that you are always in the optimal position to grab the brakes and gears. Additionally, flat bars can be wider, providing more control.
Other bar types
Different kinds of Handlebars
As long as you stick to the hoods and drops, and leave the tops for things like climbing hills, you should always be in a good position to use your brakes. Although there is less braking power from the hoods, you should still be able to stop relatively quickly if your brakes are set up properly.
Multisites - Select homepage based on location
Would it be possible to have the index.php of domain.com be a dropdown to choose the homepage based on whether the user is in city X or city Y? I'd then like a cookie to be set that remembers the user's choice.
Controller action alias in ASP.NET MVC CORE
I have an action called /Telemetries/Page2. I would like /Telemetries/Index to be an alias of the aforementioned action.
I added the following lines to startup.cs/Configure:
app.UseMvc(routes =>
{
routes.MapRoute(
name: "alias_route_home",
template: "Telemetries/Index",
defaults: new { controller = "Telemetries", action = "Page2" });
});
app.UseMvc(routes =>
{
routes.MapRoute(
name: "alias_route_events",
template: "Events/Index",
defaults: new { controller = "Events", action = "Pagina5" });
});
and it works!
Is this MVC4 or MVC Core? You have tagged both. Also, please show how you are doing your routing now. Finally, you know that multiple URLs that point to the same page are bad for Google SEO?
app.UseMvc(routes =>
{
routes.MapRoute(
name: "alias_route",
template: "Telemetries/Index",
defaults: new { controller = "Telemetries", action = "Page2" });
});
Martin, in addition to the route, is there a way to make the action Index respond with the action Page2?
I do not know another way; routing is supposed to be used for such tasks.
See https://stackoverflow.com/questions/57628092/asp-net-core-controller-action-aliasing
Execute Hive Query with IN clause parameters in parallel
I am having a Hive query like the one below:
select a.x as column from table1 a where a.y in (<long comma-separated list of parameters>)
union all
select b.x as column from table2 b where b.y in (<long comma-separated list of parameters>)
I have set hive.exec.parallel to true, which is helping me achieve parallelism between the two queries joined by union all.
But my IN clause has many comma-separated values, and each value is processed in one job before the next value starts. This is actually getting executed sequentially.
Is there any Hive parameter which, if enabled, can help me fetch data in parallel for the parameters in the IN clause?
Currently, the solution I have is to fire the select query with = multiple times instead of one IN clause.
There is no need to read the same data many times in separate queries to achieve better parallelism. Instead, tune proper mapper and reducer parallelism.
First of all, enable PPD with vectorizing, use CBO and Tez:
SET hive.optimize.ppd=true;
SET hive.optimize.ppd.storage=true;
SET hive.vectorized.execution.enabled=true;
SET hive.vectorized.execution.reduce.enabled = true;
SET hive.cbo.enable=true;
set hive.stats.autogather=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.partition.stats=true;
set hive.execution.engine=tez;
SET hive.stats.fetch.column.stats=true;
SET hive.tez.auto.reducer.parallelism=true;
Example settings for Mappers on Tez:
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
set tez.grouping.max-size=32000000;
set tez.grouping.min-size=32000;
Example settings for Mappers if you decide to run on MR instead of Tez:
set mapreduce.input.fileinputformat.split.minsize=32000;
set mapreduce.input.fileinputformat.split.maxsize=32000000;
--example settings for reducers:
set hive.exec.reducers.bytes.per.reducer=32000000; --decrease this to increase the number of reducers, increase to reduce parallelism
Play with these settings. The success criterion is more mappers/reducers, with your map and reduce stages running faster.
Read this article for better understanding of how to tune Tez: https://community.hortonworks.com/articles/14309/demystify-tez-tuning-step-by-step.html
Thanks for the answer.
hive.vectorized.execution.enabled cannot be used as my data is not in ORC format. It is in Avro format. Check https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties
hive.cbo.enable cannot be used as I have hive version 0.13.1
Tez is not present in Cloudera 5.3.3 :(
@vijayinani Use mapper settings for MR then. And config for reducers works both for Tez and MR.
@leftjoin, thanks for this answer. I am running a query to remove duplicates (240 million records); is there any way to decide what value we should use for mapreduce.input.fileinputformat.split.maxsize and hive.exec.reducers.bytes.per.reducer?
@Vijiy Experimentally: choose the best configuration based on performance. It depends on your cluster capacity also. If too many mappers (or reducers) are started, they may wait in the queue. It also depends on the task they are doing.
Is this the best way to handle storage of uploaded files in Django?
I'm not sure if I should use something like fs.save() or another method, such as chunking, to handle file uploads in Django. So far this is my method:
def handle_uploaded_file(uploaded_file):
    filename = uploaded_file.name
    filepath = os.path.join(settings.MEDIA_ROOT, filename)
    with open(filepath, 'wb') as destination:
        for chunk in uploaded_file.chunks():
            destination.write(chunk)
    return filepath

@csrf_protect
def transcribeSubmit(request):
    if request.method == 'POST':
        form = UploadFileForm(request.POST, request.FILES)
        if form.is_valid():
            print('valid')
            uploaded_file = handle_uploaded_file(request.FILES['file'])
The purpose of this view is to handle video and audio uploads, if that is relevant. I haven't been able to find any good, simple documentation on this.
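As a side note (not from the question itself): the chunked-copy pattern can be exercised without Django. Below, Django's UploadedFile.chunks() is simulated with a plain generator so the sketch is self-contained and runnable anywhere; names like iter_chunks and the 64 KB chunk size are arbitrary choices of mine.

```python
import io
import os
import tempfile

def iter_chunks(fileobj, chunk_size=64 * 1024):
    # Stand-in for Django's UploadedFile.chunks(): yield fixed-size pieces.
    while True:
        piece = fileobj.read(chunk_size)
        if not piece:
            break
        yield piece

def save_chunks(fileobj, filepath):
    # Write each chunk as it arrives, so a large video never sits in memory whole.
    with open(filepath, 'wb') as destination:
        for piece in iter_chunks(fileobj):
            destination.write(piece)
    return filepath

# Usage: copy a fake 200 KB "upload" into a temporary directory.
payload = os.urandom(200 * 1024)
with tempfile.TemporaryDirectory() as media_root:
    saved = save_chunks(io.BytesIO(payload), os.path.join(media_root, 'clip.bin'))
    with open(saved, 'rb') as f:
        copied = f.read()
```

This is the same pattern the question's handle_uploaded_file uses: the point of iterating chunks is that memory use stays bounded regardless of the upload size.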
Cancelling an AngularJS ng-click event from a directive
I've written a directive that should allow text to be selectable inside an element that has an ng-click on it. When the mouse is pressed and then moved past a threshold, the click event is cancelled.
Right now this only partially works and I can't figure out why. When the text is contained within another child element it works. Otherwise, it doesn't.
<tr>
<td selectable-text ng-click="doThing()">
When this text is selected the click event fires regardless
</td>
</tr>
<tr>
<td selectable-text ng-click="doThing()">
<strong>When this text is selected, the click does not fire (as expected!)</strong>
</td>
</tr>
DEMO: http://codepen.io/chrismbarr/pen/wgLJBx
app.directive("selectableText", () => {
const mouseMoveThreshold = 5; //in pixels
const linkFn = (_$scope: angular.IScope, $element: angular.IRootElementService): void => {
let killClick = false;
$element.on("mousedown", (downEvent: JQueryMouseEventObject) => {
//reset this with each click
killClick = false;
//When the mouse button is pressed, start watching the mouse movements and record the differences
const diffs = { x: 0, y: 0 };
$element.on("mousemove", (moveEvent: JQueryMouseEventObject) => {
diffs.x = Math.abs(moveEvent.clientX - downEvent.clientX);
diffs.y = Math.abs(moveEvent.clientY - downEvent.clientY);
console.info(diffs);
if (diffs.x >= mouseMoveThreshold || diffs.y >= mouseMoveThreshold) {
//If the mouse have moved more than the threshold in any direction, cancel the default click event behavior
killClick = true;
}
});
});
$element[0].addEventListener('click', (evt: MouseEvent) => {
//When the mouse button is released, kill the move event no matter what
$element.off("mousemove");
if (killClick) {
console.info("Should have killed click event")
//if set, stop the default click event from happening
//I've tried all of these...
evt.preventDefault();
evt.stopPropagation();
evt.stopImmediatePropagation();
}
}, true);
};
return {
link: linkFn,
restrict: 'A'
};
});
For me it works the same. But this actually doesn't matter; this idea in general seems wrong. There are a lot of ways to select: using shift, double click, (something else?). Also, I haven't seen such behavior on the web, so it is not common. Usually, if the mouse is a "pointer" you may click, OR it looks like selection and you can select; combining these will just confuse the user.
The goal is just to allow people to select text if they want to do so, just something "extra". It's not a primary feature. I'm trying to reduce confusion by allowing the text to be selectable instead of the click event firing when they attempt this. However... yes my demo does seem to be working correctly for me as well... so I'm not entirely sure why that code isn't working in my app right now. Very strange!
Currency table in CSS
I have a table and I need to format the currency so that the decimal points are always displayed aligned under each other.
This is the table:
<table class="data" cellspacing="0" cellpadding="5" border="0">
<thead>
<tr>
<th>Date</th>
<th>Description</th>
<th>Field1</th>
<th>Field2</th>
<th>Balance</th>
</tr>
</thead>
<tbody>
<tr class="verticalDivider"></tr>
<tr>
<td>08 April 2010</td>
<td>value 1</td>
<td>GBP 20.00</td>
<td> </td>
<td>GBP 20.00</td>
</tr>
<tr>
<td>08 May 2010</td>
<td>value 2</td>
<td>GBP 100.00</td>
<td> </td>
<td>GBP 1020.00</td>
</tr>
<tr>
<td>19 May 2010</td>
<td>value 3</td>
<td> </td>
<td>GBP 50.00</td>
<td>GBP 970.00</td>
</tr>
</tbody>
</table>
How can I achieve this?
This jQuery plugin makes this easy: https://github.com/ndp/align-column
How does this look?
<style type="text/css">
.price {
text-align: right;
}
</style>
<table class="data" cellspacing="0" cellpadding="5" border="0">
<thead>
<tr>
<th>Date</th>
<th>Description</th>
<th>Field1</th>
<th>Field2</th>
<th>Balance</th>
</tr>
</thead>
<tbody>
<tr class="verticalDivider"></tr>
<tr>
<td>08 April 2010</td>
<td>value 1</td>
<td class="price">GBP 20.00</td>
<td> </td>
<td class="price">GBP 20.00</td>
</tr>
<tr>
<td>08 May 2010</td>
<td>value 2</td>
<td class="price">GBP 100.00</td>
<td> </td>
<td class="price">GBP 1020.00</td>
</tr>
<tr>
<td>19 May 2010</td>
<td>value 3</td>
<td> </td>
<td class="price">GBP 50.00</td>
<td class="price">GBP 970.00</td>
</tr>
</tbody>
</table>
Obviously I would place the CSS at the top, in a separate stylesheet.
What I needed was to align the words GBP under each other, and the decimals under each other as well. I managed to do this using nested tables, but it was not worth it, so I used the right alignment instead.
To have the currency symbol (GBP) AND the dots aligned you can do the following (tested on Chrome and Firefox, breaks on IE):
CSS file:
...
td.money {
text-align: right;
}
.currencySymbol {
float: left;
}
...
And your table cell would look like:
<td class="money">
<div class="currencySymbol">GBP</div>
970.00
</td>
Although it's dangerous (probably the reason why it breaks on IE), see: Is a DIV inside a TD a bad idea?
<td align="right">GBP 20.00</td>
<td align="right">GBP 100.00</td>
<td align="right"> </td>
I guess that's what you are looking for, as long as there is ".00". If I were you, I would start using CSS even for this bit of code, where you need to edit 3 places instead of one.
Just a note: this is not valid XHTML/HTML5.
Can you use "Ich habe es geknickt" to mean "decided against having done" something?
I'm familiar with the expression "kannst du knicken" to mean "forget it", but can "Ich habe es geknickt" be used to express the idea of deciding against something (in the past tense)?
No. But I can’t elaborate on why.
I added the colloquial tag, since even the infinitive form looks questionable in writing.
I would not have understood Ich habe es geknickt had I not been told what it is supposed to mean. Interesting to see that quite a few people here seem to have no issue with it.
@johnl Austrian native speaker here. I also haven't heard this use of "knicken" in my whole life. To me personally it sounds like something a foreigner would say when they can't remember the correct word (which I'd say is "aufgegeben", "hingeschmissen", or something like that). From the sound of it, I'd guess it has to be a very North German thing.
I am familiar with "kannst du knicken" but I haven't encountered the past tense version, and if someone told me that without relation to "kannst du knicken", I probably wouldn't understand it. (Compare "das kannst du knicken - ja, ich hab es geknickt" with "ich hab es geknickt" without the implicit explanation of what is meant.) If you can do something - who wants to forbid it ;-) - it doesn't mean everyone else will understand. Over-using such fuzzy phrases can become quite embarrassing when you need a long-winded explanation of a short statement.
@puck: I assume the "long-winded explanation" is needed for any colloquial abbreviation. Because I claim that colloquial abbreviations - especially those using loan words from "high language" - are always context sensitive. Like with normal abbreviations: DDR means different things depending on context. Und ich weiß, dass die DDR unfähig war, DDR zu erzeugen ;-)
@ShegitBrahm I assume the "long-winded explanation" is needed for any colloquial abbreviation - well, I wouldn't think so. A widely known phrase doesn't need explanations; this is why it exists. Why would you use a short phrase for something that nobody understands when you know you will have to add an explanation? If such a phrase needed an explanation, then it would be quite worthless.
@puck: OK, sorry, I forgot a detail: the colloquial abbreviation is widely known by the peer group, inside the "bubble of same interest". I have no idea about the usual phrases of car mechanics or medics. Yet I'm quite sure these exist, speed up their communication, and need some context explained before I, as a bystander, am able to understand them.
Yes, you can. At least: I did and still do so, and I get my message delivered.
So the infinitive of the verb would be "etwas knicken". Thus the usual rules for "knicken" apply "in general", just with the addition that it is combined with "haben" instead of "sein", and that "etwas/es" is always grammatically present - yet usually gets (visually) omitted in colloquial speech: Das habe ich geknickt. = Hab' ich geknickt.
I found this quote in a forum from around 2013:
(quote made by E. W.)
Post by C. S.:
Post by E. W.:
Variante 2, nebenbei eine weisse Konblauchsauce herstellen und die Mupfeln nach dem Kochen abgiessen und darin versenken.
Ich finde das würde den feinen Geschmack von den Muscheln völlig tot
schlagen, Ede.
Ich gebe Dir Recht.
Ich habe es geknickt.
In this forum discussion, person E. W. proposes something about cooking. C. S. argues against it. Later, E. W. confirms that the decision made was against E. W.'s own proposal. While there is no conclusive indication whether E. W. talked about a past or present decision, I take it from the time stamps that there is actual doing in the kitchen involved.
While I personally would use "Hab' ich geknickt." to talk in a colloquial manner, your proposal is just fine. Instead of saying that someone else should "not think about it happening" it says that I decided to "not make it happen".
I guess it is due to the phrase being "colloquial" that I did not find a more "reputable" source.
Of course you can. I don't know, if it is common, but I immediately understood what you were trying to say. And that's the point.
kannst du knicken
is common and everybody who knows that one, will understand the other version. The grammar is also perfectly fine.
It's not necessarily immediately obvious, depending on the context. If you reply to „kannst du knicken“ with „ok, habe es geknickt“, then it should be clear. But if you say „Ich habe nicht verstanden, wie man aus Gras eine Pfeife macht. Ich habe es dann geknickt.“, then that would be quite confusing. (Those are two extreme examples, but I guess there can be a similarly ambiguous context which might lead to the receiver not even thinking about the expression.)
@eckes Right, but the same is true for "kannst Du knicken". The meaning depends on the context.
@PaulFrost Yes, but it's much more common.
jquery find('[id*=operator]') does not work in asp .net
What might be the reason that this code does not work?
There is a DataList, and an asp:Menu with the id Operator is located inside it.
I use jQuery in order to get it:
$('#<%= DataList.ClientID %>').find('[id*=Operator]').each(function () {
strEnd = strEnd + ' ' +$(this).attr('id');
});
alert(strEnd);
The funny thing is that I can locate it when I use find('[id*=o]'). Why does a single letter work while multiple letters do not? (I tried with quotes around Operator; it does not work.) I also tried using 'Op' and 'op' instead of Operator; neither works.
What I want to do: I want to get this menu object. Theoretically I should be able to do it simply with <%=Operator.ClientId%>; however, it is inside this DataList, therefore when I try to use it, it throws an error which says that the object does not exist.
The strange thing is that when I use [id$=Operator], it selects the object.
Example:
<asp:DataList ID="DataList" runat="server">
<table cellpadding="0" cellspacing="0">
<tr >
<td >
<asp:Menu ID="Operator" runat="server"></asp:Menu>
</td>
</tr>
</table>
</asp:DataList>
Could you paste a sample of an id that you expect to select? Please inspect the HTML to get it.
@Darqer It would probably be more useful to show a sample of the generated markup as well as the generated javascript.
Is the id in capital letters when the DOM is generated?
This id has the following value: ctl00_ctl00_c_c_DisplyList_DataList_ctl00_RowIndex ctl00_ctl00_c_c_DisplayList_DataList_ctl00_Operator
Probably looking for something like: http://stackoverflow.com/questions/609382/jquery-selector-id-ends-with
Is there a database programming language with encapsulation to prevent the injections?
One of the things that annoys me about SQL is that it can't think in terms of objects, and its lack of encapsulation makes me constantly have to escape commands to prevent injections.
I want a database language that can be polymorphic and secure. I have searched online for non-procedural database programming languages, and so far my Google search has been unsuccessful.
I know in languages like PHP there are ways to prevent injections by encapsulating things well, but not all database programming situations involve embedding the database language in another language.
In situations where it's database programming only, is there a database programming language that is object oriented in itself? If not, are they working on one?
Are you talking about SQL itself, or the API to connect and work with the database? SQL isn't procedural, so I'm a bit confused.
Forget the procedural part. Just is there a database language with encapsulation?
SQL actually has encapsulation built into the language, specifically in the part you're talking about, for preventing injection. The basic idea is, if you're escaping your SQL queries as you build them, you're doing it wrong.
Wrong way:
SQL = 'select * from MY_TABLE where NAME = ' + Escape(nameParam);
RunQuery(SQL);
Query encapsulation in SQL is called using parameters. It looks something like this:
SQL = 'select * from MY_TABLE where NAME = :name';
Params.Add(nameParam);
RunQuery(SQL, Params);
This sends the query parameters to the database as something separate from the query itself, so it doesn't get parsed as part of the string, making injection impossible. The database engine substitutes the parameter in for the param token (:name).
This also has efficiency benefits. On the client side, you don't have to concatenate strings, and you can usually declare your SQL strings as constants. And on the database side, the DB engine can cache a parametrized query and use the same query plan if you reuse it multiple times, making data access faster.
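To make the parameterized form concrete with a real driver, here is a minimal runnable sketch using Python's built-in `sqlite3` module (the placeholder syntax varies by driver: `?` for SQLite, `:name` or `%s` elsewhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (name TEXT)")
conn.execute("INSERT INTO my_table VALUES ('alice')")

# The parameter travels separately from the SQL text, so this classic
# injection attempt is treated as a literal value, not as SQL.
name_param = "alice' OR '1'='1"
attack = conn.execute(
    "SELECT * FROM my_table WHERE name = ?", (name_param,)
).fetchall()
print(attack)  # [] -- no match, the attack string is just a string

rows = conn.execute(
    "SELECT * FROM my_table WHERE name = ?", ("alice",)
).fetchall()
print(rows)  # [('alice',)]
```

No escaping ever happens in application code; the value is simply never parsed as SQL.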
Exactly how parameters work on the client side varies, based on the language, database, and DB access library you're using. Look at your documentation to figure out how it's done. But AFAIK all SQL databases support it, so you shouldn't have much trouble being able to use them.
@DrinkJavaCodeJava: SQL will also let you use execute stored procedures by passing in query parameters. Usually, the code for doing so is similar to the code for parameterizing inline SQL. If you're concerned about injection, one way to make it far more difficult is to change your permissions to only provide access to execute stored procedures, and then make sure those stored procedures are safe.
@Brian. Procs are themselves invoked by sql and are vulnerable to injection. Even when using procs, the parameterization described by MasonWheeler is needed. Here is an example of a proc execution sql with an injection hole: "myProc " + var1
@Mike: If your DB connection only has execute permission, "exec myProc + [EVIL Injection Code]" will trigger a permission error. Though obviously you should avoid that issue entirely by just using parameterized queries for the proc. Of course, a stored procedure can also execute dynamic sql explicitly (e.g., by calling sp_executesql). Don't do that.
I think you used the wrong word at the beginning there, you should replace that with ADO.NET / JDBC / Common database drivers. The escaping is necessary, but the drivers will do it for you; neither the SQL language nor any SQL servers will do this escaping for you so the way you explained this, it comes off a little inaccurate even though you understand the reality of the mechanism accurately.
@Jimmy: What do you mean? As I said, the exact details of how it works vary from one DB access library to another, but the basic principle as I described it is accurate, and although I wrote it basically as pseudocode, the style is quite close to the way I do queries in actual work on one of my projects.
I'm referring to the opening statement, "SQL actually has encapsulation built into the language, specifically in the part you're talking about, for preventing injection", specifically that you say SQL has it in the language, when it is not in the SQL language or database, but rather in the database connectivity libraries which you allude to at the end.
@JimmyHoffa: It's actually part of the database for at least some databases. I know MS SQL Server has Parameters support baked in, and I'm pretty sure Firebird does as well. They make it part of the database, and not "something the connectivity library interpolates into the query," so that they can cache the plan and not need to reparse it when you run the same query multiple times with different param sets. Those are the two I use most often. Not sure about other DBs.
I think that parameters support is a little different, the fact that a double quote is escaped when being handed across the pipe to the SQL server is still a necessity of the library considering if it is not, the server has no alternative but to assume that is the end of the parameter's value. The execution plan cache does cache parameterized queries but this meaning of "parameter" is more akin to "parameter" of a stored procedure which in no SQL requires a libraries "command" object to be executed, and when using query analyzer/management studio does require manual escaping of the parameters.
Select rows based on date and time range
I am querying a table based on two columns date and time to return the rows:
tx_creation_date [date] and tx_creation_time [time(7)]
With the following query statement:
public interface PurchaseRepository extends PagingAndSortingRepository<Purchase, BigDecimal> {
    @Query(value = "select p from Purchase p where (p.txCreationDate >= :startDate "
            + "and p.txCreationTime >= :startTime) and (p.txCreationDate <= :endDate "
            + "and p.txCreationTime <= :endTime)")
    public List<Purchase> findAllTxByTimestampRange(@Param("startDate") LocalDate startDate,
            @Param("startTime") LocalTime startTime, @Param("endDate") LocalDate endDate,
            @Param("endTime") LocalTime endTime);

    // other queries...
}
Here is a snippet of the Purchase class:
@Entity
@Table(name = "po_log")
public class Purchase implements Serializable {
private static final long serialVersionUID = -1L;
@Id
@Column(name = "po_number")
private BigDecimal poNumber;
private String type;
@Column(name = "tx_creation_date")
@Type(type = "org.jadira.usertype.dateandtime.joda.PersistentLocalDate")
private LocalDate txCreationDate;
@Column(name = "tx_creation_time")
@Type(type = "org.jadira.usertype.dateandtime.joda.PersistentLocalTime")
private LocalTime txCreationTime;
..
..
getters and setters
}
Upon execution I got the following error message:
The data types datetime and time are incompatible in the greater than or equal to operator
I've tried a couple of tricks with no success:
Setting this sendTimeAsDateTime=false in the connection properties
Use of cast (txCreationTime as time) or convert
Use of cast to concatenate date and time as datetime
Any help is greatly appreciated.
Sidenote: Selecting rows with a date range only succeeds. Therefore the issue truly lies in the time comparison.
My current setup:
Spring annotations
Hibernate 4
Jadira for user types and Joda for LocalDate and LocalTime
SQL Server 2014
localDate from user formatted to yyyyMMdd
localTime from user formatted to HHmmssSSS
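Separately from the type error, note that ANDing the per-column comparisons is not logically equivalent to comparing combined timestamps; a range that spans midnight silently drops valid rows. A quick illustration of the predicate logic (Python here purely to demonstrate the logic, not the Spring/JPA fix):

```python
from datetime import date, time, datetime

start = datetime(2024, 1, 1, 22, 0)   # range start: 22:00
end = datetime(2024, 1, 2, 2, 0)      # range end: 02:00 the next day

row = (date(2024, 1, 2), time(1, 0))  # a row at 01:00, inside the range

# per-column predicate, mirroring the JPQL in the question
naive = (row[0] >= start.date() and row[1] >= start.time()
         and row[0] <= end.date() and row[1] <= end.time())

# comparing the combined timestamp instead
combined = start <= datetime.combine(row[0], row[1]) <= end

print(naive, combined)  # False True
```

The per-column test rejects the row because 01:00 is "before" 22:00, even though the row clearly falls in the range; combining date and time before comparing avoids this.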
If you are using the jTDS driver, try the native Microsoft JDBC driver; it implements JDBC 4 and should work properly with time.
Or, you can map the time column in your entity as a String and set the params in the query with something like new SimpleDateFormat("HH:mm:ss.SSS").
I am already using the Microsoft JDBC 6 driver and tried out version 4 just to be thorough. No luck. So I switched to a native query. Anyway, thank you for your input.
Improving performance on SQL query
I'm currently having performance problems with an expensive SQL query, and I'd like to improve it.
This is what the query looks like:
SELECT TOP 50 MovieID
FROM (SELECT [MovieID], COUNT(*) AS c
FROM [tblMovieTags]
WHERE [TagID] IN (SELECT TOP 7 [TagID]
FROM [tblMovieTags]
WHERE [MovieID]=12345
ORDER BY Relevance ASC)
GROUP BY [MovieID]
HAVING COUNT(*) > 1) a
INNER JOIN [tblMovies] m ON m.MovieID=a.MovieID
WHERE (Hidden=0) AND m.Active=1 AND m.Processed=1
ORDER BY c DESC, m.IMDB DESC
What I'm trying to find is movies that have at least 2 matching tags with MovieID 12345.
Database basic scheme looks like:
Each movie has 4 to 5 tags. I want a list of movies similar to any movie based on the tags. A minimum of 2 tags must match.
This query is causing my server problems as I have hundreds of concurrent users at any given time.
I have already created indexes based on execution plan suggestions, and that has made it quicker, but it's still not enough.
Is there anything I could do to make this faster?
You should cache semi-immutable stuff like this.
Suggestion 1 - never use count(*), always use count(some_key)
Maybe http://msdn.microsoft.com/en-us/library/dd171921%28SQL.100%29.aspx
if you are grouping by movie_id you can simply use count(movie_id) instead of count(*)
How many movies? How many tags?
What is TOP 7 doing in the middle of this without an ORDER BY?
@AbhishekGoel Why; worst suggestion ever.
using count(*) over a specified key is a very slight performance hit, however that's not what's costing the most, it's all the embedded statements.
What does the execution plan say is the costliest part of this?
@AbhishekGoel That statement is just incorrect. If some_key in count(some_key) is anything but the clustered index of the table it will be slower than Count(*). Also note that count(some_key) counts only non null values.
@MarcusAdams Movie database consists of about 2,000,000 and tags could be around 1,000,000.
@HABO You are right, when simplifying this example for my question, I accidentally removed it! It's in there now with a field "Relevance". This is basically an int field with the most relevant tag starts as value of 1.
@JohnChrysostom It says that 91% of the cost is happening on a Key Lookup. Here is a picture: http://i.imgur.com/CLx7dYt.png. Please note that the tblSiteTags is the tblMovieTags in my example
If you are really having performance issues, why not separate the data into a single table with the results of this query for each movie in it, then update that table periodically? I would imagine the tags do not change all that often. All client matches then run from this single table, where you can control the indexes and the data contained.
@Yakyb I was hoping I could avoid doing it that way, but it's looking more like this would be the best solution.
What are the indexes? Is MovieTags indexed on TagId with MovieId as an included column? Are the index statistics current?
I like to use temp tables, because they can speed up your queries (if used correctly) and make them easier to read. Try using the query below and see if it speeds things up any. There were a few fields (hidden, imdb) that weren't in your schema, so I left them out.
This query may, or may not, be exactly what you are looking for. The point of it is to show you how to use temp tables to increase the performance and improve readability. Some minor tweaks may be necessary.
SELECT TOP 7 [TagID],[MovieTagID],[MovieID]
INTO #MovieTags
FROM [tblMovieTags]
WHERE [MovieID]=12345
SELECT mt.MovieID, COUNT(mt.MovieTagID)
INTO #Movies
FROM #MovieTags mt
INNER JOIN tblMovies m ON m.MovieID=mt.MovieID AND m.Active=1 AND m.Processed=1
GROUP BY [MovieID]
HAVING COUNT(mt.MovieTagID) > 1
SELECT TOP 50 * FROM #Movies
DROP TABLE #MovieTags
DROP TABLE #Movies
Edit
Parameterized Queries
You will also want to use parameterized queries, rather than concatenating your values in your SQL string. Check out this short, to the point, blog that explains why you should use parameterized queries. This, combined with the temp table method, should improve your performance significantly.
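For anyone wanting to replay the temp-table approach without a SQL Server instance, here is a self-contained sketch of the same two-step lookup against an in-memory SQLite database (schema and sample data are invented for illustration; unlike the original query, the seed movie is explicitly excluded from its own results):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblMovies (MovieID INTEGER PRIMARY KEY,
                        Active INTEGER, Processed INTEGER);
CREATE TABLE tblMovieTags (MovieTagID INTEGER PRIMARY KEY,
                           MovieID INTEGER, TagID INTEGER, Relevance INTEGER);
INSERT INTO tblMovies VALUES (1,1,1),(2,1,1),(3,1,1),(4,1,1);
INSERT INTO tblMovieTags (MovieID, TagID, Relevance) VALUES
  (1,10,1),(1,20,2),(1,30,3),          -- tags of the seed movie
  (2,10,1),(2,20,2),                   -- shares 2 tags with the seed
  (3,10,1),                            -- shares only 1 tag
  (4,20,1),(4,30,2),(4,40,3);          -- shares 2 tags with the seed

-- step 1: materialize the top tags of the seed movie once
CREATE TEMP TABLE SeedTags AS
  SELECT TagID FROM tblMovieTags
  WHERE MovieID = 1 ORDER BY Relevance LIMIT 7;
""")

# step 2: count tag overlap against the small temp table
rows = conn.execute("""
  SELECT mt.MovieID, COUNT(*) AS c
  FROM tblMovieTags mt
  JOIN SeedTags s   ON s.TagID = mt.TagID
  JOIN tblMovies m  ON m.MovieID = mt.MovieID
                   AND m.Active = 1 AND m.Processed = 1
  WHERE mt.MovieID <> 1
  GROUP BY mt.MovieID
  HAVING COUNT(*) > 1
  ORDER BY c DESC, mt.MovieID
  LIMIT 50
""").fetchall()
print(rows)  # [(2, 2), (4, 2)] -- only movies sharing 2+ tags survive
```

The shape is identical to the T-SQL answer: a tiny materialized tag list, then one grouped join against it instead of a nested subquery.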
INNER JOIN #MovieTags tmt ON mt.MovieTagID=tmt.MovieTagID should be TagID not MovieTagID
Actually, that join isn't needed at all. I just needed to select from #MovieTags. I Edited my query.
I'm noticing that our two solutions are almost identical. Mine just uses temp tables in the background heh.
Yeah, back to your comment about the inner join; it should be joining on MovieTagID, because MovieTagID is the pk on tblMovieTags. Since you (and I was) joining back to the same table (cte_tags is a temp table of tblMovieTags), you will want to join on the pk of that table. Your query also contains the unnecessary inner join, since you should have the data that you need in the cte_tags table. Pretty darn close though.
You're putting the TagID into the #MovieTags not the MovieID. You need to get the related movies before you can get to the movie information
There, I added the rest of the fields to that first query. My join in my second query is getting the related movies. The tags table is never being used in that query, so tag id (foreign key of tags), is not needed. Look at it a little closer. Joining on tagid will give you undesired results.
#MovieTags will only have 1 MovieID even though there may be multiple instances of it, it will only have the MovieID 12345, so #Movies will only be the count of tags for MovieID 12345 as that is how it is joined to the tblMovies. I verified this with test data just to make sure I wasn't missing something.
I'm trying to get my head around this one; the values returned were just the count of tags in the #SiteTags and the SiteID, no other movies were returned? Cheers
Did it not return the MovieID and the Count? If you don't need the count, you can change the last query to SELECT TOP 50 MovieID FROM #Movies. Otherwise you can remove the count from the second query. I guess the import questions is; was this any faster than your original query?
After many hours of testing, I managed to use the temporary table style lookup to reduce average query time from 1-2seconds to 0.3 secs. Thanks!
I want to see if there is some unnecessary processing happening in that query you wrote. Try the following query and let us know if it's faster, slower, etc., and if it's even getting the same data.
I just threw this together, so no guarantees on perfect syntax.
SELECT TOP 7 [TagID]
INTO #MovieTags
FROM [tblMovieTags]
WHERE [MovieID]=12345
ORDER BY TagID
;WITH cte_movies AS
(
SELECT
mt.MovieID
,mt.TagID
FROM
tblMovieTags mt
INNER JOIN #MovieTags t ON mt.TagId = t.TagId
INNER JOIN tblMovies m ON mt.MovieID = m.MovieID
WHERE
(Hidden=0) AND m.Active=1 AND m.Processed=1
),
cte_movietags AS
(
SELECT
MovieId
,COUNT(MovieId) AS TagCount
FROM
cte_movies
GROUP BY MovieId
)
SELECT
MovieId
FROM
cte_movietags
WHERE
TagCount > 1
ORDER BY
MovieId
GO
DROP TABLE #MovieTags
Thanks for your attempt! I tried this but the outcome was slightly worse. Average execution time on the original query was around 1200 ms, yet this query takes on average 1900 ms :(
How about now that it is limited to the top 7 tags ordered by the tagID?
The first time I ran this query, it took 6500ms, every time after that it takes 1500ms. Still too slow for what I need due to high number of requests for this query :(
Event bus message not received when executing blocking code
Here's the thing, let's say there are 2 verticles in a cluster over Hazelcast. Verticle 1 is sending loads of messages via the event bus to Verticle 2. V2 consumes these messages, splits their contents into bulks and inserts these bulks into a database. The problem is that while the bulks are being inserted, some messages are not consumed by Verticle 2. Verticle 1 reports that it didn't receive a reply in 30s.
Question is, what is the best way to process the bulks so the messages could be consumed? I've started with vertx.executeBlocking, then I tried to create separate shared worker executor and execute with it. My last try was deploying new verticle that does the processing in "start" method.
What is the recommendation?
Deploying a new verticle is definitely wrong. In Vert.x the idea is that verticles are long-lived and do the processing. All operations that might take long should, if possible, be done async (like using an async DB driver) or, if that's not possible, as sync code in worker verticles.
You can deploy many instances of (worker) verticles, and they can all consume the same topic; this is how work gets done in parallel. By not blocking a normal (= non-worker) verticle you leave the thread free, and the next instance of that same verticle will pick up the next event and start processing it (instead of blocking the thread waiting for the DB).
Would it help to deploy number of worker verticles from the one verticle and delegate the bulks to them via event bus? Or would the worker verticles block event loop of the main verticle that deployed them? What I mean that when the Verticle 2 is starting it'd deploy a number of worker verticles that would do the database inserts.
no, communication over the event bus is NOT blocking, and this is the model for scaling in Vert.x. Each Vert.x application has a main verticle that deploys the other ones. Those (worker or not) will handle events and communicate over the event bus without blocking each other.
So you V2 should deploy multiple instances of a worker verticle that listen to eventbus and do the processing
Keeping in mind Vert.x's golden rule (don't block the event loop), perform the blocking operations of V2 in worker threads, which will be executed asynchronously.
Use async handlers that will be called when the future completes and the result is available.
Use the result (for example succeeded or failed) from the handlers to continue the operation.
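The advice above (hand blocking work to a worker pool so the event-loop threads stay free) is not Vert.x-specific. A minimal sketch of the same pattern in Python's asyncio, purely for illustration; in Vert.x the analogue is `vertx.executeBlocking` or worker verticle instances:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_bulk_insert(bulk):
    # stand-in for a slow, synchronous DB bulk insert
    time.sleep(0.01)
    return len(bulk)

async def consume(bulks):
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # offload each blocking insert to the pool so the event loop
        # stays free to consume further messages in the meantime
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, blocking_bulk_insert, b) for b in bulks)
        )
    return results

print(asyncio.run(consume([[1, 2], [3, 4, 5]])))  # [2, 3]
```

While the inserts run on pool threads, the loop thread can keep replying to messages, which is exactly what prevents the 30s reply timeouts described in the question.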
PHP preg_replace_callback match string but exclude urls
What I'm trying to do is find all the matches within a content block, but ignore anything that is inside tags, for use inside preg_replace_callback().
For example:
test
<a href="test.com">test title</a>
test
In this case, I want the first line to match, and the third line to match, but NOT the url match, nor the title match in between the a tags.
I've got a regex that I feel like is close:
#(?!<.*?)(\btest\b)(?![^<>]*?>)#si
(and this will not match the url part)
But how do I modify the regex to also exclude the "test" between a and /a?
and the fourth line to match Erm, you only have three lines in your input?
Do you have to account for nested tags as well? Eg <a>test<b>test</b>test</a>, or self-closing tags? Sounds like a job for something that's not a regular expression (HTML and regex generally do not work well together)
HTML and regex are not good friends. Use a parser, it is simpler, faster and much more maintainable.
See: http://php.net/manual/en/class.domdocument.php
It doesn't use nested tags, and unfortunately due to the application I have to use regex, but I appreciate the thoughtful question and suggestion.
If it's always the same pattern you can use [A-Z] or a combination like [A-Za-z]
How is this answering the question?
I ended up solving it myself. This regex pattern will do what I wanted:
#(?!<a[^>]*?>)(\btest\b)(?![^<]*?<\/a>)#si
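For what it's worth, the same pattern behaves identically in other PCRE-style engines. A quick check with Python's `re` module (outside PHP's `#...#` delimiters, the escaped `\/` becomes a plain `/`):

```python
import re

html = 'test\n<a href="test.com">test title</a>\ntest'

# negative lookaheads skip "test" inside the tag and between <a>...</a>
pattern = re.compile(r'(?!<a[^>]*?>)(\btest\b)(?![^<]*?</a>)', re.I | re.S)
print(pattern.findall(html))  # ['test', 'test'] -- URL and link text are skipped
```

Only the first- and third-line occurrences match; the "test" inside the href and the one in the link text are excluded, as the question wanted.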
AvFoundation takes black image from camera in Iphone
I am a newbie to iPhone development and have made a demo using AVFoundation for taking pictures. I am taking 5 images max, but the issue is that when I take the 1st image, it always comes out black compared to the other images. Can anybody help me resolve it? The code is as below:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
//Added by jigar
session.sessionPreset = AVCaptureSessionPreset640x480;
//[[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] supportsAvCaptureSessionPreset:AVCaptureSessionPreset640x480];
//End
// Create device input and add to current session
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error: nil];
[session addInput:input];
// Create video output and add to current session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
// Start session configuration
[session beginConfiguration];
[device lockForConfiguration:nil];
// Set torch to on
if(flashMode == UIImagePickerControllerCameraFlashModeAuto)
{
device.torchMode = AVCaptureTorchModeAuto;
}
else if(flashMode == UIImagePickerControllerCameraFlashModeOn){
device.torchMode = AVCaptureTorchModeOn;
}
else if(flashMode == UIImagePickerControllerCameraFlashModeOff){
device.torchMode = AVCaptureTorchModeOff;
}
[device unlockForConfiguration];
[session commitConfiguration];
// [session startRunning];
[currentPicker takePicture];
// [session stopRunning];
// session = nil;
Could you edit your question and try to be more clear about what exactly is not working, what you tried, expected and results? Also, are you sure, the user has allowed camera access, as you are not checking that?
Any update on this?
Skip attributes in XML Deserialization
Currently I use the following approach to deserialize a RESTful XML return:
[DataContract(Name = "Part", Namespace = "")]
public class Part
{
[DataMember(Order = 1)]
public string ItemId { get; set; }
[DataMember(Order = 2)]
public string ItemDescription { get; set; }
[DataMember(Order = 3)]
public string Weight { get; set; }
}
public bool GetPartInformation(string itemId)
{
var URL = "...some URL...";
client.BaseAddress = new Uri(URL);
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml"));
HttpResponseMessage response = client.GetAsync(urlParameters).Result;
if (response.IsSuccessStatusCode)
{
Part part = response.Content.ReadAsAsync<Part>().Result;
Console.WriteLine("ItemId: {0}", part.ItemId);
Console.WriteLine("Description: {0}", part.ItemDescription);
Console.WriteLine("Weight: {0}", part.Weight);
return true;
}
else
{
Console.WriteLine("{0} ({1})", (int)response.StatusCode, response.ReasonPhrase);
return false;
}
}
This is the XML return:
<Part>
<ItemId>12345</ItemId>
<ItemDescription>Item Description</ItemDescription>
<Weight>0.5</Weight>
<Cost>190.59</Cost>
</Part>
If I don't want to grab the ItemId in the Part class, it unfortunately does not work to simply remove these two lines:
[DataMember(Order = 1)]
public string ItemId { get; set; }
1) What is the correct approach to skip the ItemId attribute?
2) What would be the the easiest way to access Cost only and omit everything else? (I didn't add it to public class Part in my example though)
If you remove the [DataMember] attribute from the properties which you want to ignore, then all such properties will be excluded from deserialization.
Great, thanks! Is there a way where I wouldn't even have to list them in my class?
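On question 2, if only a single field is wanted, one alternative is to skip the contract class entirely and read the field straight out of the raw XML. Sketching the idea with Python's `xml.etree.ElementTree` purely for illustration (in C# the analogue would be `XDocument` / LINQ to XML):

```python
import xml.etree.ElementTree as ET

xml = """<Part>
  <ItemId>12345</ItemId>
  <ItemDescription>Item Description</ItemDescription>
  <Weight>0.5</Weight>
  <Cost>190.59</Cost>
</Part>"""

# pull a single field without modelling the whole document
cost = ET.fromstring(xml).findtext("Cost")
print(cost)  # 190.59
```

Every other element is simply ignored; nothing has to be declared or listed anywhere.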
how to find the "UIImagePickerControllerOriginalImage" and "UIImagePickerControllerReferenceURL" when downloading the image from net
I want to get UIImagePickerControllerOriginalImage and UIImagePickerControllerReferenceURL when i download and save image to the photo album..
Can anyone suggest the proper method to find those 2 parameters?
If I use the method
[library writeImageToSavedPhotosAlbum:[viewImage CGImage] orientation:(ALAssetOrientation)[viewImage imageOrientation] completionBlock:^(NSURL *assetURL, NSError *error)
i will get "UIImagePickerControllerReferenceURL"
and if i use
UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:self.imageURL]]], testOriginalImage, @selector(image:didFinishSavingWithError:contextInfo:), nil)
i will get "UIImagePickerControllerOriginalImage"
what to do to get both?
When getting the image from a server?
To be clear, you want to download an image from the Internet then save the image to the Photo Library. And then you want to get original image and reference URL values that you would normally get when selecting an image from the Photo Library using UIImagePickerController. Is that correct?
ya but can't we get the UIImagePickerControllerOriginalImage and UIImagePickerControllerReferenceURL directly at the same time? On clicking download I want these 2 parameters..
@Erway Software : ya i am getting image from server and saving it in photo library..
Please find the above edited answer..
If you call:
[library writeImageToSavedPhotosAlbum:[viewImage CGImage] orientation:(ALAssetOrientation)[viewImage imageOrientation] completionBlock:^(NSURL *assetURL, NSError *error)
then you get/have both. The assetURL is the UIImagePickerControllerReferenceURL and viewImage is the same as the UIImagePickerControllerOriginalImage.
how to get those? mean i want to print this.. how can i do that?
Sorry, I don't understand what you want. What do you want to print?
according to ur answer if i use "[library writeImageToSavedPhotosAlbum:[viewImage CGImage] orientation:(ALAssetOrientation)[viewImage imageOrientation] completionBlock:^(NSURL *assetURL, NSError *error)" i will get get Original image.. but i want to display as "UIImagePickerControllerOriginalImage = "<UIImage: 0x1f50a6e0>""
The viewImage that you save is the original image. Print this out any way that you want.
the image URL and UIImagePickerControllerOriginalImage = "<UIImage: 0x1f50a6e0>" do not match, because viewImage is the image before saving and the URL is what I am getting after saving.. I am getting an exception..
Tangent vector definition
I'm reading the book "Semi-Riemannian Geometry with Applications to Relativity", which defines a tangent vector at a point $p$ in a manifold $M$ to be a function $v : \mathcal{F}(M) \to \mathbb{R}$ (where $\mathcal{F}(M)$ is all smooth functions $M \to \mathbb{R}$), which is:
Linear: $v(af + bg) = av(f) + bv(g)$
Leibnizian: $v(fg) = v(f)g(p) + f(p)v(g)$
I'm confused by this definition. Why is a vector a function $\mathcal{F}(M) \to \mathbb{R}$? Given a function $f : M \to \mathbb{R}$, what does $v(f)$ represent? The Leibnizian condition also seems weird to me, what is it saying?
You will probably see later that $v$ is a derivative. On a manifold in $\mathbb{R}^n$ you can identify $v(f)$ with $\vec{v}\cdot\nabla f.$ The Leibnizian condition is the product rule of derivatives.
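Expanding on that comment: every such $v$ can be pictured as differentiation along a curve through $p$. If $\gamma : (-\varepsilon, \varepsilon) \to M$ is a smooth curve with $\gamma(0) = p$, define

$$v(f) = \frac{d}{dt}\bigg|_{t=0} f(\gamma(t)).$$

Then $v(f)$ represents the rate of change of $f$ as you pass through $p$ along $\gamma$. Linearity of $v$ is just linearity of $d/dt$, and the Leibnizian condition is exactly the product rule evaluated at $t=0$:

$$\frac{d}{dt}\bigg|_{t=0} \big[f(\gamma(t))\,g(\gamma(t))\big] = v(f)\,g(p) + f(p)\,v(g).$$

In $\mathbb{R}^n$ this recovers the directional derivative $\vec{v} \cdot \nabla f$ mentioned above.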
UnauthorizedAccess Exception with File.Copy on a ReadOnly file
Simple question really...
How can I copy a ReadOnly file?
When running a debug build of my very simple Windows Forms app, I'm getting an UnauthorizedAccessException when trying to copy a read-only file from one directory to another:
FileInfo newFile = new FileInfo(fileName);
File.Copy(newFile.FullName, Path.Combine(destinationDirPath, newFile.Name), true);
What do I need to do to get around it? I'm thinking that there's some kind of security or application permission that I need to set for this project...
Does this answer your question? How to copy a read-only file?
Is it possible that you're getting the exception not for reading the file but for the place you are writing the file? If that's the case you need to make sure you're writing the file to a directory for which the application credentials have access.
You're right. The folder created as the result of Add Folder in the FolderBrowserDialog window was read only. I guess that's the source of the problem.
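To confirm the conclusion above: copying a file whose source is read-only needs no special handling, since only read access to the source (and write access to the destination directory) is required. A small demonstration of the file-system behaviour in Python; C#'s File.Copy follows the same rule:

```python
import os
import shutil
import stat
import tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "data.txt")
with open(src, "w") as f:
    f.write("hello")
os.chmod(src, stat.S_IREAD)  # mark the source file read-only

# copying succeeds: the read-only flag only blocks writes to the source
dst = shutil.copy(src, os.path.join(dst_dir, "data.txt"))
print(os.path.exists(dst))  # True
```

If the destination directory itself is not writable (as in the question), that is where the access error comes from, not from the read-only source file.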
Login details security using salt n hash and a login role in postgresql
I am coding up the security for a website in Express.js with a PostgreSQL db. Now I have been reading about salting and hashing, and I have the code set up with pbkdf2 using the crypto module, but my issue is how I will structure the account table in the db. What if I created a login role which will have an MD5-encrypted format for the password, which password will be the derived key from the salt-and-hash "procedure"? Would that be overkill of protection?
There will be a table which will be as follows: UID (the ID from the login role), SALT , HASH.
And also the loginrole.
So on a try for authentication, the code will try to login as that role, first by getting the assosiated UID, generating the salt n hashed password for the password provided and auth on a DB level.
Hope I am making some sense..
var usrSalt = crypto.randomBytes(128).toString('base64');
// password, salt, iterations, keylen, digest, callback
crypto.pbkdf2(usr, usrSalt, 10000, 512, 'sha512', function (err, derivedKey) {
    if (err) { console.log(err); }
    else {
        usr = derivedKey;
        next();
    }
});
P.S Would a pgcrypto module be better again in the same scenario, just by removing the code on node.js.
This answer is at a higher level, not the low level of actual code.
"Overkill of protection" is relative to your project. The 10000 iterations will take some amount of time, and MD5 will provide some (fairly weak) level of hashing. Both might be suitable for your project, and will have to rank as a priority compared to other aspects (speed, features, etc.).
To keep speaking in generalities, some level of good security practices will protect some percentage of your user data, and with a determined attacker some other percentage might "always" be compromised.
The choice for pgcrypto is similar. If it is as sufficient as the code you plan to write (it is defacto more tested than your current code), its handling will be on your DB server. Good to keep it off the Node server? Easy to maintain? Less work? "Better" will be relative to your project.
Salting and hashing passwords is not overkill, it is the absolute minimum you should do if you cannot avoid dealing with passwords entirely.
It's hard to figure out what you mean with the second part, regarding this being used to auth on a DB level. You usually either do application level authentication using your own usernames/passwords, or you create real users in PostgreSQL and let PostgreSQL authenticate them by simply passing their password and username through to PostgreSQL when you create a connection.
There's an intermediate way, which you might be trying to get at. Authenticate the user yourself, then use SET SESSION AUTHORIZATION to have your PostgreSQL database session "become" that user. It's a bit of a specialized approach, and not one I tend to use most of the time. It is mostly used by connection poolers.
See also:
Secure method for storing/retrieving a PGP private key and passphrase?
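For the application-level piece, the scheme in the question (a per-user SALT and HASH column, with the key derived via PBKDF2) can be sketched language-agnostically. Here it is with Python's `hashlib.pbkdf2_hmac` purely for illustration; the `crypto.pbkdf2` call in the question is the direct Node equivalent, and the iteration count and key size here are illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    # fresh random salt per user, stored alongside the derived key
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes,
                    iterations: int = 100_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, key)

salt, key = hash_password("s3cret")
print(verify_password("s3cret", salt, key))  # True
print(verify_password("wrong", salt, key))   # False
```

The UID/SALT/HASH table in the question maps directly onto this: store `salt` and `key` per user, and re-derive on login.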
In your second paragraph, what if we do both? That's what I am trying to explain in the question. If we create a login providing a password that is already hashed and salted, and then stored in the login with MD5 encryption, then when we need to auth we take the salt from a table where we have "bound" it to the unique login id.
@drakoumelitos Showing the code can help make that sort of thing clear. I'm guessing you're saying you CREATE USER in the DB, but you have your own separate table of passwords, with a username or user oid stored in there. You authenticate the user against your own user table then SET SESSION AUTHORIZATION to "become" that user in the DB. Is that a correct guess about what you're doing?
ye it would be nice if I could show what I mean, but I cant really show u an image of the DB :P
close, i CREATE USER in the DB and give it a password that is already hashed n salted, and store the salt in another table binding it to the unique id that USER has. my question is, if that is an overkill or too much anyway
The notion of orientation in vector spaces
Do Carmo's Curves and Surfaces book states that "two ordered bases $e = \{e_i\}$ and $f = \{f_i\}$, $i = 1,\ldots,n$, of an $n$-dimensional vector space $V$ have the same orientation if the matrix of change of basis has positive determinant."
This same concept of "orientation" is the subject of another question on stack, but I am concerned with a purer intuitive understanding. What is the idea behind assigning this notion of same orientation to this particular circumstance with positive determinant? What properties of a matrix with positive determinant motivate this definition?
It is just because matrices representing orthonormal bases are elements of $O_n(\mathbb R)$, so the determinant of a change-of-basis matrix between two such bases is $+1$ or $-1$.
If you’ve done a bit of physics, you know that depending on your orientation, vector products might change signs.
Orthogonal matrices with determinant one are rotations. That group is denoted by $SO(n)$. When the determinant is $-1$ they are reflections. Study a few low-dimensional bases with that in mind. Considering non-orthonormal bases with positive/negative determinant is the next minor generalization you should go through.
$GL_n(\mathbb R)$ has two path connected components, the subgroup with positive determinant and the 2nd coset with negative determinant
Possibly of interest: https://math.stackexchange.com/questions/1551768/geometrical-meaning-of-orientation-on-vector-space and/or https://math.stackexchange.com/questions/1884953/what-are-the-higher-dimensional-analogues-of-left-and-right-handedness
The choice of the word ‘orientation’ is very unfortunate. The English word ‘orientation’ is about compass direction. A compass needle has potentially infinitely many directions to point to, but an ordered basis has only two possible ‘orientations’. It is actually just a sign (of a determinant). When two ordered bases have different signs, you cannot transform one into the other linearly using only shears and positive scaling. A reflection about a hyperplane must be involved. So, loosely speaking, an ordered basis with the opposite sign is a basis in a mirrored configuration.
In general, the determinant of a matrix gives us information about the scale factor of 'volume' in our space. To make this more concrete, we consider the following image:
(I've called the angle between the $x$-axis and $a$ the angle $\alpha$, and similarly $\beta$ for the angle for $b$.)
Suppose we have $a,b \in \mathbb{R}^2$ spanning this parallelogram $P$ (don't pay attention to the numbers in the picture). If we want to find the area of this,
we can find (by some trigonometry) that the area $A$ is given by:
\begin{align}
A = |\: \|a\| \|b\| \sin(\beta - \alpha) \:|.
\end{align}
If we want to write this as a function of the coordinates $(a_1,a_2) = a$ and $(b_1,b_2) = b$, then we may note that $$a_1 = \|a\| \cos(\alpha)$$ and $$a_2 = \|a\| \sin(\alpha).$$ Similarly $$b_1 = \|b\| \cos(\beta)$$ and $$b_2 = \|b\| \sin(\beta).$$
Then:
\begin{align*}
\pm A &= \|a \| \; \|b\| \sin(\beta-\alpha) \\
&=\|a\| \|b\| (\cos(\alpha) \sin(\beta) - \sin(\alpha) \cos(\beta)) \\
&= \|a \| \cos(\alpha) \|b\| \sin(\beta) - \|a\| \sin(\alpha) \|b\| \cos(\beta)\\
&= a_1 b_2 - a_2 b_1
\end{align*}
Then $A = |a_1 b_2 - a_2 b_1|$. By taking the absolute value, we lose some information. Note that $a_1 b_2 - a_2 b_1$ is positive if $\sin(\beta-\alpha)>0$ and that it is negative when $\sin(\beta-\alpha)<0$. From this it follows that $\sin(\beta-\alpha)$ is positive if and only if $0 < \beta-\alpha<\pi$.
Geometrically speaking, that means that if we take a point $a$, and turn it over the shortest angle so it is parallel to $b$, then this would require us to turn anticlockwise. The sign of $a_1 b_2 - a_2 b_1$ thus gives us information about how the two vectors are oriented with respect to each other. If we instead had to turn the other way, then $b$ would have to be "below" $a$ in the image. Now we can make the connection with the determinant:
Note that the determinant $\det \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix} = a_1 b_2 - a_2 b_1$ which is exactly the 'signed area' we found before.
We call an ordered pair of vectors $a,b$ positively oriented if $\det(a,b) >0$ and negatively oriented if $\det(a,b)<0$. Note that it is important which of the two you pick as the first one, and which as the second.
For example, the standard basis $e_1,e_2$ is positively oriented, but the oriented pair $e_2,e_1$ is negatively oriented.
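Concretely, for the standard basis of $\mathbb{R}^2$:
\begin{align}
\det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1 > 0,
\qquad
\det\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = -1 < 0,
\end{align}
so swapping the two basis vectors flips the sign of the determinant, and hence the orientation.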
One can generalize this to higher dimensions, but the idea stays the same (in $3$ dimensions, you would not have a parallelogram, but a parallelepiped in the image instead).
Conclusion:
If you have a basis, and you apply a change of basis which changes the orientation, the sign of the determinant will be negative. This means that the 'aboveness' and 'belowness' of the two vectors has been switched.
If the determinant is positive, the orientation is preserved (although volumes may be scaled).
Your right hand is a right hand no matter how you orient it in space. You can think of your thumb, index, and middle finger as an ordered basis for $\Bbb R^3$, or "frame" (assuming you don't contort your fingers to make them linearly dependent). The set of all frames may be topologized as a submanifold of $\Bbb R^3\times\Bbb R^3\times\Bbb R^3$. Indeed, viewing the vectors as columns of a matrix, frames correspond to invertible matrices, so we're talking about the space ${\rm GL}(3,\Bbb R)$ of invertible matrices. As you move and reorient your right hand in space, the corresponding matrix varies in ${\rm GL}(3,\Bbb R)$. The same goes for your left hand, but it lies in a different path-connected component. Thus, the orientations of a real vector space $V$ may be defined as the equivalence classes of ordered bases, or equivalently the connected components of ${\rm GL}(V)$.
In the 2D case, we can imagine circular arrows drawn on a surface differentiating clockwise vs. anticlockwise; these correspond to classes of ordered pairs of linearly independent vectors (whichever way the second vector is a convex angle from the first, the circular arrow is drawn in that direction).
The QR decomposition (via the Gram-Schmidt process, which allows us to orthonormalize any basis) tells us the multiplication map ${\rm O}(n)\times B\to{\rm GL}(n,\Bbb R)$ is a diffeomorphism, where $B$ is the subgroup of triangular matrices with positive diagonal entries. Note $B\simeq \Bbb R^{n(n+1)/2}$ is diffeomorphic to a Euclidean space, so ${\rm GL}(n,\Bbb R)$ deformation retracts onto ${\rm O}(n)$, and thus the components of ${\rm GL}(n,\Bbb R)$ correspond to those of ${\rm O}(n)$. What this means is that for real inner product spaces $V$ an equivalent definition of an orientation is an equivalence class of ordered orthonormal bases.
It is possible to show any matrix in ${\rm O}(n)$ is path-connected to one in ${\rm O}(n-1)$ for $n>1$, so by induction ${\rm O}(n)$ has at most as many components as ${\rm O}(1)=S^0\cong\Bbb Z_2$. (One can consider the long homotopy exact sequence for the fiber bundle ${\rm O}(n-1)\to{\rm O}(n)\to S^{n-1}$ for this conclusion. The elementary version of this is the path-lifting property for an orbit map ${\rm O}(n)\to S^{n-1}$. Equivalently, one may use an "arc" of a one-parameter subgroup $R$ of plane rotations to rotate the last column of a matrix $A$ to $e_n$, then apply $R(\theta)$ to $A$ pointwise.)
Consider the determinant $\det:{\rm O}(n)\to S^0=\{\pm1\}$. (The range can only contain $\pm1$, just apply $\det$ to $A^TA=I$ to see, and both values are realized, as $\det I=1$ and $\det F=-1$ for any reflection $F$.) Since $S^0$ is disconnected, its fibers must also be disconnected, so ${\rm O}(n)$ has exactly two components, corresponding to determinant $\pm1$. Moreover, elements of $B$ have positive determinant, so the components of ${\rm GL}(n,\Bbb R)$ correspond to matrices with positive/negative determinant.
For a manifold, an orientation is a choice of orientation of each tangent space which varies smoothly from point to point. For a Riemannian manifold, the tangent spaces are inner product spaces and the earlier point about those applies. A connected manifold has either zero or two orientations; in the latter case it is called orientable. For non-orientable manifolds, it is possible to pick an orientation for one tangent space, then move it around in space until it comes back to the opposite orientation at the original point (consider circular arrows drawn on a Möbius band).
Difference between two values in choices tuple
Below I have a Car class, I have a tuple for choosing from a list of brands. My question is what is the difference between the two values?
In ('DODGE', 'Dodge') can I just name both Dodge or does one need to be uppercase?
class Car(models.Model):
BRAND_CHOICES = (
('DODGE', 'Dodge'),
('CHEVROLET', 'Chevrolet')
)
title = models.CharField(max_length=255)
brand = models.CharField(max_length=255, choices=BRAND_CHOICES)
def __str__(self):
return self.title
Not exactly answering the question, but a coding-style note here: usually the left value would be written using a variable name (e.g. DODGE, not "DODGE") so that it can be referred to as a constant elsewhere in the code, like Car.DODGE, instead of using string literals directly. The reason for the uppercased name is that it's sort of like a constant in this case. The actual value of the variable does not need to be an uppercase string.
I am also wondering about the purpose of using two values: python names vs 'free strings' (e.g. "A proper sentence"), or database efficiency? Anything else?
No, it does not need to be uppercase, you can name it whatever you want.
And for the question about the difference between each value, the first element in each tuple is the actual value to be set on the model, and the second element is the human-readable name.
Basically, the first value in each pair will be the value saved to the database, and the second value will be the value shown to the user on the client side: the human-readable name for the value.
Read more here: https://www.geeksforgeeks.org/how-to-use-django-field-choices/
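To illustrate the stored-value vs. display-value split outside Django, here is a plain-Python sketch (Django generates an equivalent get_brand_display() method for you; display_label here is just an illustrative helper, not a Django API):

```python
BRAND_CHOICES = (
    ('DODGE', 'Dodge'),          # first element: value stored in the database
    ('CHEVROLET', 'Chevrolet'),  # second element: human-readable label
)

def display_label(db_value, choices):
    """Map a stored value to its human-readable label."""
    return dict(choices).get(db_value, db_value)

print(display_label('DODGE', BRAND_CHOICES))  # Dodge
```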
Prove that a.e. convergence on a $\sigma$-finite measure space $(X,\mathcal{M},\mu)$ implies uniform convergence on a sequence of measurable sets.
I am currently reading the book "Modern Real Analysis" by Ziemer and have come to an exercise I am having trouble with. The exercise appears in the chapter on measurable functions within the section on approximation of measurable functions. The exercise statement goes :
Let $(X,\mathcal{M},\mu)$ be a $\sigma$-finite measure space and suppose that
$f$, $f_{k}$, $k=1,2,...$, are measurable functions that are finite almost everywhere and :
\begin{equation}
\lim_{k \rightarrow \infty} f_{k}(x) = f(x)
\end{equation}
for $\mu$ almost all $x \in X$. Prove that there are measurable sets $E_{0},E_{1},E_{2},...$, such that $\mu(E_{0}) = 0$
\begin{equation}
X = \bigcup_{i=0}^{\infty} E_{i}
\end{equation}
and $\{f_{k}\} \rightarrow f$ uniformly on each $E_{i}$, $i > 0$.
The following theorem appears in the text of the chapter :
Theorem 5.18 (Egorov)
Let $(X,\mathcal{M},\mu)$ be a finite measure space and suppose $\{ f_{i} \}$
and $f$ are measurable functions that are finite almost everywhere
on $X$. Also, suppose that $\{ f_{i} \}$ converges pointwise a.e. to
$f$. Then for each $\epsilon > 0$ there exists a set $A \in \mathcal{M}$ such
that $\mu(\tilde{A}) < \epsilon$ and $\{f_{i}\} \rightarrow f$ uniformly
on $A$.
Here the notation $A^{c}$ and $\tilde{A}$ represent the complement of $A$.
Here is my solution so far :
We know that since $(X,\mathcal{M},\mu)$ is $\sigma$-finite that $\exists \{ A_{i} \}_{i \in \mathbb{N}} \subset \mathcal{M}$
s.t. $\mu(A_{i}) < \infty \; \forall i \in \mathbb{N}$ and :
\begin{equation}
X = \bigcup_{i \in \mathbb{N}} A_{i}
\end{equation}
Define $\Gamma_{f}(x)$ to be the statement:
\begin{equation}
\Gamma_{f}(x) := \lim_{k \rightarrow \infty} f_{k}(x) = f(x)
\end{equation}
Let :
\begin{equation}
D = \{ x \in X \; \mid \; \neg \Gamma_{f}(x) \}
\end{equation}
We know $\mu(D) = 0$. (Note that $D$ is null and therefore measurable).
Let :
\begin{equation}
E_{0} = \bigcup_{i \in \mathbb{N}} (D \cap A_{i})
\end{equation}
We see that :
\begin{equation}
A_{i} \in \mathcal{M} \text{ and } D \in \mathcal{M} \text{ and } E_{0} \subset D \Rightarrow E_{0} \in \mathcal{M} \text{ and } \mu(E_{0}) = 0
\end{equation}
Now let :
\begin{equation}
E_{i} = A_{i} \cap D^{c} \; \forall i \geq 1
\end{equation}
We know $A_{i} \in \mathcal{M}$ and $D \in \mathcal{M}$ implies $E_{i} \in \mathcal{M}$. We also see :
\begin{align}
X = \bigcup_{i \in \mathbb{N}} A_{i}
& = \left( D \cap \bigcup_{i \in \mathbb{N}} A_{i} \right) \cup \left( D^{c} \cap \bigcup_{i \in \mathbb{N}} A_{i} \right)\\
& = \left[ \bigcup_{i \in \mathbb{N}} (A_{i} \cap D) \right] \cup \left[ \bigcup_{i \in \mathbb{N}} (A_{i} \cap D^{c}) \right]\\
& = E_{0} \cup \left[ \bigcup_{i \in \mathbb{N}} E_{i} \right]\\
& = \bigcup_{i=0}^{\infty} E_{i}
\end{align}
We also know :
\begin{equation}
\Gamma_{f}(x) \; \forall x \in D^{c} \Rightarrow \Gamma_{f}(x) \; \forall x \in E_{i} \text{ , } \forall i \geq 1
\end{equation}
Also :
\begin{equation}
\mu(A_{i}) < \infty \text{ and } E_{i} \subset A_{i} \; \forall i \geq 1 \Rightarrow \mu(E_{i}) < \infty \; \forall i \geq 1
\end{equation}
So by theorem 5.18 we have for each $i \geq 1$ :
\begin{equation}
\epsilon > 0 \Rightarrow \exists H \in \mathcal{M} \text{ s.t. } H \subset E_{i} \text{ and } f_{k} \rightarrow f \text{ uniformly on } E_{i} \setminus H \text{ and } \mu(H) < \epsilon
\end{equation}
This means that $f_{k} \rightarrow f$ almost uniformly on each $E_{i}$. But how can I show that $f_{k} \rightarrow f$ converges uniformly on
each $E_{i}$ ?
Can anyone help with this ?
By the theorem of Egorov we have for each $n$ a set $E_n$ such that
$$
\mu(X\setminus E_n)<\frac{1}{n}
$$
and $f_k\to f$ uniformly on $E_n$. It is easy to see that $f_k\to f$ uniformly on $E_n\cup E_{n+1}$ and
$$
\mu(X\setminus (E_n\cup E_{n+1}))<\frac{1}{n+1}.
$$
This allows us to choose the sequence of sets $E_n$ to be increasing. It follows that $X\setminus E_n$ is decreasing and therefore,
$$
\mu\left(\bigcap_{n=1}^N(X\setminus E_n)\right)=\mu(X\setminus E_N)<\frac{1}{N}\,.
$$
Therefore,
$$
\mu\left(\bigcap_{n=1}^\infty (X\setminus E_n)\right)=0
$$
and we can set
$$E_0=\bigcap_{n=1}^\infty (X\setminus E_n)\,.$$
The sequence $E_0,E_1,...$ now satisfies all requirements from the exercise.
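The "easy to see" union step can be spelled out: uniform convergence on each of two sets gives uniform convergence on their union, since
$$
\sup_{x\in E_n\cup E_{n+1}}|f_k(x)-f(x)| = \max\Big(\sup_{x\in E_n}|f_k(x)-f(x)|,\, \sup_{x\in E_{n+1}}|f_k(x)-f(x)|\Big)\longrightarrow 0,
$$
because each of the two suprema tends to $0$ as $k\to\infty$.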
Scala errors "value toInt is not a member of String" and "not found: type"
I've set up Scala project using Maven. It does not compile however. I get strange errors like something very basic is missing. Some of them are:
[ERROR] /home/victor/Work/Projects/Own/Scraper/src/main/scala/me/crawler/Node.scala:17: error: not found: type Map
[INFO] var attributes: Map[String, String] = null
[INFO] ^
[ERROR] /home/victor/Work/Projects/Own/Scraper/src/main/scala/me/crawler/CompanySiteEmailCrawlerController.scala:137: error: not found: type Set
[INFO] private def addEmailToCompanyList(harvestedRecordsCompanyList: List[Company], company: Company, emailSet: Set[String],
[INFO] ^
[ERROR] /home/victor/Work/Projects/Own/Scraper/src/main/scala/me/crawler/CompanySiteEmailCrawlerController.scala:186: error: value toInt is not a member of String
[INFO] lineFrom = args(3).toInt
[INFO] ^
[ERROR] /home/victor/Work/Projects/Own/Scraper/src/main/scala/me/crawler/crawler4j/Crawler4jAdaptee.scala:25: error: not found: value classOf
[INFO] private val log: Logger = Logger.getLogger(classOf[Crawler4jAdaptee])
[INFO] ^
[ERROR] /home/victor/Work/Projects/Own/Scraper/src/main/scala/me/crawler/crawler4j/Crawler4jAdaptee.scala:126: error: not found: type Map
[INFO] val attributesMap: Map[String, String] = attributes.map(a => (a.getKey, a.getValue)).toMap
[INFO] ^
So the Map and Set collections are not found and the toInt method doesn't work for Strings. In my pom.xml I have:
<dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.10.2</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/scala</sourceDirectory>
<plugins>
<plugin>
<groupId>org.scala-tools</groupId>
<artifactId>maven-scala-plugin</artifactId>
<version>2.15.2</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>scala</id>
<name>Scala Tools</name>
<url>http://scala-tools.org/repo-releases/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>scala</id>
<name>Scala Tools</name>
<url>http://scala-tools.org/repo-releases/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
I get the same errors when I run it in IDEA, although the IDE does not complain about the code, only the compiler does. I am quite new to Scala. Can you please help me out here?
Something similar happens with for loops. I can't use for (i <- 1 to 10); instead I have to use for (i <- Range(0, 10)), otherwise I get the error to is not a member of Int.
Importing scala.collection.immutable solved the problems with collections, for the classOf problem I found a workaround - using getClass instead. toInt problem remains unsolved. There is a workaround though - using the exact code from that definition: java.lang.Integer.parseInt. I have a feeling that this is also a problem with imports.
implementation of attention r2unet network in Keras - tf 2.x
I want to perform segmentation of optic disc from fundus images using attention networks. The architecture of the model is picked from "https://github.com/lixiaolei1982/Keras-Implementation-of-U-Net-R2U-Net-Attention-U-Net-Attention-R2U-Net.-" (courtesy, credits to - lixiaolei1982).
When I track the training and validation loss, it decreases (see image below), but the training loss approaches 0 after 20 epochs. During the first 20 epochs, the segmented image is completely black or completely white. I tried to normalize the predicted image, but it is the same.
Can anyone please help me rectify the issue? Is it the role of the loss function that is causing the output image to be completely black? Below is the code that uses attention net to train the images
import os
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.backend import flatten
from skimage.exposure import equalize_hist as clhe
import network as new_model
batch_size = 32
no_epochs = 20
img_height, img_width, img_num_channels = 512, 512, 3
mdl1 = "att_r2unet"
m1 = new_model.att_r2_unet(img_height, img_width, n_label=1)
def preprocess_im(imgs):
"""Make input image values lie between -1 and 1."""
#imgs = clhe(imgs)
out_imgs = imgs - np.max(imgs)/2.
out_imgs /= np.max(imgs)/2.
return out_imgs
##
seg_train_gen = ImageDataGenerator(preprocessing_function=preprocess_im)
seg_train_x = seg_train_gen.flow_from_directory(directory="../train_im",target_size=(512,512),batch_size=batch_size,color_mode="rgb",class_mode="sparse",shuffle=True,seed=30)
seg_train_y = seg_train_gen.flow_from_directory(directory="../train_gt",target_size=(512,512),batch_size=batch_size,color_mode="grayscale",class_mode="sparse",shuffle=True,seed=30)
seg_val_x = seg_train_gen.flow_from_directory(directory="../val_im",target_size=(512,512),batch_size=batch_size,color_mode="rgb",class_mode="sparse",shuffle=True,seed=30)
seg_val_y = seg_train_gen.flow_from_directory(directory="../val_gt",target_size=(512,512),batch_size=batch_size,color_mode="grayscale",class_mode="sparse",shuffle=True,seed=30)
b_iter = int(np.ceil(seg_train_x.n / batch_size))
hist = {'m1_loss': [],'m1_vloss': []}
epoch = 0
for e in range(epoch, no_epochs):
l1 = list(np.zeros(b_iter))
for it in range(b_iter):
x_batch,_ = seg_train_x.next()
y_batch, _ = seg_train_y.next()
loss1 = m1.train_on_batch(x_batch, y_batch)
l1.append(loss1[0])
xval,_ = seg_val_x.next()
yval,_ = seg_val_y.next()
loss2a = m1.evaluate(xval,yval, batch_size=batch_size, verbose=1)
print('Epoch %d / %d tr_loss %.6f val_loss %.6f, ' % (e + 1, no_epochs, np.mean(l1), loss2a[0]))
hist['m1_vloss'].append(loss2a[0])
# Save best model
if e > epoch+1:
Eopt1 = np.min(hist['m1_vloss'][:-1])
if hist['m1_vloss'][-1] < Eopt1:
m1.save((save_dir+mdl1+'_best_model.h5'),overwrite=True)
m1.save_weights((save_dir+mdl1+'_best_weights.h5'),overwrite=True)
# save intermediate to folder results every 2 epochs
if e % 2 == 0:
x_plt = (xval[0] - xval[0].min()) / (xval[0].max() - xval[0].min())
ypred = m1.predict(xval)
fix, ax = plt.subplots(1,3, figsize=(10,10))
ax[0].imshow(x_plt)
ax[1].imshow(yval[0, :, :, 0], cmap='gray')
ax[2].imshow(ypred[0, :, :, 0], cmap='gray')
plt.savefig((save_dir+mdl1+'e_' + str(e) + '.jpg'))
plt.close()
print("Completed training...")
I have tried adding Batch normalization and also modified the activation function from sigmoid to relu in final layers. Tried changing the optimizer as well. But none of these helped. In fact, sometimes the loss is negative without modifying the architecture.
Below is the sample output saved during training (Left side is the input image, the middle is ground truth and right is predicted output)
Thank you for your time.
You can check the learning rate and the type of optimizer used. If the resulting image comes out totally white or black, your dataset may be class-imbalanced.
One class is overpowering the other. You have to balance the dataset or use a loss function better suited to imbalance, such as IoU, Dice, or focal loss.
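As a sketch of why Dice loss helps with imbalance, here is the soft Dice formula in pure Python (illustration only; in the actual Keras model you would implement this with backend tensor ops and pass it as the loss argument):

```python
def dice_loss(y_true, y_pred, smooth=1.0):
    """Soft Dice loss for flat binary masks: 1 - 2*overlap/(size_a + size_b)."""
    intersection = sum(t * p for t, p in zip(y_true, y_pred))
    return 1.0 - (2.0 * intersection + smooth) / (sum(y_true) + sum(y_pred) + smooth)

# A perfect prediction gives loss 0; an all-background prediction is still
# penalized even when foreground pixels are rare, unlike plain accuracy.
print(dice_loss([1, 0, 1], [1, 0, 1]))  # 0.0
print(dice_loss([1, 0, 0], [0, 0, 0]))  # 0.5
```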
Please fix misspelling in your post
how to upload image created using php to users timeline on facebook
I have developed a facebook application that contains following files
1) index.php
2) cap3.php
where cap3.php generates an image using GD in PHP.
index.php displays this image using following code
<img src="cap3.php">
now I want this generated image to be posted on user's timeline.
I have tried to do so by using following code:
$facebook->setFileUploadSupport(true);
$img = 'cap3.php';
$args=array( 'source' => '@' .$current , 'message' => 'Photo uploaded via!' );
$photo = $facebook->api('/me/photos', 'POST', $args);
but the image is not posted on user's timeline
please help.
What response do you get from the Facebook API ?
When you are giving a .php file as image source file, no code inside that file will be interpreted – this is not a case of accessing that script via HTTP, so only the unparsed PHP code will get posted to Facebook, and of course they do not recognize that as a valid image file. You will either have to save that image data locally and then use that file as source; or you have to make the image script publicly available via HTTP and somehow (GET/Session) pass the necessary data to it, and then have Facebook read it themselves, by providing its URL as a parameter url instead of source.
One way to do that is to put your cap3.php file on your remote host to get a URL to it, for example: http://www.example.com/cap3.php. Then you download that image from your app and send it to Facebook.
Let's see an example:
$local_img = 'cap3.png'; // I assume your cap3.php generates a PNG file
$remote_img = 'http://www.example.com/cap3.php';
// sample: check for errors in your production app!
$img_content = file_get_contents($remote_img);
file_put_contents($local_img, $img_content);
$facebook->setFileUploadSupport(true);
$args=array( 'source' => '@' .$local_img , 'message' => 'Photo uploaded via!' );
$photo = $facebook->api('/me/photos', 'POST', $args);
Note : I do not know if 'source' => '@'<path> takes a realpath or not, but you'll get the idea.
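If you would rather not copy the file locally first, the url parameter route described in the first answer can be sketched like this (untested; it assumes cap3.php is publicly reachable by Facebook's servers and sends a proper image Content-Type header, and $facebook is the SDK object from the question):

```php
<?php
// Hypothetical sketch: let Facebook fetch the image itself via 'url'
// instead of uploading local bytes via 'source'.
$args = array(
    'url'     => 'http://www.example.com/cap3.php',
    'message' => 'Photo uploaded via!',
);
$photo = $facebook->api('/me/photos', 'POST', $args);
```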
Decipher LINQ to Objects query involving strings
(Disclaimer: I am new to LINQ, and I did do my homework, but all my research came up with examples that deal with strings directly. My code deals with an ObservableCollection of objects where each object contains a string.)
I have the following code:
public class Word
{
public string _word { get; set; }
// Other properties.
}
ObservableCollection<Word> items = new ObservableCollection<Word>();
items.Add(new Word() { _word = "Alfa" });
items.Add(new Word() { _word = "Bravo" });
items.Add(new Word() { _word = "Charlie" });
I am trying to find the words that have an "r" in them:
IEnumerable<Word> subset = from ii in items select ii;
subset = subset.Where(p => p._word.Contains("r"));
The above works (I get 'Bravo' and 'Charlie').
My question: I have devised the above LINQ query from bits and pieces of examples I found online/in books.
How does it do what it does?
Is there be a better/more straightforward way?
Thanks.
You're mixing query and method syntax. Btw, why do you have a class Word with a single property _word at all? Also, please follow .NET Naming Conventions.
@TimSchmelter The class contains other properties as I have shown with a comment - I did not detail them here. Also, could you please expand on your comment re: query/method syntax? Which is which?
Query-syntax: var query = from item in collection method-syntax: var query = collection.Select(item => item). Here's a question about the pros and cons: http://stackoverflow.com/questions/214500/linq-fluent-and-query-expression-is-there-any-benefits-of-one-over-other
You could achieve what you want more simply, like below:
IEnumerable<Word> subset = items.Where(x => x._word.Contains("r"));
With the above LINQ query, you filter the collection called items using an extension method called Where. Each element of items that satisfies the filter in the Where method will be contained in subset. Inside the Where you have defined your filter using a lambda expression. On the left side of => you have your input (an arbitrary element of items), while on the right side of => you have your output, which in your case will be either true or false. On the right side of =>, you use the string's Contains method, which in simple terms checks whether the string you pass as a parameter is contained in the _word property of the element x.
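Since the comments contrast the two syntaxes, here is a self-contained sketch of both forms side by side (a minimal console program using a List for brevity; the compiler translates the query form into the same Where call):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Word { public string _word { get; set; } }

class Demo
{
    static void Main()
    {
        var items = new List<Word> {
            new Word { _word = "Alfa" },
            new Word { _word = "Bravo" },
            new Word { _word = "Charlie" },
        };

        // Method syntax: Where applies the predicate to each element.
        IEnumerable<Word> methodSyntax = items.Where(w => w._word.Contains("r"));

        // Query syntax: compiled into the same Where call as above.
        IEnumerable<Word> querySyntax = from w in items
                                        where w._word.Contains("r")
                                        select w;

        foreach (var w in methodSyntax) Console.WriteLine(w._word); // Bravo, Charlie
    }
}
```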
Yes, this works, thank you. But is Where an extension method? I was under the impression that extension methods are those that you define yourself. Sorry if this is a stupid question.
Where is an extension method defined in the Enumerable-class, but @Christos is wrong regarding the Contains method, it is a normal member method of the String class. You can read about extension methods here: http://msdn.microsoft.com/en-us/library/bb383977.aspx
@frankkoch oops you are correct. Thanks for pointing this out.
Add shortcode to woocommerce product short description
I'm trying to add a shortcode that will be placed on all single product posts, specifically in the product's short description. I went to the WooCommerce templates, found a file "short-description", and tried placing the following code, but it's not working:
<?php echo do_shortcode("[wpqr-code]"); ?>
This shortcode is to supposed to generate and display a qr code on each product post where it is placed.
Please elaborate more, e.g. where should this description show? The listing page or the details page?
I found a solution, in case anyone needs it. Place this code in the functions.php file:
add_action( 'woocommerce_before_add_to_cart_form', 'enfold_customization_extra_product_content', 15 );
function enfold_customization_extra_product_content() {
echo do_shortcode("[wpqr-code]");
}
Parenthesis Checking Using Stack Python
I am learning data structures in Python and working on stacks. I am trying to create a code that uses stacks to match the parenthesis in an expression, however I am not getting the right answer. This needs to be done without the use of Python libraries.
class Stack():
def __init__(self): # Initialize a new stack
self._items = []
self._size = 0
self._top = -1
def push(self, new_item): # Append the new item to the stack
self._items.append(new_item)
self._top += 1
self._size += 1
self._items[self._top] = new_item
def pop(self): # Remove and return the last item from the stack
old_item = self._items[self._top]
self._top -= 1
self._size -= 1
return old_item
def size(self): # Return the total number of elements in the stack
return len(self._items)
def isEmpty(self): # Return True if the stack is empty and False if it is not empty
if self._items == 0:
return True
else:
return False
def peek(self): # Return the element at the top of the stack or return None if the stack is empty
if self.isEmpty():
return None
else:
return self._items[self._top]
def check_Par(exp):
opening = ['(', '[', '{']
closing = [')', ']', '}']
balanced = True
s = Stack()
for x in exp:
if x in opening:
s.push(x)
elif x in closing:
position = s.pop()
if closing.index(x) and opening.index(position):
balanced = True
else:
balanced = False
pass
if s.isEmpty():
balanced = False
return balanced
exp1 = "(2)+(-5)" # True
exp2 = "((2))*((3))" # True
exp3 = "(4))]" #False
print(check_Par(exp1))
print(check_Par(exp2))
print(check_Par(exp3))
I get an error for line 14 IndexError: list index out of range
Also, I know the for loop is not completed and I am having a hard time fix it.
Any advice it would be greatly appreciated.
Lois
This doesn't answer your problem, but you can use a list directly instead of making your own stack class. You can use list.append and list.pop.
Do you really need to implement stack?
@Moosefeather - I know that list.append will get me the right answer, however, i need to use the stack class....
@toRex - unfortunately yes.
@Lois You are getting IndexError because s.pop() gets called when s._top < 0 and s._size == 0 at some point.
@Moosefeather - any advice on how to fix it?
@Lois actually the isEmpty method is completely wrong, it will always return False.
@Moosefeather - are you referring to isEmpty in the stack class or in the check_par function?
Both, they're the same, in check_Par you're just calling the method from the Stack object. Hope my answer helps.
@Moosefeather thank you
@Moosefeather - Do you mind elaborating a bit more on your answer? If the purpose is to use the Stack class to check if the parenthesis are balanced shouldn't the isEmpty() from the Stack class be used, the same way the pop() is? or do I have this all wrong???
@Lois isEmpty should tell you whether your stack object is empty or not. With lists it's the equivalent of doing len(lst) > 0 or not lst. So you can replace the not stack with stack.isEmpty() (Assuming isEmpty works correctly). Finally, lst.pop() removes the element at the top of the stack and returns it. You can always check the docs or search for these questions yourself).
Sorry I mean't len(lst) == 0 not len(lst) > 0
Here's a solution using a list as a stack, you can translate it to your stack class:
closing_to_opening = {')': '(', ']': '[', '}': '{'}
opening = list(closing_to_opening.values())
def check_par(exp):
stack = []
for c in exp:
if c in opening:
stack.append(c)
elif c in closing_to_opening:
if not stack or stack.pop() != closing_to_opening[c]:
# The above checks '(' <-> ')', '[' <-> ']' etc
return False
# return True if stack is empty:
return not stack
how do I fix the filter map and mapWithState function
I am a new Scala coder. I have a flatMap function which returns a FlatMappedDStream object; it is a Spark Streaming job, and the handler function returns a Map[String, Any]. The code is below:
val parseAction = filterActions.flatMap(record => ParseOperation.parseMatch(categoryMap, record))
function def:
val parseMatch = (categoryMap: collection.Map[Int, String], record: Map[String, Any]) => {
record.get("operation").get.toString match {
case "view" => parseView(categoryMap, record)
case "impression" => parseRecord(record)
case "click" => parseRecord(record)
case _ => ListBuffer.apply(record)
}
}
The parseMatch function returns the processed streaming records, whose type is Map[String, Any]. Now I want to print all results and feed them into the new filter, map, and mapWithState functions. I tried it but it doesn't work.
the wrong code is below:
val finalActions = parseAction.filter(record => record.get("invalid").get == None)
val userModels = finalActions.map(record => (record.asInstanceOf[Map[String, Any]].getOrElse("deviceid", ""), record))
.mapWithState(StateSpec.function(stateUpdateFunction))
the mapWithState function is:
val stateUpdateFunction = (deviceId: Any, newRecord: Option[Map[String, Any]], stateData: State[Map[String, Any]]) => {
XXXX
}
But the filter function and mapWithState function are not correct. How do I fix them?
I fixed it: I changed my return type from Map[String, Any] to ListBuffer[Map[String, Any]], and it works!
Is AskDifferent the right place for bash questions?
If I had a question solely bash or bash-related, would AskDifferent be the right place to ask? Or is there another community more appropriate for that?
As bash is a part of OS X, Ask Different is a place to ask, as can be seen from the bash and shell tags.
For OS X-specific bash questions, e.g. how to set environment variables for GUI apps and which files are read when a Terminal starts, this is really the main site (Super User is also OK with these, as any OS X question here is also on topic there).
Bash, from a user's perspective, is also on topic on Super User, Unix & Linux, Ask Ubuntu, and no doubt others. Some questions about programming in bash could be on topic for Stack Overflow, but I think that is not going to be the best site.
ecpg insert null with host variable (PostgreSQL)
I want to insert a NULL value into a PostgreSQL table with an ecpg host variable, but I have no idea how to do this. Here is a simple example:
EXEC SQL BEGIN DECLARE SECTION;
char var1;
int var2;
EXEC SQL END DECLARE SECTION;
int main(){
EXEC SQL CONNECT TO .....
create();
insert();
EXEC SQL COMMIT WORK;
return 0;
}
void create(){
EXEC SQL CREATE TABLE mytable(var1 char(10), var2 int);
}
void insert(){
EXEC SQL INSERT INTO mytable (var1, var2) VALUES (:var1, :var2);
}
I want to insert NULL into var1 and var2 in the database. Does anyone know how to do that with host variables (:var1, :var2)?
*Replacing ":var1" with "NULL" works fine, but it does not seem like a good method.
*I know that you can determine whether a variable is null via an indicator:
http://www.postgresql.org/docs/8.3/static/ecpg-variables.html
but it doesn't tell me how to insert or update a value with this method.
I found that "insert" can use an indicator too, like this:
short var1_ind, var2_ind;
void insert(){
EXEC SQL INSERT INTO mytable (var1, var2 )
VALUES (:var1 INDICATOR :var1_ind, :var2 INDICATOR :var2_ind);
}
If you want to insert NULL into var1, just make the indicator negative:
var1_ind = -1;
After assigning -1 to var1_ind, it inserts NULL into var1 in the DB, whatever the value of :var1 is.
Here is some information from the manual:
The indicator variable val_ind will be zero if the value was not null,
and it will be negative if the value was null.
The indicator has another function: if the indicator value is
positive, it means that the value is not null, but it was truncated
when it was stored in the host variable.
Docbook - more images in one figure
Does DocBook have something similar to subfig in LaTeX?
I want to put two images side by side in a single figure - how is this done in DocBook?
Try my answer and the examples provided here:
http://tex.stackexchange.com/questions/68001/side-by-side-minipage-figures/193104#193104
You can have two (or more) images inside single figure.
<figure><title>The Pythagorean Theorem Illustrated</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/pythag.png"/>
</imageobject>
<textobject><phrase>An illustration of the Pythagorean Theorem</phrase></textobject>
</mediaobject>
<mediaobject>
<imageobject>
<imagedata fileref="figures/pythag2.png"/>
</imageobject>
<textobject><phrase>the second</phrase></textobject>
</mediaobject>
</figure>
But according to http://docbook.org/tdg/en/html/figure.html the DocBook standard does not specify how these elements are to be presented with respect to one another. In other words, you have to develop the presentation on your own.
If you had the XSLT for converting DocBook to HTML (like I do), you could add a CSS rule to float the images inside the figure block.
I cannot modify the XSLT that is used (the transformation is done by the publisher). Is there some other way? Tables come to mind...
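As a rough illustration of the CSS-rule approach: the selector below is an assumption, since the actual class names depend on which XSL stylesheets generate the HTML (the common DocBook XSL stylesheets wrap a `mediaobject` in a `div` with a matching class, but yours may differ).

```css
/* Hypothetical selector — inspect the generated HTML to find the real class names */
div.figure div.mediaobject {
  float: left;
  margin-right: 1em;
}
```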
Then you have to ask this question to the publisher.
Well, another solution would be to create single image out of two by merging.
You might be able to get around your limitation by using two inline mediaobjects sized appropriately.
<inlinemediaobject>
<imageobject><imagedata fileref='image1' /></imageobject>
</inlinemediaobject>
<inlinemediaobject>
<imageobject><imagedata fileref='image2' /></imageobject>
</inlinemediaobject>
This works marginally well with many stylesheets, though the end result will depend on your publisher. I never thought this was very "good" XML, though...
Here is the way I did it (after a lot of tries). I am using this to generate a PDF (using Publican). I still need to check whether it works with HTML.
<figure id="fig09">
<title>.....</title>
<inlinemediaobject>
<imageobject>
<imagedata align="left" fileref="images/waveformSingle.png" scale="30"/>
</imageobject>
</inlinemediaobject>
<inlinemediaobject>
<imageobject>
<imagedata align="right" fileref="images/waveformAll.png" scale="30"/>
</imageobject>
<textobject>
<phrase>.....</phrase>
</textobject>
</inlinemediaobject>
</figure>
DocBook 5.2 is bringing the subfigure feature from LaTeX using formalgroup (available from b05). akond's example would look like this (and work as expected, when the tooling is updated):
<formalgroup>
<title>The Pythagorean Theorem Illustrated</title>
<figure>
<title>An illustration of the Pythagorean Theorem</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/pythag.png"/>
</imageobject>
</mediaobject>
</figure>
<figure>
<title>The second illustration of the Pythagorean Theorem</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/pythag2.png"/>
</imageobject>
</mediaobject>
</figure>
</formalgroup>
Java Runtime.getRuntime().exec unable to catch all output
My shell script run.sh
echo "Running my.r"
Rscript /opt/myproject/my.r
Output of run.sh from command line
Running my.r
2019-06-14 job starts at 2019-06-13 16:52:21
========================================================================
1. Load parameters from csv files.
Error in file(file, "rt") : cannot open the connection
Calls: automation.engine ... %>% -> eval -> eval -> read.csv -> read.table -> file
In addition: Warning message:
In file(file, "rt") :
cannot open file '/...../_nodes.csv': No such file or directory
However my Java program can only capture the first line "Running my.r". How can I catch every single line of the output? Below is my Java code:
Process proc = Runtime.getRuntime().exec("/opt/myproject/run.sh");
BufferedReader read = new BufferedReader(new InputStreamReader(
proc.getInputStream()));
try {
proc.waitFor();
} catch (InterruptedException e) {
e.printStackTrace();
throw e;
}
while (read.ready()) {
String line = new String(read.readLine());
results += "\r\n" + line;
System.out.println(line);
}
That is because you are only capturing standard output from Process#getInputStream (like System.out), and errors are printed to the error stream (System.err). You will have to capture Process#getErrorStream() as well.
Instead of Runtime.exec() you can use ProcessBuilder and then invoke redirectErrorStream on it to merge those two streams.
https://docs.oracle.com/javase/7/docs/api/java/lang/ProcessBuilder.html#redirectErrorStream()
Yes it is the getErrorStream. Thank you for your quick answer!
Also note that if you wait for the process to finish before reading, the process may deadlock if it outputs more than the system's pipe buffer size (e.g. 65k on Linux).
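Combining both answers — merging stderr into stdout with ProcessBuilder and reading the output *before* waiting — a minimal sketch could look like the following. The `sh -c "echo ..."` command stands in for the real `/opt/myproject/run.sh`, just so the example is self-contained.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RunScript {
    // Runs a command, merging stderr into stdout, and returns the combined output.
    static String runAndCapture(String... cmd) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true);  // merge stderr into stdout
        Process proc = pb.start();
        StringBuilder out = new StringBuilder();
        // Read BEFORE waitFor(): otherwise a full pipe buffer can deadlock the child
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(proc.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        proc.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // "1>&2" sends the second echo to stderr; both lines appear thanks to the merge
        System.out.print(runAndCapture("sh", "-c", "echo out; echo err 1>&2"));
    }
}
```

Because the reading loop uses `readLine() != null` rather than `ready()`, it also cannot miss lines that have not arrived in the buffer yet.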
Promotion of products and suggestions for illegal product usage
In the last couple of questions I filed here at AE, I was advised to solve my issues by acquiring a Microsoft Windows licence.
I have been struggling to get TWRP to boot on one of my devices, and as a first reaction a user directed me to do it with Windows. While I regard this mostly as a raw attempt to help, even after I succeeded in flashing TWRP to the device from my system, the same user kept promoting Microsoft Windows.
In a follow-up question I specifically stated I am not willing to acquire a Microsoft Windows licence. This led not only to more promotion of Microsoft's system, but even to a clear hint to use it illegally.
I do not regard the behaviour of these two users as particularly malicious (one at least is clearly trying to help); they are possibly just being somewhat careless. On other Stack Exchange sites this kind of behaviour would be less welcome. So the question boils down to how tolerable this sort of product promotion should be at AE.
Perhaps you could specify in your questions what type of PC resources you have access to. Something like "I'm looking for solutions that will work with an Ubuntu PC" would probably suffice, and would (hopefully) avoid discussions/posturing about "Well why can't you use X instead?" since everyone's expectations would be set equally.
I see three downvotes on your question on main site. Could they be the result of meta effect?
I didn't happen to see these questions and answers when they were posted. Coming to them afresh now, I'm going to ask you to consider if maybe you've read too much into this. As Andy Yan said, he uses Ubuntu as his main desktop. He's used both Odin and Heimdall, and based on this experience, he thinks the easiest way for you to achieve your goal is to use Samsung's official tool, even if this means using Windows as a one-off, or borrowing the use of a Windows PC from a friend or workplace.
Just consider how much easier he must think it is, if it would be worth even that extra effort. Little wonder, then, that when you persisted with trying to do it the hard way, got no help, and got stuck again, he checked whether you really need to do it that way. That's not "promotion of Microsoft's system": it's just someone trying to find the expedient way, and taking care that you're not making things unnecessarily difficult for yourself because he hasn't made the trade-offs clear. I think that's the opposite of "careless".
Far from "tolerating" this kind of behaviour: we encourage it. Lots of users come here with problems and artificial constraints, and most of the time it's easier to remove the constraint than to work around it. You're welcome to hold out for an answer that involves fixing your Heimdall problem, or for one where you don't need to install a recovery at all, but you can't expect anyone to feel bad for telling you an easier answer, even if you have special reasons why you can't or won't follow it.
Hi Dan, I appreciate your answer. Just consider this issue from my perspective. To follow the suggestion that includes using Odin, I would have to start by acquiring a Microsoft Windows licence (99 €) and then install it. For that I would have to partition the hard drive, set up dual boot, install Windows, and reinstall my system. That all could easily take me a couple of days. In contrast, installing Heimdall and going through its manual took some dozens of minutes - it clearly was the most expedient procedure in my situation.
Another point here regards Heimdall. I have not yet reached a definitive conclusion, but at this stage it does not look like it failed to flash TWRP. The conjecture that it is the cause of the boot failure might be premature.
It doesn't matter how much quicker it is to set up Heimdall if it doesn't work for you. And I don't think it's unreasonable for Andy to guess that you might be able to use someone else's computer, even if you know you can't.