8,990,711
I need to get the word between `.` and `)`. I am using this: `\..*\)`. But if there is more than one `.`, I get a match from the first `.` to `)` instead of from the last `.` to `)`. E.g.: ``` abc.def.ghi.jkl.mymodname(Object sender, CompletedEventArgs e) ``` I am getting: ``` def.ghi.jkl.mymodname(Object sender, CompletedEventArgs e) ``` I want: ``` mymodname(Object sender, CompletedEventArgs e) ``` Any pointers? I'm new to regex, as you can see...
2012/01/24
[ "https://Stackoverflow.com/questions/8990711", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1167499/" ]
Be explicit. If you don't want any dots between your starting dot and the closing parenthesis, you can specify this: ``` \.[^.]*\) ``` `[^.]` is a negated [character class](http://www.regular-expressions.info/charclass.html), meaning "any character except a dot". That way, only non-dots are allowed to match after the dot. And if you don't want the leading dot to be a part of the match (but do want it to be present before the start of the match), you can use a [lookbehind assertion](http://www.regular-expressions.info/lookaround.html): ``` (?<=\.)[^.]*\) ``` This works in nearly all regex engines except for JavaScript and Ruby (until version 1.8), both of which do not support lookbehind.
It sounds like what you really want to do is strip off everything up to the last dot. In shell: ``` sed 's/.*\.//' ``` or in PHP ``` preg_replace('/^.*\./', '', $foo); ``` Beware of stray dots within your method, though. ``` def.ghi.jkl.mymodname(Object sender, "Some text.", CompletedEventArgs e) ``` For this, you might want: ``` sed -r 's/.*\.([a-zA-Z]+\()/\1/' ``` or something equivalent.
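If it helps to experiment, here is a quick sketch of both approaches in Python (Python is an assumption here; the asker's language isn't stated, but the same patterns work in most PCRE-style engines):

```python
import re

s = "abc.def.ghi.jkl.mymodname(Object sender, CompletedEventArgs e)"

# Negated character class: no dot may appear between the matched dot and
# the ")", which forces the match to start at the *last* dot.
m = re.search(r"\.([^.]*\))", s)
print(m.group(1))  # mymodname(Object sender, CompletedEventArgs e)

# Equivalent of the sed/PHP approach: the greedy ".*" swallows every
# earlier dot, so only the text after the last dot survives.
stripped = re.sub(r"^.*\.", "", s)
print(stripped)    # mymodname(Object sender, CompletedEventArgs e)
```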
4,622,184
I've been reading a tutorial called [Adding SQL Database support to your iPhone App](http://wiki.phonegap.com/w/page/16494756/Adding-SQL-Database-support-to-your-iPhone-App) (I use PhoneGap because I don't want to get into Objective-C). I've done all of that, but now when I try to display the result (`celebsDataHandler`) like this, it shows nothing: ``` <script type="text/javascript" charset="utf-8" src="db.js"></script> <script type="text/javascript" charset="utf-8"> document.write(celebsDataHandler); </script> ``` What should I do to correct this problem?
2011/01/07
[ "https://Stackoverflow.com/questions/4622184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126353/" ]
`celebsDataHandler` appears to be a function, based on what the tutorial says. You can't `document.write` a function. Have you tried using a more versatile command like `console.log` to see what the value of `celebsDataHandler` is? Or, you may wish to change the last line of the function from ``` alert(html); ``` to ``` return html; ```
First, you need to have some part of your document set up to handle showing the celebrities, for this example. Make sure you have a DOM element in your HTML page that has an ID that you can reference, for example: ``` <div id="myCelebs"></div> ``` Next, change the last line of the `celebsDataHandler` function to: ``` document.getElementById('myCelebs').innerHTML = html; ``` Finally, in your inline script, change your `document.write` call to: ``` loadCelebs(); ``` To recap what is going on: * The `loadCelebs` function contains the SQLite code that queries the database and retrieves your data. It references a **callback function** (in this case named `celebsDataHandler`) that is invoked once the data is ready to be parsed. * The `celebsDataHandler` callback function iterates over results and compiles celebrity data into HTML - which it then injects into the "myCelebs" DOM element. Hope that helps.
42,988,727
Atomic Design seems to be an interesting methodology for constructing a UX structure. But can one organism contain other organisms?
2017/03/23
[ "https://Stackoverflow.com/questions/42988727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2705328/" ]
According to this React implementation of Atomic Design: > > An organism is a group of atoms, molecules and/or other organisms: > > > <https://github.com/diegohaz/arc/wiki/Atomic-Design>
Yes, it is completely okay to include an organism inside another organism. An `organism` is actually: a group of atoms, molecules and/or other organisms.
50,914,377
I just can't seem to get is\_user\_logged\_in working outside of my wordpress directory. I have wordpress installed in mydomain/news/ If I execute for example mydomain/test.htm and have within the html an iframe where the source is within mydomain/news then the following code in that iframe works: ``` <?php session_start(); define( 'WP_USE_THEMES', false ); Include_once($_SERVER['DOCUMENT_ROOT'] . '/news/wp-load.php'); if ( is_user_logged_in() ) { $current_user = wp_get_current_user(); $username=$current_user->user_login; } ?> ``` But I need to execute the test from within the mydomain/test.htm and not from within an iframe that is within mydomain/news When I try the following code directly in mydomain/test.htm it does not return the $username - is\_user\_logged\_in() returns false. ``` <?php session_start(); define( 'WP_USE_THEMES', false ); Include_once($_SERVER['DOCUMENT_ROOT'] . '/news/wp-load.php'); if ( is_user_logged_in() ) { $current_user = wp_get_current_user(); $username=$current_user->user_login; } ?> ``` Has anyone any idea why I am not getting is\_user\_logged\_in() to return true? - I am definitely logged in when executing the test code.
2018/06/18
[ "https://Stackoverflow.com/questions/50914377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6907119/" ]
I did small corrections to your code: ``` DECLARE match_count INTEGER; v_search_string VARCHAR2 (11) := 'JOHN'; v_search_date DATE := date '1984-01-01'; BEGIN FOR t IN ( SELECT A.owner, A.table_name, A.column_name text_column_name, B.column_name date_column_name FROM all_tab_columns A JOIN all_tab_columns B ON A.TABLE_NAME = B.TABLE_NAME AND A.OWNER = B.OWNER AND A.COLUMN_NAME IN ('NAME', 'FULLNAME') AND B.DATA_TYPE = 'DATE' AND A.TABLE_NAME LIKE 'DATA%' ) LOOP BEGIN EXECUTE IMMEDIATE 'SELECT count(*) FROM ' || t.owner || '.' || t.table_name || ' WHERE ' || t.text_column_name || ' = :1' || ' and ' || t.date_column_name || ' = :2' INTO match_count USING v_search_string, v_search_date; IF match_count > 0 THEN DBMS_OUTPUT.put_line ( 'Found! '||t.table_name ); ELSE DBMS_OUTPUT.put_line ( 'No matches for '||t.table_name||'('||t.text_column_name||','||t.date_column_name||')' ); END IF; EXCEPTION WHEN OTHERS THEN DBMS_OUTPUT.put_line ( 'Error encountered trying to read from ' || t.owner || '.' || t.table_name); END; END LOOP; END; / ```
What you could try to do is to construct the where clause using a select from the `all_tab_columns` and query the corresponding table in a dynamic where clause. ``` SET serveroutput ON DECLARE v_count INTEGER; BEGIN FOR r IN (SELECT a.table_name, ' WHERE ' || LISTAGG(CASE data_type WHEN 'DATE' THEN column_name || ' = DATE ' || '''1984-01-01''' ELSE column_name || ' = ' || '''JOHN''' END, ' or ') WITHIN GROUP( ORDER BY column_name ) AS where_clause FROM user_tab_columns a WHERE column_name IN ( 'NAME', 'FULLNAME' ) OR ( data_type = 'DATE' AND column_name IN ( 'DTTM', 'BDAY', 'DATEFLD', 'DTFIELD' ) ) GROUP BY table_name) LOOP EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM '|| r.table_name|| r.where_clause INTO v_count; IF v_count > 0 THEN dbms_output.Put_line(r.table_name); END IF; END LOOP; END; / ``` Note that what I have given you is an example to give you some idea; you may tweak the query for the implicit cursor loop (within `r IN ()`) to suit your needs using `AND/OR/JOIN` conditions, the `INTERSECT` operator, etc., however you want. I did it this way in order to test it from my end. **EDIT**: Another option is a brute-force approach, i.e. to simply search all the tables in the database with a dynamic where clause and ignore the exception if the columns in the where clause are not found in a table. ``` SET serveroutput ON DECLARE v_count NUMBER; v_where_clause VARCHAR2(400) := ' WHERE DTTM = DATE ''1984-01-01'' AND FULLNAME = ''JOHN'''; BEGIN FOR r IN (SELECT owner, table_name FROM all_tables) LOOP BEGIN EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM '|| r.owner|| '.'|| r.table_name || v_where_clause INTO v_count; IF v_count > 0 THEN dbms_output.put_line(r.owner||'.'||r.table_name); END IF; EXCEPTION WHEN OTHERS THEN NULL; --It is ok in this case as you know why it fails. END; END LOOP; END; / ```
18,017,326
I am getting "Segmentation fault (core dumped)" after I enter the first element of the matrix. I know a segmentation fault occurs when something is accessed that is not physically in memory, but I don't see why this error is occurring here. I am using pointers on purpose because I am learning the usage of pointers. ``` #include<stdio.h> #include<stdlib.h> void main() { int i, j, m, n; int **p, **q, **res; p = (int**) malloc(10 * sizeof(int)); q = (int**) malloc(10 * sizeof(int)); res = (int**) malloc(10 * sizeof(int)); printf("Enter the number of rows and columns:"); scanf("%d %d", &m, &n); printf("Enter the elements of the matrix\n"); for(i=0;i<m;i++) { for(j=0;j<n;j++) { scanf("%d", &(*(*(p+i)+j))); } } for(i=0;i<m;i++) { for(j=0;j<n;j++) { printf("%d ", (*(*(p+i)+j))); } printf("\n"); } } ```
2013/08/02
[ "https://Stackoverflow.com/questions/18017326", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2631074/" ]
It's because you don't actually allocate memory for the data inside `p` or `q` or `res`. You allocate space for ten integers, but you should start by allocating space for ten integer *pointers* and then allocate the data inside that. So like this: ``` /* Get `m` and `n`... */ p = malloc(m * sizeof(int *)); for (i = 0; i < m; i++) p[i] = malloc(n * sizeof(int)); ``` This of course has to be done for the others too. --- Also, you *do* know that you can access these using the same syntax as arrays? No need for the pointer arithmetic or pointer dereference. Just a simple `p[i][j]` will work fine.
It starts here: ``` p = (int**) malloc(10 * sizeof(int)); ``` As `p` is `int **`, this should be ``` p = malloc(10 * sizeof(int *)); ``` Now you have allocated memory for 10 pointers, but still no memory for the individual ints. So add ``` for( i=0; i<10; i++ ) p[i] = malloc(10 * sizeof(int)); ``` Now you can use `&(*(*(p+i)+j))` (I would prefer to write `&p[i][j]`) for 0 <= i,j < 10. The same would apply to `q` and `res` too, if you had used them.
18,017,326
I am getting "Segmentation fault (core dumped)" after I enter the first element of the matrix. I know a segmentation fault occurs when something is accessed that is not physically in memory, but I don't see why this error is occurring here. I am using pointers on purpose because I am learning the usage of pointers. ``` #include<stdio.h> #include<stdlib.h> void main() { int i, j, m, n; int **p, **q, **res; p = (int**) malloc(10 * sizeof(int)); q = (int**) malloc(10 * sizeof(int)); res = (int**) malloc(10 * sizeof(int)); printf("Enter the number of rows and columns:"); scanf("%d %d", &m, &n); printf("Enter the elements of the matrix\n"); for(i=0;i<m;i++) { for(j=0;j<n;j++) { scanf("%d", &(*(*(p+i)+j))); } } for(i=0;i<m;i++) { for(j=0;j<n;j++) { printf("%d ", (*(*(p+i)+j))); } printf("\n"); } } ```
2013/08/02
[ "https://Stackoverflow.com/questions/18017326", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2631074/" ]
Various errors in your code. I have inlined the comments to the changes. I have removed `q` and `res`. There are two variants of the code, one using a single "block" of memory of size `m * n` and another using one block of memory of size `m` to keep `m` pointers to `m` other blocks of memory of size `n`. **using a single "block" of memory of size `m * n`** *useful when n is constant for each of the m lines* ``` #include <stdio.h> #include <stdlib.h> void main() { int i,j,m,n; /* Changed to *p. Instead of an array of arrays you'll use a single block of memory */ int *p; printf("Enter the number of rows and columns:"); scanf("%d %d",&m,&n); /* m rows and n columns = m * n "cells" */ p = (int*) malloc(m * n * sizeof(int)); printf("Enter the elements of the matrix\n"); for (i=0;i<m;i++) { for (j=0;j<n;j++) { /* the element at row i and column j is the (i * n) + j element of the block of memory (n columns per row), so p + (i*n) + j . It's already an address to memory, so you don't need the & */ scanf("%d", (p + (i*n) + j)); } } for (i=0;i<m;i++) { for (j=0;j<n;j++) { /* same as before, but this time you have to dereference the address */ printf("%d ", *(p + (i*n) + j)); } printf("\n"); } } ``` --- **using one block of memory of size `m` to keep `m` pointers to `m` other blocks of memory of size `n`** *useful when n is variable for each of the m lines ("jagged" arrays)* ``` #include<stdio.h> #include<stdlib.h> void main() { int i,j,m,n; int **p; printf("Enter the number of rows and columns:"); scanf("%d %d",&m,&n); /* We will have m "rows", each element a ptr to a full row of n columns Note that each element has size == sizeof(int*) because it's a ptr to an array of int */ p = (int**) malloc(m * sizeof(int*)); printf("Enter the elements of the matrix\n"); for (i=0;i<m;i++) { /* For each row we have to malloc the space for the columns (elements) */ *(p + i) = (int*)malloc(n * sizeof(int)); for (j=0;j<n;j++) { /* Compare the reference to the one before, note that we first dereference *(p + i) 
to get the ptr of the i row and then to this ptr we add the column index */ scanf("%d", *(p + i) + j); } } for (i=0;i<m;i++) { for (j=0;j<n;j++) { /* Note the double dereferencing, first to the address of the row of data and then to the exact column */ printf("%d ", *(*(p + i) + j)); } printf("\n"); } } ```
It starts here: ``` p = (int**) malloc(10 * sizeof(int)); ``` As `p` is `int **`, this should be ``` p = malloc(10 * sizeof(int *)); ``` Now you have allocated memory for 10 pointers, but still no memory for the individual ints. So add ``` for( i=0; i<10; i++ ) p[i] = malloc(10 * sizeof(int)); ``` Now you can use `&(*(*(p+i)+j))` (I would prefer to write `&p[i][j]`) for 0 <= i,j < 10. The same would apply to `q` and `res` too, if you had used them.
1,721,443
I'm configuring some macros inside of Logitech G Hub, setting them up for each application. I was going through the shortcuts in File Explorer and assigning them to hotkeys, although I can't work out if there is a shortcut to change how files are organised. Does anyone know the shortcut for that? Thanks in advance, Tom
2022/05/17
[ "https://superuser.com/questions/1721443", "https://superuser.com", "https://superuser.com/users/1693448/" ]
Use `Ctrl``Shift``1` … `Ctrl``Shift``9` to switch between layouts. Use `Ctrl` + mouse scroll wheel to zoom; this will also switch between layouts automatically after the zoom limit for the current layout has been reached.
The closest I can get is `Alt`+`V`, then `L` to focus the layout panel. Then you need to use the cursor keys to navigate around the layout pane. If you're talking about sorting, then it's `Alt`+`V`, then `O`.
26,052
> > Les têtes se courbèrent sur les cartons, et le nouveau resta pendant > deux heures dans une tenue exemplaire, quoiqu'il y eût bien, de temps > à autre, quelque boulette de papier lancée d'un bec de plume **qui > vînt** s'éclabousser sur sa figure. > > > (*Madame Bovary*, Chapitre I) The form *vînt* is the imperfect subjunctive of the verb *venir*. But in this sentence the verb *vînt* is used without *que*. Why?
2017/06/09
[ "https://french.stackexchange.com/questions/26052", "https://french.stackexchange.com", "https://french.stackexchange.com/users/13276/" ]
The subjunctive is common in subordinate clauses that serve to characterize. Another example: > > J'aimerais une voiture qui ait une belle couleur. > > > In this sentence, the colour is what matters. We are making a presupposition about the possible cars. This is different from: > > J'aimerais une voiture qui a une belle couleur. > > > Here the car may have been chosen for some other reason; it is simply noted that it has a nice colour.
I do think that the kind of subjunctive in this sentence is what Glanville Price calls the [generic subjunctive](https://books.google.ro/books?id=p0lZAwAAQBAJ&pg=PT286&lpg=PT286&dq=%22generic%20subjunctive%22&source=bl&ots=kVd_cFtO-J&sig=U9GqZnRybpM8k3-8OEWcq3zWf0o&hl=ro&sa=X&ved=0ahUKEwig49uHjMXUAhXMbZoKHeoUDLMQ6AEIWDAG#v=onepage&q=%22generic%20subjunctive%22&f=false), which is to be found in relative clauses. This particular type of subjunctive, as Glanville Price further explains, relates to "a possible member or members of a class." The member of the class of "paper," which makes the subjunctive the appropriate choice in this context, is "[une] boulette de papier," whose indefiniteness is only emphasized by "quelque." If you think I am right, please let me know by upvoting my self-answer... Edit to add the examples from Glanville Price's *Comprehensive French Grammar*: > > An example will help to make this clear. If I ask someone: ‘Could > you show me the road that leads to the station?’, the relative > clause ‘that leads . . . etc.’ describes a particular road that I know (or, at any rate, that I assume) actually exists – the French > equivalent has the indicative, *Pourriez-vous m’indiquer le chemin > qui conduit à la gare* ? Likewise, if I say: ‘I am looking for a road > [i.e. a road that I know exists and that I am describing] that leads > to the station’, the French equivalent is: *Je cherche un chemin > qui conduit à la gare*. But if I ask: ‘Could you show me a road > that leads to the station?’ (i.e. I am in fact enquiring whether any > such road exists), or if I say: ‘I am looking for a road that [if such a road exists] leads to the station’, the relative clause rather than describing a particular road indicates the type of road that I want,i.e. it relates to any members of the class (which may or may not exist) of ‘roads leading to the station’. In such cases, French has the subjunctive, viz. 
*Pourriez-vous m’indiquer un chemin qui conduise à la gare ?*, or *Je cherche un chemin qui conduise à la gare*. Likewise, the subjunctive is of course used when the existence of the class in question is represented as hypothetical, as in ‘If you know a road that leads to the station’, *Si vous connaissez un chemin qui conduise à la gare*, or is denied, as in ‘There is no road that leads to the station’, *Il n’y a pas de chemin qui conduise à la gare*. > > >
52,827,202
I got quite confused about what is what. Would you please tell me what each variable's type is? ```cpp char foo[] = "bar"; char *bar = nullptr; char const *qux = nullptr; ``` Additionally, what is the type of `"bar"`?
2018/10/16
[ "https://Stackoverflow.com/questions/52827202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3975177/" ]
The type of `foo` is `char[4]`, i.e. a [character array](https://en.cppreference.com/w/cpp/language/string_literal#Notes) containing 4 `char`s (including the trailing null character `'\0'`.) > > String literals can be used to initialize character arrays. If an array is initialized like `char str[] = "foo";`, `str` will contain a copy of the string `"foo"`. > > > The type of `bar` is `char *`, `qux` is `char const *`, just as you declared. `"bar"` is [string literal](https://en.cppreference.com/w/cpp/language/string_literal#Notes) with type `const char[4]`, i.e. an array containing 4 `const` `char`s (also including the trailing null character `'\0'`.) > > The null character (`'\0'`, `L'\0'`, `char16_t()`, etc) is always appended > to the string literal: thus, a string literal `"Hello"` is a `const char[6]` > holding the characters `'H'`, `'e'`, `'l'`, `'l'`, `'o'`, and `'\0'`. > > > Here's a helper class which could give the exact type at compile-time (the idea is borrowed from *Effective.Modern.C++* written by *Scott Meyers*). ``` template <typename> struct TD; ``` then use it like ``` TD<decltype(foo)> td1; TD<decltype("bar")> td2; TD<decltype(bar)> td3; TD<decltype(qux)> td4; ``` e.g. 
from [clang](https://wandbox.org/permlink/gE2ghrwjXRXehW9C) you'll get error message containing type information like: > > > ``` > prog.cc:12:23: error: implicit instantiation of undefined template 'TD<char [4]>' > TD<decltype(foo)> td1; > ^ > prog.cc:13:25: error: implicit instantiation of undefined template 'TD<char const (&)[4]>' > TD<decltype("bar")> td2; > ^ > prog.cc:14:23: error: implicit instantiation of undefined template 'TD<char *>' > TD<decltype(bar)> td3; > ^ > prog.cc:15:23: error: implicit instantiation of undefined template 'TD<const char *>' > TD<decltype(qux)> td4; > ^ > > ``` > > --- BTW: Because [string literals are treated as lvalues](https://stackoverflow.com/q/10004511/3309790), and [`decltype`](https://en.cppreference.com/w/cpp/language/decltype) yields type of `T&` for lvalues, so the above message from clang gives the type of `"bar"` as an lvalue-reference to array, i.e. `char const (&)[4]`.
The variable `foo` is a character array. Sort of. Somewhere in the memory of your computer the compiler has organised things so that it contains the bytes `[ 0x62, 0x61, 0x72, 0x00 ] "bar\0"`. The compiler added the trailing `\0` (0x00) for you, to mark the end of the string. Let's say the compiler put these bytes at memory address 0x00001000 - the 4096th byte. So even though we think of `foo` as a character array, the variable `foo` is actually the address of the first element of those four bytes, so `foo = 0x00001000`. The variable `bar` is a *pointer*, which is just a number. The number it holds is the address in memory of whatever it is "pointing at". Initially you set `bar` to be `nullptr`, so (probably) `bar = 0x00000000`. It's quite OK to say: ``` bar = foo; ``` Which would mean that `bar` now *points at* `foo`. Since we said the bytes for `foo` were stored at some location in memory (an "address"), that number is just copied into `bar`. So now `bar = 0x00001000` too. The variable `qux` is a pointer to a constant variable. This is a special compiler note so it can generate an error if you try to modify what it's pointing at. It's OK to code: ``` qux = foo; qux = bar; ``` Since all these things are pointers to characters.
62,355,234
I'm writing a lambda function in Python 3.8. The function connects with a dynamodb using boto3: ``` db = boto3.resource('dynamodb', region_name='foo', aws_access_key_id='foo', aws_secret_access_key='foo') ``` That is what I have while I am developing on my local machine and need to test the function. But, when I deploy this to lambda, I can just remove the credentials and my function will connect to the dynamodb [if I have the proper IAM roles and policies setup in place](https://stackoverflow.com/a/43162008/9376397). For example, this code would work fine when deployed to lambda: ``` db = boto3.resource('dynamodb', region_name='foo') ``` The question is, how can I manage this in terms of pushing code to lambda? I am using AWS SAM to deploy to AWS. Right now what I do is once I'm done developing my function, I remove the `aws_access_key_id='foo'` and `aws_secret_access_key='foo'` parts manually and then deploy the functions using SAM. There must be a better way to do this? Could I embed these into my IDE instead? I'm using PyCharm. Would that be a better way? If not, what else?
2020/06/13
[ "https://Stackoverflow.com/questions/62355234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9376397/" ]
In sam you can invoke your function locally using [sam local invoke](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-invoke.html) or [sam local start-lambda](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-lambda.html). Both of them take `--profile` parameter: > > The **AWS credentials profile** to use. > > > This will ensure that your local lambda environment executes with **correct credentials** without needing to hard code them in your code. Subsequently, you can test your code without modifications which would otherwise be needed when hard coding the key id and secret key.
You can use environment variables. Environment variables can be configured both [in pycharm](https://stackoverflow.com/questions/42708389/how-to-set-environment-variables-in-pycharm), as well as in [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html) and [AWS SAM](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy-globals.html). As stated in the [Lambda best practices](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html): "Use environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable." You can also use an environment variable to specify which environment is being used, which can then be used to explicitly determine whether credentials are necessary.
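To make the idea concrete, here is a minimal Python sketch. The variable name `APP_ENV` and the helper `resource_kwargs` are assumptions for illustration, not part of boto3 or the question's code: the helper builds the keyword arguments for `boto3.resource` from environment variables, so the hard-coded keys disappear from the source.

```python
import os

def resource_kwargs(env=os.environ):
    """Build boto3.resource(**kwargs) arguments from environment variables.

    APP_ENV is an assumed variable name flagging local development; on
    Lambda it would be unset, so only the region is passed and boto3
    falls back to the execution role's credentials.
    """
    kwargs = {"region_name": env.get("AWS_REGION", "us-east-1")}
    if env.get("APP_ENV") == "local":
        kwargs["aws_access_key_id"] = env["AWS_ACCESS_KEY_ID"]
        kwargs["aws_secret_access_key"] = env["AWS_SECRET_ACCESS_KEY"]
    return kwargs

# Usage (same call in both environments):
# db = boto3.resource("dynamodb", **resource_kwargs())
```

On Lambda you would simply not set `APP_ENV`, so the same code runs unmodified in both places.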
62,355,234
I'm writing a lambda function in Python 3.8. The function connects with a dynamodb using boto3: ``` db = boto3.resource('dynamodb', region_name='foo', aws_access_key_id='foo', aws_secret_access_key='foo') ``` That is what I have while I am developing on my local machine and need to test the function. But, when I deploy this to lambda, I can just remove the credentials and my function will connect to the dynamodb [if I have the proper IAM roles and policies setup in place](https://stackoverflow.com/a/43162008/9376397). For example, this code would work fine when deployed to lambda: ``` db = boto3.resource('dynamodb', region_name='foo') ``` The question is, how can I manage this in terms of pushing code to lambda? I am using AWS SAM to deploy to AWS. Right now what I do is once I'm done developing my function, I remove the `aws_access_key_id='foo'` and `aws_secret_access_key='foo'` parts manually and then deploy the functions using SAM. There must be a better way to do this? Could I embed these into my IDE instead? I'm using PyCharm. Would that be a better way? If not, what else?
2020/06/13
[ "https://Stackoverflow.com/questions/62355234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9376397/" ]
You should **never** put credentials in the code like that. When running the code locally, use the AWS CLI `aws configure` command to store local credentials in the `~/.aws/credentials` file. The AWS SDKs will automatically look in that file to obtain credentials.
You can use environment variables. Environment variables can be configured both [in pycharm](https://stackoverflow.com/questions/42708389/how-to-set-environment-variables-in-pycharm), as well as in [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html) and [AWS SAM](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy-globals.html). As stated in the [Lambda best practices](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html): "Use environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable." You can also use an environment variable to specify which environment is being used, which can then be used to explicitly determine whether credentials are necessary.
62,355,234
I'm writing a lambda function in Python 3.8. The function connects with a dynamodb using boto3: ``` db = boto3.resource('dynamodb', region_name='foo', aws_access_key_id='foo', aws_secret_access_key='foo') ``` That is what I have while I am developing on my local machine and need to test the function. But, when I deploy this to lambda, I can just remove the credentials and my function will connect to the dynamodb [if I have the proper IAM roles and policies setup in place](https://stackoverflow.com/a/43162008/9376397). For example, this code would work fine when deployed to lambda: ``` db = boto3.resource('dynamodb', region_name='foo') ``` The question is, how can I manage this in terms of pushing code to lambda? I am using AWS SAM to deploy to AWS. Right now what I do is once I'm done developing my function, I remove the `aws_access_key_id='foo'` and `aws_secret_access_key='foo'` parts manually and then deploy the functions using SAM. There must be a better way to do this? Could I embed these into my IDE instead? I'm using PyCharm. Would that be a better way? If not, what else?
2020/06/13
[ "https://Stackoverflow.com/questions/62355234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9376397/" ]
You should **never** put credentials in the code like that. When running the code locally, use the AWS CLI `aws configure` command to store local credentials in the `~/.aws/credentials` file. The AWS SDKs will automatically look in that file to obtain credentials.
In sam you can invoke your function locally using [sam local invoke](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-invoke.html) or [sam local start-lambda](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-lambda.html). Both of them take `--profile` parameter: > > The **AWS credentials profile** to use. > > > This will ensure that your local lambda environment executes with **correct credentials** without needing to hard code them in your code. Subsequently, you can test your code without modifications which would otherwise be needed when hard coding the key id and secret key.
51,405,867
I'm working through Selenium Webdriver, and I've come up to the issue of dynamic objects as a DOM ID. One of my instances gets generated as an ID, something like this: ``` "//*[@id="888itsAnExampleBoi573"]/div[1]/div[2]", ``` and I need to click on the button in the example item to Make Stuff Happen. Because I cannot predict what an objectID will be for my dynamic content, however, I would like to be able to do this, instead: ``` //*[contains(text(), 'example')]/div[1]/div[2]. ``` I've tried to do this, but I'm returned a strange error: ``` Caused by: org.openqa.selenium.InvalidSelectorException: invalid selector: Unable to locate an element with the xpath expression //*[contains(text(), 'example')]/div/div/div[1]/div[3]/div because of the following error: SyntaxError: Failed to execute 'evaluate' on 'Document': The string '//*[contains(text(), 'example')]/div/div/div[1]/div[3]/div' is not a valid XPath expression. ``` On a different element that is a hyperlink with text elements, I've been able to use contains(text()) to solve things, so I believe I've formatted this correctly. I've tried a few different things to solve this issue, but am at somewhat of a loss as to how to solve this. Does anyone have any ideas or resources to point me towards? Or better yet, a solution?
2018/07/18
[ "https://Stackoverflow.com/questions/51405867", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6796162/" ]
You are specifying the bucket name twice in the URL, or you are actually using the string "bucket". You can use the virtual hosted style as: <http://bucketname.s3.amazonaws.com/path/to/file> <http://bucketname.s3-aws-region.amazonaws.com/path/to/file> or the path style URL: <http://s3.amazonaws.com/bucketname/path/to/file> <http://s3-aws-region.amazonaws.com/bucketname/path/to/file> Replace "aws-region" with the region. Use the "s3-aws-region" style for regions that are not us-east-1. Examples for a bucket in South America: <http://bucketname.s3-sa-east-1.amazonaws.com/path/to/file> <http://s3-sa-east-1.amazonaws.com/bucketname/path/to/file>
Note: the AllAccessDisabled error will also be displayed when a non-existent folder path is specified (e.g., a misspelling).
25,090,599
I'm trying to limit user storage. I already set up the foreign key and all, but was wondering how I can achieve this. If anyone can point me in the right direction, that would help; I'm stuck. Here is my model.py: ``` class Document(models.Model): docfile = models.FileField(upload_to=only_filename) created_at = models.DateTimeField(auto_now_add=True) user = models.ForeignKey(User, related_name='uploaded_files') ```
2014/08/02
[ "https://Stackoverflow.com/questions/25090599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Change the settings variable `FILE_UPLOAD_MAX_MEMORY_SIZE` to limit the maximum size. Check this: [uploadfile](https://docs.djangoproject.com/en/dev/topics/http/file-uploads/#changing-upload-handler-behavior)
You're trying to give the user a total size limit for his/her storage? Does each user have the same amount, or can it be different? I would think if it can be different you'd first of all need a [custom user model](https://docs.djangoproject.com/en/dev/topics/auth/customizing/#specifying-a-custom-user-model) or some other model that keeps track of the user's storage limit. Alternately, if you just make all users have the same limit, you can skip creating a model. After that, you'll need some code to calculate how much storage the user has used. One way to achieve that would be to make your `upload_to` field upload to a user-specific directory, something like:

```
class Document(models.Model):
    # USERNAME here is pseudocode for the uploading user's name
    docfile = models.FileField(upload_to=USERNAME + '/' + only_filename)
```

And then, somewhere *before* the file gets uploaded, you can do something like this (in the views.py? in a custom model validator with a parameter?):

```
import os

def get_user_storage(username):
    user_dir = os.path.join(os.path.abspath(UPLOAD_PATH), username)
    # build full paths so getsize() can find each file
    user_files = [os.path.join(user_dir, fl) for fl in os.listdir(user_dir)]
    total = sum(os.path.getsize(fl) for fl in user_files)
    return total < WHATEVER_THE_MAXIMUM_IS
```

Once you have this function, you pass it a username, and it returns `True` if the user has not exceeded their storage or `False` if they have. One big problem with this is that it will return `True` until a user exceeds his/her storage, but it doesn't care by how much. So if you set a storage limit of 1 GB and a user with no previously uploaded files uploads something that's a terabyte, this function on its own would allow it. To prevent that from happening, you can set a maximum file upload limit. I haven't tried any of this code, but I think it might get you started in piecing together a solution.
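As a rough, framework-free sketch of that size check, here is a version using `os.walk` so that nested upload subdirectories are counted too. The quota constant and directory layout are assumptions for illustration, not Django APIs:

```python
import os

MAX_USER_STORAGE = 1024 * 1024  # 1 MB quota, illustrative only

def used_storage(user_dir):
    """Total size in bytes of every file under user_dir (recursively)."""
    total = 0
    for root, _dirs, files in os.walk(user_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def can_upload(user_dir, incoming_size):
    """True if an upload of incoming_size bytes stays within the quota."""
    return used_storage(user_dir) + incoming_size <= MAX_USER_STORAGE
```

Because this checks the incoming file's size against the remaining quota, it also rejects a single oversized upload, which addresses the "terabyte upload" problem without a separate per-file cap.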
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
I think the main answer to that question is 'exposure'. Functional programming is nothing new; I was taught Haskell at university some 12 years ago and loved it, but rarely got to use the language in my professional work. Recently there have been a number of languages gaining mainstream traction that use a multi-paradigm approach, [F#](http://www.fsharp.net/) and JavaScript being prime examples. JavaScript in particular, especially when used with a functional-style framework like [jQuery](http://jquery.com/) or [Prototype](http://www.prototypejs.org/), is becoming an everyday language for many people due to all the work on dynamic modern websites. This exposure to the functional style makes people realise the power it grants, especially when one is able to drop back to an imperative style at will. Once people are exposed, they try out more fully fledged variants of functional languages and start to use them for day-to-day tasks. With F# becoming a first-class language in Visual Studio 2010 and jQuery (et al.) becoming so important, it is becoming realistic to use these languages, rather than treating them as something obscure to play with or for making isolated programs. Remember that code has to be maintainable - a critical mass of developers must use and support languages in order for companies to feel safe in using them.
It's neat and nifty and tickles your brain. That's fine. It's also, IMHO, a classic bandwagon. A solution looking for a problem. It's like all those startup companies founded by engineers dazzled with a favorite idea, that burn brightly for a while, but are quietly overtaken by companies founded on providing what is needed. That's the new idea I would like to see take off: need-based programming, not nifty-idea-based programming. Maybe that sounds mundane, but I think it can in fact be pretty creative, and nifty.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
> Here are my questions:
> 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before?
> 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases?

Others have given good technical reasons. I think the main reason FP is gaining traction among the average developer and manager types is its promise to allow better use of multi-core CPUs. From everything I have read, FP allows easier (not easy) parallel programming. Its future widespread use depends on whether that promise is real and is fulfilled.
I think it's a combination of two trends: 1. Functional features being added to mainstream languages (e.g. C#). 2. New functional languages being created. There's probably a natural limit on the first trend, but my guess is that any new language will have to support functional programming, at least as an option, in order to be taken seriously.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
In the late '80s/early '90s, computers became powerful enough for Smalltalk-style OOP. Nowadays computers are powerful enough for FP. FP is programming at a higher level and thus often - while more pleasant to program in - not the most efficient way to solve a certain problem. But computers are so fast that you don't care. Multi-core programming can be easier with purely functional programming languages since you are forced to isolate state-changing code. Also, programming language borders are blurred today. You don't have to give up one paradigm if you want to use another. You can do FP in most popular languages, so the entry barrier is low.
I think it's a combination of two trends: 1. Functional features being added to mainstream languages (e.g. C#). 2. New functional languages being created. There's probably a natural limit on the first trend, but my guess is that any new language will have to support functional programming, at least as an option, in order to be taken seriously.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
I think it is to do with the close correlation between the functional programming paradigm and programming for the web. Ruby on Rails brought the whole functional programming approach into sharp relief, since it offered a very quick path to a functional (heh heh) web application. There is [an interesting discussion on SO](https://stackoverflow.com/questions/292033/is-functional-programming-relevant-to-web-development/292280#292280) about this, and one particular answer stands out:

> Functional programming matches web apps very well. The web app receives a HTTP request and produces a HTML result. This could be considered a function from requests to pages.
>
> Compare with desktop apps, where we typically have a long-running process, a stateful UI and dataflow in several directions. This is more suited to OO, which is concerned about objects with state and message passing.

Given that functional programming has been around for ages, I wonder why I don't see many job adverts looking for Lisp developers for greenfield web projects.
Definitely because of F#, though sometimes it is hard to tell which one is the cause and which one is the effect.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
Functional programming gives me the same tingling sense of "*wow, this is new*" as when I first started dabbling with objects years ago. I realize that FP is not a new concept by far, but neither was OO when it got its real break in the nineties when "everyone" suddenly jumped ship from procedural programming. This was largely due to the timely success of Java and later C#. I imagine the same thing will happen with FP eventually once the next batch of languages starts spreading in the same way. Much as they already have, at least in some circles, with languages like Scala and F#.
It used to be that people were writing programs to run on the desktop using the operating system's native APIs, and those APIs were (generally) written in C, so for the most part if you wanted to write a program for the native APIs, you wrote that program in C. I think the new innovation in the last 10 years or so is for a diversity of APIs to catch on, particularly for things like web development where the platform APIs are irrelevant (since constructing a web page basically involves string manipulation). Since you're not coding directly to the Win32 API or the POSIX API, that gives people the freedom to try out functional languages.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
I think the main answer to that question is 'exposure'. Functional programming is nothing new; I was taught Haskell at university some 12 years ago and loved it, but rarely got to use the language in my professional work. Recently there have been a number of languages gaining mainstream traction that use a multi-paradigm approach, [F#](http://www.fsharp.net/) and JavaScript being prime examples. JavaScript in particular, especially when used with a functional-style framework like [jQuery](http://jquery.com/) or [Prototype](http://www.prototypejs.org/), is becoming an everyday language for many people due to all the work on dynamic modern websites. This exposure to the functional style makes people realise the power it grants, especially when one is able to drop back to an imperative style at will. Once people are exposed, they try out more fully fledged variants of functional languages and start to use them for day-to-day tasks. With F# becoming a first-class language in Visual Studio 2010 and jQuery (et al.) becoming so important, it is becoming realistic to use these languages, rather than treating them as something obscure to play with or for making isolated programs. Remember that code has to be maintainable - a critical mass of developers must use and support languages in order for companies to feel safe in using them.
> Here are my questions:
> 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before?
> 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases?

Others have given good technical reasons. I think the main reason FP is gaining traction among the average developer and manager types is its promise to allow better use of multi-core CPUs. From everything I have read, FP allows easier (not easy) parallel programming. Its future widespread use depends on whether that promise is real and is fulfilled.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
In [this talk](http://blip.tv/file/1317881) Anders Hejlsberg explains his view on the topic. [EDIT] Sorry, the link was wrong. Now it points to the right place. An extremely short summary of some points of the one-hour talk: Functional languages allow for a more declarative style of programming than procedural languages, so programs written in FLs usually concentrate more on the *what* instead of the *how*. Because of their elegant mathematical structure, FLs are also easier to optimize and transform by compilers, which also enables easy meta-programming and the construction of embedded DSLs. All this together makes functional programs more succinct and self-documenting than procedural programs. Also, in the face of the manycore era of the near future, programming languages need to be able to utilize multi-threading/processing in different ways. Multi-threading on single-core machines was in effect a time-sharing mechanism, and the architecture of systems reflected this. Multi-threading on manycore machines will be very different. Functional languages are especially suitable for parallelization, since they mostly avoid state, so one needn't worry as much about the integrity of shared mutable data (because there tends to be no shared mutable data).
It's neat and nifty and tickles your brain. That's fine. It's also, IMHO, a classic bandwagon. A solution looking for a problem. It's like all those startup companies founded by engineers dazzled with a favorite idea, that burn brightly for a while, but are quietly overtaken by companies founded on providing what is needed. That's the new idea I would like to see take off: need-based programming, not nifty-idea-based programming. Maybe that sounds mundane, but I think it can in fact be pretty creative, and nifty.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
One of the major problems when programming in traditional languages like C, Java, C#, assembler, etc. is that you have an awkward *sequence* of steps you have to take in order to accomplish a given task, because you need to have prepared all the dependencies first, and THEIR dependencies before that. An example: in order to do A, you must have B and C present, and B depends on D and E, resulting in something like

* D
* E
* C
* B
* A

because you have to have the ingredients prepared before you can use them. Functional languages, especially the lazy ones, turn this upside down. By letting A say it needs B and C, and letting the language runtime figure out when to get B and C (which in turn require D and E), all of which are evaluated when needed to evaluate A, you can create very small and concise building blocks, which result in small and concise programs. The lazy languages also allow using infinite lists, as only the elements actually used are calculated, and without needing to store the whole data structure in memory before being able to use it. The really nice trick is that this automatic "oh, I need a B and a C" mechanism is *scalable*, because there is no restriction - as in the sequential program - about where and when this evaluation can happen, so it can happen at the same time and even on different processors or computers. *That* is why functional languages are interesting - because the "what to do when" mechanism is taken over by the runtime system, as opposed to the programmer having to do it manually. This is as important a difference as automatic garbage collection was for Java versus C, and one of the major reasons why it is easier to write robust, scalable multi-threaded software in Java than in C. It is even easier to write robust, scalable multi-threaded software in functional languages...
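This "evaluate only when needed" behaviour can be loosely imitated in Python with generators. This is a rough analogy only, since Python is not lazy the way Haskell is, but it shows elements being computed on demand rather than up front:

```python
from itertools import count, islice

# An "infinite list" of squares: no element is computed until something
# actually consumes it, much like a lazy list's elements.
squares = (n * n for n in count(1))

# Only the first five elements are ever evaluated; the rest stay pending.
first_five = list(islice(squares, 5))
print(first_five)  # [1, 4, 9, 16, 25]
```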
Definitely because of F#, though sometimes it is hard to tell which one is the cause and which one is the effect.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
I think the main answer to that question is 'exposure'. Functional programming is nothing new; I was taught Haskell at university some 12 years ago and loved it, but rarely got to use the language in my professional work. Recently there have been a number of languages gaining mainstream traction that use a multi-paradigm approach, [F#](http://www.fsharp.net/) and JavaScript being prime examples. JavaScript in particular, especially when used with a functional-style framework like [jQuery](http://jquery.com/) or [Prototype](http://www.prototypejs.org/), is becoming an everyday language for many people due to all the work on dynamic modern websites. This exposure to the functional style makes people realise the power it grants, especially when one is able to drop back to an imperative style at will. Once people are exposed, they try out more fully fledged variants of functional languages and start to use them for day-to-day tasks. With F# becoming a first-class language in Visual Studio 2010 and jQuery (et al.) becoming so important, it is becoming realistic to use these languages, rather than treating them as something obscure to play with or for making isolated programs. Remember that code has to be maintainable - a critical mass of developers must use and support languages in order for companies to feel safe in using them.
Definitely because of F#, though sometimes it is hard to tell which one is the cause and which one is the effect.
20,036
I've been hearing a lot of enthusiasm about functional programming languages lately, with regard to Scala, Clojure, and F#. I've recently started studying Haskell to learn the FP paradigm. I love it, it's really fun, and it fits my math background. But will it ever really matter? Obviously, it's hardly a new idea. Here are my questions: 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before? 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases? In other words, can anyone help me understand where this comes from and where it might be going?
2010/11/19
[ "https://softwareengineering.stackexchange.com/questions/20036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2329/" ]
In the late '80s/early '90s, computers became powerful enough for Smalltalk-style OOP. Nowadays computers are powerful enough for FP. FP is programming at a higher level and thus often - while more pleasant to program in - not the most efficient way to solve a certain problem. But computers are so fast that you don't care. Multi-core programming can be easier with purely functional programming languages since you are forced to isolate state-changing code. Also, programming language borders are blurred today. You don't have to give up one paradigm if you want to use another. You can do FP in most popular languages, so the entry barrier is low.
> Here are my questions:
> 1. What has contributed to the recent FP enthusiasm? Is it merely boredom with OO, or has something changed to make FP more needed than before?
> 2. Is this indicative of an FP future? Or is this a fad, like object-oriented databases?

Others have given good technical reasons. I think the main reason FP is gaining traction among the average developer and manager types is its promise to allow better use of multi-core CPUs. From everything I have read, FP allows easier (not easy) parallel programming. Its future widespread use depends on whether that promise is real and is fulfilled.
6,200,908
I have a string variable with the value `"Top;Left"`. Is it possible to easily parse this to `Control.Anchor` (without using `if`'s)? `Enum.Parse` doesn't work because `Anchor` can take a value, for example, `Top;Left;`, but the `AnchorEnum` can only take `Top`, or `Left`, or `Right`, or `Bottom`, or `None`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/777621/" ]
Use `String.Split`, parse each of them individually using `Enum.TryParse` or `Enum.Parse`, then `OR` the resulting values.
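The same split, parse each part, then OR recipe can be illustrated outside C# as well. Here is a Python sketch using `enum.Flag`, re-declaring the anchor members purely for the example (they are not the WinForms enum):

```python
from enum import Flag, auto
from functools import reduce

class AnchorStyles(Flag):
    NONE = 0
    TOP = auto()
    BOTTOM = auto()
    LEFT = auto()
    RIGHT = auto()

def parse_anchor(text):
    """Split on ';', look up each non-empty part, and OR the results."""
    parts = [p.strip() for p in text.split(";") if p.strip()]
    return reduce(lambda a, b: a | b,
                  (AnchorStyles[p.upper()] for p in parts),
                  AnchorStyles.NONE)

print(parse_anchor("Top;Left"))
```

Skipping empty parts also makes trailing separators such as `"Top;Left;"` harmless.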
There is no direct way to parse this, but it's pretty easy to write something that does:

```
public static AnchorStyles Parse(string str)
{
    return str.Split(';')
        .Select(s => (AnchorStyles) Enum.Parse(typeof (AnchorStyles), s, true))
        .Aggregate((a1, a2) => a1 | a2);
}
```
6,200,908
I have a string variable with the value `"Top;Left"`. Is it possible to easily parse this to `Control.Anchor` (without using `if`'s)? `Enum.Parse` doesn't work because `Anchor` can take a value, for example, `Top;Left;`, but the `AnchorEnum` can only take `Top`, or `Left`, or `Right`, or `Bottom`, or `None`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/777621/" ]
Use `String.Split`, parse each of them individually using `Enum.TryParse` or `Enum.Parse`, then `OR` the resulting values.
It's easy to parse this without iterating over each option. ``` var s = "Top;Left"; s = s.Replace(";", ", "); var e = Enum.Parse(typeof(AnchorStyles), s); ``` The only gotcha is it has to be comma separated, not semicolon.
6,200,908
I have a string variable with the value `"Top;Left"`. Is it possible to easily parse this to `Control.Anchor` (without using `if`'s)? `Enum.Parse` doesn't work because `Anchor` can take a value, for example, `Top;Left;`, but the `AnchorEnum` can only take `Top`, or `Left`, or `Right`, or `Bottom`, or `None`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/777621/" ]
Use `String.Split`, parse each of them individually using `Enum.TryParse` or `Enum.Parse`, then `OR` the resulting values.
```
string value = "Top;Left";
var anchor = (System.Windows.Forms.AnchorStyles)Enum.Parse(typeof(System.Windows.Forms.AnchorStyles), value.Replace(";", ", "));
```
6,200,908
I have a string variable with the value `"Top;Left"`. Is it possible to easily parse this to `Control.Anchor` (without using `if`'s)? `Enum.Parse` doesn't work because `Anchor` can take a value, for example, `Top;Left;`, but the `AnchorEnum` can only take `Top`, or `Left`, or `Right`, or `Bottom`, or `None`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/777621/" ]
It's easy to parse this without iterating over each option. ``` var s = "Top;Left"; s = s.Replace(";", ", "); var e = Enum.Parse(typeof(AnchorStyles), s); ``` The only gotcha is it has to be comma separated, not semicolon.
There is no direct way to parse this, but it's pretty easy to write something that does:

```
public static AnchorStyles Parse(string str)
{
    return str.Split(';')
        .Select(s => (AnchorStyles) Enum.Parse(typeof (AnchorStyles), s, true))
        .Aggregate((a1, a2) => a1 | a2);
}
```
6,200,908
I have a string variable with the value `"Top;Left"`. Is it possible to easily parse this to `Control.Anchor` (without using `if`'s)? `Enum.Parse` doesn't work because `Anchor` can take a value, for example, `Top;Left;`, but the `AnchorEnum` can only take `Top`, or `Left`, or `Right`, or `Bottom`, or `None`.
2011/06/01
[ "https://Stackoverflow.com/questions/6200908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/777621/" ]
It's easy to parse this without iterating over each option. ``` var s = "Top;Left"; s = s.Replace(";", ", "); var e = Enum.Parse(typeof(AnchorStyles), s); ``` The only gotcha is it has to be comma separated, not semicolon.
```
string value = "Top;Left";
var anchor = (System.Windows.Forms.AnchorStyles)Enum.Parse(typeof(System.Windows.Forms.AnchorStyles), value.Replace(";", ", "));
```
174,761
I'm writing a game that will largely take place on only a screen's worth of hexagons, but which will require the map to expand in every direction. Also, the way I want to display those hexagons requires that I be able to iterate over them in order from top to bottom, left to right. I have already written a coordinate system, which can represent coords as 2d (axial) or 3d (cube). The system regards (0, 0) as the center hex, so that means I need to store negative coords as well. What I don't have is a data structure that fulfills my needs. I'm thinking about writing a data structure myself, which would consist of 4 two-dimensional vectors (each for one quadrant of the map with regard to positive/negative coordinates) that would be able to yield hexes in the order I want, but this solution (using 4 vectors) seems unintuitive and long to write. So is there a simpler, possibly already implemented, way?
2019/08/19
[ "https://gamedev.stackexchange.com/questions/174761", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/130865/" ]
There are several options. For example:

* An [`std::map`](http://www.cplusplus.com/reference/map/map/) which maps coordinate pairs to tiles
* An adjacency graph (each tile has pointers to its neighbors - pretty useful for pathfinding or other applications where you need to navigate from tile to tile)
* A [two-dimensional tree](https://en.wikipedia.org/wiki/K-d_tree) (allows you to quickly obtain a rectangular area of your map - useful for rendering or obtaining the local surroundings of an entity)

But all these methods have a drawback: while they grow, they allocate memory for new tiles as they need it. That means that your tile data will be spread wildly across your RAM. This is bad for CPU caching, which might result in bad performance (not as bad, though, as using growing `std::vector`s, which will from time to time copy the whole content to a different memory location when they grow). When your game world is very large, it might be useful to divide it into fixed-size chunks. Fixed size means that each chunk can be easily represented as a fixed-length array, which is great for memory locality. The chunks can then be connected through one of the options above. Another option could be to allocate tile data in bulk and then put that data into the data structures described above as needed.
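For the coordinate-map option, even a plain dictionary keyed by axial coordinates covers both of the asker's requirements: negative coordinates cost nothing extra, and an ordered traversal is a sort away. A minimal Python sketch, where the row-then-column sort is an assumed ordering convention:

```python
# Sparse hex map: axial (q, r) coordinates map to tile data. Negative
# coordinates work like any other key, so the map grows in any direction.
tiles = {}

def set_tile(q, r, data):
    tiles[(q, r)] = data

def iter_tiles(tiles):
    """Yield tiles top-to-bottom, left-to-right: sort by r, then by q."""
    for q, r in sorted(tiles, key=lambda c: (c[1], c[0])):
        yield (q, r), tiles[(q, r)]

set_tile(0, 0, "center")
set_tile(-1, 0, "west of center")
set_tile(0, -1, "above center")
print([coord for coord, _ in iter_tiles(tiles)])  # [(0, -1), (-1, 0), (0, 0)]
```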
Start with a 2d array of tiles. If they are all displayed on screen, there should not be too many tiles.

Does each of your tiles hold a lot of data? Split the data by concern and create one 2d array per concern. For example, in a SimCity-like game the pathfinding array would hold data about the terrain surface, electricity availability could be another array... It all depends on how your algorithms access the data. If you just need one kind of data for one process, then give it its own view. This will even allow for efficient multithreading.

Do you have a huge amount of data? Split it into squares. Maybe an octree could help.

Need to apply the same function to each tile? Just process it as a 1d array.

But clearly, if your tiles are all on screen you should not have more than a thousand or so, and then any data design will work.
21,632,696
I have read the posts about SQL injection, and there I saw that attackers use the query strings of sites to hack them. I want to know whether it is safe to use query strings or not, and how I can make my site safe against SQL injection.
2014/02/07
[ "https://Stackoverflow.com/questions/21632696", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3260312/" ]
A SQL injection usually comes from bugs in code that runs server side and submits SQL queries to a database. Many bugs in the way you implement this can result in a SQL injection. You can read values from a URL, but before you plug these values into a SQL query you should do some checking.

To answer your question: query strings themselves are safe; the way you use the variables that are in them may not be. As for making your site not vulnerable to SQL injection, you should implement all your data access layer code (calls to stored procedures, CRUD operations, functions, etc.) so that it is not vulnerable. For instance, if you use queries in which you pass parameterized variables, you can avoid a great deal of SQL injections.

Please take a look here <https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet>
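As a small illustration of parameterized queries (sketched in Python with the standard-library sqlite3 driver, purely as an example; the same placeholder idea exists in every major database API):

```python
import sqlite3

def find_user(conn, username):
    # The ? placeholder makes the driver bind the value safely; the
    # user input is never spliced into the SQL text itself.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "alice"))         # matches the real row
print(find_user(conn, "x' OR '1'='1"))  # classic payload stays a literal: no rows
```

Because the payload is bound as data rather than concatenated into the SQL text, the `' OR '1'='1` trick is just an unusual username that matches nothing.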
If you build your SQL statements from untrusted data, such as query strings, then you are vulnerable to SQL injection.
13,560,445
I have an android/java task where I want to get JSON values into a String array to display in a ListView and I am not sure where to begin? Thanks. ``` private String[] values; ... // this is what is returned from the web server (Debug view) // jObj = {"success":1,"0":"Mike","1":"message 1","2":"Fred","3":"message 2","4":"John","5":"message 3"}; try { if (jObj.getInt("success") == 1) { . // what i'm trying to do here is iterate thru JObj and assign values to the // values array to populate the ArrayAdapter so that the ListView displays this: // // Mike: Message 1 // Fred: Message 2 // John: Message 3 // . this.setListAdapter(new ArrayAdapter<String>( this, android.R.layout.simple_list_item_1, android.R.id.text1, values)); ListView listView = getListView(); } } catch (JSONException e) { Log.e(TAG, e.toString()); } ```
2012/11/26
[ "https://Stackoverflow.com/questions/13560445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1126991/" ]
Use an `ArrayList` instead of an `Array` to add the values retrieved from the JSON object, and then set the ArrayList as the data source for the ListView. Change your code to:

```
ArrayList<String> array_list_values = new ArrayList<String>();
try {
     if (jObj.getInt("success") == 1) {
        array_list_values.add(jObj.getString("0"));
        array_list_values.add(jObj.getString("1"));
        array_list_values.add(jObj.getString("2"));
        array_list_values.add(jObj.getString("3"));
        array_list_values.add(jObj.getString("4"));
        array_list_values.add(jObj.getString("5"));
      this.setListAdapter(new ArrayAdapter<String>(this,
      android.R.layout.simple_list_item_1, android.R.id.text1,
      array_list_values));
      ListView listView = getListView();
    }
    } catch (JSONException e) {
        Log.e(TAG, e.toString());
    }
```

EDIT: if the number of messages is not always the same (it may range from 1 to a larger number), you can iterate over the JSONObject's keys instead (skipping the `"success"` status flag so it doesn't end up in the list):

```
ArrayList<String> array_list_values = new ArrayList<String>();
try {
     if (jObj.getInt("success") == 1) {
        Iterator iter = jObj.keys();
        while(iter.hasNext()){
            String key = (String)iter.next();
            if (key.equals("success")) continue; // skip the status flag
            String value = jObj.getString(key);
            array_list_values.add(value);
        }
      this.setListAdapter(new ArrayAdapter<String>(this,
      android.R.layout.simple_list_item_1, android.R.id.text1,
      array_list_values));
      ListView listView = getListView();
    }
    } catch (JSONException e) {
        Log.e(TAG, e.toString());
    }
```
You should do something like this and use the awesome GSON library:

```
InputStream source = retrieveStream(url);
Gson gson = new Gson();
Reader reader = new InputStreamReader(source);
APIResponse response = gson.fromJson(reader, APIResponse.class);
```

With:

```
public class APIResponse{
     public String 0;
     public String 1;
     public String 2;
     public String 3;
     public String 4;
     public String 5;
     public String success;
}
```

But this won't work as written, because you need names other than 0,1,2,3,4,5 for the variables. Please change this on the server side.
13,560,445
I have an android/java task where I want to get JSON values into a String array to display in a ListView and I am not sure where to begin? Thanks. ``` private String[] values; ... // this is what is returned from the web server (Debug view) // jObj = {"success":1,"0":"Mike","1":"message 1","2":"Fred","3":"message 2","4":"John","5":"message 3"}; try { if (jObj.getInt("success") == 1) { . // what i'm trying to do here is iterate thru JObj and assign values to the // values array to populate the ArrayAdapter so that the ListView displays this: // // Mike: Message 1 // Fred: Message 2 // John: Message 3 // . this.setListAdapter(new ArrayAdapter<String>( this, android.R.layout.simple_list_item_1, android.R.id.text1, values)); ListView listView = getListView(); } } catch (JSONException e) { Log.e(TAG, e.toString()); } ```
2012/11/26
[ "https://Stackoverflow.com/questions/13560445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1126991/" ]
Use an `ArrayList` instead of an `Array` to add the values retrieved from the JSON object, and then set the ArrayList as the data source for the ListView. Change your code to:

```
ArrayList<String> array_list_values = new ArrayList<String>();
try {
     if (jObj.getInt("success") == 1) {
        array_list_values.add(jObj.getString("0"));
        array_list_values.add(jObj.getString("1"));
        array_list_values.add(jObj.getString("2"));
        array_list_values.add(jObj.getString("3"));
        array_list_values.add(jObj.getString("4"));
        array_list_values.add(jObj.getString("5"));
      this.setListAdapter(new ArrayAdapter<String>(this,
      android.R.layout.simple_list_item_1, android.R.id.text1,
      array_list_values));
      ListView listView = getListView();
    }
    } catch (JSONException e) {
        Log.e(TAG, e.toString());
    }
```

EDIT: if the number of messages is not always the same (it may range from 1 to a larger number), you can iterate over the JSONObject's keys instead (skipping the `"success"` status flag so it doesn't end up in the list):

```
ArrayList<String> array_list_values = new ArrayList<String>();
try {
     if (jObj.getInt("success") == 1) {
        Iterator iter = jObj.keys();
        while(iter.hasNext()){
            String key = (String)iter.next();
            if (key.equals("success")) continue; // skip the status flag
            String value = jObj.getString(key);
            array_list_values.add(value);
        }
      this.setListAdapter(new ArrayAdapter<String>(this,
      android.R.layout.simple_list_item_1, android.R.id.text1,
      array_list_values));
      ListView listView = getListView();
    }
    } catch (JSONException e) {
        Log.e(TAG, e.toString());
    }
```
```
JSONObject jsonObject = new JSONObject(jsonString);
String value0 = jsonObject.getString("0");
```

or in a for loop

```
String tempArray[] = new String[5];
```

just do

```
for(int i=0;condition;i++){
    tempArray[i] = jsonObject.getString(String.valueOf(i));
}
```

and pass the array to the adapter
535,777
I have a Training Set of respiratory disease sounds, so there are 2 classes:

* 0 for respiratory sounds of healthy patients.
* 1 for breathing sounds of patients with a disease.

The Training Set is heavily unbalanced: there are many more examples of class 1 than of class 0, so my network architecture has problems learning on it. So I decided to try two strategies:

* class\_weight: available in Keras; this weights class 0 more than class 1 in the cost function.
* UnderSampling

**Question:** How do I choose between these two strategies? Is it correct to run model training on the Training Set twice, applying each of the two strategies, and choose the strategy that performs best on the Test Set? I think this is correct because I am not fine-tuning the hyper-parameters. Or is it wrong, and should I use a Validation Set?
2021/07/24
[ "https://stats.stackexchange.com/questions/535777", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/295112/" ]
Use a probability model, e.g., binary logistic regression model. This automatically handles even extreme "class" imbalance (outcome imbalance). The goal of most analyses is not [forced choice classification](https://fharrell.com/post/classification) but is rather the estimation of *tendencies*, i.e., probabilities. Any method that requires you to discard valid data is bogus. Stay away from sampling.
To add a little to @Frank\_Harrell's answer,

The class imbalance problem is often not due to the imbalance per se, but because you have too few examples of the minority class to properly characterise its distribution. Adding more data may resolve the imbalance problem, if that is possible. Unless you have a *very* large dataset, undersampling is likely to make things worse by leaving too few examples to properly characterise the majority class as well. Given a choice between the two, I would choose differential weighting of examples of the two classes.

In most practical applications where you do have to make a forced choice, especially things like medical screening tests, the false positive and false negative misclassification costs are different. You may find that weighting the examples according to their misclassification costs is sufficient to deal with the problem (i.e. minimum-risk classification). Accounting for misclassification costs is a good thing to do anyway, but if you have a probabilistic classifier that does not have an estimation issue in unbalanced settings, then as @Frank-Harrell suggests, it is better to apply them to well-calibrated probabilities from your classifier.

If you are going to weight the patterns, I suggest selecting the weights to exactly balance the positive and negative classes, to avoid any bias in the estimation of the model parameters caused by the imbalance, but then scaling the output probabilities from the model to reflect the operational class frequencies (because otherwise the model is likely to wildly over-predict the minority class in operational use).

I would also investigate the data with something simple, like regularised logistic regression, before looking at deep learning (if you haven't already). There are far fewer pitfalls, and DL may not necessarily work better (and could be a lot worse if you fall into one of the many pitfalls).
I would strongly recommend ***never*** using machine learning models with their [default parameter settings](https://arxiv.org/abs/1703.06777).
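The inverse-frequency weighting discussed above can be sketched in a few lines. This mirrors the "balanced" heuristic used by scikit-learn and produces the kind of dict you could pass to Keras's `class_weight` argument; the helper name is made up for the example:

```python
from collections import Counter

def balanced_class_weights(labels):
    # Weight each class by n_samples / (n_classes * class_count), so that
    # every class contributes equally to the total loss.
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# 90 diseased (class 1) vs 10 healthy (class 0): the minority class
# ends up weighted 9x more heavily than the majority class.
y = [1] * 90 + [0] * 10
print(balanced_class_weights(y))
```

With these weights, the 10 minority examples and the 90 majority examples contribute equally in aggregate, which is the "exactly balance the classes" suggestion from the answer above.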
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I think this is about as good as you're going to get:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = $.map(arrayObject,function(o){
   return '<tr><td>' + o[0] + '</td><td>' + o[1] + '</td></tr>';
}).join('');
$('#tableResults').html(htmlToShow);
```

Incrementally building all the intermediary strings with += is not a good idea. Using `html()` to set one big blob of HTML (as you did) is, however, a good idea.

If you need to add additional rows to an existing table, note that it is legal to add multiple `tbody` elements to the same table. Do this and then set the `html()` of the new tbody *en masse* as above.

**Edit**: Well, I'll modify my answer. If you really, really want to eke every last bit of performance from your script, don't use jQuery. Don't use `map`, don't use `each`. Write your own for loop to map one array to an array of strings, join them, and then set the `innerHTML` directly. But I think you'll find the performance savings in doing this do not merit the extra work, testing, or possible fragility.
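The "write your own for loop" variant from the edit above might look like this (a sketch; `buildRows` is a made-up helper name, and the row format follows the question's `[name, message]` pairs):

```javascript
// Build all the row markup in one pass, join once, and hand the result
// to innerHTML in a single DOM operation.
function buildRows(rows) {
  var parts = [];
  for (var i = 0; i < rows.length; i++) {
    parts.push('<tr><td>' + rows[i][0] + '</td><td>' + rows[i][1] + '</td></tr>');
  }
  return parts.join('');
}

var html = buildRows([['Mike', 'message 1'], ['Fred', 'message 2']]);
// In the browser you would then do:
//   document.getElementById('tableResults').innerHTML = html;
console.log(html);
```

The point is that the DOM is touched exactly once, no matter how many rows there are; all the string work happens in plain JavaScript first.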
You might try: ``` $(htmlToShow).appendTo('#tableResults'); ```
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I think sending row data in JSON is better than making HTML code on the server side. An efficient way to do this on the client side is (array operations are much faster than string concatenation):

```
var rows = $.parseJSON(data);
var html = [];

for (var i=0; i < rows.length; i++){
    html.push('<tr><td>' + rows[i].column + '</td></tr>');
}

$('#tableResults').html(html.join(''));
```

**UPDATE**: As hradac showed in the comment below, on newer browsers with faster JS engines (Chrome, FF) string concatenation is faster than array joins - [proof](http://jsperf.com/array-join-vs-string-connect)
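The join-versus-concatenation question from the UPDATE is easy to probe yourself; both approaches build an identical string, so any difference is purely engine-dependent speed (a rough sketch, not a rigorous benchmark):

```javascript
// Build n rows by repeated string concatenation.
function viaConcat(n) {
  var s = '';
  for (var i = 0; i < n; i++) s += '<tr><td>' + i + '</td></tr>';
  return s;
}

// Build the same n rows by pushing into an array and joining once.
function viaJoin(n) {
  var a = [];
  for (var i = 0; i < n; i++) a.push('<tr><td>' + i + '</td></tr>');
  return a.join('');
}

// Both produce byte-identical output; time them with console.time()
// in your target browsers to see which engine wins.
console.log(viaConcat(1000) === viaJoin(1000)); // true
```

Since the output is identical either way, the choice is purely a performance one, and as the linked jsperf shows, the winner varies by browser.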
The most efficient option would be to generate HTML on the server, then set it directly to the innerHTML.
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I don't believe it's a good idea to bury HTML code on the server side, or even in JavaScript code. It's a generally accepted practice that you should keep your content, presentation, and behavior separate. This is similar to the idea of MVC. There is a plug-in for JQuery called DataTables. It's great for developing dynamic HTML tables without having to code from scratch. This example demonstrates an AJAX call to the server that returns a JSON response representing the table. I believe JSON is a better option for data transport because it keeps your content where it belongs, on the client-side. It also makes your server-side code more reusable, as other client-side applications can then consume that data and render it in whatever format that client-side chooses, be it HTML, XML, JSON, or some other format. <http://www.datatables.net/examples/data_sources/server_side.html> There is also an example of adding a row to an existing table, without reloading the page. In this example, the row is passed into the API as a JSON Array. This example could easily be modified to take a series of arrays, if you were returning more than one row of data in your response. The main benefit is that there are third-party libraries to convert server-side objects to JSON, and DataTables of course will convert that to your HTML, eliminating the need for you to troubleshoot bugs in your HTML-generation strategy. <http://www.datatables.net/examples/api/add_row.html> In the end, your web designers will thank you for keeping the HTML where they can easily access it, on the client side, instead of buried in some server-side function surrounded by code that they may or may not be familiar with.
You may try to display your data as **paginated data**. That way, you only need to show part of all the data at a time, since few people will scroll through 10,000 data rows at once on a website.

For example, you may have a function like this: getPaginatedData(offset, max, condition). With it, you get the data that fits the condition, the maximum number of rows it should fetch, and the offset of the data row you begin with.

For example: instead of getting 10,000 data rows from the table CUSTOMER, if the viewer is on page x, you can get 100 customers out of the 10,000 (sorted by name), beginning at position x, and then construct the HTML on the client side.

I believe that this approach (if it fits your needs) is more efficient than constructing HTML on the server side, which increases traffic and server-side complexity.
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I think sending row data in JSON is better than making HTML code on the server side. An efficient way to do this on the client side is (array operations are much faster than string concatenation):

```
var rows = $.parseJSON(data);
var html = [];

for (var i=0; i < rows.length; i++){
    html.push('<tr><td>' + rows[i].column + '</td></tr>');
}

$('#tableResults').html(html.join(''));
```

**UPDATE**: As hradac showed in the comment below, on newer browsers with faster JS engines (Chrome, FF) string concatenation is faster than array joins - [proof](http://jsperf.com/array-join-vs-string-connect)
I don't believe it's a good idea to bury HTML code on the server side, or even in JavaScript code. It's a generally accepted practice that you should keep your content, presentation, and behavior separate. This is similar to the idea of MVC. There is a plug-in for JQuery called DataTables. It's great for developing dynamic HTML tables without having to code from scratch. This example demonstrates an AJAX call to the server that returns a JSON response representing the table. I believe JSON is a better option for data transport because it keeps your content where it belongs, on the client-side. It also makes your server-side code more reusable, as other client-side applications can then consume that data and render it in whatever format that client-side chooses, be it HTML, XML, JSON, or some other format. <http://www.datatables.net/examples/data_sources/server_side.html> There is also an example of adding a row to an existing table, without reloading the page. In this example, the row is passed into the API as a JSON Array. This example could easily be modified to take a series of arrays, if you were returning more than one row of data in your response. The main benefit is that there are third-party libraries to convert server-side objects to JSON, and DataTables of course will convert that to your HTML, eliminating the need for you to troubleshoot bugs in your HTML-generation strategy. <http://www.datatables.net/examples/api/add_row.html> In the end, your web designers will thank you for keeping the HTML where they can easily access it, on the client side, instead of buried in some server-side function surrounded by code that they may or may not be familiar with.
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I think sending row data in JSON is better than making HTML code on the server side. An efficient way to do this on the client side is (array operations are much faster than string concatenation):

```
var rows = $.parseJSON(data);
var html = [];

for (var i=0; i < rows.length; i++){
    html.push('<tr><td>' + rows[i].column + '</td></tr>');
}

$('#tableResults').html(html.join(''));
```

**UPDATE**: As hradac showed in the comment below, on newer browsers with faster JS engines (Chrome, FF) string concatenation is faster than array joins - [proof](http://jsperf.com/array-join-vs-string-connect)
You might try: ``` $(htmlToShow).appendTo('#tableResults'); ```
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I think sending row data in JSON is better than making HTML code on the server side. An efficient way to do this on the client side is (array operations are much faster than string concatenation):

```
var rows = $.parseJSON(data);
var html = [];

for (var i=0; i < rows.length; i++){
    html.push('<tr><td>' + rows[i].column + '</td></tr>');
}

$('#tableResults').html(html.join(''));
```

**UPDATE**: As hradac showed in the comment below, on newer browsers with faster JS engines (Chrome, FF) string concatenation is faster than array joins - [proof](http://jsperf.com/array-join-vs-string-connect)
You may try to display your data as **paginated data**. That way, you only need to show part of all the data at a time, since few people will scroll through 10,000 data rows at once on a website.

For example, you may have a function like this: getPaginatedData(offset, max, condition). With it, you get the data that fits the condition, the maximum number of rows it should fetch, and the offset of the data row you begin with.

For example: instead of getting 10,000 data rows from the table CUSTOMER, if the viewer is on page x, you can get 100 customers out of the 10,000 (sorted by name), beginning at position x, and then construct the HTML on the client side.

I believe that this approach (if it fits your needs) is more efficient than constructing HTML on the server side, which increases traffic and server-side complexity.
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I don't believe it's a good idea to bury HTML code on the server side, or even in JavaScript code. It's a generally accepted practice that you should keep your content, presentation, and behavior separate. This is similar to the idea of MVC. There is a plug-in for JQuery called DataTables. It's great for developing dynamic HTML tables without having to code from scratch. This example demonstrates an AJAX call to the server that returns a JSON response representing the table. I believe JSON is a better option for data transport because it keeps your content where it belongs, on the client-side. It also makes your server-side code more reusable, as other client-side applications can then consume that data and render it in whatever format that client-side chooses, be it HTML, XML, JSON, or some other format. <http://www.datatables.net/examples/data_sources/server_side.html> There is also an example of adding a row to an existing table, without reloading the page. In this example, the row is passed into the API as a JSON Array. This example could easily be modified to take a series of arrays, if you were returning more than one row of data in your response. The main benefit is that there are third-party libraries to convert server-side objects to JSON, and DataTables of course will convert that to your HTML, eliminating the need for you to troubleshoot bugs in your HTML-generation strategy. <http://www.datatables.net/examples/api/add_row.html> In the end, your web designers will thank you for keeping the HTML where they can easily access it, on the client side, instead of buried in some server-side function surrounded by code that they may or may not be familiar with.
The most efficient option would be to generate HTML on the server, then set it directly to the innerHTML.
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
The most efficient option would be to generate HTML on the server, then set it directly to the innerHTML.
You might try: ``` $(htmlToShow).appendTo('#tableResults'); ```
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I don't believe it's a good idea to bury HTML code on the server side, or even in JavaScript code. It's a generally accepted practice that you should keep your content, presentation, and behavior separate. This is similar to the idea of MVC. There is a plug-in for JQuery called DataTables. It's great for developing dynamic HTML tables without having to code from scratch. This example demonstrates an AJAX call to the server that returns a JSON response representing the table. I believe JSON is a better option for data transport because it keeps your content where it belongs, on the client-side. It also makes your server-side code more reusable, as other client-side applications can then consume that data and render it in whatever format that client-side chooses, be it HTML, XML, JSON, or some other format. <http://www.datatables.net/examples/data_sources/server_side.html> There is also an example of adding a row to an existing table, without reloading the page. In this example, the row is passed into the API as a JSON Array. This example could easily be modified to take a series of arrays, if you were returning more than one row of data in your response. The main benefit is that there are third-party libraries to convert server-side objects to JSON, and DataTables of course will convert that to your HTML, eliminating the need for you to troubleshoot bugs in your HTML-generation strategy. <http://www.datatables.net/examples/api/add_row.html> In the end, your web designers will thank you for keeping the HTML where they can easily access it, on the client side, instead of buried in some server-side function surrounded by code that they may or may not be familiar with.
I think this is about as good as you're going to get:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = $.map(arrayObject,function(o){
   return '<tr><td>' + o[0] + '</td><td>' + o[1] + '</td></tr>';
}).join('');
$('#tableResults').html(htmlToShow);
```

Incrementally building all the intermediary strings with += is not a good idea. Using `html()` to set one big blob of HTML (as you did) is, however, a good idea.

If you need to add additional rows to an existing table, note that it is legal to add multiple `tbody` elements to the same table. Do this and then set the `html()` of the new tbody *en masse* as above.

**Edit**: Well, I'll modify my answer. If you really, really want to eke every last bit of performance from your script, don't use jQuery. Don't use `map`, don't use `each`. Write your own for loop to map one array to an array of strings, join them, and then set the `innerHTML` directly. But I think you'll find the performance savings in doing this do not merit the extra work, testing, or possible fragility.
4,621,935
I was just wondering what was the most efficient way of adding lots (sometimes about 1000) of table rows through jQuery. At the moment I use jQuery ajax to get the data through JSON, then I loop over the JSON object, construct HTML into a variable, and then add that HTML into the table. E.g.:

```
var arrayObject = $.parseJSON(data);
var htmlToShow = '';

$.each(arrayObject, function(index, value){
    htmlToShow += '<tr><td>' + value[0] + '</td><td>' + value[1] + '</td></tr>';
});

$('#tableResults').html(htmlToShow);
```

I was just wondering what is the least intensive method of doing this? Thanks in advance.
2011/01/07
[ "https://Stackoverflow.com/questions/4621935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531746/" ]
I don't believe it's a good idea to bury HTML code on the server side, or even in JavaScript code. It's a generally accepted practice that you should keep your content, presentation, and behavior separate. This is similar to the idea of MVC. There is a plug-in for JQuery called DataTables. It's great for developing dynamic HTML tables without having to code from scratch. This example demonstrates an AJAX call to the server that returns a JSON response representing the table. I believe JSON is a better option for data transport because it keeps your content where it belongs, on the client-side. It also makes your server-side code more reusable, as other client-side applications can then consume that data and render it in whatever format that client-side chooses, be it HTML, XML, JSON, or some other format. <http://www.datatables.net/examples/data_sources/server_side.html> There is also an example of adding a row to an existing table, without reloading the page. In this example, the row is passed into the API as a JSON Array. This example could easily be modified to take a series of arrays, if you were returning more than one row of data in your response. The main benefit is that there are third-party libraries to convert server-side objects to JSON, and DataTables of course will convert that to your HTML, eliminating the need for you to troubleshoot bugs in your HTML-generation strategy. <http://www.datatables.net/examples/api/add_row.html> In the end, your web designers will thank you for keeping the HTML where they can easily access it, on the client side, instead of buried in some server-side function surrounded by code that they may or may not be familiar with.
You might try: ``` $(htmlToShow).appendTo('#tableResults'); ```
61,305
We have a Postgres RDS instance on Amazon Web Services. We have automatic backups enabled, and we take snapshots on a daily basis. We would like to generate a local 'up-to-date' backup of the RDS instance that we can manage ourselves. Running pg\_dump against the instance is not sufficient because we want to be able to restore the database to any point in time. We would prefer to have a local backup of RDS and all WAL files since that backup was taken. Questions: 1. Is it possible to access the WAL files and backups that RDS is automatically generating in its backup routine? This would be ideal. I would want to download a local copy of them. After initial investigation, I feel the answer to this question is 'no'. It sounds like RDS is storing its WAL files and backups in S3, but it makes them inaccessible to us. I would love confirmation. 2. Is there any other way to access transactions (WAL files) that have occurred on the RDS instance? I imagine we should be able to create a Postgres database on an EC2 and 'feed' transactions from our primary 'live' RDS instance into this EC2 instance. Once our EC2 instance is updated, we could pull WAL files from there. What a headache, though :/ Is this setup possible? What is the magic to 'feed' from our RDS instance to the EC2 instance so that it is always up to date? Thanks!
2014/03/19
[ "https://dba.stackexchange.com/questions/61305", "https://dba.stackexchange.com", "https://dba.stackexchange.com/users/35679/" ]
Update: [I've posted about this to the AWS forums - please go chime in and ask for it there](https://forums.aws.amazon.com/message.jspa?messageID=547192#547192). --- At time of writing, Amazon RDS does not support physical replication outside RDS. You can `GRANT` users the `REPLICATION` right using an `rds_superuser` login, but you can't configure `replication` entries for outside IPs in `pg_hba.conf`. Furthermore, when you create a DB parameter group in RDS, some key parameters are shown but locked, e.g. `archive_command`, which is locked to `/etc/rds/dbbin/pgscripts/rds_wal_archive %p`. AWS RDS for PostgreSQL does not appear to expose these WALs for external access (say, via S3) as it would need to if you were to use WAL-shipping replication for external PITR. So at this point, if you want wal-shipping, don't use RDS. It's a canned easy-to-use database, but easy-to-use often means that it's also limited, and that's certainly the case here. As Joe Love points out in the comments, it provides WAL shipping and PITR *within RDS*, but you can't get access to the WAL to that from *outside* RDS. So you need to use RDS's own backup facilities - dumps, snapshots and its own WAL-based PITR. --- Even if RDS did let you make replication connections (for `pg_basebackup` or streaming replication) and allowed you to access archived WAL, you might not be able to actually consume that WAL. RDS runs a patched PostgreSQL, though nobody knows how heavily patched or whether it significantly alters the on-disk format. It also runs on the architecture selected by Amazon, which is probably x64 Linux, but not easily determined. Since PostgreSQL's on disk format and replication are architecture dependent, you could only replicate to hosts with the same architecture as that used by Amazon RDS, and only if your PostgreSQL build was compatible with theirs. Among other things this means that you don't have any easy way to migrate away from RDS. 
You'd have to stop all writes to the database for long enough to take a `pg_dump`, restore it, and get the new DB running. The usual tricks with replication and failover, with rsync, etc, won't work because you don't have direct access to the DB host. Even if RDS ran an unpatched PostgreSQL Amazon probably wouldn't want to permit you to do WAL streaming into RDS or import into RDS using `pg_basebackup` for security reasons. PostgreSQL treats the data directory as trusted content, and if you've crafted any clever 'LANGUAGE c' functions that hook internal functionality or done anything else tricky you might be able to exploit the server to get greater access than you're supposed to have. So Amazon aren't going to permit inbound WAL anytime soon. They could support outbound WAL sending, but the above issues with format compatibility, freedom to make changes, etc still apply. --- Instead you should use a tool like Londiste or Bucardo.
Replication using trigger-based systems like Londiste and Bucardo into and out of RDS is [now supported as of Nov. 10th 2014](https://forums.aws.amazon.com/message.jspa?messageID=582787#582787), per a reply on that forum thread. Announcement [here](http://aws.amazon.com/about-aws/whats-new/2014/11/10/amazon-rds-postgresql-read-replicas/)
61,305
We have a Postgres RDS instance on Amazon Web Services. We have automatic backups enabled, and we take snapshots on a daily basis. We would like to generate a local 'up-to-date' backup of the RDS instance that we can manage ourselves. Running pg\_dump against the instance is not sufficient because we want to be able to restore the database to any point in time. We would prefer to have a local backup of RDS and all WAL files since that backup was taken. Questions: 1. Is it possible to access the WAL files and backups that RDS is automatically generating in its backup routine? This would be ideal. I would want to download a local copy of them. After initial investigation, I feel the answer to this question is 'no'. It sounds like RDS is storing its WAL files and backups in S3, but it makes them inaccessible to us. I would love confirmation. 2. Is there any other way to access transactions (WAL files) that have occurred on the RDS instance? I imagine we should be able to create a Postgres database on an EC2 and 'feed' transactions from our primary 'live' RDS instance into this EC2 instance. Once our EC2 instance is updated, we could pull WAL files from there. What a headache, though :/ Is this setup possible? What is the magic to 'feed' from our RDS instance to the EC2 instance so that it is always up to date? Thanks!
2014/03/19
[ "https://dba.stackexchange.com/questions/61305", "https://dba.stackexchange.com", "https://dba.stackexchange.com/users/35679/" ]
Update: [I've posted about this to the AWS forums - please go chime in and ask for it there](https://forums.aws.amazon.com/message.jspa?messageID=547192#547192). --- At time of writing, Amazon RDS does not support physical replication outside RDS. You can `GRANT` users the `REPLICATION` right using an `rds_superuser` login, but you can't configure `replication` entries for outside IPs in `pg_hba.conf`. Furthermore, when you create a DB parameter group in RDS, some key parameters are shown but locked, e.g. `archive_command`, which is locked to `/etc/rds/dbbin/pgscripts/rds_wal_archive %p`. AWS RDS for PostgreSQL does not appear to expose these WALs for external access (say, via S3) as it would need to if you were to use WAL-shipping replication for external PITR. So at this point, if you want wal-shipping, don't use RDS. It's a canned easy-to-use database, but easy-to-use often means that it's also limited, and that's certainly the case here. As Joe Love points out in the comments, it provides WAL shipping and PITR *within RDS*, but you can't get access to the WAL to that from *outside* RDS. So you need to use RDS's own backup facilities - dumps, snapshots and its own WAL-based PITR. --- Even if RDS did let you make replication connections (for `pg_basebackup` or streaming replication) and allowed you to access archived WAL, you might not be able to actually consume that WAL. RDS runs a patched PostgreSQL, though nobody knows how heavily patched or whether it significantly alters the on-disk format. It also runs on the architecture selected by Amazon, which is probably x64 Linux, but not easily determined. Since PostgreSQL's on disk format and replication are architecture dependent, you could only replicate to hosts with the same architecture as that used by Amazon RDS, and only if your PostgreSQL build was compatible with theirs. Among other things this means that you don't have any easy way to migrate away from RDS. 
You'd have to stop all writes to the database for long enough to take a `pg_dump`, restore it, and get the new DB running. The usual tricks with replication and failover, with rsync, etc, won't work because you don't have direct access to the DB host. Even if RDS ran an unpatched PostgreSQL Amazon probably wouldn't want to permit you to do WAL streaming into RDS or import into RDS using `pg_basebackup` for security reasons. PostgreSQL treats the data directory as trusted content, and if you've crafted any clever 'LANGUAGE c' functions that hook internal functionality or done anything else tricky you might be able to exploit the server to get greater access than you're supposed to have. So Amazon aren't going to permit inbound WAL anytime soon. They could support outbound WAL sending, but the above issues with format compatibility, freedom to make changes, etc still apply. --- Instead you should use a tool like Londiste or Bucardo.
This is now possible using Logical Replication: <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport.LogicalReplication>
61,305
We have a Postgres RDS instance on Amazon Web Services. We have automatic backups enabled, and we take snapshots on a daily basis. We would like to generate a local 'up-to-date' backup of the RDS instance that we can manage ourselves. Running pg\_dump against the instance is not sufficient because we want to be able to restore the database to any point in time. We would prefer to have a local backup of RDS and all WAL files since that backup was taken. Questions: 1. Is it possible to access the WAL files and backups that RDS is automatically generating in its backup routine? This would be ideal. I would want to download a local copy of them. After initial investigation, I feel the answer to this question is 'no'. It sounds like RDS is storing its WAL files and backups in S3, but it makes them inaccessible to us. I would love confirmation. 2. Is there any other way to access transactions (WAL files) that have occurred on the RDS instance? I imagine we should be able to create a Postgres database on an EC2 and 'feed' transactions from our primary 'live' RDS instance into this EC2 instance. Once our EC2 instance is updated, we could pull WAL files from there. What a headache, though :/ Is this setup possible? What is the magic to 'feed' from our RDS instance to the EC2 instance so that it is always up to date? Thanks!
2014/03/19
[ "https://dba.stackexchange.com/questions/61305", "https://dba.stackexchange.com", "https://dba.stackexchange.com/users/35679/" ]
This is now possible using Logical Replication: <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport.LogicalReplication>
Replication using trigger-based systems like Londiste and Bucardo into and out of RDS is [now supported as of Nov. 10th 2014](https://forums.aws.amazon.com/message.jspa?messageID=582787#582787), per a reply on that forum thread. Announcement [here](http://aws.amazon.com/about-aws/whats-new/2014/11/10/amazon-rds-postgresql-read-replicas/)
25,570,867
I thought that little-endian and big-endian numbers were the same size (in bytes). But python 2.7's struct module says this: ``` In [46]: struct.unpack('>L', datalen[4:8])[0] Out[46]: 35098131 In [47]: struct.unpack('L', datalen[4:8])[0] --------------------------------------------------------------------------- error Traceback (most recent call last) <ipython-input-47-f18e2a303d6c> in <module>() ----> 1 struct.unpack('L', datalen[4:8])[0] error: unpack requires a string argument of length 8 ``` Why is the big endian long 4 bytes but struct expects the little-endian long to be 8 bytes?
2014/08/29
[ "https://Stackoverflow.com/questions/25570867", "https://Stackoverflow.com", "https://Stackoverflow.com/users/939259/" ]
Specifying `'L'` without `>` or `<` is not "little-endian"; it's native endianness **and** native size. The standard size is 4 bytes, but your machine's native size is 8. If you explicitly want standard sized little-endian, use: ``` struct.unpack('<L', datalen[4:8])[0] ```
The default is `@` or *native* order, which is not necessarily little endian. Native order uses native alignment: > > By default, C types are represented in the machine’s native format and byte order, and **properly aligned by skipping pad bytes** if necessary (according to the rules used by the C compiler). > > > (emphasis mine) and > > Native byte order is big-endian or little-endian, depending on the host system. For example, Intel x86 and AMD64 (x86-64) are little-endian; Motorola 68000 and PowerPC G5 are big-endian; ARM and Intel Itanium feature switchable endianness (bi-endian). Use `sys.byteorder` to check the endianness of your system. > > > It is the *alignment* that causes the size to differ, not the endianness. The [C data structure alignment](http://en.wikipedia.org/wiki/Data_structure_alignment#Typical_alignment_of_C_structs_on_x86) is used to improve memory performance; you need to make sure you pick the right type for your data input. The C native alignment for a long is: > > A long long (eight bytes) will be 8-byte aligned. > > > To compare between little and big endianness without native alignment, use `<` and `>` when comparing: ``` struct.unpack('<L', datalen[4:8])[0] ```
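To make the size difference concrete, here is a short sketch (the exact native size of bare `'L'` depends on your platform, so it is printed rather than assumed):

```python
import struct

# With an explicit byte-order prefix, struct uses *standard* sizes:
# 'L' is then always exactly 4 bytes, little- or big-endian.
little = struct.calcsize('<L')   # 4
big = struct.calcsize('>L')      # 4

# Bare 'L' means native byte order, native size, and native alignment;
# on many 64-bit Unix systems a C unsigned long is 8 bytes.
native = struct.calcsize('L')

print(little, big, native)

# Four bytes unpack fine once a prefix forces the standard 4-byte size.
value = struct.unpack('<L', b'\x13\x10\x17\x02')[0]
print(hex(value))
```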
4,483,309
I want to move and call a boost::packaged\_task inside a lambda. However, I can't figure out an elegant solution. e.g. This won't compile. ``` template<typename Func> auto begin_invoke(Func&& func) -> boost::unique_future<decltype(func())> // noexcept { typedef boost::packaged_task<decltype(func())> task_type; auto task = task_type(std::forward<Func>(func)); auto future = task.get_future(); execution_queue_.try_push([=] { try{task();} catch(boost::task_already_started&){} }); return std::move(future); } int _tmain(int argc, _TCHAR* argv[]) { executor ex; ex.begin_invoke([]{std::cout << "Hello world!";}); //error C3848: expression having type 'const boost::packaged_task<R>' would lose some const-volatile qualifiers in order to call 'void boost::packaged_task<R>::operator ()(void)' // with // [ // R=void // ] return 0; } ``` My rather ugly solution: ``` struct task_adaptor_t { // copy-constructor acts as move constructor task_adaptor_t(const task_adaptor_t& other) : task(std::move(other.task)){} task_adaptor_t(task_type&& task) : task(std::move(task)){} void operator()() const { task(); } mutable task_type task; } task_adaptor(std::move(task)); execution_queue_.try_push([=] { try{task_adaptor();} catch(boost::task_already_started&){} }); ``` What is the "proper" way to move a packaged\_task into a lambda which calls it?
2010/12/19
[ "https://Stackoverflow.com/questions/4483309", "https://Stackoverflow.com", "https://Stackoverflow.com/users/346804/" ]
Have you tried `autoScroll: true` ?
Ah, I really forgot. You need to use the following app structure: Viewport -> Panel -> items (your grids). The Panel must have `layout: fit`, if I understand your problem correctly.
1,216,054
We have this function: $f(x)=\frac{2x+3}{x+2}$ and we need to find this: $$\lim \_{x\to \infty \:}\frac{\int \_x^{2x}f(t)\,dt}{x}$$ Now I will show how I solved this: I suppose that $$\int \_x^{2x} f(t) \, dt=\left.\vphantom{\frac11} F(x)\right|^{2x}\_{x}=F(2x)-F(x)$$ and after applying L'Hôpital's rule to our limit we obtain: $$\lim \_{x\to \infty } (f(2x)-f(x))=0$$ The problem is that in my book they say this limit is equal to $2$. Where am I wrong?
2015/04/01
[ "https://math.stackexchange.com/questions/1216054", "https://math.stackexchange.com", "https://math.stackexchange.com/users/227060/" ]
For any $c \in (x,2x)$ we have $$\int\_{x}^{2x} f(t)\,dt= \int\_{x}^{c} f(t)\,dt +\int\_{c}^{2x} f(t)\,dt= \int\_{c}^{2x} f(t)\,dt - \int\_{c}^{x} f(t)\,dt $$ Then using L'Hôpital's rule, the Fundamental Theorem of Calculus and the Chain Rule we have $$\lim\_{x\to \infty} \frac{{\color{red}2}f(2x) - f(x)}{1} = \lim\_{x\to \infty} 2\bigg(\frac{4x + 3}{2x + 2}\bigg) - \frac{2x + 3}{x+2}$$ And you take it from here.
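Carrying the last step through (each rational function tends to the ratio of its leading coefficients): $$\lim\_{x\to \infty} \left(2\cdot\frac{4x + 3}{2x + 2} - \frac{2x + 3}{x+2}\right) = 2\cdot 2 - 2 = 2,$$ which matches the book's answer.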
$\displaystyle \int\_{x}^{2x} f(t)dt = \left(2t-\ln(t+2)\right)|\_{x}^{2x}=2x-\ln(2x+2)+\ln(x+2)\to \displaystyle \lim\_{x\to \infty} \dfrac{\displaystyle \int\_{x}^{2x} f(t)dt}{x} = \displaystyle \lim\_{x\to \infty} \dfrac{2x-\ln(2x+2)+\ln(x+2)}{x}= 2 - \displaystyle \lim\_{x\to \infty} \dfrac{\ln(2x+2)}{x}+ \displaystyle \lim\_{x\to \infty} \dfrac{\ln(x+2)}{x} = 2 - 0 + 0 = 2$
1,216,054
We have this function: $f(x)=\frac{2x+3}{x+2}$ and we need to find this: $$\lim \_{x\to \infty \:}\frac{\int \_x^{2x}f(t)\,dt}{x}$$ Now I will show how I solved this: I suppose that $$\int \_x^{2x} f(t) \, dt=\left.\vphantom{\frac11} F(x)\right|^{2x}\_{x}=F(2x)-F(x)$$ and after applying L'Hôpital's rule to our limit we obtain: $$\lim \_{x\to \infty } (f(2x)-f(x))=0$$ The problem is that in my book they say this limit is equal to $2$. Where am I wrong?
2015/04/01
[ "https://math.stackexchange.com/questions/1216054", "https://math.stackexchange.com", "https://math.stackexchange.com/users/227060/" ]
$\displaystyle \int\_{x}^{2x} f(t)dt = \left(2t-\ln(t+2)\right)|\_{x}^{2x}=2x-\ln(2x+2)+\ln(x+2)\to \displaystyle \lim\_{x\to \infty} \dfrac{\displaystyle \int\_{x}^{2x} f(t)dt}{x} = \displaystyle \lim\_{x\to \infty} \dfrac{2x-\ln(2x+2)+\ln(x+2)}{x}= 2 - \displaystyle \lim\_{x\to \infty} \dfrac{\ln(2x+2)}{x}+ \displaystyle \lim\_{x\to \infty} \dfrac{\ln(x+2)}{x} = 2 - 0 + 0 = 2$
Generalization (this is what's really going on): If $f$ is integrable on every bounded subinterval of some $[a,\infty),$ and $\lim\_{x\to \infty} f(x)=L,$ then $\lim\_{x\to \infty}\frac{1}{x}\int\_x^{2x}f(t)\,dt = L.$
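A short justification of this generalization: given $\varepsilon>0$, choose $M$ so that $|f(t)-L|<\varepsilon$ for all $t\ge M$; then for $x\ge M$, $$\left|\frac{1}{x}\int\_x^{2x}f(t)\,dt-L\right|=\left|\frac{1}{x}\int\_x^{2x}\bigl(f(t)-L\bigr)\,dt\right|\le\frac{1}{x}\int\_x^{2x}\varepsilon\,dt=\varepsilon.$$ Applying it with $L=2$ recovers the answer $2$.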
1,216,054
We have this function: $f(x)=\frac{2x+3}{x+2}$ and we need to find this: $$\lim \_{x\to \infty \:}\frac{\int \_x^{2x}f(t)\,dt}{x}$$ Now I will show how I solved this: I suppose that $$\int \_x^{2x} f(t) \, dt=\left.\vphantom{\frac11} F(x)\right|^{2x}\_{x}=F(2x)-F(x)$$ and after applying L'Hôpital's rule to our limit we obtain: $$\lim \_{x\to \infty } (f(2x)-f(x))=0$$ The problem is that in my book they say this limit is equal to $2$. Where am I wrong?
2015/04/01
[ "https://math.stackexchange.com/questions/1216054", "https://math.stackexchange.com", "https://math.stackexchange.com/users/227060/" ]
For any $c \in (x,2x)$ we have $$\int\_{x}^{2x} f(t)\,dt= \int\_{x}^{c} f(t)\,dt +\int\_{c}^{2x} f(t)\,dt= \int\_{c}^{2x} f(t)\,dt - \int\_{c}^{x} f(t)\,dt $$ Then using L'Hôpital's rule, the Fundamental Theorem of Calculus and the Chain Rule we have $$\lim\_{x\to \infty} \frac{{\color{red}2}f(2x) - f(x)}{1} = \lim\_{x\to \infty} 2\bigg(\frac{4x + 3}{2x + 2}\bigg) - \frac{2x + 3}{x+2}$$ And you take it from here.
Mean value theorem in $[x,2x]$. There is a $z\in (x,2x)$ with $F'(z)=\frac{F(2x)-F(x)}{2x-x}$ which means $$\frac{2z+3}{z+2}=\frac{\int \_x^{2x}f\left(t\right)dt\:}{x}$$ But as $x \to +\infty \Rightarrow z\to+\infty$ the limit of $\frac{2z+3}{z+2}$ equals $2$. This is another way to find a solution in many cases where the limit involves this type of functions.
1,216,054
We have this function: $f(x)=\frac{2x+3}{x+2}$ and we need to find this: $$\lim \_{x\to \infty \:}\frac{\int \_x^{2x}f(t)\,dt}{x}$$ Now I will show how I solved this: I suppose that $$\int \_x^{2x} f(t) \, dt=\left.\vphantom{\frac11} F(x)\right|^{2x}\_{x}=F(2x)-F(x)$$ and after applying L'Hôpital's rule to our limit we obtain: $$\lim \_{x\to \infty } (f(2x)-f(x))=0$$ The problem is that in my book they say this limit is equal to $2$. Where am I wrong?
2015/04/01
[ "https://math.stackexchange.com/questions/1216054", "https://math.stackexchange.com", "https://math.stackexchange.com/users/227060/" ]
For any $c \in (x,2x)$ we have $$\int\_{x}^{2x} f(t)\,dt= \int\_{x}^{c} f(t)\,dt +\int\_{c}^{2x} f(t)\,dt= \int\_{c}^{2x} f(t)\,dt - \int\_{c}^{x} f(t)\,dt $$ Then using L'Hôpital's rule, the Fundamental Theorem of Calculus and the Chain Rule we have $$\lim\_{x\to \infty} \frac{{\color{red}2}f(2x) - f(x)}{1} = \lim\_{x\to \infty} 2\bigg(\frac{4x + 3}{2x + 2}\bigg) - \frac{2x + 3}{x+2}$$ And you take it from here.
Generalization (this is what's really going on): If $f$ is integrable on every bounded subinterval of some $[a,\infty),$ and $\lim\_{x\to \infty} f(x)=L,$ then $\lim\_{x\to \infty}\frac{1}{x}\int\_x^{2x}f(t)\,dt = L.$
1,216,054
We have this function: $f(x)=\frac{2x+3}{x+2}$ and we need to find this: $$\lim \_{x\to \infty \:}\frac{\int \_x^{2x}f(t)\,dt}{x}$$ Now I will show how I solved this: I suppose that $$\int \_x^{2x} f(t) \, dt=\left.\vphantom{\frac11} F(x)\right|^{2x}\_{x}=F(2x)-F(x)$$ and after applying L'Hôpital's rule to our limit we obtain: $$\lim \_{x\to \infty } (f(2x)-f(x))=0$$ The problem is that in my book they say this limit is equal to $2$. Where am I wrong?
2015/04/01
[ "https://math.stackexchange.com/questions/1216054", "https://math.stackexchange.com", "https://math.stackexchange.com/users/227060/" ]
Mean value theorem in $[x,2x]$. There is a $z\in (x,2x)$ with $F'(z)=\frac{F(2x)-F(x)}{2x-x}$ which means $$\frac{2z+3}{z+2}=\frac{\int \_x^{2x}f\left(t\right)dt\:}{x}$$ But as $x \to +\infty \Rightarrow z\to+\infty$ the limit of $\frac{2z+3}{z+2}$ equals $2$. This is another way to find a solution in many cases where the limit involves this type of functions.
Generalization (this is what's really going on): If $f$ is integrable on every bounded subinterval of some $[a,\infty),$ and $\lim\_{x\to \infty} f(x)=L,$ then $\lim\_{x\to \infty}\frac{1}{x}\int\_x^{2x}f(t)\,dt = L.$
47,878,888
\*Edit: I got CURL working in VS 2017 on a 64-bit machine following these steps (see below for original problem): First install vcpkg: 1. Clone [vcpkg](https://github.com/Microsoft/vcpkg) using gitbash into `C:\Program Files` 2. In a command prompt navigate to `C:\Program Files\vcpkg` 3. Run in the command prompt: `.\bootstrap-vcpkg.bat` 4. Run in the command prompt: `vcpkg integrate install` Then use vcpkg and the Visual Studio 2017 command prompt to install cURL: 5. Open a `VS 2017 Command prompt` and navigate to the vcpkg folder (where the `vcpkg.exe` is) 6. Run: `vcpkg install curl[*]:x64-windows` (note this can take around a half hour to download and run, don't worry if it looks like it is "stuck" at parts). \*Edit: previously my instructions said to run `vcpkg install curl:x64-windows` but I added on the `[*]` at the behest of @i7clock to enable sftp and scp protocols. 7. After this step, you should check to make sure that curl installed correctly. To do this you should create a new project in VS 2017 and try to include `#include <curl/curl.h>` without adding any additional include directories. If you cannot do this then something went wrong with your install of curl. You should remove curl (and perhaps even the vcpkg folder and do a clean install) until you can include `curl/curl.h`. **\*Important Note**: this will only work if you are using the x64 debugger/compiling in x64! If you cannot include the curl directory, check to make sure your debugger/platform is set to x64. You may need to disable SSL peer verification as well: 8. Place the code `curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, FALSE);` before the request (see below). Note this is only necessary because I could not figure out how to get certificates to work with curl. I have an as-yet unanswered Stack Overflow post regarding this problem [here](https://stackoverflow.com/questions/47963081/getting-curl-certificates-to-work-with-c-and-vcpkg-install). 
Here are some other steps you may need to try to get things running, but I ended up finding them not necessary: 9. Navigate to vcpkg\packages\curl\_x64-windows\lib to find the libcurl.lib file. 10. Include the path to libcurl.lib in Additional Library Directories under Properties -> Linker 11. Include libcurl.lib in Additional Dependencies under Linker -> Input -> Additional Dependencies 12. Place `CURL_STATICLIB` in Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions Here is my now working code: ``` #include "curl/curl.h" void testCurl() { CURL *curl; CURLcode res; curl_global_init(CURL_GLOBAL_ALL); curl = curl_easy_init(); if (curl) { curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, FALSE); curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/"); #ifdef SKIP_PEER_VERIFICATION curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L); #endif #ifdef SKIP_HOSTNAME_VERIFICATION curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L); #endif res = curl_easy_perform(curl); if (res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); curl_easy_cleanup(curl); } curl_global_cleanup(); } int main(){ testCurl(); return 0; } ``` \*Edit: Here is the rest of the explanation of my old problem before it was fixed: I am trying to use cURL to make an API call so I can start getting real-time stock data, but I am running into difficulties getting it to function in VS 2017. I have attempted an [install using vcpkg](https://github.com/Microsoft/vcpkg) using the following steps: According to vcpkg documentation I should be able to now simply `#include <curl/curl.h>`, but it can't find the folder. If I try including the "include" directory from vcpkg\packages\curl\_x86\include and `#include <curl/curl.h>` I can build my project. I can also access some of the classes, but if I try to call curl\_global\_init(CURL\_GLOBAL\_DEFAULT) as in this example I get linker errors. 
[![Linker error](https://i.stack.imgur.com/vZhYZ.png)](https://i.stack.imgur.com/vZhYZ.png) [![curl_global_init error](https://i.stack.imgur.com/2IZIe.png)](https://i.stack.imgur.com/2IZIe.png)
2017/12/19
[ "https://Stackoverflow.com/questions/47878888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7952542/" ]
You've installed the x86 version of curl with vcpkg (That's the x86 in `vcpkg\packages\curl_x86\include`). You need to install the x64 version to match your project: `>vcpkg install curl:x64-windows`
Here in 2021, on Windows 10, using the current Visual Studio, `vcpkg install curl[*]:x64-windows` does not work: I get a BUILD\_FAILED error. `vcpkg install curl` does work for me, and only takes ~30 seconds.
57,742,638
Using the [dotnet cli](https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet) how can you list the available package versions for a specific package?
2019/09/01
[ "https://Stackoverflow.com/questions/57742638", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53158/" ]
***Short answer:*** It's not possible (as far as I know, as of Nov 25, 2019). There are some other options, depending on what you're trying to accomplish. * You can see the [latest version(s) of *installed* packages](https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-list-package#options) you have that are outdated by running `dotnet list package --outdated` (requires .NET Core CLI 2.2+). * You can manually view the available versions for a package at [nuget.org/packages](https://www.nuget.org/packages/). Not a CLI option, but that's *an* option.
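As a workaround outside the `dotnet` CLI, NuGet's V3 "flat container" (Package Content) endpoint returns every published version of a package as JSON. A minimal Python sketch of that approach (fetching the list requires network access; `versions_url` just builds the endpoint URL):

```python
import json
import urllib.request

def versions_url(package_id):
    # The flat-container URL scheme requires the package ID in lowercase.
    return "https://api.nuget.org/v3-flatcontainer/%s/index.json" % package_id.lower()

def list_versions(package_id):
    # Returns the list under the "versions" key of the JSON response.
    with urllib.request.urlopen(versions_url(package_id)) as resp:
        return json.load(resp)["versions"]

print(versions_url("Newtonsoft.Json"))
```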
It's possible with "dotnet-search", a dotnet tool. <https://github.com/billpratt/dotnet-search> * You only need the .NET Core 2.1 runtime * Install via `dotnet tool install --global dotnet-search` * Search via `dotnet search <package-name>`
8,516,700
Let's say I have this code: ``` <div id="content"> </div> ``` and I use a simple ajax request to fill the content with data, which works when I click some button ``` $.get("page?num=" + pageNumber, function(data){ $("div#content").html(data); }); ``` one time the data will be ``` <div id="Data1"> </div> ``` and another time it will be ``` <div id="Data2"> </div> ``` for each id I have different jQuery functions. For example: ``` $('#Data1').click(function(){}); ``` some of the functions are similar and some are different. **My question** - if I click the button I load all the scripts I need. When I click the other button I need to load the scripts again for the new content, **but what happens to the old functions that were relevant to the previous content?** Each time I click I load new scripts without deleting the last ones. **How do I delete/manage my scripts correctly?** thanks, Alon
2011/12/15
[ "https://Stackoverflow.com/questions/8516700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/657801/" ]
I am not sure if I understood your question. Are you expecting something like ``` $("div#content").html(""); // deleting existing data $("div#content").html(new_data); // loading new data ```
Maybe you should just use the [`.live()`](http://api.jquery.com/live/) function. > > Description: Attach an event handler for all elements which match the current selector, now and in the future. > > >
8,516,700
Let's say I have this code: ``` <div id="content"> </div> ``` and I use a simple ajax request to fill the content with data, which works when I click some button ``` $.get("page?num=" + pageNumber, function(data){ $("div#content").html(data); }); ``` one time the data will be ``` <div id="Data1"> </div> ``` and another time it will be ``` <div id="Data2"> </div> ``` for each id I have different jQuery functions. For example: ``` $('#Data1').click(function(){}); ``` some of the functions are similar and some are different. **My question** - if I click the button I load all the scripts I need. When I click the other button I need to load the scripts again for the new content, **but what happens to the old functions that were relevant to the previous content?** Each time I click I load new scripts without deleting the last ones. **How do I delete/manage my scripts correctly?** thanks, Alon
2011/12/15
[ "https://Stackoverflow.com/questions/8516700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/657801/" ]
jQuery binds events to DOM objects (the `div`s in this case) when the binding code is called. That means it looks for an element with the given ID and then binds it. Since Data1 and Data2 do not exist when the script is run, they are not bound. You can try to rebind them each time you change the content. ``` function rebind() { $('#content').click(function(){}); $('#Data1').click(function(){}); $('#Data2').click(function(){}); } ``` Call this whenever you load new content in the div.
Maybe you should just use the [`.live()`](http://api.jquery.com/live/) function. > > Description: Attach an event handler for all elements which match the current selector, now and in the future. > > >
8,516,700
Let's say I have this code: ``` <div id="content"> </div> ``` and I use a simple ajax request to fill the content with data, which works when I click some button ``` $.get("page?num=" + pageNumber, function(data){ $("div#content").html(data); }); ``` one time the data will be ``` <div id="Data1"> </div> ``` and another time it will be ``` <div id="Data2"> </div> ``` for each id I have different jQuery functions. For example: ``` $('#Data1').click(function(){}); ``` some of the functions are similar and some are different. **My question** - if I click the button I load all the scripts I need. When I click the other button I need to load the scripts again for the new content, **but what happens to the old functions that were relevant to the previous content?** Each time I click I load new scripts without deleting the last ones. **How do I delete/manage my scripts correctly?** thanks, Alon
2011/12/15
[ "https://Stackoverflow.com/questions/8516700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/657801/" ]
Gurung is right about the event handler, but as of jQuery 1.7, the .live() method is deprecated. Use .on() to attach event handlers instead. (I could not post this information as a comment, so it had to be a new answer, sorry about that).
Maybe you should just use the [`.live()`](http://api.jquery.com/live/) function. > > Description: Attach an event handler for all elements which match the current selector, now and in the future. > > >
8,516,700
Let's say I have this code: ``` <div id="content"> </div> ``` and I use a simple ajax request to fill the content with data, which works when I click some button ``` $.get("page?num=" + pageNumber, function(data){ $("div#content").html(data); }); ``` one time the data will be ``` <div id="Data1"> </div> ``` and another time it will be ``` <div id="Data2"> </div> ``` for each id I have different jQuery functions. For example: ``` $('#Data1').click(function(){}); ``` some of the functions are similar and some are different. **My question** - if I click the button I load all the scripts I need. When I click the other button I need to load the scripts again for the new content, **but what happens to the old functions that were relevant to the previous content?** Each time I click I load new scripts without deleting the last ones. **How do I delete/manage my scripts correctly?** thanks, Alon
2011/12/15
[ "https://Stackoverflow.com/questions/8516700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/657801/" ]
I am not sure if I understood your question. Are you expecting something like

```
$("div#content").html("");       // deleting existing data
$("div#content").html(new_data); // loading new data
```
jQuery binds events to DOM objects (`divs`, in this case) at the moment the binding code runs. It looks for an element with the given ID and then binds to it. Since `Data1` and `Data2` do not exist when the script first runs, they are not bound. You can try binding them again each time the content changes:

```
function rebind() {
    $('#content').click(function(){});
    $('#Data1').click(function(){});
    $('#Data2').click(function(){});
}
```

Call this whenever you load new content into the div.
7,650,666
I'm having the error at the line `ins.ExecuteNonQuery().ToString();`:

**OleDbException was unhandled: Syntax error in INSERT INTO statement.**

How do I fix this?

```
string strOleDbConnectionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\Project.mdb";
OleDbConnection objConnection = new OleDbConnection(strOleDbConnectionString);

string newTagID = textBox1.Text;
string newUser = textBox2.Text;
string newAge = textBox3.Text;
string newPhoneNumber = textBox4.Text;

string insertString = "INSERT INTO jiahe ([Tag ID], User, Age, [Phone Number]) VALUES ('" + newTagID + "', '" + newUser + "', '" + newAge + "', '" + newPhoneNumber + "')";

OleDbCommand ins = new OleDbCommand(insertString, objConnection);
ins.Connection.Open();
ins.ExecuteNonQuery().ToString();
ins.Connection.Close();
```
2011/10/04
[ "https://Stackoverflow.com/questions/7650666", "https://Stackoverflow.com", "https://Stackoverflow.com/users/977120/" ]
Your problem is probably one of these three:

1. An outright syntax error, not clearly visible in the hideous [unparametrized](http://msdn.microsoft.com/en-us/library/system.data.oledb.oledbparameter.aspx) SQL statement :p
2. `newUser` or some other field has a `'` somewhere and is screwing up the syntax.
3. You are trying to insert a numeric value (`Age`?) as a string.

You can easily rule out the first two by setting a breakpoint just after the `insertString` statement is constructed and checking what the string really contains. The third one is even easier to check; just review the data types of the table's fields in your database.
Notwithstanding, you should change your command to use parameters rather than building the query string with string concatenation (which is susceptible to SQL injection attacks). The issue is most likely that [Tag ID], User, Age and [Phone Number] are not all strings; in your SQL, non-string column data should not be wrapped in quotes ('). Note also that `User` is a reserved word in Access, so it is safest to bracket it as `[User]`.

To fix the immediate problem (assuming [Tag ID] is an integer):

```
string insertString = "INSERT INTO jiahe ([Tag ID], [User], Age, [Phone Number]) VALUES (" + newTagID + ", '" + newUser + "', '" + newAge + "', '" + newPhoneNumber + "')";
```

However, you should structure your code to avoid SQL injection, have cleaner code, and also not worry about the quotes:

```
string insertString = "INSERT INTO jiahe ([Tag ID], [User], Age, [Phone Number]) VALUES (@TagID, @User, @Age, @PhoneNumber)";

OleDbCommand ins = new OleDbCommand(insertString, objConnection);
ins.Parameters.Add(new OleDbParameter("@TagID", newTagID));
ins.Parameters.Add(new OleDbParameter("@User", newUser));
ins.Parameters.Add(new OleDbParameter("@Age", newAge));
ins.Parameters.Add(new OleDbParameter("@PhoneNumber", newPhoneNumber));

ins.Connection.Open();
ins.ExecuteNonQuery();
ins.Connection.Close();
```
8,708,048
This question has been asked [before](https://stackoverflow.com/questions/257717/position-of-the-sun-given-time-of-day-and-lat-long/258106#comment10668948_258106) a little over three years ago. There was an answer given, however I've found a glitch in the solution.

The code below is in R. I've ported it to another language, but have tested the original code directly in R to ensure the issue wasn't with my porting.

```
sunPosition <- function(year, month, day, hour=12, min=0, sec=0,
                        lat=46.5, long=6.5) {

  twopi <- 2 * pi
  deg2rad <- pi / 180

  # Get day of the year, e.g. Feb 1 = 32, Mar 1 = 61 on leap years
  month.days <- c(0,31,28,31,30,31,30,31,31,30,31,30)
  day <- day + cumsum(month.days)[month]
  leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60
  day[leapdays] <- day[leapdays] + 1

  # Get Julian date - 2400000
  hour <- hour + min / 60 + sec / 3600 # hour plus fraction
  delta <- year - 1949
  leap <- trunc(delta / 4) # former leapyears
  jd <- 32916.5 + delta * 365 + leap + day + hour / 24

  # The input to the Atronomer's almanach is the difference between
  # the Julian date and JD 2451545.0 (noon, 1 January 2000)
  time <- jd - 51545.

  # Ecliptic coordinates

  # Mean longitude
  mnlong <- 280.460 + .9856474 * time
  mnlong <- mnlong %% 360
  mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360

  # Mean anomaly
  mnanom <- 357.528 + .9856003 * time
  mnanom <- mnanom %% 360
  mnanom[mnanom < 0] <- mnanom[mnanom < 0] + 360
  mnanom <- mnanom * deg2rad

  # Ecliptic longitude and obliquity of ecliptic
  eclong <- mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom)
  eclong <- eclong %% 360
  eclong[eclong < 0] <- eclong[eclong < 0] + 360
  oblqec <- 23.429 - 0.0000004 * time
  eclong <- eclong * deg2rad
  oblqec <- oblqec * deg2rad

  # Celestial coordinates
  # Right ascension and declination
  num <- cos(oblqec) * sin(eclong)
  den <- cos(eclong)
  ra <- atan(num / den)
  ra[den < 0] <- ra[den < 0] + pi
  ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + twopi
  dec <- asin(sin(oblqec) * sin(eclong))

  # Local coordinates
  # Greenwich mean sidereal time
  gmst <- 6.697375 + .0657098242 * time + hour
  gmst <- gmst %% 24
  gmst[gmst < 0] <- gmst[gmst < 0] + 24.

  # Local mean sidereal time
  lmst <- gmst + long / 15.
  lmst <- lmst %% 24.
  lmst[lmst < 0] <- lmst[lmst < 0] + 24.
  lmst <- lmst * 15. * deg2rad

  # Hour angle
  ha <- lmst - ra
  ha[ha < -pi] <- ha[ha < -pi] + twopi
  ha[ha > pi] <- ha[ha > pi] - twopi

  # Latitude to radians
  lat <- lat * deg2rad

  # Azimuth and elevation
  el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
  az <- asin(-cos(dec) * sin(ha) / cos(el))
  elc <- asin(sin(dec) / sin(lat))
  az[el >= elc] <- pi - az[el >= elc]
  az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi

  el <- el / deg2rad
  az <- az / deg2rad
  lat <- lat / deg2rad

  return(list(elevation=el, azimuth=az))
}
```

The problem I'm hitting is that the azimuth it returns seems wrong. For example, if I run the function on the (southern) summer solstice at 12:00 for locations 0ºE and 41ºS, 3ºS, 3ºN and 41ºN:

```
> sunPosition(2012,12,22,12,0,0,-41,0)
$elevation
[1] 72.42113

$azimuth
[1] 180.9211

> sunPosition(2012,12,22,12,0,0,-3,0)
$elevation
[1] 69.57493

$azimuth
[1] -0.79713

Warning message:
In asin(sin(dec)/sin(lat)) : NaNs produced

> sunPosition(2012,12,22,12,0,0,3,0)
$elevation
[1] 63.57538

$azimuth
[1] -0.6250971

Warning message:
In asin(sin(dec)/sin(lat)) : NaNs produced

> sunPosition(2012,12,22,12,0,0,41,0)
$elevation
[1] 25.57642

$azimuth
[1] 180.3084
```

These numbers just don't seem right. The elevation I'm happy with - the first two should be roughly the same, the third a touch lower, and the fourth much lower. However, the first azimuth should be roughly due North, whereas the number it gives is the complete opposite. The remaining three should point roughly due South, however only the last one does. The two in the middle point just off North, again 180º out. As you can see, there are also a couple of errors triggered at the low latitudes (close to the equator).

I believe the fault is in this section, with the error being triggered at the third line (starting with `elc`):

```
# Azimuth and elevation
el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
az <- asin(-cos(dec) * sin(ha) / cos(el))
elc <- asin(sin(dec) / sin(lat))
az[el >= elc] <- pi - az[el >= elc]
az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi
```

I googled around and found a similar chunk of code in C; converted to R, the line it uses to calculate the azimuth would be something like

```
az <- atan(sin(ha) / (cos(ha) * sin(lat) - tan(dec) * cos(lat)))
```

The output here seems to be heading in the right direction, but I just can't get it to give me the right answer all the time when it's converted back to degrees. A correction of the code (I suspect it's just the few lines above) to make it calculate the correct azimuth would be fantastic.
2012/01/03
[ "https://Stackoverflow.com/questions/8708048", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103964/" ]
This seems like an important topic, so I've posted a longer than typical answer: if this algorithm is to be used by others in the future, I think it's important that it be accompanied by references to the literature from which it has been derived.

The short answer
================

As you've noted, your posted code does not work properly for locations near the equator, or in the southern hemisphere. To fix it, simply replace these lines in your original code:

```
elc <- asin(sin(dec) / sin(lat))
az[el >= elc] <- pi - az[el >= elc]
az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi
```

with these:

```
cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat))
sinAzNeg <- (sin(az) < 0)
az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + twopi
az[!cosAzPos] <- pi - az[!cosAzPos]
```

It should now work for any location on the globe.

Discussion
==========

The code in your example is adapted almost verbatim from a 1988 article by J.J. Michalsky (Solar Energy. 40:227-235). That article in turn refined an algorithm presented in a 1978 article by R. Walraven (Solar Energy. 20:393-397). Walraven reported that the method had been used successfully for several years to precisely position a polarizing radiometer in Davis, CA (38° 33' 14" N, 121° 44' 17" W).

**Both Michalsky's and Walraven's code contains important/fatal errors.** In particular, while Michalsky's algorithm works just fine in most of the United States, it fails (as you've found) for areas near the equator, or in the southern hemisphere. In 1989, J.W. Spencer of Victoria, Australia, noted the same thing (Solar Energy. 42(4):353):

> Dear Sir:
>
> Michalsky's method for assigning the calculated azimuth to the correct quadrant, derived from Walraven, does not give correct values when applied for Southern (negative) latitudes. Further the calculation of the critical elevation (elc) will fail for a latitude of zero because of division by zero. Both these objections can be avoided simply by assigning the azimuth to the correct quadrant by considering the sign of cos(azimuth).

My edits to your code are based on the corrections suggested by Spencer in that published Comment. I have simply altered them somewhat to ensure that the R function `sunPosition()` remains 'vectorized' (i.e. working properly on vectors of point locations, rather than needing to be passed one point at a time).

Accuracy of the function `sunPosition()`
========================================

To test that `sunPosition()` works correctly, I've compared its results with those calculated by the National Oceanic and Atmospheric Administration's [Solar Calculator](http://www.esrl.noaa.gov/gmd/grad/solcalc/). In both cases, sun positions were calculated for midday (12:00 PM) on the southern summer solstice (December 22nd), 2012. All results were in agreement to within 0.02 degrees.

```
testPts <- data.frame(lat = c(-41, -3, 3, 41),
                      long = c(0, 0, 0, 0))

# Sun's position as returned by the NOAA Solar Calculator,
NOAA <- data.frame(elevNOAA = c(72.44, 69.57, 63.57, 25.6),
                   azNOAA = c(359.09, 180.79, 180.62, 180.3))

# Sun's position as returned by sunPosition()
sunPos <- sunPosition(year = 2012,
                      month = 12,
                      day = 22,
                      hour = 12,
                      min = 0,
                      sec = 0,
                      lat = testPts$lat,
                      long = testPts$long)

cbind(testPts, NOAA, sunPos)
#   lat long elevNOAA azNOAA elevation  azimuth
# 1 -41    0    72.44 359.09  72.43112 359.0787
# 2  -3    0    69.57 180.79  69.56493 180.7965
# 3   3    0    63.57 180.62  63.56539 180.6247
# 4  41    0    25.60 180.30  25.56642 180.3083
```

Other errors in the code
========================

There are at least two other (quite minor) errors in the posted code. The first causes February 29th and March 1st of leap years to both be tallied as day 61 of the year. The second error derives from a typo in the original article, which was corrected by Michalsky in a 1989 note (Solar Energy. 43(5):323). This code block shows the offending lines, commented out and followed immediately by corrected versions:

```
# leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60
leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) &
            day >= 60 & !(month==2 & day==60)

# oblqec <- 23.429 - 0.0000004 * time
oblqec <- 23.439 - 0.0000004 * time
```

Corrected version of `sunPosition()`
====================================

Here is the corrected code that was verified above:

```
sunPosition <- function(year, month, day, hour=12, min=0, sec=0,
                        lat=46.5, long=6.5) {

  twopi <- 2 * pi
  deg2rad <- pi / 180

  # Get day of the year, e.g. Feb 1 = 32, Mar 1 = 61 on leap years
  month.days <- c(0,31,28,31,30,31,30,31,31,30,31,30)
  day <- day + cumsum(month.days)[month]
  leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) &
              day >= 60 & !(month==2 & day==60)
  day[leapdays] <- day[leapdays] + 1

  # Get Julian date - 2400000
  hour <- hour + min / 60 + sec / 3600 # hour plus fraction
  delta <- year - 1949
  leap <- trunc(delta / 4) # former leapyears
  jd <- 32916.5 + delta * 365 + leap + day + hour / 24

  # The input to the Atronomer's almanach is the difference between
  # the Julian date and JD 2451545.0 (noon, 1 January 2000)
  time <- jd - 51545.

  # Ecliptic coordinates

  # Mean longitude
  mnlong <- 280.460 + .9856474 * time
  mnlong <- mnlong %% 360
  mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360

  # Mean anomaly
  mnanom <- 357.528 + .9856003 * time
  mnanom <- mnanom %% 360
  mnanom[mnanom < 0] <- mnanom[mnanom < 0] + 360
  mnanom <- mnanom * deg2rad

  # Ecliptic longitude and obliquity of ecliptic
  eclong <- mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom)
  eclong <- eclong %% 360
  eclong[eclong < 0] <- eclong[eclong < 0] + 360
  oblqec <- 23.439 - 0.0000004 * time
  eclong <- eclong * deg2rad
  oblqec <- oblqec * deg2rad

  # Celestial coordinates
  # Right ascension and declination
  num <- cos(oblqec) * sin(eclong)
  den <- cos(eclong)
  ra <- atan(num / den)
  ra[den < 0] <- ra[den < 0] + pi
  ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + twopi
  dec <- asin(sin(oblqec) * sin(eclong))

  # Local coordinates
  # Greenwich mean sidereal time
  gmst <- 6.697375 + .0657098242 * time + hour
  gmst <- gmst %% 24
  gmst[gmst < 0] <- gmst[gmst < 0] + 24.

  # Local mean sidereal time
  lmst <- gmst + long / 15.
  lmst <- lmst %% 24.
  lmst[lmst < 0] <- lmst[lmst < 0] + 24.
  lmst <- lmst * 15. * deg2rad

  # Hour angle
  ha <- lmst - ra
  ha[ha < -pi] <- ha[ha < -pi] + twopi
  ha[ha > pi] <- ha[ha > pi] - twopi

  # Latitude to radians
  lat <- lat * deg2rad

  # Azimuth and elevation
  el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
  az <- asin(-cos(dec) * sin(ha) / cos(el))

  # For logic and names, see Spencer, J.W. 1989. Solar Energy. 42(4):353
  cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat))
  sinAzNeg <- (sin(az) < 0)
  az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + twopi
  az[!cosAzPos] <- pi - az[!cosAzPos]

  # if (0 < sin(dec) - sin(el) * sin(lat)) {
  #   if(sin(az) < 0) az <- az + twopi
  # } else {
  #   az <- pi - az
  # }

  el <- el / deg2rad
  az <- az / deg2rad
  lat <- lat / deg2rad

  return(list(elevation=el, azimuth=az))
}
```

References:
===========

Michalsky, J.J. 1988. The Astronomical Almanac's algorithm for approximate solar position (1950-2050). Solar Energy. 40(3):227-235.

Michalsky, J.J. 1989. Errata. Solar Energy. 43(5):323.

Spencer, J.W. 1989. Comments on "The Astronomical Almanac's Algorithm for Approximate Solar Position (1950-2050)". Solar Energy. 42(4):353.

Walraven, R. 1978. Calculating the position of the sun. Solar Energy. 20:393-397.
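To see Spencer's sign test in action outside R, here is a minimal Python sketch of just the elevation/azimuth step (the function name `solar_az_el` is mine, and all input angles are already in radians; this illustrates only the quadrant logic described above, not the full algorithm):

```python
import math

def solar_az_el(lat, dec, ha):
    """Elevation and quadrant-corrected azimuth (degrees), inputs in radians.

    Uses Spencer's correction: the azimuth quadrant is chosen from the
    sign of cos(azimuth), i.e. of sin(dec) - sin(el) * sin(lat).
    """
    el = math.asin(math.sin(dec) * math.sin(lat)
                   + math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.asin(-math.cos(dec) * math.sin(ha) / math.cos(el))
    if math.sin(dec) - math.sin(el) * math.sin(lat) >= 0:  # cos(az) >= 0
        if math.sin(az) < 0:
            az += 2 * math.pi
    else:                                                  # cos(az) < 0
        az = math.pi - az
    return math.degrees(el), math.degrees(az)

# At local solar noon (ha = 0) on the December solstice (dec ~ -23.44 deg),
# the sun should be due south at 41N and due north at 41S:
dec = math.radians(-23.44)
print(solar_az_el(math.radians(41), dec, 0.0))
print(solar_az_el(math.radians(-41), dec, 0.0))
```

Note how the single sign test handles both hemispheres, where the original `elc` comparison fails.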
I needed sun position in a Python project. I adapted Josh O'Brien's algorithm. Thank you Josh. In case it could be useful to anyone, here's my adaptation. Note that my project only needed instant sun position, so time is not a parameter.

```
import math
import time

def sunPosition(lat=46.5, long=6.5):

    # Latitude [rad]
    lat_rad = math.radians(lat)

    # Get Julian date - 2400000
    day = time.gmtime().tm_yday
    hour = time.gmtime().tm_hour + \
           time.gmtime().tm_min/60.0 + \
           time.gmtime().tm_sec/3600.0
    delta = time.gmtime().tm_year - 1949
    leap = delta // 4  # integer division, so this also works on Python 3
    jd = 32916.5 + delta * 365 + leap + day + hour / 24

    # The input to the Atronomer's almanach is the difference between
    # the Julian date and JD 2451545.0 (noon, 1 January 2000)
    t = jd - 51545

    # Ecliptic coordinates

    # Mean longitude
    mnlong_deg = (280.460 + .9856474 * t) % 360

    # Mean anomaly
    mnanom_rad = math.radians((357.528 + .9856003 * t) % 360)

    # Ecliptic longitude and obliquity of ecliptic
    eclong = math.radians((mnlong_deg +
                           1.915 * math.sin(mnanom_rad) +
                           0.020 * math.sin(2 * mnanom_rad)
                           ) % 360)
    oblqec_rad = math.radians(23.439 - 0.0000004 * t)

    # Celestial coordinates
    # Right ascension and declination
    num = math.cos(oblqec_rad) * math.sin(eclong)
    den = math.cos(eclong)
    ra_rad = math.atan(num / den)
    if den < 0:
        ra_rad = ra_rad + math.pi
    elif num < 0:
        ra_rad = ra_rad + 2 * math.pi
    dec_rad = math.asin(math.sin(oblqec_rad) * math.sin(eclong))

    # Local coordinates
    # Greenwich mean sidereal time
    gmst = (6.697375 + .0657098242 * t + hour) % 24

    # Local mean sidereal time
    lmst = (gmst + long / 15) % 24
    lmst_rad = math.radians(15 * lmst)

    # Hour angle (rad)
    ha_rad = (lmst_rad - ra_rad) % (2 * math.pi)

    # Elevation
    el_rad = math.asin(
        math.sin(dec_rad) * math.sin(lat_rad) +
        math.cos(dec_rad) * math.cos(lat_rad) * math.cos(ha_rad))

    # Azimuth
    az_rad = math.asin(
        - math.cos(dec_rad) * math.sin(ha_rad) / math.cos(el_rad))

    if (math.sin(dec_rad) - math.sin(el_rad) * math.sin(lat_rad) < 0):
        az_rad = math.pi - az_rad
    elif (math.sin(az_rad) < 0):
        az_rad += 2 * math.pi

    return el_rad, az_rad
```
8,708,048
2012/01/03
[ "https://Stackoverflow.com/questions/8708048", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103964/" ]
Here's a rewrite that's more idiomatic R, and easier to debug and maintain. It is essentially Josh's answer, but with the azimuth calculated using both Josh's and Charlie's algorithms for comparison. I've also included the simplifications to the date code from my other answer. The basic principle was to split the code up into lots of smaller functions that you can more easily write unit tests for.

```
astronomersAlmanacTime <- function(x) {
  # Astronomer's almanach time is the number of
  # days since (noon, 1 January 2000)
  origin <- as.POSIXct("2000-01-01 12:00:00")
  as.numeric(difftime(x, origin, units = "days"))
}

hourOfDay <- function(x) {
  x <- as.POSIXlt(x)
  with(x, hour + min / 60 + sec / 3600)
}

degreesToRadians <- function(degrees) {
  degrees * pi / 180
}

radiansToDegrees <- function(radians) {
  radians * 180 / pi
}

meanLongitudeDegrees <- function(time) {
  (280.460 + 0.9856474 * time) %% 360
}

meanAnomalyRadians <- function(time) {
  degreesToRadians((357.528 + 0.9856003 * time) %% 360)
}

eclipticLongitudeRadians <- function(mnlong, mnanom) {
  degreesToRadians(
    (mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom)) %% 360
  )
}

eclipticObliquityRadians <- function(time) {
  degreesToRadians(23.439 - 0.0000004 * time)
}

rightAscensionRadians <- function(oblqec, eclong) {
  num <- cos(oblqec) * sin(eclong)
  den <- cos(eclong)
  ra <- atan(num / den)
  ra[den < 0] <- ra[den < 0] + pi
  ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + 2 * pi
  ra
}

rightDeclinationRadians <- function(oblqec, eclong) {
  asin(sin(oblqec) * sin(eclong))
}

greenwichMeanSiderealTimeHours <- function(time, hour) {
  (6.697375 + 0.0657098242 * time + hour) %% 24
}

localMeanSiderealTimeRadians <- function(gmst, long) {
  degreesToRadians(15 * ((gmst + long / 15) %% 24))
}

hourAngleRadians <- function(lmst, ra) {
  ((lmst - ra + pi) %% (2 * pi)) - pi
}

elevationRadians <- function(lat, dec, ha) {
  asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
}

solarAzimuthRadiansJosh <- function(lat, dec, ha, el) {
  az <- asin(-cos(dec) * sin(ha) / cos(el))
  cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat))
  sinAzNeg <- (sin(az) < 0)
  az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + 2 * pi
  az[!cosAzPos] <- pi - az[!cosAzPos]
  az
}

solarAzimuthRadiansCharlie <- function(lat, dec, ha) {
  zenithAngle <- acos(sin(lat) * sin(dec) + cos(lat) * cos(dec) * cos(ha))
  az <- acos((sin(lat) * cos(zenithAngle) - sin(dec)) / (cos(lat) * sin(zenithAngle)))
  ifelse(ha > 0, az + pi, 3 * pi - az) %% (2 * pi)
}

sunPosition <- function(when = Sys.time(), format, lat = 46.5, long = 6.5) {
  if(is.character(when)) when <- strptime(when, format)
  when <- lubridate::with_tz(when, "UTC")
  time <- astronomersAlmanacTime(when)
  hour <- hourOfDay(when)

  # Ecliptic coordinates
  mnlong <- meanLongitudeDegrees(time)
  mnanom <- meanAnomalyRadians(time)
  eclong <- eclipticLongitudeRadians(mnlong, mnanom)
  oblqec <- eclipticObliquityRadians(time)

  # Celestial coordinates
  ra <- rightAscensionRadians(oblqec, eclong)
  dec <- rightDeclinationRadians(oblqec, eclong)

  # Local coordinates
  gmst <- greenwichMeanSiderealTimeHours(time, hour)
  lmst <- localMeanSiderealTimeRadians(gmst, long)

  # Hour angle
  ha <- hourAngleRadians(lmst, ra)

  # Latitude to radians
  lat <- degreesToRadians(lat)

  # Azimuth and elevation
  el <- elevationRadians(lat, dec, ha)
  azJ <- solarAzimuthRadiansJosh(lat, dec, ha, el)
  azC <- solarAzimuthRadiansCharlie(lat, dec, ha)

  data.frame(
    elevation = radiansToDegrees(el),
    azimuthJ = radiansToDegrees(azJ),
    azimuthC = radiansToDegrees(azC)
  )
}
```
I encountered a slight problem with a data point and Richie Cotton's functions above (in the implementation of Charlie's code):

```
longitude = 176.0433687000000020361767383292317390441894531250
latitude = -39.173830619999996827118593500927090644836425781250
event_time = as.POSIXct("2013-10-24 12:00:00", format="%Y-%m-%d %H:%M:%S", tz = "UTC")

sunPosition(when = event_time, lat = latitude, long = longitude)
  elevation azimuthJ azimuthC
1 -38.92275      180      NaN
Warning message:
In acos((sin(lat) * cos(zenithAngle) - sin(dec))/(cos(lat) * sin(zenithAngle))) :
  NaNs produced
```

The cause is floating point excitement around an angle of 180 in the `solarAzimuthRadiansCharlie` function: `(sin(lat) * cos(zenithAngle) - sin(dec)) / (cos(lat) * sin(zenithAngle))` comes out the tiniest amount over 1 (1.0000000000000004440892098), which generates a NaN because the input to `acos` should not be above 1 or below -1. I suspect there might be similar edge cases for Josh's calculation, where floating point rounding effects push the input of the `asin` step outside -1:1, but I have not hit them in my particular dataset.

In the half-dozen or so cases where I have hit this, the issue occurs at the "true" transit (middle of the day or night), so empirically the true value should be exactly 1 or -1. For that reason, I would be comfortable fixing it by applying a rounding step within `solarAzimuthRadiansJosh` and `solarAzimuthRadiansCharlie`. I'm not sure what the theoretical accuracy of the NOAA algorithm is (the point at which numerical accuracy stops mattering anyway), but rounding to 12 decimal places fixed the data in my data set.
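A rounding step is one fix; an equivalent guard is to clamp the argument back into [-1, 1] when it overshoots by a floating-point-sized amount. Sketched here in Python, since the idea is language-agnostic (`safe_acos` is a name I've made up for illustration):

```python
import math

def safe_acos(x, eps=1e-9):
    """acos() that tolerates tiny floating-point overshoot past +/-1.

    Arguments further outside [-1, 1] than eps are still an error,
    since they indicate a real bug rather than rounding noise.
    """
    if 1.0 < abs(x) <= 1.0 + eps:
        x = math.copysign(1.0, x)
    return math.acos(x)

print(safe_acos(1.0000000000000004))  # clamped: 0.0 instead of a ValueError
```

The same guard translates directly to R with `pmin`/`pmax` inside `solarAzimuthRadiansCharlie`, and an analogous one protects the `asin` in Josh's version.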
8,708,048
This question has been asked [before](https://stackoverflow.com/questions/257717/position-of-the-sun-given-time-of-day-and-lat-long/258106#comment10668948_258106) a little over three years ago. There was an answer given, however I've found a glitch in the solution. Code below is in R. I've ported it to another language, however have tested the original code directly in R to ensure the issue wasn't with my porting. ``` sunPosition <- function(year, month, day, hour=12, min=0, sec=0, lat=46.5, long=6.5) { twopi <- 2 * pi deg2rad <- pi / 180 # Get day of the year, e.g. Feb 1 = 32, Mar 1 = 61 on leap years month.days <- c(0,31,28,31,30,31,30,31,31,30,31,30) day <- day + cumsum(month.days)[month] leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60 day[leapdays] <- day[leapdays] + 1 # Get Julian date - 2400000 hour <- hour + min / 60 + sec / 3600 # hour plus fraction delta <- year - 1949 leap <- trunc(delta / 4) # former leapyears jd <- 32916.5 + delta * 365 + leap + day + hour / 24 # The input to the Atronomer's almanach is the difference between # the Julian date and JD 2451545.0 (noon, 1 January 2000) time <- jd - 51545. 
# Ecliptic coordinates # Mean longitude mnlong <- 280.460 + .9856474 * time mnlong <- mnlong %% 360 mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360 # Mean anomaly mnanom <- 357.528 + .9856003 * time mnanom <- mnanom %% 360 mnanom[mnanom < 0] <- mnanom[mnanom < 0] + 360 mnanom <- mnanom * deg2rad # Ecliptic longitude and obliquity of ecliptic eclong <- mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom) eclong <- eclong %% 360 eclong[eclong < 0] <- eclong[eclong < 0] + 360 oblqec <- 23.429 - 0.0000004 * time eclong <- eclong * deg2rad oblqec <- oblqec * deg2rad # Celestial coordinates # Right ascension and declination num <- cos(oblqec) * sin(eclong) den <- cos(eclong) ra <- atan(num / den) ra[den < 0] <- ra[den < 0] + pi ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + twopi dec <- asin(sin(oblqec) * sin(eclong)) # Local coordinates # Greenwich mean sidereal time gmst <- 6.697375 + .0657098242 * time + hour gmst <- gmst %% 24 gmst[gmst < 0] <- gmst[gmst < 0] + 24. # Local mean sidereal time lmst <- gmst + long / 15. lmst <- lmst %% 24. lmst[lmst < 0] <- lmst[lmst < 0] + 24. lmst <- lmst * 15. * deg2rad # Hour angle ha <- lmst - ra ha[ha < -pi] <- ha[ha < -pi] + twopi ha[ha > pi] <- ha[ha > pi] - twopi # Latitude to radians lat <- lat * deg2rad # Azimuth and elevation el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha)) az <- asin(-cos(dec) * sin(ha) / cos(el)) elc <- asin(sin(dec) / sin(lat)) az[el >= elc] <- pi - az[el >= elc] az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi el <- el / deg2rad az <- az / deg2rad lat <- lat / deg2rad return(list(elevation=el, azimuth=az)) } ``` The problem I'm hitting is that the azimuth it returns seems wrong. 
For example, if I run the function on the (southern) summer solstice at 12:00 for locations 0ºE and 41ºS, 3ºS, 3ºN and 41ºN: ``` > sunPosition(2012,12,22,12,0,0,-41,0) $elevation [1] 72.42113 $azimuth [1] 180.9211 > sunPosition(2012,12,22,12,0,0,-3,0) $elevation [1] 69.57493 $azimuth [1] -0.79713 Warning message: In asin(sin(dec)/sin(lat)) : NaNs produced > sunPosition(2012,12,22,12,0,0,3,0) $elevation [1] 63.57538 $azimuth [1] -0.6250971 Warning message: In asin(sin(dec)/sin(lat)) : NaNs produced > sunPosition(2012,12,22,12,0,0,41,0) $elevation [1] 25.57642 $azimuth [1] 180.3084 ``` These numbers just don't seem right. The elevation I'm happy with - the first two should be roughly the same, the third a touch lower, and the fourth much lower. However the first azimuth should be roughly due North, whereas the number it gives is the complete opposite. The remaining three should point roughly due South, however only the last one does. The two in the middle point just off North, again 180º out. As you can see, there are also a couple of errors triggered at the low latitudes (close to the equator). I believe the fault is in this section, with the error being triggered at the third line (starting with `elc`). ``` # Azimuth and elevation el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha)) az <- asin(-cos(dec) * sin(ha) / cos(el)) elc <- asin(sin(dec) / sin(lat)) az[el >= elc] <- pi - az[el >= elc] az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi ``` I googled around and found a similar chunk of code in C; converted to R, the line it uses to calculate the azimuth would be something like ``` az <- atan(sin(ha) / (cos(ha) * sin(lat) - tan(dec) * cos(lat))) ``` The output here seems to be heading in the right direction, but I just can't get it to give me the right answer all the time when it's converted back to degrees. A correction of the code (I suspect it's just the few lines above) to make it calculate the correct azimuth would be fantastic.
2012/01/03
[ "https://Stackoverflow.com/questions/8708048", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103964/" ]
Here's a rewrite that's more idiomatic R, and easier to debug and maintain. It is essentially Josh's answer, but with azimuth calculated using both Josh and Charlie's algorithms for comparison. I've also included the simplifications to the date code from my other answer. The basic principle was to split the code up into lots of smaller functions that you can more easily write unit tests for. ``` astronomersAlmanacTime <- function(x) { # Astronomer's Almanac time is the number of # days since (noon, 1 January 2000) origin <- as.POSIXct("2000-01-01 12:00:00") as.numeric(difftime(x, origin, units = "days")) } hourOfDay <- function(x) { x <- as.POSIXlt(x) with(x, hour + min / 60 + sec / 3600) } degreesToRadians <- function(degrees) { degrees * pi / 180 } radiansToDegrees <- function(radians) { radians * 180 / pi } meanLongitudeDegrees <- function(time) { (280.460 + 0.9856474 * time) %% 360 } meanAnomalyRadians <- function(time) { degreesToRadians((357.528 + 0.9856003 * time) %% 360) } eclipticLongitudeRadians <- function(mnlong, mnanom) { degreesToRadians( (mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom)) %% 360 ) } eclipticObliquityRadians <- function(time) { degreesToRadians(23.439 - 0.0000004 * time) } rightAscensionRadians <- function(oblqec, eclong) { num <- cos(oblqec) * sin(eclong) den <- cos(eclong) ra <- atan(num / den) ra[den < 0] <- ra[den < 0] + pi ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + 2 * pi ra } rightDeclinationRadians <- function(oblqec, eclong) { asin(sin(oblqec) * sin(eclong)) } greenwichMeanSiderealTimeHours <- function(time, hour) { (6.697375 + 0.0657098242 * time + hour) %% 24 } localMeanSiderealTimeRadians <- function(gmst, long) { degreesToRadians(15 * ((gmst + long / 15) %% 24)) } hourAngleRadians <- function(lmst, ra) { ((lmst - ra + pi) %% (2 * pi)) - pi } elevationRadians <- function(lat, dec, ha) { asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha)) } solarAzimuthRadiansJosh <- function(lat, dec, ha, el) { az <- asin(-cos(dec) * sin(ha) / cos(el)) cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat)) sinAzNeg <- (sin(az) < 0) az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + 2 * pi az[!cosAzPos] <- pi - az[!cosAzPos] az } solarAzimuthRadiansCharlie <- function(lat, dec, ha) { zenithAngle <- acos(sin(lat) * sin(dec) + cos(lat) * cos(dec) * cos(ha)) az <- acos((sin(lat) * cos(zenithAngle) - sin(dec)) / (cos(lat) * sin(zenithAngle))) ifelse(ha > 0, az + pi, 3 * pi - az) %% (2 * pi) } sunPosition <- function(when = Sys.time(), format, lat = 46.5, long = 6.5) { if(is.character(when)) when <- strptime(when, format) when <- lubridate::with_tz(when, "UTC") time <- astronomersAlmanacTime(when) hour <- hourOfDay(when) # Ecliptic coordinates mnlong <- meanLongitudeDegrees(time) mnanom <- meanAnomalyRadians(time) eclong <- eclipticLongitudeRadians(mnlong, mnanom) oblqec <- eclipticObliquityRadians(time) # Celestial coordinates ra <- rightAscensionRadians(oblqec, eclong) dec <- rightDeclinationRadians(oblqec, eclong) # Local coordinates gmst <- greenwichMeanSiderealTimeHours(time, hour) lmst <- localMeanSiderealTimeRadians(gmst, long) # Hour angle ha <- hourAngleRadians(lmst, ra) # Latitude to radians lat <- degreesToRadians(lat) # Azimuth and elevation el <- elevationRadians(lat, dec, ha) azJ <- solarAzimuthRadiansJosh(lat, dec, ha, el) azC <- solarAzimuthRadiansCharlie(lat, dec, ha) data.frame( elevation = radiansToDegrees(el), azimuthJ = radiansToDegrees(azJ), azimuthC = radiansToDegrees(azC) ) } ```
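As a cross-check of the answer above (this sketch is mine, hand-ported to Python, and the function names are my own): the asin-based azimuth with quadrant corrections and the zenith-angle acos formulation should agree for any latitude, declination and hour angle, all in radians.

```python
import math

def azimuth_josh(lat, dec, ha):
    # asin-based azimuth with the quadrant corrections (Spencer's fix)
    el = math.asin(math.sin(dec) * math.sin(lat)
                   + math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.asin(-math.cos(dec) * math.sin(ha) / math.cos(el))
    if math.sin(dec) - math.sin(el) * math.sin(lat) >= 0:  # cos(azimuth) >= 0
        if math.sin(az) < 0:
            az += 2 * math.pi
    else:
        az = math.pi - az
    return az % (2 * math.pi)

def azimuth_charlie(lat, dec, ha):
    # zenith-angle / acos formulation
    zenith = math.acos(math.sin(lat) * math.sin(dec)
                       + math.cos(lat) * math.cos(dec) * math.cos(ha))
    az = math.acos((math.sin(lat) * math.cos(zenith) - math.sin(dec))
                   / (math.cos(lat) * math.sin(zenith)))
    if ha > 0:
        return (az + math.pi) % (2 * math.pi)
    return (3 * math.pi - az) % (2 * math.pi)
```

Agreement across all four quadrants is exactly what the corrections buy: a bare `asin` on its own can only return angles in [-90º, 90º].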
This is a suggested update to Josh's excellent answer. Much of the start of the function is boilerplate code for calculating the number of days since midday on 1st Jan 2000. This is much better dealt with using R's existing date-time functions. I also think that rather than having six different variables to specify the date and time, it's easier (and more consistent with other R functions) to specify an existing date object, or a date string plus a format string. Here are two helper functions: ``` astronomers_almanac_time <- function(x) { origin <- as.POSIXct("2000-01-01 12:00:00") as.numeric(difftime(x, origin, units = "days")) } hour_of_day <- function(x) { x <- as.POSIXlt(x) with(x, hour + min / 60 + sec / 3600) } ``` And the start of the function now simplifies to ``` sunPosition <- function(when = Sys.time(), format, lat=46.5, long=6.5) { twopi <- 2 * pi deg2rad <- pi / 180 if(is.character(when)) when <- strptime(when, format) time <- astronomers_almanac_time(when) hour <- hour_of_day(when) #... ``` --- The other oddity is in the lines like ``` mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360 ``` Since `mnlong` has had `%%` called on its values, they should all be non-negative already, so this line is superfluous.
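For comparison, here is a minimal sketch of the same two helpers in Python (the names and the `datetime`-based approach are mine, not from the answer): the whole "Julian date minus 2,400,000" dance reduces to a plain time difference from the almanac epoch of noon UTC, 1 January 2000.

```python
from datetime import datetime, timezone

def astronomers_almanac_time(when):
    # Fractional days since the almanac epoch: noon UTC, 1 January 2000
    origin = datetime(2000, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
    return (when - origin).total_seconds() / 86400.0

def hour_of_day(when):
    # Hour of the day as a fraction, e.g. 18:30:00 -> 18.5
    return when.hour + when.minute / 60 + when.second / 3600
```

Letting the standard library handle leap years and month lengths removes the two places where the hand-rolled day-of-year arithmetic could (and, as Josh notes, did) go wrong.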
8,708,048
Here's a rewrite in that's more idiomatic to R, and easier to debug and maintain. It is essentially Josh's answer, but with azimuth calculated using both Josh and Charlie's algorithms for comparison. I've also included the simplifications to the date code from my other answer. The basic principle was to split the code up into lots of smaller functions that you can more easily write unit tests for. ``` astronomersAlmanacTime <- function(x) { # Astronomer's almanach time is the number of # days since (noon, 1 January 2000) origin <- as.POSIXct("2000-01-01 12:00:00") as.numeric(difftime(x, origin, units = "days")) } hourOfDay <- function(x) { x <- as.POSIXlt(x) with(x, hour + min / 60 + sec / 3600) } degreesToRadians <- function(degrees) { degrees * pi / 180 } radiansToDegrees <- function(radians) { radians * 180 / pi } meanLongitudeDegrees <- function(time) { (280.460 + 0.9856474 * time) %% 360 } meanAnomalyRadians <- function(time) { degreesToRadians((357.528 + 0.9856003 * time) %% 360) } eclipticLongitudeRadians <- function(mnlong, mnanom) { degreesToRadians( (mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom)) %% 360 ) } eclipticObliquityRadians <- function(time) { degreesToRadians(23.439 - 0.0000004 * time) } rightAscensionRadians <- function(oblqec, eclong) { num <- cos(oblqec) * sin(eclong) den <- cos(eclong) ra <- atan(num / den) ra[den < 0] <- ra[den < 0] + pi ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + 2 * pi ra } rightDeclinationRadians <- function(oblqec, eclong) { asin(sin(oblqec) * sin(eclong)) } greenwichMeanSiderealTimeHours <- function(time, hour) { (6.697375 + 0.0657098242 * time + hour) %% 24 } localMeanSiderealTimeRadians <- function(gmst, long) { degreesToRadians(15 * ((gmst + long / 15) %% 24)) } hourAngleRadians <- function(lmst, ra) { ((lmst - ra + pi) %% (2 * pi)) - pi } elevationRadians <- function(lat, dec, ha) { asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha)) } solarAzimuthRadiansJosh <- function(lat, dec, ha, el) { az 
<- asin(-cos(dec) * sin(ha) / cos(el)) cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat)) sinAzNeg <- (sin(az) < 0) az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + 2 * pi az[!cosAzPos] <- pi - az[!cosAzPos] az } solarAzimuthRadiansCharlie <- function(lat, dec, ha) { zenithAngle <- acos(sin(lat) * sin(dec) + cos(lat) * cos(dec) * cos(ha)) az <- acos((sin(lat) * cos(zenithAngle) - sin(dec)) / (cos(lat) * sin(zenithAngle))) ifelse(ha > 0, az + pi, 3 * pi - az) %% (2 * pi) } sunPosition <- function(when = Sys.time(), format, lat = 46.5, long = 6.5) { if(is.character(when)) when <- strptime(when, format) when <- lubridate::with_tz(when, "UTC") time <- astronomersAlmanacTime(when) hour <- hourOfDay(when) # Ecliptic coordinates mnlong <- meanLongitudeDegrees(time) mnanom <- meanAnomalyRadians(time) eclong <- eclipticLongitudeRadians(mnlong, mnanom) oblqec <- eclipticObliquityRadians(time) # Celestial coordinates ra <- rightAscensionRadians(oblqec, eclong) dec <- rightDeclinationRadians(oblqec, eclong) # Local coordinates gmst <- greenwichMeanSiderealTimeHours(time, hour) lmst <- localMeanSiderealTimeRadians(gmst, long) # Hour angle ha <- hourAngleRadians(lmst, ra) # Latitude to radians lat <- degreesToRadians(lat) # Azimuth and elevation el <- elevationRadians(lat, dec, ha) azJ <- solarAzimuthRadiansJosh(lat, dec, ha, el) azC <- solarAzimuthRadiansCharlie(lat, dec, ha) data.frame( elevation = radiansToDegrees(el), azimuthJ = radiansToDegrees(azJ), azimuthC = radiansToDegrees(azC) ) } ```
I needed sun position in a Python project. I adapted Josh O'Brien's algorithm. Thank you Josh. In case it could be useful to anyone, here's my adaptation. Note that my project only needed the instantaneous sun position, so time is not a parameter. ``` import math import time def sunPosition(lat=46.5, long=6.5): # Latitude [rad] lat_rad = math.radians(lat) # Get Julian date - 2400000 day = time.gmtime().tm_yday hour = time.gmtime().tm_hour + \ time.gmtime().tm_min/60.0 + \ time.gmtime().tm_sec/3600.0 delta = time.gmtime().tm_year - 1949 leap = delta // 4 jd = 32916.5 + delta * 365 + leap + day + hour / 24 # The input to the Astronomer's Almanac is the difference between # the Julian date and JD 2451545.0 (noon, 1 January 2000) t = jd - 51545 # Ecliptic coordinates # Mean longitude mnlong_deg = (280.460 + .9856474 * t) % 360 # Mean anomaly mnanom_rad = math.radians((357.528 + .9856003 * t) % 360) # Ecliptic longitude and obliquity of ecliptic eclong = math.radians((mnlong_deg + 1.915 * math.sin(mnanom_rad) + 0.020 * math.sin(2 * mnanom_rad) ) % 360) oblqec_rad = math.radians(23.439 - 0.0000004 * t) # Celestial coordinates # Right ascension and declination num = math.cos(oblqec_rad) * math.sin(eclong) den = math.cos(eclong) ra_rad = math.atan(num / den) if den < 0: ra_rad = ra_rad + math.pi elif num < 0: ra_rad = ra_rad + 2 * math.pi dec_rad = math.asin(math.sin(oblqec_rad) * math.sin(eclong)) # Local coordinates # Greenwich mean sidereal time gmst = (6.697375 + .0657098242 * t + hour) % 24 # Local mean sidereal time lmst = (gmst + long / 15) % 24 lmst_rad = math.radians(15 * lmst) # Hour angle (rad) ha_rad = (lmst_rad - ra_rad) % (2 * math.pi) # Elevation el_rad = math.asin( math.sin(dec_rad) * math.sin(lat_rad) + \ math.cos(dec_rad) * math.cos(lat_rad) * math.cos(ha_rad)) # Azimuth az_rad = math.asin( - math.cos(dec_rad) * math.sin(ha_rad) / math.cos(el_rad)) if (math.sin(dec_rad) - math.sin(el_rad) * math.sin(lat_rad) < 0): az_rad = math.pi - az_rad elif (math.sin(az_rad) < 0): az_rad += 2 * math.pi return el_rad, az_rad ```
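A side note on the `atan` formula quoted in the original question: a single-argument `atan` throws away quadrant information, which is why that version "heads in the right direction" but fails for some hours and latitudes. Feeding the same numerator and denominator to `atan2`, plus a 180º shift (that formula measures azimuth from south), reproduces the corrected azimuth. The following illustrative cross-check is mine, not from the answer above; all angles are in radians.

```python
import math

def azimuth_corrected(lat, dec, ha):
    # asin-based azimuth with the quadrant corrections applied
    el = math.asin(math.sin(dec) * math.sin(lat)
                   + math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.asin(-math.cos(dec) * math.sin(ha) / math.cos(el))
    if math.sin(dec) - math.sin(el) * math.sin(lat) >= 0:  # cos(azimuth) >= 0
        if math.sin(az) < 0:
            az += 2 * math.pi
    else:
        az = math.pi - az
    return az % (2 * math.pi)

def azimuth_atan2(lat, dec, ha):
    # The question's formula made quadrant-safe with atan2; the +pi
    # converts from a south-based to the usual north-based azimuth
    az = math.atan2(math.sin(ha),
                    math.cos(ha) * math.sin(lat) - math.tan(dec) * math.cos(lat))
    return (az + math.pi) % (2 * math.pi)
```

The `atan2` form needs no case analysis at all, which makes it an attractive alternative to the branchy corrections.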
8,708,048
I needed sun position in a Python project. I adapted Josh O'Brien's algorithm. Thank you Josh. In case it could be useful to anyone, here's my adaptation. Note that my project only needed instant sun position so time is not a parameter. ``` def sunPosition(lat=46.5, long=6.5): # Latitude [rad] lat_rad = math.radians(lat) # Get Julian date - 2400000 day = time.gmtime().tm_yday hour = time.gmtime().tm_hour + \ time.gmtime().tm_min/60.0 + \ time.gmtime().tm_sec/3600.0 delta = time.gmtime().tm_year - 1949 leap = delta / 4 jd = 32916.5 + delta * 365 + leap + day + hour / 24 # The input to the Atronomer's almanach is the difference between # the Julian date and JD 2451545.0 (noon, 1 January 2000) t = jd - 51545 # Ecliptic coordinates # Mean longitude mnlong_deg = (280.460 + .9856474 * t) % 360 # Mean anomaly mnanom_rad = math.radians((357.528 + .9856003 * t) % 360) # Ecliptic longitude and obliquity of ecliptic eclong = math.radians((mnlong_deg + 1.915 * math.sin(mnanom_rad) + 0.020 * math.sin(2 * mnanom_rad) ) % 360) oblqec_rad = math.radians(23.439 - 0.0000004 * t) # Celestial coordinates # Right ascension and declination num = math.cos(oblqec_rad) * math.sin(eclong) den = math.cos(eclong) ra_rad = math.atan(num / den) if den < 0: ra_rad = ra_rad + math.pi elif num < 0: ra_rad = ra_rad + 2 * math.pi dec_rad = math.asin(math.sin(oblqec_rad) * math.sin(eclong)) # Local coordinates # Greenwich mean sidereal time gmst = (6.697375 + .0657098242 * t + hour) % 24 # Local mean sidereal time lmst = (gmst + long / 15) % 24 lmst_rad = math.radians(15 * lmst) # Hour angle (rad) ha_rad = (lmst_rad - ra_rad) % (2 * math.pi) # Elevation el_rad = math.asin( math.sin(dec_rad) * math.sin(lat_rad) + \ math.cos(dec_rad) * math.cos(lat_rad) * math.cos(ha_rad)) # Azimuth az_rad = math.asin( - math.cos(dec_rad) * math.sin(ha_rad) / math.cos(el_rad)) if (math.sin(dec_rad) - math.sin(el_rad) * math.sin(lat_rad) < 0): az_rad = math.pi - az_rad elif (math.sin(az_rad) < 0): az_rad += 2 * math.pi 
return el_rad, az_rad ```
I encountered a slight problem with a data point & Richie Cotton's functions above (in the implementation of Charlie's code): ``` longitude= 176.0433687000000020361767383292317390441894531250 latitude= -39.173830619999996827118593500927090644836425781250 event_time = as.POSIXct("2013-10-24 12:00:00", format="%Y-%m-%d %H:%M:%S", tz = "UTC") sunPosition(when=event_time, lat = latitude, long = longitude) elevation azimuthJ azimuthC 1 -38.92275 180 NaN Warning message: In acos((sin(lat) * cos(zenithAngle) - sin(dec))/(cos(lat) * sin(zenithAngle))) : NaNs produced ``` This happens because in the solarAzimuthRadiansCharlie function there has been floating-point excitement around an angle of 180, such that `(sin(lat) * cos(zenithAngle) - sin(dec)) / (cos(lat) * sin(zenithAngle))` is the tiniest amount over 1 (1.0000000000000004440892098), which generates a NaN, since the input to acos must not be above 1 or below -1. I suspect there might be similar edge cases for Josh's calculation, where floating point rounding effects cause the input for the asin step to be outside -1:1, but I have not hit them in my particular dataset. In the half-dozen or so cases where I have hit this, the issue occurs around the middle of the day or night, where empirically the true input value should be exactly 1 or -1. For that reason, I would be comfortable fixing it by applying a rounding step within `solarAzimuthRadiansJosh` and `solarAzimuthRadiansCharlie`. I'm not sure what the theoretical accuracy of the NOAA algorithm is (the point at which numerical accuracy stops mattering anyway) but rounding to 12 decimal places fixed the data in my data set.
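One way to implement the rounding/clamping fix described above (a sketch of the idea in Python, not code from the answer): nudge inputs that have drifted just outside [-1, 1] back onto the boundary before calling `acos`, while still letting genuinely out-of-range values fail loudly.

```python
import math

def safe_acos(x, tol=1e-9):
    # Clamp values that exceed [-1, 1] only by floating-point noise;
    # anything further out is a real error, so let math.acos raise.
    if 1.0 < abs(x) <= 1.0 + tol:
        x = math.copysign(1.0, x)
    return math.acos(x)
```

The same guard would apply to the `asin` step in Josh's variant, should an equivalent edge case turn up there.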
8,708,048
This seems like an important topic, so I've posted a longer than typical answer: if this algorithm is to be used by others in the future, I think it's important that it be accompanied by references to the literature from which it has been derived. The short answer ================ As you've noted, your posted code does not work properly for locations near the equator, or in the southern hemisphere. To fix it, simply replace these lines in your original code: ``` elc <- asin(sin(dec) / sin(lat)) az[el >= elc] <- pi - az[el >= elc] az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi ``` with these: ``` cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat)) sinAzNeg <- (sin(az) < 0) az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + twopi az[!cosAzPos] <- pi - az[!cosAzPos] ``` It should now work for any location on the globe. Discussion ========== The code in your example is adapted almost verbatim from a 1988 article by J.J. Michalsky (Solar Energy. 40:227-235). That article in turn refined an algorithm presented in a 1978 article by R. Walraven (Solar Energy. 20:393-397). Walraven reported that the method had been used successfully for several years to precisely position a polarizing radiometer in Davis, CA (38° 33' 14" N, 121° 44' 17" W). **Both Michalsky's and Walraven's code contains important/fatal errors.** In particular, while Michalsky's algorithm works just fine in most of the United States, it fails (as you've found) for areas near the equator, or in the southern hemisphere. In 1989, J.W. Spencer of Victoria, Australia, noted the same thing (Solar Energy. 42(4):353): > > Dear Sir: > > > Michalsky's method for assigning the calculated azimuth to the correct quadrant, derived from Walraven, does not give correct values when applied for Southern (negative) latitudes. Further the calculation of the critical elevation (elc) will fail for a latitude of zero because of division by zero. 
Both these objections can be avoided simply by assigning the azimuth to the correct quadrant by considering the sign of cos(azimuth). > > > My edits to your code are based on the corrections suggested by Spencer in that published Comment. I have simply altered them somewhat to ensure that the R function `sunPosition()` remains 'vectorized' (i.e. working properly on vectors of point locations, rather than needing to be passed one point at a time). Accuracy of the function `sunPosition()` ======================================== To test that `sunPosition()` works correctly, I've compared its results with those calculated by the National Oceanic and Atmospheric Administration's [Solar Calculator](http://www.esrl.noaa.gov/gmd/grad/solcalc/). In both cases, sun positions were calculated for midday (12:00 PM) on the southern summer solstice (December 22nd), 2012. All results were in agreement to within 0.02 degrees. ``` testPts <- data.frame(lat = c(-41,-3,3, 41), long = c(0, 0, 0, 0)) # Sun's position as returned by the NOAA Solar Calculator, NOAA <- data.frame(elevNOAA = c(72.44, 69.57, 63.57, 25.6), azNOAA = c(359.09, 180.79, 180.62, 180.3)) # Sun's position as returned by sunPosition() sunPos <- sunPosition(year = 2012, month = 12, day = 22, hour = 12, min = 0, sec = 0, lat = testPts$lat, long = testPts$long) cbind(testPts, NOAA, sunPos) # lat long elevNOAA azNOAA elevation azimuth # 1 -41 0 72.44 359.09 72.43112 359.0787 # 2 -3 0 69.57 180.79 69.56493 180.7965 # 3 3 0 63.57 180.62 63.56539 180.6247 # 4 41 0 25.60 180.30 25.56642 180.3083 ``` Other errors in the code ======================== There are at least two other (quite minor) errors in the posted code. The first causes February 29th and March 1st of leap years to both be tallied as day 61 of the year. The second error derives from a typo in the original article, which was corrected by Michalsky in a 1989 note (Solar Energy. 43(5):323). 
This code block shows the offending lines, commented out and followed immediately by corrected versions: ``` # leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60 leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60 & !(month==2 & day==60) # oblqec <- 23.429 - 0.0000004 * time oblqec <- 23.439 - 0.0000004 * time ``` Corrected version of `sunPosition()` ==================================== Here is the corrected code that was verified above: ``` sunPosition <- function(year, month, day, hour=12, min=0, sec=0, lat=46.5, long=6.5) { twopi <- 2 * pi deg2rad <- pi / 180 # Get day of the year, e.g. Feb 1 = 32, Mar 1 = 61 on leap years month.days <- c(0,31,28,31,30,31,30,31,31,30,31,30) day <- day + cumsum(month.days)[month] leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60 & !(month==2 & day==60) day[leapdays] <- day[leapdays] + 1 # Get Julian date - 2400000 hour <- hour + min / 60 + sec / 3600 # hour plus fraction delta <- year - 1949 leap <- trunc(delta / 4) # former leapyears jd <- 32916.5 + delta * 365 + leap + day + hour / 24 # The input to the Atronomer's almanach is the difference between # the Julian date and JD 2451545.0 (noon, 1 January 2000) time <- jd - 51545. 
# Ecliptic coordinates # Mean longitude mnlong <- 280.460 + .9856474 * time mnlong <- mnlong %% 360 mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360 # Mean anomaly mnanom <- 357.528 + .9856003 * time mnanom <- mnanom %% 360 mnanom[mnanom < 0] <- mnanom[mnanom < 0] + 360 mnanom <- mnanom * deg2rad # Ecliptic longitude and obliquity of ecliptic eclong <- mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom) eclong <- eclong %% 360 eclong[eclong < 0] <- eclong[eclong < 0] + 360 oblqec <- 23.439 - 0.0000004 * time eclong <- eclong * deg2rad oblqec <- oblqec * deg2rad # Celestial coordinates # Right ascension and declination num <- cos(oblqec) * sin(eclong) den <- cos(eclong) ra <- atan(num / den) ra[den < 0] <- ra[den < 0] + pi ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + twopi dec <- asin(sin(oblqec) * sin(eclong)) # Local coordinates # Greenwich mean sidereal time gmst <- 6.697375 + .0657098242 * time + hour gmst <- gmst %% 24 gmst[gmst < 0] <- gmst[gmst < 0] + 24. # Local mean sidereal time lmst <- gmst + long / 15. lmst <- lmst %% 24. lmst[lmst < 0] <- lmst[lmst < 0] + 24. lmst <- lmst * 15. * deg2rad # Hour angle ha <- lmst - ra ha[ha < -pi] <- ha[ha < -pi] + twopi ha[ha > pi] <- ha[ha > pi] - twopi # Latitude to radians lat <- lat * deg2rad # Azimuth and elevation el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha)) az <- asin(-cos(dec) * sin(ha) / cos(el)) # For logic and names, see Spencer, J.W. 1989. Solar Energy. 42(4):353 cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat)) sinAzNeg <- (sin(az) < 0) az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + twopi az[!cosAzPos] <- pi - az[!cosAzPos] # if (0 < sin(dec) - sin(el) * sin(lat)) { # if(sin(az) < 0) az <- az + twopi # } else { # az <- pi - az # } el <- el / deg2rad az <- az / deg2rad lat <- lat / deg2rad return(list(elevation=el, azimuth=az)) } ``` References: =========== Michalsky, J.J. 1988. The Astronomical Almanac's algorithm for approximate solar position (1950-2050). Solar Energy. 
40(3):227-235. Michalsky, J.J. 1989. Errata. Solar Energy. 43(5):323. Spencer, J.W. 1989. Comments on "The Astronomical Almanac's Algorithm for Approximate Solar Position (1950-2050)". Solar Energy. 42(4):353. Walraven, R. 1978. Calculating the position of the sun. Solar Energy. 20:393-397.
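For anyone porting the fix to another language, Spencer's sign-of-cos(azimuth) rule is easy to express in scalar form. Here is a Python sketch of just the final elevation/azimuth step (my own illustration, not code from the papers above; all angles in radians):

```python
import math

def solar_el_az(dec, lat, ha):
    """Elevation and azimuth (radians, azimuth clockwise from north)
    from declination, latitude and hour angle, using Spencer's rule
    to place the azimuth in the correct quadrant."""
    el = math.asin(math.sin(dec) * math.sin(lat)
                   + math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.asin(-math.cos(dec) * math.sin(ha) / math.cos(el))
    # cos(azimuth) has the same sign as sin(dec) - sin(el) * sin(lat)
    if math.sin(dec) - math.sin(el) * math.sin(lat) >= 0:
        if math.sin(az) < 0:
            az += 2 * math.pi   # fourth quadrant
    else:
        az = math.pi - az       # second or third quadrant
    return el, az
```

As a cross-check, the same quadrant falls out in one step from `math.atan2(-math.cos(dec) * math.sin(ha) * math.cos(lat), math.sin(dec) - math.sin(el) * math.sin(lat))`, since that pair is proportional to (sin az, cos az).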
This is a suggested update to Josh's excellent answer. Much of the start of the function is boilerplate code for calculating the number of days since midday on 1st Jan 2000. This is much better dealt with using R's existing date and time functions. I also think that rather than having six different variables to specify the date and time, it's easier (and more consistent with other R functions) to specify an existing date object, or a date string plus a format string. Here are two helper functions ``` astronomers_almanac_time <- function(x) { origin <- as.POSIXct("2000-01-01 12:00:00") as.numeric(difftime(x, origin, units = "days")) } hour_of_day <- function(x) { x <- as.POSIXlt(x) with(x, hour + min / 60 + sec / 3600) } ``` And the start of the function now simplifies to ``` sunPosition <- function(when = Sys.time(), format, lat=46.5, long=6.5) { twopi <- 2 * pi deg2rad <- pi / 180 if(is.character(when)) when <- strptime(when, format) time <- astronomers_almanac_time(when) hour <- hour_of_day(when) #... ``` --- The other oddity is in lines like ``` mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360 ``` Since `mnlong` has had `%%` called on its values, they should all be non-negative already, so this line is superfluous.
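The same two helpers are easy to express outside R as well. A Python sketch of the days-since-J2000 and fractional-hour computations (my own, using only the standard library):

```python
from datetime import datetime, timezone

J2000 = datetime(2000, 1, 1, 12, 0, 0, tzinfo=timezone.utc)

def astronomers_almanac_time(when):
    """Days (possibly fractional) since noon UTC on 1 January 2000."""
    return (when - J2000).total_seconds() / 86400.0

def hour_of_day(when):
    """Fractional hour of the day, e.g. 12:30:00 -> 12.5."""
    return when.hour + when.minute / 60.0 + when.second / 3600.0
```

Using timezone-aware datetimes sidesteps the hand-rolled leap-year bookkeeping entirely, which is where the original day-61 bug crept in.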
8,708,048
This question has been asked [before](https://stackoverflow.com/questions/257717/position-of-the-sun-given-time-of-day-and-lat-long/258106#comment10668948_258106) a little over three years ago. There was an answer given, however I've found a glitch in the solution. Code below is in R. I've ported it to another language, however have tested the original code directly in R to ensure the issue wasn't with my porting. ``` sunPosition <- function(year, month, day, hour=12, min=0, sec=0, lat=46.5, long=6.5) { twopi <- 2 * pi deg2rad <- pi / 180 # Get day of the year, e.g. Feb 1 = 32, Mar 1 = 61 on leap years month.days <- c(0,31,28,31,30,31,30,31,31,30,31,30) day <- day + cumsum(month.days)[month] leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60 day[leapdays] <- day[leapdays] + 1 # Get Julian date - 2400000 hour <- hour + min / 60 + sec / 3600 # hour plus fraction delta <- year - 1949 leap <- trunc(delta / 4) # former leapyears jd <- 32916.5 + delta * 365 + leap + day + hour / 24 # The input to the Atronomer's almanach is the difference between # the Julian date and JD 2451545.0 (noon, 1 January 2000) time <- jd - 51545. 
# Ecliptic coordinates # Mean longitude mnlong <- 280.460 + .9856474 * time mnlong <- mnlong %% 360 mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360 # Mean anomaly mnanom <- 357.528 + .9856003 * time mnanom <- mnanom %% 360 mnanom[mnanom < 0] <- mnanom[mnanom < 0] + 360 mnanom <- mnanom * deg2rad # Ecliptic longitude and obliquity of ecliptic eclong <- mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom) eclong <- eclong %% 360 eclong[eclong < 0] <- eclong[eclong < 0] + 360 oblqec <- 23.429 - 0.0000004 * time eclong <- eclong * deg2rad oblqec <- oblqec * deg2rad # Celestial coordinates # Right ascension and declination num <- cos(oblqec) * sin(eclong) den <- cos(eclong) ra <- atan(num / den) ra[den < 0] <- ra[den < 0] + pi ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + twopi dec <- asin(sin(oblqec) * sin(eclong)) # Local coordinates # Greenwich mean sidereal time gmst <- 6.697375 + .0657098242 * time + hour gmst <- gmst %% 24 gmst[gmst < 0] <- gmst[gmst < 0] + 24. # Local mean sidereal time lmst <- gmst + long / 15. lmst <- lmst %% 24. lmst[lmst < 0] <- lmst[lmst < 0] + 24. lmst <- lmst * 15. * deg2rad # Hour angle ha <- lmst - ra ha[ha < -pi] <- ha[ha < -pi] + twopi ha[ha > pi] <- ha[ha > pi] - twopi # Latitude to radians lat <- lat * deg2rad # Azimuth and elevation el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha)) az <- asin(-cos(dec) * sin(ha) / cos(el)) elc <- asin(sin(dec) / sin(lat)) az[el >= elc] <- pi - az[el >= elc] az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi el <- el / deg2rad az <- az / deg2rad lat <- lat / deg2rad return(list(elevation=el, azimuth=az)) } ``` The problem I'm hitting is that the azimuth it returns seems wrong. 
For example, if I run the function on the (southern) summer solstice at 12:00 for locations 0ºE and 41ºS, 3ºS, 3ºN and 41ºN:

```
> sunPosition(2012,12,22,12,0,0,-41,0)
$elevation
[1] 72.42113

$azimuth
[1] 180.9211

> sunPosition(2012,12,22,12,0,0,-3,0)
$elevation
[1] 69.57493

$azimuth
[1] -0.79713

Warning message:
In asin(sin(dec)/sin(lat)) : NaNs produced

> sunPosition(2012,12,22,12,0,0,3,0)
$elevation
[1] 63.57538

$azimuth
[1] -0.6250971

Warning message:
In asin(sin(dec)/sin(lat)) : NaNs produced

> sunPosition(2012,12,22,12,0,0,41,0)
$elevation
[1] 25.57642

$azimuth
[1] 180.3084
```

These numbers just don't seem right. The elevation I'm happy with - the first two should be roughly the same, the third a touch lower, and the fourth much lower. However the first azimuth should be roughly due North, whereas the number it gives is the complete opposite. The remaining three should point roughly due South, however only the last one does. The two in the middle point just off North, again 180º out.

As you can see, there are also a couple of errors triggered at the low latitudes (close to the equator).

I believe the fault is in this section, with the error being triggered at the third line (starting with `elc`).

```
 # Azimuth and elevation
 el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
 az <- asin(-cos(dec) * sin(ha) / cos(el))
 elc <- asin(sin(dec) / sin(lat))
 az[el >= elc] <- pi - az[el >= elc]
 az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi
```

I googled around and found a similar chunk of code in C; converted to R, the line it uses to calculate the azimuth would be something like

```
az <- atan(sin(ha) / (cos(ha) * sin(lat) - tan(dec) * cos(lat)))
```

The output here seems to be heading in the right direction, but I just can't get it to give me the right answer all the time when it's converted back to degrees.

A correction of the code (I suspect it's just the few lines above) to make it calculate the correct azimuth would be fantastic.
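(Side note for later readers: the quadrant trouble with the last snippet comes from plain `atan`, which cannot tell opposite quadrants apart. A two-argument arctangent avoids the bookkeeping entirely. A Python sketch of the same formula, my own illustration rather than code from the answers:)

```python
import math

def azimuth_via_atan2(dec, lat, ha):
    """Solar azimuth in degrees clockwise from north; inputs in radians.
    The classic atan form of this formula measures the angle from south,
    so shift by 180 degrees at the end."""
    az = math.atan2(math.sin(ha),
                    math.cos(ha) * math.sin(lat) - math.tan(dec) * math.cos(lat))
    return (math.degrees(az) + 180.0) % 360.0
```

At solar noon (`ha = 0`) this gives due south for a northern winter observer and due north for a southern summer observer, with no special-casing.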
2012/01/03
Using "NOAA Solar Calculations" from one of the links above, I have changed the final part of the function a bit, using a slightly different algorithm that I hope I have translated without errors. I have commented out the now-useless code and added the new algorithm just after the latitude-to-radians conversion:

```
    # -----------------------------------------------
    # New code
    # Solar zenith angle
    zenithAngle <- acos(sin(lat) * sin(dec) + cos(lat) * cos(dec) * cos(ha))
    # Solar azimuth
    az <- acos(((sin(lat) * cos(zenithAngle)) - sin(dec)) / (cos(lat) * sin(zenithAngle)))
    rm(zenithAngle)
    # -----------------------------------------------

    # Azimuth and elevation
    el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
    #az <- asin(-cos(dec) * sin(ha) / cos(el))
    #elc <- asin(sin(dec) / sin(lat))
    #az[el >= elc] <- pi - az[el >= elc]
    #az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi

    el <- el / deg2rad
    az <- az / deg2rad
    lat <- lat / deg2rad

    # -----------------------------------------------
    # New code
    if (ha > 0) az <- az + 180 else az <- 540 - az
    az <- az %% 360
    # -----------------------------------------------

    return(list(elevation=el, azimuth=az))
```

To verify the azimuth trend in the four cases you mentioned, let's plot it against time of day:

```
hour <- seq(from = 0, to = 23, by = 0.5)
azimuth <- data.frame(hour = hour)
az41S <- apply(azimuth, 1, function(x) sunPosition(2012,12,22,x,0,0,-41,0)$azimuth)
az03S <- apply(azimuth, 1, function(x) sunPosition(2012,12,22,x,0,0,-03,0)$azimuth)
az03N <- apply(azimuth, 1, function(x) sunPosition(2012,12,22,x,0,0,03,0)$azimuth)
az41N <- apply(azimuth, 1, function(x) sunPosition(2012,12,22,x,0,0,41,0)$azimuth)
azimuth <- cbind(azimuth, az41S, az03S, az41N, az03N)
rm(az41S, az03S, az41N, az03N)

library(reshape2)  # for melt()
library(ggplot2)
azimuth.plot <- melt(data = azimuth, id.vars = "hour")
ggplot(aes(x = hour, y = value, color = variable), data = azimuth.plot) +
    geom_line(size = 2) +
    geom_vline(xintercept = 12) +
    facet_wrap(~ variable)
```

Image 
attached: ![enter image description here](https://i.stack.imgur.com/QXOZn.png)
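For readers who want the same NOAA-style step outside R, here is a scalar Python sketch (my own translation of the formulas above; inputs in radians, outputs in degrees, with declination and hour angle assumed already computed as in the surrounding function):

```python
import math

def noaa_zenith_azimuth(dec, lat, ha):
    """Solar zenith angle and azimuth (degrees, azimuth clockwise
    from north) following the NOAA solar calculator's final step."""
    zenith = math.acos(math.sin(lat) * math.sin(dec)
                       + math.cos(lat) * math.cos(dec) * math.cos(ha))
    az = math.degrees(math.acos(
        (math.sin(lat) * math.cos(zenith) - math.sin(dec))
        / (math.cos(lat) * math.sin(zenith))))
    if ha > 0:                  # afternoon: sun west of the meridian
        az = az + 180.0
    else:                       # morning: mirror into the eastern half
        az = 540.0 - az
    return math.degrees(zenith), az % 360.0
```

Because `cos(ha)` is even, a morning and afternoon hour angle of equal magnitude give the same zenith and mirror-image azimuths summing to 360°, which makes a handy sanity check.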
I needed sun position in a Python project. I adapted Josh O'Brien's algorithm. Thank you Josh. In case it could be useful to anyone, here's my adaptation. Note that my project only needed the instantaneous sun position, so time is not a parameter. ``` import math import time def sunPosition(lat=46.5, long=6.5): # Latitude [rad] lat_rad = math.radians(lat) # Get Julian date - 2400000 day = time.gmtime().tm_yday hour = time.gmtime().tm_hour + \ time.gmtime().tm_min/60.0 + \ time.gmtime().tm_sec/3600.0 delta = time.gmtime().tm_year - 1949 leap = delta // 4 # integer division, as in the R trunc() jd = 32916.5 + delta * 365 + leap + day + hour / 24 # The input to the Astronomer's Almanac is the difference between # the Julian date and JD 2451545.0 (noon, 1 January 2000) t = jd - 51545 # Ecliptic coordinates # Mean longitude mnlong_deg = (280.460 + .9856474 * t) % 360 # Mean anomaly mnanom_rad = math.radians((357.528 + .9856003 * t) % 360) # Ecliptic longitude and obliquity of ecliptic eclong = math.radians((mnlong_deg + 1.915 * math.sin(mnanom_rad) + 0.020 * math.sin(2 * mnanom_rad) ) % 360) oblqec_rad = math.radians(23.439 - 0.0000004 * t) # Celestial coordinates # Right ascension and declination num = math.cos(oblqec_rad) * math.sin(eclong) den = math.cos(eclong) ra_rad = math.atan(num / den) if den < 0: ra_rad = ra_rad + math.pi elif num < 0: ra_rad = ra_rad + 2 * math.pi dec_rad = math.asin(math.sin(oblqec_rad) * math.sin(eclong)) # Local coordinates # Greenwich mean sidereal time gmst = (6.697375 + .0657098242 * t + hour) % 24 # Local mean sidereal time lmst = (gmst + long / 15) % 24 lmst_rad = math.radians(15 * lmst) # Hour angle (rad) ha_rad = (lmst_rad - ra_rad) % (2 * math.pi) # Elevation el_rad = math.asin( math.sin(dec_rad) * math.sin(lat_rad) + \ math.cos(dec_rad) * math.cos(lat_rad) * math.cos(ha_rad)) # Azimuth az_rad = math.asin( - math.cos(dec_rad) * math.sin(ha_rad) / math.cos(el_rad)) if (math.sin(dec_rad) - math.sin(el_rad) * math.sin(lat_rad) < 0): az_rad = math.pi - az_rad elif (math.sin(az_rad) < 0): az_rad += 2 * math.pi 
return el_rad, az_rad ```
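If the fixed "now" is a limitation, the time-dependent inputs are easy to derive from an arbitrary UTC datetime instead of `time.gmtime()`. A small sketch (my own, mirroring the quantities used in the function above):

```python
from datetime import datetime, timezone

def almanac_time_inputs(when):
    """Day of year, fractional hour, and days since J2000.0 for a
    timezone-aware UTC datetime, matching the quantities above."""
    day = when.timetuple().tm_yday
    hour = when.hour + when.minute / 60.0 + when.second / 3600.0
    delta = when.year - 1949
    leap = delta // 4        # former leap years, as in the R trunc()
    jd = 32916.5 + delta * 365 + leap + day + hour / 24.0
    return day, hour, jd - 51545.0
```

Passing these through in place of the `time.gmtime()` lookups turns the function into one that works for any date, not just the current instant.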
I encountered a slight problem with a data point and Richie Cotton's functions above (in the implementation of Charlie's code):

```
longitude= 176.0433687000000020361767383292317390441894531250
latitude= -39.173830619999996827118593500927090644836425781250
event_time = as.POSIXct("2013-10-24 12:00:00", format="%Y-%m-%d %H:%M:%S", tz = "UTC")
sunPosition(when=event_time, lat = latitude, long = longitude)

  elevation azimuthJ azimuthC
1 -38.92275      180      NaN
Warning message:
In acos((sin(lat) * cos(zenithAngle) - sin(dec))/(cos(lat) * sin(zenithAngle))) :
  NaNs produced
```

This happens because, in the solarAzimuthRadiansCharlie function, floating-point excitement around an angle of 180 leaves `(sin(lat) * cos(zenithAngle) - sin(dec)) / (cos(lat) * sin(zenithAngle))` the tiniest amount over 1 (1.0000000000000004440892098), which generates a NaN, since the input to acos must not be above 1 or below -1.

I suspect there might be similar edge cases for Josh's calculation, where floating-point rounding effects push the input for the asin step outside -1:1, but I have not hit them in my particular dataset.

In the half-dozen or so cases where I have hit this, the issue occurs at the "true" middle of the day or night, so empirically the input should be exactly 1 or -1. For that reason, I would be comfortable fixing it by applying a rounding step within `solarAzimuthRadiansJosh` and `solarAzimuthRadiansCharlie`. I'm not sure what the theoretical accuracy of the NOAA algorithm is (the point at which numerical accuracy stops mattering anyway), but rounding to 12 decimal places fixed the data in my data set.
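A defensive alternative to rounding, in any port, is to clamp the trig argument into [-1, 1] before calling acos/asin, failing loudly only when the overshoot is larger than rounding noise. A Python sketch of the idea (my own, not from the answers above):

```python
import math

def safe_acos(x, tol=1e-9):
    """acos that tolerates tiny floating-point overshoot beyond [-1, 1],
    but still raises for genuinely out-of-range arguments."""
    if abs(x) > 1.0 + tol:
        raise ValueError("acos argument %r too far outside [-1, 1]" % x)
    return math.acos(max(-1.0, min(1.0, x)))
```

With this in place the acos-based azimuth step cannot produce NaN at solar noon or midnight, while truly bad inputs still surface as errors instead of silently clamping.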
8,708,048
This question has been asked [before](https://stackoverflow.com/questions/257717/position-of-the-sun-given-time-of-day-and-lat-long/258106#comment10668948_258106) a little over three years ago. There was an answer given, however I've found a glitch in the solution.

Code below is in R. I've ported it to another language, but I have tested the original code directly in R to ensure the issue wasn't with my porting.

```
sunPosition <- function(year, month, day, hour=12, min=0, sec=0,
                        lat=46.5, long=6.5) {

    twopi <- 2 * pi
    deg2rad <- pi / 180

    # Get day of the year, e.g. Feb 1 = 32, Mar 1 = 61 on leap years
    month.days <- c(0,31,28,31,30,31,30,31,31,30,31,30)
    day <- day + cumsum(month.days)[month]
    leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60
    day[leapdays] <- day[leapdays] + 1

    # Get Julian date - 2400000
    hour <- hour + min / 60 + sec / 3600 # hour plus fraction
    delta <- year - 1949
    leap <- trunc(delta / 4) # former leapyears
    jd <- 32916.5 + delta * 365 + leap + day + hour / 24

    # The input to the Atronomer's almanach is the difference between
    # the Julian date and JD 2451545.0 (noon, 1 January 2000)
    time <- jd - 51545.

    # Ecliptic coordinates
    # Mean longitude
    mnlong <- 280.460 + .9856474 * time
    mnlong <- mnlong %% 360
    mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360

    # Mean anomaly
    mnanom <- 357.528 + .9856003 * time
    mnanom <- mnanom %% 360
    mnanom[mnanom < 0] <- mnanom[mnanom < 0] + 360
    mnanom <- mnanom * deg2rad

    # Ecliptic longitude and obliquity of ecliptic
    eclong <- mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom)
    eclong <- eclong %% 360
    eclong[eclong < 0] <- eclong[eclong < 0] + 360
    oblqec <- 23.429 - 0.0000004 * time
    eclong <- eclong * deg2rad
    oblqec <- oblqec * deg2rad

    # Celestial coordinates
    # Right ascension and declination
    num <- cos(oblqec) * sin(eclong)
    den <- cos(eclong)
    ra <- atan(num / den)
    ra[den < 0] <- ra[den < 0] + pi
    ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + twopi
    dec <- asin(sin(oblqec) * sin(eclong))

    # Local coordinates
    # Greenwich mean sidereal time
    gmst <- 6.697375 + .0657098242 * time + hour
    gmst <- gmst %% 24
    gmst[gmst < 0] <- gmst[gmst < 0] + 24.

    # Local mean sidereal time
    lmst <- gmst + long / 15.
    lmst <- lmst %% 24.
    lmst[lmst < 0] <- lmst[lmst < 0] + 24.
    lmst <- lmst * 15. * deg2rad

    # Hour angle
    ha <- lmst - ra
    ha[ha < -pi] <- ha[ha < -pi] + twopi
    ha[ha > pi] <- ha[ha > pi] - twopi

    # Latitude to radians
    lat <- lat * deg2rad

    # Azimuth and elevation
    el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
    az <- asin(-cos(dec) * sin(ha) / cos(el))
    elc <- asin(sin(dec) / sin(lat))
    az[el >= elc] <- pi - az[el >= elc]
    az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi

    el <- el / deg2rad
    az <- az / deg2rad
    lat <- lat / deg2rad

    return(list(elevation=el, azimuth=az))
}
```

The problem I'm hitting is that the azimuth it returns seems wrong.
For example, if I run the function on the (southern) summer solstice at 12:00 for locations 0ºE and 41ºS, 3ºS, 3ºN and 41ºN:

```
> sunPosition(2012,12,22,12,0,0,-41,0)
$elevation
[1] 72.42113

$azimuth
[1] 180.9211

> sunPosition(2012,12,22,12,0,0,-3,0)
$elevation
[1] 69.57493

$azimuth
[1] -0.79713

Warning message:
In asin(sin(dec)/sin(lat)) : NaNs produced

> sunPosition(2012,12,22,12,0,0,3,0)
$elevation
[1] 63.57538

$azimuth
[1] -0.6250971

Warning message:
In asin(sin(dec)/sin(lat)) : NaNs produced

> sunPosition(2012,12,22,12,0,0,41,0)
$elevation
[1] 25.57642

$azimuth
[1] 180.3084
```

These numbers just don't seem right. The elevation I'm happy with - the first two should be roughly the same, the third a touch lower, and the fourth much lower. However the first azimuth should be roughly due North, whereas the number it gives is the complete opposite. The remaining three should point roughly due South, however only the last one does. The two in the middle point just off North, again 180º out.

As you can see, there are also a couple of errors triggered at the low latitudes (close to the equator).

I believe the fault is in this section, with the error being triggered at the third line (starting with `elc`).

```
# Azimuth and elevation
el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
az <- asin(-cos(dec) * sin(ha) / cos(el))
elc <- asin(sin(dec) / sin(lat))
az[el >= elc] <- pi - az[el >= elc]
az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi
```

I googled around and found a similar chunk of code in C; converted to R, the line it uses to calculate the azimuth would be something like

```
az <- atan(sin(ha) / (cos(ha) * sin(lat) - tan(dec) * cos(lat)))
```

The output here seems to be heading in the right direction, but I just can't get it to give me the right answer all the time when it's converted back to degrees.

A correction of the code (I suspect it's just the few lines above) to make it calculate the correct azimuth would be fantastic.
2012/01/03
[ "https://Stackoverflow.com/questions/8708048", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103964/" ]
This seems like an important topic, so I've posted a longer than typical answer: if this algorithm is to be used by others in the future, I think it's important that it be accompanied by references to the literature from which it has been derived.

The short answer
================

As you've noted, your posted code does not work properly for locations near the equator, or in the southern hemisphere. To fix it, simply replace these lines in your original code:

```
elc <- asin(sin(dec) / sin(lat))
az[el >= elc] <- pi - az[el >= elc]
az[el <= elc & ha > 0] <- az[el <= elc & ha > 0] + twopi
```

with these:

```
cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat))
sinAzNeg <- (sin(az) < 0)
az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + twopi
az[!cosAzPos] <- pi - az[!cosAzPos]
```

It should now work for any location on the globe.

Discussion
==========

The code in your example is adapted almost verbatim from a 1988 article by J.J. Michalsky (Solar Energy. 40:227-235). That article in turn refined an algorithm presented in a 1978 article by R. Walraven (Solar Energy. 20:393-397). Walraven reported that the method had been used successfully for several years to precisely position a polarizing radiometer in Davis, CA (38° 33' 14" N, 121° 44' 17" W).

**Both Michalsky's and Walraven's code contains important/fatal errors.** In particular, while Michalsky's algorithm works just fine in most of the United States, it fails (as you've found) for areas near the equator, or in the southern hemisphere. In 1989, J.W. Spencer of Victoria, Australia, noted the same thing (Solar Energy. 42(4):353):

> 
> Dear Sir:
> 
> Michalsky's method for assigning the calculated azimuth to the correct quadrant, derived from Walraven, does not give correct values when applied for Southern (negative) latitudes. Further the calculation of the critical elevation (elc) will fail for a latitude of zero because of division by zero. Both these objections can be avoided simply by assigning the azimuth to the correct quadrant by considering the sign of cos(azimuth).
> 
> 

My edits to your code are based on the corrections suggested by Spencer in that published Comment. I have simply altered them somewhat to ensure that the R function `sunPosition()` remains 'vectorized' (i.e. working properly on vectors of point locations, rather than needing to be passed one point at a time).

Accuracy of the function `sunPosition()`
========================================

To test that `sunPosition()` works correctly, I've compared its results with those calculated by the National Oceanic and Atmospheric Administration's [Solar Calculator](http://www.esrl.noaa.gov/gmd/grad/solcalc/). In both cases, sun positions were calculated for midday (12:00 PM) on the southern summer solstice (December 22nd), 2012. All results were in agreement to within 0.02 degrees.

```
testPts <- data.frame(lat = c(-41, -3, 3, 41),
                      long = c(0, 0, 0, 0))

# Sun's position as returned by the NOAA Solar Calculator,
NOAA <- data.frame(elevNOAA = c(72.44, 69.57, 63.57, 25.6),
                   azNOAA = c(359.09, 180.79, 180.62, 180.3))

# Sun's position as returned by sunPosition()
sunPos <- sunPosition(year = 2012, month = 12, day = 22,
                      hour = 12, min = 0, sec = 0,
                      lat = testPts$lat, long = testPts$long)

cbind(testPts, NOAA, sunPos)
#   lat long elevNOAA azNOAA elevation  azimuth
# 1 -41    0    72.44 359.09  72.43112 359.0787
# 2  -3    0    69.57 180.79  69.56493 180.7965
# 3   3    0    63.57 180.62  63.56539 180.6247
# 4  41    0    25.60 180.30  25.56642 180.3083
```

Other errors in the code
========================

There are at least two other (quite minor) errors in the posted code. The first causes February 29th and March 1st of leap years to both be tallied as day 61 of the year. The second error derives from a typo in the original article, which was corrected by Michalsky in a 1989 note (Solar Energy. 43(5):323).
This code block shows the offending lines, commented out and followed immediately by corrected versions:

```
# leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) & day >= 60
leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) &
            day >= 60 & !(month==2 & day==60)

# oblqec <- 23.429 - 0.0000004 * time
oblqec <- 23.439 - 0.0000004 * time
```

Corrected version of `sunPosition()`
====================================

Here is the corrected code that was verified above:

```
sunPosition <- function(year, month, day, hour=12, min=0, sec=0,
                        lat=46.5, long=6.5) {

    twopi <- 2 * pi
    deg2rad <- pi / 180

    # Get day of the year, e.g. Feb 1 = 32, Mar 1 = 61 on leap years
    month.days <- c(0,31,28,31,30,31,30,31,31,30,31,30)
    day <- day + cumsum(month.days)[month]
    leapdays <- year %% 4 == 0 & (year %% 400 == 0 | year %% 100 != 0) &
                day >= 60 & !(month==2 & day==60)
    day[leapdays] <- day[leapdays] + 1

    # Get Julian date - 2400000
    hour <- hour + min / 60 + sec / 3600 # hour plus fraction
    delta <- year - 1949
    leap <- trunc(delta / 4) # former leapyears
    jd <- 32916.5 + delta * 365 + leap + day + hour / 24

    # The input to the Atronomer's almanach is the difference between
    # the Julian date and JD 2451545.0 (noon, 1 January 2000)
    time <- jd - 51545.

    # Ecliptic coordinates
    # Mean longitude
    mnlong <- 280.460 + .9856474 * time
    mnlong <- mnlong %% 360
    mnlong[mnlong < 0] <- mnlong[mnlong < 0] + 360

    # Mean anomaly
    mnanom <- 357.528 + .9856003 * time
    mnanom <- mnanom %% 360
    mnanom[mnanom < 0] <- mnanom[mnanom < 0] + 360
    mnanom <- mnanom * deg2rad

    # Ecliptic longitude and obliquity of ecliptic
    eclong <- mnlong + 1.915 * sin(mnanom) + 0.020 * sin(2 * mnanom)
    eclong <- eclong %% 360
    eclong[eclong < 0] <- eclong[eclong < 0] + 360
    oblqec <- 23.439 - 0.0000004 * time
    eclong <- eclong * deg2rad
    oblqec <- oblqec * deg2rad

    # Celestial coordinates
    # Right ascension and declination
    num <- cos(oblqec) * sin(eclong)
    den <- cos(eclong)
    ra <- atan(num / den)
    ra[den < 0] <- ra[den < 0] + pi
    ra[den >= 0 & num < 0] <- ra[den >= 0 & num < 0] + twopi
    dec <- asin(sin(oblqec) * sin(eclong))

    # Local coordinates
    # Greenwich mean sidereal time
    gmst <- 6.697375 + .0657098242 * time + hour
    gmst <- gmst %% 24
    gmst[gmst < 0] <- gmst[gmst < 0] + 24.

    # Local mean sidereal time
    lmst <- gmst + long / 15.
    lmst <- lmst %% 24.
    lmst[lmst < 0] <- lmst[lmst < 0] + 24.
    lmst <- lmst * 15. * deg2rad

    # Hour angle
    ha <- lmst - ra
    ha[ha < -pi] <- ha[ha < -pi] + twopi
    ha[ha > pi] <- ha[ha > pi] - twopi

    # Latitude to radians
    lat <- lat * deg2rad

    # Azimuth and elevation
    el <- asin(sin(dec) * sin(lat) + cos(dec) * cos(lat) * cos(ha))
    az <- asin(-cos(dec) * sin(ha) / cos(el))

    # For logic and names, see Spencer, J.W. 1989. Solar Energy. 42(4):353
    cosAzPos <- (0 <= sin(dec) - sin(el) * sin(lat))
    sinAzNeg <- (sin(az) < 0)
    az[cosAzPos & sinAzNeg] <- az[cosAzPos & sinAzNeg] + twopi
    az[!cosAzPos] <- pi - az[!cosAzPos]

    # if (0 < sin(dec) - sin(el) * sin(lat)) {
    #     if(sin(az) < 0) az <- az + twopi
    # } else {
    #     az <- pi - az
    # }

    el <- el / deg2rad
    az <- az / deg2rad
    lat <- lat / deg2rad

    return(list(elevation=el, azimuth=az))
}
```

References:
===========

Michalsky, J.J. 1988. The Astronomical Almanac's algorithm for approximate solar position (1950-2050). Solar Energy. 40(3):227-235.

Michalsky, J.J. 1989. Errata. Solar Energy. 43(5):323.

Spencer, J.W. 1989. Comments on "The Astronomical Almanac's Algorithm for Approximate Solar Position (1950-2050)". Solar Energy. 42(4):353.

Walraven, R. 1978. Calculating the position of the sun. Solar Energy. 20:393-397.
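Spencer's quadrant rule is compact enough to sanity-check outside R as well. Below is a minimal Python sketch (the function name and the scalar, one-point-at-a-time form are mine, not from the paper) of the same correction, with all angles in radians:

```python
import math

def correct_azimuth(az, dec, el, lat):
    """Assign the raw azimuth from asin() to the correct quadrant,
    using the sign of cos(azimuth) as suggested by Spencer (1989).
    az: raw azimuth; dec: declination; el: elevation; lat: latitude (radians)."""
    cos_az_pos = 0 <= math.sin(dec) - math.sin(el) * math.sin(lat)
    if cos_az_pos:
        # Azimuth already in an eastern/western quadrant consistent with
        # cos(az) >= 0; just wrap negative values into [0, 2*pi).
        if math.sin(az) < 0:
            az += 2 * math.pi
    else:
        # cos(az) < 0: reflect into the opposite quadrant.
        az = math.pi - az
    return az
```

Note that this avoids both failure modes Spencer pointed out: there is no division by `sin(lat)`, so latitude zero is safe, and the sign test works identically in both hemispheres.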
I encountered a slight problem with a data point and Richie Cotton's functions above (in the implementation of Charlie's code)

```
longitude = 176.0433687000000020361767383292317390441894531250
latitude = -39.173830619999996827118593500927090644836425781250
event_time = as.POSIXct("2013-10-24 12:00:00", format="%Y-%m-%d %H:%M:%S", tz = "UTC")

sunPosition(when = event_time, lat = latitude, long = longitude)

  elevation azimuthJ azimuthC
1 -38.92275      180      NaN
Warning message:
In acos((sin(lat) * cos(zenithAngle) - sin(dec))/(cos(lat) * sin(zenithAngle))) :
  NaNs produced
```

because in the `solarAzimuthRadiansCharlie` function there is floating-point noise around an angle of 180, such that `(sin(lat) * cos(zenithAngle) - sin(dec)) / (cos(lat) * sin(zenithAngle))` comes out the tiniest amount over 1 (1.0000000000000004440892098), which generates a NaN, since the input to acos should not be above 1 or below -1.

I suspect there might be similar edge cases for Josh's calculation, where floating-point rounding effects cause the input for the asin step to fall outside -1:1, but I have not hit them in my particular dataset.

In the half-dozen or so cases where I have hit this, the issue occurs at the "true" middle of the day or night, so empirically the input value should be exactly 1 or -1. For that reason, I would be comfortable fixing it by applying a rounding step within `solarAzimuthRadiansJosh` and `solarAzimuthRadiansCharlie`. I'm not sure what the theoretical accuracy of the NOAA algorithm is (i.e. the point at which numerical accuracy stops mattering anyway), but rounding to 12 decimal places fixed the data in my data set.
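A related defensive option (my suggestion, not part of the post above) is to clamp the argument into [-1, 1] before calling `acos`, instead of rounding the whole value. A small Python sketch of the idea, using the problematic value reported above:

```python
import math

def safe_acos(x):
    """acos that tolerates tiny floating-point overshoot outside [-1, 1]."""
    return math.acos(max(-1.0, min(1.0, x)))

# A hair over 1 due to rounding, as in the NaN case described above.
x = 1.0000000000000004
# math.acos(x) would raise ValueError ("math domain error");
# the clamped version returns the intended angle of 0 radians.
print(safe_acos(x))  # 0.0
```

Unlike rounding to a fixed number of decimal places, clamping leaves in-range inputs bit-for-bit unchanged, so it cannot perturb results away from the edge cases.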
12,743,575
I'm creating a WPF tab control which contains tabs rotated 270°. When I move the mouse between tabs, they appear to be "jumping". I'm looking for a way to prevent this from happening. The tabs should have the same behaviour as the tabs in the Microsoft Office ribbon UI (stay in a fixed position).

Is it possible to modify the `XAML` below to achieve this?

**XAML:**

```
<Window x:Class="WpfApplication2.Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Title="Window1" Height="350" Width="300"
    TextOptions.TextFormattingMode="Display"
    TextOptions.TextRenderingMode="ClearType"
    UseLayoutRounding="true">
    <TabControl TabStripPlacement="Left" BorderThickness="1,0,0,0">
        <TabControl.ItemContainerStyle>
            <Style TargetType="{x:Type TabItem}">
                <Setter Property="LayoutTransform">
                    <Setter.Value>
                        <RotateTransform Angle="270"/>
                    </Setter.Value>
                </Setter>
                <Setter Property="Foreground" Value="#330033" />
                <Setter Property="FontSize" Value="9pt"/>
                <Setter Property="Height" Value="22" />
                <Setter Property="Template">
                    <Setter.Value>
                        <ControlTemplate TargetType="{x:Type TabItem}">
                            <Grid>
                                <Border Background="White" Name="Border" BorderBrush="#A9A9A9"
                                        BorderThickness="0" CornerRadius="2,2,0,0">
                                    <Border Padding="2" BorderBrush="Transparent">
                                        <ContentPresenter x:Name="ContentSite"
                                                          VerticalAlignment="Center"
                                                          HorizontalAlignment="Center"
                                                          ContentSource="Header"
                                                          Margin="12,0,12,2"/>
                                    </Border>
                                </Border>
                            </Grid>
                            <ControlTemplate.Triggers>
                                <Trigger Property="IsSelected" Value="True">
                                    <Setter Property="Panel.ZIndex" Value="100" />
                                    <Setter TargetName="Border" Property="BorderThickness" Value="1,1,1,0" />
                                </Trigger>
                                <Trigger Property="IsMouseOver" Value="True">
                                    <Setter Property="Panel.ZIndex" Value="100" />
                                    <Setter TargetName="Border" Property="BorderThickness" Value="1,1,1,0" />
                                    <Setter TargetName="Border" Property="Background" Value="White" />
                                </Trigger>
                            </ControlTemplate.Triggers>
                        </ControlTemplate>
                    </Setter.Value>
                </Setter>
            </Style>
        </TabControl.ItemContainerStyle>
        <TabItem Header="Tab item 1" />
        <TabItem Header="Tab item 2" />
        <TabItem Header="Tab item 3" />
    </TabControl>
</Window>
```
2012/10/05
[ "https://Stackoverflow.com/questions/12743575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/923095/" ]
I think it would be better to hide a `Border` when the mouse is not over a tab by setting the `Border`'s `BorderBrush` to `Transparent` rather than setting its `BorderThickness` to zero. In other words, change ``` <Border Background="White" Name="Border" BorderBrush="#A9A9A9" BorderThickness="0" CornerRadius="2,2,0,0"> ``` to ``` <Border Background="White" Name="Border" BorderBrush="Transparent" BorderThickness="1,1,1,0" CornerRadius="2,2,0,0"> ``` and in the `IsMouseOver` trigger, use ``` <Setter TargetName="Border" Property="BorderBrush" Value="#A9A9A9" /> ``` in place of a setter on the `BorderThickness`. In your case, the border element itself changes size when you change its `BorderThickness` because the `BorderThickness` contributes to the 'size' of the `Border`, and the control inside it doesn't change size.
If you remove the line below (the border setter) from the `IsMouseOver` trigger, it stops jumping, and visually it doesn't really have a huge effect (in my opinion).

```
<Setter TargetName="Border" Property="BorderThickness" Value="0,0,0,0" />
```
39,467,265
I've run into some cases in which calling `this.setState({ foo: true })` and immediately checking `this.state.foo` does not return the new value `true`. I can't recall now the exact case in which this happens (maybe when calling `setState()` multiple times in the same iteration, I don't know). So I wonder: does reactjs guarantee that the following will **always** work?

```
this.setState({ foo: true });
console.log(this.state.foo); // => prints true
```
2016/09/13
[ "https://Stackoverflow.com/questions/39467265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4827838/" ]
> 
> Does reactjs guarantee that the following will always work?
> 
> 

No, [`setState`](https://facebook.github.io/react/docs/component-api.html#setstate) is an *asynchronous* method. If you want to get the updated state, you should pass a callback as the second argument:

```
this.setState({ foo: true }, () => {
    console.log(this.state.foo);
});
```
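To see why the immediate read can be stale, here is a tiny mock (emphatically not React's real implementation, just a sketch of the batching idea): updates are queued by `setState` and only merged into `this.state` when the queue is flushed later, with any callback run after the merge.

```javascript
class MiniComponent {
  constructor(initialState) {
    this.state = initialState;
    this._queue = [];            // pending { partial, callback } pairs
  }
  setState(partial, callback) {
    this._queue.push({ partial, callback });   // nothing is applied yet
  }
  _flush() {                      // React does this later, on its own schedule
    for (const { partial, callback } of this._queue) {
      Object.assign(this.state, partial);      // shallow merge, like setState
      if (callback) callback();                // runs after the merge
    }
    this._queue = [];
  }
}

const c = new MiniComponent({ foo: false });
c.setState({ foo: true }, () => console.log('callback sees', c.state.foo)); // true
console.log('immediate read:', c.state.foo); // still false
c._flush();
```

Reading `this.state` right after `setState` sees the old value because the merge has not happened yet; the callback (or `componentDidUpdate`) is the only place guaranteed to see the new one.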
It's done asynchronously, so you may not see the update within the same method. Check for it in `componentDidUpdate`, or read the result in the `setState` callback, like this:

```
this.setState({name: nextProps.site.name}, () => {
    console.log(this.state.name);
});
```
4,710,945
I am creating a database application in .NET. I am using a data access layer for communicating between .NET objects and the database, but I am not sure whether this class is correct. Can anyone check it and point out any mistakes?

```
namespace IDataaccess
{
    #region Collection Class
    public class SPParamCollection : List<SPParams> { }
    public class SPParamReturnCollection : List<SPParams> { }
    #endregion

    #region struct
    public struct SPParams
    {
        public string Name { get; set; }
        public object Value { get; set; }
        public ParameterDirection ParamDirection { get; set; }
        public SqlDbType Type { get; set; }
        public int Size { get; set; }
        public string TypeName { get; set; }
        // public string datatype;
    }
    #endregion

    /// <summary>
    /// Interface DataAccess Layer implementation New version
    /// </summary>
    public interface IDataAccess
    {
        DataTable getDataUsingSP(string spName);
        DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection);
        DataSet getDataSetUsingSP(string spName);
        DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection);
        SqlDataReader getDataReaderUsingSP(string spName);
        SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection);
        int executeSP(string spName);
        int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas);
        int executeSP(string spName, SPParamCollection spParamCollection);
        DataTable getDataUsingSqlQuery(string strSqlQuery);
        int executeSqlQuery(string strSqlQuery);
        SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection);
        SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection);
        SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas);
        int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref
SPParamReturnCollection spParamReturnCollection); object getScalarUsingSP(string spName); object getScalarUsingSP(string spName, SPParamCollection spParamCollection); } } using IDataaccess; namespace Dataaccess { /// <summary> /// Class DataAccess Layer implimentation New version /// </summary> public class DataAccess : IDataaccess.IDataAccess { #region Public variables static string Strcon; DataSet dts = new DataSet(); public DataAccess() { Strcon = sReadConnectionString(); } private string sReadConnectionString() { try { //dts.ReadXml("C:\\cnn.config"); //Strcon = dts.Tables[0].Rows[0][0].ToString(); //System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); //Strcon = config.ConnectionStrings.ConnectionStrings["connectionString"].ConnectionString; // Add an Application Setting. //Strcon = "Data Source=192.168.50.103;Initial Catalog=erpDB;User ID=ipixerp1;Password=NogoXVc3"; Strcon = System.Configuration.ConfigurationManager.AppSettings["connection"]; //Strcon = System.Configuration.ConfigurationSettings.AppSettings[0].ToString(); } catch (Exception) { } return Strcon; } public SqlConnection connection; public SqlCommand cmd; public SqlDataAdapter adpt; public DataTable dt; public int intresult; public SqlDataReader sqdr; #endregion #region Public Methods public DataTable getDataUsingSP(string spName) { return getDataUsingSP(spName, null); } public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new 
DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public DataSet getDataSetUsingSP(string spName) { return getDataSetUsingSP(spName, null); } public DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); adpt.Fill(ds); return ds; } } } finally { connection.Close(); } } public SqlDataReader getDataReaderUsingSP(string spName) { return getDataReaderUsingSP(spName, null); } public SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; sqdr = cmd.ExecuteReader(); return (sqdr); } } } finally { connection.Close(); } } public int executeSP(string spName) { return executeSP(spName, null); } public int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; 
count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); } public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) connection.Open(); { using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { 
cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = 
CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } } public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } } ```
2011/01/17
[ "https://Stackoverflow.com/questions/4710945", "https://Stackoverflow.com", "https://Stackoverflow.com/users/543047/" ]
As far as I can see, your current approach doesn't take care of transactions. These days hardly anyone hand-writes a DAL when [EF](http://msdn.microsoft.com/en-us/library/aa697427%28v=vs.80%29.aspx), [NHibernate](http://nhforge.org/Default.aspx), etc. are available. I suggest you check out Entity Framework and use it; for a starting point, see [code first](http://weblogs.asp.net/scottgu/archive/2010/07/16/code-first-development-with-entity-framework-4.aspx). It is available for Visual Studio 2008 and 2010.
If you have the opportunity, try using an existing tool to map from code to a database. [NHibernate](http://nhforge.org/) is a popular one, as is the [MS Entity Framework 4.0](http://msdn.microsoft.com/en-us/data/aa937723). The code you posted here is too complex to be reviewed in full on these pages, so I doubt you will get a specific answer about it.
count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); } public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) connection.Open(); { using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { 
cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = 
CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } } public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } } ```
2011/01/17
[ "https://Stackoverflow.com/questions/4710945", "https://Stackoverflow.com", "https://Stackoverflow.com/users/543047/" ]
As far as I can see, your current approach doesn't take the transactional environment into account. These days no one hand-rolls a DAL when [EF](http://msdn.microsoft.com/en-us/library/aa697427%28v=vs.80%29.aspx), [NHibernate](http://nhforge.org/Default.aspx), and others are available. I suggest checking out Entity Framework and using it; for a starting point, see [code first](http://weblogs.asp.net/scottgu/archive/2010/07/16/code-first-development-with-entity-framework-4.aspx). It is available for Visual Studio 2008 and 2010.
If I were code reviewing this, I would reject it. My first comment is: that is NOT a data access layer - you could call this a *Data Independence Layer* (in reality you very seldom need one of those). A data access layer is meant to abstract the data repository from the layer calling it. Exposing functions like `getScalarUsingSP()` and `getDataUsingSP()` is not abstracting the database at all; it is just creating a couple of wrapper functions better suited to your style of calling. You are on the right track by exposing things via an interface, but you need to change the exposed functions. Instead of `getDataUsingSP()` you need calls like `GetSelectedAccounts()` or `SaveCustomer()` - `getDataUsingSP()` would be private and called from within the "friendly" exposed functions.
4,710,945
i am creating a database applicatin in .Net. I am using a DataAccessLayer for communicating .net objects with database but i am not sure that this class is correct or not Can anyone cross check it and rectify any mistakes ``` namespace IDataaccess { #region Collection Class public class SPParamCollection : List<SPParams> { } public class SPParamReturnCollection : List<SPParams> { } #endregion #region struct public struct SPParams { public string Name { get; set; } public object Value { get; set; } public ParameterDirection ParamDirection { get; set; } public SqlDbType Type { get; set; } public int Size { get; set; } public string TypeName { get; set; } // public string datatype; } #endregion /// <summary> /// Interface DataAccess Layer implimentation New version /// </summary> public interface IDataAccess { DataTable getDataUsingSP(string spName); DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection); DataSet getDataSetUsingSP(string spName); DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection); SqlDataReader getDataReaderUsingSP(string spName); SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection); int executeSP(string spName); int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas); int executeSP(string spName, SPParamCollection spParamCollection); DataTable getDataUsingSqlQuery(string strSqlQuery); int executeSqlQuery(string strSqlQuery); SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas); int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref 
SPParamReturnCollection spParamReturnCollection); object getScalarUsingSP(string spName); object getScalarUsingSP(string spName, SPParamCollection spParamCollection); } } using IDataaccess; namespace Dataaccess { /// <summary> /// Class DataAccess Layer implimentation New version /// </summary> public class DataAccess : IDataaccess.IDataAccess { #region Public variables static string Strcon; DataSet dts = new DataSet(); public DataAccess() { Strcon = sReadConnectionString(); } private string sReadConnectionString() { try { //dts.ReadXml("C:\\cnn.config"); //Strcon = dts.Tables[0].Rows[0][0].ToString(); //System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); //Strcon = config.ConnectionStrings.ConnectionStrings["connectionString"].ConnectionString; // Add an Application Setting. //Strcon = "Data Source=192.168.50.103;Initial Catalog=erpDB;User ID=ipixerp1;Password=NogoXVc3"; Strcon = System.Configuration.ConfigurationManager.AppSettings["connection"]; //Strcon = System.Configuration.ConfigurationSettings.AppSettings[0].ToString(); } catch (Exception) { } return Strcon; } public SqlConnection connection; public SqlCommand cmd; public SqlDataAdapter adpt; public DataTable dt; public int intresult; public SqlDataReader sqdr; #endregion #region Public Methods public DataTable getDataUsingSP(string spName) { return getDataUsingSP(spName, null); } public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new 
DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public DataSet getDataSetUsingSP(string spName) { return getDataSetUsingSP(spName, null); } public DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); adpt.Fill(ds); return ds; } } } finally { connection.Close(); } } public SqlDataReader getDataReaderUsingSP(string spName) { return getDataReaderUsingSP(spName, null); } public SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; sqdr = cmd.ExecuteReader(); return (sqdr); } } } finally { connection.Close(); } } public int executeSP(string spName) { return executeSP(spName, null); } public int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; 
count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); } public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) connection.Open(); { using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { 
cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = 
CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } } public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } } ```
2011/01/17
[ "https://Stackoverflow.com/questions/4710945", "https://Stackoverflow.com", "https://Stackoverflow.com/users/543047/" ]
If you have the opportunity, try using an existing tool to map from code to a database. [NHibernate](http://nhforge.org/) is a popular one, and so is the [MS Entity Framework 4.0](http://msdn.microsoft.com/en-us/data/aa937723). The code you posted here is quite complex to be asked about on these pages; I doubt you will get a specific answer about it.
If I were code reviewing this, I would reject it. My first comment is: that is NOT a data access layer - you could call this a *Data Independence Layer* (in reality you very seldom need one of those). A data access layer is meant to abstract the data repository from the layer calling it. Exposing functions like `getScalarUsingSP()` and `getDataUsingSP()` is not abstracting the database at all; it is just creating a couple of wrapper functions better suited to your style of calling. You are on the right track by exposing things via an interface, but you need to change the exposed functions. Instead of `getDataUsingSP()` you need calls like `GetSelectedAccounts()` or `SaveCustomer()` - `getDataUsingSP()` would be private and called from within the "friendly" exposed functions.
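The principle in this review (hide the generic stored-procedure plumbing behind intent-revealing methods) is language-neutral. A minimal sketch in Python, where every name (`EmployeeRepository`, `usp_GetSelectedAccounts`, etc.) is a hypothetical illustration rather than the poster's real API:

```python
# Sketch of the reviewer's point: the generic stored-procedure call is a
# private implementation detail; callers only see intent-revealing methods.
# All names here are hypothetical, not part of the code under review.

class EmployeeRepository:
    def __init__(self, executor):
        # 'executor' is anything with an execute_sp(name, params) method,
        # e.g. a thin wrapper around the real database driver.
        self._executor = executor

    def _get_data_using_sp(self, sp_name, params=None):
        # Private: the plumbing the review says should not be exposed.
        return self._executor.execute_sp(sp_name, params or {})

    def get_selected_accounts(self, min_balance):
        # Public and intent-revealing: callers never see SP names or params.
        return self._get_data_using_sp("usp_GetSelectedAccounts",
                                       {"minBalance": min_balance})
```

Swapping the executor for a fake also makes the layer testable without a database, which is one of the main payoffs of the abstraction.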
637,499
Can I disable the logging features for any app running under Ubuntu? I want to make my system totally logless. Is that possible?
2015/06/17
[ "https://askubuntu.com/questions/637499", "https://askubuntu.com", "https://askubuntu.com/users/420915/" ]
If you want to disable all logs: Run this command to stop the log daemon.

```
sudo stop rsyslog
```

To disable starting the daemon at boot, create a file called `/etc/init/rsyslog.override` with a line containing the word "manual". You can do this with the following command:

```
sudo -H gedit /etc/init/rsyslog.override
```

Now add the word

```
manual
```

then save and exit. Or do it in a single command:

```
echo "manual" | sudo tee --append /etc/init/rsyslog.override
```
tmpfs can be used for log/tmp/spool directories. Then after reboot, they are totally cleared. <https://en.wikipedia.org/wiki/Tmpfs>

```
$ sudo vim /etc/fstab
tmpfs /tmp       tmpfs defaults,noatime,size=2024M,mode=1777 0 0
tmpfs /var/spool tmpfs defaults,noatime,size=512M,mode=1777 0 0
tmpfs /var/tmp   tmpfs defaults,noatime,size=512M,mode=1777 0 0
tmpfs /var/log   tmpfs defaults,noatime,size=512M,mode=0755 0 0
```
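fstab lines like the ones above have six whitespace-separated fields (device, mount point, filesystem type, options, dump, pass). As a sanity check before editing the real file, those fields can be parsed mechanically; a small illustrative Python helper (it never touches `/etc/fstab` itself):

```python
# Parse fstab-style lines into their six standard fields.
# Purely illustrative; operates on a string, not the real /etc/fstab.

def parse_fstab(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        device, mountpoint, fstype, options, dump, passno = line.split()
        entries.append({
            "device": device,
            "mountpoint": mountpoint,
            "type": fstype,
            "options": options.split(","),
            "dump": int(dump),
            "pass": int(passno),
        })
    return entries
```

A malformed line (wrong field count) raises immediately on the tuple unpack, which is exactly the kind of mistake you want to catch before rebooting with a bad fstab.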
31,488,959
So I have a file `foo.js` which contains the following:

```
$( document ).ready(function() {
    alert("hello world");
});
```

If I put it in the web/static/js folder it doesn't get executed, but if I put it in the web/static/vendor folder it does. So I wonder why it doesn't work from the js folder? And where should I put my js files? The vendor folder doesn't seem to be the right place ...
2015/07/18
[ "https://Stackoverflow.com/questions/31488959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2790174/" ]
Phoenix uses brunch.io by default. In its default configuration there are two JavaScript folders:

1. web/static/js
2. web/static/vendor

When you add .js files under web/static/vendor, they go into the build ***non-wrapped***: they are concatenated with the other js files (which also include the files under web/static/js) by brunch.io's processing steps, and the result is put in priv/static/js/app.js. When you add .js files under web/static/js, their content is ***wrapped*** as modules first, and then those files undergo the same concatenation and brunch.io processing as previously mentioned. To reference such a file you need to use require(): require the module first, then you can use it. I hope you understand the reasons here. My source was <https://github.com/brunch/brunch-guide/blob/master/content/en/chapter04-starting-from-scratch.md>. This configuration can be overridden in the file brunch-config.js or brunch-config.coffee in the folder Phoenix generated.
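The ***wrapped*** vs ***non-wrapped*** distinction is easiest to see with a toy model: a bundler turns each wrapped file into a factory function in a registry, and require() runs the factory once and caches its exports (vendor files, by contrast, are concatenated and run immediately). A language-neutral sketch in Python — this only illustrates the concept, it is not brunch's actual implementation:

```python
# Toy model of a bundler's module registry: wrapped modules are factory
# functions; require() executes them lazily and caches the exports.

_registry = {}   # module name -> factory function
_cache = {}      # module name -> exports dict, filled on first require

def define(name, factory):
    _registry[name] = factory

def require(name):
    if name not in _cache:
        exports = {}
        _registry[name](exports)  # run the wrapped module body once
        _cache[name] = exports
    return _cache[name]

# A "wrapped" file: nothing inside runs until some code require()s it.
# This mirrors why a file under web/static/js does nothing by itself.
def _greeter_module(exports):
    exports["greet"] = lambda who: "hello " + who

define("greeter", _greeter_module)
```

This is why code under web/static/js "does nothing" until required: the wrapper defers execution, whereas unwrapped vendor code executes at load time.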
It turns out that when you add new files to the js folder, you have to require them either in your HTML file or in app.js; that's one of brunch's features.
50,536,634
I have two forms, each form has multiple hidden fields and a submit button. My goal is to align those forms with two other normal buttons horizontally (inline) using bootstrap. Here's what I've tried so far:

```
<div class="container">
  <div class="row">
    <div class="col-xs-6">
      <a class="btn btn-primary" href="/employees/new">Add new Employee</a>
      <button type="button" class="btn btn-primary" data-toggle="modal" data-target="#myModal">Search</button>
      <form class="form-inline" action="../reports/pdf/employees" method="GET">
        <input type="hidden" id="column" th:if="${param.column !=null}" name="column" th:value="${param.column}">
        <input type="hidden" th:if="${param.sort == null}" id="sort" name="sort" value="ASC">
        <input type="hidden" th:if="${param.sort != null}" id="sort" name="sort" th:value="${param.sort}">
        <input type="hidden" th:if="${param.name!=null}" name="name" id="id" th:value="${param.name}">
        <input type="hidden" th:if="${param.surname!=null}" name="surname" id="surname" th:value="${param.surname}">
        <input type="hidden" th:if="${param.hiredate!=null}" name="hiredate" id="hiredate" th:value="${param.hiredate}">
        <input type="hidden" th:if="${param.birthdate!=null}" name="birthdate" id="birthdate" th:value="${param.birthdate}">
        <input type="hidden" th:if="${param.email!=null}" name="email" id="email" th:value="${param.email}">
        <input type="hidden" th:if="${param.address!=null}" name="address" id="address" th:value="${param.address}">
        <input type="hidden" th:if="${param.phone!=null}" name="phone" id="phone" th:value="${param.phone}">
        <button class="btn btn-primary" type="submit">Print pdf report</button>
      </form>
      <form action="../reports/csv/employees" method="GET">
        <input type="hidden" id="column" th:if="${param.column !=null}" name="column" th:value="${param.column}">
        <input type="hidden" th:if="${param.sort == null}" id="sort" name="sort" value="ASC">
        <input type="hidden" th:if="${param.sort != null}" id="sort" name="sort" th:value="${param.sort}">
        <input type="hidden" th:if="${param.name!=null}" name="name" id="id" th:value="${param.name}">
        <input type="hidden" th:if="${param.surname!=null}" name="surname" id="surname" th:value="${param.surname}">
        <input type="hidden" th:if="${param.hiredate!=null}" name="hiredate" id="hiredate" th:value="${param.hiredate}">
        <input type="hidden" th:if="${param.birthdate!=null}" name="birthdate" id="birthdate" th:value="${param.birthdate}">
        <input type="hidden" th:if="${param.email!=null}" name="email" id="email" th:value="${param.email}">
        <input type="hidden" th:if="${param.address!=null}" name="address" id="address" th:value="${param.address}">
        <input type="hidden" th:if="${param.phone!=null}" name="phone" id="phone" th:value="${param.phone}">
        <button class="btn btn-primary" type="submit">Print csv report</button>
      </form>
    </div>
```
2018/05/25
[ "https://Stackoverflow.com/questions/50536634", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7222288/" ]
I tried to disable a trigger that I found, as suggested. However, I was not allowed to, because I am not part of the ClearCase group and not the object owner. For this reason, a colleague at work gave me this tip: create a read-only child stream, which avoids showing the message (screen). And it worked.

**mkstream -in "stream parent" -readonly "stream name"**

After that, I could do:

**rebase -bas "baseline"**

**rebase -complete**
Since the message "Do you wish to name the Deliver/Rebase activity" does not seem to be a standard/native one, it possibly comes from a trigger. From there, see "[How to disable a trigger in a VOB or determine if an existing trigger is already disabled](http://www-01.ibm.com/support/docview.wss?uid=swg21368084)". Check the list of triggers with [`cleartool lstype`](https://www.ibm.com/support/knowledgecenter/en/SSSH27_9.0.1/com.ibm.rational.clearcase.cc_ref.doc/topics/ct_lstype.htm):

```
cd /path/to/my/view/myVob
cleartool lstype -invob \aVob -kind trtype
```
49,205,227
I'm working on a redesign of my photography site and am setting up a simple slider for some of my photos. I'm having issues getting the images to be centered in divs with padding all around. I'm trying to mimic how photos look on my [Instagram](https://www.instagram.com/joe_scotto/) where they're centered in a square but keep their aspect ratio. Here is a [link](https://codepen.io/Joe_Scotto/pen/pLvdRj) to my Codepen. Example of what I'm going for: [![enter image description here](https://i.stack.imgur.com/VEOYs.png)](https://i.stack.imgur.com/VEOYs.png)

SCSS:

```
.slick-slider {
  .image_container {
    background: red;
    position: relative;
    overflow: hidden;
    height: 256px;
    padding: 20px;

    img {
      position: absolute;
      top: 0;
      bottom: 0;
      margin: 0 auto;
      max-height: 300px;
      left: 0;
      right: 0;
      padding: 15px;
    }
  }
}
```

HTML:

```
<div class="slick-slider">
  <div class="image_container">
    <img src="http://cdn.jscotto.com/sites/joescottophotography.com/images/portraits/photo-portraits-1.jpg">
  </div>
  <div class="image_container">
    <img src="http://cdn.jscotto.com/sites/joescottophotography.com/images/portraits/photo-portraits-2.jpg">
  </div>
</div>
```
2018/03/10
[ "https://Stackoverflow.com/questions/49205227", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3418376/" ]
You can use the function I created below to compare the two data sets.

```
library(dplyr)

compare_them <- function(data1, data2) {
  sum1 <- apply(data1, 2, summary) %>% data.frame()
  sum2 <- apply(data2, 2, summary) %>% data.frame()
  names(sum1) <- paste0(names(sum1), "1")
  names(sum2) <- paste0(names(sum2), "2")
  final <- cbind(sum1, sum2)
  final1 <- t(final)
  final2 <- final1[order(row.names(final1)), ]
  final_1 <- t(final2) %>% data.frame()
  final_1
}

compare_them(mtcars, mtcars * 2) %>% View()
```

Variables from data1 get a "1" suffix and variables from data2 a "2" suffix. I used mtcars and mtcars\*2 as an example. The end result looks like this. [![enter image description here](https://i.stack.imgur.com/BIenL.png)](https://i.stack.imgur.com/BIenL.png)
An option is to use `summarise_all`, `dcast`, `unite` and `separate` to calculate the desired stats for each data.frame and arrange them the same way. Note: The sample data provided by `OP` has been slightly modified for `df_b` to have different stats than `df_a`. ``` library(tidyverse) library(reshape2) df_a %>% mutate(Grp = "A") %>% bind_rows(mutate(df_b, Grp = "B")) %>% select(-Pkey) %>% group_by(Grp) %>% { inner_join(inner_join(inner_join(summarise_all(.,funs(min,mean,median, max)), summarise_all(.,funs(Q1 = quantile), probs = 0.25), by = "Grp"), summarise_all(.,funs(Q2 = quantile), probs = 0.50), by = "Grp"), summarise_all(.,funs(Q3 = quantile), probs = 0.75), by = "Grp" ) } %>% as.data.frame() %>% gather(key, val, -Grp) %>% separate("key", c("sub", "param"), sep = "_") %>% unite("sub", c("sub", "Grp"), sep = "_") %>% dcast(param~sub, value.var = "val") %>% select_at(vars(param, sort(names(select(.,-param))))) # param Math.marks_A Math.marks_B Phy.marks_A Phy.marks_B #1 max 45.0 100.0 47 99.0 #2 mean 33.2 66.4 45 63.6 #3 median 34.0 80.0 45 60.0 #4 min 21.0 24.0 43 25.0 #5 Q1 32.0 40.0 44 44.0 #6 Q2 34.0 80.0 45 60.0 #7 Q3 34.0 88.0 46 90.0 ``` **data** ``` df_a <- structure(list(Pkey = c(1, 2, 3, 4, 5), Phy.marks = c(43, 44, 45, 46, 47), Math.marks = c(34, 34, 45, 32, 21)), .Names = c("Pkey", "Phy.marks", "Math.marks"), row.names = c(NA, -5L), class = "data.frame") df_b <- structure(list(Pkey = c(11, 12, 13, 14, 15), Phy.marks = c(90, 44, 60, 25, 99), Math.marks = c(24, 40, 80, 88, 100)), .Names = c("Pkey", "Phy.marks", "Math.marks"), row.names = c(NA, -5L), class = "data.frame") ```
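As an aside (an illustration, not part of either answer above), the side-by-side idea is language-agnostic. The sketch below reproduces the min/mean/median/max rows of the output above in plain Python, using the same `df_a`/`df_b` sample values, with each data set represented as a simple dict of column name to values:

```python
# Compare per-column summary statistics of two small data sets,
# each represented as a dict of column -> list of numbers.
from statistics import mean, median

def summarize(data):
    # one dict of stats (min / mean / median / max) per column
    return {col: {"min": min(v), "mean": mean(v),
                  "median": median(v), "max": max(v)}
            for col, v in data.items()}

df_a = {"Phy.marks": [43, 44, 45, 46, 47], "Math.marks": [34, 34, 45, 32, 21]}
df_b = {"Phy.marks": [90, 44, 60, 25, 99], "Math.marks": [24, 40, 80, 88, 100]}

a, b = summarize(df_a), summarize(df_b)
for col in sorted(a):
    for stat in ("min", "mean", "median", "max"):
        print(col, stat, a[col][stat], b[col][stat])
```

The printed rows match the `dcast` output above, e.g. `Math.marks mean 33.2 66.4`.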
1,023,293
I want to retrieve Bangla data that is written in an MS Word file using Unicode. How can I retrieve this data using PHP? I can retrieve English data from a DOC file using Antiword, but I cannot retrieve Bangla.
2009/06/21
[ "https://Stackoverflow.com/questions/1023293", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I have used PHP and COM (only on Windows servers) to read document files. Here is an example of extracting text from Word documents via PHP and COM: ``` $word = new COM("word.application") or die ("Could not initialise MS Word object."); $word->Documents->Open(realpath("Sample.doc")); # Extract content. $content = (string) $word->ActiveDocument->Content; echo $content; $word->ActiveDocument->Close(false); $word->Quit(); $word = null; unset($word); ``` I think you will have to use Windows servers to get this done correctly. Or can you convert the document into OpenOffice format and give it a go? More details on PHP COM are available here: <http://us3.php.net/manual/en/book.com.php>
You may solve this by using the **fopen()** function.
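Neither answer above handles the legacy binary `.doc` format portably. As an aside (a sketch, not part of either answer): if the file can first be saved as `.docx`, the Unicode text, Bangla included, can be recovered with nothing but a ZIP reader and some XML handling, because a `.docx` is a ZIP archive whose main text lives in `word/document.xml`. A minimal Python illustration, which builds a tiny in-memory `.docx` so the example is self-contained:

```python
# A .docx file is a ZIP archive; the document text lives in word/document.xml
# inside <w:t> runs. Build a minimal .docx in memory, then pull the text out.
import io
import re
import zipfile

DOC_XML = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
    '<w:body><w:p><w:r><w:t>\u09ac\u09be\u0982\u09b2\u09be text</w:t></w:r></w:p></w:body>'
    '</w:document>'
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", DOC_XML)

def docx_text(fileobj):
    # concatenate the contents of every <w:t> run
    with zipfile.ZipFile(fileobj) as z:
        xml = z.read("word/document.xml").decode("utf-8")
    return "".join(re.findall(r"<w:t[^>]*>(.*?)</w:t>", xml))

print(docx_text(buf))  # বাংলা text
```

A real extractor would use a proper XML parser and unescape entities; the regex here is only for illustration.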
3,697
I'm very interested in **psychoanalysis' ontologization of the "self"-concept**, meaning the idea that there is a self, a continuous entity with some inner dynamic, that we must fight (via defense mechanisms) to preserve, and that the disintegration of such a self would be horrible. This is in contrast to accepting the Buddhist belief that we now see become popular in forms of mindfulness in clinical settings as well. I'm having a hard time tracing back who has written about this ontologized self. Does anyone have any literature that describes or deals with this?
2012/09/19
[ "https://philosophy.stackexchange.com/questions/3697", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/2397/" ]
I think what you're looking for is generally summed up as "Philosophy of Mind". Since you're looking for positions that argue for a separate mind, you should check out the SEP on Dualism, which is a good start to get into it: <http://plato.stanford.edu/entries/dualism/> If you're looking for a particular author, you could start with Descartes and his Meditations; they are pretty famous and accessible online, e.g. here: <http://oregonstate.edu/instruct/phl302/texts/descartes/meditations/meditations.html> I also found "The Philosophy of Mind" by P. Smith and O.R. Jones to be a very good introduction to the topic.
An interesting conversation arose after a lecture on the history of the debate over behaviourism's reductions and whether or not they legitimised psychology as a science on par with the other physical sciences. The lecturer's point was that it didn't, and the resultant view is really just acceptance that many principles and constructs of psychology are only of a more heuristic value, for the reason that the degree of certainty for predicting individual cases is just *not* the same as in the physical sciences; constructs and principles contain this implicit acknowledgement of their limitations. To create a model with predictive power is different from an ontology of the self, and although Freud was not a scientist in this sense, I think the model he created is of a kind which has this same acknowledgement of the limits of its completeness. +1 for a good question :)
10,203,819
I've implemented pretty much the standard examples: ``` <script> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-mycode']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> <script> function recordOutboundLink(link, category, action) { try { var myTracker=_gat._getTrackerByName(); _gaq.push(['myTracker._trackEvent', category , action ]); setTimeout('document.location = "' + link.href + '"', 100) }catch(err){} } </script> ``` and the links have this onclick event: ``` <a id="latestDownload" href="https://example.com" onClick="recordOutboundLink(this, 'newDownloads', 'latest');return false;">Download latest version</a> ``` No events have been tracked for the past 3 days, which just sounds wrong to me. I've tested the page with the GA debug plugin for Chrome, which shows events are sent. Have I made some mistake here? The Google GA debug addon shows (literally, not obfuscated): ``` Account ID : UA-XXXXX-X &utmac=UA-XXXXX-X ``` Do I need to push the '\_setAccount' again?
2012/04/18
[ "https://Stackoverflow.com/questions/10203819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/775265/" ]
tl;dr... leave out the `_getTrackerByName()` call, just use ``` _gaq.push(['myTracker._trackEvent', category , action ]); ``` Longer explanation: Async tracking allows pushing commands to multiple trackers (see [Tracking Basics](http://code.google.com/apis/analytics/docs/tracking/asyncUsageGuide.html#MultipleCommands)) using a syntax like ``` _gaq.push(['_setAccount', 'UA-XXXXX-1']); _gaq.push(['_trackPageview']); _gaq.push(['b._setAccount', 'UA-XXXXX-2']); _gaq.push(['b._trackPageview']); ``` The `_gaq.push(['myTracker._trackEvent', category , action ]);` code assumes you've already initialized `myTracker` like the `b` tracker above. Since `myTracker` has never had an accountId set, it shows the `UA-XXXXX-X` accountId while debugging. The analytics code on [Specialized Tracking/Outbound Links](http://support.google.com/analytics/bin/answer.py?hl=en&answer=1136920) is wrong, or would only work if the setup code named `myTracker`.
`myTracker` is a variable, so you cannot really refer to it inside a string. Following should work: ``` _gaq.push(['_trackEvent', category , action ]); ```
10,203,819
I've implemented pretty much the standard examples: ``` <script> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-mycode']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> <script> function recordOutboundLink(link, category, action) { try { var myTracker=_gat._getTrackerByName(); _gaq.push(['myTracker._trackEvent', category , action ]); setTimeout('document.location = "' + link.href + '"', 100) }catch(err){} } </script> ``` and the links have this onclick event: ``` <a id="latestDownload" href="https://example.com" onClick="recordOutboundLink(this, 'newDownloads', 'latest');return false;">Download latest version</a> ``` No events have been tracked for the past 3 days, which just sounds wrong to me. I've tested the page with the GA debug plugin for Chrome, which shows events are sent. Have I made some mistake here? The Google GA debug addon shows (literally, not obfuscated): ``` Account ID : UA-XXXXX-X &utmac=UA-XXXXX-X ``` Do I need to push the '\_setAccount' again?
2012/04/18
[ "https://Stackoverflow.com/questions/10203819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/775265/" ]
`myTracker` is a variable, so you cannot really refer to it inside a string. Following should work: ``` _gaq.push(['_trackEvent', category , action ]); ```
The setTimeout thing seems a bit risky to me - it assumes that the Google Analytics call has been made within 100 ms. I prefer this: ``` function trackOutboundLink(url) { _gaq.push(['_trackEvent', 'outbound', 'click']); _gaq.push(function() { window.location = url; }); } ``` This queues up the redirect until after the Google Analytics async call has completed. To hook up: ``` <a href="#" onclick="trackOutboundLink('your-url');return false;">Link</a> ```
10,203,819
I've implemented pretty much the standard examples: ``` <script> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-mycode']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> <script> function recordOutboundLink(link, category, action) { try { var myTracker=_gat._getTrackerByName(); _gaq.push(['myTracker._trackEvent', category , action ]); setTimeout('document.location = "' + link.href + '"', 100) }catch(err){} } </script> ``` and the links have this onclick event: ``` <a id="latestDownload" href="https://example.com" onClick="recordOutboundLink(this, 'newDownloads', 'latest');return false;">Download latest version</a> ``` No events have been tracked for the past 3 days, which just sounds wrong to me. I've tested the page with the GA debug plugin for Chrome, which shows events are sent. Have I made some mistake here? The Google GA debug addon shows (literally, not obfuscated): ``` Account ID : UA-XXXXX-X &utmac=UA-XXXXX-X ``` Do I need to push the '\_setAccount' again?
2012/04/18
[ "https://Stackoverflow.com/questions/10203819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/775265/" ]
tl;dr... leave out the `_getTrackerByName()` call, just use ``` _gaq.push(['myTracker._trackEvent', category , action ]); ``` Longer explanation: Async tracking allows pushing commands to multiple trackers (see [Tracking Basics](http://code.google.com/apis/analytics/docs/tracking/asyncUsageGuide.html#MultipleCommands)) using a syntax like ``` _gaq.push(['_setAccount', 'UA-XXXXX-1']); _gaq.push(['_trackPageview']); _gaq.push(['b._setAccount', 'UA-XXXXX-2']); _gaq.push(['b._trackPageview']); ``` The `_gaq.push(['myTracker._trackEvent', category , action ]);` code assumes you've already initialized `myTracker` like the `b` tracker above. Since `myTracker` has never had an accountId set, it shows the `UA-XXXXX-X` accountId while debugging. The analytics code on [Specialized Tracking/Outbound Links](http://support.google.com/analytics/bin/answer.py?hl=en&answer=1136920) is wrong, or would only work if the setup code named `myTracker`.
The setTimeout thing seems a bit risky to me - it assumes that the Google Analytics call has been made within 100 ms. I prefer this: ``` function trackOutboundLink(url) { _gaq.push(['_trackEvent', 'outbound', 'click']); _gaq.push(function() { window.location = url; }); } ``` This queues up the redirect until after the Google Analytics async call has completed. To hook up: ``` <a href="#" onclick="trackOutboundLink('your-url');return false;">Link</a> ```
589,891
Establish that 7 is a primitive root of any prime of the form $p = 2^{4n} + 1$. [Hint: Because $p ≡ 3$ or $5 \ (mod \ 7)$, $(7/p) = (p/7) = -1$]. I get that $p ≡ 2, 3, 5 \ (mod \ 7)$, not only $3, 5$. How should I think?
2013/12/02
[ "https://math.stackexchange.com/questions/589891", "https://math.stackexchange.com", "https://math.stackexchange.com/users/112939/" ]
We're interested in the group $(\Bbb Z / p \Bbb Z)^\times$. This group is cyclic of size $2^{4n}$, so it has $\varphi(2^{4n}) = 2^{4n-1} = \frac{1}{2} 2^{4n}$ generators. That is, half of its elements are generators, and since quadratic residues cannot be generators, we see that the generators are precisely the quadratic nonresidues, which, by the hint $(7/p) = -1$, includes $7$.
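The counting argument can be checked numerically for the smallest case, $p = 17$ ($n = 1$). This small sketch (an illustration, not part of the original answer) confirms that the generators of $(\Bbb Z/17\Bbb Z)^\times$ are exactly the quadratic nonresidues and that $7$ is one of them:

```python
# For p = 17: generators of (Z/pZ)* are exactly the quadratic nonresidues,
# and 7 is among them.
p = 17

def order(a, p):
    # multiplicative order of a modulo p (p prime, 1 <= a < p)
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

generators = {a for a in range(1, p) if order(a, p) == p - 1}
residues = {(a * a) % p for a in range(1, p)}

print(len(generators))   # 8, i.e. phi(16)
print(7 in generators)   # True
print(generators == set(range(1, p)) - residues)  # True
```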
Observe that $$2^{4n}+1=16^n+1\equiv 2^n+1\equiv\begin{cases}3\\5\end{cases}\pmod 7,$$ since $16\equiv 2\pmod 7$ and $2^n\bmod 7$ cycles through $2,4,1$. The leftover residue $2$ would require $2^n\equiv 1\pmod 7$, i.e. $3\mid n$; but then, by the pattern of $$2^{3}+1=(2+1)(2^2-2+1),$$ the number $2^{4n}+1=\left(2^{4n/3}\right)^3+1$ is divisible by $2^{4n/3}+1$ and so can't be prime, and this is why $\;2\pmod 7\;$ isn't included in the hints...
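The congruence pattern, and the compositeness in the residue-$2$ case, can also be tabulated for small $n$ (a quick numerical aside, not part of the original answer):

```python
# 2^(4n) + 1 modulo 7 for n = 1..6, with a naive primality check.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(1, 7):
    m = 2 ** (4 * n) + 1
    print(n, m % 7, is_prime(m))
# the residues cycle 3, 5, 2; every residue-2 case (n = 3, 6) is composite
```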
18,241,627
I have a model that I only want to ever return JSON, regardless of any content negotiation (conneg) or file-like extensions on the URI (e.g. `/app/model.json`). Google-fu is coming up short and this can't be that difficult.
2013/08/14
[ "https://Stackoverflow.com/questions/18241627", "https://Stackoverflow.com", "https://Stackoverflow.com/users/714478/" ]
In your controllers you simply have to create a respond_to block that only responds to JSON: ``` respond_to do |format| format.json { render :json => @model } end ```
This is actually a decision made by the controller, not because there is/is not a model or view present. In your [controller](http://guides.rubyonrails.org/action_controller_overview.html#rendering-xml-and-json-data) you can: ``` render json: @your_model ``` However, you will quickly find that the default implementation of `to_json` (which is what is used internally, above) can make it annoyingly hard to get exactly the output you want. When you reach that point you can use [RABL](https://github.com/nesquena/rabl) to create views that massage the JSON from your model(s) exactly as you want.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting that is part of Scrum (and some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
I have not found them useful as practiced at my workplace, where the "Daily 15-Minute Standup" stretched to 30, then 45, and now often 60 minutes; where everyone sits down while waiting for the project manager to fiddle with the projector, or the network share, or whatever else the day's random demon is; and where he insists on everyone taking the time to provide status updates before the meeting, but then queries everyone again (just in case we've accomplished something else in the last few moments). The only part of the original concept that remains is "Daily". Don't do this.
*Yes or no* and *are they valuable* are two different questions. The answers may also be different. In the case of the latter question, the answer may depend on the perspective. The 1st question, yes or no? It's a **yes**. From the Scrum or XP point of view, the standup is an essential activity. If you don't have daily scrums, then it's not really Scrum; it's called "scrum, but we don't do daily standups", or *scrumbut* for short. If you want to include a Kanban perspective, most Kanban teams do standups, even though their method doesn't prescribe them. The 2nd question, (how) are they valuable, is more complicated. If you practice Scrum or XP, you've got to believe that the standups are essential to promote collaboration and teamwork and make your team more effective. So the answer is **definitely valuable**. The lean proponents' perspective is **very different**. An extreme lean view is that your customer doesn't care if you do standups, so they are just waste. What do you do with waste? You cut it to the minimum, ideally to zero. A more moderate lean view is that while not exactly waste, daily standups are a *coordination cost* and **not a value-added activity**. You can play devil's advocate with your Scrum colleagues and ask them: if you think your 15-minute standups are a value-added activity, why don't you do 30 minutes of them every day, or 45 minutes, and easily magnify the value added? Kanban, which has lean roots but aims to deliver on the Agile Manifesto principles, resolves this paradox by doing standups, but using a very different meeting structure from the traditional agile standup format. The result is a much shorter meeting, which aligns with the Lean point of view. [This book has an example where a 50-person Kanban team does daily standups in 10 minutes](http://rads.stackoverflow.com/amzn/click/0984521402). To summarize: whether or not to do daily standups, the answer is a definite **yes**. But are they valuable, and how valuable are they? **It depends**.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting that is part of Scrum (and some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
Long before SCRUM and Agile were ever thought of, I was a team lead on a manpower study that took 2 years to do. It would have taken far longer without daily meetings. In the first place, human beings being human beings, they will slack off if they know no one is paying attention. If they have to show progress every day, they slack off less. If Joe seems to be making more progress than they are, they slack off less. Further, it lets the manager (or whoever) know when problems occur before they become a crisis. So if Steve is going to be a week late and Harry is ahead, then we can move some tasks around. This keeps the project from getting behind because one person got stuck. Further, usually someone else will be able to help the person get unstuck. The daily meeting was the key to my success on this team and the later job where I managed 30 studies going on all over the world and not one was late or over budget and all easily passed quality control. Now I worked one place where we gave a big project to a new employee. (I was not his boss.) The progress reports he gave his managers were "everything is great, it will all be done on time" but no specifics and no one required him to say exactly what progress he had made since the day before. He, as I'm sure the experienced among you have guessed, quit with no notice a week before the deadline and none of his tasks were complete or even in a state where the "work" he had done was useable. This is why the daily meetings are needed - to keep these people making real progress and to find out when they aren't before the whole project goes down the tubes. I ended up doing his tasks and mine and working overtime all summer so we could keep the multimillion dollar customer. Yeah we all like to believe that our devs are all internally motivated and will always produce the goods for us, but the truth is you have to protect the team and the organization from people like this. 
You never know who they will be; sometimes it's not the new employee, but the one who is mad at the organization (justifiably or not) or the one who just lost his wife (at least you often know who these people are, but not everyone shares their personal problems).
On the plus side, these meetings can have the effect of boosting productivity in terms of achieving urgent and immediate business objectives. On the negative side, these meetings (daily: what did you do? what are you going to do? what's in your way?) discourage what we might call "google-time", or working/learning on a side project that has no immediate impact on the business, but could have significant impact down the line. One product that I did the prototype for would never have passed the daily meeting test, and thankfully, my old manager gave boatloads of freedom to work on my own, and now the real product is live. But now that the old manager has left, and a new one has entered who embraces the concept of a daily scrum, I don't see how I could have ever developed the prototype in the daily scrum environment.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting that is part of Scrum (and some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
I think they are very valuable if they are performed correctly. The format that has worked well for me is this: **Each person gives a short answer to the following questions.** a) What are you working on? b) What will you get done by the next meeting (tomorrow)? c) Did you accomplish what you said you would get done at the last meeting? d) What obstacles are slowing/stopping your progress? Any extended discussion of the above should be curbed during the meeting to keep it short. Anyone can stay after (or meet later in the day) to discuss with the relevant person(s) anything that needs extended coverage. **This achieves the following goals:** a) The team lead/product owner is in the loop about possible delays quickly. b) The team lead can eliminate obstacles quickly. c) The team lead can identify people spinning their wheels quickly. d) It encourages collaboration between team members who might be too introverted to ask for help when they need it. e) It encourages momentum by keeping the commitments short (minimizes work expanding to time available for project).
**How valuable (or not) do you think daily stand-up meetings are?** Of course all the Scrum Framework meetings are important, but I think the daily standup is the 'most' important meeting in the Scrum Framework. It is like the heart in a body: if the heart does not pump blood regularly, the body is bound to die. The body in this case is the organisation or the project following Scrum, and the heart is the scrum meeting. **Do you think this practice has value?** Yes. Daily Scrums improve communications, eliminate other meetings, identify and remove impediments to development, highlight and promote quick decision-making, and improve everyone's level of project knowledge. The Daily Scrum is not a status meeting. The Daily Scrum is an inspection of the progress toward that Sprint Goal (the three questions). Follow-on meetings usually occur to make adaptations to the upcoming work in the Sprint. The intent is to optimize the probability that the Team will meet its Goal. This is a key inspect-and-adapt meeting in the Scrum empirical process. **Has anyone worked at a place that's done it, and what did you think?** Yes. In my last project we followed the Scrum Framework and Agile practices. We were very serious about the Scrum Framework in general and did not do it half-heartedly. Initially I was in a team of 5, then I moved to a larger team of around 9, and then back to a team of 6, spread over a 4-year time frame. The daily scrum meeting made sure everyone was in sync, impediments were transparent, and we as a Team could see how the Team was progressing with the burn-down in front of us; we knew exactly who was working on what and where we could contribute ourselves. It is definitely easier to do when you have teams of 6 or fewer. The purpose of a scrum meeting is a self-inspection check: if anything is found not going in the direction of the goal, or if something is blocked, the self-organised team adapts, so I think it is very crucial that you have it.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting that is part of Scrum (and some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
I have not found them useful as practiced at my workplace, where the "Daily 15-Minute Standup" stretched to 30, then 45, and now often 60 minutes; where everyone sits down while waiting for the project manager to fiddle with the projector, or the network share, or whatever else the day's random demon is; and where he insists on everyone taking the time to provide status updates before the meeting, but then queries everyone again (just in case we've accomplished something else in the last few moments). The only part of the original concept that remains is "Daily". Don't do this.
On the plus side, these meetings can have the effect of boosting productivity in terms of achieving urgent and immediate business objectives. On the negative side, these meetings (daily: what did you do? what are you going to do? what's in your way?) discourage what we might call "google-time", or working/learning on a side project that has no immediate impact on the business, but could have significant impact down the line. One product that I did the prototype for would never have passed the daily meeting test, and thankfully, my old manager gave boatloads of freedom to work on my own, and now the real product is live. But now that the old manager has left, and a new one has entered who embraces the concept of a daily scrum, I don't see how I could have ever developed the prototype in the daily scrum environment.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting that is part of Scrum (and some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
I can stand for hours on end. It doesn't make me any more to the point, or have any real significance/impact to a short daily catch-up meeting. *But hey, if standing up lets you re-brand something as agile, it must be good!* As for whether regular catch-up meetings in general are a good idea... well, they help if your other processes are ineffective. If you want to know what someone did yesterday, and what their next task(s) are, look on the Issue Tracker, where it's already recorded. If there's not a clear filter that tells you this for the whole team, set one up (or find better software). If you want to know if anyone has any blockers, check your messages (whether email/im/forum/whatever). If anyone has any, they should be notifying both the relevant party and the project lead when they occur, not wasting a day waiting before anyone else finds out, never mind having a chance to act. There's certainly benefit from having regular meetings to discuss the direction of a project - in a general sense, rather than specifics, to make sure everyone understands the overall goals and so on - held weekly or fortnightly (depending on the pace of things). But spending a quarter of an hour every day just so you can *feel* agile and great and stuff? Waste of time.
I have not found them useful as practiced at my workplace, where the "Daily 15-Minute Standup" stretched to 30, then 45, and now often 60 minutes; where everyone sits down while waiting for the project manager to fiddle with the projector, or the network share, or whatever else the day's random demon is; where he insists on everyone taking the time to provide status updates before the meeting, but then queries everyone again (just in case we've accomplished something else in the last few moments); The only part of the original concept that remains is "Daily". Don't do this.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting that is part of Scrum (and some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
I think they are very valuable if they are performed correctly. The format that has worked well for me is this: **Each person gives a short answer to the following questions.** a) What are you working on? b) What will you get done by the next meeting (tomorrow)? c) Did you accomplish what you said you would get done at the last meeting? d) What obstacles are slowing/stopping your progress? Any extended discussion of the above should be curbed during the meeting to keep it short. Anyone can stay after (or meet later in the day) to discuss with the relevant person(s) anything that needs extended coverage. **This achieves the following goals:** a) The team lead/product owner is in the loop about possible delays quickly. b) The team lead can eliminate obstacles quickly. c) The team lead can identify people spinning their wheels quickly. d) It encourages collaboration between team members who might be too introverted to ask for help when they need it. e) It encourages momentum by keeping the commitments short (minimizes work expanding to time available for project).
Unless there's a good reason to have a fully networked communication between the whole team daily, it sounds like a big waste of time. Rather have weeklies.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting practiced by Scrum adherents (and followers of some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
I have not found them useful as practiced at my workplace, where the "Daily 15-Minute Standup" stretched to 30, then 45, and now often 60 minutes; where everyone sits down while waiting for the project manager to fiddle with the projector, or the network share, or whatever else the day's random demon is; and where he insists on everyone taking the time to provide status updates before the meeting, but then queries everyone again (just in case we've accomplished something else in the last few moments). The only part of the original concept that remains is "Daily". Don't do this.
I think daily stand-ups are a great idea regardless of what methodology or area you're working in - provided you can keep them short (15 min max). If you can't regularly keep to that, then either too many people are attending a single stand-up or the meeting isn't focussed enough. Where we are, the marketing team started copying us and now also have stand-ups - so it's not just for developers either!
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting practiced by Scrum adherents (and followers of some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
I find these meetings very valuable. They offer the following benefits in return for spending just 15 minutes! * **Keeps everyone on-topic**. It's easy to dig into your own problems and forget about what others are doing, or to repeat work being done by someone else. Daily meetings prevent this from happening. * **Doesn't allow people to slack**. At these meetings you make promises. Then you just *have to* try to keep them. * **Makes people interact**. Programmers usually don't like talking to people. However, in such meetings they get encouragement (and reprimands) from their colleagues, which has a positive effect on morale. * **Gathers everyone in one place**. You get the bonus of knowing that everyone will be there. This can be used to arrange other meetings, make announcements, and to continue this daily meeting in smaller groups to discuss specific problems. Usually, organizing such meetings takes a lot of attention and requires skills that programmers don't like to use.
We've been doing "stand up" meetings since before we knew it was agile scrum, for about 7 years now. In my opinion they are a great way to get a general feeling about how the project is progressing and whether someone is in need of help. Some of my team members don't like to ask for help but will accept it when offered; this offer often happens in the daily standup. Meetings must be short; ours were usually under 10 minutes with 7 team members. It helps to plan them just before the ten o'clock break. Also, we do not use technology in the meetings, just the scrum board with post-it tasks and some charts.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting practiced by Scrum adherents (and followers of some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
We had daily standups at my first job. Well, with all the co-ops/interns/temps, it was actually on the long side - usually around 30 minutes. But the idea of a short, timeboxed, daily meeting helped a lot just to know what other people were stuck on - and if it was something I was working on, I could reprioritize my tasks to finish what they needed to continue sooner. It also gave everyone a chance to know what everyone else was working on, so if someone had an emergency, everyone was at least aware of what was going on - reducing truck-factor risk is always a good thing. Honestly, every day might be a little extreme in some cases. But the idea of short, regular meetings to keep everyone on the same page is a valuable addition to any process.
How valuable (or not) do you think daily stand-up meetings are? Of course all the Scrum Framework meetings are important, but I think the daily standup meeting is the *most* important meeting in the Scrum Framework. It is like the heart in a body. If the heart does not pump blood regularly, then the body is bound to die; the body in this case is the organisation or the project following Scrum, and the heart is the scrum meeting. Do you think this practice has value? Yes. Daily Scrums improve communications, eliminate other meetings, identify and remove impediments to development, highlight and promote quick decision-making, and improve everyone's level of project knowledge. The Daily Scrum is not a status meeting. The Daily Scrum is an inspection of the progress toward the Sprint Goal (the three questions). Follow-on meetings usually occur to make adaptations to the upcoming work in the Sprint. The intent is to optimize the probability that the Team will meet its Goal. This is a key inspect-and-adapt meeting in the Scrum empirical process. Has anyone worked at a place that's done it, and what did you think? Yes. In my last project we followed the Scrum Framework and Agile practices. We were very serious about the Scrum Framework in general and we did not do it half-heartedly. Initially I was in a team of 5, then I moved to a larger team of around 9, and then again back to 6, spread across a 4-year time frame. The daily scrum meeting made sure everyone was in sync, impediments were transparent, and we as a Team could see how we were progressing with the burn-down in front of us; we knew exactly who was working on what, and where we could contribute ourselves. It is definitely easier to do when you have teams of 6 or fewer. The purpose of a scrum meeting is a self-inspection check, and if anything is found to be going in the wrong direction or is blocked, the self-organised team adapts, so I think it is very crucial that you have it.
2,948
How valuable (or not) do you think [daily stand-up meetings](http://en.wikipedia.org/wiki/Stand-up_meeting) are? If you're not familiar with it, this refers to a daily meeting practiced by Scrum adherents (and followers of some other agile methodologies). The idea is that you hold a daily meeting, timeboxed to 15 minutes, in which everyone must stand (to encourage people to be to-the-point). In the meeting, you go around the room and each say: - What you did yesterday - What you plan to do today - Any blockers or impediments to your progress. Do you think this practice has value? Has anyone worked at a place that's done it, and what did you think?
2010/09/12
[ "https://softwareengineering.stackexchange.com/questions/2948", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6/" ]
It can be useful, but often is not in practice. If you have a team that does not have easy access to other team members as they work, or your organization makes it difficult to find the managers/PMs/whoever, then at least you know you have one shot per day to get a question answered. In practice it sometimes encourages people not to discuss issues immediately, and it can become a considerable time suck. For example: even if I am only on one active development project (usually I am on two), there is usually at least one more finishing out QA and one spinning up at the same time. That is three 15-minute standups a day, and mine are nearly never back-to-back. You lose some time making sure you are at a stopping point before each one, and similarly getting back on track after each one. Even if you assume those losses are only 10 minutes each, that translates to more than a whole workday lost to standups each week. Add in the commit meetings and demos, and that easily eats a whole additional day. IMHO if your team is having a communication issue then daily meetings can help, but doing them otherwise is simply too much of a resource sink.
*Yes or no* and *are they valuable* are two different questions. The answers may also be different. In the case of the latter question, the answer may depend on the perspective. The 1st question, yes or no? It's a **yes**. From the Scrum or XP point of view, the standup is an essential activity. If you don't have daily scrums, then it's not really Scrum; it's called "scrum, but we don't do daily standups" or *scrumbut* for short. If you want to include a Kanban perspective, most Kanban teams do standups, even though their method doesn't prescribe them. The 2nd question, (how) are they valuable, is more complicated. If you practice Scrum or XP, you've got to believe that the standups are essential to promote collaboration and teamwork and make your team more effective. So the answer is **definitely valuable**. The lean proponents' perspective is **very different**. An extreme lean view is that your customer doesn't care if you do standups, so they are just waste. What do you do with waste? You cut it to the minimum, ideally to zero. A more moderate lean view is that while not exactly waste, daily standups are a *coordination cost* and **not a value-added activity**. You can play devil's advocate with your Scrum colleagues and ask them: if you think your 15-minute standups are a value-added activity, why don't you do 30 minutes of them every day, or 45 minutes, and easily magnify the value added? Kanban, which has lean roots but aims to deliver on the Agile Manifesto principles, resolves this paradox by doing standups, but using a very different meeting structure than the traditional agile standup format. The result is a much shorter meeting, which aligns with the lean point of view. [This book has an example where a 50-person Kanban team does daily standups in 10 minutes](http://rads.stackoverflow.com/amzn/click/0984521402). To summarize: whether or not to do daily standups, the answer is a definite **yes**. But are they valuable, and how valuable are they? **It depends**.
2,856,833
I understand I can call window.print() when the page loads to load the print dialog, but can I force the window to be minimised from the point that it's opened and the print dialog is generated? Ideally it would be minimised until the user maximises it. Thanks
2010/05/18
[ "https://Stackoverflow.com/questions/2856833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/181771/" ]
Short answer: no. Abbreviated answer: you'll have every antispam tool against you if you try to open windows and play with them in JavaScript. But... you could use the help of @media print {} in CSS. Add a div with the content you want to print to the page (and load it via Ajax, or using an iframe). By using "@media print" rules in the CSS you can tell the browser to hide the page that the user is seeing in the browser (by wrapping it in a big div and adding display:none to the print rule) and show the hidden div. The nice thing about this solution is that you can play with both divs via JavaScript; for example, you could add buttons to show/hide the print div, etc.
You could try to open a new window outside the screen bounds so it's not visible (x=-10000, y=-2000). Then, when it's loaded, print it and close it.
31,420,177
My problem is this. When Im parsing my JSON data it is getting returned to me as an NSSArray with one element. Inside this one element it looks like this: ``` 2015-07-14 20:45:38.467 ICBuses[51840:14872349] Bus Location: { agencyname = Cambus; agencytag = uiowa; color = 00ff66; directions = ( { direction = Loop; directiontag = loop; stops = ( { directiontitle = Loop; stoplat = "41.659541"; stoplng = "-91.53775"; stopnumber = 1050; stoptitle = "Main Library"; }, { directiontitle = Loop; stoplat = "41.6414999"; stoplng = "-91.55676"; stopnumber = 3200; stoptitle = "Studio Arts Building"; }, { directiontitle = Loop; stoplat = "41.66504"; stoplng = "-91.54094"; stopnumber = 2100; stoptitle = "Art Building"; }, { directiontitle = Loop; stoplat = "41.6662587"; stoplng = "-91.5405201"; stopnumber = 2106; stoptitle = "Theatre Building"; }, { directiontitle = Loop; stoplat = "41.6679058"; stoplng = "-91.5401033"; stopnumber = 2120; stoptitle = "Hancher/North Riverside"; }, { directiontitle = Loop; stoplat = "41.6660209"; stoplng = "-91.5406198"; stopnumber = 2105; stoptitle = "Riverside & River St"; }, { directiontitle = Loop; stoplat = "41.66499"; stoplng = "-91.54106"; stopnumber = 2101; stoptitle = "Art Building West"; }, { directiontitle = Loop; stoplat = "41.6612"; stoplng = "-91.5403422"; stopnumber = 145; stoptitle = EPB; } ); } ); ``` I want each block to be in there once element...How do I do this? Below is my coded that parses the data and stores it to an array. 
Code that stores it to an array: ``` self.routeInfo = [[API sharedAPI] routeInfoAgency:self.routeAgency Route:self.route]; self.busLocation = [[API sharedAPI] busLocation:self.routeAgency Route:self.route]; NSLog(@"Bus Location: %@", self.routeInfo); ``` Here's the code that gets it from the server and parses it: ``` - (NSArray *) routeInfoAgency:(NSString *)agency Route:(NSString *)route { NSError *error; NSString *requestString = [NSString stringWithFormat:@"http://api.ebongo.org/route?format=json&agency=%@&route=%@&api_key=", agency, route]; NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"%@%@",requestString, kApiKey]]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url]; NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:&error]; if (!data) { NSLog(@"Download Error: %@", error.localizedDescription); UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:[NSString stringWithFormat:@"Error %@", error.localizedDescription] delegate:self cancelButtonTitle:@"Ok" otherButtonTitles:nil]; [alert show]; return nil; } NSDictionary *JSON = [NSJSONSerialization JSONObjectWithData:data options:NSJSONReadingMutableContainers error:&error]; NSArray *JSONArray = [JSON valueForKeyPath:@"route.directions"]; return JSONArray; } ``` I want what is after stops = to be in an array where each element is a block of directiontitle, stoplat, stoplng, stopnumber and stoptitle.
Thank you. When I try to use floatValue I get this error: ``` 2015-07-15 10:31:05.821 ICBuses[56344:16107040] -[__NSArrayI floatValue]: unrecognized selector sent to instance 0x7f91fb5bdf90 2015-07-15 10:31:05.868 ICBuses[56344:16107040] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSArrayI floatValue]: unrecognized selector sent to instance 0x7f91fb5bdf90' *** First throw call stack: ( 0 CoreFoundation 0x000000010bc53c65 __exceptionPreprocess + 165 1 libobjc.A.dylib 0x000000010b8ecbb7 objc_exception_throw + 45 2 CoreFoundation 0x000000010bc5b0ad -[NSObject(NSObject) doesNotRecognizeSelector:] + 205 3 CoreFoundation 0x000000010bbb113c ___forwarding___ + 988 4 CoreFoundation 0x000000010bbb0cd8 _CF_forwarding_prep_0 + 120 5 ICBuses 0x000000010958f26e -[MapViewController viewDidLoad] + 3950 6 UIKit 0x000000010c17d1d0 -[UIViewController loadViewIfRequired] + 738 7 UIKit 0x000000010c17d3ce -[UIViewController view] + 27 8 UIKit 0x000000010c1a2257 -[UINavigationController _startCustomTransition:] + 633 9 UIKit 0x000000010c1ae37f -[UINavigationController _startDeferredTransitionIfNeeded:] + 386 10 UIKit 0x000000010c1aeece -[UINavigationController __viewWillLayoutSubviews] + 43 11 UIKit 0x000000010c2f96d5 -[UILayoutContainerView layoutSubviews] + 202 12 UIKit 0x000000010c0cc9eb -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 536 13 QuartzCore 0x000000010b18eed2 -[CALayer layoutSublayers] + 146 14 QuartzCore 0x000000010b1836e6 _ZN2CA5Layer16layout_if_neededEPNS_11TransactionE + 380 15 QuartzCore 0x000000010b183556 _ZN2CA5Layer28layout_and_display_if_neededEPNS_11TransactionE + 24 16 QuartzCore 0x000000010b0ef86e _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 242 17 QuartzCore 0x000000010b0f0a22 _ZN2CA11Transaction6commitEv + 462 18 QuartzCore 0x000000010b0f10d3 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 89 19 CoreFoundation 0x000000010bb86ca7 
__CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23 20 CoreFoundation 0x000000010bb86c00 __CFRunLoopDoObservers + 368 21 CoreFoundation 0x000000010bb7ca33 __CFRunLoopRun + 1123 22 CoreFoundation 0x000000010bb7c366 CFRunLoopRunSpecific + 470 23 GraphicsServices 0x000000010e727a3e GSEventRunModal + 161 24 UIKit 0x000000010c04c8c0 UIApplicationMain + 1282 25 ICBuses 0x000000010958e2cf main + 111 26 libdyld.dylib 0x000000010d481145 start + 1 27 ??? 0x0000000000000001 0x0 + 1 ) libc++abi.dylib: terminating with uncaught exception of type NSException (lldb) ``` I think it is because the for loop is saving it as an array with one element. Any suggestions?
2015/07/15
[ "https://Stackoverflow.com/questions/31420177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4555740/" ]
I see two problems with the approach: **parameter estimates:** If there are different numbers of repeated observations, movies with multiple categories will get more total weight than movies with only a single category. This could be corrected by using weights in the linear model: use WLS with weights equal to the inverse of the number of repetitions (or the square root of it?). Weights are not available for other models like Poisson or Logit or GLM-Binomial. This will not make a large difference for the parameter estimates if the "pattern", i.e. the underlying parameters, is not systematically different across movies with different numbers of categories. **Inference, standard error of parameter estimates:** All basic models like OLS, Poisson and so on assume that each row is an independent observation. The total number of rows will be larger than the number of actual observations, and the estimated standard errors of the parameters will be underestimated. (We could use cluster-robust standard errors, but I never checked how well they work with duplicate observations, i.e. when the response is identical across several observations.) **Alternative** As an alternative to repeating observations, I would encode the categories into non-exclusive dummy variables. For example, if we have three levels of the categorical variable, movie categories in this case, then we add a `1` in each corresponding column if the observation is "in" that category. Patsy doesn't have premade support for this, so the design matrix for the movie category would need to be built by hand or as the sum of the individual dummy matrices (without dropping a reference category). **alternative model** This is not directly related to the issue of multiple categories in the explanatory variables. The response variable, movie ratings, is bounded between 0 and 100.
A linear model will work well as a local approximation, but will not take into account that observed ratings are in a limited range, and will not enforce that range for prediction. Poisson regression could be used to take the non-negativity into account, but wouldn't use the upper bound. Two alternatives that will be more appropriate are GLM with Binomial family and a total count for each observation set to 100 (the maximum possible rating), or a binary model, e.g. Logit or Probit, after rescaling the ratings to be between 0 and 1. The latter corresponds to estimating a model for proportions, which can be estimated with the statsmodels binary response models. To have inference that is correct even if the data is not binary, we can use robust standard errors, for example `result = sm.Logit(y_proportion, x).fit(cov_type='HC0')`
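The inverse-repetition weighting suggested above can be sketched in pandas. This is a toy example of my own, not the asker's data; the column names (`title`, `rating`, `genres`) are hypothetical:

```python
import pandas as pd

# Hypothetical data: each movie has one rating and one or more categories.
movies = pd.DataFrame({
    "title": ["A", "B"],
    "rating": [80, 55],
    "genres": [["Mystery", "Comedy"], ["Drama"]],
})

# One row per (movie, genre) pair -- the repeated-observation layout
# discussed above.
rows = movies.explode("genres").reset_index(drop=True)

# Inverse-repetition weights: a movie listed under k genres gets weight 1/k
# in each of its k rows, so every movie contributes equal total weight.
rows["weight"] = 1.0 / rows.groupby("title")["title"].transform("size")

print(rows[["title", "genres", "weight"]])
```

The resulting column could then be passed to a weighted fit such as `sm.WLS(y, X, weights=rows["weight"])` in statsmodels (not run here).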
Patsy doesn't have any built-in way to separate out a "multi-category" like your `Genre` variable, and as far as I know there's no direct way to represent it in Pandas either. I'd break `Genre` into a bunch of boolean columns, one per category: Mystery = True/False, Comedy = True/False, etc. That fits better with both pandas' and patsy's way of representing things.
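A minimal sketch of the boolean-column encoding suggested above, assuming the genres live in one delimited string per movie (the data and the `|` separator are made up for illustration):

```python
import pandas as pd

# Hypothetical movies, each tagged with one or more genres in a single string.
df = pd.DataFrame({
    "title": ["A", "B", "C"],
    "genre": ["Mystery|Comedy", "Drama", "Comedy|Drama"],
})

# Non-exclusive 0/1 dummy columns: unlike patsy's treatment coding,
# a movie can be "in" several categories at once and no reference
# category is dropped.
dummies = df["genre"].str.get_dummies(sep="|")
design = pd.concat([df[["title"]], dummies], axis=1)

print(design)
```

The `dummies` block can then be concatenated with the other regressors to form the design matrix by hand, since patsy won't build it for you.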