Carnot's statement and thermal reservoirs
Question: In Feynman's treatment of thermodynamics, Feynman formulated Carnot's principle as follows: "Carnot assumed that heat cannot be taken in at a certain temperature and converted into work with no other change in the system or the surroundings," and also, "Carnot assumed that it is impossible to extract the energy of heat at a single temperature." Aren't these statements at odds with the concept of heat baths, wherein an amount of heat $Q$ can be extracted without changing the reservoir's temperature? And so is it the heat baths in the Carnot engine that make the engine impractical?

Answer: First I'd like to point out that Feynman is not your best resource on the subject of thermodynamics. Other posts have shown that some of his statements, while not necessarily incorrect, can be misleading. It's not his forte.

"Carnot assumed that heat cannot be taken in at a certain temperature and converted into work with no other change in the system or the surroundings."

I don't think Carnot put it that way. Heat can be taken in at a certain temperature and converted to work: it's called a reversible isothermal expansion process. The changes that occur are an increase in the entropy of the system due to the heat addition, and an equal decrease in the entropy of the surroundings due to the same amount of heat being extracted from the surroundings at the same temperature. I think what Feynman meant to say is that you can't convert heat into work while exchanging heat with a single-temperature reservoir when operating in a cycle. If that is the case, then the statement would be consistent with the next statement, as I have clarified it.

"Carnot assumed that it is impossible to extract the energy of heat at a single temperature."

Missing here is the reference to a complete cycle. The Kelvin–Planck statement of the second law says: "No heat engine can operate in a cycle while transferring heat with a single heat reservoir." A key phrase is "operate in a cycle". As I said, it is possible to extract heat at a single temperature and do work in a process (e.g., a reversible isothermal expansion process), but it is not possible to convert heat entirely into work when operating in a cycle.

"Aren't these statements at odds with the concept of heat baths wherein an amount of heat can be extracted without changing the reservoir's temperature?"

First, with the corrections/clarifications I discussed above, the statements are consistent with each other. What's more, Carnot's actual theorem and the Kelvin–Planck statement both refer to heat engines operating between two fixed temperatures. The implication is that the temperatures of the source and sink are constant during the heat transfer processes.

"And so is it the heat baths in the Carnot engine that make the engine impractical?"

It is not the heat baths that make the Carnot engine impractical. In practice, thermodynamic cycles can operate between fixed temperatures: all that's required is that the heat capacities be large enough relative to the amount of heat transferred so that the temperatures remain relatively constant. What makes the Carnot engine, or for that matter any reversible engine, impractical is that the processes need to be carried out reversibly, i.e., quasi-statically. That requires the temperature and pressure differentials between the operating fluid and the surroundings to be infinitesimally small, which in turn means the processes occur infinitely slowly. So while the Carnot engine may be the most efficient in producing work, as a practical matter the rate at which work is produced (power) would be very low. Someone said that if you put a Carnot engine in your car you would get fantastic mileage, but pedestrians would be passing you by!

Hope this helps.
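To put a number on "most efficient": the reversible (Carnot) limit between two fixed reservoir temperatures follows from the two isothermal entropy exchanges described above. A minimal numeric sketch (the temperatures and function name here are illustrative, not from the original answer):

```python
def carnot_efficiency(t_hot, t_cold):
    """Upper bound on efficiency for any engine operating in a cycle
    between reservoirs at absolute temperatures t_hot > t_cold (kelvin).
    eta = 1 - Tc/Th, which follows from Qc/Qh = Tc/Th for a reversible cycle."""
    return 1.0 - t_cold / t_hot

# e.g. a source at 500 K and a sink at 300 K
print(carnot_efficiency(500.0, 300.0))  # -> 0.4
```

Even this efficiency is only approached in the limit of infinitely slow, reversible operation, which is exactly the impracticality discussed above.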
{ "domain": "physics.stackexchange", "id": 63971, "tags": "thermodynamics" }
What path would advanced spaceships take to move between planets?
Question: Right now, space travel is all about carefully moving between orbits. If you want to go from Earth to Mars, you wait until the two planets are correctly aligned, and then place yourself into an elliptical orbit around the Sun so that the apoapsis of the orbit hits Mars. You only have to make a single, long burn at the start of the orbital change, and if you do it right you'll fall into a neat Martian orbit. Using this method, it takes quite a long time to get to Mars! It seems that most of the work getting there is done by gravity - the craft's engines only do a little. It's as if the craft is a transistor - its engines provide a little seed current so that gravity can do the rest. I suppose we do it like this because fuel is hard to get into space, and our engines are not very good. 100 years from now, when these are no longer issues, how will spacecraft move from one planet to another? Will they still think about orbital changes, or will they just point towards the target and hit the accelerator?

Answer: A quick Google search will find lots of analyses of interplanetary travel under constant acceleration. The best one I found is here, and it gives results for travel between Earth and Mars. It even provides MATLAB code to do the calculation, and you could easily modify this to calculate travel between different planets. We're not supposed to just give links without discussion, but I'm not sure how much there is to say. Unsurprisingly, there's no simple analytical solution to the problem, so a numerical solution is necessary. The trajectory ends up looking like an S. I've nicked one of the pictures from the site to show this: green shows the Earth's orbit, cyan shows the orbit of Mars, the red line is the constant outward acceleration, and the blue line is the constant deceleration. The journey takes around 6 days.
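For a rough feel for the "point at the target and hit the accelerator" regime, the straight-line "flip and burn" approximation (accelerate to the midpoint, flip, then decelerate) does have a closed-form travel time, unlike the full problem. A back-of-envelope sketch that ignores orbital mechanics and planetary motion entirely (the distance and acceleration values are illustrative assumptions):

```python
import math

def flip_and_burn_time(distance_m, accel_m_s2):
    # Each half of the trip covers distance_m / 2 starting from rest:
    # d/2 = (1/2) * a * t_half**2  =>  t_half = sqrt(d / a),
    # so the total time is 2 * sqrt(d / a).
    return 2.0 * math.sqrt(distance_m / accel_m_s2)

AU = 1.496e11  # metres
# Earth-Mars near closest approach (~0.52 AU), at a constant 1 g:
t = flip_and_burn_time(0.52 * AU, 9.81)
print(t / 86400)  # about two days
```

Lower, more realistic accelerations stretch this considerably, and a real constant-thrust trajectory (the S-shape above) must also account for both planets' orbital motion, which is why a numerical solution is needed.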
{ "domain": "physics.stackexchange", "id": 4724, "tags": "orbital-motion, rocket-science, space-travel" }
Coulomb's Law and conversion of nanocoulombs/coulombs
Question: This is not a homework problem. I am working ahead for my Electricity and Magnetism course next quarter, and this is a Chapter 25 video tutor solution question Pearson put out, where they do a short video alongside a problem.

A rod with charge +350 nC is being used to levitate a charged balloon, which has mass 3.0 g. The balloon is being held stationary 15 cm below the charged rod. What is the approximate charge on the balloon?

What I know: $F_{net}=0$ because the balloon is stationary, and the balloon is negatively charged because the rod is positively charged. I know $q_1$, $r$ and the mass of the balloon. $$F_{net} = F_g - F_b = 0$$ $$mg = F_b$$ By using Coulomb's law I get an expression: $$mg = \frac{K_e\lvert q_1q_2\rvert}{r^2}$$ Solving for $q_2$: $$q_2 = \frac{mgr^2}{K_eq_1}$$ Now $K_e$ is expressed in SI units, so during this step I convert $q_1$ to coulombs: $$q_1 = 350 \times 10^{-9}\ \mathrm{C}$$ I then need to convert back to nanocoulombs, so I multiply the answer I've found by $10^9$: $$\frac{mgr^2}{K_eq_1} \times 10^{9} = 210\ \mathrm{nC}$$ After this point I need to assess my model and find the sign of $q_2$. I said before the balloon was negatively charged, so its charge should be negative, giving me an answer of $-210$ nC.

This is very close to the answer Pearson got, but according to the video I am off by a factor of 10. They had 21 or 20 nC (they rounded to 20 without explaining why). I am very confused. I have done all my work multiple times and even checked it on Wolfram Alpha. I really want to build a good understanding of this chapter, as these are the fundamentals of E&M and this course terrifies me a bit. Might you assist me with this somehow? Here is a link to the final answer that Pearson got:

EDIT: After further exploration of the problem, I am almost certain Pearson forgot to multiply by g. Thank you Costrom for the feedback. I will be contacting Pearson, linking to this thread. Thanks for the feedback.
Answer: In "normal" physics and engineering problems, I always try to use the base units (kg, m, C, ...) to be extra careful, so $q_1 = 350\cdot10^{-9}\ \mathrm{C}$, $r = 0.15\ \mathrm{m}$, $m=0.003\ \mathrm{kg}$, $g = 9.81\ \mathrm{m/s^2}$. Using your equation: $$q_2 = -\frac{mgr^2}{K_eq_1} = -\frac{0.003 \cdot 9.81 \cdot (0.15)^2}{8.987\cdot10^{9}\cdot350\cdot10^{-9}} \approx -210\ \mathrm{nC}$$ It appears that the Pearson answer is off... Is there any step in the video you mentioned that does not get the same intermediate answer as you?
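The arithmetic is easy to double-check numerically; a quick sketch of the force-balance computation from the answer (variable names are mine):

```python
m = 3.0e-3     # balloon mass, kg
g = 9.81       # gravitational acceleration, m/s^2
r = 0.15       # rod-balloon separation, m
k_e = 8.987e9  # Coulomb constant, N*m^2/C^2
q1 = 350e-9    # rod charge, C

# Force balance: m*g = k_e*|q1*q2| / r**2  =>  |q2| = m*g*r**2 / (k_e*q1)
q2 = m * g * r**2 / (k_e * q1)
print(q2 * 1e9)  # roughly 210 (nC); the balloon carries about -210 nC
```

This agrees with both the asker's and the answerer's value, supporting the conclusion that the published Pearson answer dropped the factor of g.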
{ "domain": "physics.stackexchange", "id": 26994, "tags": "homework-and-exercises, charge" }
UNIX calendar(1) in C
Question: This is a simple implementation of the calendar(1) utility included in some UNIX systems (all BSDs have it, GNU has not). I do not have much experience with <sys/time.h> and <time.h>.

Manual:

CALENDAR(1) General Commands Manual CALENDAR(1)

NAME
calendar - print upcoming events

SYNOPSIS
calendar [-l] [-A num] [-B num] [-t [[yyyy]mm]dd] [file...]

DESCRIPTION
calendar reads files for events, one event per line, and writes to standard output those events beginning with either today's date or tomorrow's. On Fridays and Saturdays, events through Monday are printed. If a hyphen (-) is provided as argument or the argument is absent, calendar reads from the standard input.

The options are as follows:

-A num Print lines from today and the next num days (forward, future).

-B num Print lines from today and the previous num days (backward, past).

-l Rather than print the date on the same line as each event, print the date alone on a line, followed by each event indented with a tab.

-t [[yyyy]mm]dd Act like the specified value is "today" instead of using the current date.

Each event should begin with a date pattern in the format [[YYYY-]MM-]DDWW[+N|-N]. The hyphen (-) that separates the values can be replaced by a slash (/) or a period (.). Several date patterns can be supplied separated by a comma (,). YYYY should be any year number. MM should be a month number or a month name (either complete or abbreviated, such as "April" or "Apr"). DD should be the number of a day in the month. WW should be the name of a day in the week (either complete or abbreviated). Either DD or WW (or both) must be supplied. The date pattern can be followed by +N or -N to specify the week in the month (for example, Sun+2 is the second Sunday in the month, Mon-3 is the third from last Monday in the month).

EXAMPLES
Consider the following input.
# holidays
01/01 New Year's day
05/01 Labor Day
07/25 Generic holiday
12/25 Christmas
May/Sun+2 Mother's day
13Fri Friday the 13th

# classes
Mon,Wed Java Class
Tue,Thu Algebra Class
Tue,Thu Operating Systems Class
Tue,Thu Computer Network Class

If today were 09 May 2021, then running calendar with the options -l and -A7 on this input would print the following:

Sunday 09 May 2021
	Mother's day
Monday 10 May 2021
	Java Class
Tuesday 11 May 2021
	Algebra Class
	Computer Network Class
	Operating Systems Class
Wednesday 12 May 2021
	Java Class
Thursday 13 May 2021
	Algebra Class
	Computer Network Class
	Operating Systems Class
Friday 14 May 2021
Saturday 15 May 2021

SEE ALSO
at(1), cal(1), cron(1), todo(1)

STANDARDS
The calendar program previously selected lines which had the correct date anywhere in the line. This is no longer true: the date is only recognized when it occurs at the beginning of a line.

The calendar program previously could interpret only one date for each event. This is no longer true: each event can occur on more than one date (see the examples for the classes above).

The calendar program previously could not read events from the standard input. This is no longer true: this version of calendar is an actual filter, which can read from the standard input or from named files.

The calendar program previously had an option to send mail to all users. This is no longer true: to have your calendar mailed every day, use cron(8).

HISTORY
A calendar command appeared in Version 7 AT&T UNIX.
CALENDAR(1) calendar.c: #include <sys/time.h> #include <ctype.h> #include <err.h> #include <errno.h> #include <limits.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> #include <unistd.h> #include "calendar.h" #define SECS_PER_DAY (24 * 60 * 60) #define DAYS_PER_WEEK 7 #define MIDDAY 12 #define EPOCH 1970 #define isleap(y) ((!((y) % 4) && ((y) % 100)) || !((y) % 400)) static const int days_in_month[2][13] = { {0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}, {0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}, }; /* show usage and exit */ static void usage(void) { (void)fprintf(stderr, "usage: calendar [-d] [-A num] [-B num] -t [yyyymmdd] [file ...]\n"); exit(1); } /* call calloc checking for error */ static void * ecalloc(size_t nmemb, size_t size) { void *p; if ((p = calloc(nmemb, size)) == NULL) err(1, "malloc"); return p; } /* call strdup checking for error */ static char * estrdup(const char *s) { char *t; if ((t = strdup(s)) == NULL) err(1, "strdup"); return t; } /* convert string value to int */ static int strtonum(const char *s) { long n; char *ep; errno = 0; n = strtol(s, &ep, 10); if (s[0] == '\0' || *ep != '\0') goto error; if ((errno == ERANGE && (n == LONG_MAX || n == LONG_MIN)) || (n > INT_MAX || n < 0)) goto error; return (int)n; error: errno = EINVAL; err(1, "%s", s); return -1; } /* convert YYYYMMDD to time */ static time_t strtotime(const char *s) { struct tm *tmorig; struct tm tm; size_t len; time_t t; char *ep; t = time(NULL); tmorig = localtime(&t); tm = *tmorig; len = strlen(s); if (len == 2 || len == 1) ep = strptime(s, "%d", &tm); else if (len == 4) ep = strptime(s, "%m%d", &tm); else ep = strptime(s, "%Y%m%d", &tm); if (s[0] == '\0' || ep == NULL || ep[0] != '\0') goto error; tm.tm_hour = MIDDAY; tm.tm_min = 0; tm.tm_sec = 0; t = mktime(&tm); if (t == -1) goto error; return t; error: errno = EINVAL; err(1, "%s", s); return (time_t)-1; } /* set today time for 12:00; also set number of days after today */ 
static void settoday(time_t *today, int *after) { struct tm *tmorig; struct tm tm; time_t t; t = time(NULL); tmorig = localtime(&t); tm = *tmorig; tm.tm_hour = MIDDAY; tm.tm_min = 0; tm.tm_sec = 0; *today = mktime(&tm); switch (tm.tm_wday) { case 5: *after = 3; break; case 6: *after = 2; break; default: *after = 1; break; } } /* check if c is separator */ static int isseparator(int c) { return c == '-' || c == '.' || c == '/'; } /* get patterns for event s; also return its name */ static struct Day * getpatterns(char *s, char **name) { struct tm tm; struct Day *patt, *oldpatt; struct Day d; size_t len; int n; char *t, *end; patt = NULL; for (;;) { memset(&d, 0, sizeof(d)); while (isspace(*s)) { s++; } n = strtol(s, &end, 10); if (n > 0 && isseparator(*end)) { /* got numeric year or month */ d.month = n; s = end + 1; n = strtol(s, &end, 10); if (n > 0 && isseparator(*end)) { /* got numeric month after year */ d.year = d.month; d.month = n; s = end + 1; } else if ((t = strptime(s, "%b", &tm)) != NULL && isseparator(*t)){ /* got month name after year */ d.year = d.month; d.month = tm.tm_mon + 1; s = t + 1; } } else if ((t = strptime(s, "%b", &tm)) != NULL && isseparator(*t)) { /* got month name */ d.month = tm.tm_mon + 1; s = t + 1; } n = strtol(s, &end, 10); if (n > 0 && *end != '\0') { /* got month day */ d.monthday = n; s = end; } if ((t = strptime(s, "%a", &tm)) != NULL) { /* got week day */ d.weekday = tm.tm_wday + 1; s = t; } if (d.monthday == 0 && d.weekday == 0) break; n = strtol(s, &end, 10); if (n >= -5 && n <= 5 && *end != '\0') { d.monthweek = n; s = end; } oldpatt = patt; patt = ecalloc(1, sizeof(*patt)); *patt = d; patt->next = oldpatt; while (isspace(*s)) { s++; } if (*s == ',') { s++; while (isspace(*s)) { s++; } } else { break; } } *name = s; len = strlen(*name); if ((*name)[len - 1] == '\n') (*name)[len - 1] = '\0'; return patt; } /* get events for file fp */ static struct Event * getevents(FILE *fp, struct Event *evs) { struct Event *p; struct Day 
*patt; char buf[BUFSIZ]; char *name; while (fgets(buf, BUFSIZ, fp) != NULL) { if ((patt = getpatterns(buf, &name)) != NULL) { p = ecalloc(1, sizeof(*p)); p->next = evs; p->days = patt; p->name = estrdup(name); evs = p; } } return evs; } /* get week of year */ static int getwofy(int yday, int wday) { return (yday + DAYS_PER_WEEK - (wday ? (wday - 1) : (DAYS_PER_WEEK - 1))) / DAYS_PER_WEEK; } /* check if event occurs today */ static int occurstoday(struct Event *ev, struct tm *tm, int thiswofm, int lastwofm) { struct Day *d; for (d = ev->days; d != NULL; d = d->next) { if ((d->year == 0 || d->year == tm->tm_year + EPOCH) && (d->month == 0 || d->month == tm->tm_mon + 1) && (d->monthday == 0 || d->monthday == tm->tm_mday) && (d->weekday == 0 || d->weekday == tm->tm_wday + 1) && (d->monthweek == 0 || (d->monthweek < 0 && d->monthweek == -1 * (lastwofm - thiswofm - 1)) || (d->monthweek == thiswofm))) { return 1; } } return 0; } /* print events for today and after days */ static void printevents(struct Event *evs, time_t today, int after, int lflag) { struct tm *tmorig; struct tm tm; struct Event *ev; int wofy; /* week of year of first day of month */ int thiswofm; /* this week of current month */ int lastwofm; /* last week of current month */ int n, a, b; char buf1[BUFSIZ]; char buf2[BUFSIZ]; buf1[0] = buf2[0] = '\0'; while (after-- > 0) { tmorig = localtime(&today); tm = *tmorig; n = days_in_month[isleap(tm.tm_year + EPOCH)][tm.tm_mon + 1]; a = (tm.tm_wday - tm.tm_mday + 1) % DAYS_PER_WEEK; if (a < 0) a += DAYS_PER_WEEK; b = (tm.tm_wday + n - tm.tm_mday + 1) % DAYS_PER_WEEK; if (b < 0) b += DAYS_PER_WEEK; wofy = getwofy(tm.tm_yday - tm.tm_mday + 1, a); thiswofm = getwofy(tm.tm_yday, tm.tm_wday) - wofy + 1; lastwofm = getwofy(tm.tm_yday + n - tm.tm_mday + 1, b) - wofy + 1; if (lflag) { strftime(buf1, sizeof(buf1), "%A", &tm); strftime(buf2, sizeof(buf2), "%d %B %Y", &tm); printf("%-10s %s\n", buf1, buf2); } else { strftime(buf1, sizeof(buf1), "%b %d", &tm); } for (ev = 
evs; ev != NULL; ev = ev->next) { if (occurstoday(ev, &tm, thiswofm, lastwofm)) { if (lflag) { printf("\t"); } else { printf("%-8s ", buf1); } printf("%s\n", ev->name); } } today += SECS_PER_DAY; } } /* calendar: print upcoming events */ int main(int argc, char *argv[]) { static struct Event *evs; FILE *fp; time_t today; /* seconds of 12:00:00 of today since epoch*/ int before; /* number of days before today */ int after; /* number of days after today */ int lflag; /* whether to print in long format */ int exitval; int ch; before = lflag = 0; settoday(&today, &after); while ((ch = getopt(argc, argv, "A:B:lt:")) != -1) { switch (ch) { case 'A': after = strtonum(optarg); break; case 'B': before = strtonum(optarg); break; case 'l': lflag = 1; break; case 't': today = strtotime(optarg); break; default: usage(); break; } } argc -= optind; argv += optind; today -= before * SECS_PER_DAY; evs = NULL; exitval = 0; if (argc == 0) { evs = getevents(stdin, evs); } else { for (; *argv != NULL; argv++) { if (strcmp(*argv, "-") == 0) { evs = getevents(stdin, evs); continue; } if ((fp = fopen(*argv, "r")) == NULL) { warn("%s", *argv); exitval = 1; continue; } evs = getevents(fp, evs); fclose(fp); } } printevents(evs, today, after, lflag); return exitval; } calendar.h: /* day or day pattern */ struct Day { /* * This structure express both a day and a day pattern. For * convenience, let's express a Day entry as YYYY-MM-DD-m-w, * where: * - year is YYYY (1 to INT_MAX) * - month is MM (1 to 12) * - monthday is DD (1 to 31) * - monthweek is m (-5 to 5) * - weekday is w (1-Monday to 7-Sunday) * * A day is expressed with all values nonzero. For example, * 2020-03-11-2-3 represents 11 March 2020, which was a * Wednesday (3) on the second week of March. * * A day pattern can have any value as zero. A zero value * matches anything. For example: * - 0000-12-25-0-0 matches 25 December of every year. * - 0000-05-00-2-7 matches the second Sunday of May. * - 2020-03-11-2-3 matches 11 March 2020. 
*/ struct Day *next; int year; int month; int monthday; int monthweek; int weekday; }; /* event */ struct Event { struct Event *next; struct Day *days; /* list of day patterns */ char *name; /* event name */ };

Answer:

Missing error checks

t = time(NULL);
if (t == -1)
    handle_error_gracefully();
tmorig = localtime(&t);
if (tmorig == NULL)
    handle_error_gracefully();

Simplify

The value check is not needed:

// if ((errno == ERANGE && (n == LONG_MAX || n == LONG_MIN)) || (n > INT_MAX || n < 0))
if ((errno == ERANGE) || (n > INT_MAX || n < 0))
    goto error;

Better description

As strtonum() fails on negative numbers, consider a different comment or function name.

Overflow detection

As strtonum() results are used in subsequent calculations that lack overflow protection, perhaps use the form strtonum(const char *s, int min, int max) and pass in limiting range values.

Week of the year

OP's getwofy(int yday, int wday) does not appear to follow the week-of-the-year rule per ISO 8601, as that depends on the day of the week of January 1.

calendar.h does not discuss the next member

calendar.h does not discuss a negative monthweek

Unclear how negative values relate, given that a value of zero is special.

day vs. struct Day

// A day pattern can have any value as zero.
A struct Day pattern can have any value as zero.

calendar.h naming

Consider code re-use. calendar.h introduces struct Day and struct Event. These common names can easily conflict with other code and are surprising to find in a file called calendar.h.

calendar.h missing public functions

I'd expect the calendar functions in calendar.c meant for general use to be non-static and declared in calendar.h. Otherwise, as a stand-alone program, you might as well put all of calendar.h in calendar.c, as nothing is for general use.

Pedantic: Avoid UB of a negative char in is...()

// while (isspace(*s)) {
while (isspace(*(unsigned char *)s)) {
    s++;
}

Various strtol() calls lack error checks

RAII

// struct Day d;
// ...
for (;;) {
    // memset(&d, 0, sizeof(d));
    struct Day d = { 0 };

Avoid a hacker exploit

Consider what happens when len == 0.

len = strlen(*name);
if ((*name)[len - 1] == '\n')
    (*name)[len - 1] = '\0';

Instead:

(*name)[strcspn(*name, "\n")] = 0;

Minor: Alternative to a < 0

// a = (tm.tm_wday - tm.tm_mday + 1) % DAYS_PER_WEEK;
// if (a < 0)
//     a += DAYS_PER_WEEK;
a = (tm.tm_wday - tm.tm_mday + 1 + 35 /* some large multiple of 7 */) % DAYS_PER_WEEK;

Easter

Reference, should you want to include Easter.

Minor: Overflow with 16-bit int

Common on embedded processors, 24 * 60 * 60 overflows a 16-bit int. POSIX, I believe, requires at least a 32-bit int.

// #define SECS_PER_DAY (24 * 60 * 60)
#define SECS_PER_DAY ((time_t) 24 * 60 * 60)
// or
#define SECS_PER_DAY 86400

Minor: Failure to account for DST

localtime() will populate .tm_isdst, yet noon of that day may have a different .tm_isdst value. It is usually best to let mktime() determine the best setting.

tmorig = localtime(&t);
tm = *tmorig;
tm.tm_hour = MIDDAY;
tm.tm_min = 0;
tm.tm_sec = 0;
tm.tm_isdst = -1; // Add
*today = mktime(&tm);

Also in strtotime(). It is unclear if this affects the overall code, yet it is something to consider.
{ "domain": "codereview.stackexchange", "id": 41930, "tags": "c, datetime, reinventing-the-wheel, posix, to-do-list" }
HackerRank | Roads and Libraries - Code Optimization
Question: Major Edit: I wish people threw in a comment regarding what they are expecting instead of downvoting this question. I am quite appreciative of constructive feedback. I know it is only 3 people out of 100 who have downvoted, but it still affects the poster's confidence and makes this forum unhelpful.

I am trying to solve this HackerRank problem, and some of my test cases are failing, stating that the final answer is wrong. After going over a previous S.O. question for the same problem, my understanding is that HackerRank says the answer is wrong if the code doesn't execute within the memory and time limits. I'd like to know how I can further optimize my code to reduce its runtime and space complexity.

def roadsAndLibraries(n, c_lib, c_road, cities):
    # The arguments to the function above are the number of nodes (n), the cost
    # of a library and a road (c_lib, c_road), and the cities connected by an
    # edge (cities).
    graph = {}
    count = 0
    cc = []
    visited = set()

    # Adding all edges to create an adjacency list using a dictionary
    for source, dest in cities:
        graph[source] = graph.get(source, []) + [dest]
        graph[dest] = graph.get(dest, []) + [source]

    # There is a weird condition in HackerRank wherein the number of nodes (n)
    # in the input can be greater than max(cities). In that case, one needs to
    # consider the additional nodes as an independent graph in a forest.
    if max(graph, key=int) < n:
        for i in range(max(graph, key=int) + 1, n + 1):
            graph[i] = []

    # Generic depth-first search
    def depth_first_search(graph, visited, node, temp):
        if node not in visited:
            visited.add(node)
            temp.add(node)
            if node in graph:
                for neighbor in graph[node]:
                    depth_first_search(graph, visited, neighbor, temp)
        return temp

    # Identify connected components using a depth-first search.
    # These connected components will subsequently be used to calculate the cost.
    def connected_components(graph, visited):
        for key in graph:
            # print("Key = ", key)
            if key not in visited:
                temp = set()
                cc.append(depth_first_search(graph, visited, key, temp))
        return cc

    connected_components(graph, visited)

    # Return the final cost here
    cost = 0
    if c_road > c_lib:
        return n * c_lib
    else:
        for each in cc:
            cost += c_lib + (len(each) - 1) * c_road
        return cost

Answer: As HackerRank already told you, your solution is wrong (failing 7 out of 13 test cases), which should make your question off-topic; I'm not sure how to judge your claim that they report that incorrectly. For example, you fail roadsAndLibraries(3, 1, 1, [[2, 3]]) by returning 2 instead of 3. What you call "weird" is just unconnected cities, and it can happen for small-numbered cities as well. In my example, it's city 1 that's unconnected, which you don't handle because you assume that it's always the largest-numbered cities that are unconnected. If you simply start with graph = {i: [] for i in range(1, n+1)}, you succeed. Then you also don't need your half-way special treatment for the "weird" unconnected cities. Also, use append to grow the lists so you have linear instead of quadratic runtime:

for source, dest in cities:
    graph[source].append(dest)
    graph[dest].append(source)

Alternatively, remove your half-way special treatment and add (n - len(visited)) * c_lib to the return value at the end, in order to put libraries in the not-visited (because unconnected) cities. Or, as yet another alternative fix, you could fix your half-way special treatment to fill in all unconnected cities at that point:

for i in range(1, n+1):
    graph[i] = graph.get(i, [])

But I'd say that would be weird :-). It's simpler to just initialize everything with an empty list at the start, before building the graph from the edges. That's also better because it allows those simple appends.
Improved solution

Same basic idea, of course, placing one library in each connected component, unless libraries don't cost more than roads:

def roadsAndLibraries(n, c_lib, c_road, cities):
    if c_lib <= c_road:
        return n * c_lib
    graph = {u: [] for u in range(1, n+1)}
    for u, v in cities:
        graph[u].append(v)
        graph[v].append(u)
    def delete(u):
        for v in graph.pop(u, []):
            delete(v)
    connected_components = 0
    for u in range(1, n+1):
        if u in graph:
            connected_components += 1
            delete(u)
    return connected_components * c_lib + (n - connected_components) * c_road

Note: I handle c_lib <= c_road (and not just <) right at the start. No need to get to work on the graph at all if we're going to ignore that whole work anyway. I just count the connected components, and don't keep track of each component's cities. In the end we just cover connected_components cities with libraries and the remaining n - connected_components with roads. Simple formula. The roads are bidirectional and the problem calls the connected cities "u" and "v", not "source" and "dest". So both for consistency with the problem and for not implying a direction, I used u and v as well (ok ok, I also like short standard names for brevity). If I build the graph, I can tear it down. No need for an extra visited. No need to pass graph through the delete calls as an argument. (And in yours, neither your depth_first_search nor your connected_components would've needed graph and visited as parameters.)
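One caveat with a recursive tear-down like delete() (and with the question's recursive DFS): Python's default recursion limit is around 1000, so a single long chain of cities can raise RecursionError on large inputs. An iterative component count with the same cost formula, as a sketch (the snake_case function name is mine, to avoid clashing with the recursive version):

```python
from collections import deque

def roads_and_libraries(n, c_lib, c_road, cities):
    # If a library costs no more than a road, build one in every city.
    if c_lib <= c_road:
        return n * c_lib
    graph = {u: [] for u in range(1, n + 1)}
    for u, v in cities:
        graph[u].append(v)
        graph[v].append(u)
    components = 0
    seen = set()
    for start in range(1, n + 1):
        if start in seen:
            continue
        components += 1          # one library per connected component
        queue = deque([start])
        seen.add(start)
        while queue:             # breadth-first search, no recursion
            u = queue.popleft()
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return components * c_lib + (n - components) * c_road

print(roads_and_libraries(3, 1, 1, [[2, 3]]))  # -> 3
```

The deque-based BFS uses O(n + len(cities)) time and heap memory instead of stack frames, so it is immune to the recursion limit.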
{ "domain": "codereview.stackexchange", "id": 40564, "tags": "python, python-3.x, programming-challenge, time-limit-exceeded, graph" }
Display nested objects through custom displayers in WPF
Question: First off, I am very new to WPF. These are the objects I want to display:

public class IssueWithComments
{
    public Issue Issue { get; set; }
    public IReadOnlyList<IssueComment> Comments { get; set; }

    public IssueWithComments()
    {
        Comments = new List<IssueComment>();
    }
}

// Simplified
public class Issue
{
    public string Body { get; protected set; }
    public Uri HtmlUrl { get; protected set; }
    public IReadOnlyList<Label> Labels { get; protected set; }
    public Milestone Milestone { get; protected set; }
    public int Number { get; protected set; }
    public PullRequest PullRequest { get; protected set; }
    public ItemState State { get; protected set; }
    public string Title { get; protected set; }
    public Uri Url { get; protected set; }
    public User User { get; protected set; }
}

public class IssueComment
{
    public string Body { get; protected set; }
    public Uri HtmlUrl { get; protected set; }
    public Uri Url { get; protected set; }
    public User User { get; protected set; }
}

The following is my XAML. Within MainWindow.xaml:

<ScrollViewer DockPanel.Dock="Right" CanContentScroll="True" > <StackPanel x:Name="grid" ScrollViewer.VerticalScrollBarVisibility="Auto"/> </ScrollViewer>

Within GithubIssue.xaml:

<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:local="clr-namespace:IssuesManagment.UI.POC.Utils" xmlns:Octokit="clr-namespace:Octokit;assembly=Octokit" x:Class="IssuesManagment.UI.POC.Controls.GithubIssue" xmlns:models="clr-namespace:IssuesManagment;assembly=IssuesManagment.Models" mc:Ignorable="d" d:DesignHeight="147" d:DesignWidth="295" BorderBrush="#FFD84B4B" BorderThickness="1" Width="Auto" Height="Auto" Margin="0,0,0,2"> <UserControl.Resources> <local:StringToBrushConverter x:Key="stringToBrushConverter"/> <local:IntegerToVisiblityConverter
x:Key="intToVisiblityConverter"/> </UserControl.Resources> <UserControl.DataContext> <models:IssueWithComments/> </UserControl.DataContext> <StackPanel> <TextBlock> <Hyperlink NavigateUri="{Binding Issue.HtmlUrl}" RequestNavigate="Hyperlink_RequestNavigate"> <TextBlock x:Name="issueTitle" TextWrapping="Wrap" Text="{Binding Issue.Title}" FontWeight="Bold" FontSize="16" /> </Hyperlink> </TextBlock> <ItemsControl ItemsSource="{Binding Issue.Labels}" HorizontalAlignment="Left"> <ItemsControl.ItemsPanel> <ItemsPanelTemplate > <StackPanel Orientation="Horizontal"/> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> <ItemsControl.ItemTemplate> <DataTemplate DataType="Octokit:Label"> <TextBlock Text="{Binding Name}" Background="{Binding Color, Converter={StaticResource stringToBrushConverter}}" Foreground="White" Padding="2" MaxWidth="100" Margin="0,0,1,0" /> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> <TextBlock x:Name="issueBody" TextWrapping="Wrap" Text="{Binding Issue.Body}" Margin="2"/> <Expander Visibility="{Binding Comments.Count, Converter={StaticResource intToVisiblityConverter}}" Header="Comments"> <ItemsControl Name="comments" ItemsSource="{Binding Comments}"> </ItemsControl> </Expander> </StackPanel> </UserControl> GithubIssueComment.xaml <UserControl x:Class="IssuesManagment.UI.POC.Controls.GithubIssueComment" xmlns:Octokit="clr-namespace:Octokit;assembly=Octokit" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" mc:Ignorable="d" d:DesignWidth="300" Width="Auto" Height="Auto"> <UserControl.DataContext> <Octokit:IssueComment/> </UserControl.DataContext> <StackPanel> <TextBlock Text="{Binding Body}" Padding="2" /> <Label Content="By:"/> <TextBlock Text="{Binding User.Login}"/> </StackPanel> </UserControl> C# Code: Main.xaml.cs: private async void 
ViewIssues_Click(object sender, RoutedEventArgs e) { grid.Children.Clear(); var apiClient = new Octokit.GitHubClient(new Octokit.ProductHeaderValue("Issue-Managment")); if (!string.IsNullOrWhiteSpace(userName.Text) && !string.IsNullOrWhiteSpace(repoName.Text)) { var issuesClient = new IssuesWithCommentsClient(new Octokit.ApiConnection(apiClient.Connection)); var issues = await issuesClient.GetAllForRepositoryWithComments(userName.Text, repoName.Text); GithubIssue x; foreach (var item in issues) { x = new GithubIssue(item); grid.Children.Add(x); } } else { grid.Children.Add(new Controls.Error()); } } GithubIssue.xaml.cs: public GithubIssue(IssueWithComments issue) { InitializeComponent(); this.DataContext = issue; var commentDisplayers = new List<GithubIssueComment>(); foreach (var comment in issue.Comments) { commentDisplayers.Add(new GithubIssueComment(comment)); } comments.ItemsSource = commentDisplayers; } Where have I gone very off track, and where does it just need a little improvement? Answer: GithubIssue.xaml.cs: public GithubIssue(IssueWithComments issue) { InitializeComponent(); this.DataContext = issue; var commentDisplayers = new List<GithubIssueComment>(); foreach (var comment in issue.Comments) { commentDisplayers.Add(new GithubIssueComment(comment)); } comments.ItemsSource = commentDisplayers; } This could be simplified with LINQ. How about: var commentDisplayers = issue.Comments.Select(comment=>new GithubIssueComment(comment)).ToList(); // or var commentDisplayers = (from comment in issue.Comments select new GithubIssueComment(comment)).ToList();
{ "domain": "codereview.stackexchange", "id": 13405, "tags": "c#, wpf" }
How do I make a submission from a CNN?
Question: I have built a CNN to do image classification for images representing different weather conditions. I have 4 classes of images: Haze, Rainy, Snowy, Sunny. I have built my CNN and evaluated its performance. Now I have been given a blind test set, so images without a label, and I have to make a submission. So I have to build a .csv file which should contain one line for each predicted class of images, so each line should have the structure image_name,predicted_class. Thus each line should be a string which identifies the image and its prediction. Now the problem is that I don't understand how to do this. I am really confused because I have never done something similar. My code is the following: trainingset = '/content/drive/My Drive/Colab Notebooks/Train' testset = '/content/drive/My Drive/Colab Notebooks/Test_HWI' batch_size = 31 train_datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. / 255,\ zoom_range=0.1,\ rotation_range=10,\ width_shift_range=0.1,\ height_shift_range=0.1,\ horizontal_flip=True,\ vertical_flip=False) train_generator = train_datagen.flow_from_directory( directory=trainingset, target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=True ) test_datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. 
/ 255 ) test_generator = test_datagen.flow_from_directory( directory=testset, target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=False ) num_samples = train_generator.n num_classes = train_generator.num_classes input_shape = train_generator.image_shape classnames = [k for k,v in train_generator.class_indices.items()] then I build the network: def Network(input_shape, num_classes, regl2 = 0.0001, lr=0.0001): model = Sequential() # C1 Convolutional Layer model.add(Conv2D(filters=32, input_shape=input_shape, kernel_size=(3,3),\ strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation before passing it to the next layer model.add(BatchNormalization()) # C2 Convolutional Layer model.add(Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C3 Convolutional Layer model.add(Conv2D(filters=128, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C4 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) #Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C5 Convolutional Layer model.add(Conv2D(filters=400, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C6 Convolutional Layer model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling 
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C7 Convolutional Layer model.add(Conv2D(filters=800, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Conv2D(filters=800, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C8 Convolutional Layer model.add(Conv2D(filters=1000, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # Flatten model.add(Flatten()) flatten_shape = (input_shape[0]*input_shape[1]*input_shape[2],) # D1 Dense Layer model.add(Dense(4096, input_shape=flatten_shape, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D2 Dense Layer model.add(Dense(4096, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D3 Dense Layer model.add(Dense(1000,kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # Output Layer model.add(Dense(num_classes)) model.add(Activation('softmax')) # Compile adam = optimizers.Adam(lr=lr) model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy']) return model #create the model model = Network(input_shape,num_classes) model.summary() I train the network: steps_per_epoch=train_generator.n//train_generator.batch_size val_steps=test_generator.n//test_generator.batch_size+1 try: history = model.fit_generator(train_generator, epochs=100, verbose=1,\ 
steps_per_epoch=steps_per_epoch,\ validation_data=test_generator,\ validation_steps=val_steps) except KeyboardInterrupt: pass Now, I have the images without labels in Google Drive, so I define the path to them: blind_testSet = '/content/drive/My Drive/Colab Notebooks/blind_testset' but now I don't know what I should do. I really don't know how to define the .csv file I mentioned above. Can someone please help me? Thanks in advance. [EDIT] OK, I am trying to make the predictions on the blind test set, but it is taking a really long time. What I have done is the following: blind_testSet = '/content/drive/My Drive/Colab Notebooks/submission/blind_testset' test_datagen_blind = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. / 255 ) test_generator_blind = test_datagen.flow_from_directory( directory=blind_testSet, target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=False ) preds = model.predict_generator(test_generator_blind,verbose=1,steps=val_steps) There are 1500 images in this blind test set, but is it normal that it takes so long? Thanks. [EDIT 2] To try to make the submission I am trying to use code similar to this: def make_submission(model, filename="submission.csv"): df = pd.read_csv("../input/test.csv") X = df.values / 255 X = X.reshape(X.shape[0], 28, 28, 1) preds = model.predict_classes(X) subm = pd.DataFrame(data=list(zip(range(1, len(preds) + 1), preds)), columns=["ImageId", "Label"]) subm.to_csv(filename, index=False) return subm but it does not seem to work in my case. I have also tried to keep only the last 2 lines and use them, so: subm = pd.DataFrame(data=list(zip(range(1, len(preds) + 1), preds)), columns=["ImageId", "Label"]) subm.to_csv(filename, index=False) Can someone help me create this .csv file? Thanks. Answer: You have to make predictions first. Then match these predictions to the ids in the blind_testSet. 
So something like this: test_set=pd.read_csv(blind_testSet) test_set["predicted_labels"]=model.predict(quantified pictures from test set) EDIT: on the question of why it is taking so long: you have deep convolutional layers. Backpropagation is a very expensive process. A lot can be said about how to speed up the computations, but let's say that you should be looking at utilizing GPU power
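To make the CSV step concrete: assuming a Keras-style setup where the class names come from the *training* generator's class_indices, the file names come from test_generator_blind.filenames, and preds is the (n_samples, n_classes) array returned by predict_generator, a minimal sketch could look like the following. The stand-in data at the bottom replaces the real generator/model output so the sketch runs on its own.

```python
import numpy as np
import pandas as pd

def build_submission(filenames, preds, class_indices, out_path=None):
    """Map softmax outputs back to class names and build image_name,label rows.

    filenames     : list of relative paths, as produced by flow_from_directory
    preds         : (n_samples, n_classes) array of softmax probabilities
    class_indices : dict like {'Haze': 0, 'Rainy': 1, ...} taken from the
                    *training* generator (the blind generator only sees one
                    dummy folder, so its own class_indices is useless here)
    """
    # Invert the mapping so an argmax index gives back a class name.
    index_to_class = {v: k for k, v in class_indices.items()}
    labels = [index_to_class[i] for i in np.argmax(preds, axis=1)]
    # Keep only the bare file name, dropping any subdirectory prefix.
    names = [f.split('/')[-1] for f in filenames]
    subm = pd.DataFrame({'image_name': names, 'label': labels})
    if out_path is not None:
        subm.to_csv(out_path, index=False, header=False)
    return subm

# Stand-in data in place of test_generator_blind.filenames and the
# model.predict_generator(...) output:
class_indices = {'Haze': 0, 'Rainy': 1, 'Snowy': 2, 'Sunny': 3}
filenames = ['unknown/img_001.jpg', 'unknown/img_002.jpg']
preds = np.array([[0.1, 0.7, 0.1, 0.1],
                  [0.2, 0.1, 0.1, 0.6]])
print(build_submission(filenames, preds, class_indices))
```

On the real data you would pass train_generator.class_indices, test_generator_blind.filenames, and the preds array, plus an out_path such as 'submission.csv'.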
{ "domain": "datascience.stackexchange", "id": 6547, "tags": "machine-learning, neural-network, deep-learning, cnn, image-classification" }
Could a computer unblur the image from an out of focus microscope?
Question: Basically I'm wondering what is the nature of an out of focus image. Is it randomized information? Could the blur be undone by some algorithm? Answer: The blurring is not randomised, it is predictable. See Can someone please explain what happens on microscopic scale when an image becomes unfocused on a screen from a projector lens? for a basic explanation. Each point of the in-focus image is spread out into a diffraction pattern of rings called a point spread function (PSF), and these ring patterns overlap to form the out-of-focus image. The blurred image is the convolution of the object and the PSF. Convolution is a mathematical transformation which can in some circumstances be reversed (deconvolution) - for example when the image has been made using coherent light (from a laser) and the PSF is known. When photos are taken using ordinary incoherent light, and the PSF is unknown, the blurring cannot be reversed completely, but a significant improvement can be made, eg using the blind deconvolution algorithm. Examples of objects and resulting images can be used to approximately re-construct the PSF, or a Gaussian function can be used. Blurring due to motion (of the camera or object) can also be corrected. For both cases the techniques and problems are discussed in Restoration of De-Focussed and Blurred Images, and examples given of what can be achieved. Software is available online to fix blurred images.
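The answer's central point, that defocus blur is a convolution with a known point spread function and therefore invertible in principle, can be illustrated with a toy Wiener deconvolution. This is not the blind-deconvolution software mentioned above, just a minimal 1-D sketch with a known Gaussian PSF and essentially no noise:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=1e9):
    """Frequency-domain Wiener deconvolution: X = Y conj(H) / (|H|^2 + 1/snr).

    With a known PSF and near-noiseless data this undoes the convolution
    almost exactly; with real noise you would lower `snr` so the filter
    stops amplifying frequencies the PSF has crushed.
    """
    H = np.fft.fft(psf, n=len(blurred))
    Y = np.fft.fft(blurred)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(X))

# Toy 1-D "image": two point sources, blurred by a known Gaussian PSF.
n = 64
sharp = np.zeros(n)
sharp[20], sharp[40] = 1.0, 0.5
dist = np.minimum(np.arange(n), n - np.arange(n))   # circular distance from 0
psf = np.exp(-0.5 * dist.astype(float) ** 2)        # sigma = 1 sample
psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf)
```

The blurred signal smears the two spikes noticeably, yet the restored one matches the sharp original to high accuracy, showing the information was spread out, not destroyed. When the PSF is unknown, or the light is incoherent and noisy, this clean inversion degrades into the approximate blind-deconvolution problem the answer describes.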
{ "domain": "physics.stackexchange", "id": 42305, "tags": "optics, geometric-optics, algorithms" }
Sun constantly converts mass into energy, will this cause its gravity to decrease?
Question: If the sun is constantly converting mass into energy, will its gravitational field continue decreasing? Answer: If the sun is constantly converting mass into energy, will its gravitational field go on decreasing? It's a very interesting question and the answer is yes! The solar constant indicates the mean solar radiation of electromagnetic waves (mostly visible and near-infrared light), and I'll answer based on that. While the conversion of matter† to energy in the Sun's core represents a loss of matter proper, it turns out that that energy (trapped in the Sun and slowly diffusing towards the surface) will have the same gravitational attraction as the matter it came from until it actually escapes the Sun! There is some prompt mass and energy loss via neutrinos and it's significant, perhaps several hundred keV per neutrino; I simply don't know the number yet. I'll ask a separate question about it. I'm guessing that losses due to the stellar wind are small, but I'll update here as soon as the following is answered: How much mass does the Sun lose as light, neutrinos, and solar wind? Update: the answer there is that loss via neutrinos is only about 2.3% of the radiative loss, and on average the loss via solar wind and coronal mass ejections is about 4E+16 kg/year, or about another 30% relative to the radiative loss described below. The value $I$ is about 1360 watts per square meter at $R$ = 1 AU, which is about 150 million kilometers or 150 billion meters. So the total energy lost per second $P$ is $$P = 4 \pi R^2 I$$ Taking the time derivative of $E = m c^2$ we get $$\frac{dE}{dt} = P = \frac{dm}{dt} c^2$$ so $$ \frac{dm}{dt} = \frac{1}{c^2} \ 4 \pi R^2 I$$ That means that the value of the mass that we use to calculate the Sun's gravitational attraction changes by about 4.3E+09 kilograms per second, or 1.3E+17 kilograms per year. 
The Sun's current mass is about 2.00E+30 kilograms, so this effect changes the Sun's mass by a very tiny fraction per year, about 6.7E-14. Over the Earth's age of 4.5 billion years, that's 3E-04, or about 0.03%, if the Sun's output were constant. It has probably changed over this time of course, so this is just a rough estimate. †Thanks to @S.Melted's answer for clarifying this. Added to my answer by @Tosic: The Earth feels no torque from any force during this (the force from radiation is radial), which means its angular momentum is conserved. This means $$R_1 v_1 = R v$$ $$R_1 \sqrt{\frac{GM_1}{R_1}} = R \sqrt{\frac{GM}{R}}$$ $$M_1 R_1 = M R$$ We can see the Earth's orbital radius would change by about 0.03% as well ($M_1$ and $M$ are solar masses).
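The figures quoted in the answer follow from $\frac{dm}{dt} = \frac{4\pi R^2 I}{c^2}$ and can be checked in a few lines (constants rounded as in the text):

```python
import math

# Check the numbers in the answer: dm/dt = 4*pi*R^2*I / c^2.
I = 1360.0          # W/m^2, solar constant at 1 AU
R = 1.496e11        # m, 1 AU
c = 2.998e8         # m/s, speed of light
P = 4 * math.pi * R**2 * I          # total radiated power, W
dm_dt = P / c**2                    # kg/s lost as light
per_year = dm_dt * 3.156e7          # kg/yr (seconds in a year)
fraction = per_year / 2.0e30        # fraction of a 2.0e30 kg Sun per year
print(f"{dm_dt:.2e} kg/s, {per_year:.2e} kg/yr, fraction {fraction:.1e}/yr")
```

This reproduces the answer's roughly 4.3E+09 kg/s, 1.3E+17 kg/yr, and 6.7E-14 of a solar mass per year.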
{ "domain": "astronomy.stackexchange", "id": 4308, "tags": "the-sun" }
Particle in a one-sided infinite step function plus a delta-function attractive well
Question: Given the following potential: $$ \begin{cases} \infty & x\leq -d \\ -V_0 \delta(x) & x > -d \end{cases} $$ with $d > 0$, I would like to compute the condition that has to be satisfied in order to have at least one bound state. From $\hat{H} \psi(x) = E_n \psi(x)$ and assuming we have a bound state ($E < 0$), I get as the general solution for $\psi(x)$: $$ \begin{cases} 0 & x < -d \\ A(-e^{-2kd} e^{kx} + e^{-k x}) & -d < x < 0 \\ Ce^{-kx} & x > 0 \end{cases} $$ where $k = \sqrt{\frac{2m|E|}{\hbar^2}}$. By imposing the boundary conditions at $x = 0$: $\Delta \frac{d \psi(x)}{dx} = -\frac{2 m V_0}{\hbar^2} \psi(0), \hspace{0.5 cm} \psi(0^-) = \psi(0^+)$ $\hspace{0.5 cm}$ and $\hspace{0.5 cm}$ $\psi(-d) = 0$, I conclude that the quantization of the energy of the system is given by: $$ \begin{equation} \frac{1+e^{-2kd}}{1-e^{-2kd}} = 1 - \frac{2 m V_0}{\hbar^2 k} \end{equation} $$ This doesn't make any sense, since $kd > 0$ and the left-hand side of this equation is greater than 1 in that region. Could someone explain to me what is wrong? 
The problem is still present even if you take the $d\to\infty$ limit and write $$ \lim_{d\to\infty} \frac{1 + e^{-2kd}}{1 - e^{-2kd}} = 1 \overset{?!}{=} 1 - \frac{2mV_0 a}{\hbar^2 k} $$ There are two possible explanations for this scenario: You’ve made some error in setting up your boundary conditions. When you fix that error, you’ll uncover the known solution $E=-{m(V_0a)^2}/{2\hbar^2}$ for the $d\to\infty$ limit, and a correction of order $a/d$ for a distant infinite barrier. The bound state of the delta potential is “delicate” enough that an infinite potential barrier, at any distance, destroys it. In that case you might look at a finite barrier $$ V(x) = \begin{cases} V_1 & x < -d \\ -V_0 a \delta(x) & x > -d \end{cases} $$ and expect the existence of the bound state to have a condition like $V_1 d \lesssim V_0 a$. I’m leaning towards #1, another error, but I wouldn’t be too surprised either way. There is a sign error in your middle definition of $\psi$: $$ \psi|_{x=-d} = A\cdot(-e^{-3kd} + e^{-kd}) \neq 0 $$ Using instead $$ \psi|_{-d<x<0} = A\cdot(-e^{+2kd}e^{+kx} + e^{-kx} ) $$ gives the quantization condition $$ k = \frac{mV_0 a}{\hbar^2} ( 1 - e^{-2kd} ) $$ For $d\to\infty$ this does in fact reduce to the condition for the unperturbed attractive delta-well, and we recover the single solution $k_\infty = \frac{mV_0 a}{\hbar^2}$. For finite $d$, there is a solution with $k=0$ which corresponds (after normalization) to the zero-everywhere wavefunction. There is a nontrivial solution only if the right-hand-side is initially steeper in $k$ than the left-hand-side; that is, if $$ %\frac{\mathrm d}{\mathrm dk} k_\infty(1-e^{-2kd}) = k_\infty \cdot (+2 d) > 1 2d > \frac1{k_\infty} $$ The value of the solution will involve the Lambert W function (a normal person would find it numerically). Satisfyingly, it is the case that a "deeper" or "wider" attractive well, with larger $V_0$ or $a$, is more likely to retain its bound state at a given $d$. 
My intuition about splitting the "strength" of the well into two factors with interpretable units turned out to be unhelpful in this case.
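As the answer notes, the corrected quantization condition $k = k_\infty(1 - e^{-2kd})$ (with $k_\infty = mV_0a/\hbar^2$) is solved numerically in practice. A minimal sketch using fixed-point iteration, in units where $k_\infty$ sets the scale; the parameter values below are illustrative:

```python
import math

def bound_state_k(k_inf, d, iters=200):
    """Solve k = k_inf * (1 - exp(-2*k*d)) by fixed-point iteration.

    k_inf is the isolated delta-well value m*V0*a/hbar^2; a nontrivial
    root exists only when 2*d*k_inf > 1, matching the condition derived
    in the answer. Otherwise the iteration collapses to the trivial k = 0.
    """
    k = k_inf  # start from the d -> infinity solution
    for _ in range(iters):
        k = k_inf * (1.0 - math.exp(-2.0 * k * d))
    return k

# Wall far from the well (2*d*k_inf = 2 > 1): a bound state survives,
# with k somewhat below the isolated-well value k_inf.
k_far = bound_state_k(k_inf=1.0, d=1.0)

# Wall too close (2*d*k_inf = 0.8 < 1): the iteration collapses to k = 0,
# i.e. the infinite barrier destroys the bound state.
k_near = bound_state_k(k_inf=1.0, d=0.4)
```

The iteration converges because the map's slope at the nontrivial root is below 1 whenever that root exists; a library Lambert W (e.g. scipy.special.lambertw) would give the same root in closed form.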
{ "domain": "physics.stackexchange", "id": 83488, "tags": "quantum-mechanics, homework-and-exercises, potential, schroedinger-equation, dirac-delta-distributions" }
pcl_visualization-eigen-package
Question: I have found pcl_visualization in the following: svn: uri: https://code.ros.org/svn/ros-pkg/stacks/perception_pcl_addons/tags/unstable When I try to run rosmake perception_pcl_addons I get the following error: 'pcl_visualization' depends on non-existent package 'eigen' I have eigen-config.cmake and eigen-config-version.cmake in: /opt/ros/groovy/share/eigen/cmake$ Can I generate the eigen package from this eigen-config.cmake? Or where can I find that package? I have groovy installed on my system. Cheers Originally posted by acp on ROS Answers with karma: 556 on 2013-05-30 Post score: 0 Answer: That looks like pretty old code. Anything actively developed is probably somewhere on github now. Originally posted by joq with karma: 25443 on 2013-05-31 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14354, "tags": "ros" }
Applying MVC to a validation application
Question: I've been doing MVC for a few months now using the CodeIgniter framework in PHP but I still don't know if I'm really doing things right. What I currently do is: Model - this is where I put database queries (select, insert, update, delete). Here's a sample from one of the models that I have: function register_user($user_login, $user_profile, $department, $role) { $department_id = $this->get_department_id($department); $role_id = $this->get_role_id($role); array_push($user_login, $department_id, $role_id); $this->db->query("INSERT INTO tbl_users SET username=?, hashed_password=?, salt=?, department_id=?, role_id=?", $user_login); $user_id = $this->db->insert_id(); array_push($user_profile, $user_id); $this->db->query(" INSERT INTO tbl_userprofile SET firstname=?, midname=?, lastname=?, user_id=? ", $user_profile); } Controller - talks to the model, calls up the methods in the model which queries the database, supplies the data which the views will display(success alerts, error alerts, data from database), inherits a parent controller which checks if user is logged in. 
function create_user(){ $this->load->helper('encryption/Bcrypt'); $bcrypt = new Bcrypt(15); $user_data = array( 'username' => 'Username', 'firstname' => 'Firstname', 'middlename' => 'Middlename', 'lastname' => 'Lastname', 'password' => 'Password', 'department' => 'Department', 'role' => 'Role' ); foreach ($user_data as $key => $value) { $this->form_validation->set_rules($key, $value, 'required|trim'); } if ($this->form_validation->run() == FALSE) { $departments = $this->user_model->list_departments(); $it_roles = $this->user_model->list_roles(1); $tc_roles = $this->user_model->list_roles(2); $assessor_roles = $this->user_model->list_roles(3); $data['data'] = array('departments' => $departments, 'it_roles' => $it_roles, 'tc_roles' => $tc_roles, 'assessor_roles' => $assessor_roles); $data['content'] = 'admin/create_user'; parent::error_alert(); $this->load->view($this->_at, $data); } else { $username = $this->input->post('username'); $salt = $bcrypt->getSalt(); $hashed_password = $bcrypt->hash($this->input->post('password'), $salt); $fname = $this->input->post('firstname'); $mname = $this->input->post('middlename'); $lname = $this->input->post('lastname'); $department = $this->input->post('department'); $role = $this->input->post('role'); $user_login = array($username, $hashed_password, $salt); $user_profile = array($fname, $mname, $lname); $this->user_model->register_user($user_login, $user_profile, $department, $role); $data['content'] = 'admin/view_user'; parent::success_alert(4, 'User Sucessfully Registered!', 'You may now login using your account'); $data['data'] = array('username' => $username, 'fname' => $fname, 'mname' => $mname, 'lname' => $lname, 'department' => $department, 'role' => $role); $this->load->view($this->_at, $data); } } Views - this is where I put HTML, CSS, and JavaScript code (form validation code for the current form, looping through the data supplied by controller, a few if statements to hide and show things depending on the data supplied 
by the controller). <!--User registration form--> <form class="well min-form" method="post"> <div class="form-heading"> <h3>User Registration</h3> </div> <label for="username">Username</label> <input type="text" id="username" name="username" class="span3" autofocus> <label for="password">Password</label> <input type="password" id="password" name="password" class="span3"> <label for="firstname">First name</label> <input type="text" id="firstname" name="firstname" class="span3"> <label for="middlename">Middle name</label> <input type="text" id="middlename" name="middlename" class="span3"> <label for="lastname">Last name</label> <input type="text" id="lastname" name="lastname" class="span3"> <label for="department">Department</label> <input type="text" id="department" name="department" class="span3" list="list_departments"> <datalist id="list_departments"> <?php foreach ($data['departments'] as $row) { ?> <option data-id="<?php echo $row['department_id']; ?>" value="<?php echo $row['department']; ?>"><?php echo $row['department']; ?></option> <?php } ?> </datalist> <label for="role">Role</label> <input type="text" id="role" name="role" class="span3" list=""> <datalist id="list_it"> <?php foreach ($data['it_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <datalist id="list_collection"> <?php foreach ($data['tc_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <datalist id="list_assessor"> <?php foreach ($data['assessor_roles'] as $row) { ?> <option data-id="<?php echo $row['role_id']; ?>" value="<?php echo $row['role']; ?>"><?php echo $row['role']; ?></option> <?php } ?> </datalist> <p> <button type="submit" class="btn btn-success">Create User</button> </p> </form> <script> var departments = []; var roles = []; $('#list_departments 
option').each(function(i){ departments[i] = $(this).val(); }); $('#list_it option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#list_collection option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#list_assessor option').each(function(i){ roles[roles.length + 1] = $(this).val(); }); $('#department').blur(function(){ var department = $.trim($(this).val()); $('#role').attr('list', 'list_' + department); }); var password = new LiveValidation('password'); password.add(Validate.Presence); password.add(Validate.Length, {minimum: 10}); $('input[type=text]').each(function(i){ var field_id = $(this).attr('id'); var field = new LiveValidation(field_id); field.add(Validate.Presence); if(field_id == 'department'){ field.add(Validate.Inclusion, {within : departments}); } else if(field_id == 'role'){ field.add(Validate.Inclusion, {within : roles}) } }); </script> The code above is actually code from the application that I'm currently working on. I'm also looking for some guidelines in writing MVC code, such as the things that should and shouldn't be included in views, models and controllers. How else can I improve the current code that I have right now? I've written some really terrible code before (duplication of logic, etc.), which is why I want to improve my code so that I can easily maintain it in the future. Answer: Your understanding of MVC seems to be skewed. There are slightly different implementations of MVC, so one person's definition of a certain aspect of it might be slightly different than another's. I'll explain how basic MVC works, and why I think your implementation is not a true implementation of MVC. Also, I'm going to make some comments on your separation of code, or rather, lack of separation. Methods should be separated by functionality. In other words, they should do one thing to the exclusion of all others. Understanding this will make understanding MVC a little easier. 
Models As PeeHaa said, your explanation seems to be off. I've seen other people mistakenly think of their actual databases as their model without even knowing it. This appears to be what you are doing here. What you have is not what I would call a model, but rather a mini controller with too much information. As I mentioned before there are slightly different implementations of MVC, but as I just pointed out, this more closely resembles a controller. Usually, should probably read always, a model can be thought of as an interface to the database. The database can be anything from a text document, to a JSON database, to an XML file, to a MySQL database. The model has to allow the controller to add(), remove(), fetch(), etc..., depending on the circumstances and without knowledge of what type of database it is accessing. Also, as I pointed out above with that list of methods, the method names in a model will more closely resemble an actual task you would perform on a database. register_user() is something I would expect to see on a controller. It implies that you are not just adding a user to the database, but validating, sending emails, etc.... Now, my first comments on your actual code. register_user() is doing too much. $department_id and $role_id should be defined and pushed into the array beforehand. These variables are static, not to be confused with the keyword static. In other words, they don't depend on values inside that method to change them. Their values are going to stay the same no matter what you do inside that method. Controllers As the model is an interface between the database and controller, so is the controller an interface between the model and view. And just as the view does not need to know how the controller is doing what it is, so does the controller not need to know how the model is doing what it is. In other words, controllers don't need MySQL queries. 
I only mention this because I mentioned that your model resembled a controller and did not want you to become confused. The model should handle the creation/deletion of entries; the controller should only be concerned about making the model do it and then telling the view what it has done. According to your method name create_user(), your controller appears to be doing what your model needs. I would switch these names around to avoid confusion. Aside from the actual method name being odd, the functionality is spot on. I won't go into detail to avoid confusion, but another implementation, and one that I favor, adds another level of abstraction by adding a router/dispatcher between the controller and view to handle POST, GET, SESSION, COOKIE, and authentication. CodeIgniter does something very similar here, sans the authentication, with methods such as $this->input->post(). Again, your methods are doing too much. create_user() should only "create a user", even if you change that to register_user(). It doesn't need to be concerned with encryption or form validation. These things need to be done, but in their own methods. 
//CSS: .span3 input { } <span class="span3"> <label></label> <input /> </span> Or since it appears that all those inputs use the same class, you can just ignore the class altogether and set the default CSS for the input tag. Good Luck!
{ "domain": "codereview.stackexchange", "id": 2047, "tags": "php, mvc, validation" }
Design an algorithm for efficiently computing the k smallest numbers of the form a+b*sqrt(2)
Question: Full question: Numbers of the form $a+b\sqrt{q}$, where $a$ and $b$ are nonnegative integers, and $q$ is an integer which is not the square of another integer, have special properties, e.g. they are closed under addition and multiplication. For $q=2$, some of the first few numbers of this form are given: $0 + 0\sqrt{2}, 1 + 0\sqrt{2}, 0 + 1\sqrt{2},\ldots$ Design an algorithm for efficiently computing the $k$ smallest numbers of the form $a+b\sqrt{2}$ for nonnegative integers $a$ and $b$. (Hint: systematically enumerate points.) I have the solution available. The first paragraph is: "A key fact about $\sqrt{2}$ is that it is irrational, i.e., it cannot equal $a/b$ for any integers $a,b$. This implies that if $x + y\sqrt{2} = x' + y'\sqrt{2}$, where $x$ and $y$ are integers, then $x = x'$ and $y = y'$ (since otherwise $\sqrt{2} = (x-x')/(y-y')$)." I just don't see how knowing that $\sqrt{2}$ is irrational helps to reason about the problem, especially how it's a "key fact". Answer: One way to solve this exercise is to notice that there are at least $k$ numbers of the form $a + b \sqrt{2}$ smaller than $\lfloor \sqrt{k} \rfloor + \lfloor \sqrt{k} \rfloor \sqrt{2}$ (namely, those with $0 \leq a,b \leq \sqrt{k}$). Conversely, every such number of the form $a + b \sqrt{2}$ must have $a,b \leq (1 + \sqrt{2})\sqrt{k}$. Therefore we could simply take all $O(k)$ values of the form $a+b\sqrt{2}$ with $a,b \leq (1+\sqrt{2})\sqrt{k}$, find the $k$th smallest in time $O(k)$, and then go over the list in $O(k)$ and output the $k$ smallest elements. The entire algorithm runs in time $O(k)$ (ignoring the cost of arithmetic operations). This algorithm needs to compare two numbers of the form $a + b\sqrt{2}$. This can be done as follows. Suppose we want to determine whether $a + b \sqrt{2} \leq c + d \sqrt{2}$. If $a = c$ then this happens iff $b \leq d$, and similarly if $b = d$ then this happens iff $a \leq c$. Suppose, without loss of generality, that $d > b$. 
If also $c > a$ then $a + b \sqrt{2} < c + d \sqrt{2}$, so suppose that $a > c$. Then $$ a + b \sqrt{2} \leq c + d \sqrt{2} \Leftrightarrow a-c \leq (d-b) \sqrt{2} \Leftrightarrow \frac{a-c}{d-b} \leq \sqrt{2} \Leftrightarrow \\ \left(\frac{a-c}{d-b}\right)^2 \leq 2 \Leftrightarrow (a-c)^2 \leq 2(d-b)^2. $$ What would happen to this approach if we replaced $\sqrt{2}$ by, say, 2? Then there are only $O(\sqrt{k})$ numbers smaller than $\lfloor \sqrt{k} \rfloor + 2\lfloor \sqrt{k} \rfloor$, since not all combinations of the form $a + 2b$ for $0 \leq a,b \leq \sqrt{k}$ are distinct. Indeed, we need to take the upper bound on $a,b$ to be linear in $k$ in order to find the $k$ smallest values of the form $a + 2b$. So the irrationality of $\sqrt{2}$ does make a big difference for the algorithm.
{ "domain": "cs.stackexchange", "id": 12539, "tags": "algorithms, discrete-mathematics" }
Is it possible to suffer from hyperthermia by spending too much time in hot springs?
Question: People may tend to spend time in a hot water spring. Is it possible to get hyperthermia (like a sun stroke) due to this? Answer: You betcha! We report a case of acute hepatic failure combined with disseminated intravascular coagulopathy, acute renal failure, and neurological deficit caused by heat stroke after bathing in a hot spring. Generically, the term you're looking for is non-exertional heat stroke.
{ "domain": "biology.stackexchange", "id": 1303, "tags": "human-biology" }
Nodelets: pure virtual method called, process died (-6)
Question: I get the error in the question title every time one of my Nodelet classes calls ros::requestShutdown(). I've tried debugging it in gdb, but besides the fact that the stacktrace is 5 pages (!) long, it doesn't tell me much, apart from a lot of references to boost::shared_ptr and checked_delete. Is calling ros::requestShutdown() from a Nodelet unsupported or is this a bug somewhere? Problem is the crash is causing my Nodelet destructors to not be called, which makes it hard to do proper cleanup of resources. Breaking using ctrl+c does cleanly shut down everything and calls destructors. Edit: @Lorenz, you're right, I should've included more info. System: Ubuntu Lucid (10.04.3), ROS Electric from debs. The class that calls ros::requestShutdown() does so in a callback, whether that callback is called by a subscription or as a service doesn't matter. On second thought it seems like the Bond dtor (frame 10) tries to stop/delete some timer (frame 9) which seems to try to use the ros::TimerManager (frame 7) which might have already been stopped/destructed. Stacktrace: pure virtual method called terminate called without an active exception Program received signal SIGABRT, Aborted. 0xb7fe2424 in __kernel_vsyscall () (gdb) bt #0 0xb7fe2424 in __kernel_vsyscall () #1 0xb785d651 in *__GI_raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64 #2 0xb7860a82 in *__GI_abort () at abort.c:92 #3 0xb7aaa52f in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/libstdc++.so.6 #4 0xb7aa8465 in ??
() from /usr/lib/libstdc++.so.6 #5 0xb7aa84a2 in std::terminate() () from /usr/lib/libstdc++.so.6 #6 0xb7aa9155 in __cxa_pure_virtual () from /usr/lib/libstdc++.so.6 #7 0xb7e883fa in ros::TimerManager<ros::WallTime, ros::WallDuration, ros::WallTimerEvent>::remove(int) () from /opt/ros/electric/stacks/ros_comm/clients/cpp/roscpp/lib/libros.so #8 0xb7ed9779 in ros::WallTimer::Impl::stop (this=0x8076730) at /tmp/buildd/ros-electric-ros-comm-1.6.7/debian/ros-electric-ros-comm/opt/ros/electric/stacks/ros_comm/clients/cpp/roscpp/src/libros/wall_timer.cpp:64 #9 0xb7ed9813 in ros::WallTimer::stop (this=0x806f690) at /tmp/buildd/ros-electric-ros-comm-1.6.7/debian/ros-electric-ros-comm/opt/ros/electric/stacks/ros_comm/clients/cpp/roscpp/src/libros/wall_timer.cpp:123 #10 0xb7f30cc3 in ~Bond (this=0x806f420, __in_chrg=<value optimised out>) at /tmp/buildd/ros-electric-bond-core-1.6.1/debian/ros-electric-bond-core/opt/ros/electric/stacks/bond_core/bondcpp/src/bond.cpp:91 #11 0x0804f019 in checked_delete<bond::Bond> (this=0x0) at /usr/include/boost/checked_delete.hpp:34 #12 boost::detail::sp_counted_impl_p<bond::Bond>::dispose (this=0x0) at /usr/include/boost/smart_ptr/detail/sp_counted_impl.hpp:78 #13 0xb7b17779 in boost::detail::sp_counted_base::release (this=0x80791d0, __in_chrg=<value optimised out>) at /usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp:145 #14 ~shared_count (this=0x80791d0, __in_chrg=<value optimised out>) at /usr/include/boost/smart_ptr/detail/shared_count.hpp:217 #15 ~shared_ptr (this=0x80791d0, __in_chrg=<value optimised out>) at /usr/include/boost/smart_ptr/shared_ptr.hpp:169 #16 boost::shared_ptr<bond::Bond>::reset (this=0x80791d0, __in_chrg=<value optimised out>) at /usr/include/boost/smart_ptr/shared_ptr.hpp:386 #17 ~Nodelet (this=0x80791d0, __in_chrg=<value optimised out>) at /tmp/buildd/ros-electric-nodelet-core-1.6.2/debian/ros-electric-nodelet-core/opt/ros/electric/stacks/nodelet_core/nodelet/src/nodelet_class.cpp:48 #18 0xb2ac307d 
in ~SenderNodelet (this=0x80791d0, __in_chrg=<value optimised out>) at /home/ipso/ros/stacks/nodelet_test/src/nodelets/sender_nodelet.cpp:59 #19 0xb7b1c808 in checked_delete<nodelet::Nodelet> (this=0x8076980) at /usr/include/boost/checked_delete.hpp:34 #20 boost::detail::sp_counted_impl_p<nodelet::Nodelet>::dispose ( this=0x8076980) at /usr/include/boost/smart_ptr/detail/sp_counted_impl.hpp:78 #21 0x0804e3d8 in boost::detail::sp_counted_base::release (this=0x8074180, __in_chrg=<value optimised out>) at /usr/include/boost/smart_ptr/detail/sp_counted_base_gcc_x86.hpp:145 #22 ~shared_count (this=0x8074180, __in_chrg=<value optimised out>) at /usr/include/boost/smart_ptr/detail/shared_count.hpp:217 #23 0xb7b1f820 in std::_Rb_tree<std::string, std::pair<std::string const, boost::shared_ptr<nodelet::Nodelet> >, std::_Select1st<std::pair<std::string const, boost::shared_ptr<nodelet::Nodelet> > >, std::less<std::string>, std::allocator<std::pair<std::string const, boost::shared_ptr<nodelet::Nodelet> > > >::_M_erase(std::_Rb_tree_node<std::pair<std::string const, boost::shared_ptr<nodelet::Nodelet> > >*) () from /opt/ros/electric/stacks/nodelet_core/nodelet/lib/libnodeletlib.so #24 0xb7b18eed in std::_Rb_tree<std::string, std::pair<std::string const, boost::shared_ptr<nodelet::Nodelet> >, std::_Select1st<std::pair<std::string const, boost::shared_ptr<nodelet::Nodelet> > >, std::less<std::string>, std::allocator<std::pair<std::string const, boost::shared_ptr<nodelet::Nodelet> > > >::clear ( this=0xbffff0a4, __in_chrg=<value optimised out>) at /usr/include/c++/4.4/bits/stl_tree.h:726 #25 std::map<std::string, boost::shared_ptr<nodelet::Nodelet>, std::less<std::string>, std::allocator<std::pair<std::string const, boost::shared_ptr<nodelet::Nodelet> > > >::clear (this=0xbffff0a4, __in_chrg=<value optimised out>) at /usr/include/c++/4.4/bits/stl_map.h:626 #26 ~Loader (this=0xbffff0a4, __in_chrg=<value optimised out>) at 
/tmp/buildd/ros-electric-nodelet-core-1.6.2/debian/ros-electric-nodelet-core/opt/ros/electric/stacks/nodelet_core/nodelet/src/loader.cpp:176 #27 0x0804d4e4 in main (argc=2, argv=0xbffff1c4) at /tmp/buildd/ros-electric-nodelet-core-1.6.2/debian/ros-electric-nodelet-core/opt/ros/electric/stacks/nodelet_core/nodelet/src/nodelet.cpp:277 Originally posted by ipso on ROS Answers with karma: 1416 on 2012-07-27 Post score: 2 Original comments Comment by Lorenz on 2012-07-27: Without seeing the backtrace, it's hard to say what's going wrong. Although it might not tell you much, we might be able to infer something. Answer: Nodelets are probably not supposed to call ros::requestShutdown(), but this is still a bug of some sort. Please open a defect ticket with a pointer back to this question. Originally posted by joq with karma: 25443 on 2012-07-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ipso on 2012-07-27: Done. ticket/4431 seems to be related. Comment by joq on 2012-07-27: Yes. I knew about that ticket and the lack of a clean way for a nodelet to shut itself down. Trying ros::requestShutdown() was a reasonable idea, but I think it shuts down the entire nodelet manager process (and not cleanly). Comment by ipso on 2012-07-27: @joq: well in my particular case I want to take the manager with me, but it should be done cleanly (or at least: not die in a horrible crash). Seeing the other tickets it'll probably be a while before this one is picked up. Comment by ipso on 2012-07-27: Oh, I forgot: 'this one' got ticket/5506. Comment by joq on 2012-07-27: Your use case can probably be fixed, although well-behaved nodelets should probably avoid doing that, because they don't know what other nodelets are running in that manager process.
{ "domain": "robotics.stackexchange", "id": 10391, "tags": "ros, shutdown, nodelets, nodelet" }
Why are there grooves on the inner side of a plastic nasal spray cap?
Question: My suggestion was that they are for a tamper-evident band, but there isn't one beneath the cap. Answer: That part was injection molded in an unscrewing mold, which allows fully-formed threads to be molded into a part. Ordinarily, the resulting undercuts would prevent the part from being extracted from the mold, but in an unscrewing mold a motorized shaft inside the mold unscrews the finished part from the mold after the plastic has solidified, as the two halves of the mold are being separated. Those features in the cap are engagement splines that mate into the rotating shaft that unscrews the finished part out of the threaded half of the mold cavity.
{ "domain": "engineering.stackexchange", "id": 2024, "tags": "plastic" }
Speed of a hollow cylinder at end of incline plane. Confusion about Conservation of Energy
Question: A hollow cylinder is rolling down an inclined plane. We have to get the speed of the center of mass of the cylinder at the end of the inclined plane. Energy conservation tells us: $$ mg(h + R) = \frac{1}{2}mv_{cm}^2 + \frac{1}{2}I_{cm}\omega^2 + mgR $$ Cancelling $mgR$: $$ mgh = \frac{1}{2}mv_{cm}^2 + \frac{1}{2}I_{cm}\omega^2 $$ If we assume rolling without slipping, we can now substitute $\omega = \frac{v_{cm}}{R}$, which yields, with $I_{cm} = mR^2$: $$ v_{cm} = \sqrt{gh} $$ Is this right? If it is indeed right, why do we not have to take the moment of inertia about the point of contact with the incline, using the parallel-axis theorem? Is it because the point of contact with the incline is stationary, and we would have to consider the speed of the cylinder about the point of contact with the ground, which would be $v_{g}=0$? Some clarification would be very much appreciated. Answer: The calculation is correct. Let's see what happens if we take as reference point the point of contact $P$ between the cylinder and the plane: We can use the parallel axis theorem to obtain the moment of inertia: $$I = I_{cm}+md^2 = mR^2 + mR^2 = 2mR^2. $$ The angular speed is the same, as we can see by considering the speed of the point diametrically opposite to $P$, $v_{top}=2v_{cm}$, which is at a distance $r=2R$ from $P$, so $\omega = v/r = 2v_{cm}/(2R) = v_{cm}/R.$ The kinetic energy is purely rotational now, as $P$ is instantaneously at rest: $$ \Delta K = \frac{1}{2}I\omega^2 = \frac{1}{2} 2m R^2 \frac{v_{cm}^2}{R^2} = m v_{cm}^2. $$ Then, using that $\Delta K = \Delta U = mgh$, we have: $$ m v_{cm}^2 = mgh \Longrightarrow v_{cm}=\sqrt{gh}. $$ That is, there's no change, just as expected, since the choice of axis is arbitrary.
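As a quick numerical cross-check (the values of $m$, $R$, $g$, $h$ below are invented, not from the question), for a hollow cylinder the kinetic energy splits evenly between translation and rotation, and the total matches $mgh$ exactly when $v_{cm}=\sqrt{gh}$:

```python
# Numeric sanity check with illustrative numbers: for a hollow cylinder
# (I_cm = m R^2) the kinetic energy is half translational, half rotational,
# and the energy balance m g h = KE_trans + KE_rot gives v = sqrt(g h).
import math

m, R, g, h = 1.5, 0.1, 9.81, 2.0       # assumed values, not from the question
I_cm = m * R**2                        # hollow cylinder about its own axis
v = math.sqrt(g * h)                   # claimed final speed of the centre of mass
omega = v / R                          # rolling without slipping

KE_trans = 0.5 * m * v**2
KE_rot = 0.5 * I_cm * omega**2
```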
{ "domain": "physics.stackexchange", "id": 45568, "tags": "homework-and-exercises, rotational-dynamics, energy-conservation, moment-of-inertia" }
Operator overloading: Java vs. Python
Question: Why was operator overloading included in Python and not in Java? How does one decide to include or exclude operator overloading in a programming language? It is said here that operator overloading is excluded from Java to make the language simpler (for programmers, language and VM developers, etc.). I don't understand why the same explanation doesn't work for excluding it in Python. Answer: Operator overloading is an example of syntactic sugar — a notation that doesn't give any extra power but makes programming easier. I don't know the rationale for the decisions in Java and in Python, but see below. See also this answer on stackoverflow. First, I would like to critique the page you were linking. Let's consider the points raised there one by one. Simplicity and cleanliness: It is claimed that operator overloading slows down the compiler and the JVM (!). This is nonsense. It's plain wrong for the JVM, since this is just syntactic sugar so it wouldn't affect the generated code at all. As for the compiler, the compiler is already equipped to parse infix operators and to resolve virtual methods, so supporting operator overloading won't make it more complex. On the flip side, operator overloading makes the Java code simpler and cleaner: compare a.add(b) to a+b. Avoid programming errors: It is claimed that non-standard semantics for operators might confuse programmers. Compare this to C++'s use of >> and << in streams. I think this point is valid but overstated. The same point could be made with regard to other conventions. It's up to the programmers to not abuse the capabilities of the language. JVM complexity: It is claimed that operator overloading complicates the JVM. As stated above, at the JVM level there would be absolutely no difference. Easy development of tools: It is claimed that operator overloading complicates the design of IDEs. The answer here is very similar to the answer to the first point.
The only complication with operator overloading is that you have to figure out the type of the operands, but presumably if it makes any difference you would need to do it anyway since there are also several different number types, say integer and floating point. A point which is not raised is that operator overloading might complicate optimization since certain algebraic identities hold for numbers but not in general. This criticism is wrong for the following reason: algebraic identities cannot in general be used even for floating-point computations, since order of operations makes a difference. So the compiler needs to determine the type of the operands anyway, and could easily abstain from optimizing user-defined operators. The Python philosophy is to supply the coders with the rope to hang themselves. If you're a good coder, you will use the rope for good, say for tying your bicycle in the train, rather than to hang yourself or anybody else. Python trusts programmers more than Java, by design. See the second point above. Let me end this answer by quoting an excellent answer to the stackoverflow question mentioned above: I left out operator overloading as a fairly personal choice because I had seen too many people abuse it in C++. James Gosling. Source: http://www.gotw.ca/publications/c_family_interview.htm Many C++ design decisions have their roots in my dislike for forcing people to do things in some particular way [...] Often, I was tempted to outlaw a feature I personally disliked, I refrained from doing so because I did not think I had the right to force my views on others. Bjarne Stroustrup. Source: The Design and Evolution of C++ (1.3 General Background)
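For concreteness, here is a minimal sketch of the Python side of this contrast: the `a + b` syntax dispatches to the magic method `__add__`, which is exactly the `a.add(b)` versus `a+b` difference mentioned above. The `Vec2` class is purely illustrative:

```python
# Python operator overloading in miniature: `a + b` is sugar for a.__add__(b).
class Vec2:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):            # invoked by: self + other
        return Vec2(self.x + other.x, self.y + other.y)

    def __eq__(self, other):             # invoked by: self == other
        return (self.x, self.y) == (other.x, other.y)

    def __repr__(self):
        return f"Vec2({self.x}, {self.y})"

a, b = Vec2(1, 2), Vec2(3, 4)
total = a + b                            # in Java this would have to be a.add(b)
```

Java has no equivalent of `__add__` for user types, so the method-call spelling is the only option there.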
{ "domain": "cs.stackexchange", "id": 2765, "tags": "programming-languages, api-design" }
Expansion of the interaction term in the microscopic derivation of the Lindblad equation
Question: I'm reading The Theory of Open Quantum Systems by Heinz-Peter Breuer and Francesco Petruccione, and in chapter 3, I can't understand the decomposition of the interaction Hamiltonian: We assume that $H=H_S + H_B + H_I$, where $$ H_I = \sum\limits_{\alpha}A_{\alpha}\otimes B_{\alpha} ~~~\textrm{(Eq. 3.119)}, $$ where $A_{\alpha}$ acts on the system and $B_{\alpha}$ acts on the bath only. If we assume that the system Hamiltonian $H_S$ has a discrete energy spectrum, then by the spectral theorem, we can write $$ A_{\alpha} = \sum\limits_{\epsilon,\epsilon'} \langle \epsilon | A_{\alpha} | \epsilon' \rangle \cdot |\epsilon\rangle\langle \epsilon'| = \sum\limits_{\epsilon, \epsilon'}\Pi(\epsilon)A_{\alpha}\Pi(\epsilon'). $$ However, in the book, Eq. 3.120 only considers $\epsilon$ and $\epsilon'$ values for which $\omega = \epsilon'-\epsilon$ is fixed: $$ A_{\alpha}(\omega) = \sum\limits_{\omega = \epsilon'-\epsilon}\Pi(\epsilon)A_{\alpha}\Pi(\epsilon'). $$ What is the motivation for this expansion? Why are we restricting the values of $\epsilon$ and $\epsilon'$ to have a fixed difference? Answer: All this is saying is that there are three important terms to consider: The Hamiltonian $H^{\,}_S$ for the system alone, the Hamiltonian $H^{\,}_B$ for the bath alone, and the terms $H^{\,}_I$ that couple the system and bath. The motivation I guess is that we often assume that the bath on its own thermalizes, and that the interaction between the system and bath is weak compared to their individual Hamiltonians. As for the fixed energy difference, in the final expression for the Liouvillean / Lindbladian / master equation, you will sum over all energy differences $\omega$. So it's not that you're requiring a particular fixed energy difference. It's just that it's easier to collect terms based on how much energy is absorbed from / given to the bath. I suggest reading ahead until you find the Lindblad equation with the sum over $\omega$.
Things will probably be clearer there. For one thing, grouping terms by the amount of energy exchanged between the system and bath leads to a nicer form of the Lindblad/Liouville equation. You're basically working in the basis of the Hamiltonians for the system/bath individually, looking at the system-bath terms in this basis, regrouping them by the energy exchanged, and eventually tracing out the bath. The point is that terms that exchange energy $\omega$ with the bath couple to one another in the Lindblad equation, and terms that exchange different amounts of energy are not coupled (to leading order).
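As a concrete illustration of the grouping (not from the book; the spectrum and the operator below are made up), here is the decomposition $A_{\alpha}(\omega) = \sum_{\omega=\epsilon'-\epsilon}\Pi(\epsilon)A_{\alpha}\Pi(\epsilon')$ carried out numerically in the eigenbasis of $H_S$; summing over all Bohr frequencies recovers the original operator:

```python
# Eigenoperator decomposition in the H_S eigenbasis: A(omega) keeps only
# the matrix elements of A connecting eigenstates whose energies differ
# by exactly omega. Spectrum and operator here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
E = np.array([0.0, 1.0, 3.0])            # assumed discrete spectrum of H_S
A = rng.standard_normal((3, 3))          # some system coupling operator A_alpha

def eigenoperator(A, E, omega, tol=1e-9):
    """Sum of Pi(eps) A Pi(eps') over pairs with eps' - eps == omega."""
    out = np.zeros_like(A)
    for i, eps in enumerate(E):
        for j, epsp in enumerate(E):
            if abs((epsp - eps) - omega) < tol:
                out[i, j] = A[i, j]      # Pi(eps) projects onto index i in this basis
    return out

# the set of Bohr frequencies omega = eps' - eps of H_S
omegas = sorted({ej - ei for ei in E for ej in E})
```

For this three-level spectrum there are seven distinct Bohr frequencies, and the seven eigenoperators re-sum to $A_\alpha$, which is the "completeness" behind the sum over $\omega$ in the Lindblad equation.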
{ "domain": "physics.stackexchange", "id": 92879, "tags": "quantum-mechanics, open-quantum-systems" }
What does the ℜ symbol mean?
Question: I came across the symbol $\mathfrak{R}$ in the Kalman introduction: Kalman Filter Intro $$ x\in \mathfrak{R}^m $$ It was used in this context: $$ x_{k} = Ax_{k-1} + Bu_{k-1} + w_{k-1} $$ What does $\mathfrak{R}$ stand for? Answer: It means $x$ is an $m$-vector of real values. $\mathfrak{R}$ itself is the set of real numbers (a Fraktur "R", more commonly typeset as $\mathbb{R}$).
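Concretely, the state-update equation above is plain matrix arithmetic on real vectors; a small numpy sketch (all matrix values below are invented for illustration):

```python
# The Kalman state-update line x_k = A x_{k-1} + B u_{k-1} + w_{k-1},
# with x a vector in R^m. All numbers here are made up.
import numpy as np

m = 2
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state-transition matrix
B = np.array([[0.005], [0.1]])           # control-input matrix
x_prev = np.array([0.0, 1.0])            # x_{k-1}, an element of R^m
u_prev = np.array([0.2])                 # control input u_{k-1}
w_prev = np.zeros(m)                     # process noise w_{k-1} (zero here)

x_k = A @ x_prev + B @ u_prev + w_prev   # next state, again in R^m
```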
{ "domain": "dsp.stackexchange", "id": 4033, "tags": "notation" }
Numerical relativity coordinate system displayed
Question: In a picture or video of a numerical relativity simulation, such as a neutron star merger into a black hole, how do they set up their coordinate system? Let's take the point in a video corresponding to x=10km, y=20km, z=30km, t=1ms. Spacetime itself is distorted, in a very complex way, so how do you make sense of these numbers? Website to find some nice videos: http://numrel.aei.mpg.de/images Just to clarify: There are simulations in which space-time is fixed to a well-defined metric (e.g. Kerr black hole accretion disk MHD simulation with no disk self-gravity). But for true numerical relativity, in which the shape of space-time itself has to be simulated, there is no "clean" metric. Answer: There is a huge variance in how these coordinates are set up, and very often the coordinate systems are chosen for computational convenience (having more data points in places where the metric varies a lot, and fewer far from, say, your colliding black holes), in addition to more physical choices. Once you have run the simulation and have found a solution, however, you can apply math and create any coordinate system you wish for visualizations.
{ "domain": "physics.stackexchange", "id": 8739, "tags": "general-relativity, simulations" }
Conducting ring free-falling in a variable magnetic field
Question: Consider a free-falling conducting ring oriented parallel to Earth's surface. It then enters a region with magnetic field $B = B_o(1+\lambda z)$ where $+z$ is the vertical direction upwards. By Faraday's law, an emf $E = A\lambda B_o v$ will be induced ($A$ is the area of the loop and $v$ is its instantaneous velocity), which in turn will cause resistive heating. My understanding is that, by conservation of energy, the ring will brake and reach a constant velocity (wouldn't it?). The potential energy lost will be emitted as heat due to resistive heating. But the dilemma here is, when the ring reaches a constant velocity, Newton's second law should hold, i.e., another force must oppose and cancel out gravitational force. I presumed it would be the magnetic force, but, applying $dF = i\,dl\,B$, I find that the magnetic force is directed radially outward from the ring. So the question is, which force is canceling out gravitational force and how? Answer: Your force argument is correct, and the ring does not reach a terminal velocity. It continues to free-fall as usual. So that raises another question: where are the ohmic losses coming from, if not the lost potential energy? This is actually due to the strain in the ring: the magnetic force acts radially on the ring, creating a stress in the ring's wire which produces a resulting strain. The energy stored via the strain is released through resistive dissipation.
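For scale, plugging some invented numbers into the emf expression $E = A\lambda B_o v$ from the question:

```python
# Back-of-envelope evaluation of E = A * lambda * B0 * v for a small ring.
# All numerical values below are invented for illustration.
import math

R_ring = 0.05                      # ring radius, m (assumed)
A = math.pi * R_ring**2            # loop area, m^2
lam = 2.0                          # field-gradient parameter lambda, 1/m (assumed)
B0 = 0.5                           # T (assumed)
v = 3.0                            # instantaneous fall speed, m/s (assumed)

emf = A * lam * B0 * v             # induced emf, volts
```

Even with this fairly strong field gradient, the induced emf for a centimetre-scale ring comes out at only a few tens of millivolts.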
{ "domain": "physics.stackexchange", "id": 51139, "tags": "electromagnetism, energy-conservation, power, electromagnetic-induction" }
Radio Magneto Tellurics: role of reflection
Question: In the technique of radio magneto tellurics (RMT), plane wave EM radiation incident upon Earth's surface is used to measure subsurface properties. I have read a number of different sources on the technique. Essentially, Maxwell's equations couple the electric field amplitude and magnetic field amplitude of perpendicular components of the impinging wave source (I think the coupling is generally more complicated, but the plane wave assumption limits the wave orientation to a plane and thus the z-field component is zero). Now, the ratio of the magnetic field amplitude to electric field amplitude is given as the ratio of the magnetic permeability of the medium to the electric permittivity. Since most materials have a magnetic permeability almost the same as the vacuum permeability and they are static, this ratio is a proxy measurement of the impedance of the medium. Another way to think about this is that the wave induces secondary currents in the medium, which produce their own waves of the same frequency and superimpose on the original wave to reduce the amplitudes of the electric and magnetic field components respectively. My question is this: The measurement is made at the surface of the medium of interest (the Earth-sky boundary as far as I'm concerned). In all the resources I have read, they say that because the skin depth is frequency dependent, making this measurement for a range of frequencies allows one to infer the spatial distribution of impedance. What I don't understand is how a measurement at the surface can tell you things about the subsurface, unless there are reflections. However, the resources never mention measuring reflected waves. Do I misunderstand something about light? I'm kind of thinking about causality here, because the light travels in time from the atmosphere, then interacts with the boundary and continues propagating through the medium. Is it possible that the wavelength of light determines what the "boundary" is? 
For example, we could call the boundary the region that extends below Earth's surface up to a tenth of a wavelength or so. Then for very low frequency, the "boundary" could be on the order of km. But it still seems to violate causality because the light is sensing stuff ahead of it which determines how it acts at the surface. I understand that the skin depth determines to what depth the wave can induce secondary currents. Do these secondary currents create waves that propagate back to the surface? I know there are a lot of resources on RMT, and I've looked pretty hard but none address these confusions. PS: I didn't post this on Earth Science stack exchange because I feel like the essence of the problem is physics (even though the method is used in geophysics). Answer: I would not call it reflection, although some of the energy which enters the ground does come out again in the opposite direction. Variations in the Earth's magnetic field, principally due to the solar wind interacting with the ionosphere and to lightning, result in electromagnetic waves with a whole range of frequencies being incident on the Earth's surface. Because the ground is a conductor, these incoming electromagnetic waves produce telluric currents in the ground. In turn these telluric currents produce electromagnetic waves which can be detected at the surface of the Earth (or just below it) using sensors which can detect electric and magnetic fields. The incoming electromagnetic waves penetrate the Earth's surface to a degree (the skin depth) which depends, amongst other things, on the frequency of the waves and the resistivity of the ground. An approximate value for the skin depth, at which the amplitude of the electromagnetic wave falls to roughly $e^{-1}$ of its original value, is $500\sqrt{\dfrac{\rho_{\rm a}}{f}}$ (metre), where $\rho_{\rm a}$ (ohm metre) is the apparent resistivity of the ground and $f$ (hertz) the frequency of the electromagnetic wave.
The penetration of electromagnetic waves into the ground at higher frequencies is lower, so the higher-frequency signals recorded at the surface by the probes give information about a smaller thickness of the ground than lower-frequency signals. The data which is recorded is then processed so that the signal over a given (small) frequency range is determined. The signals over a given frequency range give information about the ground down to a depth of the order of the skin depth. And then the processing and interpretation really starts.
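The quoted rule of thumb is easy to evaluate numerically; a short sketch (the resistivity is just an example value) showing that higher frequencies probe shallower depths:

```python
# Skin-depth rule of thumb from the answer: delta ~ 500 * sqrt(rho_a / f),
# with rho_a in ohm-metre and f in hertz, giving delta in metres.
import math

def skin_depth_m(rho_a_ohm_m, f_hz):
    return 500.0 * math.sqrt(rho_a_ohm_m / f_hz)

# Example: 100 ohm-metre ground at three frequencies.
depths = {f: skin_depth_m(100.0, f) for f in (10.0, 1e3, 1e5)}
```

At 100 ohm-metre, 10 Hz penetrates to roughly 1.6 km while 100 kHz only reaches about 16 m, which is why sweeping frequency sweeps the depth being probed.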
{ "domain": "physics.stackexchange", "id": 40597, "tags": "classical-electrodynamics, geophysics, radio" }
Typical laser scanner noise values
Question: I am building an application that executes graphSLAM using datasets recorded in a simulated environment. The dataset has been produced in MRPT using the GridMapNavSimul application. To simulate the laser scans one can set the bearing and range error standard deviations of the range finder. Currently I am using a dataset recorded with range_noise = 0.30m, bearing_noise = 0.15deg. Am I exaggerating with these values? Could somebody provide me with typical values for these quantities? Do laser scanner manufacturers provide these values? Thanks in advance, Answer: I think 0.3m noise is a bit exaggerated for a scanning laser rangefinder. As you saw with the Hokuyo (which is one of the cheapest LIDARs you can get) they say that it is 0.03m range "error" (they do not explicitly state this is $2\sigma$, but I have tested the noise profile myself and it is consistent with $2\sigma$). My experience with laser scanners is that the depth accuracy is very much a function of cost, while the angular accuracy is pretty small and is only a real problem when the lidar is used for around 10m+. A URG04LX (\$1,300) that I've calibrated had range standard deviations that were around $\sigma = 0.013m$ while a Velodyne HDL64E (>\$150,000) had around $\sigma=0.006m$. I can't remember the angular $\sigma$s but the data sheet says around 0.08 degrees (which I again assume is $3\sigma$).
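To put the question's settings next to the measured figures, here is a sketch of adding Gaussian range and bearing noise to an ideal scan; the helper name and the scan geometry are invented, and the default sigmas are of the order quoted above for the URG04LX:

```python
# Simulate sensor noise on an ideal laser scan: zero-mean Gaussian noise
# on range (metres) and bearing (degrees). Defaults reflect the ballpark
# URG04LX figures in the answer; names and geometry are illustrative only.
import numpy as np

def add_scan_noise(ranges, bearings, sigma_r=0.013, sigma_b_deg=0.08, seed=None):
    rng = np.random.default_rng(seed)
    noisy_r = ranges + rng.normal(0.0, sigma_r, size=len(ranges))
    noisy_b = bearings + rng.normal(0.0, sigma_b_deg, size=len(bearings))
    return noisy_r, noisy_b

ideal_r = np.full(680, 2.0)                  # a flat target 2 m away
ideal_b = np.linspace(-120.0, 120.0, 680)    # URG-04LX-like field of view
r, b = add_scan_noise(ideal_r, ideal_b, seed=1)
```

Swapping in the question's sigma_r = 0.30 makes the simulated wall visibly fuzzy at 2 m, which is roughly 20x worse than the calibrated hardware figure.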
{ "domain": "robotics.stackexchange", "id": 1136, "tags": "slam, laser, rangefinder" }
Refactored nearly identical HTML forms
Question: I have two forms in my HTML page. Both share a set of input elements (not part of any of the forms, but rather residing separately). Besides that, the forms have their own input elements as well. Each of them also has its own notification div. The process of the POST request for both forms is nearly identical, except for those specifically owned input elements and notification divs. Here is what I did to remove the code duplication:

function execute(config) {
    var initiator_token = getQueryVariable(window.location.search.substring(1), 'token');
    if (!initiator_token) {
        config.notifyEle
            .removeClass('alert-info')
            .addClass('alert-warning')
            .attr('data-i18n', config.invalidTokenMessage);
        return;
    }
    var params = {
        'initiator_token': initiator_token,
        'recipient_email': config.emailEle.val()
    };
    var lang = $('#mailLangSelector').val();
    if (lang) params['lang'] = lang;
    var message = $('#message').val();
    if (message) params['message'] = message;
    resetInputs();
    var loc = window.location;
    var url = loc.protocol + '//' + loc.host + $SCRIPT_ROOT + '/social' + config.url;
    $.ajax({
        url: url,
        type: 'POST',
        data: JSON.stringify(params),
        contentType: 'application/json',
        success: function(data, statusText, xhr) {
            config.notifyEle
                .removeClass('alert-warning')
                .addClass('alert-info')
                .attr('data-i18n', config.successMessage)
                .i18n();
        },
        error: function(xhr, statusText, errorThrown) {
            config.notifyEle
                .removeClass('alert-info')
                .addClass('alert-warning')
                .attr('data-i18n', config.failureMessage)
                .i18n();
        }
    });
}

So, the function accepts a config object, in which I pass form-specific values AND elements (I mean, jQuery elements).
A config is like:

var inviteFormConfig = {
    emailEle: $('#inviteEmail'),
    notifyEle: $('#inviteInfo'),
    invalidTokenMessage: 'sharePage.descriptions.validTokenRequired',
    url: '/invitation',
    successMessage: 'sharePage.descriptions.invitationSent',
    failureMessage: 'sharePage.descriptions.invitationFailed'
};

And the usage for a given form:

$('#inviteForm').submit(function(evt) {
    evt.preventDefault();
    execute(inviteFormConfig);
});

Now, though this works well for my use, I want to know if there lies a better pattern for such a scenario. Also, should I be passing around jQuery elements like this in config? Answer: This looks quite good. One nitpick I have is that you spend a significant number of lines showing messages; that should be a more generic notification function like

function notifyUser(messageClass, message) {
    config.notifyEle
        .removeClass('alert-info alert-warning')
        .addClass(messageClass)
        .attr('data-i18n', message);
}

And I must warn you that not putting curly braces is playing a bit with fire:

if (lang) params['lang'] = lang;

at least use a new line? Edit: I would go for an MVC approach: create a Model object that knows what fields are on the form and knows how to validate them; then a View object that knows how to style and update the form (put red borders around blank input types, show messages, etc.); and a Controller object which puts all the listeners in place and also takes care of working with the model to validate the data on the form before submitting web requests.
{ "domain": "codereview.stackexchange", "id": 10357, "tags": "javascript, jquery, html5" }
Does a linear combination of basis functions need to use eigenfunctions as the basis?
Question: Given a trial function like this one: $$\lvert\hat{\Psi}\rangle = \sum_i c_i\lvert\psi_i\rangle$$ where the trial function is expanded using exact solutions $\psi_i$ of the time-independent Schroedinger equation (as is done in the variational principle). Then, according to the variational principle, the expected energy will be an upper bound to the system's true energy ($E_0$): $$\langle\hat{\Psi}\rvert H \lvert\hat{\Psi}\rangle \geq E_0,$$ assuming it is normalised for simplicity. Hence those $\psi_i$ are obviously eigenfunctions of $H$, but the trial function is seemingly not an eigenfunction of $H$. When we do it in the real world, using atomic orbitals, or just any suitable set of basis functions, do the basis functions that we use (especially in linear variation) need to be eigenfunctions of the Hamiltonian of the system? Why this question? I am not sure how we could use the theorem otherwise, since it uses eigenfunctions to derive its result. And if they need not be, why? And how do we know that such combinations won't actually (for some unknown reason) fall below the system's ground energy? Answer: Continuing from our chat in $\hslash$, I have to concur with the other participants that you are conflating very many things together, and that that is the real source of all your confusions regarding this topic. In fact, even your title is conflating stuff and thus making your confusion apparent. However, it would go too far afield in linguistics to disentangle that. I do, however, like your own answer to this, as it shows that you have learnt some nice little things. It is clear that your end goal here is to learn a bit about the variational principle in QM as it pertains to finding approximate eigenfunctions of a system. However, the basis expansion that you have tried to write down as your first equation is already a massive impediment to the understanding of what is being done.
I am perfectly aware that in your quantum chemistry work, STO and GTO and so forth are always used in the way you are talking about, and they definitely do work. My point, however, is that if you focus too much on these trees, you will miss the forest. You should take a step back and contemplate the general situation. The kind of trial functions that you should be thinking about should look like $$\tag{1$^\prime$}\langle x\vert\psi^\prime_{(N,\{a_i\})}\rangle=N(a_1x^2+1)e^{-a_2x^2-a_3x^4}$$ The reason why this is helpful (and somewhat standard in physics textbooks covering the variational principle in QM) is that one is under no delusions that this is being expanded in any form of basis. We are making a general argument over any random kind of attempted trial function; it does not have to span the Hilbert space of possible quantum states, and obviously if it does not have enough parameters, it will be unable to actually get us to a Hamiltonian eigenstate, no matter how well we choose the parameters $\{a_i\}$. Now, you might think that by assuming the wavefunction to be normalised, you would be simplifying. However, in the context of perturbations and approximations, this is rarely helpful, and that applies here too. I have separated out the parameter $N$, and it is obvious that it is meant to be the normalising constant, found by enforcing $\langle\psi^\prime_{(N,\{a_i\})}\vert\psi^\prime_{(N,\{a_i\})}\rangle=1$. However, this forces your optimisation procedure to be really convoluted, unnaturally breaking it into two steps, and increasing the number of free parameters by one. Instead, it is far nicer to omit this normalisation constant and consider the form of the variational principle that looks like $$\tag2\frac{\langle\psi_{\{a_i^b\}}\vert\hat{\mathcal H}\vert\psi_{\{a_i^b\}}\rangle}{\langle\psi_{\{a_i^b\}}\vert\psi_{\{a_i^b\}}\rangle}\geqslant E_0$$ Note that this looks more complicated, but it is the more natural object to work with.
The theorem is true for any set of $\{a_i\}$, but we are only interested in the lowest expected energy. That is, considering the unnormalised trial wavefunctions $$\tag1\langle x\vert\psi_{\{a_i\}}\rangle=(a_1x^2+1)e^{-a_2x^2-a_3x^4}$$ we are really interested in the stationary values $$\tag3\forall\hat{\mathcal H}\exists\{a_i^b\}\in\mathbb C^n\ |\qquad\left.\frac{\partial\ }{\partial a_i}\frac{\langle\psi_{\{a_i\}}\vert\hat{\mathcal H}\vert\psi_{\{a_i\}}\rangle}{\langle\psi_{\{a_i\}}\vert\psi_{\{a_i\}}\rangle}\right|_{\{a_i\}=\{a_i^b\}}=0$$ These conditions net us a few sets of $\{a_i^b\}$ whereby the $E_t$ in $$\tag4E_t\overset{\text{def}}=\frac{\langle\psi_{\{a_i^b\}}\vert\hat{\mathcal H}\vert\psi_{\{a_i^b\}}\rangle}{\langle\psi_{\{a_i^b\}}\vert\psi_{\{a_i^b\}}\rangle}$$ are stationary with respect to minor variations in $\{a_i\}$, and we really only care for the smallest value of $E_t$ and its trial wavefunction, i.e. its set of $\{a_i^b\}$. Let us take another step back. Given any basis $\left|b_k\right>$, we can express any wavefunction in terms of the basis, i.e. $$\begin{align} \tag5\forall\left|\psi\right>\in\mathcal H\exists\{c_k\}\in\mathbb C^N\ |\qquad\left|\psi\right>&=\sum_{k=1}^N\left|b_k\right>c_k\\ \tag6&=\sum_{k=1}^N\left|b_k\right>\left<b_k\vert\psi\right> \end{align}$$ where the equality $c_k=\left<b_k\vert\psi\right>$ in Equation (6) is only true if $\left|b_k\right>$ is an orthonormal basis. If you are dealing with single atoms, the atomic orbitals can indeed be orthonormal; however, it is much more common in molecules and condensed matter that the kind of STO and GTO that you are using fail to be orthonormal, and in fact are usually overcomplete. Then the expansion coefficients would not be uniquely determined, but we can still work with them. This represents only mild complications, not major revisions to the conceptualisation. The consequence of having a basis at all, i.e.
Equation (5), means that, even without optimising, any trial function will have $c_k=c_k(\{a_i\})$ $$\tag7\forall\{a_i\}\in\mathbb C^n\exists\{c_k\}\in\mathbb C^N\ |\qquad\left|\psi_{\{a_i\}}\right>=\sum_{k=1}^N\left|b_k\right>c_k$$ It does not matter that, in the overcomplete case, there will be more than one set of $\{c_k\}$ that satisfies this, only that we can always find some set that works. Now we specialise to the situation considered in the theorem of the variational principle. As you noted in your answer, the spectral theorem guarantees that $\hat{\mathcal H}$ has a complete orthonormal eigenbasis $\left|E_k\right>$ with ground state energy $E_0$ being its minimum energy. This means that we can substitute the version of Equation (7) that looks like Equation (6) into Equation (4) combined with Equation (2) to get $$\tag8E_{t,\text{min}}=\frac{\sum_k\langle\psi_{\{a_i^b\}}\vert E_k\rangle E_k\langle E_k\vert\psi_{\{a_i^b\}}\rangle}{\sum_k\langle\psi_{\{a_i^b\}}\vert E_k\rangle\langle E_k\vert\psi_{\{a_i^b\}}\rangle}=E_0+\frac{\sum_{k\neq0}\langle\psi_{\{a_i^b\}}\vert E_k\rangle(E_k-E_0)\langle E_k\vert\psi_{\{a_i^b\}}\rangle}{\sum_k\langle\psi_{\{a_i^b\}}\vert E_k\rangle\langle E_k\vert\psi_{\{a_i^b\}}\rangle}\geqslant E_0$$ where the last sum is necessarily non-negative because $E_0$ is the minimal energy: every difference $E_k-E_0>0$ for $k\neq0$. You might find Equation (8) a little easier to understand in the familiar notation $$\tag{8$^\prime$}\frac{\sum_k E_k|c_k|^2}{\sum_k|c_k|^2}=E_0+\frac{\sum_{k\neq0}(E_k-E_0)|c_k|^2}{\sum_k|c_k|^2}\geqslant E_0$$ And how do we know that this combination won't actually (for some unknown reason) be below the system's ground energy? Note that in this derivation we have not assumed that the trial wavefunction looks like any basis function, or that it is expanded in any particular basis.
It is clear that this is just the argument that a weighted average of quantities that are all at least some minimum can itself only be greater than or equal to that minimum. There is thus no possibility of accidentally getting the trial wavefunction to have energy lower than the system's ground energy $E_0$; i.e. the inequality $E_0\leqslant E_{t,\text{min}}$ is proved by this theorem to hold in general for any kind of variation. When we do it in the real world, using atomic orbitals, or just any suitable set of basis functions, do the basis functions that we use (especially in linear variation) need to be eigenfunctions of the Hamiltonian of the system? Manifestly not. In the case of trying atomic orbitals, what you are doing is choosing the $a_i$ to now just be the $c_k$ of some basis of orbitals. In Equation (8), I have deliberately stayed generic, and thus it does not matter that your $\lvert\psi_{\{a_i\}}\rangle$ are now themselves expanded in some random basis; you just theoretically apply the unknown $\left<E_k\right|$ to them in accordance with Equation (8) and obtain the same result.
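To make the upper-bound property concrete, here is a small self-contained sketch (my own toy example, not taken from the answer above): the 1-D harmonic oscillator $H=\tfrac12 p^2+\tfrac12 x^2$ with $\hbar=m=\omega=1$, whose exact ground energy is $E_0=\tfrac12$. For the unnormalised Gaussian trial family $\psi_a(x)=e^{-ax^2}$, the Rayleigh quotient of Equation (2) works out analytically to $E(a)=a/2+1/(8a)$, so we can scan the parameter and watch every trial energy stay at or above $E_0$:

```python
# Toy illustration of the variational principle (harmonic oscillator, E0 = 1/2).
# For psi_a(x) = exp(-a x^2), the Rayleigh quotient <psi|H|psi>/<psi|psi>
# evaluates analytically to E(a) = a/2 + 1/(8a).

E0 = 0.5  # exact ground-state energy in these units

def rayleigh_quotient(a):
    """E(a) = <psi_a|H|psi_a> / <psi_a|psi_a> for psi_a(x) = exp(-a x^2)."""
    return a / 2.0 + 1.0 / (8.0 * a)

# Scan trial parameters a in (0, 5]; every value is an upper bound on E0.
trials = [0.05 * k for k in range(1, 101)]
energies = [rayleigh_quotient(a) for a in trials]
best = min(energies)

assert all(E >= E0 - 1e-12 for E in energies)  # never below the ground energy
print("best trial energy:", best)              # minimum is ~0.5, reached near a = 1/2
```

The minimum here recovers the exact ground energy only because the Gaussian family happens to contain the true ground state; a poorer trial family would still bound $E_0$ from above, just less tightly.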
{ "domain": "physics.stackexchange", "id": 97572, "tags": "quantum-mechanics, hilbert-space, wavefunction, schroedinger-equation, variational-principle" }
Print HTML tables with information from some devices
Question: I'm not sure if this is the right question but didn't come up with something else. I have this huge for{} loop which prints out some HTML Tables with some infos from some devices. This code looked totally wrong to me and it had bugs. I tried to make it look as idiomatic to JavaScript as possible (started learning JS 4 months ago) but I don't know if what I did is ok. I know my code has several lines, but they are repeating in most cases. Can anybody tell me if this code can be rewritten (or some parts of it) in a more proper way? (PS: the code is running as it is now, but I'm just trying to improve my JavaScript coding style) for (controller in k) { controller = k[controller]; try { newMessage['args']['info'][controller]['data']['dynamic']['temperature'] = newMessage['args']['info'][controller]['data']['dynamic']['temperature'] / 10; newMessage['args']['info'][controller]['data']['static']['di_ia_descr'] = newMessage['args']['info'][controller]['data']['static']['di_ia_descr'].split('*'); if (!$('#controls_' + controller).length) { $('#controls').html($('#controls').html() + '<center id="controls_' + controller + '"></center>'); } if ($('#' + controller + '_di').length) { $('#di_' + controller + '_head').html(''); $('#di_' + controller + '_head').html('<th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][6] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][7] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][8] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][9] + '</th>'); $('#di_' + controller + '_body').html(''); var data = newMessage.args.info[controller].data.dynamic; var $target = $('#di_' + controller + '_body'); if (controller === "lan_control_1") { if (data.phase1 === "up") color1 = '#D60B0B'; else color1 = '#14B514'; if (data.phase2 === "up") color2 = '#D60B0B'; else color2 = '#14B514'; if (data.phase3 === 
"up") color3 = '#D60B0B'; else color3 = '#14B514'; if (data.phase4 === "up") color4 = '#000000'; else color4 = '#000000'; } else { if (data.phase1 === "up") color1 = '#14B514'; else color1 = '#D60B0B'; if (data.phase2 === "up") color2 = '#14B514'; else color2 = '#D60B0B'; if (data.phase3 === "up") color3 = '#14B514'; else color3 = '#D60B0B'; if (data.phase4 === "up") color4 = '#000000'; else color4 = '#000000'; } var html = '<tr>' + '<td class="text-center"><font color="' + color1 + '">' + data.phase1 + '</font></td>' + '<td class="text-center"><font color="' + color2 + '">' + data.phase2 + '</font></td>' + '<td class="text-center"><font color="' + color3 + '">' + data.phase3 + '</font></td>' + '<td class="text-center"><font color="' + color4 + '">' + data.phase4 + '</font></td>' + '</tr>'; $target.html(html); } else { $('#controls_' + controller).html($('#controls_' + controller).html() + '<span style="display: inline-table;"><table id="' + controller + '_di" class="table table-condensed table-bordered" width="100%"><thead><tr><th colspan="5"><center>Voltage (' + controller + ')</center></th></tr><tr id="di_' + controller + '_head"><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][6] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][7] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][8] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][9] + '</th></tr></thead><tbody id="di_' + controller + '_body"><tr><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['phase1'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['phase2'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['phase3'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['phase4'] + 
'</td></tr></tbody></table></span>'); } if ($('#' + controller + '_do').length) { $('#do_' + controller + '_head').html(''); $('#do_' + controller + '_head').html(newMessage['args']['info'][controller]['data']['static']['out0_descr']); console.log("here15" + newMessage['args']['info'][controller]['data']['static']['out0_descr']); console.log('\nPause\n'); $('#do_' + controller + '_body').html(''); var $target1 = $('#do_' + controller + '_body'); if (controller === "lan_control_0") { if (['on', 'off'][newMessage['args']['info'][controller]['data']['dynamic']['digital_out_0']] === "off") color5 = '#14B514'; else color5 = '#D60B0B'; } else { if (['on', 'off'][newMessage['args']['info'][controller]['data']['dynamic']['digital_out_0']] === "off") color5 = '#000000'; else color5 = '#000000'; } console.log("here16" + newMessage['args']['info'][controller]['data']['dynamic']['digital_out_0']); console.log('\nPause\n'); var html1 = '<tr>' + '<td class="text-center"><font color="' + color5 + '">' + ['on', 'off'][newMessage['args']['info'][controller]['data']['dynamic']['digital_out_0']] + '</font></td>' + '</tr>'; $target1.html(html1); } else { $('#controls_' + controller).html($('#controls_' + controller).html() + '<span style="display: inline-table;"><table id="' + controller + '_do" class="table table-condensed table-bordered" width="100%"><thead><tr><th colspan="2"><center id="do_' + controller + '_head">' + newMessage['args']['info'][controller]['data']['static']['out0_descr'] + ' (' + controller + ')</center></th></tr><tr><th>Value</th></tr></thead><tbody id="do_' + controller + '_body"><tr><td>' + ['on', 'off'][newMessage['args']['info'][controller]['data']['dynamic']['digital_out_0']] + '</td></tr></tbody></table></span>'); } if ($('#' + controller + '_t').length) { $('#t_' + controller + '_head').html(''); if (controller === "lan_control_0") { $('#t_' + controller + '_head').html('<th>Temp_Amb<sup>Lab Et.-1</sup></th><th>Humidity<sup>Lab Et.-1</sup> [ % ]</th><th>' 
+ newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][0] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][1] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][2] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][3] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][4] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][5] + '</th>'); } else { $('#t_' + controller + '_head').html('<th>Temp_Amb<sup>Lab Et.1</sup></th><th>Humidity<sup>Lab Et.1</sup> [ % ]</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][0] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][1] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][2] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][3] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][4] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][5] + '</th>'); } $('#t_' + controller + '_body').html(''); $('#t_' + controller + '_body').html('<tr><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['temperature'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['humidity'] / 10 + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t0'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t1'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t2'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t3'] + '</td><td class="text-center">' + 
newMessage['args']['info'][controller]['data']['dynamic']['t4'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t5'] + '</td></tr>'); } else { $('#controls_' + controller).html($('#controls_' + controller).html() + '<span style="display: inline-table;"><table id="' + controller + '_t" class="table table-condensed table-bordered" width="100%"><thead><tr><th colspan="8"><center>Temperature (' + controller + ')</center></th></tr><tr id="t_' + controller + '_head"><th>T</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][0] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][1] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][2] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][3] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][4] + '</th><th>' + newMessage['args']['info'][controller]['data']['static']['di_ia_descr'][5] + '</th></tr></thead><tbody id="t_' + controller + '_body"><tr><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['temperature'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t0'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t1'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t2'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t3'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t4'] + '</td><td class="text-center">' + newMessage['args']['info'][controller]['data']['dynamic']['t5'] + '</td></tr></tbody></table></span>'); } var timestamp = new Date(0); 
timestamp.setUTCSeconds(parseInt(newMessage['args']['info'][controller]['data']['dynamic']['timestamp'])); if ($('#' + controller + '_state').length) { $('#' + controller + '_state').html(''); $('#' + controller + '_state').html('<p><a href="' + newMessage['args']['info'][controller]['url'] + '">' + controller + '</a>:: ' + newMessage['args']['info'][controller]['data']['dynamic']['uptime_days'] + ' days, ' + newMessage['args']['info'][controller]['data']['dynamic']['uptime_hours'] + ' hours, ' + newMessage['args']['info'][controller]['data']['dynamic']['uptime_minutes'] + ' minutes, ' + newMessage['args']['info'][controller]['data']['dynamic']['uptime_seconds'] + ' seconds; ' + '</p>'); } else { $('#controls_status').html($('#controls_status').html() + '<span id="' + controller + '_state"><p><a href="' + newMessage['args']['info'][controller]['url'] + '">' + controller + '</a>:: ' + newMessage['args']['info'][controller]['data']['dynamic']['uptime_days'] + ' days, ' + newMessage['args']['info'][controller]['data']['dynamic']['uptime_hours'] + ' hours, ' + newMessage['args']['info'][controller]['data']['dynamic']['uptime_minutes'] + ' minutes, ' + newMessage['args']['info'][controller]['data']['dynamic']['uptime_seconds'] + ' seconds; ' + timestamp + '</p></span>'); } } catch (err) { console.log('[MAINWEBSOCKET] on message error: ' + err); } } One problem that I could have is that if I ever would like to modify those tables, I won't be able without rewriting everything again. To make yourself an idea of how the HTML looks like, here is an image: Answer: The first thing you should do is use a reference to that long property chain newMessage['args']['info'][controller]['data']. You already do it in one instance: var dynamicData = newMessage['args']['info'][controller]['data']['dynamic']; Also use the dot notation instead of the bracket notation when you have a string (containing a JS identifier) as the property. 
The first line then becomes much more readable dynamicData.temperature = dynamicData.temperature / 10; You should avoid appending to .html(). This is terribly slow, as it requires the browser to build a string from the DOM and then turn the whole thing back into a DOM structure. When adding a new element use .append() instead. Also don't repeatedly get the same elements with jQuery. Store a reference and use that instead. if (!$('#controls_' + controller).length) { $('#controls').html($('#controls').html() + '<center id="controls_' + controller + '"></center>'); } becomes var $controllerControls = $('#controls_' + controller); if ($controllerControls.length === 0) { $controllerControls = $('<center id="controls_' + controller + '"></center>'); $('#controls').append($controllerControls); } // From here on use $controllerControls instead of $('#controls_' + controller) BTW, center is deprecated. Use a div (or preferably another suitable element) and use CSS to center it. Instead of hard coding the colors in your code consider defining them in your style sheet and setting classes in your JS. This also gets rid of the font elements which are also deprecated. Also, use loops where possible and avoid string concatenations, since they are also slow. For example: var html = '<tr>' + '<td class="text-center"><font color="' + color1 + '">' + data.phase1 + '</font></td>' + '<td class="text-center"><font color="' + color2 + '">' + data.phase2 + '</font></td>' + '<td class="text-center"><font color="' + color3 + '">' + data.phase3 + '</font></td>' + '<td class="text-center"><font color="' + color4 + '">' + data.phase4 + '</font></td>' + '</tr>'; $target.html(html); becomes something like: var html = ["<tr>"]; for (var i = 1; i <= 4; i++) { html.push('<td class="text-center ', data["phase" + i], i === 4 ? " black" : "", '">', data["phase" + i], '</td>'); } html.push("</tr>"); $target.html(html.join('')); With the CSS: .up { color: #14B514; } .dn { color: #D60B0B } .up.black, .dn.black { color: black !important; } #di_lan_control_1_body .up { color: #D60B0B; } #di_lan_control_1_body .dn { color: #14B514; } (This isn't an exact copy. It strongly depends on things I don't know about. For example, black is a bad choice for the class name; it should be a word describing why phase4 is different from the others.) <span style="display: inline-table;"><table>... This doesn't make much sense. For one, you aren't allowed to put a table inside a span. If you need an inline table, put display: inline-table directly on the table itself - best would be in the stylesheet. Generally: consider using a templating library to generate the HTML. It is always best to keep the JS separate from the HTML. (That's all I have time for right now.)
{ "domain": "codereview.stackexchange", "id": 18264, "tags": "javascript, beginner" }
Is it possible to use the linear velocity from an optical flow sensor along with IMU data (sensor fusion) in the ekf_localization package? How?
Question: Hi. I am using the px4flow optical flow camera module with pixhawk. It outputs linear velocity; I would like to use this somehow in the extended Kalman filter in ROS. The inputs to the robot_pose_ekf are odometry for 2D position, 3D orientation and 6D visual odometry for both 3D position and 3D orientation. Is it possible to use this package for linear velocity also, or would I have to integrate the data and use the position instead? And of course, if it is possible, how would I go about doing this? Cheers Originally posted by ros_geller on ROS Answers with karma: 92 on 2015-10-03 Post score: 2 Answer: Figured it out. There is also a Kalman filter package, called robot_localization, that allows you to add any number of sensors that you would like. Nor does it restrict your odometry input to 2D (wheel odometry). It also gives you the choice between an extended Kalman filter and an unscented Kalman filter. Originally posted by ros_geller with karma: 92 on 2015-10-06 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by elyazid on 2021-06-17: Hi, I'm working on the same project: I am using the px4flow optical flow camera, and I want to use optical flow data in the extended Kalman filter in ROS. Could you please explain how you used the optical flow data in robot_localization?
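For later readers such as the commenter above: in robot_localization, a body-frame linear-velocity sensor is normally fused through a twistN input (a geometry_msgs/TwistWithCovarianceStamped topic), so there is no need to integrate the flow velocities into positions first. A rough, hypothetical sketch of the relevant parameters — the topic names here are invented, and frames/covariances are omitted:

```yaml
# Hypothetical ekf_localization_node parameter fragment -- topic names are made up.
# Each *_config boolean vector selects, in this fixed order:
# [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az]
twist0: /px4flow/twist               # linear velocity from the optical flow module
twist0_config: [false, false, false,
                false, false, false,
                true,  true,  false,  # fuse vx and vy only
                false, false, false,
                false, false, false]

imu0: /imu/data                      # orientation and angular velocity from the IMU
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              false, false, false]
```

Check the robot_localization documentation for the exact parameter names in your release before copying this.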
{ "domain": "robotics.stackexchange", "id": 22736, "tags": "navigation, sensor-fusion, extended-kalman-filter, robot-pose-ekf, ekf-localization" }
How to calculate the pH value of a Carbonate solution?
Question: I read the similar questions suggested when submitting my question but they didn't help me How can one calculate the pH of a solution? How to calculate pH of the Na2CO3 solution given ambiguous Ka values We are asked to calculate the pH value of a Carbonate Solution with a concentration of 0.005 mol per liter They provide the numerical solution as 10.97, but not how to get it Carbonate has the molecular form CO3^(2-) I know how to calculate the pH of a strong base (which Carbonate seems to be) pH + pOH = 14 so pH = 14 - pOH, and pOH = -log_10 (concentration = 0.005 mol per liter) But I just don't get the same result as they give I even assumed Carbonate is a weak base, and so used pKb = pOH^2/0.005 with pKb = 3.60 for Carbonate which I found online, and then solved for pOH and used pH = 14 - pOH, but even then I don't get their solution I know that in water, Carbonate becomes Bicarbonate ( CO3^(2-) + H2O --> HCO3- + OH-), which then becomes Carbonic Acid (HCO3- + H2O --> H2CO3 + OH-) All other exercises about pH they gave us, I always found the same result as they do. But for Carbonate, I just don't see what am I doing wrong. What am I missing ? Thank you so much for your help Answer: The equilibrium system of interest is: $$\ce{CO3^{2-}(aq) +H2O(l)<=>HCO3^{-}(aq) +OH-(aq)}$$ Let: A represent $\ce{CO3^{2-}}$ B represent $\ce{HCO3^{-}}$ C represent $\ce{OH^{-}}$ The $pK_b$ of $\ce{CO3^{2-}}$ at 25°C is approximately 3.67. Initially, only $\ce{CO3^{2-}}$ and water are present. As an approximation, water is not included in the equilibrium constant expression, so at equilibrium we have: $$C_A=C_{Ao}-x=0.005-x$$ $$C_B=C_{Bo}+x=x$$ $$C_C=C_{Co}+x=x$$ Which can be substituted into the equilibrium constant: $$K_b=\frac{x^2}{0.005-x}=10^{-3.67}$$ Solving for $x$: $$C_C=x=9.33\times 10^{-4}$$ The pOH of this solution would be: $$\pu{pOH}=\pu{-log}\;C_C=\pu{-log}\;(9.33\times 10^{-4})=3.03$$ Finally, the pH of the solution is: $$\pu{pH}=14-\pu{pOH}=14-3.03=10.97$$
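The same calculation can be checked numerically by solving the quadratic exactly rather than by hand. A quick pure-Python sketch, using the same constants as the worked answer above (0.005 M carbonate, $pK_b \approx 3.67$ at 25 °C):

```python
import math

c0 = 0.005          # initial carbonate concentration, mol/L
Kb = 10.0 ** -3.67  # base dissociation constant of CO3^2-

# Kb = x^2 / (c0 - x)  =>  x^2 + Kb*x - Kb*c0 = 0; take the positive root.
x = (-Kb + math.sqrt(Kb * Kb + 4.0 * Kb * c0)) / 2.0  # equilibrium [OH-]

pOH = -math.log10(x)
pH = 14.0 - pOH
print(f"[OH-] = {x:.3e} mol/L, pOH = {pOH:.2f}, pH = {pH:.2f}")  # pH = 10.97
```

This reproduces the answer's intermediate value $x \approx 9.33\times10^{-4}$ and the stated pH of 10.97, and it also shows why the strong-base shortcut fails: only about a fifth of the carbonate actually picks up a proton.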
{ "domain": "chemistry.stackexchange", "id": 17064, "tags": "organic-chemistry, acid-base, aqueous-solution, ph, concentration" }
Trello list scraper with data visualization - Monthly food expenses
Question: For the last couple of months I've been working on a python script that pulls data from a specific Trello list and sums up the numeric values by list (lists are split up into months). I've worked on making my code pythonic, basic functionality, separating functionality into distinct functions. It currently only works with a specific list and board. Things I plan to work on is; better error handling with network connections and introducing unit testing. Two things I have little experience with. EDIT: the keys.txt file is formatted as follows token=token api_key=key #!/usr/bin/python from trollop import TrelloConnection import logging from math import ceil from time import strftime import seaborn as sns import matplotlib.pyplot as plt # Lots of issues with Python3. Lots of unicode, string errors, Just switched to # py2. should try to use dicts for {name: cost} and to practice using dicts # TODO: Data Visualization, Error/exception handling # TODO: clean up code, get feedback on reddit, maybe use args for spec params # TODO: set up cron job to run every month k = list() with open('keys.txt', 'r') as keys: for line in keys: tmp = line.split('=')[1] tmp = tmp.rstrip() k.append(tmp) token = k[0] api_key = k[1] # idBoard': '577b17583e5d17ee55b20e44', # idList': '577b17583e5d17ee55b20e45', # Set up basic logging logging.basicConfig(format='%(levelname)s %(message)s', level=logging.INFO, filename='DEBUG.log', filemode='w') # Establish connection conn = TrelloConnection(api_key, token) MONTHS = ['Jan', 'Feb', 'March', 'April', 'May', 'June', 'July', 'Aug', 'Sept', 'Oct', 'Nov', 'Dec'] logging.info("Iterating through boards...") def get_total_per_month(month, board_list): costs = 0.0 month = month.lower() for lst in board_list: if month in lst.name.lower(): for crd in lst.cards: costs += float(crd.name.split('-')[1]) return ceil(costs) def first_of_the_month(): day = strftime("%d") if '1' is day: # TODO: check to see if it's a new month and add a list pass def 
get_yearly_average(totals): sum = 0.0 count = 0 for month in totals: if month != 0.0: count = count + 1 sum += month # print month year_average = sum / count print 'year ave ' + str(year_average) return year_average def plot(totals, average): sns.set(style='white', font_scale=1.5) plt.title('Monthly Food Expenses') plt.xlabel('Months') plt.ylabel('Pesos') sns.barplot(x=MONTHS, y=totals) plt.show() def main(): costs = list() names = list() board = conn.get_board('BE89pW61') totals = [get_total_per_month(month, board.lists) for month in MONTHS] print totals average = get_yearly_average(totals) logging.info(totals) logging.debug('Board list: {}'.format(board.lists)) plot(totals, average) if __name__ == '__main__': main() Answer: Remove code from the global scope. You should try to keep global scope to just functions and classes. This is because adding data here means any function can change it, and so you can no longer trust its state. The only time I use globals that contain data is when I use a constant. Take MONTHS, which is a good global variable. I'd add your settings as a global constant too. You should probably use a dictionary, as then you don't have to use dodgy indexing. But, using a dictionary means that you'd have to use SETTINGS['token'], rather than SETTINGS.token. And so I'd make a new class that has this sugar: class FrozenDict(object): def __init__(self, *args, **kwargs): # Hack to bypass `__setattr__`. self.__dict__['d'] = dict(*args, **kwargs) def __getattr__(self, key): return self.d[key] def __setattr__(self, key, val): raise TypeError('FrozenDict does not support setting attributes.') def __delattr__(self, key): raise TypeError('FrozenDict does not support deleting attributes.') After this you may want to put getting settings into its own function. As you wrote: TODO: clean up code, get feedback on reddit, maybe use args for spec params This new function should allow you to use both a file and take arguments from the command line.
I won't go into how to do this, as Reddit probably did. But I'd use argparse. But back to your code. I'd change the construction of the list to be entirely in the with, as with statements don't create a new scope. And I would use a list comprehension to build the list, rather than using append. This can get you: def read_settings(): with open('keys.txt', 'r') as keys: k = [line.split('=')[1].rstrip() for line in keys] return FrozenDict({ 'token': k[0], 'api_key': k[1], 'board': 'BE89pW61' }) Nice and simple. I don't know what keys.txt is structured like, but I would recommend that you instead format it so it's: token=... api_key=... board=BE89pW61 Which would allow you to instead use a dictionary comprehension, and would allow you to not be so rigid on how the settings are provided. Using the second argument to str.split, and taking advantage of dict, you could get: def read_settings(): with open('keys.txt', 'r') as keys: d = dict(line.split('=', 1) for line in keys) return FrozenDict(d) After this I'd clean up both get_total_per_month and get_yearly_average. Instead of performing the addition yourself you can instead use sum. And to count the amount of items in an array you can use len. This allows you to change get_yearly_average to a list comprehension, and then just a sum divided by its length. And allows you to change get_total_per_month to a comprehension, passed to sum. I'd also remove the ceil in get_total_per_month, as you should change the data when outputting, not in its internal state. It also leads to up to \$\pm12\$ error: if every month is \$0.01\$ you'll say they're \$1\$, and \$12 \cdot 0.01 = 0.12\$, but \$12 \cdot 1 = 12\$, which is absurdly wrong. Finally I'd remove your prints, as you're using logging. It makes no sense to use print and Python's logging library to log the program's state. You should only use print as a means to interact with the user, such as asking if they want to use keys.txt as their settings file.
This can leave you with: #!/usr/bin/python # Lots of issues with Python3. Lots of unicode, string errors, Just switched to # py2. should try to use dicts for {name: cost} and to practice using dicts # TODO: Data Visualization, Error/exception handling # TODO: clean up code, get feedback on reddit, maybe use args for spec params # TODO: set up cron job to run every month import logging from trollop import TrelloConnection import seaborn as sns import matplotlib.pyplot as plt MONTHS = ['Jan', 'Feb', 'March', 'April', 'May', 'June', 'July', 'Aug', 'Sept', 'Oct', 'Nov', 'Dec'] class FrozenDict(object): def __init__(self, *args, **kwargs): self.__dict__['d'] = dict(*args, **kwargs) def __getattr__(self, key): return self.d[key] def __setattr__(self, key, val): raise TypeError('FrozenDict does not support setting attributes.') def __delattr__(self, key): raise TypeError('FrozenDict does not support deleting attributes.') def read_settings(): with open('keys.txt', 'r') as keys: k = [line.split('=')[1].rstrip() for line in keys] return FrozenDict({ 'token': k[0], 'api_key': k[1], 'board': 'BE89pW61' }) def get_total_per_month(month, board_list): month = month.lower() return sum( float(crd.name.split('-')[1]) for lst in board_list if month in lst.name.lower() for crd in lst.cards ) def get_yearly_average(totals): totals = [t for t in totals if t != 0] return sum(totals) / len(totals) def plot(totals, average): sns.set(style='white', font_scale=1.5) plt.title('Monthly Food Expenses') plt.xlabel('Months') plt.ylabel('Pesos') sns.barplot(x=MONTHS, y=totals) plt.show() def main(): # Establish connection conn = TrelloConnection(SETTINGS.api_key, SETTINGS.token) board = conn.get_board(SETTINGS.board) totals = [get_total_per_month(month, board.lists) for month in MONTHS] average = get_yearly_average(totals) logging.info(totals) logging.debug('Board list: {}'.format(board.lists)) plot(totals, average) if __name__ == '__main__': SETTINGS =
read_settings() # Set up basic logging logging.basicConfig(format='%(levelname)s %(message)s', level=logging.INFO, filename='DEBUG.log', filemode='w') logging.info("Iterating through boards...") main()
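As a quick sanity check of the key=value parsing suggested above, here is a small standalone sketch (parse_settings is a hypothetical helper, not part of the review); note that lines read from a file keep their trailing newline, so values still need stripping:

```python
def parse_settings(lines):
    """Parse 'key=value' lines into a dict. Splitting on the first '='
    only means values may themselves contain '=' characters."""
    return dict(
        (key, value.strip())
        for key, value in (line.split('=', 1) for line in lines if line.strip())
    )
```

Blank lines are skipped, so a trailing newline at the end of the file is harmless.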
{ "domain": "codereview.stackexchange", "id": 23186, "tags": "python, matplotlib" }
Is the three-body problem resolved when we calculate the tree of Feynman diagrams for the three bodies?
Question: When I think about the n-body problem, it seems an issue of sequentiality. Most of our equations are based on the interaction between two bodies, where in a 3-body simulation we execute the equations sequentially, affecting one body, then another, then another. But in reality, at every instant those three bodies interact via the fundamental forces, and the interactions all take each other into account in those very instants - quantum mechanics included. It seems not a problem so much as a plain oversimplification. In other words, there's (mostly) no such thing as a two-body system, what with just about everything being entangled in some way, shape or form. So my question is: if we make Feynman diagrams of every possible interaction between two bodies for a single instant, repeating for the next pair of bodies out of the three based on all the possible diagrams from the previous - a branching tree of possible outcomes - is one of the answers correct (usually the most likely set of interactions)?

Answer: Sorry, but your intuition is incorrect (but see the end). Most physical models involve differential equations with multiple bodies that simultaneously affect each other: there are no sequential effects. Numeric solution of these equations also occurs in a parallel way: calculating the force of pairs of bodies in sequence, or on one body and then another, is usually too inexact and produces simulation artefacts. In a typical numeric solution all forces at an instant are calculated and then positions and momenta are updated (often in clever ways to improve precision). Feynman diagrams are something very different from how classical mechanics works. In quantum mechanics they are summed together to get a probability amplitude rather than representing a search for which outcome happens.
The reason the 3-body problem is hard (it isn't so much unresolved as known to be unsolvable in general) is that the space of possible solutions is very complex: there are chaotic regions that can produce vastly different outcomes for inputs that are arbitrarily close to each other. There are no nice, analytic solutions that cover all of the dynamics. In many cases there are few if any conserved quantities beyond energy, momentum and angular momentum that allow us to determine what happens without performing a simulation or observing the actual system. That said, you are still on to something: one can look at individual 3-body encounters and analyse their statistics fairly well. That allows doing something like Feynman diagrams, or at least speaking of decay/scattering probabilities.
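The "all forces at an instant are calculated, then positions and momenta updated" scheme described in the answer can be sketched in a few lines. This is a toy sketch with G = 1 and made-up initial conditions, not a production integrator:

```python
import numpy as np

def accelerations(pos, masses, G=1.0):
    """Acceleration on every body from every other body, all computed
    from the same snapshot of positions (i.e. simultaneously)."""
    n = len(masses)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def leapfrog_step(pos, vel, masses, dt):
    """Kick-drift-kick leapfrog: all bodies advance together, never
    one after another."""
    vel = vel + 0.5 * dt * accelerations(pos, masses)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, masses)
    return pos, vel
```

Because each pairwise force has an equal and opposite partner, total momentum is conserved to rounding error, which is a useful check that the update really is parallel rather than sequential.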
{ "domain": "physics.stackexchange", "id": 78540, "tags": "quantum-mechanics, three-body-problem" }
What's the math for real world back-propagation?
Question: Considering a simple ANN: $$x \rightarrow f=(U_{m\times n}x^T)^T \rightarrow g = g(f) \rightarrow h = (V_{p \times m}g^T)^T \rightarrow L = L(h,y) $$ where $x\in\mathbb{R}^n$, $U$ and $V$ are matrices, $g$ is the point-wise sigmoid function, $L$ returns a real number indicating the loss by comparing the output $h$ with target $y$, and finally $\rightarrow$ represents data flow. To minimize $L$ over $U$ and $V$ using gradient descent, we need to know $\frac{\partial L}{\partial U_{ij}}$ and $\frac{\partial L}{\partial V_{ij}}$. I know two ways to do this:

- do the differentiation pointwise (and I have a hard time figuring out how to vectorize it)
- flatten $U$ and $V$ into a row vector, and use multivariate calculus (takes a vector, yields a vector) to do the differentiation

For the purpose of a tutorial or illustration, the above two methods might suffice, but say you really want to implement back-prop by hand in the real world: what math will you use to do the derivative? I mean, is there a branch, or method in math, that teaches you how to take the derivative of a vector-valued function of matrices?

Answer: There is Matrix Calculus (and I would recommend the very useful Matrix Cookbook as a bookmark to keep), but for the most part, when it comes to derivatives, it just boils down to pointwise differentiation and keeping your dimensionalities in check. You might also want to look up Autodifferentiation. This is sort of a generalisation of the Chain Rule, such that it's possible to decompose any composite function, i.e. $a(x) = f(g(x))$, and calculate the gradient of the loss with respect to $g$ as a function of the gradient of the loss with respect to $f$. This means that for every operation in your neural network, you can give it the gradient of the operation that "consumes" it, and it'll calculate its own gradient and propagate the error backwards (hence back-propagation).
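The pointwise chain-rule bookkeeping the answer describes can be made concrete for the two-layer network in the question. This is a sketch assuming a squared-error loss $L = \frac{1}{2}\|h-y\|^2$ for concreteness (the question leaves $L$ abstract), with shapes tracked in comments:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, U, V, y):
    """f = Ux, g = sigmoid(f), h = Vg, L = 0.5 * ||h - y||^2."""
    f = U @ x                       # (m,)
    g = sigmoid(f)                  # (m,)
    h = V @ g                       # (p,)
    L = 0.5 * np.sum((h - y) ** 2)  # scalar
    return f, g, h, L

def backward(x, U, V, y):
    """Chain rule, layer by layer, keeping the dimensionalities in check."""
    _, g, h, _ = forward(x, U, V, y)
    dL_dh = h - y                  # (p,)   gradient of the loss itself
    dL_dV = np.outer(dL_dh, g)     # (p,m)  since h_i = sum_j V_ij g_j
    dL_dg = V.T @ dL_dh            # (m,)   error propagated through V
    dL_df = dL_dg * g * (1 - g)    # (m,)   sigmoid'(f) = g(1-g)
    dL_dU = np.outer(dL_df, x)     # (m,n)  since f_i = sum_j U_ij x_j
    return dL_dU, dL_dV
```

A finite-difference gradient check is the easiest way to confirm the algebra before trusting a hand-written backward pass.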
{ "domain": "datascience.stackexchange", "id": 2026, "tags": "machine-learning, deep-learning, backpropagation, theory" }
Dependency management in ROS2: CMakeLists.txt, package.xml, colcon build, make ...?
Question: I'm currently trying to understand the build system used by ROS2, and one thing I can't wrap my head around is the dependency management. Following this tutorial, dependencies are added using tags in package.xml. But for building the code, cmake's find_package() is used? Why do I need both? And what benefit is colcon build adding here, can't I just build the package using make? Second, what's the preferred way to install dependencies? Take for example this package. According to its package.xml file it requires - among others - the package camera_info_manager. How do I install it? Does ROS2 ship with a package manager that allows installation of packages from a remote source (like npm install or vcpkg install)? Can rosdep do the job, or do I have to run sudo apt-get install ros-eloquent-camera-info-manager and cross my fingers that all the packages I need are in the dpkg source?

Originally posted by eloquent-fox on ROS Answers with karma: 25 on 2020-08-26

Post score: 2

Original comments

Comment by gvdhoorn on 2020-08-26: Why do I need both? Related I believe (although for ROS 1, but the same/similar infrastructure is used): #q217475 and #q215059.

Answer: dependencies are added using tags in package.xml. But for building the code, cmake's find_package() is used?

In order to be able to automate packaging as well as determine inter-package dependencies, those need to be declared in a machine-readable format. That is why they are present in the manifest files package.xml. In CMake you need to find other packages. If the names of your build dependencies align perfectly with the names of the CMake config files, you might be able to use convenience functions like ament_auto_find_build_dependencies. But in reality there are often mismatches between rosdep key names (the ones in the package.xml file) and the CMake config names. Therefore it is very common for them to be reiterated in CMake.
And what benefit is colcon build adding here, can't I just build the package using make?

Any ROS package (which in ROS 2 could use any build system, like CMake, Python setuptools, etc.) can be built using its native build tool. So yes, you can invoke just cmake && make && make install on a ROS package using CMake (even if it uses ament_cmake or, in ROS 1, catkin). The "problem" with this approach is scalability. First, if you want to build multiple packages you need to manually determine in which order you need to build them. Second, if you want to build more than one package it will require a lot of manual labor. Pretty much all colcon is doing for you is figuring out the dependency graph and invoking the necessary commands to build each package based on the build system it uses (while also leveraging parallelism where possible to speed up the process).

According to its package.xml file it requires - among others - the package camera_info_manager. How do I install it? Does ROS2 ship with a package manager that allows installation of packages from a remote source?

If there are binary packages for the platform you are interested in (e.g. Ubuntu) you can install those. E.g. for Ubuntu we create debs which you can install using apt. For Windows, Microsoft builds chocolatey packages which you can install with choco. So ROS uses existing package managers rather than inventing its own. Besides that, you can always build dependencies from source if either binary packages aren't available on your platform or you want a different version. When adding one dependency to your workspace it might require additional recursive dependencies. rosinstall_generator is a tool which helps you get the source of all recursive dependencies so that you can build them from source.

Originally posted by Dirk Thomas with karma: 16276 on 2020-08-26

This answer was ACCEPTED on the original site

Post score: 3

Original comments

Comment by lukicdarkoo on 2020-08-27: Maybe offtopic, but I am curious.
So ROS uses existing package managers rather than inventing its own.

With rosinstall_generator, vcstool, colcon and rosdep one can build one's own package manager for ROS2. A primitive package manager for ROS2 could look something like this: https://gist.github.com/lukicdarkoo/d03196b15367b804b27c142dad8644e9 Is there any specific reason it is not done already? Of course, there is apt, but with ROS' package manager, one could install packages on unsupported operating systems.

Comment by tfoote on 2020-08-27: There are many many corner cases in package managers. How do you recover from errors? How do you check versions and make sure it's reproducible on two different machines? And that requires you to build from source for every package on every deployment, which can take a long time for a large installation. And how do you upgrade a specific package? There are a lot of package management tools out there and they have evolved and grown. We have support for apt, gentoo, openembedded, chocolatey, conda, snaps, docker images, and soon rpms. And support building from source on almost any platform. We do this by having the low-level meta information available and tools that can leverage that and build on each other. A package manager is not something that's specific to robotics, and consequently we shouldn't reinvent the wheel, but learn from the many decades of development which have gone into the existing package managers and leverage their expertise.

Comment by lukicdarkoo on 2020-08-27: I am not arguing that ROS' package manager is supposed to replace apt and similar, but to complement them. There are many packages that are not published to apt, or there is no specific package version available, or the package is available for a new ROS distro even though it is compatible. And how do you upgrade a specific package? Use vcstool to pull new code and rebuild it with colcon (maybe also re-run rosinstall_generator and rosdep). Or delete the package and install it again.
How do you check versions rosinstall_generator generates .repo with the branch names, maybe one could specify hashes instead. How do you recover from errors? But I see your point. The error recovery may be the hardest to implement. It is very hard to guarantee the packages can be compiled smoothly on different operating systems. Comment by gvdhoorn on 2020-08-28: The build-pkgs-from-source helper is a nice idea @lukicdarkoo, but tbh, I'd rather we avoid having people build packages from source as much as possible. Depending on rosdep to install dependencies is possible, but it's not sufficiently robust, as many pkgs don't state their dependencies properly. In addition, too many users have development environments which are not clean or sane, leading to all sorts of problems. Installing pkgs using apt is much more of a hardened process. As @tfoote mentions, platform pkg managers have decades of development behind them, which especially covers all sorts of corner cases and tries to maintain transactional application of updates (at the package level, not across perhaps). We shouldn't want to replicate all of that.
{ "domain": "robotics.stackexchange", "id": 35466, "tags": "ros, ros2, colcon" }
(Leetcode) Snakes and Ladders
Question: This is a Leetcode problem -

On an \$N\$ x \$N\$ board, the numbers from 1 to N * N are written boustrophedonically (starting from the bottom left of the board), and alternating direction each row. For example, for a 6 x 6 board, the numbers are written as follows -

You start on square 1 of the board (which is always in the last row and first column). Each move, starting from square x, consists of the following -

You choose a destination square S with number x+1, x+2, x+3, x+4, x+5, or x+6, provided this number is <= N * N. (This choice simulates the result of a standard 6-sided die roll, ie., there are always at most 6 destinations, regardless of the size of the board.) If S has a snake or ladder, you move to the destination of that snake or ladder. Otherwise, you move to S.

A board square on row r and column c has a "snake or ladder" if board[r][c] != -1. The destination of that snake or ladder is board[r][c].

Note that you only take a snake or ladder at most once per move; if the destination to a snake or ladder is the start of another snake or ladder, you do not continue moving. (For example, if the board is [[4,-1],[-1,3]], and on the first move your destination square is 2, then you finish your first move at 3 because you do not continue moving to 4.)

Return the least number of moves required to reach square N * N. If it is not possible, return -1.

Note -

2 <= board.length = board[0].length <= 20

board[i][j] is between 1 and N * N or is equal to -1.

The board square with number 1 has no snake or ladder.

The board square with number N * N has no snake or ladder.

Example 1 -

    Input: [
    [-1,-1,-1,-1,-1,-1],
    [-1,-1,-1,-1,-1,-1],
    [-1,-1,-1,-1,-1,-1],
    [-1,35,-1,-1,13,-1],
    [-1,-1,-1,-1,-1,-1],
    [-1,15,-1,-1,-1,-1]]

    Output: 4

Explanation - At the beginning, you start at square 1 [at row 5, column 0]. You decide to move to square 2, and must take the ladder to square 15.
You then decide to move to square 17 (row 3, column 5), and must take the snake to square 13. You then decide to move to square 14, and must take the ladder to square 35. You then decide to move to square 36, ending the game. It can be shown that you need at least 4 moves to reach the N*N-th square, so the answer is 4.

I would like to have a performance review of my solution and would also like to know whether I could make it more efficient. Here is my solution to this challenge (in Python 3) -

    import collections  # needed for collections.deque below


    # Uses Breadth First Search (BFS)
    def snakes_and_ladders(board):
        """
        :type board: List[List[int]]
        :rtype: int
        """
        board_2 = [0]
        rows, cols = len(board), len(board[0])
        row = rows - 1
        while row >= 0:
            for col in range(cols):
                board_2.append(board[row][col])
            row -= 1
            if row >= 0:
                for col in range(cols - 1, -1, -1):
                    board_2.append(board[row][col])
                row -= 1
        visited = [0 for i in range(len(board_2))]
        stack = collections.deque()
        stack.append([1, 0])
        while stack:
            current_index, current_dist = stack.popleft()
            for i in range(1, 7):
                next_index = min(rows * cols, current_index + i)
                if board_2[next_index] != -1:
                    next_index = board_2[next_index]
                if next_index == rows * cols:
                    return current_dist + 1
                if visited[next_index] == 0:
                    visited[next_index] = 1
                    stack.append([next_index, current_dist + 1])
        return -1

Answer: Type Hinting

You were halfway there! Good job having the type of values accepted and returned in the method docstring. Now, you can use type hints to show in the header of the method what values are accepted and returned, as follows:

    from typing import List

    def snakes_and_ladders(board: List[List[int]]) -> int:

Consistent Spacing

This

    for i in range(1,7):

should be this

    for i in range(1, 7):

You have good spacing in the rest of your code, but you should stay consistent and apply this spacing everywhere. The same for this line: stack.append([1,0]) -> stack.append([1, 0])

Magic Numbers

We're coming back to this line again

    for i in range(1, 7):

What is 7 supposed to represent?
The max number of rows or columns? What if you have to change it later on to apply to smaller/bigger snakes and ladders boards? I would advise using a variable to hold this value, naming it accordingly.

List Comprehension

You use lots of loops with one line in them, particularly when appending to lists. Luckily, you can add two lists together and it will concatenate them. From this

    for col in range(cols):
        board_2.append(board[row][col])

to this

    board_2 += [board[row][col] for col in range(cols)]

You can do the same for the next loop a couple lines down. From this

    for col in range(cols - 1, -1, -1):
        board_2.append(board[row][col])

to this

    board_2 += [board[row][col] for col in range(cols - 1, -1, -1)]
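As an aside, the boustrophedon flattening at the top of snakes_and_ladders can also be pulled out into its own small function (a sketch, not from the original review), which makes it easy to test in isolation:

```python
def flatten_boustrophedon(board):
    """Flatten an N x N board into square order: index k holds square k.
    The bottom row runs left-to-right, the next row right-to-left, etc."""
    cells = [0]  # index 0 is unused so that square k sits at cells[k]
    for offset, row in enumerate(reversed(board)):
        cells += row if offset % 2 == 0 else row[::-1]
    return cells
```

Snake/ladder markers (-1 or destinations) are carried along unchanged, so this drops straight into the BFS.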
{ "domain": "codereview.stackexchange", "id": 36425, "tags": "python, performance, python-3.x, programming-challenge, breadth-first-search" }
Create a map of a building with kinect only?
Question: Hi, I want to mount a Kinect on our Coppa Robot. I'm still new to ROS etc. so my question is: will it be "enough" to create a map of a building? If I understood it correctly, TurtleBot is doing that and a lot more using Kinect only? So I'd just have to use GMapping to create a map and later amcl to estimate the position? That's it? (sounds a bit too easy) ;) Update: it is possible only if you have really good odometry. Without it you can forget it (amcl might still work but gmapping won't). Originally posted by Cav on ROS Answers with karma: 345 on 2012-01-12 Post score: 1 Answer: You'll need the pointcloud_to_laserscan and slam_gmapping packages to accomplish this. You should have a look at turtlebot_navigation gmapping_demo.launch as a good example of how to accomplish what you want. You can find additional help in a number of questions on this board, including here. Originally posted by DimitriProsser with karma: 11163 on 2012-01-12 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by Cav on 2012-01-12: Great, now I just need to wait for the kinect to arrive :) Comment by DimitriProsser on 2012-01-12: I do believe so. I've never done it myself, but there's a lot of support for it on this forum. Comment by Cav on 2012-01-12: Ok, thanks. So Kinect is enough to accomplish all of that?
{ "domain": "robotics.stackexchange", "id": 7867, "tags": "navigation, mapping, kinect" }
constraint on scaling dimension
Question: How can we show that for any scalar operator $\Delta\geq1$ (where $\Delta$ is the scaling dimension)? Where can I find a reference for reading where it comes from?

Answer: This is a consequence of the Lehmann spectral representation for a physical scalar operator. The two point function of this operator (the expected value of the operator with its conjugate) can be written as an integral over propagators: $$ \langle \bar{\phi}(p)\phi(p') \rangle = (2\pi)^d\delta^d(p-p') \int_0^\infty {\rho(s)\over p^2 - s + i\epsilon} ds $$ Where each propagator falls off as ${1\over x^2}$ at short distances in 4 dimensions, and $\rho(s)>0$ for all s (because of Hilbert space positivity--- this is the norm of a state, namely $||\phi(p)|0\rangle||$). A superposition of positive propagators falling off as ${1\over p^2}$ with positive coefficients cannot produce a falloff at large p which is faster than ${1\over p^2}$. This means that the asymptotic scale dimension of the scalar operator can't be less than 1 in 4 dimensions, it can't be less than 1/2 in 3 dimensions, and it can't be negative in 2 dimensions. This is not exactly mathematically true, because you can engineer a spectral weight which is growing near s=0 as a power law, to produce faster than 1/p^2 falloff. But it is physically true anyway, because such a growth requires an infinite number of particle species at p=0, which is inconsistent with the usual idea that a quantum field theory has a finite number of elementary fields, with a finite thermal entropy. The way to understand this is that superposing any finite tower of particles with positive spectral weights always leads to 1/p^2 falloff or slower, and a ${1\over x^a}$ propagator with $a\le 2$. The Källén-Lehmann spectral representation is a standard field theory result; it is found in most standard textbooks. The original paper is reprinted in Schwinger's reprint volume "Quantum Electrodynamics".
{ "domain": "physics.stackexchange", "id": 5183, "tags": "quantum-mechanics, quantum-field-theory, conformal-field-theory" }
Number of taps needed in an FIR filter to remove DC
Question: An FIR filter is being used to remove the DC of an ECG. The sampling rate is 500 Hz. How many taps would said FIR filter require (theoretically and practically) to filter out the DC? P.S. the filter would be a high-pass filter and would not remove the 1 Hz fundamental frequency component of the ECG.

Answer: See How many taps does an FIR filter need? In your case you'd need more than 1000 taps depending on the allowable ripple, as your cut-off frequency is less than fs/500.

Alternatives:

- use an IIR; a simple order-1 DC removal filter could work great
- average your signal and subtract the average in order to remove the DC
- Rick Lyons proposes a clever implementation of an FIR DC removal filter using cascaded integrator-comb filters here: https://www.dsprelated.com/showarticle/58.php
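The order-1 IIR alternative can be sketched as the standard DC blocker y[n] = x[n] - x[n-1] + a·y[n-1], with the pole a just inside the unit circle (this particular form is an assumption; the answer only names the general approach):

```python
def dc_blocker(x, alpha=0.995):
    """First-order IIR DC-removal filter: y[n] = x[n] - x[n-1] + alpha*y[n-1].
    Zero at DC (z = 1), pole at z = alpha just inside the unit circle, so a
    constant offset decays away while low-but-nonzero frequencies pass."""
    y = []
    x_prev = 0.0
    y_prev = 0.0
    for sample in x:
        y_cur = sample - x_prev + alpha * y_prev
        y.append(y_cur)
        x_prev = sample
        y_prev = y_cur
    return y
```

Feeding it a pure DC input shows the offset decaying geometrically as alpha**n; moving alpha closer to 1 narrows the notch at DC (sparing the 1 Hz ECG fundamental) at the cost of a longer settling time.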
{ "domain": "dsp.stackexchange", "id": 8072, "tags": "filters, finite-impulse-response" }
Black hole singularity in loop quantum gravity
Question: How is the singularity of a black hole treated in Loop Quantum Gravity? Does it go away? And if it does, what's after the event horizon?

Answer: There has been some funny progress recently, as described in a paper by Ashtekar, Olmedo and Singh, about which Rovelli wrote this article: https://physics.aps.org/articles/v11/127 Apparently, loop quantum gravity predicts that evolving black holes first shrink to Planck-sized objects, then turn into white holes. These are basically the time-reverse of black holes, objects that spit energy and matter out into their environment. LQG seems to like this idea of replacing singularities as they occur in GR by 'bounces', as it also predicts a Big Bounce, instead of a Big Bang, as the origin of our current universe.

EDIT: to more precisely answer the original question, the actual singularity within the black hole indeed goes away in LQG, and is replaced with some small region in which the Einstein equations are corrected by quantum effects. In other words, there is a small ball of quantum gravity 'stuff' within the horizon, not a pointlike singularity.
{ "domain": "physics.stackexchange", "id": 64084, "tags": "black-holes, quantum-gravity, singularities, loop-quantum-gravity" }
Is there any scenario in which the size of a molecule increases due to an increase in temperature?
Question: Take an ice cube for example. Heat is applied in a closed container until it is vaporized completely. Will the molecule's size be larger (on average)? Is there a substance that you know of that has odd behaviour, such as changing molecular size due to temperature change? Obviously there would be more pressure in the case of $H_2O$. Have you heard of any substances that are exceptions to this rule? I would love to hear about it if you have. If this question hasn't quite boggled your mind or you are having trouble thinking of an exception, let's take an average diamond at about room temperature (something with entropy, i.e. not a perfect lattice) and say that this is a single macro-molecule, because every carbon atom present in the diamond shares a bond with another carbon in the diamond. Pretend we heat said diamond up a bit or let it sit for a while. Are covalent bonds being broken and reformed as time goes on? If we heated it up enough without breaking the lattice completely, would the diamond itself, the molecule as a whole, increase in size at all? Or would the bonds just be broken more often and reformed, and the size change would be so negligible that it could hardly be considered an increase in volume?

Answer: First, let me remark that "the size of a molecule" is not particularly well-defined. I assume in the following that you are interested in bond lengths (which are averages over time of distances between bonded atoms), as seems to be the case in your example. In the simplest case, you can consider an isolated diatomic molecule (think: gas-phase N2), and we suppose we're not going to a temperature high enough to ionize it or otherwise break its bond. Because the interatomic potential is asymmetric, as the temperature increases and the system is allowed to explore more of the potential energy curve (climb higher up the well), the average bond length will increase and not stay constant.
So, for a single bond in an isolated molecule, it is expected that the bond lengthens with temperature. Now, one case where the overall "size expansion" can be clearly studied is that of crystalline materials. In solids, the evolution of volume as a function of temperature is named thermal expansion, and is characterized by the parameter of the same name, defined as: $$\large \alpha = \frac{1}{V} \frac{\partial V}{\partial T}$$ Because bonds grow longer with increasing temperature, most materials display a positive thermal expansion, i.e. they expand with temperature. This is, for example, the case for diamond, with $\alpha \approx +10^{-6} \text K^{-1}$. However, some materials display the opposite behavior, and shrink upon temperature increase. This phenomenon is called negative thermal expansion (NTE), and is rather uncommon.
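As a small numerical illustration of the definition above (a sketch; only the diamond value $\alpha \approx 10^{-6}\ \text{K}^{-1}$ comes from the answer, and the 100 K rise is an arbitrary choice), integrating $\mathrm{d}V/V = \alpha\,\mathrm{d}T$ at constant $\alpha$ gives $V = V_0\, e^{\alpha \Delta T}$:

```python
import math

def volume_after_heating(v0, alpha, delta_t):
    """V = V0 * exp(alpha * dT); for alpha * dT << 1 this is
    approximately V0 * (1 + alpha * dT)."""
    return v0 * math.exp(alpha * delta_t)

# Diamond: alpha ~ +1e-6 per kelvin, heated by 100 K
v1 = volume_after_heating(1.0, 1e-6, 100.0)
```

For diamond heated by 100 K the relative volume change is only about 0.01%, which is why the question's intuition that the change "could hardly be considered an increase in volume" is not far off in practice.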
{ "domain": "chemistry.stackexchange", "id": 6575, "tags": "physical-chemistry, molecules, temperature" }
ROS2: Following basic service/client tutorial, executables don't seem to install
Question: For a bit of context, I am using Ubuntu 20.04 with ROS2 Foxy. I followed this tutorial: https://docs.ros.org/en/foxy/Tutorials/Beginner-Client-Libraries/Writing-A-Simple-Py-Service-And-Client.html

To recap everything I have done - open a new terminal:

    source /opt/ros/foxy/setup.bash
    mkdir -p dev_ws/src
    cd dev_ws/src
    ros2 pkg create --build-type ament_python py_srvcli --dependencies rclpy example_interfaces

which outputs:

    going to create a new package
    package name: py_srvcli
    destination directory: /home/user/Stage_ros/test/dev_ws/src
    package format: 3
    version: 0.0.0
    description: TODO: Package description
    maintainer: ['user <bastian.muratory@gmail.com>']
    licenses: ['TODO: License declaration']
    build type: ament_python
    dependencies: ['rclpy', 'example_interfaces']
    creating folder ./py_srvcli
    creating ./py_srvcli/package.xml
    creating source folder
    creating folder ./py_srvcli/py_srvcli
    creating ./py_srvcli/setup.py
    creating ./py_srvcli/setup.cfg
    creating folder ./py_srvcli/resource
    creating ./py_srvcli/resource/py_srvcli
    creating ./py_srvcli/py_srvcli/__init__.py
    creating folder ./py_srvcli/test
    creating ./py_srvcli/test/test_copyright.py
    creating ./py_srvcli/test/test_flake8.py
    creating ./py_srvcli/test/test_pep257.py

Then cd .. (now I am in /dev_ws):

    rosdep install -i --from-path src --rosdistro foxy -y
    #All required rosdeps installed successfully

    colcon build --packages-select py_srvcli
    Starting >>> py_srvcli
    --- stderr: py_srvcli
    listing git files failed - pretending there aren't any
    ---
    Finished <<< py_srvcli [2.44s]

    Summary: 1 package finished [2.89s]
    1 package had stderr output: py_srvcli

My question is: why is there an error with git, while I never used git anywhere in this tutorial? Furthermore, I then sourced the install directory:

    . install/setup.bash

and ran:

    ros2 run py_srvcli service

which then told me no executables found ... Am I missing something?
EDIT: I have already added the entry points. Here is my setup.py, as it might help:

    from setuptools import setup

    package_name = 'py_srvcli'

    setup(
        name=package_name,
        version='0.0.0',
        packages=[package_name],
        data_files=[
            ('share/ament_index/resource_index/packages',
                ['resource/' + package_name]),
            ('share/' + package_name, ['package.xml']),
        ],
        install_requires=['setuptools'],
        zip_safe=True,
        maintainer='user',
        maintainer_email='bastian.muratory@gmail.com',
        description='TODO: Package description',
        license='TODO: License declaration',
        tests_require=['pytest'],
        entry_points={
            'console_scripts': [
                'service = py_srvcli.service_member_function:main',
                'client = py_srvcli.client_member_function:main',
            ],
        },
    )

and finally, here is the tree from the dev_ws

Originally posted by Bastian2909 on ROS Answers with karma: 68 on 2022-06-27

Post score: 0

Original comments

Comment by harshal on 2022-06-27: Few questions, none to answer your git query: Have you created the service node as per the tutorial (2)? If yes, have you added an entry point for the script (2.2)? If yes, does the node run when instead of "ros2 run py_srvcli service" you type in "service" in the cli?

Comment by Bastian2909 on 2022-06-27: Hello, first of all thank you for taking the time to read my question. I created the node following the tutorial and yes, I had already added the entry points (I added it in the edit of my post). However I'm not sure about what you mean by typing service in the cli?

Comment by harshal on 2022-06-27: I understand that you get "No executable found" when you enter "ros2 run py_srvcli service" in terminal. What happens when you enter "service" in terminal?... so, type service and press enter.

Comment by Bastian2909 on 2022-06-27: My apologies, I didn't know this was a command. I typed "service" and nothing happened, not even an error message. The terminal is still running the command but won't display anything, so I had to use ctrl-C to stop it.
Comment by harshal on 2022-06-27: No worries, and no, "service" is not the command that we refer to here. It is the name of the executable that you created (I am facing a similar issue here). If you run "service", and then in a separate terminal run "ros2 service list", you should see the name of the service that you created. If yes, you could use the above workaround till you find better guidance in this regard.

Comment by gvdhoorn on 2022-06-27: Please see colcon/colcon-core#518. I believe that's the same issue reported.

Comment by Bastian2909 on 2022-06-29: Thank you, that was exactly what I needed!

Answer: As mentioned by gvdhoorn, an update of colcon core seems to have changed where files are installed when using colcon build.

To solve my issue I used [THIS IS OUTDATED]:

    colcon build --symlink-install

instead of plain colcon build, and everything worked well again. Thank you all for helping me!

EDIT: the previous solution was a workaround, as mentioned by gvdhoorn. Another way to fix this has been found by Guillaumebeuzeboc (https://github.com/colcon/colcon-core/issues/518): in the setup.cfg replace

    [develop]
    script-dir=$base/lib/my_package
    [install]
    install-scripts=$base/lib/my_package

by

    [develop]
    script_dir=$base/lib/my_package
    [install]
    install_scripts=$base/lib/my_package

Originally posted by Bastian2909 with karma: 68 on 2022-06-29

This answer was ACCEPTED on the original site

Post score: 1

Original comments

Comment by gvdhoorn on 2022-06-29: Note: this is a work-around for now. This is not a proper solution.
{ "domain": "robotics.stackexchange", "id": 37810, "tags": "ros, ros2, service, git" }
What does it mean if we disable K-rule in Agda?
Question: TL;DR: Can I say, "The K-rule in Agda enables people to match $ \forall a.a \equiv a $ with $ \mathit{refl} $"?

In https://agda.readthedocs.io/en/v2.5.4.1/language/without-k.html#without-k, the K-rule is introduced as an implicit rule and it's enabled by default. If I understand it correctly, it means a parameter of type $ \forall a.a \equiv a $ can be matched with $ \mathit{refl} $. If we disable the K-rule, what will happen? What kind of code is it going to prevent me from writing? Because we can always construct $ \forall a . a \equiv a $, even without K, we can always get an instance of $ T $ by passing $ \mathit{refl} $ to any function of type $ \forall a.a \equiv a \rightarrow T $. Agda's doc has given me an example which indeed shows a circumstance that can only work with K:

    K : {A : Set} {x : A} (P : x ≡ x → Set) → P refl → (x≡x : x ≡ x) → P x≡x
    K P p refl = p

In this code, if we can pattern match x≡x with refl, P refl can be trivially equivalent to P x≡x (but without K, we can't). So does that mean: "K enables people to match $ \forall a.a \equiv a $ with $ \mathit{refl} $"? I didn't find the answer in the Agda doc. If we disable K, will the semantics of Agda's equality type (CH-ISO) change?

Answer: Axiom K is related to "Uniqueness of Identity Proofs", which says that any two proofs of equality are themselves equal to each other (i.e. are both Refl). Agda doesn't include K as an axiom, but it (and UIP) can be proved as a theorem using Agda's dependent pattern matching. Axiom K is inconsistent with homotopy type theory, where the univalence axiom provides ways of constructing equalities other than refl. For example, if there are two functions that are pointwise equal, univalence asserts that there is an equality term equating them, but it cannot be refl if the functions have different implementations. All removing K from Agda does is add restrictions to pattern matching, making fewer matches typecheck.
The runtime semantics are unchanged, and no new programs typecheck. For some excellent introductions on the subject, I recommend: Pattern Matching without K Unifiers as Equivalences Both of these have more detailed versions in JFP if you have access to a subscription.
{ "domain": "cs.stackexchange", "id": 21344, "tags": "proof-assistants, agda" }
What is the minimum temperature difference which can be measured?
Question: All real things have temperature. Temperature can be measured in various ways. We have reached great precision in measuring changes in time and space; however, I am not sure to what extent a change in temperature can be measured. I searched the Internet but found no relevant results. My question is: with what maximum precision can a change in temperature be measured? Is there any limit to the precision with which a change in temperature can be measured? PS: I am getting suggestions that my question is probably a duplicate, but here is the difference: I am not asking about the minimum attainable temperature. I am asking with what maximum precision a change in temperature can be measured. For example: if I take two glasses of water, or anything suitable, under the same constraints and heat them simultaneously, and I stop heating one of the glasses a few seconds or minutes or hours before the second glass, then there will be a temperature difference. Now, how sensitive are our measuring instruments to that change in temperature? If I heat one glass for 30 minutes and the second for 31 minutes or 30.5 minutes, then the first glass will show temperature T and the other glass will show T + ΔT; now, how fine can that ΔT be? Answer: For a bulk substance under relatively normal conditions, as you've asked about in your edited question, the "regular" standard is approximately a millikelvin (a change of 0.001 °C) using platinum RTDs (resistive temperature devices, here is an example). I've also found this MIT PhD thesis (scroll down to get to the PDF, which also contains a summary of other high-precision temperature measurement techniques which you can read) which uses a laser interferometer to measure the change in level of a classical liquid thermometer and claims sub-microkelvin resolution. However, the problem with this is that it does not have sub-microkelvin accuracy - it can measure changes in temperature very precisely, but not absolute temperatures.
In principle you could calibrate it to a known temperature, but it is very hard to get a temperature standard good to better than a millikelvin. The triple point of water (the temperature at which solid, liquid, and gas coexist) is only known to a precision of 0.1 mK. So, in summary: the absolute temperature of an everyday substance can be measured to 0.1 mK at the very best, in practice more like 1 mK with significant effort. The change in temperature can however be measured with microkelvin resolution, if sufficiently isolated from the outside environment.
{ "domain": "physics.stackexchange", "id": 98232, "tags": "temperature, measurements, error-analysis" }
If a state can be efficiently represented by a Projected Entangled Pair State (PEPS), can we prepare it physically?
Question: If we can use a PEPS (Projected Entangled Pair State) to represent a many-body quantum state, can we generate it with a quantum computer? As far as I understand, PEPS is dual to a quantum computer with postselection: any PEPS can be created by a postselected quantum circuit, and any output of such a circuit can be written as a PEPS (arXiv:quant-ph/0611050). But a quantum computer with postselection is not a physical device. Does this mean PEPS can be used to represent certain high-complexity states that cannot be efficiently prepared by a quantum computer from a simple initial state? Answer: You are absolutely right: The result that PEPS = postselection, together with the fact that postselection is considerably more powerful than polynomial-time quantum computation, implies that it is impossible to prepare a general PEPS efficiently on a quantum computer. So in that sense, PEPS can describe high-complexity quantum states (just as, for instance, certain Hamiltonians can). Note, however, that it might be that the complexity is in the translation from the PEPS description to the state, rather than in the preparation procedure itself. This is the case, for instance, in variants of the construction in the cited paper (which basically yields a product state). Note that the same is true, e.g., for preparing the ground state of a classical spin glass Hamiltonian (which is a product state), when starting from the Hamiltonian.
{ "domain": "physics.stackexchange", "id": 48193, "tags": "quantum-information, quantum-entanglement, quantum-computer, tensor-network" }
Reaction mechanism in fluoride adsorption to aluminum oxide
Question: Despite a fair amount of research (excluding non-open-access journals, to which I do not have access), I cannot seem to find an explanation of the process that takes place when $\ce{F^-}$ adsorbs to activated alumina ($\ce{Al_2O_3}$). I do know that the process involves both chemisorption (monolayer formation) and physisorption, but since the latter is explained solely by van der Waals forces, I am primarily looking for an explanation of the former. EDIT: Does adsorption take place only at the aluminium ions? If not, does that mean that all aluminium compounds with oxidation state +3 will be equally good adsorbents? EDIT: I have found an article (https://link.springer.com/article/10.1023/A:1012929900113) which proposes a mechanism that involves aluminium-fluoride complex formation. If this is the case, why is this even classed as an adsorption reaction at all, and not just a normal chemical reaction? And what role do the $\ce{OH^-}$ ions play? Do they keep the aluminium in the ionic lattice of the alumina? Answer: Besides the presence of Al and O atoms on an aluminium oxide surface, there are also H atoms, which are bound to the solid in O---H+ groups. Direct evidence for the presence of H atoms, and O-H bonds, comes from XPS or FTIR (grazing-incidence) data. Indirect evidence comes from several sources, but especially from the reactivity of the surface towards some compounds, like silanes, whose Si---O---R groups react readily with OH. Based on what is said above, I'd say the best explanation for the F- anion's interaction with the aluminium oxide surface is a chemisorption process, through an indirect route: bonding of the F- anion to the H+ of the surface, forming an O---H+ ... F- structure, i.e. an anion (F-)-to-dipole (+H---O-) complex. A real solid-state hydrogen bond.
{ "domain": "chemistry.stackexchange", "id": 8625, "tags": "inorganic-chemistry, physical-chemistry, aqueous-solution, surface-chemistry, adsorption" }
Why do most planets remain within a few degrees from the ecliptic?
Question: Why do planets, just like our moon, have their sidereal paths almost the same (with only slight deviation) as that of the ecliptic? Is it mere coincidence? Or is there a better explanation? This question arose when I was going through the following lines in the book "Astronomy - Principles and Practice, 4th ed." by A. Roy and D. Clarke (for some context): More information, too, would be acquired about the star-like objects that do not twinkle and which have been found in the course of a month to have a slow movement with respect to the stellar background. These planets, like the Moon, would never be seen more than a few degrees from the plane of the ecliptic, yet month after month they would journey through constellation after constellation. In the case of one or two, their paths would include narrow loops, though only one loop would be observed for each of these planets in the course of the year. I just started reading this book, so I am new to astronomy. Could anyone please provide an easy-to-understand explanation of this? Here's a table showing the approximate deviations of the sidereal paths of the planets and the Moon from the ecliptic plane in the solar system:

    Celestial Body    Approximate Deviation from Ecliptic (degrees)
    Mercury           7
    Venus             3.4
    Earth             0
    Mars              1.8
    Jupiter           1.3
    Saturn            2.5
    Uranus            0.8
    Neptune           1.8
    Moon              5.1

As we can see, the deviation is almost always low, except for Mercury: $7°$! If so, why is it only Mercury? Answer: The ecliptic is the path through the sky along which the sun seems to travel during the year. If you flip your perspective around, that means the ecliptic is basically the path of Earth's orbit around the sun, projected onto our view of the sky. The question "Why do all the planets lie near the ecliptic?" is therefore the same as asking "Why are the orbits of all the planets more or less in the same plane?"
Apart from very distant objects like comets, the Kuiper belt, and the Oort cloud, everything in the solar system is orbiting in pretty much the same orientation and going the same direction around the sun. How did that happen? Keeping a complex topic down to its most basic level, the answer to that is that all the planets formed from the same protoplanetary disc of gas and dust, so there wasn't a bunch of "stuff" out orbiting in random directions that could form planets with wildly different orbital characteristics. The disc was all moving in pretty much the same direction in pretty much the same plane, so when it all conglomerated into planets, they had to keep moving the same way. The two notable exceptions are Pluto and Mercury. (I'm excluding the moon, because the moon was formed from a planetary collision in the early solar system, and its orbit is dominated by Earth's gravity, so its angle is a little high but for reasons only vaguely related to the formation of the solar system.) Pluto has a very high tilt at over 17 degrees, which is probably due to being disrupted by Neptune's gravity until the two bodies reached the stable orbital resonance they have today. Mercury, meanwhile, probably had a similar relationship with the sun. The sun is notably oblate (that is to say, it's slightly flattened by its own spin, so the equator bulges outward a little bit). From the Earth or Venus, that oblateness isn't a significant factor, but as close as Mercury is, the sun's gravity affects it a little differently when it's above or below the plane of the sun's rotation versus right along the plane. That means Mercury's orbit was slightly unstable early on. We think that over millions of years, the tiny off-center forces from that difference slowly wobbled Mercury's orbit, pushing it into a higher inclination and altering its rotation and orbital eccentricity until it reached a stable resonance that it has maintained since then. 
In short, Mercury is so close to the sun that its orbit has to be a little weird to stay stable.
{ "domain": "astronomy.stackexchange", "id": 7327, "tags": "ecliptic, inclination" }
Why do we have to compute maximum safe current through resistor equalizing heat exerted by the resistor and heat created in resistor?
Question: My book says the resistance of a wire is $\rho l/(\pi r^2)$, where $\rho$ = resistivity of the resistor material, $l$ = length of the resistor, and $r$ = radius of the resistor, so the heat produced per second is $$H = \frac{I^2 \rho l}{J \pi r^2},$$ where $J$ = Joule's constant. The heat exerted by the resistor per second is $2\pi r l h$, where $h$ = heat exerted by a unit area of the resistor per second. So by equating the two we get the maximum current as $$I_{max}^2 = \frac{2\pi^2 J h r^3}{\rho}.$$ My question is: why do we equate those heats? Answer: When current passes through a resistor, heat is generated and initially its temperature increases. As its temperature rises above that of the surroundings, it starts to dissipate heat, and the rate of dissipation of heat is proportional to the temperature difference between the two (when this difference is small; see Newton's law of cooling). As the temperature of the resistor increases, after a certain time it is hot enough that the rate of creation of heat is the same as the rate of dissipation, and the temperature of the resistor does not increase any more: it reaches an equilibrium. In the calculation you have provided, we assume that this equilibrium has been achieved, and thus the rate of generation of heat is the same as that of dissipation.
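As a quick numerical check of this equilibrium condition, here is a short Python sketch. The material and geometry values are made-up illustrative numbers (not from the book); it solves the heat balance for the maximum current and confirms the closed form $I_{max}^2 = 2\pi^2 J h r^3/\rho$.

```python
import math

# Illustrative (made-up) values in SI units
rho = 1.7e-8   # resistivity of the wire material (ohm*m)
r = 0.5e-3     # wire radius (m)
l = 1.0        # wire length (m)
h = 50.0       # heat lost per unit surface area per second (W/m^2)
J = 1.0        # Joule's constant (1 in SI, where heat is already in joules)

# Closed form from the question: I_max^2 = 2 pi^2 J h r^3 / rho
I_max = math.sqrt(2 * math.pi**2 * J * h * r**3 / rho)

# Check: at I_max, heat generated per second equals heat dissipated per second
generated = I_max**2 * rho * l / (J * math.pi * r**2)
dissipated = 2 * math.pi * r * l * h
assert math.isclose(generated, dissipated)
print(f"I_max = {I_max:.2f} A")
```

Note that the wire length $l$ cancels out of the balance, which is why it does not appear in the final formula.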
{ "domain": "physics.stackexchange", "id": 49656, "tags": "electricity, thermoelectricity" }
Rotating of a system of mass
Question: In basic physics lectures, the teacher or professor in my class never explains the behavior of rotating two-body or many-body systems. In my experience and intuition doing physics, two-particle or many-body systems tend to rotate about their center of mass. For example, the earth-sun system rotates around its center of mass, or two black holes rotate around their center of mass; also, in ideal conditions without air friction, a stick will rotate around its center of mass when a force is applied at its end. Can anyone explain the physics of a system of masses that tends to rotate around its center of mass? I tried to solve this problem and read some reference books, but I don't have any conclusion. Most of the books I read just explain what rotation or torque is. Or, is my intuition wrong? Answer: You can understand this question intuitively like this: The motion of the centre of mass depends only on the external force on the entire system. Therefore if there is no external force, the centre of mass will remain static. Thus, the system rotates around the centre of mass if there is no external force on the system. If there is a fixed pivot for a rigid body, the pivot itself will provide an external force on the body, so that the body might not rotate around its centre of mass anymore. I will prove this for a rigid two-particle system using knowledge of circular motion. Let's consider a system of two particles, m1 and m2, connected by a massless rod/string. The system is rotating with an angular speed of $ \omega\ $, the length of the string is L, and the center of rotation (pivot) is located at a distance x from m1. It is the tension of the string that provides the centripetal force the two particles need to do circular motion. Note that the centripetal force is the same for m1 and m2, since they are connected by a string; or, you can justify this by saying there is no external force acting on the system. 
The equations of the centripetal forces of the two particles are expressed as follows: $$ m_{1}\omega^{2}x=m_2\omega^{2}\left(L-x\right) $$ x can be solved as: $$ x=\dfrac{m_{2}}{m_{1}+m_{2}}\cdot L $$ The one-dimensional location of the centre of mass of a system is expressed as $$ x_{cm}=\dfrac{\sum m_{i}\cdot r_{i}}{\sum m_{i}} $$ In our specific case, with m1 at position 0 and m2 at position L, $x_{cm}$ is $$ x_{cm}=\dfrac{m_{1}\cdot 0+m_{2}\cdot L}{m_{1}+m_{2}}=\dfrac{m_{2}}{m_{1}+m_{2}}L $$ Therefore, $x_{cm}=x$, so the centre of rotation is actually the centre of mass.
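The algebra can also be verified numerically. A minimal Python sketch (with arbitrary example masses and separation): the pivot location obtained from balancing the centripetal forces coincides with the centre of mass.

```python
m1, m2, L = 3.0, 5.0, 2.0  # arbitrary masses (kg) and separation (m)

# Pivot location from balancing centripetal forces: m1*w^2*x = m2*w^2*(L - x)
# (the angular speed w cancels on both sides)
x = m2 / (m1 + m2) * L

# Centre of mass with m1 at position 0 and m2 at position L
x_cm = (m1 * 0 + m2 * L) / (m1 + m2)

assert abs(x - x_cm) < 1e-12  # the centre of rotation is the centre of mass
```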
{ "domain": "physics.stackexchange", "id": 98466, "tags": "newtonian-mechanics, reference-frames, rotational-dynamics, orbital-motion" }
Does mass has any influence on a horizontal launch?
Question: Really short doubt. Let's say I have 2 objects, with masses 3 kg and 4 kg; both are thrown horizontally with the same initial velocity. Both will touch the ground at the same time, right? Because g affects all masses equally and g is constant, both will fall for the same time? Separate bonus question: if those 2 masses (3 kg and 4 kg) are thrown horizontally with different initial velocities, the mass won't matter in this situation either, right? I.e. I won't have to use 3 kg or 4 kg at any moment to find the time at which each touches the ground, right? Answer: Yes, the acceleration due to gravity is the same for all objects irrespective of their mass. So in both cases the time at which the objects touch the ground will be the same, unless there is some initial velocity in the vertical direction.
{ "domain": "physics.stackexchange", "id": 74081, "tags": "newtonian-mechanics, free-fall" }
Is the South Pole of an electromagnet always at the end where current is drawn into it?
Question: I'm trying to determine in which direction a magnetic field will be produced in relation to the direction of current in an electromagnet. Will the end of the electromagnet into which current is drawn always be the south pole? Or does it depend on the direction of the coils (i.e. clockwise or anti-clockwise)? Also, are the magnetic poles in either of the pictured electromagnets in the wrong position with respect to the current flow and the winding direction of the coil? Answer: It depends on the sense of circulation of the current in the wire of the solenoid. If you take the solenoid in your right hand so that your curved fingers follow the direction of the current in the wires, your thumb will show the direction of the magnetic field inside the solenoid. It thus points to the "north pole" of the electromagnet. This picture may help to understand this rule: http://etc.usf.edu/clipart/35600/35671/rhrsole_35671.htm Therefore, the direction of the magnetic field does not depend only on the end of the solenoid at which the current enters, but also on whether the wire is wound as a right-handed or left-handed helix.
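The dependence on the winding sense can be illustrated numerically with a discretized Biot-Savart sum over a single circular turn. This is a rough sketch only (constants are dropped, since only the sign of the axial field matters): reversing the current through the same loop flips the field, i.e. swaps the poles.

```python
import math

def field_at_center_z(direction=+1, n=360):
    """Axial field (up to constants) at the centre of a unit loop in the
    xy-plane; direction=+1 means current counter-clockwise seen from +z,
    direction=-1 reverses the current."""
    bz = 0.0
    for i in range(n):
        t = 2 * math.pi * i / n
        dt = 2 * math.pi / n
        # current element dl (tangent to the loop) ...
        dlx, dly = direction * -math.sin(t) * dt, direction * math.cos(t) * dt
        # ... and vector r from the element to the loop centre
        rx, ry = -math.cos(t), -math.sin(t)
        bz += dlx * ry - dly * rx   # z-component of dl x r
    return bz

assert field_at_center_z(+1) > 0  # CCW current: "north pole" faces +z
assert field_at_center_z(-1) < 0  # reversing the current swaps the poles
```

This matches the right-hand rule in the answer: curl the fingers along the current, and the thumb gives the field direction along the axis.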
{ "domain": "physics.stackexchange", "id": 32163, "tags": "electromagnetism, magnetic-fields, electric-current" }
How to calculate STFT of a function for a rectangular window
Question: How to calculate the STFT (by hand) of $$u(n)\cos(0.2\pi n)$$ for a rectangular window of length 20, positioned at $n = 5$. I know that to use the STFT I need to divide the longer signal into shorter parts and then calculate the Fourier transform of each part. Doing it from the definition is very long, and I assume that there is a more efficient way to do it on paper. I don't know how to start with the question, so any materials on this topic are appreciated. I came up with this idea: the first picture is just a rectangular window of length 20 and position 5. The second picture is how I see the STFT of it. Now, to calculate the STFT should I provide some coefficients like the mainlobe width and the highest sidelobe? How can I calculate them from my data? Are my pictures good? Answer: I don't understand what you mean by "positioned at $n=5$" with length $20$. Indeed: is "positioned" the beginning or the center? Is the length a half-length around a central position, or a full length? Finally, at a given location, this is not an STFT anymore, but a mere windowed Fourier transform. If I interpret your question in its most obvious sense, the window runs from $n=5$ to $n=20+5-1=24$, on an interval where $u[n]=1$, so you'd just have to compute a simple DFT of a cosine. The result can be obtained via the Euler/De Moivre formulae, with two finite sums of geometric series. If this is not the case, please provide more information; closed forms are at hand.
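Under the most obvious reading (window covering n = 5 to 24), the windowed DFT is easy to check numerically. A sketch with NumPy: the frequency 0.2π rad/sample falls exactly on DFT bin k = 0.2π · N/(2π) = 2 for N = 20, so the magnitude spectrum peaks at bins 2 and N - 2 = 18 and is (numerically) zero elsewhere.

```python
import numpy as np

N = 20                       # rectangular window length
n = np.arange(5, 5 + N)      # window "positioned at" n = 5, so n = 5..24
x = np.cos(0.2 * np.pi * n)  # u[n] = 1 over the whole window

X = np.fft.fft(x)            # DFT of the windowed segment
mags = np.abs(X)

k = int(np.argmax(mags))
# 0.2*pi rad/sample -> bin k = 0.2*pi * N / (2*pi) = 2 (and its mirror N - 2)
assert k in (2, N - 2)
assert np.isclose(mags[2], mags[N - 2])
```

Because the cosine sits exactly on a DFT bin here, there is no spectral leakage; the mainlobe/sidelobe structure of the rectangular window only becomes visible for frequencies between bins.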
{ "domain": "dsp.stackexchange", "id": 6955, "tags": "fourier-transform, stft" }
Why is Paracetamol so great?
Question: Every time I get ill (cold, flu etc.) I take a couple of these wonderful tablets up to 4 times a day and I, eventually, get better. What exactly is paracetamol? Why is it so effective, and is it really not harmful, as my doctor would have me believe? Answer: Paracetamol is a pain killer; it does not treat the cause of your illness, it only alleviates the symptoms. From its Wikipedia page: Paracetamol [...], chemically named N-acetyl-p-aminophenol, is a widely used over-the-counter analgesic (pain reliever) and antipyretic (fever reducer). So, paracetamol does not make you better. Your immune system makes you better. Paracetamol just makes you feel better while you are waiting for your immune system to get an infection under control. You should be aware that it is only safe in small doses, and a toxic dose is not that much more than the recommended one (source): Risk of severe liver damage (i.e. a peak ALT of more than 1000 IU/L), based on the dose of paracetamol ingested (mg/kg body weight):

    Less than 150 mg/kg - unlikely
    More than 250 mg/kg - likely
    More than 12 g total - potentially fatal

Again from Wikipedia: While generally safe for use at recommended doses (1,000 mg per single dose and up to 4,000 mg per day for adults),[6] acute overdoses of paracetamol can cause potentially fatal liver damage and, in rare individuals, a normal dose can do the same; the risk may be heightened by chronic alcohol abuse, though it is lessened by contemporary alcohol consumption. Paracetamol toxicity is the foremost cause of acute liver failure in the Western world, and accounts for most drug overdoses in the United States, the United Kingdom, Australia and New Zealand. I am sure someone else can explain the pharmacokinetics and details of action of paracetamol. I just wanted to point out that paracetamol can be dangerous and should be treated with respect.
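The quoted risk thresholds can be turned into a tiny lookup. This is purely an illustration of the numbers cited above (the function name and the "uncertain" label for the gap between 150 and 250 mg/kg are my own), not medical guidance:

```python
def paracetamol_risk(dose_g, weight_kg):
    """Classify risk of severe liver damage using the thresholds quoted
    above (mg/kg of body weight, plus the 12 g absolute total)."""
    mg_per_kg = dose_g * 1000 / weight_kg
    if dose_g > 12:
        return "potentially fatal"
    if mg_per_kg > 250:
        return "likely"
    if mg_per_kg < 150:
        return "unlikely"
    return "uncertain"

assert paracetamol_risk(4, 70) == "unlikely"   # max daily adult dose, 70 kg adult
assert paracetamol_risk(13, 70) == "potentially fatal"
```

Note how close the maximum recommended daily dose (4 g) sits to the 12 g potentially fatal total, which is the answer's point about the narrow safety margin.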
{ "domain": "biology.stackexchange", "id": 724, "tags": "pharmacology" }
C++ Random Tree Node
Question: I've been trying to kind of teach myself "Modern C++" the last couple of months and I just finished this interview-type problem and thought it would be a good one to get some feedback on. I'm not including the full implementation for brevity, just the relevant parts for the problem.

Random Node. Implement a binary tree class which, in addition to the usual operations, has a method pick_random() which returns a random node from the tree. All nodes should be equally likely to be chosen.

random_node.h

    #include <utility>
    #include <random>
    #include <memory>

    namespace tree_problems
    {
        /* Random Node.
         *
         * Implement a binary tree class which, in addition to the usual operations,
         * has a method pick_random() which returns a random node from the tree. All
         * nodes should be equally likely to be chosen.
         */
        template<typename Ty>
        class random_node
        {
            struct tree_node;
            using tree_node_ptr = std::unique_ptr<tree_node>;

            struct tree_node
            {
                Ty value;
                tree_node_ptr left{}, right{};
                const tree_node* parent{};
                std::size_t size{};

                explicit tree_node( const Ty& value, const tree_node* parent = nullptr )
                    : value{ value }, parent{ parent }, size{ 1 }
                { }

                tree_node( const tree_node& other )
                    : value{ other.value }, parent{ other.parent }, size{ other.size }
                {
                    if( other.left )
                        left = std::make_unique<tree_node>( other.left->value );
                    if( other.right )
                        right = std::make_unique<tree_node>( other.right->value );
                }

                tree_node( tree_node&& other ) noexcept
                    : value{ other.value }, parent{ other.parent }, size{ other.size }
                {
                    left = std::move( other.left );
                    right = std::move( other.right );
                }

                void insert_child( Ty value )
                {
                    if( value <= this->value )
                    {
                        left = std::make_unique<tree_node>( value, this );
                    }
                    else
                    {
                        right = std::make_unique<tree_node>( value, this );
                    }
                }
            };

            mutable std::mt19937 gen_;
            tree_node_ptr root_;

        public:
            explicit random_node( const unsigned int seed = std::random_device{}( ) )
                : gen_{ seed }
            { }

            random_node( const std::initializer_list<Ty>& values, const unsigned int seed = std::random_device{}( ) )
                : random_node( seed )
            {
                for( const auto& val : values )
                    insert( val );
            }

            /// <summary>
            /// insert node
            ///
            /// this approach for insertion increments the nodes it passes on the way
            /// down the tree to keep track of the total size of each node (total size =
            /// the node + all its children) in constant (additional) time to the normal log insert time.
            /// This approach does *not* keep the tree balanced or enforce any other invariants other than
            /// correct node size and basic left <= current < right.
            /// </summary>
            /// <param name="value">value to insert</param>
            void insert( const Ty& value )
            {
                if( !root_ )
                {
                    root_ = std::make_unique<tree_node>( value );
                    return;
                }

                tree_node* node = root_.get(), * parent{};
                while( node )
                {
                    ++node->size;
                    parent = node;
                    node = value <= node->value ? node->left.get() : node->right.get();
                }
                parent->insert_child( value );
            }

            [[nodiscard]] auto next( const std::size_t& min, const std::size_t& max ) const -> std::size_t
            {
                using uniform = std::uniform_int_distribution<std::mt19937::result_type>;
                const uniform distribution( min, max );
                return distribution( gen_ );
            }

            // forward the root to the recursive version.
            [[nodiscard]] auto pick_random() const -> Ty&
            {
                return pick_random( *root_ );
            }

            /// <summary>
            /// pick random
            ///
            /// This routine looks at the "total" size of the node, which is maintained by
            /// the insert to be the current node + the total number of nodes below it,
            /// so the root has the size of the total tree. Each call to pick random, we
            /// generate a uniform number between 1 and the node size; this gives us
            /// a 1/n chance of picking the current node (and 1/1 for a leaf so we always
            /// exit). If the number is [1, left-size] we traverse left, otherwise we traverse
            /// right, and then re-roll with that node's size.
            /// </summary>
            /// <complexity>
            /// <run-time>O(E[N/2])</run-time>
            /// <space>O(E[N/2])</space>
            /// </complexity>
            /// <param name="node">the starting node</param>
            /// <returns>a node between [node, children] with equal probability</returns>
            [[nodiscard]] auto pick_random( tree_node& node ) const -> Ty&
            {
                const auto rnd = next( 1, node.size );
                if( rnd == node.size )
                    return node.value;

                if( node.left && rnd <= node.left->size )
                {
                    return pick_random( *node.left );
                }
                return pick_random( *node.right );
            }
        };
    }

random_node_tests.cpp

    #include "pch.h"
    #include <gtest/gtest.h>
    #include <typeinfo>
    #include "../problems/tree.h"

    using namespace tree_problems;

    namespace tree_tests
    {
        /// <summary>
        /// Testing class for random node.
        /// </summary>
        class random_node_tests : public ::testing::Test
        {
        protected:
            void SetUp() override { }
            void TearDown() override { }
        };

        TEST_F( random_node_tests, case1 )
        {
            // basic functionality.
            const auto rand = random_node<int>( { 1, 2, 3 }, 1234 );
            const auto actual = rand.pick_random();
            const auto expected = 2;
            EXPECT_EQ( actual, expected );
        }

        TEST_F( random_node_tests, balanced_tree )
        {
            // actually test the probability function.
            // fix the tree to be a balanced tree with 7 nodes
            const auto values = std::initializer_list<int> { 4, 2, 6, 1, 3, 5, 7 };
            const auto rand = random_node<int>( values, 2358 );

            // storage for the draws
            auto results = std::map<int, int>();
            const std::size_t iters = 1e6;
            for( auto index = std::size_t(); index < iters; ++index )
            {
                results[ rand.pick_random() ]++;
            }

            double max = 0.0f, min = 0.0f;
            for( const auto& [key, value] : results )
            {
                auto freq = static_cast< double >( value ) / iters;
                max = std::max( max, freq );
                min = std::min( max, freq );
            }

            const auto epsilon = 0.001; // error tolerance
            EXPECT_LT( max - min, epsilon );
        }

        TEST_F( random_node_tests, unbalanced_tree )
        {
            // actually test the probability function.
            // fix the tree to be an unbalanced tree with 11 nodes
            const auto values = std::initializer_list<int> { 4, 3, 6, 2, 1, 0, 5, 7, 9, 10, 11 };
            // seed the generator
            const auto rand = random_node<int>( values, 6358 );

            // storage for the draws
            auto results = std::map<int, int>();
            const std::size_t iters = 1e6;
            for( auto index = std::size_t(); index < iters; ++index )
            {
                results[ rand.pick_random() ]++;
            }

            double max = 0.0f, min = 0.0f;
            for( const auto& [key, value] : results )
            {
                auto freq = static_cast< double >( value ) / iters;
                max = std::max( max, freq );
                min = std::min( max, freq );
            }

            const auto epsilon = 0.001; // error tolerance
            EXPECT_LT( max - min, epsilon );
        }
    }

Looking for any design improvements, style suggestions, general approach, etc. Answer:

    explicit tree_node( const Ty& value, const tree_node* parent = nullptr ) :

Might be better for parent to not be a default argument. (The root node is the special case for which it would be better to explicitly specify the nullptr).

    void insert_child( Ty value )
    {
        if( value <= this->value )
        {
            left = std::make_unique<tree_node>( value, this );
        }
        else
        {
            right = std::make_unique<tree_node>( value, this );
        }
    }

Maybe check that left and right are null before setting the value, or rename this function to replace_child or something. Also, is it intentional to allow equal values? (This has consequences for finding nodes.) Speaking of which, the instructions say "in addition to the usual operations"... I'd say this lacks a lot of the "usual operations", e.g. finding, iteration, erasure etc.

    mutable std::mt19937 gen_;

Hmm. I think it might be better to have the user pass in the rng as a parameter to the pick_random function. That would allow using multiple rng's to access the same data.

    tree_node* node = root_.get(), * parent{};

Ick. Separate definitions would be much clearer.

    const uniform distribution( min, max );
    return distribution( gen_ );

This causes a compiler error for me because uniform_int_distribution::operator() isn't const.

    [[nodiscard]] auto next( const std::size_t& min, const std::size_t& max ) const -> std::size_t

I doubt passing std::size_t by const& is faster than by value. Specifying the return type as auto, and then listing the return type after the function seems like unnecessary typing. We could just specify the return type up front.

    [[nodiscard]] auto pick_random() const -> Ty& { return pick_random( *root_ ); }
    [[nodiscard]] auto pick_random( tree_node& node ) const -> Ty& ...

These should return Ty const& (or by value) not Ty&. Changing the values in the nodes will break the tree! Since we have no access to tree_nodes outside the class, that second function should probably be private. We might want to throw a specific error for an empty tree. (If we implemented iterators, we could return an end iterator, which would be better still.)
{ "domain": "codereview.stackexchange", "id": 38394, "tags": "c++, algorithm, random, c++17, binary-tree" }
How does Rogozhin's (2, 18) universal turing machine work?
Question: I am trying to understand Rogozhin's (2, 18) universal turing machine by stepping through a simple 2-tag encoding that I believe should loop forever: a -> aa For example, using an initial input of aaa: aaa aaa aaa .... etc Apologies for the extremely specific question, but it's what I've narrowed my issue down to and I've been stuck for a while! Following the instructions in part 10 / pp22 and page 6 I believe this system should be encoded as: c⃖₁ c⃖₁ b b 1 b 1 b >b 1 c 1 c 1 c |P2 |P1 |P0 |Ar |As |At Running this, however, results in termination rather than an infinite loop. Following the trace I have managed to identify something I can't explain and that seems wrong, but have not been able to figure out a resolution. Following the first stage of modelling: On the first stage, the UTM searches the code P, corresponding to the code A, and then the UTM deletes the codes A, and A, (i.e. it deletes the mark between them) ... if the head of the UTM moves to the right and meets the mark c, then the first stage of modelling is over. The UTM deletes this mark and the second stage of modelling begins Gives the following trace: c⃖₁ c⃖₁ b b 1 b 1 b >b 1 c 1 c 1 c c⃖₁ c⃖₁ b b 1 b 1 b b⃖ >1 c 1 c 1 c c⃖₁ c⃖₁ b b 1 b 1 b >b⃖ c₂ c 1 c 1 c c⃖₁ c⃖₁ b b 1 b 1 >b b c₂ c 1 c 1 c c⃖₁ c⃖₁ b b 1 b 1 b⃖ >b c₂ c 1 c 1 c c⃖₁ c⃖₁ b b 1 b 1 b⃖ b⃖ >c₂ c 1 c 1 c c⃖₁ c⃖₁ b b 1 b 1 b⃖ b⃖ 1⃖ >c 1 c 1 c c⃖₁ c⃖₁ b b 1 b 1 b⃖ b⃖ >1⃖ 1⃗ 1 c 1 c At this stage, Rogozhin claims the tape should be: P2P1P'0R'At Notice in particular R'At R’ consists of 1⃖ and 1⃗ and the head of the UTM is located on the R’ in the state Q2 But to me, it appears that only Ar has been deleted!? c⃖₁ c⃖₁ b b 1 b 1 b⃖ b⃖ >1⃖ 1⃗ 1 c 1 c |P2 |P1 |P'0 |R' |As!! |At I would expect something like: c⃖₁ c⃖₁ b b 1 b 1 b⃖ b⃖ >1⃖ 1⃗ 1⃖ 1⃗ 1 c |P2 |P1 |P'0 |R' |At I have identified the following potential errors I have made, but have double checked each and have not been able to identify any: Understanding of 2-tag system. 
Encoding of the 2-tag system. Execution of the rules (programming bug). Formatting of the trace. Interpretation of the trace. Can anyone spot what I'm missing? Am hoping it's something obvious! Supplementary materials: Spreadsheet to generate the tape. Ruby program I'm using to generate traces. Full trace showing that aaa terminates when I think it shouldn't. Original motivation and how I ended up here in the first place. Answer: I see (at least) two errors: The head position should be at the beginning of $S$; and there is an extra 1 at each $P_i, i > 1$: $P_i = bb 1^{N_{i_{m_i}}}1b....b1^{N_{i_2}}1b1^{N_{i_1}}$ c⃖₁ c⃖₁ b b 1 1 b 1 b b >1 c 1 c 1 c ^ ^ !!! !!! Don't forget that the rest of the tape should be filled with $1^{\leftarrow}$ (the blank symbol). A few years ago I also examined its behaviour; you can find the javascript version I made here: click "Small" to load it, then fill the tape (click on the second gray row to place the head), and click "Step". You can compare it with your implementation.
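Independently of the UTM encoding, the 2-tag system itself is easy to simulate, which confirms the expectation that a -> aa on input aaa loops forever. A minimal sketch (the helper name tag_step is my own, and the run is capped at a fixed number of steps since the system never halts):

```python
def tag_step(word, productions, deletion=2):
    """One step of an m-tag system: read the first symbol, remove the
    first `deletion` symbols, then append that symbol's production."""
    if len(word) < deletion or word[0] not in productions:
        return None  # halt
    return word[deletion:] + productions[word[0]]

word = "aaa"
for _ in range(1000):
    nxt = tag_step(word, {"a": "aa"})
    assert nxt == "aaa"  # 'aaa' is a fixed point: the system never halts
    word = nxt
```

Since the simulated tag system provably loops, any terminating trace of the UTM encoding must come from the encoding or its execution, which is exactly where the answer locates the errors.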
{ "domain": "cs.stackexchange", "id": 14067, "tags": "turing-machines" }
Is there a maximum limit of connections between a pair of neurons?
Question: As neurons fire and work together, their connections strengthen. I was wondering if there was a limit on how connected neurons can be. If these neurons were constantly connecting then surely there would be an overload of connections at some point? Answer: There are several ways neuronal connections can strengthen: growing the spine, increasing the number of vesicles that carry neurotransmitters at the synapse, sensitizing and increasing the number of receptors and, yes, creating more spines and synapses. Roughly, neurons can make tens of thousands of connections to multiple neurons; the size of each spine is estimated to be between 0.01 µm³ and 0.8 µm³. You are correct in saying that, as life happens, the connections would tend to grow to their maximum and not encode anything anymore; however, the brain seems to re-normalize all of its connections during sleep through special membrane channels. Check out: https://en.wikipedia.org/wiki/Neuron (especially the connectivity part) https://en.wikipedia.org/wiki/Long-term_potentiation https://en.wikipedia.org/wiki/Long-term_depression https://en.wikipedia.org/wiki/Ion_channel
{ "domain": "biology.stackexchange", "id": 6008, "tags": "cell-biology, neuroscience" }
How to merge data from two different lasers
Question: I tried to use one hokuyo_node to publish to 'scan' and another hokuyo_node to publish to 'scan_1'. I tried using the ira_laser_tools package, but scan_multi is not reacting. <node pkg="ira_laser_tools" name="laserscan_multi_merger" type="laserscan_multi_merger" output="screen"> <param name="destination_frame" value="/base_link"/> <param name="cloud_destination_topic" value="/merged_cloud"/> <param name="scan_destination_topic" value="/scan_multi_merged"/> <param name="laserscan_topics" value ="/scan /scan1" /> </node> Using rosnode info /scan_multi shows: Node [/scan_multi] Publications: None Subscriptions: None Services: None cannot contact [/scan_multi]: unknown node So, can someone tell me how to solve this problem? Thank you Originally posted by Maxes on ROS Answers with karma: 11 on 2017-10-31 Post score: 1 Original comments Comment by gvdhoorn on 2017-10-31: I'm confused: you have scan_destination_topic set to /scan_multi_merged, not /scan_multi rosnode info .. checks nodes, not topics your "[an]other hokuyo_node" publishes to scan_1, not /scan1 according to your description please check these things and report back. Comment by Maxes on 2017-10-31: Hey gvdhoorn, I changed it as you said, but when I open the /scan_multi_merged topic in rviz it only shows that the scan is not merged. What information do I need to let you know my difficulties? Answer: There is a related question with an answer. Originally posted by Ruben Alves with karma: 1038 on 2017-10-31 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29241, "tags": "ros, hokuyo-laser" }
Axially Symmetric Lens Integration
Question: I'm trying to understand the deflection of light due to an axially symmetric gravitational lens following chapter 2.3 of these Heidelberg lecture notes. In doing so, I encounter the integral (2.12 a) $$\int_0^{2\pi} \frac{r - r'\cos(\phi)}{r^2 + r'^2 - 2rr'\cos(\phi)}\,d\phi$$ which apparently vanishes for $r'>r$ and is $2\pi/r$ for $r'<r$. I've tried to understand how this answer comes about, but to no avail. I couldn't find it in an integral table and I can't get Mathematica to give me a useful answer unless I plug in specific values for $r,r'$. Answer: Let $$I(r_0,r;a,b)=\int_a^{b}\frac{r_0-r \cos\phi}{r_0^2+r^2-2 r_0 r \cos\phi}d\phi$$ (plot of the integrand omitted). Note that $$ I(r_0,r;0,2\pi)=I(r_0,r;0,\pi)+I(r_0,r;\pi,2\pi) $$ Upon the change of variable $\phi\to\phi-\pi$ in the second integral above, and then explicitly simplifying, one gets the recursion $$ \begin{align} I(r_0,r;0,2\pi)&=r_0\int_0^{2\pi}\frac{r_0^2-r^2 \cos\phi}{r_0^4+r^4-2 r_0^2 r^2 \cos\phi}d\phi \\ &=r_0~I(r_0^2,r^2;0,2\pi)\\ \end{align} $$ Therefore $$ \begin{align} I(r_0,r;0,2\pi)&=\lim_{k\to\infty}r_0^{k -1}I(r_0^k,r^k;0,2\pi)\hspace{1cm} k\in1,2,4,8\ldots \end{align} $$ For $r<r_0$, $$ \begin{align} I(r_0,r;0,2\pi)&=\lim_{k\to\infty}\frac{r_0^{k-1} r_0^k}{r_0^{2k}}\int_0^{2\pi}\frac{1-(r/r_0)^k\cos\phi}{1+(r/r_0)^{2k}-2(r/r_0)^k\cos\phi}d\phi \\ &=\frac{2\pi}{r_0} \end{align} $$ For $r_0<r$, $$ \begin{align} I(r_0,r;0,2\pi)&=\lim_{k\to\infty}\frac{r_0^{k-1} r^k}{r^{2k}}\int_0^{2\pi}\frac{(r_0/r)^k-\cos\phi}{1+(r_0/r)^{2k}-2(r_0/r)^k\cos\phi}d\phi \\ &=\lim_{k\to\infty}(r_0/r)^{k-1}\ldots\\ &=0 \end{align} $$
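The claimed values are also easy to confirm numerically. Here is a quick sketch (function name mine) using the midpoint rule, which converges very fast here because the integrand is smooth and periodic whenever $r \neq r_0$:

```python
import math

def lens_integral(r0, r, n=4096):
    # Midpoint rule for I(r0, r; 0, 2*pi)
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        c = math.cos((k + 0.5) * h)
        total += (r0 - r * c) / (r0**2 + r**2 - 2 * r0 * r * c)
    return total * h

print(lens_integral(2.0, 1.0))  # r < r0: matches 2*pi/r0 = pi
print(lens_integral(1.0, 2.0))  # r0 < r: matches 0
```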
{ "domain": "physics.stackexchange", "id": 71173, "tags": "gravitational-lensing" }
Dealing with issues in "test" predictons for single "items" (null values, standardization in place, etc)
Question: I know this is kind of a broad question but I have tried to scour both this forum and the internet in general to no avail for this particular situation. So imagine I have a model trained for which, though the data initially might not be complete and clean, I took steps to make the data compliant with the model requirements (no outliers where appropriate, de-skewed if necessary, normalized if necessary, null values imputed appropriately). This is done in a cross validation framework. All this stuff works and is absolutely fine when tuning the model but I run into problems when I try to make a single prediction out of it (meaning I have a single "test" record - think web service with some fields that can be null). In fact, null values generally need a dataset to refer to for filling, as well as for the normalization/outlier procedures. Initially I thought about linking such "test" record to a portion of the "train" dataset so that I would not run into this problem (such problems would be resolved) but at that point other issues would arise: how would I choose such dataset? if I used the most recent data, would I bias it somehow? and using the whole dataset is impractical as well as potentially unfeasible when dealing with "big" data. do you happen to know whether there are some best practices on the topic or could you refer me to the themes/keywords that deal with these issues? p.s.: for the relevance of the problem, the null values most likely will remain there (I have no way of forcing them beforehand in the web application in order to have a smoother user experience) Answer: You need to save the instructions for performing these preprocessing steps, not necessarily the dataset that you extracted them from.
See Obtaining consistent one-hot encoding of train / production data Binary Classification - One Hot Encoding preventing me using Test Set one-hot-encoding categorical data gives error In particular, sklearn preprocessors can be pickled, then used with their transform method in production, if you can use sklearn in deployment. PMML can also translate most transformers. Or you can write your own simple transformer. As to using newer data to rework the transformers, that's getting closer to retraining; in most settings, I would keep it in the same place as refitting the model: either both offline or both online.
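To illustrate the point with no dependencies, here is a sketch (the column parameters and JSON format are my own choices) of fitting the imputation and standardization parameters offline, serializing them, and applying them to a single incoming record at serving time; a pickled sklearn Pipeline plays the same role in practice:

```python
import json
import math

# --- offline (training time): learn the preprocessing parameters ---
train = [[1.0, 10.0], [2.0, None], [3.0, 14.0]]  # toy data with a null

def fit_preprocessor(rows):
    # Per column: imputation value (mean of observed entries) and
    # standardization parameters (mean and std after imputation).
    params = []
    for j in range(len(rows[0])):
        observed = [r[j] for r in rows if r[j] is not None]
        impute = sum(observed) / len(observed)
        filled = [r[j] if r[j] is not None else impute for r in rows]
        mu = sum(filled) / len(filled)
        sd = math.sqrt(sum((v - mu) ** 2 for v in filled) / len(filled)) or 1.0
        params.append({"impute": impute, "mu": mu, "sd": sd})
    return params

saved = json.dumps(fit_preprocessor(train))  # ship this artifact with the model

# --- online (serving time): apply the saved parameters to one record ---
def transform_one(record, saved_params):
    cols = json.loads(saved_params)
    out = []
    for x, col in zip(record, cols):
        x = col["impute"] if x is None else x
        out.append((x - col["mu"]) / col["sd"])
    return out

print(transform_one([None, 11.0], saved))  # no training data needed at serving time
```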
{ "domain": "datascience.stackexchange", "id": 6878, "tags": "cross-validation, prediction, normalization, missing-data, data-imputation" }
How much fresh water could be produced by pumping warm humid air through a pipeline up to the top of a mountain?
Question: I have been doing a lot of research on the Internet lately about various desalination processes which are being used today and this led me to begin studying mountain weather and the orographic effect (or orographic lifting). From studying mountain weather, the thought occurred to me about whether a lot of fresh water could be produced by creating an artificially-produced orographic effect by pumping warm, humid coastal air through a pipeline that would lead to the top of a coastal mountain. Orographic Effect: I then used MS Paint to make a conceptual drawing on how this could be done: Since the temperature of the metal pipe will decrease as it ascends up the coastal mountain, contact with this colder metal should cause the water vapor within the pumped air to condense on the inner wall of the pipeline and form water droplets. These water droplets should then be pulled down by gravity and should fall into a pipe leading to a water storage tank. In the case that one air pumping plant cannot produce enough air pressure to push the air all the way up a mountain, then perhaps another air pumping plant could be stationed near the top of the mountain to assist with transporting the air upwards through the pipeline. These air pumping plants would probably need to have a large volume industrial centrifugal blower fan like the ones built by Elektror Airsystems pictured here: Reference: https://www.elektror.com/en/products/industrial-blowers/large-volume-fans/ I am neither a climatologist nor a scientist so I really don't know how much fresh water could be produced this way. I am looking for someone in Earth Science.SE to give me just a ballpark figure of how much water may be produced by this process. Say that this pipeline is 2.5 meters in diameter, the top of the mountain is 2500 meters high, the air temperature at the top of the mountain is 280 Kelvin, the coastal air temperature is 302 Kelvin, and the coastal air humidity is at 70%.
Answer: Using the relative humidity and temperature, the g/kg ratio of water to dry air can be calculated. Using an online rh calculator, a relative humidity of 70% and a temperature of 302 K at 1 atm yields 17.54 grams water/kg air. This is the maximum amount of water that can be extracted. Using an online calculator, the density of dry air at 302 K and 1 atm is 1.16882 kg/cu m. The total volume of your pipe (2.5 m diameter with a height of 2500 m) = 12,271.85 cu m. This holds 14,343.58 kg dry air and 251,586.45 grams (251 L) of water or 66.5 gallons of water. So, the maximum possible is 66.5 gallons/pipe of air. Your design won't extract this much and it's likely to get less than half this. What I haven't calculated is how much time is required to extract all the water from a full pipe of air, but I can't imagine this being a fast process and wouldn't be surprised if it took several hours. Based on this simple analysis, the cost/benefit ratio would be much too high to invest in a project like this. Remember that you are competing with existing technology: the Texas Water Development Board states a good rule of thumb is \$1.10-2.40 per 1,000 gallons for brackish water and $2.46-4.30 per 1,000 gallons for seawater desalination.
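The arithmetic above can be reproduced in a few lines. This sketch just redoes the unit conversions (the 17.54 g/kg and 1.16882 kg/m³ inputs are the online-calculator values quoted in the answer):

```python
import math

g_water_per_kg_air = 17.54   # from an RH calculator: 302 K, 70% RH, 1 atm
air_density = 1.16882        # kg/m^3, dry air at 302 K and 1 atm
radius = 2.5 / 2             # m (2.5 m diameter pipe)
height = 2500.0              # m

volume = math.pi * radius**2 * height      # pipe volume, m^3
air_mass = volume * air_density            # kg of dry air in a full pipe
water_g = air_mass * g_water_per_kg_air    # max extractable water, grams
water_gal = water_g / 1000 / 3.785         # litres -> US gallons

print(round(volume, 2), round(air_mass, 2), round(water_gal, 1))
# 12271.85 14343.58 66.5
```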
{ "domain": "earthscience.stackexchange", "id": 2602, "tags": "meteorology, geophysics, water, climatology, water-vapour" }
Is temperature affected by gravitational potential?
Question: Ok, I feel a bit silly asking this. I'm asking in relation to this question here on the molecular basis of hydrostatic pressure in a gas. There's been quite a bit of discussion and one of the commenters has confused me by suggesting that: The kinetic energy of an ideal gas is only related to the temperature through the internal energy. Typically all contributions to internal energy are ignored other than kinetic energy, however, in this scenario the gravitation contribution to internal energy is not negligible. I agree that a isothermal profile maximizes entropy and thus is the steady state, however that constant temperature means constant internal energy not kinetic energy He seems to be suggesting that, because internal energy of a gas should include gravitational potential energy (GPE), gas temperature should be related to gravitational potential (not just kinetic energy of the molecules). That would suggest that a region of gas molecules could have more kinetic energy but the same temperature as another region of molecules higher up, because the molecules have lower GPE. This seems wrong to me, as my understanding is that temperature is linked to the average kinetic energy of the gas molecules and has nothing to do with gravitational potential. If his theory were correct, then if I had a quantity of gas in a sealed, perfectly insulating container and I took it from sea level up Mount Everest, then its temperature would increase just because the gravitational potential has increased. This would seem to violate the conservation of energy (because now the gas has extra thermal energy as well as increased GPE) as well as the ideal gas law (because the pressure and density have not changed, but the temperature has). I'm looking for some confirmation one way or the other of which of us is correct. If anyone knows of any reliable references that deal with this or can provide a convincing counter-example, that would be great. 
Answer: The internal energy of the gas should not include the $GPE$. A well-known example of how things work in this way is stars. Protostars have to heat up before they can produce energy through fusion. The initial temperature increase comes from the transfer of $GPE$ to $KE$ as the particles condense. As the gas loses $GPE$, it gains $KE$ (and therefore temperature). It's difficult to see this directly in earth's atmosphere because other processes such as heat transfer and adiabatic compression tend to swamp the effect.
{ "domain": "physics.stackexchange", "id": 41587, "tags": "thermodynamics, classical-mechanics, kinetic-theory" }
ros jade and gazebo 5.0 migration problem
Question: Hello, I am trying to upgrade from ros indigo to ros jade. I followed the following procedure sudo apt-get remove ros-indigo-desktop-full then sudo apt-get install ros-jade-desktop-full However, when I am trying to launch a simulation with gazebo I obtain the following error /opt/ros/jade/lib/gazebo_ros/gzserver: 22: .: Can't open /share/gazebo//setup.sh /opt/ros/jade/lib/gazebo_ros/gzclient: 17: .: Can't open /share/gazebo//setup.sh [gazebo-2] process has died [pid 5171, exit code 2, cmd /opt/ros/jade/lib/gazebo_ros/gzserver -e ode /home/mago/Projects/2015a_Nicastro_HeterogeneousMultirobot/src/uav/worlds/vtol.world __name:=gazebo __log:=/home/mago/.ros/log/0b588c00-6078-11e5-b34f-f04da268d7ae/gazebo-2.log]. log file: /home/mago/.ros/log/0b588c00-6078-11e5-b34f-f04da268d7ae/gazebo-2*.log Apparently it can't find the file /share/gazebo//setup.sh I noticed the double slash, therefore I tried to search for the setup.sh file manually. However, the command ls /usr/share/ | grep gazebo gives gazebo-3.0 gazebo-5.0 Moreover, in the "gazebo-5.0" folder there is no file "setup.sh" I think something went wrong during the installation; I tried to remove everything and reinstall, but without success. Can anyone help me? EDIT: I tried the workaround described here, but apparently I have libgazebo5-dev already installed. Originally posted by Mago Nick on ROS Answers with karma: 385 on 2015-09-21 Post score: 2 Answer: Hi, installing the gazebo5 package seems to have solved the problem. However, I do not understand why this was necessary. Isn't the package ros-jade-desktop-full supposed to install it? Thanks anyway, Andrea Originally posted by Mago Nick with karma: 385 on 2015-09-22 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Markus Bader on 2015-10-09: Hi, I had the same problem; sudo apt-get install gazebo5 solved it :-)
{ "domain": "robotics.stackexchange", "id": 22677, "tags": "ros, installation, ros-jade" }
Reinforcement Learning Vs Transfer Learning?
Question: I recently saw a video lecture from Jeremy Howard of fast.ai in which he states that transfer learning is better than reinforcement learning. But I was unable to understand the reasoning behind it. Can someone explain to me or point to any evidence stating which is better and why? Answer: I didn't watch this lecture, but, the way I see it, reinforcement learning and transfer learning are absolutely different things. Transfer learning is about fine-tuning a model which was trained on one dataset and then adapted to work with other data and another task. For example, you might use the weights of a model pretrained on ImageNet and then apply it to your own dataset, where your dataset consists of a small number of images of different bird species (which might not be sufficient to train, for example, a U-Net from scratch). Reinforcement learning is about how some agent should respond to environmental conditions to receive a high reward. There is an illustrative example of a drone making a delivery subject to a range of environmental restrictions. There are two links which might be useful: https://machinelearningmastery.com/transfer-learning-for-deep-learning/ https://skymind.com/wiki/deep-reinforcement-learning I guess I can't answer which approach is better, because they aim to solve different challenges.
{ "domain": "datascience.stackexchange", "id": 5824, "tags": "deep-learning" }
Are all healthy animals more likely to defecate near the end of a rest-cycle?
Question: Just what the title states. It stems from observation & personal experience that a person/dog/cat/monkey is more likely to relieve oneself immediately after it wakes up from the peak-sleep cycle of its body-clock. Is this observation true? What causes this behaviour? Answer: I think we can actually go farther than mere behavioral argumentation. The separation of the autonomic nervous system into parasympathetic and sympathetic is, on the one hand, associated with the sleep/wake cycle, and on the other with parasympathetic/sympathetic activity (low/high epinephrine secretion). This duality certainly applies to all vertebrata (i.e. big animals), as fight & flight is associated with all vertebrata, see e.g. http://en.wikipedia.org/wiki/Fight_or_flight_response In other words: from the existence of fight or flight in all vertebrata I infer the existence of a dual autonomic nervous system (parasympathetic and sympathetic) in all vertebrata, although there may be other proof. Since in sleep the parasympathetic NS is active, the bowel moves. Conclusion: the bowel moves in the sleep of all vertebrata.
{ "domain": "biology.stackexchange", "id": 605, "tags": "physiology" }
Making GIFs with Java
Question: I wrote a Java class to make a GIF animation from a list of images. The whole project can be found here. I am pretty new with GitHub, so I would be very glad if you can give critiques regarding my project structure. package shine.htetaung.giffer; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import java.util.Iterator; import javax.imageio.IIOException; import javax.imageio.IIOImage; import javax.imageio.ImageIO; import javax.imageio.ImageTypeSpecifier; import javax.imageio.ImageWriter; import javax.imageio.metadata.IIOInvalidTreeException; import javax.imageio.metadata.IIOMetadata; import javax.imageio.metadata.IIOMetadataNode; import javax.imageio.stream.ImageOutputStream; /* * Giffer is a simple java class to make my life easier in creating gif images. * * Usage : * There are two methods for creating gif images * To generate from files, just pass the array of filename Strings to this method * Giffer.generateFromFiles(String[] filenames, String output, int delay, boolean loop) * * Or as an alternative you can use this method which accepts an array of BufferedImage * Giffer.generateFromBI(BufferedImage[] images, String output, int delay, boolean loop) * * output is the name of the output file * delay is time between frames, accepts hundredth of a time. Yeah it's weird, blame Oracle * loop is the boolean for whether you want to make the image loopable. 
*/ public abstract class Giffer { // Generate gif from an array of filenames // Make the gif loopable if loop is true // Set the delay for each frame according to the delay (ms) // Use the name given in String output for output file public static void generateFromFiles(String[] filenames, String output, int delay, boolean loop) throws IIOException, IOException { int length = filenames.length; BufferedImage[] img_list = new BufferedImage[length]; for (int i = 0; i < length; i++) { BufferedImage img = ImageIO.read(new File(filenames[i])); img_list[i] = img; } generateFromBI(img_list, output, delay, loop); } // Generate gif from BufferedImage array // Make the gif loopable if loop is true // Set the delay for each frame according to the delay, 100 = 1s // Use the name given in String output for output file public static void generateFromBI(BufferedImage[] images, String output, int delay, boolean loop) throws IIOException, IOException { ImageWriter gifWriter = getWriter(); ImageOutputStream ios = getImageOutputStream(output); IIOMetadata metadata = getMetadata(gifWriter, delay, loop); gifWriter.setOutput(ios); gifWriter.prepareWriteSequence(null); for (BufferedImage img: images) { IIOImage temp = new IIOImage(img, null, metadata); gifWriter.writeToSequence(temp, null); } gifWriter.endWriteSequence(); } // Retrieve gif writer private static ImageWriter getWriter() throws IIOException { Iterator<ImageWriter> itr = ImageIO.getImageWritersByFormatName("gif"); if(itr.hasNext()) return itr.next(); throw new IIOException("GIF writer doesn't exist on this JVM!"); } // Retrieve output stream from the given file name private static ImageOutputStream getImageOutputStream(String output) throws IOException { File outfile = new File(output); return ImageIO.createImageOutputStream(outfile); } // Prepare metadata from the user input, add the delays and make it loopable // based on the method parameters private static IIOMetadata getMetadata(ImageWriter writer, int delay, boolean 
loop) throws IIOInvalidTreeException { // Get the whole metadata tree node, the name is javax_imageio_gif_image_1.0 // Not sure why I need the ImageTypeSpecifier, but it doesn't work without it ImageTypeSpecifier img_type = ImageTypeSpecifier.createFromBufferedImageType(BufferedImage.TYPE_INT_ARGB); IIOMetadata metadata = writer.getDefaultImageMetadata(img_type, null); String native_format = metadata.getNativeMetadataFormatName(); IIOMetadataNode node_tree = (IIOMetadataNode)metadata.getAsTree(native_format); // Set the delay time you can see the format specification on this page // https://docs.oracle.com/javase/7/docs/api/javax/imageio/metadata/doc-files/gif_metadata.html IIOMetadataNode graphics_node = getNode("GraphicControlExtension", node_tree); graphics_node.setAttribute("delayTime", String.valueOf(delay)); graphics_node.setAttribute("disposalMethod", "none"); graphics_node.setAttribute("userInputFlag", "FALSE"); if(loop) makeLoopy(node_tree); metadata.setFromTree(native_format, node_tree); return metadata; } // Add an extra Application Extension node if the user wants it to be loopable // I am not sure about this part, got the code from StackOverflow // TODO: Study about this private static void makeLoopy(IIOMetadataNode root) { IIOMetadataNode app_extensions = getNode("ApplicationExtensions", root); IIOMetadataNode app_node = getNode("ApplicationExtension", app_extensions); app_node.setAttribute("applicationID", "NETSCAPE"); app_node.setAttribute("authenticationCode", "2.0"); app_node.setUserObject(new byte[]{ 0x1, (byte) (0 & 0xFF), (byte) ((0 >> 8) & 0xFF)}); app_extensions.appendChild(app_node); root.appendChild(app_extensions); } // Retrieve the node with the name from the parent root node // Append the node if the node with the given name doesn't exist private static IIOMetadataNode getNode(String node_name, IIOMetadataNode root) { IIOMetadataNode node = null; for (int i = 0; i < root.getLength(); i++) { 
if(root.item(i).getNodeName().compareToIgnoreCase(node_name) == 0) { node = (IIOMetadataNode) root.item(i); return node; } } // Append the node with the given name if it doesn't exist node = new IIOMetadataNode(node_name); root.appendChild(node); return node; } public static void main(String[] args) { String[] img_strings = {"sample-images/cool.png", "sample-images/cry.png", "sample-images/love.png", "sample-images/oh.png"}; try { Giffer.generateFromFiles(img_strings, "sample-images/output.gif", 40, true); } catch (Exception ex) { ex.printStackTrace(); } } } Answer: I agree with Peter Rader that static factories are not a good pattern. For this particular task I would be inclined to use a builder pattern. In particular, I would quite like to have a BufferedImage... images varargs parameter somewhere, and IMO the cleanest way of doing that would be for it to be the sole parameter of a constructor. generateFromBI opens an output stream, but doesn't seem to close it. The "best practices" way of doing this nowadays would be a try-with-resources statement. I haven't used Java 8, so I can't do better than to point you at the docs. // Not sure why I need the ImageTypeSpecifier, but it doesn't work without it ImageTypeSpecifier img_type = ImageTypeSpecifier.createFromBufferedImageType(BufferedImage.TYPE_INT_ARGB); Full marks for honest commenting, but you may not have realised that you're making a dangerous assumption. What if the image isn't of TYPE_INT_ARGB? The more robust way of doing this would be to create a fresh IIOMetadata for each image, using new ImageTypeSpecifier(img) to create an appropriate ImageTypeSpecifier for each one. The disposalMethod is hard-coded to "none". It would be nice to make this configurable, preferably via an enum rather than raw strings, in particular because using "doNotDispose" opens up the possibility of optimising the images to use transparency on areas which don't change. 
(And on the subject of exposing nice types rather than cryptic raw data, Java 8 provides Duration, which is nicer than raw centiseconds). app_node.setUserObject(new byte[]{ 0x1, (byte) (0 & 0xFF), (byte) ((0 >> 8) & 0xFF)}); is a bit odd. It seems to be saying "The structure here is very important and you might want to change it, but I won't document it for you". I would either use magic new byte[] { 1, 0, 0 } or pull out the parameters as new byte[]{ foo, bar, baz } with more descriptive names. Finally, some of the method names seem slightly misleading to me. get connotes "this object already exists and is stored somewhere", but several of these getXYZ methods are actually creating (or getting, with a creation fallback). I would favour renaming with a create or a findOrCreate prefix as appropriate for each case.
{ "domain": "codereview.stackexchange", "id": 17402, "tags": "java, beginner, image, animation" }
Algorithms and structural complexity theory
Question: Many important results in computational complexity theory, and in particular "structural" complexity theory, have the interesting property that they can be understood as fundamentally following (as I see it...) from algorithmic results giving an efficient algorithm or communication protocol for some problem. These include the following: IP = PSPACE follows from a space-efficient recursive algorithm simulating interactive protocols, and an efficient interactive protocol for evaluating totally quantified boolean formulae. In fact any complexity class equality A=B can be seen as following from two efficient algorithms (an algorithm for problems in A which is efficient with respect to B, and vice versa). Proving NP-completeness of some problem is just finding an efficient algorithm to reduce an NP-complete problem to it. The (arguably!) crucial ingredient in the Time Hierarchy Theorem is an efficient universal simulation of Turing machines. The recent result of Ryan Williams that ACC $\not \supset$ NEXP is based on an efficient algorithm to solve circuit satisfiability for ACC circuits. The PCP Theorem is that efficient gap amplification is possible for constraint satisfaction problems. etc., etc. My question (which is possibly hopelessly vague!) is as follows: Are there any important results in structural complexity theory (as distinct from "meta-results" like the relativisation barrier) which are not known to have a natural interpretation in terms of efficient algorithms (or communication protocols)? Answer: For many lower bounds in algebraic complexity, I do not know of a natural interpretation in terms of efficient algorithms. For example: the partial derivatives technique of Nisan and Wigderson the rank-of-Hessian technique of Mignon and Ressayre (giving the currently best known lower bound on permanent versus determinant) the degree bound of Strassen (and Baur-Strassen) the connected components technique of Ben-Or. 
In all the above results, they really seem to be using a property of the functions involved, where that property itself seems unrelated to the existence of any particular algorithm (let alone an effective one). For non-algebraic results, here are a couple of thoughts: The standard counting argument for the $n \log n$ sorting lower bound does not appear to have an interpretation in terms of efficient algorithms. However, there is an adversarial version of this lower bound [1], in which there is an algorithm which, given any decision tree that uses too few comparisons, efficiently constructs a list that the decision tree sorts incorrectly. But the adversarial version, while not difficult, is significantly more difficult than the counting proof. (Note that this is quite a bit stronger than what one gets by applying the adversary lower bound technique, e.g. as in these notes, since in [1] the adversary itself is efficient.) I think I've changed my mind about PARITY not in $AC^0$ (even the original proof, let alone the Razborov-Smolensky proof, as pointed out by @RobinKothari). Although the Switching Lemma can be viewed as a randomized (or deterministic) algorithm that lets you swap two rows of a circuit without a big blow-up in size, I think this really has a different flavor than many results in complexity, and specifically the ones you cite. For example, Williams's proof that $ACC \neq NEXP$ is based crucially on the existence of a good algorithm for a particular problem. In contrast, if one could prove something like the Switching Lemma nonconstructively, it would be just as good for proving PARITY not in $AC^0$. Because of these last two examples - especially sorting, where the standard proof is nonconstructive - it seems to me that the question may not just be about natural interpretations in terms of efficient algorithms, but also somehow about the constructiveness / effectiveness of the proofs of various complexity results (depending on what the OP had in mind). 
That is, the standard sorting lower bound is not constructive or algorithmic, but there is a constructive, algorithmic proof of the same result. [1] Atallah, M. J. and Kosaraju, S. R. An adversary-based lower bound for sorting. Inform. Proc. Lett. 13(2):55-57, 1981.
{ "domain": "cstheory.stackexchange", "id": 1746, "tags": "cc.complexity-theory, structural-complexity" }
What is the optimal distance to sit from a TV?
Question: I was wondering how to calculate the optimal distance to sit away from a TV. I don't quite know the full set of parameters it will depend on, I would suspect it to include the following: Size of the screen pixel size/resolution of the screen I've probably forgotten something... There will probably also be a biological factor, as in a property of the eye that matters. To take any biology out of the problem and make it purely physics let's assume any parameter necessary (probably the focal length of the eye? not sure though) without determining its value (although it would be great if someone knew it). I am more interested in which parameters determine this problem and the associated formula rather than actual values for my TV though. For simplicity we shall also assume the TV screen to be square (if it is easier you may take the freedom to change this to circular or any shape you like). Answer: So, the question is sort of broad but I'm bored enough. I describe my reasoning below. When we watch TV we are constantly looking at a screen, which I assumed to be circular with radius $L$. This is represented in this beautiful figure: Now, to make sense of what you're talking about I needed to compute some personal, yet generalizable, estimates. My criterion was really simple: the optimal angle $$\theta_{opt} = \tan^{-1} \left( \frac{L}{d_{opt}} \right)$$ is defined the following way: from very near, start walking away from the screen, stopping at each step to look at two different points, each located on the circle and diametrically opposed (that is, these two points must be $2L$ away from each other). The moment you feel comfortable looking at both points without moving your head, that is the optimal distance. This means you can watch TV by just moving your eyes and keeping your head still, which is my way of defining optimal. I did some testing for myself and noticed that $$\theta_{opt} = (16 \pm 1)^\circ \, .$$ So you can apply this to any TV size you want.
If your hypothetical round TV from the future has a $50 \, \text{cm}$ radius, then the optimal distance is $d_{opt} \approx 174 \pm 11 \, \text{cm}$. P.S.: I'm assuming perfect pixel separation and variable luminosity, which is pretty much the scenario with actual technology. P.P.S.: I checked and, strangely, this result pretty much mimics the official comfort distances provided by most of the sellers. That's nice!
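The geometry is easy to check numerically. A minimal sketch (my own; the 16° half-angle is the answerer's personally measured value, not a standard figure), using $d_{opt} = L/\tan(\theta_{opt})$:

```python
import math

def optimal_distance_cm(radius_cm, theta_opt_deg=16.0):
    # d_opt = L / tan(theta_opt), where theta_opt is the comfortable half-angle
    return radius_cm / math.tan(math.radians(theta_opt_deg))

d = optimal_distance_cm(50.0)  # the 50 cm round screen from the answer
```

For the 50 cm screen this gives roughly 174 cm, matching the answer; the ±11 cm spread follows from propagating the ±1° uncertainty in the angle.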
{ "domain": "physics.stackexchange", "id": 31939, "tags": "optics, geometric-optics, imaging" }
How can I obtain the correction to the eigenvalues as a result of adding the gravitational potential to the hydrogen-like atom?
Question: In treating the hydrogen-like atoms, we normally neglect the gravitational potential for the system. I am able to justify the above statement by comparing the gravitational potential to the electrostatic potential, but I need someone to help me on the following: How to obtain the exact eigenvalues of the complete system. Does this mean obtaining the eigenvalues whilst neglecting the gravitational potential? How to obtain the correction to the eigenvalues as a result of adding the gravitational potential. Here, in the expression for energy, I have added $Gmm$ to $\frac{Ze^2}{4 \pi \epsilon_{0}}$ where $G$ is the gravitational constant and $m$ is the mass of the hydrogen-like particle; I am not sure whether I have done the correct thing. How can I obtain the exact eigenfunctions of the whole system? Answer: The potential energy is still just $1/r$, $$V=-\frac{Ze^2}{4\pi\epsilon_0r}-\frac{Gm_pm_e}{r}=-\frac{Ze^2}{4\pi\epsilon_0}\left(1+\frac{4\pi\epsilon_0Gm_pm_e}{Ze^2}\right)\frac{1}{r},$$ so instead of the energy levels being $$E_n=-\left(\frac{Ze^2}{4\pi\epsilon_0}\right)^2\frac{1}{2\hbar^2}\frac{m_pm_e}{m_p+m_e}\frac{1}{n^2}$$ they are $$E_n=-\left(\frac{Ze^2}{4\pi\epsilon_0}\right)^2\left(1+\frac{4\pi\epsilon_0Gm_pm_e}{Ze^2}\right)^2\frac{1}{2\hbar^2}\frac{m_pm_e}{m_p+m_e}\frac{1}{n^2}.$$ For hydrogen, the dimensionless correction $\frac{4\pi\epsilon_0Gm_pm_e}{Ze^2}$ is only $4.4\times 10^{-40}$ and the energy shift of the ground state energy ($-13.6\,\text{eV}$) is only $-1.2\times 10^{-38}\,\text{eV}$. Since gravity is attractive, just like the electrostatic force between the proton and the electron, it makes the energy very slightly more negative. As @PM2Ring commented, the gravitational correction is absurdly small and is swamped by many other physical effects being neglected here, so this is a fairly pointless (but amusing) calculation.
For example, relativistic corrections to the kinetic energy and corrections due to electron and proton spin are much more important, which is why textbooks cover those effects but ignore this one. The eigenfunctions follow from a similar correction to the Bohr radius which I will let you work out using $$\frac{Ze^2}{4\pi\epsilon_0}\to\frac{Ze^2}{4\pi\epsilon_0}\left(1+\frac{4\pi\epsilon_0Gm_pm_e}{Ze^2}\right).$$
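The quoted numbers are easy to reproduce. A short sketch of my own (physical constants rounded to a few significant digits):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.6726e-27   # proton mass, kg
m_e = 9.1094e-31   # electron mass, kg
e = 1.6022e-19     # elementary charge, C
eps0 = 8.8542e-12  # vacuum permittivity, F/m
Z = 1              # hydrogen

# dimensionless correction 4*pi*eps0*G*m_p*m_e / (Z*e^2)
delta = 4 * math.pi * eps0 * G * m_p * m_e / (Z * e**2)

# relative shift of E_n is (1 + delta)^2 - 1, approximately 2*delta
shift_eV = 2 * delta * 13.6   # magnitude of the ground-state shift, eV
```

This reproduces the quoted $\delta \approx 4.4\times 10^{-40}$ and a ground-state shift of about $1.2\times 10^{-38}\,\text{eV}$.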
{ "domain": "physics.stackexchange", "id": 60676, "tags": "quantum-mechanics, homework-and-exercises, newtonian-gravity, atomic-physics, hydrogen" }
If I squash an insect and it produces red "juice", does it always mean it is a blood-sucking type?
Question: If I squash an insect and it produces red "juice", does it always mean it is a blood-sucking type of insect? Or do some insects have red "juice" themselves, so the color is red on its own and not caused by sucking a higher animal's blood? I squashed a small fly-like insect recently and it produced a red stain, so I am wondering if the stain came from the insect itself or if it has to be some other animal's blood. Answer: No - invertebrates don't have "blood", though they do have hemolymph. Hemolymph flows around the body cavity, rather than through vessels such as veins and capillaries, and comes into direct contact with tissues; generally it is not red. Hemolymph fills all of the interior (the hemocoel) of the animal's body and surrounds all cells. It contains hemocyanin, a copper-based protein that turns blue in color when oxygenated, instead of the iron-based hemoglobin in red blood cells found in vertebrates, thus giving hemolymph a blue-green color rather than the red color of vertebrate blood. When not oxygenated, hemolymph quickly loses its color and appears grey. The hemolymph of lower arthropods, including most insects, is not used for oxygen transport because these animals respirate directly from their body surfaces (internal and external) to air, but it does contain nutrients such as proteins and sugars. ~ http://en.wikipedia.org/wiki/Hemolymph The red you see from squashing, depending on the insect, could come from the blood of other animals or from (eye) pigments produced by the insect. For example, in my lab we have some Drosophila melanogaster with red eyes and some with white eyes (they carry a genetic mutation which represses the production of pigments in the eye) - if I squash a red-eyed fly, red "juice" is left on the desk; this doesn't happen with the white-eyed flies.
{ "domain": "biology.stackexchange", "id": 4935, "tags": "entomology" }
Introductory book on Logic and Computation
Question: Can you give me some suggestions about a good introductory (but comprehensive) book about Logic and Computation? Some fuzzy topics that I have in mind are: Presburger arithm., PA, ZF, ZFC, HOL Set theory, Type theory Modeling Computation (Turing machines) in different theories Links with computational complexity (FMT, descriptive complexity) Answer: I suggest one of the books I recently bought: Pavel Pudlak: Logical Foundations of Mathematics and Computational Complexity - A Gentle Introduction; Springer Monographs in Mathematics; 2013 I had not ("still haven't" :-) a strong background in logic and this book is helping me to better understand some "fundamental" aspects of logic and its relation with computation and complexity. Doubtless a good introductory book. The TOC and preface of the book are downloadable from Pudlak's home page and you can also find some excerpts of the book on http://books.google.com. From the Introduction: ... The first two chapters are an introduction to the foundations of mathematics and mathematical logic. The material is explained very informally and more detailed presentation is deferred to later chapters.... Chapter 3 is devoted to set theory, which is the most important part of the foundations of mathematics. The two main themes in this chapter are: (1) higher infinities as a source of powerful axioms, and (2) alternative axioms, such as the Axiom of Determinacy... Proofs of impossibility, the topic of Chapter 4, are proofs that certain tasks are impossible, contrary to the original intuition. Nowadays we tend to equate impossibility with unprovability and non-computability, which is a rather narrow view. Therefore, it is worth recalling that the first important impossibility results were obtained in different contexts: geometry and algebra. The most important result presented in this chapter is the Incompleteness Theorem of Kurt Gödel... Proofs of impossibility are, clearly, important in foundations.
One field in which the most basic problems are about proving impossibility is computational complexity theory, the topic of Chapter 5. But there are more connections between computational complexity and the foundations.... In fact, there is a field of research that studies connections between computational complexity and logic. It is called ‘Proof Complexity’ and it is presented in Chapter 6. Although we do have indications that complexity should play a relevant role in the foundations, we do not have any results proving this connection. ... Every book about the foundations of mathematics should mention the basic philosophical approaches to the foundations of mathematics. I also do it in Chapter 7, but as I am not a philosopher, the main part of the chapter rather concentrates on mathematical results and problems that are at the border of mathematics and philosophy ... It doesn't cover FMT and descriptive complexity, but there are a few good books that are focused on those topics (e.g. Leonid Libkin: Elements of Finite Model Theory; Texts in Theoretical Computer Science. An EATCS Series; 2004 ) I accepted my own answer because I haven't had the opportunity to read the book suggested by Trung Ta yet.
{ "domain": "cs.stackexchange", "id": 6751, "tags": "reference-request, logic, books" }
Prediction of fluctuations
Question: We can predict the probability of a fluctuation, but why can't we predict when a fluctuation is going to happen? Does that change if we change the type of system we're working with, for example if we had an approximately isolated system? Answer: If we could predict exactly when a fluctuation is going to happen, we would not be dealing with statistical physics and probability, we would instead be dealing with deterministic physics and equations of motion. For deterministic systems such as are analyzed in Newtonian Mechanics, when given sufficient starting conditions we can (usually) predict with certainty when a particular configuration will be reached. (The exception is chaotic systems.) Fluctuation usually means a random, unpredictable event. In non-deterministic systems we cannot predict with certainty, or doing so is too difficult. Instead we have to deal in probabilities of events happening. A probability cannot be assigned to an event without also specifying a time frame - e.g. occurrence within the next hour. Depending on the probability distribution, which is dictated by the nature of the processes causing the event, we can calculate the mean time between events. Provided that this is viewed as a statistical prediction, the accuracy of which depends on a large number of events being observed, the mean time between events can be predicted very accurately. If a system is isolated the predictions are more reliable, both for deterministic and random systems. External influences are usually unknown or unpredictable, and have unpredictable effects.
{ "domain": "physics.stackexchange", "id": 40344, "tags": "statistical-mechanics, probability" }
Showing that the integration measure is preserved under gauge transformation in the non-Abelian case
Question: I am trying to show that the integration measure we use in the Faddeev-Popov method of quantisation of non-Abelian gauge theory is invariant under a gauge transformation. I am using Peskin & Schroeder chapter 16.2. The gauge transformation of the gauge field is given by $$ (A^\alpha)^a_\mu=A^a_\mu+\frac{1}{g}D_\mu\alpha^a $$ which is in the adjoint representation as shown by the transformation. Now the integration measure we use in the functional integral is given by $$ \mathcal{D}A=\prod_x\prod_{a,\mu}dA^a_\mu $$ So when we take the gauge transformed measure we have $$ \mathcal{D}A^\alpha=\prod_x\prod_{a,\mu}d(A^\alpha)^a_\mu=\prod_x\prod_{a,\mu}\left( dA^a_\mu+\frac{1}{g}d(\partial_\mu\alpha^a)+f^{abc}d(A^b_\mu\alpha^c)\right) $$ This looks like a more complicated shift in our integration but I don't quite understand how they leave the measure invariant. The authors mention that this is a shift followed by a rotation of the components of $A_\mu^a$ but how can we see this explicitly? Some of my (maybe incorrect) reasoning: The second term in the transformed measure is just a shift and since we are integrating over fields $A^a_\mu(x)$ it indeed leaves the measure invariant. It's the third term that I really struggle to make sense of. Answer: What the authors mean is this: When you disregard the shift all that's left is $$ (\delta^{ab} + f^{abc}\alpha^c)\mathrm{d}A^b_\mu,$$ which is the infinitesimal version of a linear transformation generated by the matrix $M^{ab} = f^{abc}\alpha^c$. Since the structure constants are anti-symmetric, $M^{ab}$ is, too, and so it is the generator of a rotation.
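The rotation structure can be checked numerically. A small sketch of my own for SU(2), where the structure constants are the Levi-Civita symbol: the generator $M^{ab} = f^{abc}\alpha^c$ comes out antisymmetric, and the Jacobian of $dA \to (1+M)\,dA$ deviates from 1 only at second order in $\alpha$, so the measure is invariant for infinitesimal transformations (the $\alpha$ values below are arbitrary):

```python
import numpy as np

# SU(2) structure constants f^{abc}: the Levi-Civita symbol
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c] = 1.0
    f[a, c, b] = -1.0

alpha = 1e-4 * np.array([0.3, -0.7, 0.5])  # arbitrary small gauge parameters
M = np.einsum('abc,c->ab', f, alpha)       # M^{ab} = f^{abc} alpha^c

antisymmetric = np.allclose(M, -M.T)       # True: M generates a rotation
J = np.linalg.det(np.eye(3) + M)           # Jacobian of the color-space rotation
# J - 1 is O(alpha^2): the measure is preserved to first order in alpha
```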
{ "domain": "physics.stackexchange", "id": 88560, "tags": "gauge-theory, path-integral, gauge-invariance, yang-mills" }
Finding the percentile corresponding to a threshold
Question: I need to find which percentile of a group of numbers is over a threshold value. Is there a way that this can be sped up? My implementation is much too slow for the intended application. In case this changes anything, I am running my program using mpirun -np 100 python program.py. I cannot use numba, as the rest of this program uses try/except statements.

import numpy as np

my_vals = []
threshold_val = 0.065
for i in range(60000):
    my_vals.append(np.random.normal(0.05, 0.02))

upper_bound = 100
lower_bound = 0
perc = 50.0
val = np.percentile(my_vals, perc)
while abs(val - threshold_val) > 0.00001:
    print val
    if val > threshold_val:
        upper_bound = perc
        perc = (perc + lower_bound)/2
    else:
        lower_bound = perc
        perc = (perc+upper_bound)/2
    val = np.percentile(my_vals, perc)
print perc

Answer: If I understand your question, you are asking to determine the percentile value of the first value that exceeds the threshold. The thing to do is the following: Specify the number of random.normal deviates in the call and pre-sort the array. That way you can just look at each element in the array, knowing that the next one is bigger than the last. Just count the number of values that fail to exceed the threshold, stopping after you find one that exceeds it. No need to continue beyond this point. Then, do the arithmetic.

my_vals = sorted(np.random.normal(0.05, 0.02, 60000))
count_vals = 0
for i in my_vals:
    count_vals += 1
    if i > threshold_val:
        break
percentile_val = 100 * (count_vals/len(my_vals))
print('{0:0.6}'.format(percentile_val))

As a check, you could then calculate the value of the element at that percentile using np.percentile.

print(np.percentile(my_vals, percentile_val))

Even easier is to do the counting this way:

count_vals = sum(i > threshold_val for i in my_vals)
percentile_val = 100 * (count_vals/len(my_vals))
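Not part of the original answer, but the same idea can be fully vectorized with np.searchsorted, which performs the bisection on the pre-sorted array in compiled code:

```python
import numpy as np

rng = np.random.default_rng(0)
my_vals = np.sort(rng.normal(0.05, 0.02, 60000))
threshold_val = 0.065

# index of the first element strictly above the threshold
idx = np.searchsorted(my_vals, threshold_val, side='right')
percentile_val = 100.0 * idx / len(my_vals)
```

For this distribution the result should sit near $100\,\Phi(0.75) \approx 77\%$, since the threshold lies 0.75 standard deviations above the mean.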
{ "domain": "codereview.stackexchange", "id": 18857, "tags": "python, time-limit-exceeded, numpy, statistics" }
Precision of spectroscopy for astronomy
Question: How precise can the measurements be when looking at spectral lines in astrophysics? For example, suppose I have a telescope in orbit, and I am looking at $H_\alpha$ lines coming from a star at 613 pc. Is it realistic to observe the lines move from 656.282 nm to 656.282002854 nm? This would correspond to a radial velocity of about 1.3 m/s (I am not an astronomer, and I just want to get a feel for whether this is realistic to measure, or out of the question) Answer: While most measurements in astronomy are better in space, precision spectroscopy can actually do quite well on the ground. One of the best spectrographs (some would say the best) is HARPS, the High-Accuracy Radial Velocity Planetary Searcher used for finding extrasolar planets. As described in its instrument paper (pdf; note that the sole purpose of this paper is for the team to tout its own accomplishments), it operates from $380\ \mathrm{nm}$ to $690\ \mathrm{nm}$. HARPS can reliably get down to $1\ \mathrm{m}/\mathrm{s}$ precision, so yes, this accuracy can be achieved. HARPS, like other high-precision spectrographs, is of the echelle variety. In a very complicated way it spreads the spectrum over many rows of a CCD.1 Even then, the shifts one extracts usually are on the sub-pixel level. This is done by fitting many spectral lines presumed to be Doppler shifted by the same amount. If you were only allowed one line this would be somewhat more difficult. Given that the Sun's reflex motion due to Earth tugging on it is $10\ \mathrm{cm}/\mathrm{s}$, there is a big push in the exoplanet community to get even more precise measurements. 1 A traditional spectrograph spreads light from a slit in one direction, so the rectangular CCD will have one spectral axis and one spatial axis.
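The radial velocity quoted in the question follows from the non-relativistic Doppler relation $v = c\,\Delta\lambda/\lambda$. A quick check of my own:

```python
c = 2.998e8                  # speed of light, m/s
lam = 656.282                # H-alpha rest wavelength, nm
dlam = 656.282002854 - lam   # quoted wavelength shift, nm
v = c * dlam / lam           # radial velocity, m/s
```

This gives roughly 1.3 m/s, right at the quoted HARPS precision floor.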
{ "domain": "physics.stackexchange", "id": 10999, "tags": "astronomy, astrophysics, spectroscopy" }
Should features be correlated or uncorrelated for features-selection with the help of multiple regression analysis?
Question: I have seen researchers using Pearson correlation coefficient to find out the relevant features - to keep the features that have a high correlation value with the target. The implication is that the correlated features contribute more information in finding out the target in classification problems. Whereas, we remove the features which are redundant and have negligible correlation value. Q1) Should highly correlated features with the target variable be included or removed from classification problems? Is there a better/elegant explanation to this step? Q2) How do we know that the dataset is linear when there are multiple variables involved? What does it mean for a dataset to be linear? Q3) How to check for feature importance for the non-linear case? Answer: Q1) Should highly correlated features with the target variable be included or removed from classification and regression problems? Is there a better/elegant explanation to this step? Actually there's no strong reason either to keep or remove features which have a low correlation with the target response, other than reducing the number of features if necessary: It is correct that correlation is often used for feature selection. Feature selection is used for dimensionality reduction purposes, i.e. mostly to avoid overfitting due to having too many features / not enough instances (it's a bit more complex than this but that's the main idea). My point is that there's little to no reason to remove features if the number of features is not a problem, but if it is a problem then it makes sense to keep only the most informative features, and high correlation is an indicator of "informativeness" (information gain is another common measure to select features). In general feature selection methods based on measuring the contribution of individual features are used because they are very simple and don't require complex computations.
However they are rarely optimal because they don't take into account the complementarity of groups of features together, something that most supervised algorithms can use very well. There are more advanced methods available which can take this into account: the simplest one is a brute-force method which consists in repeatedly measuring the performance (usually with cross-validation) with every possible subset of features... But that can take a lot of time for a large set of features. However, features which are highly correlated together (i.e. between features, not with the target response) should usually be removed, because they are redundant and some algorithms don't deal very well with those. It's rarely done systematically though, because again this involves a lot of calculations. Q2) How do we know that the dataset is linear when there are multiple variables involved? What does it mean for a dataset to be linear? It's true that correlation measures are based on linearity assumptions, but that's rarely the main issue: as mentioned above it's used as an easy indicator of "amount of information" and it's known to be imperfect anyway, so the linearity assumption is not so crucial here. A dataset would be linear if the response variable can be expressed as a linear equation of the features (i.e. in theory one would obtain near-perfect performance with a linear regression). Q3) How to check feature importance for the non-linear case? Information gain, KL divergence, and probably a few other measures. But using these to select features individually is also imperfect.
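The linearity caveat is easy to see with a toy example of my own (not from the answer): $y = x^2$ is fully determined by $x$, yet their sample Pearson correlation is exactly zero on symmetric data, so a correlation-based filter would wrongly discard a perfectly informative feature.

```python
# Toy illustration: a perfectly informative feature with zero linear correlation
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

x = [-2, -1, 0, 1, 2]
y = [v * v for v in x]   # y is a deterministic function of x
r = pearson(x, y)        # 0.0: linear correlation sees nothing
```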
{ "domain": "datascience.stackexchange", "id": 7169, "tags": "classification, regression, feature-selection, pearsons-correlation-coefficient" }
Cyclic rotation of a Matrix
Question: What is meant by a cyclic rotation of a matrix, specifically in proving that $j*k-k*j=i*l$ where $j,k,l$ are cyclic rotations of the Pauli spin matrices sigma x, sigma y, and sigma z? Answer: The statement "A,B,C" are cyclic rotations (more often: cyclic permutations) of "E,F,G" means that "A,B,C" is either "E,F,G" or "F,G,E" or "G,E,F", in one of these three orders (but not in the remaining orders "E,G,F", "F,E,G", "G,F,E"). Relative to "E,F,G", these three letters were ordered by one of the three elements of the so-called cyclic group ${\mathbb Z}_3$. Similarly for cyclic rotations of $k$ elements and the group ${\mathbb Z}_k$. In your example, if you used a non-cyclic permutation of the Pauli matrices, you would get $j*k-k*j = -i*l$ with the minus sign on the right hand side because the Pauli matrices anticommute. In particular, let me stress that no operation is applied "inside" the matrices. They're treated as wholes, as elements of a set, that are being permuted with other elements.
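The sign pattern is easy to verify numerically. Note that with the Pauli matrices themselves the cyclic relation reads $[\sigma_j,\sigma_k]=2i\sigma_l$; the form $j*k-k*j=i*l$ in the question holds for the spin operators $S=\sigma/2$. A quick numpy check (my own sketch):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    return a @ b - b @ a

# cyclic orders (x,y,z), (y,z,x), (z,x,y) give +2i times the third matrix
ok_cyclic = (np.allclose(comm(sx, sy), 2j * sz)
             and np.allclose(comm(sy, sz), 2j * sx)
             and np.allclose(comm(sz, sx), 2j * sy))

# a non-cyclic order flips the sign, because the Pauli matrices anticommute
ok_noncyclic = np.allclose(comm(sy, sx), -2j * sz)
```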
{ "domain": "physics.stackexchange", "id": 5667, "tags": "linear-algebra" }
robot_pose_ekf with an external sensor
Question: I am trying to use the robot_pose_ekf package with my system, and am having trouble understanding what my tf tree should look like, and what frame the output of robot_pose_ekf is in. My robot is microcontroller based and is not running ROS on-board. Additionally it does not have any IMU information or visual sensing on-board. It reports its pose estimate (based on odometry) in an /odom frame at a fixed frequency. I have an external vision system tracking the robot as well. The transform from the /sensor_optical frame to the /odom frame is static and known. Every time the sensor transmits an estimate of the robot's position, I use this transform to transform the data into the /odom frame. Thus, the /odom topic comes from the robot's odometry, and its header.frame_id is /odom, and its child_frame_id is /base_footprint. The /vo topic comes from the external vision system, and its header.frame_id is /odom, and its child_frame_id is /base_footprint. When I run the system the robot_pose_ekf package runs without any errors. If I plot the pose published on the /odom and /vo topic, they are exactly what I expect, but the pose published on /robot_pose_ekf/odom_combined topic is not what I expect. Setting the covariances of the /vo topic to be very high, the pose output of /robot_pose_ekf/odom_combined is then the same "shape" as that of /odom, but it must be in some odd coordinate system. What am I doing wrong? What frame should this pose be expressed in? When I use tf2_visualization, I see that I have unconnected trees. robot_pose_ekf publishes a transform from /odom_combined -> /base_footprint, but that transform is unconnected from the rest of the tree. I definitely believe this is a problem, but I am unsure how to fix it. Finally, the pose published in /robot_pose_ekf/odom_combined says its frame_id is odom (but it doesn't actually seem to be). Thanks! 
Originally posted by jarvisschultz on ROS Answers with karma: 9031 on 2011-10-25 Post score: 5 Answer: Let's ignore the vision system for a moment, and focus on wheel odometry and robot_pose_ekf. Your TF tree should look like this (also see this ros-users thread): map --> odom_combined --> base_footprint --> base_link ... where map --> odom_combined is usually published by amcl (optional) odom_combined --> base_footprint is published by robot_pose_ekf base_footprint --> base_link is a fixed link in your URDF. The reason for this peculiar choice of frames is that in TF, each frame can only have one parent. Intuitively, one would like to have multiple parents of base_footprint: odom --> base_footprint for pure wheel odometry odom_combined --> base_footprint for EKF estimation map --> base_footprint for AMCL localization Since this is not possible in TF, the frames are published as a chain (see above). The important part for your problem is that when using robot_pose_ekf, tf publishing from the wheel odometry has to be switched off. Instead, you should only publish nav_msgs/Odometry messages (using odom_combined as the header frame id), which form the input for robot_pose_ekf and can also be visualized in RViz. In short: use odom_combined instead of odom everywhere (or the other way around, but be consistent about it) don't publish any TF messages, let robot_pose_ekf do that publish nav_msgs/Odometry messages for the wheel odometry and the overhead camera on the /odom and /vo topics, in both cases specifying odom_combined as the header frame id and base_footprint as the child_frame_id. I hope that helps; otherwise, could you post your TF tree, showing the unconnected frames? Originally posted by Martin Günther with karma: 11816 on 2011-10-31 This answer was ACCEPTED on the original site Post score: 19 Original comments Comment by Alireza on 2012-01-26: how can i implement base_footprint --> base_link transformation? 
Comment by mjcarroll on 2012-01-26: This is awesome info and should be somewhere in the navigation stack documentation. It took me a long time to put all the pieces together the first time that we tried using move_base, but this is very clear. Comment by Martin Günther on 2011-11-03: That's a good question, Jarvis. I think that is because there's a mismatch between REP-105 (which suggests "odom") and the way these frames are actually used on the PR2 and most other robots (which use "odom_combined"). See the mailing list thread linked in my answer. Comment by jarvisschultz on 2011-11-02: Thank you so much! That totally got me straightened out, and solved the issues that I was having. Given your explanation, I wonder why the geometry_msgs::PoseWithCovarianceStamped that is published by robot_pose_ekf has a hard-coded header.frame_id of "odom"? Comment by jxl on 2016-07-07: @Güntherad @jarvisschultz ,thanks very much .But when i tried according to your answer ,i got a problem when rosrun move_base node .It says "odom" passed to lookuptransform argument target_frame does not exist" . how can i modify move_base to fix this tf problem,thanks very much .
{ "domain": "robotics.stackexchange", "id": 7089, "tags": "ros, navigation, frame, robot-pose-ekf, transform" }
What is this nested bracket notation?
Question: The following is an excerpt from K. Varga's paper, Precise solution of few-body problems with stochastic variational method on correlated Gaussian basis: ...The function $\theta_{LM_L}(\mathbf{x})$ in Eq. (2), which represents the angular part of the wave function, is a generalization of $\mathcal{Y}$ and can be chosen as a vector-coupled product of solid spherical harmonics of the Jacobi coordinates $$ \theta_{LM_L}(\mathbf{x}) = [[[\mathcal{Y}_{l_1}(\mathbf{x}_1) \mathcal{Y}_{l_2}(\mathbf{x}_2)]_{L_{12}} \mathcal{Y}_{l_3}(\mathbf{x}_3)]_{L_{123}}, \ldots]_{LM_L}.\tag{5} $$ Each relative motion has a definite angular momentum... What is the RHS of the above equation? I've never seen this nested-bracket notation before, and as far as I can tell, it isn't defined in the paper. My first guess would be something like a commutator $$[A, B] = AB - BA$$ but that doesn't explain the subscripts on the closing brackets. Answer: This is pretty niche notation, and it is indeed not defined in the paper, but the name "vector-coupled product" does seem to be used by a few people beyond Varga and Suzuki. In essence, $$ [\mathcal Y_{l_1}(\mathbf x_1)\mathcal Y_{l_2}(\mathbf x_2)]_{LM} $$ is a coupled wavefunction with total angular momentum $L$ that's made up of the single-particle wavefunctions $\mathcal Y_{l_1}(\mathbf x_1)$ and $\mathcal Y_{l_2}(\mathbf x_2)$, which have angular momentum $l_1$ and $l_2$ respectively. This means that you need to couple them via Clebsch-Gordan coefficients as usual. Thus, the product above is given by $$ [\mathcal Y_{l_1}(\mathbf x_1)\mathcal Y_{l_2}(\mathbf x_2)]_{LM} = \sum_{m_1,m_2} \langle l_1m_1,l_2m_2|LM\rangle \mathcal Y_{l_1m_1}(\mathbf x_1)\mathcal Y_{l_2m_2}(\mathbf x_2) $$ where the sum is over all permissible magnetic quantum numbers for the individual particles.
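For readers who want to expand such couplings explicitly, here is a minimal pure-Python Clebsch-Gordan coefficient via the standard Racah formula. This is my own sketch, not code from the paper, and it is restricted to integer angular momenta, which suffices for solid spherical harmonics:

```python
from math import factorial, sqrt

def cg(j1, m1, j2, m2, J, M):
    """Clebsch-Gordan coefficient <j1 m1, j2 m2 | J M> for integer j's."""
    if M != m1 + m2 or not (abs(j1 - j2) <= J <= j1 + j2) or abs(M) > J:
        return 0.0
    pref = sqrt(
        (2 * J + 1)
        * factorial(j1 + j2 - J) * factorial(j1 - j2 + J)
        * factorial(-j1 + j2 + J) / factorial(j1 + j2 + J + 1)
    )
    pref *= sqrt(
        factorial(j1 + m1) * factorial(j1 - m1)
        * factorial(j2 + m2) * factorial(j2 - m2)
        * factorial(J + M) * factorial(J - M)
    )
    total = 0.0
    for k in range(0, j1 + j2 - J + 1):
        args = [k, j1 + j2 - J - k, j1 - m1 - k,
                j2 + m2 - k, J - j2 + m1 + k, J - j1 - m2 + k]
        if min(args) < 0:
            continue  # factorial of a negative number: term excluded
        term = 1.0
        for a in args:
            term *= factorial(a)
        total += (-1) ** k / term
    return pref * total
```

As a sanity check, coupling two $l=1$ harmonics to $L=0$ should give $\langle 1\,m, 1\,{-m}|0\,0\rangle = (-1)^{1-m}/\sqrt{3}$.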
{ "domain": "physics.stackexchange", "id": 24126, "tags": "quantum-mechanics, angular-momentum, computational-physics, notation, commutator" }
Is / (How is) the partition function related to the equipartition theorem?
Question: I see that the partition function is associated with the way particles are "partitioned" among energy levels. The equipartition theorem divides (partitions) average energy among degrees of freedom allowed to the particles. Equating the average energy based on the partition function, given by $$\langle E\rangle = -\frac{\partial \ln Z}{\partial \beta}\,,$$ with the average energy of the system given by the equipartition theorem, given by $\langle E\rangle = (f/2)kT$, where $f$ is the total number of degrees of freedom in the system, $\beta = 1/kT$, and $\langle E\rangle$ is the average energy of the system, we get $$ -\frac{\partial \ln Z}{\partial \beta} = \frac{f}{2}kT\,. $$ Simplifying (using $\beta = 1/kT$), we get $$ \ln Z = \frac{f}{2}\ln(kT) + const.\,, $$ or, $$ Z = (kT)^{f/2} \times const. $$ So, is it correct to say that: the partition function $Z$ shows how the average energy is partitioned among the degrees of freedom at a particular temperature OR the "partitions" of energy associated with the partition function $Z$ are actually the degrees of freedom allowed to the particles at a particular temperature. Answer: The partition function is related to the equipartition theorem (the theorem is derived from that function) but the word "partition" refers to a different thing in the two cases. Here is a general way to think of the partition function in any ensemble. If we assign a "multiplicity" $A_i$ to microstate $i$, then the probability of microstate $i$ is $$ p_i = \frac{A_i}{\sum_i A_i} = \frac{A_i}{\text{partition function}} $$ The denominator is the partition function of the ensemble and the summation is over all microstates that are compatible with the macroscopic state, whether this is $E,V,N$ (microcanonical), $T$, $V$, $N$ (canonical) or other. In the canonical ensemble we have $A_i = e^{-\beta E_i}$, where $E_i$ is the energy of the microstate. It is the special form of the energy and its dependence on the degrees of freedom that leads to equipartition of energy.
More specifically, equipartition is achieved in the limit $1/\beta = k T \to \infty$. So, while the partition function and the equipartition theorem are related to each other, the partition function is a more general tool that describes the probability of a microstate, while the equipartition theorem is a result that is based on the partition function in the limit of high temperature.
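As a concrete check of how the canonical partition function encodes the equipartition result, here is a small numeric sketch of my own (units with $k=1$) for a single quadratic degree of freedom $E(x)=x^2$: there $Z(\beta)\propto\beta^{-1/2}$, so $\langle E\rangle = -\partial\ln Z/\partial\beta$ should come out as $T/2$.

```python
import math

def Z(beta, n=40001, xmax=20.0):
    # crude Riemann-sum quadrature of Z(beta) = integral of exp(-beta x^2) dx
    dx = 2 * xmax / (n - 1)
    return sum(math.exp(-beta * (-xmax + i * dx) ** 2) for i in range(n)) * dx

T = 2.0
beta, db = 1.0 / T, 1e-5
# <E> = -d ln Z / d beta, via a central finite difference
E_avg = -(math.log(Z(beta + db)) - math.log(Z(beta - db))) / (2 * db)
# equipartition: one quadratic degree of freedom carries T/2
```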
{ "domain": "physics.stackexchange", "id": 94697, "tags": "thermodynamics, statistical-mechanics, partition-function" }
What is the effect of the tokens?
Question: What is the effect of the tokens that the model has if model A has 1B tokens and the other model has 12B tokens? Will that have an effect on the performance? Answer: The question is not precise enough; it depends on other factors: in general, a larger training set tends to lead to a better model. However it depends on whether the training set is really relevant and useful for the task. For example: if the larger dataset contains data from a different domain than the target task, the additional data might be useless; if the data contains a lot of errors or noise, it might cause the model to perform worse; if the larger data contains mostly duplicates, it's likely not to perform better. So larger data is good for performance only if the additional data is actually of good quality.
{ "domain": "datascience.stackexchange", "id": 10993, "tags": "nlp, tokenization" }
Is there a way to follow a trajectory with no collision detection?
Question: Hi all, Is there a way, in ROS, to do a simple trajectory following with no costmap and no collision detection? The trajectory is specified in terms of (x,y) waypoints. I can't figure out how to tweak the navigation stack in order to do this, and I would like to avoid setting the object perception distance (obstacle range) to 0. Thanks for your help, Guido Originally posted by Guido on ROS Answers with karma: 514 on 2011-08-16 Post score: 0 Answer: There are a few things you'll have to do. Firstly, you'll need to configure the costmaps with either a fake laser scanner (as Lorenzo Riano mentioned) or, I believe, just leave the obstacle_sources empty so that the costmaps assume there are no sensors. I haven't tested the latter, so I'm not sure if the costmaps will actually update if there are no sensors. Secondly, if you want it to follow a user-defined set of x,y points with no planning in between those points, you'll have to write your own BaseGlobalPlanner that just forwards your set of input points to the local planner. If you only need to get to one point at a time, you could try using the carrot_planner for global planning. Once you have the entire path being output properly, you'll need to tune whichever BaseLocalPlanner you are using (dwa_local_planner, base_local_planner, etc) to prefer following the path over heading directly towards the goal. See Tuning Guide for more info (specifically look for the path_distance_bias and goal_distance_bias parameters) Originally posted by Eric Perko with karma: 8406 on 2011-08-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Guido on 2011-08-17: I will try this solution. Thank you for the answer.
{ "domain": "robotics.stackexchange", "id": 6443, "tags": "ros, navigation, collision" }
What happens to charged particle during the process of conversion from matter to antimatter?
Question: During the conversion, is there a brief moment in which the proton has no electric charge (during the switch phase from plus to minus)? Could it be that in this moment the proton has no electromagnetic properties and thus 0 mass? (Is electromagnetism responsible for atomic mass?) Answer: This picture is wrong: a given particle (or anti-particle) cannot change its nature. A particle remains a particle and an anti-particle remains an anti-particle; there is no intermediate "switch phase". What can happen is that a particle and its anti-particle interact with each other. For example, a basic process is the following: $$ e^{+} + e^{-} \rightarrow 2\gamma$$ This means that an electron, in the scattering (interaction) process with its anti-particle, the positron, produces 2 photons as a result. In any case, interaction processes are also governed by conservation laws, such as the conservation of electric charge or of energy. In the case above, you can see immediately that the conservation of electric charge is respected: if you sum the charge of a particle and the charge of its anti-particle, you obtain 0, which is the charge of the photon $\gamma$ (the charge of an anti-particle has the same modulus but the opposite sign as that of the corresponding particle).
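As a quick bookkeeping check of the charge balance in the annihilation process above (charges in units of the elementary charge $e$):

```latex
q(e^{+}) + q(e^{-}) = (+1) + (-1) = 0 = q(\gamma) + q(\gamma)
```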
{ "domain": "physics.stackexchange", "id": 50962, "tags": "particle-physics, conservation-laws, charge, antimatter" }
Comparing video recording algorithms in Java: no-compression vs. naive diff compressor
Question: A demo of the program in use is on Facebook. Intro In this JavaFX program, a user is presented with a white canvas with a black circle at the center of the canvas. Then, the user has 10 seconds to record a video by dragging the circle via mouse. During the recording, there are two different video recording algorithms recording each frame. The algorithms are as follows: No-compression: each frame is recorded on a pixel-by-pixel basis; one bit per pixel (white vs. black), Naive compressor: the first frame is stored just like in 1, yet after that it records only the "diff" between two consecutive frames. More precisely, in algorithm 2, each new frame is computed as follows: Compare the previous and the next frames; store the number of pixels \$N\$ that have changed between the two. Next, for each changed pixel \$(x, y)\$, store its coordinates after the written value of \$N\$. Critique request Please tell me anything that comes to mind. Source code com.github.coderodde.compression.util.BitArrayBuilder.java: package com.github.coderodde.compression.util; import java.util.Arrays; /** * This class implements a bit array builder for storing the video frames. * * @author Rodion "rodde" Efremov * @version 1.6 (Jul 18, 2023) * @since 1.6 (Jul 18, 2023) */ public final class BitArrayBuilder { private static final int DEFAULT_LONG_ARRAY_CAPACITY = 50_000; /** * This array stores the actual bits. */ private long[] bitArray; /** * This field specifies how many bits there are in this builder. */ private int size; /** * Constructs a bit array builder with default capacity. */ public BitArrayBuilder() { this(DEFAULT_LONG_ARRAY_CAPACITY * Long.SIZE); } /** * Constructs a bit array builder capable of holding * {@code initialNumberOfBits} bits. * * @param initialNumberOfBits the initial requested capacity. */ public BitArrayBuilder(int initialNumberOfBits) { this.bitArray = new long[getInitialNumberOfLongs(initialNumberOfBits)]; } /** * Returns the number of bits stored in this builder. 
* * @return the number of bits. */ public int size() { return size; } /** * Appends {@code length} least-significant bits from {@code bitsToAppend} * to this bit array builder. * * @param bitsToAppend the {@code long} value holding the bits. * @param length the number of least-significant bits to append. */ public void appendBits(long bitsToAppend, int length) { expandTableIfNeeded(size + length); for (int bitIndex = 0; bitIndex < length; bitIndex++) { appendBit(((bitsToAppend >> bitIndex) & 0x1L) != 0); } } /** * Reads {@code length} bits starting from index {@code index}. * * @param index the leftmost index of the bit range to read. * @param length the length of the bit range to read. * @return the {@code length} bits. */ public long readBits(int index, int length) { long ret = 0L; for (int i = index, bitIndex = 0; i < index + length; i++, bitIndex++) { boolean bit = readBit(i); if (bit) { ret |= (1 << bitIndex); } } return ret; } /** * Reads the {@code index}th bit. * * @param index the index of the bit to read. * @return a bit, {@code true} for the 1 and {@code false} for the 0. */ private boolean readBit(int index) { int longIndex = index / Long.SIZE; long dataLong = bitArray[longIndex]; index %= Long.SIZE; long mask = 1L << index; return (dataLong & mask) != 0; } /** * Returns the shortest byte array capable of holding all the bits in this * bit array builder. * * @return the byte content of this builder. */ public byte[] getBytes() { int numberOfBytes = size / Byte.SIZE; int remainder = size % Byte.SIZE; int byteArrayLength = numberOfBytes + (remainder != 0 ? 1 : 0); byte[] bytes = new byte[byteArrayLength]; for (int byteIndex = 0; byteIndex < bytes.length; byteIndex++) { bytes[byteIndex] = getDataByte(byteIndex); } return bytes; } /** * Returns the {@code byteIndex}th byte. * * @param byteIndex the byte index. * @return the byte value at index {@code byteIndex}. 
*/ private byte getDataByte(int byteIndex) { long longData = bitArray[byteIndex / Long.BYTES]; byteIndex %= Long.BYTES; switch (byteIndex) { case 0: return (byte)(longData); case 1: return (byte)((longData >> 8) & 0xFF); case 2: return (byte)((longData >> 16) & 0xFF); case 3: return (byte)((longData >> 24) & 0xFF); case 4: return (byte)((longData >> 32) & 0xFF); case 5: return (byte)((longData >> 40) & 0xFF); case 6: return (byte)((longData >> 48) & 0xFF); case 7: return (byte)((longData >> 56) & 0xFF); default: throw new IllegalStateException("Should not get here ever."); } } /** * Implements the actual appending of a bit to the tail of this bit array * builder. * * @param bit the bit to append. If is {@code true}, a 1-bit is appended, * and 0-bit otherwise. */ private void appendBit(boolean bit) { int longIndex = size / Long.SIZE; int longBitsIndex = size - longIndex * Long.SIZE; long mask = 0x1L << longBitsIndex; if (bit) { bitArray[longIndex] |= mask; } else { bitArray[longIndex] &= ~mask; } size++; } /** * Computes the initial number of {@code long} values sufficient to hold * {@code initialNumberOfBits} bits. * * @param initialNumberOfBits the number of bits to accommodate. * @return the number of longs needed to store all the bits. */ private int getInitialNumberOfLongs(int initialNumberOfBits) { int longs = initialNumberOfBits / Long.SIZE; int remainder = initialNumberOfBits % Long.SIZE; return longs + (remainder != 0 ? 1 : 0); } /** * Doubles the capacity of the actual bit array. * * @param requestedCapacity the requested capacity. */ private void expandTableIfNeeded(int requestedCapacity) { if (requestedCapacity > bitArray.length * Long.SIZE) { long[] newBitArray = Arrays.copyOf( bitArray, Math.max(bitArray.length * 2, requestedCapacity)); int sizeToCopy = size / Long.SIZE + (size % Long.SIZE == 0 ? 
0 : 1); System.arraycopy(bitArray, 0, newBitArray, 0, sizeToCopy); bitArray = newBitArray; } } } com.github.coderodde.compression.util.CircleVideoShape.java: package com.github.coderodde.compression.util; import java.util.concurrent.atomic.AtomicInteger; /** * This class implements the circle video shape for recording. * * @author Rodion "rodde" Efremov * @version 1.6 (Jul 18, 2023) * @since 1.6 (Jul 18, 2023) */ public final class CircleVideoShape { /** * The width of the canvas that contains this circle. */ private final int pixelMatrixWidth; /** * The height of the canvas that contains this circle. */ private final int pixelMatrixHeight; /** * The radius of this circle. */ private final int radius; /** * The center X-coordinate. */ private final AtomicInteger centerX; /** * The center Y-coordinate. */ private final AtomicInteger centerY; /** * Constructs a new circle. * * @param pixelMatrixWidth the width of the containing canvas. * @param pixelMatrixHeight the height of the containing canvas. * @param radius the radius of the circle. */ public CircleVideoShape( int pixelMatrixWidth, int pixelMatrixHeight, int radius) { this.pixelMatrixWidth = pixelMatrixWidth; this.pixelMatrixHeight = pixelMatrixHeight; this.radius = radius; this.centerX = new AtomicInteger(pixelMatrixWidth / 2); this.centerY = new AtomicInteger(pixelMatrixHeight / 2); } public int getRadius() { return radius; } public int getCenterX() { return centerX.get(); } public int getCenterY() { return centerY.get(); } public void setCenterX(int centerX) { this.centerX.set(centerX); } public void setCenterY(int centerY) { this.centerY.set(centerY); } /** * Returns the entire pixel color matrix. * * @return pixel color matrix. 
*/ public PixelColor[][] getColorMatrix() { PixelColor[][] pixelColorMatrix = new PixelColor[pixelMatrixHeight] [pixelMatrixWidth]; for (int y = 0; y < pixelMatrixHeight; y++) { for (int x = 0; x < pixelMatrixWidth; x++) { pixelColorMatrix[y][x] = getPixelColorAt(x, y); } } return pixelColorMatrix; } /** * Computes the pixel color of the pixel at coordinates {@code (x, y)]}. * * @param x the X-coordinate of the pixel. * @param y the Y-coordinate of the pixel. * @return the actual color of the pixel. */ private PixelColor getPixelColorAt(int x, int y) { int dx = x - centerX.get(); int dy = y - centerY.get(); int dist2 = dx * dx + dy * dy; double dist = Math.sqrt(dist2); return dist > radius ? PixelColor.WHITE : PixelColor.BLACK; } } com.github.coderodde.compression.util.PixelColor.java: package com.github.coderodde.compression.util; public enum PixelColor { WHITE, BLACK; } com.github.coderodde.compression.util.Utils.java: package com.github.coderodde.compression.util; /** * This class contains various utility methods. * * @author Rodion "rodde" Efremov * @version 1.6 (Jul 18, 2023) * @since 1.6 (Jul 18, 2023) */ public final class Utils { public static void sleep(long milliseconds) { try { Thread.sleep(milliseconds); } catch (InterruptedException ex) { } } public static void sleep(long milliseconds, int nanoseconds) { try { Thread.sleep(milliseconds, nanoseconds); } catch (InterruptedException ex) { } } public static void sleep(SleepDuration sleepDuration) { try { Thread.sleep(sleepDuration.millisecondsPart, sleepDuration.nanosecondsPart); } catch (InterruptedException ex) { } } /** * Returns the minimum number of bits sufficient to store the value * {@code maximumValue}. * * @param maximumValue the integer value we wish to store. * @return number of bits sufficient to store the input value. 
*/ public static int computeNumberOfBitsToStore(int maximumValue) { if (maximumValue == 0) { return 1; } int bits = 0; while (maximumValue != 0) { maximumValue /= 2; bits++; } return bits; } /** * Returns the number of nanoseconds which it takes to call the * {@link Trhead.sleep}. * * @return the number of nanoseconds. */ public static long getThreadSleepCallDuration() { long start = System.nanoTime(); sleep(0L); long duration = System.nanoTime() - start; return Math.max(duration, 0L); } public static SleepDuration getFrameSleepDuration(int framesPerSecond) { long sleepCallDuration = getThreadSleepCallDuration(); long durationNanoseconds = 1_000_000_000 / framesPerSecond - sleepCallDuration; long durationMilliseconds = durationNanoseconds / 1_000_000; long durationNanosecondsPart = durationNanoseconds - durationMilliseconds * 1_000_000; return new SleepDuration(durationMilliseconds, (int) durationNanosecondsPart); } public static final class SleepDuration { public final long millisecondsPart; public final int nanosecondsPart; SleepDuration(long milliseconds, int nanoseconds) { this.millisecondsPart = milliseconds; this.nanosecondsPart = nanoseconds; } } } com.github.coderodde.compression.util.VideoScreenCanvas.java: package com.github.coderodde.compression.util; import javafx.scene.canvas.Canvas; import javafx.scene.canvas.GraphicsContext; import javafx.scene.image.Image; import javafx.scene.image.PixelWriter; import javafx.scene.image.WritableImage; import javafx.scene.paint.Color; /** * This class implements the video screen canvas. 
* * @author Rodion "rodde" Efremov * @version 1.6 (Jul 18, 2023) * @since 1.6 (Jul 18, 2023) */ public final class VideoScreenCanvas extends Canvas { public static final int VIDEO_SCREEN_CANVAS_WIDTH = 1200; public static final int VIDEO_SCREEN_CANVAS_HEIGHT = 800; private static final int CIRCLE_VIDEO_SHAPE_RADIUS = 100; private final GraphicsContext graphicsContext; private final CircleVideoShape circleVideoShape = new CircleVideoShape( VIDEO_SCREEN_CANVAS_WIDTH, VIDEO_SCREEN_CANVAS_HEIGHT, CIRCLE_VIDEO_SHAPE_RADIUS); private final int matrixWidth; private final int matrixHeight; public VideoScreenCanvas() { super(VIDEO_SCREEN_CANVAS_WIDTH, VIDEO_SCREEN_CANVAS_HEIGHT); this.matrixWidth = VIDEO_SCREEN_CANVAS_WIDTH; this.matrixHeight = VIDEO_SCREEN_CANVAS_HEIGHT; this.graphicsContext = getGraphicsContext2D(); clear(); paintCircleVideoShape(); } public void clear() { graphicsContext.setFill(Color.WHITE); graphicsContext.fillRect(0, 0, matrixWidth, matrixHeight); } public void paintCircleVideoShape() { graphicsContext.setFill(Color.BLACK); graphicsContext.fillOval( circleVideoShape.getCenterX() - circleVideoShape.getRadius(), circleVideoShape.getCenterY() - circleVideoShape.getRadius(), circleVideoShape.getRadius() * 2, circleVideoShape.getRadius() * 2); } public void drawFrame(Image image) { graphicsContext.drawImage(image, 0.0, 0.0); } public Image convertFramePixelsToImage(Color[][] framePixels) { WritableImage raster = new WritableImage( framePixels[0].length, framePixels.length); PixelWriter pixelWriter = raster.getPixelWriter(); for (int y = 0; y < framePixels.length; y++) { for (int x = 0; x < framePixels[0].length; x++) { pixelWriter.setColor(x, y, framePixels[y][x]); } } return raster; } public CircleVideoShape getCircleVideoShape() { return circleVideoShape; } } com.github.coderodde.compression.video.app.VideoCompressionApp.java: package com.github.coderodde.compression.video.app; import com.github.coderodde.compression.util.VideoScreenCanvas; import 
javafx.event.EventHandler; import javafx.application.Application; import javafx.scene.Scene; import javafx.scene.control.Alert; import javafx.scene.control.Alert.AlertType; import javafx.scene.input.MouseEvent; import javafx.scene.layout.StackPane; import javafx.stage.Stage; public final class VideoCompressionApp extends Application { static final int FRAMES_PER_SECOND = 25; static final int VIDEO_DURATION_SECONDS = 10; public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) throws Exception { stage.setTitle("VideoCompressionApp 1.6 - by coderodde"); showBeginRecordingHint(); VideoScreenCanvas videoScreenCanvas = new VideoScreenCanvas(); videoScreenCanvas.paintCircleVideoShape(); EventHandler<MouseEvent> mouseEventHandler = new EventHandler<MouseEvent>() { @Override public void handle(MouseEvent mouseEvent) { int x = (int) mouseEvent.getX(); int y = (int) mouseEvent.getY(); videoScreenCanvas.clear(); videoScreenCanvas.getCircleVideoShape().setCenterX(x); videoScreenCanvas.getCircleVideoShape().setCenterY(y); videoScreenCanvas.paintCircleVideoShape(); } }; videoScreenCanvas.addEventHandler( MouseEvent.MOUSE_DRAGGED, mouseEventHandler); stage.setResizable(false); StackPane root = new StackPane(); root.getChildren().add(videoScreenCanvas); stage.setScene(new Scene(root)); stage.show(); VideoRecordingThread videoRecordingThreadNoCompression = new VideoRecordingThread( videoScreenCanvas, VideoRecordingThread .VideoCompressionAlgorithm .NO_COMPRESSION); VideoRecordingThread videoRecordingThreadNaiveCompression = new VideoRecordingThread( videoScreenCanvas, VideoRecordingThread .VideoCompressionAlgorithm .NAIVE_COMPRESSOR); // Start recording. 
videoRecordingThreadNoCompression.start(); videoRecordingThreadNaiveCompression.start(); VideoCoordinatorThread videoCoordinatorThread = new VideoCoordinatorThread( videoScreenCanvas, videoRecordingThreadNoCompression, videoRecordingThreadNaiveCompression); videoCoordinatorThread.start(); } private static void showBeginRecordingHint() { Alert beginInfoAlert = new Alert(AlertType.INFORMATION); beginInfoAlert.setTitle("Beginning recording"); beginInfoAlert.setHeaderText( "Press OK and you will have 10 seconds to record the video."); beginInfoAlert.setContentText( "During the 10 seconds of recording, you can move " + "the black circle by dragging it with the mouse."); beginInfoAlert.showAndWait(); } } com.github.coderodde.compression.video.app.VideoCoordinatorThread.java: package com.github.coderodde.compression.video.app; import com.github.coderodde.compression.util.Utils; import com.github.coderodde.compression.util.VideoScreenCanvas; import javafx.application.Platform; import javafx.scene.control.Alert; import javafx.scene.control.Alert.AlertType; /** * This thread is a fast drop-in for coordinating the playback threads. 
* * @author Rodion "rodde" Efremov * @version 1.6 (Jul */ public final class VideoCoordinatorThread extends Thread { private static final long ITERATION_SLEEP_DURATION = 100L; private final VideoScreenCanvas videoScreenCanvas; private final VideoRecordingThread nonCompressiveVideoRecordingThread; private final VideoRecordingThread naiveCompressingVideoRecordingThread; public VideoCoordinatorThread( VideoScreenCanvas videoScreenCanvas, VideoRecordingThread nonCompressiveVideoRecordingThread, VideoRecordingThread naiveCompressingVideoRecordingThread) { this.videoScreenCanvas = videoScreenCanvas; this.nonCompressiveVideoRecordingThread = nonCompressiveVideoRecordingThread; this.naiveCompressingVideoRecordingThread = naiveCompressingVideoRecordingThread; } @Override public void run() { while (nonCompressiveVideoRecordingThread.isRunning() || naiveCompressingVideoRecordingThread.isRunning()) { Utils.sleep(ITERATION_SLEEP_DURATION); } Platform.runLater(() -> { Alert alert = new Alert(AlertType.INFORMATION); alert.setTitle("Before playback"); alert.setHeaderText( "After pressing OK, a recording via non-compressive " + "recording starts."); alert.setContentText("Press OK, to view non-compressed video."); alert.showAndWait(); }); VideoPlaybackThread nonCompressiveVideoPlaybackThread = new VideoPlaybackThread( videoScreenCanvas, VideoRecordingThread.VideoCompressionAlgorithm.NO_COMPRESSION, nonCompressiveVideoRecordingThread.getBitArrayBuilder()); nonCompressiveVideoPlaybackThread.start(); try { nonCompressiveVideoPlaybackThread.join(); } catch (InterruptedException ex) { } Platform.runLater(() -> { Alert alert = new Alert(AlertType.INFORMATION); alert.setTitle("Before playback"); alert.setHeaderText( "After pressing OK, a recording via naively-compressed " + "recording starts."); alert.setContentText( "Press OK, to view the video with naive compressor."); // REVIEW REQUEST: How to make the showAndWait() block this thread? 
alert.showAndWait(); }); VideoPlaybackThread naiveCompressorVideoPlaybackThread = new VideoPlaybackThread( videoScreenCanvas, VideoRecordingThread.VideoCompressionAlgorithm.NO_COMPRESSION, nonCompressiveVideoRecordingThread.getBitArrayBuilder()); naiveCompressorVideoPlaybackThread.start(); try { naiveCompressorVideoPlaybackThread.join(); } catch (InterruptedException ex) { } } } com.github.coderodde.compression.video.app.VideoPlaybackThread.java: package com.github.coderodde.compression.video.app; import com.github.coderodde.compression.util.BitArrayBuilder; import com.github.coderodde.compression.util.Utils; import com.github.coderodde.compression.util.VideoScreenCanvas; import static com.github.coderodde.compression.video.app.VideoRecordingThread.VideoCompressionAlgorithm.*; import javafx.scene.paint.Color; /** * This class implements the video playback thread. * * @author Rodion "rodde" Efremov * @version 1.6 (Jul 20, 2023) * @since 1.6 (Jul 20, 2023) */ public final class VideoPlaybackThread extends Thread { private final VideoScreenCanvas videoScreenCanvas; private final VideoRecordingThread.VideoCompressionAlgorithm algorithm; private final BitArrayBuilder bitArrayBuilder; private final Color[][] framePixels = new Color[VideoScreenCanvas.VIDEO_SCREEN_CANVAS_HEIGHT] [VideoScreenCanvas.VIDEO_SCREEN_CANVAS_WIDTH]; public VideoPlaybackThread( VideoScreenCanvas videoScreenCanvas, VideoRecordingThread.VideoCompressionAlgorithm algorithm, BitArrayBuilder bitArrayBuilder) { this.videoScreenCanvas = videoScreenCanvas; this.algorithm = algorithm; this.bitArrayBuilder = bitArrayBuilder; } @Override public void run() { switch (algorithm) { case NO_COMPRESSION: playbackWithNoCompression(); return; case NAIVE_COMPRESSOR: playbackWithNaiveCompressor(); return; default: throw new IllegalStateException( "Unknown playback algorithm: " + algorithm); } } private void playbackWithNoCompression() { long sleepDuration = 1000L / VideoCompressionApp.FRAMES_PER_SECOND; int 
frameBitsStartIndex = 0; int frameBitLength = VideoScreenCanvas.VIDEO_SCREEN_CANVAS_HEIGHT * VideoScreenCanvas.VIDEO_SCREEN_CANVAS_WIDTH; for (int frameIndex = 0; frameIndex < VideoCompressionApp.FRAMES_PER_SECOND * VideoCompressionApp.VIDEO_DURATION_SECONDS; frameIndex++) { // Load framePixels with new pixels: loadFramePixels(frameBitsStartIndex); draw(framePixels); // Advance towards the next frame: frameBitsStartIndex += frameBitLength; Utils.sleep(sleepDuration); } } private void loadFramePixels(int frameBitStartIndex) { int bitIndex = frameBitStartIndex; for (int y = 0; y < framePixels.length; y++) { for (int x = 0; x < framePixels[0].length; x++, bitIndex++) { long bit = bitArrayBuilder.readBits(bitIndex, 1); if (bit == 1L) { framePixels[y][x] = Color.BLACK; } else { framePixels[y][x] = Color.WHITE; } } } } private void draw(Color[][] framePixels) { videoScreenCanvas.drawFrame( videoScreenCanvas.convertFramePixelsToImage(framePixels)); } private void playbackWithNaiveCompressor() { long sleepDuration = 1000L / VideoCompressionApp.FRAMES_PER_SECOND; int frameBitLength = VideoScreenCanvas.VIDEO_SCREEN_CANVAS_HEIGHT * VideoScreenCanvas.VIDEO_SCREEN_CANVAS_WIDTH; // Draw the initial frame: loadFramePixels(0); draw(framePixels); Color[][] previousPixels = framePixels; BitIndexHolder bitIndexHolder = new BitIndexHolder(); bitIndexHolder.bitIndex = frameBitLength; for (int frameIndex = 1; frameIndex < VideoCompressionApp.FRAMES_PER_SECOND * VideoCompressionApp.VIDEO_DURATION_SECONDS; frameIndex++) { // Get the next frame:: Color[][] nextPixels = loadNextPixels( previousPixels, bitIndexHolder); draw(nextPixels); previousPixels = nextPixels; Utils.sleep(sleepDuration); } } private Color[][] loadNextPixels(Color[][] previousPixels, BitIndexHolder bitIndexHolder) { // Compute the minimum number of bits sufficient to represent the // integer no larger than the number of pixels in a frame: int bitsInNumberOfPixels = Utils.computeNumberOfBitsToStore( 
VideoScreenCanvas.VIDEO_SCREEN_CANVAS_HEIGHT * VideoScreenCanvas.VIDEO_SCREEN_CANVAS_WIDTH); // Compute the minimum number of bits sufficient to represent any valid // pixel X-coordinate: int bitsInXCoordinate = Utils.computeNumberOfBitsToStore( VideoScreenCanvas.VIDEO_SCREEN_CANVAS_WIDTH); // Compute the minimum number of bits sufficient to represent any valid // pixel Y-coordinate: int bitsInYCoordinate = Utils.computeNumberOfBitsToStore( VideoScreenCanvas.VIDEO_SCREEN_CANVAS_HEIGHT); // Read the number of pixels that changed between the previous and // next frames: int numberOfChangedPixels = (int) bitArrayBuilder.readBits( bitIndexHolder.bitIndex, bitsInNumberOfPixels); bitIndexHolder.bitIndex += bitsInNumberOfPixels; Color[][] nextPixels = new Color[VideoScreenCanvas.VIDEO_SCREEN_CANVAS_HEIGHT] [VideoScreenCanvas.VIDEO_SCREEN_CANVAS_WIDTH]; for (int pixelIndex = 0; pixelIndex < numberOfChangedPixels; pixelIndex++) { // Read the X-coordinate of the pixel that changed between previous // and next frames: int x = (int) bitArrayBuilder.readBits( bitIndexHolder.bitIndex, bitsInXCoordinate); // Advance towards the Y-coordinate: bitIndexHolder.bitIndex += bitsInXCoordinate; // Read the Y-coordinate of the pixel that changed between previous // and next frames: int y = (int) bitArrayBuilder.readBits( bitIndexHolder.bitIndex, bitsInYCoordinate); // Advance towards the next pixel: bitIndexHolder.bitIndex += bitsInYCoordinate; Color previousPixelColor = previousPixels[y][x]; Color nextPixelColor = flipColor(previousPixelColor); nextPixels[y][x] = nextPixelColor; } return nextPixels; } private static Color flipColor(Color color) { return color == Color.WHITE ? 
Color.BLACK : Color.WHITE; } private static class BitIndexHolder { int bitIndex; } } com.github.coderodde.compression.video.app.VideoRecordingThread.java: package com.github.coderodde.compression.video.app; import com.github.coderodde.compression.util.BitArrayBuilder; import com.github.coderodde.compression.util.PixelColor; import com.github.coderodde.compression.util.Utils; import com.github.coderodde.compression.util.Utils.SleepDuration; import com.github.coderodde.compression.util.VideoScreenCanvas; import java.awt.Point; import java.util.HashSet; import java.util.Set; import java.util.concurrent.atomic.AtomicBoolean; import javafx.application.Platform; public final class VideoRecordingThread extends Thread { public static enum VideoCompressionAlgorithm { NO_COMPRESSION, NAIVE_COMPRESSOR; } private final int framesToRecord; private final VideoScreenCanvas videoScreenCanvas; private final VideoCompressionAlgorithm compressionAlgorithm; private BitArrayBuilder bitArrayBuilder; private final AtomicBoolean isRunning = new AtomicBoolean(true); public VideoRecordingThread( VideoScreenCanvas videoScreenCanvas, VideoCompressionAlgorithm compressionAlgorithm) { this.videoScreenCanvas = videoScreenCanvas; this.framesToRecord = VideoCompressionApp.FRAMES_PER_SECOND * VideoCompressionApp.VIDEO_DURATION_SECONDS; this.compressionAlgorithm = compressionAlgorithm; } public BitArrayBuilder getBitArrayBuilder() { return this.bitArrayBuilder; } public boolean isRunning() { return isRunning.get(); } @Override public void run() { switch (compressionAlgorithm) { case NO_COMPRESSION: recordWithNoCompression(); isRunning.set(false); return; case NAIVE_COMPRESSOR: recordWithNaiveCompression(); isRunning.set(false); return; } throw new IllegalStateException( "Unknown compression algorithm: " + compressionAlgorithm); } private void recordWithNoCompression() { SleepDuration sleepDuration = Utils.getFrameSleepDuration( VideoCompressionApp.FRAMES_PER_SECOND); int bitArrayBuilderCapacity = 
VideoCompressionApp.FRAMES_PER_SECOND * VideoCompressionApp.VIDEO_DURATION_SECONDS * VideoScreenCanvas.VIDEO_SCREEN_CANVAS_WIDTH * VideoScreenCanvas.VIDEO_SCREEN_CANVAS_HEIGHT; bitArrayBuilder = new BitArrayBuilder(bitArrayBuilderCapacity); for (int frameIndex = 0; frameIndex < framesToRecord; frameIndex++) { Platform.runLater(() -> { videoScreenCanvas.clear(); videoScreenCanvas.paintCircleVideoShape(); }); Utils.sleep(sleepDuration); PixelColor[][] pixelMatrix = videoScreenCanvas .getCircleVideoShape() .getColorMatrix(); for (PixelColor[] pixelMatrixRow : pixelMatrix) { for (PixelColor pixelColor : pixelMatrixRow) { bitArrayBuilder.appendBits( pixelColor == PixelColor.WHITE ? 0L : 1L, 1); } } } System.out.println( "Bits in no compression bit array: " + bitArrayBuilder.size()); } private void recordWithNaiveCompression() { SleepDuration sleepDuration = Utils.getFrameSleepDuration( VideoCompressionApp.FRAMES_PER_SECOND); bitArrayBuilder = new BitArrayBuilder(); // Process the first frame: PixelColor[][] previousPixelMatrix = processFirstFrame(); Utils.sleep(sleepDuration); for (int frameIndex = 1; frameIndex < framesToRecord; frameIndex++) { Platform.runLater(() -> { videoScreenCanvas.clear(); videoScreenCanvas.paintCircleVideoShape(); }); Utils.sleep(sleepDuration); PixelColor[][] currentPixelMatrix = videoScreenCanvas.getCircleVideoShape().getColorMatrix(); loadBits(previousPixelMatrix, currentPixelMatrix); previousPixelMatrix = currentPixelMatrix; } System.out.println( "Bits in naive compressor bit array: " + bitArrayBuilder.size()); } private void loadBits(PixelColor[][] previousPixelMatrix, PixelColor[][] currentPixelMatrix) { int matrixHeight = previousPixelMatrix.length; int matrixWidth = previousPixelMatrix[0].length; int matrixYBitLength = Utils.computeNumberOfBitsToStore(matrixHeight); int matrixXBitLength = Utils.computeNumberOfBitsToStore(matrixWidth); Set<Point> changedPixelPoints = new HashSet<>(); for (int y = 0; y < previousPixelMatrix.length; y++) { 
for (int x = 0; x < previousPixelMatrix[0].length; x++) { PixelColor previousPixelColor = previousPixelMatrix[y][x]; PixelColor currentPixelColor = currentPixelMatrix[y][x]; if (previousPixelColor != currentPixelColor) { changedPixelPoints.add(new Point(x, y)); } } } int changedPixelPointsBitLength = Utils.computeNumberOfBitsToStore( VideoScreenCanvas.VIDEO_SCREEN_CANVAS_WIDTH * VideoScreenCanvas.VIDEO_SCREEN_CANVAS_HEIGHT); bitArrayBuilder.appendBits(changedPixelPoints.size(), changedPixelPointsBitLength); for (Point point : changedPixelPoints) { bitArrayBuilder.appendBits(point.x, matrixXBitLength); bitArrayBuilder.appendBits(point.y, matrixYBitLength); } } private PixelColor[][] processFirstFrame() { PixelColor[][] pixelMatrix = videoScreenCanvas.getCircleVideoShape().getColorMatrix(); for (int y = 0; y < pixelMatrix.length; y++) { for (int x = 0; x < pixelMatrix[0].length; x++) { if (pixelMatrix[y][x].equals(PixelColor.WHITE)) { bitArrayBuilder.appendBits(0L, 1); } else { bitArrayBuilder.appendBits(1L, 1); } } } return pixelMatrix; } } Typical output After the recording is done, both the recording threads output the number of bits it took to store the entire video. Looks like this: Bits in no compression bit array: 240000000 Bits in naive compressor bit array: 27451650 GitHub The entire repository is here. 
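As a sanity check, the no-compression figure follows directly from the recording constants: 25 fps times 10 seconds times 1200 x 800 pixels at one bit per pixel. A tiny standalone check (constants duplicated here rather than imported from the app):

```java
// Verifies that the reported uncompressed size follows from the recording
// constants: frames * pixels-per-frame * 1 bit per pixel.
public final class UncompressedSizeCheck {

    static long uncompressedBits(int fps, int seconds, int width, int height) {
        long frames = (long) fps * seconds;
        return frames * width * height; // one bit per black/white pixel
    }

    public static void main(String[] args) {
        // 25 fps * 10 s * 1200 * 800 = 240_000_000 bits, matching the output above.
        System.out.println(uncompressedBits(25, 10, 1200, 800));
    }
}
```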
Answer: Some improvements: VideoCompressionApp class: This piece of code can be replaced with something more elegant: EventHandler<MouseEvent> mouseEventHandler = new EventHandler<MouseEvent>() { @Override public void handle(MouseEvent mouseEvent) { int x = (int) mouseEvent.getX(); int y = (int) mouseEvent.getY(); videoScreenCanvas.clear(); videoScreenCanvas.getCircleVideoShape().setCenterX(x); videoScreenCanvas.getCircleVideoShape().setCenterY(y); videoScreenCanvas.paintCircleVideoShape(); } }; You can replace it with a lambda: EventHandler<MouseEvent> mouseEventHandler = mouseEvent -> { int x = (int) mouseEvent.getX(); int y = (int) mouseEvent.getY(); videoScreenCanvas.clear(); videoScreenCanvas.getCircleVideoShape().setCenterX(x); videoScreenCanvas.getCircleVideoShape().setCenterY(y); videoScreenCanvas.paintCircleVideoShape(); }; Instead of duplicating code here: VideoRecordingThread videoRecordingThreadNoCompression = new VideoRecordingThread( videoScreenCanvas, VideoRecordingThread .VideoCompressionAlgorithm .NO_COMPRESSION); VideoRecordingThread videoRecordingThreadNaiveCompression = new VideoRecordingThread( videoScreenCanvas, VideoRecordingThread .VideoCompressionAlgorithm .NAIVE_COMPRESSOR); You can extract a method: private VideoRecordingThread createVideoRecordingThread(VideoCompressionAlgorithm compressionAlgorithm) For the VideoRecordingThread class: public static enum VideoCompressionAlgorithm { NO_COMPRESSION, NAIVE_COMPRESSOR; } static is redundant here; nested enum types are implicitly static. Since you are working with JDK 20 (just saw it in your pom.xml), you can convert the traditional switch statement to the more concise arrow-style switch, available since Java 14. 
So instead of: @Override public void run() { switch (compressionAlgorithm) { case NO_COMPRESSION: recordWithNoCompression(); isRunning.set(false); return; case NAIVE_COMPRESSOR: recordWithNaiveCompression(); isRunning.set(false); return; } throw new IllegalStateException( "Unknown compression algorithm: " + compressionAlgorithm); } it is better to write: @Override public void run() { switch (compressionAlgorithm) { case NO_COMPRESSION -> { recordWithNoCompression(); isRunning.set(false); return; } case NAIVE_COMPRESSOR -> { recordWithNaiveCompression(); isRunning.set(false); return; } } throw new IllegalStateException( "Unknown compression algorithm: " + compressionAlgorithm); } using the arrow-style switch syntax. For the BitArrayBuilder class: instead of your getDataByte(), I wrote my own as: private byte getDataByte(int byteIndex) { long longData = bitArray[byteIndex / Long.BYTES]; byteIndex %= Long.BYTES; return switch (byteIndex) { case 0 -> (byte) (longData); case 1 -> (byte) ((longData >> 8) & 0xFF); case 2 -> (byte) ((longData >> 16) & 0xFF); case 3 -> (byte) ((longData >> 24) & 0xFF); case 4 -> (byte) ((longData >> 32) & 0xFF); case 5 -> (byte) ((longData >> 40) & 0xFF); case 6 -> (byte) ((longData >> 48) & 0xFF); case 7 -> (byte) ((longData >> 56) & 0xFF); default -> throw new IllegalStateException("Should not get here ever."); }; } Utils class: Since this is a utility class containing only static members, you should declare a private constructor so that any attempt to instantiate it becomes a compilation error.
You have duplicate code here: public static void sleep(long milliseconds, int nanoseconds) { try { Thread.sleep(milliseconds, nanoseconds); } catch (InterruptedException ex) { } } public static void sleep(SleepDuration sleepDuration) { try { Thread.sleep(sleepDuration.millisecondsPart, sleepDuration.nanosecondsPart); } catch (InterruptedException ex) { } } This could be: public static void sleep(long milliseconds, int nanoseconds) { try { Thread.sleep(milliseconds, nanoseconds); } catch (InterruptedException ex) { } } public static void sleep(SleepDuration sleepDuration) { sleep(sleepDuration.millisecondsPart, sleepDuration.nanosecondsPart); } You are catching exceptions without handling them in the catch block; this is not best practice. At least log something. PixelColor class: You have a redundant semicolon after the BLACK constant.
{ "domain": "codereview.stackexchange", "id": 44993, "tags": "java, compression, canvas, javafx, video" }
Learning OOP PHP, simple MySQL connection class.
Question: I have posted an earlier version of this and here is the improved version from the feedback I received. Some of the feedback I received was: Don't chain methods (tried my best to limit this) Do not use print() or die() as an error response (still a little lost, but I did attempt to use a redirect to a custom error page) I tried my best to rewrite the code, I am still very new to this so go easy. I did read up on OOP methodology and read about Interfacing and Implementation, so I tried to incorporate that in my class. I would love to see some different ideas on how to make this as efficient as possible. I am sure it needs a ton of changing, but that is why I am here, to help me learn and grow. I am a visual learner so if possible actual code would be awesome, but any response would be greatly appreciated. <?php class Mysql { private $user; private $pass; private $data; private $host; public function __construct($user,$pass,$data,$host) { $this->user = $user; $this->pass = $pass; $this->data = $data; $this->host = $host; $this->process(); } /* INTERFACE */ private function process() { if($this->verifyNullFields()==true) { if($this->verifyDatabaseConnection()==true) { if($this->verifyDatabaseExist()==true) { print('ALL PASSED'); //for debugging } else { print('redirect to custom error page will go here'); } } else { print('redirect to custom error page will go here'); } } else { print('redirect to custom error page will go here'); } } /* IMPLEMENTATIONS */ private function verifyNullFields() { if($this->user != NULL) { if($this->data != NULL) { if($this->host != NULL) { return true; } else { return false; } } else { return false; } } else { return false; } } private function verifyDatabaseConnection() { $link = @mysql_connect($this->host,$this->user,$this->pass); if(!$link) { return false ; } else { return true; } } private function verifyDatabaseExist() { $db = @mysql_select_db($this->data); if(!$db) { return false; } else { return true; } } } ?> <?php
$m = new Mysql("root","","magic","localhost"); ?> Answer: Let me show my version of this code: DatabaseException.php: class DatabaseException extends Exception { } Database.php: abstract class Database { protected $login; protected $password; protected $database; protected $hostname; public function __construct($login, $password, $database, $hostname) { // NB: password not checked and may be empty $this->throwExceptionIfNotSet('login', $login); $this->throwExceptionIfNotSet('database', $database); $this->throwExceptionIfNotSet('hostname', $hostname); $this->login = $login; $this->password = $password; $this->database = $database; $this->hostname = $hostname; } private function throwExceptionIfNotSet($argName, $argValue) { if (empty($argValue)) { throw new DatabaseException("'${argName}' not set"); } } } Mysql.php: class Mysql extends Database { private $link = null; public function __construct($login, $password, $database, $hostname) { parent::__construct($login, $password, $database, $hostname); $this->connect(); $this->selectDatabase(); } public function connect() { if (! is_null($this->link)) { return; } // store the connection resource so selectDatabase() can use it $this->link = @mysql_connect($this->hostname, $this->login, $this->password); if (! $this->link) { throw new DatabaseException( sprintf( 'Cannot connect to database. mysql_connect() to %s with login %s fails', $this->hostname, $this->login ) ); } } public function selectDatabase() { $ret = @mysql_select_db($this->database, $this->link); if (! $ret) { throw new DatabaseException("Cannot select database {$this->database}"); } } } application.php: try { $db = new Mysql('root', '', 'magic', 'localhost'); print('ALL PASSED'); //for debugging } catch (DatabaseException $ex) { print('redirect to custom error page will go here'); }
{ "domain": "codereview.stackexchange", "id": 1662, "tags": "php, php5, object-oriented" }
what is variable capture in nominal logic?
Question: While reading a paper Nominal Unification from a Higher-Order Perspective, in the abstract I noticed something that felt a bit confusing: Nominal Logic is an extension of first-order logic with equality, name-binding, name-swapping, and freshness of names. Contrarily to higher-order logic, bound variables are treated as atoms, and only free variables are proper unknowns in nominal unification. This allows “variable capture”, breaking a fundamental principle of lambda-calculus. I know a bit about nominal logic, but as far as I know it respects $\alpha$-conversion, such as $\lambda x.x \approx_\alpha \lambda y.y$, so I could not understand the above statement. I understand the notion of variable capture, which leads to a wrong result during substitution. So does variable capture not lead to an incorrect result in nominal logic? What does it mean to say that allowing “variable capture” breaks a fundamental principle here? Can anyone explain with examples? Answer: Variable capture is the phenomenon which "breaks" things when you do your substitutions in a naive way. For example: Correct: in the expression $$\int_0^1 (a + x)^2 dx$$ substitute $t^2$ for $a$, to get $$\int_0^1 (t^2 + x)^2 dx.$$ Correct: in the expression $$\int_0^1 (a + t)^2 dt$$ substitute $t^2$ for $a$, to get $$\int_0^1 (t^2 + u)^2 du$$ (we renamed the bound variable $t$ to $u$ on the fly to avoid capturing $t$). Incorrect: in the expression $$\int_0^1 (a + t)^2 dt$$ substitute $t^2$ for $a$, to get $$\int_0^1 (t^2 + t)^2 dt.$$ (We ended up with a different integral because $t$ was captured by $dt$). Similarly, in $\lambda$-calculus: Correct: $(\lambda x . \lambda y . y x)\, y = \lambda z . z y$ Incorrect: $(\lambda x . \lambda y . y x)\, y = \lambda y . y y$ (because $y$ was captured).
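To make the capture phenomenon concrete, here is a small Python sketch (not from the paper; the term representation and all names are my own) contrasting naive substitution with capture-avoiding substitution on the answer's $\lambda$-calculus example:

```python
# Lambda terms as nested tuples:
# ('var', name) | ('lam', name, body) | ('app', f, a)

def free_vars(t):
    kind = t[0]
    if kind == 'var':
        return {t[1]}
    if kind == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def naive_subst(t, x, s):
    """Replace free occurrences of x by s WITHOUT renaming binders."""
    kind = t[0]
    if kind == 'var':
        return s if t[1] == x else t
    if kind == 'lam':
        if t[1] == x:  # x is shadowed here; stop
            return t
        return ('lam', t[1], naive_subst(t[2], x, s))
    return ('app', naive_subst(t[1], x, s), naive_subst(t[2], x, s))

def fresh(avoid):
    for name in 'zuvw':
        if name not in avoid:
            return name
    raise RuntimeError('ran out of fresh names')

def subst(t, x, s):
    """Capture-avoiding substitution: rename binders that would capture."""
    kind = t[0]
    if kind == 'var':
        return s if t[1] == x else t
    if kind == 'lam':
        y, body = t[1], t[2]
        if y == x:
            return t
        if y in free_vars(s):  # y would capture a free variable of s
            z = fresh(free_vars(body) | free_vars(s) | {x})
            body = subst(body, y, ('var', z))
            y = z
        return ('lam', y, subst(body, x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

# beta-reducing (\x. \y. y x) y means substituting y for x in \y. y x
inner = ('lam', 'y', ('app', ('var', 'y'), ('var', 'x')))
print(naive_subst(inner, 'x', ('var', 'y')))  # \y. y y  -- wrong: y captured
print(subst(inner, 'x', ('var', 'y')))        # \z. z y  -- correct
```

The naive version reproduces the "Incorrect" reduction above, while the capture-avoiding version renames the binder on the fly, exactly like the $t \to u$ renaming in the integral example.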
{ "domain": "cs.stackexchange", "id": 7775, "tags": "programming-languages, logic, variable-binding" }
Boost CRC example program file
Question: I'm currently looking at this Boost::CRC example code which I have also inserted below. I always try to look for suggestions for improving my own coding style when I encounter well-written and well-formatted code. This code definitely looks like good code, but two things puzzle me about this: is it good practice to copy #defined constants to local const variables before using them? Why would this be a good idea? is it actually acceptable to just omit the parenthesis around a function body if the body is a try-catch-block? // Boost CRC example program file ------------------------------------------// // Copyright 2003 Daryle Walker. Use, modification, and distribution are // subject to the Boost Software License, Version 1.0. (See accompanying file // LICENSE_1_0.txt or a copy at <http://www.boost.org/LICENSE_1_0.txt>.) // See <http://www.boost.org/libs/crc/> for the library's home page. // Revision History // 17 Jun 2003 Initial version (Daryle Walker) #include <boost/crc.hpp> // for boost::crc_32_type #include <cstdlib> // for EXIT_SUCCESS, EXIT_FAILURE #include <exception> // for std::exception #include <fstream> // for std::ifstream #include <ios> // for std::ios_base, etc. #include <iostream> // for std::cerr, std::cout #include <ostream> // for std::endl // Redefine this to change to processing buffer size #ifndef PRIVATE_BUFFER_SIZE #define PRIVATE_BUFFER_SIZE 1024 #endif // Global objects std::streamsize const buffer_size = PRIVATE_BUFFER_SIZE; // Main program int main ( int argc, char const * argv[] ) try { boost::crc_32_type result; for ( int i = 1 ; i < argc ; ++i ) { std::ifstream ifs( argv[i], std::ios_base::binary ); if ( ifs ) { do { char buffer[ buffer_size ]; ifs.read( buffer, buffer_size ); result.process_bytes( buffer, ifs.gcount() ); } while ( ifs ); } else { std::cerr << "Failed to open file '" << argv[i] << "'." 
<< std::endl; } } std::cout << std::hex << std::uppercase << result.checksum() << std::endl; return EXIT_SUCCESS; } catch ( std::exception &e ) { std::cerr << "Found an exception with '" << e.what() << "'." << std::endl; return EXIT_FAILURE; } catch ( ... ) { std::cerr << "Found an unknown exception." << std::endl; return EXIT_FAILURE; } Answer: With the manifest constant being assigned to a const variable, you now have two ways of referring to the same value. In principle, this is superfluous, and should therefore be avoided. In practice, with manifest constants always being globally visible in C++, this is a rather moot point, but it can still be used for making a decision in lack of any other point. So, if there are any other considerations not immediately evident in the code, such as the need to be able to redefine the manifest constant AND at the same time the need to have the constant strongly typed, then by all means, it should be kept, and preferably documented. The try statement used as a function body looks mighty weird, which means that the average C++ programmer looking at it will go "WTF?". At the same time it does not accomplish any mighty feat that will make one add "--oh, I see why it is done that way, cool!". It is okay for a coding technique to have a certain WTF factor to it, as long as it accomplishes something neat, the neatness of which is proportional to its WTFness, so as to justify it. All that the try-statement-used-as-a-function-body accomplishes is to spare us from having to type an additional --but expected-- pair of curly brackets. Therefore, it should definitely be avoided.
{ "domain": "codereview.stackexchange", "id": 905, "tags": "c++, boost" }
Are non-zero-area pulses of electromagnetic radiation possible?
Question: This is a bit of a long-standing question / bone of contention that I've seen floating around, and which I would like to ask here in the hopes of getting some outside perspectives from a broader community. My neck of the woods centers around pulsed lasers: these can be long or short pulses, at a variety of wavelengths, but in essence we always have some light source which produces radiation that is focused down into some interaction region so that it can do things to atoms or molecules or whatnot. And, in this community, there is a bit of folklore that essentially says something like You cannot have nonzero-area pulses where a nonzero-area pulse is a pulse of radiation with electric field $\mathbf E(\mathbf r,t)$ with the property that $$ \int_{t_\mathrm{i}}^{t_\mathrm{f}} \mathbf E(\mathbf r,t)\mathrm dt \neq 0, $$ where $t_\mathrm{i}$ and $t_\mathrm{f}$ are times well before, and well after, the pulse has started and finished, respectively, and the integral is taken at some fixed position in space in your interaction region, i.e. in the radiation regime kind of far away from the charges and currents that make up the light source that produces that electric field. This principle has a pretty wide applicability, because it impacts what kinds of vector potentials are of interest: since we normally define the vector potential as $\mathbf A(\mathbf r,t) = -\int_{t_\mathrm{i}}^t \mathbf E(\mathbf r,\tau)\mathrm d\tau$, if all pulses must have zero area then you should only work with vector potentials that go to zero at $t\to\infty$. This is nice because it simplifies things, but it also restricts the kinds of pulses you might hope to shine on your target, if you want to do something like attosecond streaking or whatnot. Unfortunately, however, this bit of folklore is very rarely backed up with any kind of solid justification (even in e.g. course notes from researchers that I hold in very high regard), and as I will argue below it is on very shaky ground.
I would like some help establishing whether it is true or not. On the justifications front, a friend recently found a resource that offers some foundations for this principle, in the book chapter Clifford R. Pollock. Ultrafast optical pulses, chapter 4 in Progress in Optics, E. Wolf, ed. (Elsevier, 2008), p. 221. from which the relevant passage runs as follows: 2.4 The "zero area" pulse An interesting feature of a travelling electromagnetic wave is that it must have equal amounts of positive and negative field, meaning the shortest optical pulse that could be generated would be one full cycle. It is not possible to generate and transmit a unipolar electromagnetic pulse that consists of only 1/2 cycle. A plot of both cases is shown in fig. 3. This can be easily understood from the Fourier analysis of the propagating wave. The Fourier spectrum of a propagating unipolar pulse (right-hand side of fig. 3) will have a DC term. To launch a unipolar pulse, it would be necessary to transmit a transverse DC potential with the rest of the waves. But it is impossible to establish a transverse DC potential in space, thus it is impossible to create such a pulse. Because no propagating wave can have a DC component, pulses must have equal amounts of positive and negative field. Propagating fields will always have an oscillatory structure with at least one cycle. Another way to look at this is to recognize that all physically realizable pulses must have an average area of zero - i.e. there is just as much negative as positive field in a total optical pulse. This is, then, a formal appearance of the principle in the literature - and in a resource that otherwise looks pretty solid, too -, but I find it deeply problematic. 
My problems are most easily condensed around the phrase it would be necessary to transmit a transverse DC potential with the rest of the waves because there is no such thing as a 'transverse DC potential', since DC fields don't have a propagation and the notion of 'transverse' is therefore meaningless. More generally, I'm really troubled by the mixture of a Fourier decomposition (which looks at the whole experiment globally, i.e. you need to treat the entire time interval as a unified whole for the analysis to make sense) with talk of 'launching' those Fourier components. More practically, though, I find this very weird because it is possible to exhibit radiative solutions of the Maxwell equations that include zero-area pulses, at least in the plane-wave regime. To show a specific example, consider the fields \begin{align} \mathbf E(\mathbf r,t) & = E_0 \hat{\mathbf y} f(|x|-ct) \\ \mathbf B(\mathbf r,t) & = \operatorname{sgn}(x)B_0 \hat{\mathbf z} f(|x|-ct), \end{align} where $B_0 = E_0/c$, driven by a sheet of current in the $y,z$ plane with uniform surface current density $$ \mathbf K(\mathbf r,t) = -\frac{2B_0}{\mu_0}\hat{\mathbf y} f(-ct). \qquad\ $$ These are full solutions of the Maxwell equations, including the boundary terms, irrespective of what form $f$ has, so long as it is differentiable. To disprove the zero-area principle, then, one can simply take the waveform to be any nonzero-area pulse, which can be e.g. $$ f(x) = \operatorname{sech}^2(x/\sigma), \qquad\quad $$ among a myriad of other examples. As you can see, then, there is some significant tension here. Are CR Pollock's arguments simply not applicable to the example above? Or is Pollock's argument just completely flawed and the zero-area principle is not true at all? 
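As a quick numerical sanity check on the claim that this waveform has nonzero area (this check is my own addition, not part of the original post), one can integrate $f(x)=\operatorname{sech}^2(x/\sigma)$ and compare against the closed form $\int_{-\infty}^{\infty}\operatorname{sech}^2(x/\sigma)\,\mathrm dx = 2\sigma$:

```python
import math

def pulse_area(sigma, half_width=50.0, n=200_000):
    """Trapezoidal estimate of the area of sech^2(x/sigma) on [-L, L].

    The tails decay exponentially, so L = half_width * sigma is effectively
    infinite; trapezoidal quadrature converges very fast for such profiles.
    """
    L = half_width * sigma
    h = 2 * L / n
    total = 0.0
    for i in range(n + 1):
        x = -L + i * h
        f = 1.0 / math.cosh(x / sigma) ** 2  # sech^2(x/sigma)
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * f * h
    return total

sigma = 1.3
print(pulse_area(sigma), 2 * sigma)  # the two agree closely; the area 2*sigma is nonzero
```

So the waveform's area is $2\sigma \neq 0$, which is exactly what makes it a candidate counterexample in the plane-wave setting.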
On the other hand, though, the passage above does suggest that my plane-wave counter-example is not quite strong enough, because it requires infinite energy and an extended source, so really one should find examples of pulses with finite energy and localized fields, and find explicit sources that will produce them (to really defuse the "you can't 'launch' the DC component of those fields" argument). And, on the opposite end of the spectrum, the focus on the pulse area as a DC component also raises some interesting flags, because normally we like to work in the paraxial approximation to describe the focusing of a laser beam, but those DC components are very much not describable by those dynamics. Indeed, in the paraxial approximation, the Gouy phase shift of $\pi$ over the focus suggests that if you manage to make a cosine-like pulse before the focus, then it will become a minus-cosine-like pulse after the focus, and that is some pretty odd behaviour when seen through the lens of a DC-like field. A bit more on the practical and experimental side, we're getting close to the point where we can reliably and consistently produce and measure pulses that look like part (c) of this figure (from this paper), where you really start to concentrate the energy in the central half-cycle of the pulse, tilting the pulse area towards a positive balance. Now, you can argue that the area of the central pulse must always be balanced to zero by all the little negative bits in the leading and trailing edges of the pulse, but frankly I find that perspective on the argument to look very weak. Which brings me, then, to the formal statement of my question. I've ranted a bit to show the tensions between multiple different statements, facts, and perspectives, and the resolution doesn't look that easy to me. 
The way I see it, what's required is either a counterexample showing pulses with nonzero pulse area, in the radiatively-far regime, with finite energy in the fields, and with the sources shown explicitly; or an actual solid proof of the zero-area principle, with none of the fuzzy language in Pollock's passage, which explicitly delimits its validity and its hypotheses, and where the proof and its validity limits clearly show why it is not applicable to the plane-wave example I posted above. On the other hand, this isn't quite rocket science, either, so hopefully there are already better explorations of this topic in the literature. Has anyone seen such a text? Or if not, can someone show conclusively, as above that the principle is true or false? Answer: This is indeed a somewhat tricky question. It is related to the issue of laser vacuum acceleration of charges, which is generally believed to be impossible - a claim that goes under the name of Lawson-Woodward theorem. There has been a bit of a discussion a while ago in PRL, see the paper Coherent Electron Acceleration by Subcycle Laser Pulses. B. Rau, T. Tajima, and H. Hajo, Phys. Rev. Lett. 78, 3310 (1997) who claim to have found unipolar sub-cycle pulses that can be used for electron acceleration. This has been criticised in an interesting comment on that paper at Comment on “Coherent Acceleration by Subcycle Laser Pulses”. Kwang-Je Kim, Kirk T. McDonald, Gennady V. Stupakov, and Max S. Zolotorev, Phys. Rev. Lett. 84, 3210 (2000), which I consider to be both relevant and correct. It is certainly true that unipolar, pulsed plane wave solutions to the 3d wave equation exist mathematically (see e.g. OP), but they do not have finite energy and/or cannot be generated by a source of finite extent as Kim et al. point out (with a reference to Feynman). Put differently, this implies that the question indeed cannot be answered in a physically sensible way without considering the generation of the pulse, i.e. 
its source together with the realisation of the latter.
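One compact way to restate what is at stake (my own summary, not part of the quoted answer): in the radiation zone, where $\mathbf E = -\partial\mathbf A/\partial t$, the pulse area is just the net change of the vector potential, $$ \int_{t_\mathrm{i}}^{t_\mathrm{f}} \mathbf E(\mathbf r,t)\,\mathrm dt = -\int_{t_\mathrm{i}}^{t_\mathrm{f}} \frac{\partial \mathbf A}{\partial t}\,\mathrm dt = \mathbf A(\mathbf r,t_\mathrm{i}) - \mathbf A(\mathbf r,t_\mathrm{f}), $$ so a nonzero-area pulse is precisely one that leaves the vector potential permanently displaced from its initial value, which is the form in which the question above phrases the restriction.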
{ "domain": "physics.stackexchange", "id": 39963, "tags": "electromagnetism, waves, electromagnetic-radiation, laser" }
EM field in a vacuum in terms of potentials
Question: I know we can express the electric field $\mathbf{E}$ and the magnetic field $\mathbf{B}$ in terms of the electric potential $\phi$ and vector potential $\mathbf{A}$: $$ \mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t}, $$ $$ \mathbf{B} = \nabla \times \mathbf{A}.$$ I have been reading Gerry and Knight's book on quantum optics and in chapter two they quantise the EM field by starting with Maxwell's theory. They start in the vacuum free of sources of currents and state that $$ \mathbf{E} = - \frac{\partial \mathbf{A}}{\partial t}, $$ $$ \mathbf{B} = \nabla \times \mathbf{A}.$$ What happened to the $-\nabla \phi$ contribution for $\mathbf{E}$? Answer: It is called the Gibbs gauge condition (or sometimes Hamiltonian gauge, or temporal gauge in textbooks). J.W. Gibbs, Velocity of Propagation of Electrostatic Forces, Nature 53, 509 (1896) In electromagnetism, there is a gauge freedom such that transforming the potential fields as $$ \phi \rightarrow \phi - \frac{1}{c} \frac{\partial \lambda}{\partial t} \\ \mathbf{A} \rightarrow \mathbf{A} + \nabla \lambda $$ will not change Maxwell's equations for an arbitrary scalar function $\lambda$. Gibbs gauge is simply $$ \tag{Gibbs} \phi = 0 $$ exactly as you stated in the question. Nothing more. For some other cases, if you set a $\lambda$ such that it gives $$ \tag{Coulomb} \nabla \cdot \mathbf{A} =0 $$ then it is called Coulomb gauge, if $$ \tag{Lorenz} \nabla \cdot \mathbf{A} + \frac{1}{c} \frac{\partial \phi}{\partial t} = 0 $$ then it is called Lorenz gauge. There are several gauges that are used in the literature. It actually depends on your problem, just like choosing a coordinate system in order to solve a problem more easily. Since you have a source-free problem, it is better to get rid of the scalar potential and only deal with the vectorial potential. EDIT: As I stated in the comments below, the Gibbs gauge provides $\nabla \cdot \mathbf{A} =0$ far from charges.
However, in the problem above, there are no charges so you have the gauge condition $\nabla \cdot \mathbf{A} =0$ everywhere just like Coulomb gauge. So, Gibbs gauge implies Coulomb gauge in the source-free case.
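To spell out the one-line derivation behind this (my own addition, using the answer's conventions): starting from arbitrary potentials $(\phi, \mathbf A)$, choose the gauge function $\lambda(\mathbf r,t) = c\int_{t_0}^{t} \phi(\mathbf r,t')\,\mathrm dt'$. Then $$ \phi \rightarrow \phi - \frac{1}{c}\frac{\partial\lambda}{\partial t} = \phi - \phi = 0, $$ which is precisely the Gibbs (temporal) gauge condition, and the entire field is then carried by $\mathbf A$ alone.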
{ "domain": "physics.stackexchange", "id": 87005, "tags": "electromagnetism, optics, potential, gauge-theory, gauge" }
If all matter can emit at all wavelengths, can all matter absorb at all wavelengths too?
Question: Based on Planck’s law, all matter can emit at all wavelengths, at different intensities depending on temperature. I was wondering, if this holds true, does all matter absorb at all wavelengths too, at different intensities? If the answer to this is yes, then how can we see colours (if all wavelengths are absorbed and emitted)? Answer: Planck's law assumes a theoretical blackbody object that absorbs all incoming electromagnetic radiation and emits radiation at all wavelengths, depending on its temperature. Planck's spectral distribution function describes the intensity of blackbody radiation at different wavelengths for a given temperature. Hence, blackbody radiation described by Planck's law is a theoretical construct that is often used as an idealized model, but it is not perfectly applicable in real-life situations, as most physical bodies do not behave exactly like a blackbody. Real objects emit and reflect light in much more complex ways, with the properties of the object influencing the specific wavelengths of light that are emitted or reflected. Nevertheless, this law is fundamental to statistical mechanics and a solid foundation of our modern understanding of the physical nature of thermal radiation. Planck's law by itself does not provide any means to relate the radiation to intrinsic atomic properties. We have to extend it by relating the atomic absorption and emission of radiation to the microscopic properties of atoms, through the quantum mechanics of transitions between atomic energy levels and cavity modes. Different atoms and molecules have different electronic and vibrational energy levels, which means that they can absorb and emit radiation at different specific wavelengths. The temperature of an atom or molecule also affects the wavelengths it emits.
At high temperatures, the atoms and molecules have more kinetic energy, and are more likely to make transitions to higher energy levels, resulting in the emission of shorter wavelength radiation. At lower temperatures, the atoms and molecules have less kinetic energy and are more likely to make transitions to lower energy levels, resulting in the emission of longer wavelength radiation. For example, hydrogen gas emits light primarily at a wavelength of 656 nm (in the red region of the spectrum) when it makes transitions from the n=3 energy level to the n=2 energy level. However, at a high enough temperature, hydrogen atoms may be excited to higher energy levels and will emit light at a different set of wavelengths. Similarly, each element and molecule has its own unique emission spectrum based on its energy-level transitions.
{ "domain": "physics.stackexchange", "id": 93092, "tags": "electromagnetic-radiation, visible-light, thermal-radiation, absorption, photon-emission" }
Foo Bar - Power Hungry challenge test case failing on unknown edge cases
Question: The challenge is to find the maximum product of a subset of a given array. Here's the problem: Write a function solution(xs) that takes a list of integers representing the power output levels of each panel in an array, and returns the maximum product of some non-empty subset of those numbers. So for example, if an array contained panels with power output levels of [2, -3, 1, 0, -5], then the maximum product would be found by taking the subset: xs[0] = 2, xs[1] = -3, xs[4] = -5, giving the product 2*(-3)*(-5) = 30. So solution([2,-3,1,0,-5]) will be "30". Each array of solar panels contains at least 1 and no more than 50 panels, and each panel will have a power output level whose absolute value is no greater than 1000 (some panels are malfunctioning so badly that they're draining energy, but you know a trick with the panels' wave stabilizer that lets you combine two negative-output panels to produce the positive output of the multiple of their power values). My code basically removes all 0's inside the list, then I iterate over all the remaining integers in the array and put each one in their respective array (positive or negative numbers). I sort my array with negative numbers then check its length; if it's odd, then I remove the last element (which will always be the one closest to 0 since I sorted it). Finally, I multiply each number inside these two arrays and return the result (we need to return it as string). 
def solution(xs): negatives = [] positives = [] product = 1 xs = filter(lambda a: a != 0, xs) if not xs: return '0' for num in xs: if num > 0: positives.append(num) elif num < 0: negatives.append(num) if not positives and len(negatives) == 1: return '0' negatives.sort() if negatives: if len(negatives) % 2 != 0: negatives.pop() for x in positives + negatives: product *= x return str(product) I've already searched the internet for why my logic is failing the fourth test case. I've read that if there's only one negative number in the array, you should return 0, but I also read that you should return the negative value. However, before writing the part of my code that checks this possibility, the fifth test case was also failing. So you need to return 0 in this case. Here's the repl with my code, open test cases and some test cases that I've added. Clearly there's an edge case that my code is failing but I can't figure out at all. Help, please? Answer: You've got too much code. First you filter out the zeros: xs = filter(lambda a: a != 0, xs) Then you filter the positives to one list and the negatives to another: for num in xs: if num > 0: positives.append(num) elif num < 0: negatives.append(num) So why bother with the first filtering? Zeros would go into neither list in this loop; they're naturally filtered out. You sort the negatives array unnecessarily. You only need to do so if it contains an odd number of elements. And you don't need to protect the test of whether it contains an odd number of elements with a check if it contains any elements at all. No positive values and 1 negative value seems an odd special case. If you let the code progress past that point, the single odd value would be pop'd out of the negative array, leaving no positive values and no negative values, which seems a less "special" special case.
Simplified code: def solution(xs): negatives = [num for num in xs if num < 0] positives = [num for num in xs if num > 0] if len(negatives) % 2 != 0: negatives.sort() negatives.pop() if positives or negatives: product = 1 for x in positives + negatives: product *= x return str(product) return '0' This should behave the same as your original code (complete with not passing some edge cases), but should be slightly faster. Do you really need to sort the negatives array? That is an \$O(N \log N)\$ operation. You just need to remove the smallest magnitude number from the array, and finding and removing that can be done in \$O(N)\$ time. Edge Cases What is the maximum product of a non-empty subset of [-4]? I've read that if there's only one negative number in the array, you should return 0, but I also read that you should return the negative value. Seems unreasonable to return zero in this case, because the only non empty subset is [-4], and the maximum product is -4. But what about [-4, 0]? Both 0 > -4 and -4 * 0 > -4, so the maximum product is no longer -4. Modification of the code left to student.
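The \$O(N)\$ variant hinted at above can be sketched as follows (my own illustrative code, not the reviewer's; it mirrors the simplified code's behaviour, including the edge-case caveats). The key observation is that the smallest-magnitude negative number is simply the maximum of the negatives list, so no sort is needed:

```python
def solution(xs):
    negatives = [num for num in xs if num < 0]
    positives = [num for num in xs if num > 0]

    if len(negatives) % 2 != 0:
        # Drop the negative closest to zero: an O(N) max() instead of a sort.
        negatives.remove(max(negatives))

    if not positives and not negatives:
        # Only zeros remained (or a lone negative that was dropped); the
        # convention used by the original code is to return '0' here.
        return '0'

    product = 1
    for x in positives + negatives:
        product *= x
    return str(product)

print(solution([2, -3, 1, 0, -5]))  # "30"
```

`list.remove` scans for the first occurrence, so both the `max` and the removal are linear passes, replacing the \$O(N \log N)\$ sort-and-pop.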
{ "domain": "codereview.stackexchange", "id": 37442, "tags": "python, python-2.x" }
Why don't we avoid self-interaction terms when we measure energy of a continuous charge distribution as we do for point charge distribution?
Question: When we calculate the electrostatic potential energy for discrete point charges, we make sure, while adding up the potential energies of the individual pairs, that we never pair a charge with itself, by requiring $i \neq j$. But when we measure the potential energy of a continuous charge distribution, we don't avoid self-interaction (at least in the formula). Why? Answer: Consider what an integral is. One way of describing an integral is to take some level of resolution, say $r$, chop the region over which the integral is being calculated into subregions with side lengths less than or equal to $r$, calculate the values for each subregion, and then add them up. Then take the limit as $r$ goes to zero. If we have a resolution of $r$, then the proportion of points that are within $r$ of each other will go as $r^3$, while the potential energy due to any particular pair will go as $1/r$. Thus the total energy of pairs within $r$ of each other will go as $r^2$. So when we let $r$ go to zero, the self-interaction energy goes to zero as well. For point particles, on the other hand, once $r$ gets to be smaller than the smallest separation between the points, the proportion of points that are within $r$ of each other is constant, leaving the $1/r$ to go to infinity as $r$ goes to zero. Thus, we need to worry about self-interaction for point charges because their self-interaction term diverges, but we don't need to worry about it for continuous distributions because it converges to zero as our resolution goes to zero.
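The same point can be written down explicitly (my own addition): the continuous self-energy $$ U = \frac{1}{2}\int\!\!\int \frac{\rho(\mathbf r)\,\rho(\mathbf r')}{4\pi\varepsilon_0\,|\mathbf r-\mathbf r'|}\,\mathrm d^3 r\,\mathrm d^3 r' $$ formally includes the diagonal $\mathbf r = \mathbf r'$, but for a bounded charge density $\rho$ the pairs with $|\mathbf r - \mathbf r'| \le r$ contribute an amount that scales as $r^2$, exactly as in the counting argument above, so excluding or including a small neighbourhood of the diagonal makes no difference in the limit $r \to 0$.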
{ "domain": "physics.stackexchange", "id": 55698, "tags": "electrostatics, charge, potential-energy" }
In vitro enzyme production
Question: I need to express a protein in vitro but I don't know where to start. I will likely do a T7 transcription protocol but for translation I am not sure what to do. Are there any good kits? Answer: New England Biolabs, which has an excellent reputation for well-tested, well-documented, and robust products, has a line of protein expression and purification products that I'd definitely take a look at. I've used the pMAL system for expression of fusion proteins in E. coli, and the PURExpress system looks just like what you're looking for. I have a personal (but not financial) connection with them, and I can vouch for their quality and technical support. Pierce/Thermo is generally a good place to look when dealing with protein stuff. I've had good results with lots of their stuff, but I haven't done in vitro expression in ages, so I can't remember what I used. (BTW, I have no connection with them, I just tend to like their products.) As Alan Boyd mentioned, lots of companies like Life Technologies, Promega, Qiagen, and Sigma have in vitro expression kits. Which one you pick will ultimately be influenced by what exactly you need your protein to do, the desired scale of expression, the type of tag(s) you want to put on it, what equipment and expertise you have in your lab, and how much time and money you want to invest.
{ "domain": "biology.stackexchange", "id": 1046, "tags": "molecular-biology, protocol, translation" }
Color vectors for antiquarks
Question: For a single quark, the colour vectors are given by $$r=\begin{pmatrix} 1 \\ 0 \\0 \end{pmatrix} \qquad g=\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \qquad b=\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$ and the $F_3$ and $F_8$ colour charge operators are $$F_3 = \frac{1}{2} \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \\ F_8= \frac{1}{2\sqrt{3}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}.$$ The color charges $F_3$ and $F_8$ for the color vectors are given by this table: For example, acting $F_3$ on vectors $r, g ,b$ give $$F_3r=\frac{1}{2} r, \quad F_3 g = -\frac{1}{2} g, \quad F_3 b = 0$$ My question is what would be the colour vectors $\bar{r} , \bar{g}, \bar{b}$? From the above table, I can see that we need to have $$F_3 \bar{r} = - \frac{1}{2} \bar{r}, \ \ F_3 \bar{g} = \frac{1}{2} \bar{g} \ \ F_3 \bar{b} = 0.$$ This gives me $$\bar{r} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \qquad \bar{g} = \begin{pmatrix} 1 \\ 0 \\0\end{pmatrix} \qquad \bar{b} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$ But these are wrong since the $F_8$ charges are not satisfied: $$F_8 \bar{r} = \frac{1}{2\sqrt{3}} \bar{r} \neq -\frac{1}{2\sqrt{3}} \bar{r}$$ $$F_8 \bar{g} = \frac{1}{2\sqrt{3}} \bar{g} \neq -\frac{1}{2\sqrt{3}} \bar{g}$$ $$F_8 \bar{b} = -\frac{1}{\sqrt{3}} \bar{b} \neq \frac{1}{\sqrt{3}} \bar{b}$$ What am I doing wrong? Answer: From the above table, I can see that we need to have $F_3 \bar{r} = - \frac{1}{2} \bar{r}, \ \ F_3 \bar{g} = \frac{1}{2} \bar{g} \ \ F_3 \bar{b} = 0.$ ... What am I doing wrong? No. That's what you are doing wrong. Color transformations act on the transposed antiquarks from the right, and complex conjugated; since these Hermitian color generators are real, $$ q\to e^{i(\theta_3 F_3+\phi_8 F_8)}q,\\ \bar{q}\to \bar{q} e^{-i(\theta_3 F_3+\phi_8 F_8)} .$$ Plugging in this "minus transpose" feature, you obtain the table values.
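The "minus transpose" rule is easy to check numerically. A small NumPy sketch (an illustration, not part of the original answer): measuring the antiquark charges with $\bar F_a = -F_a^{T}$, the same column vectors reproduce the table values for $\bar r$, $\bar g$, $\bar b$:

```python
import numpy as np

F3 = 0.5 * np.diag([1.0, -1.0, 0.0])
F8 = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))

# Conjugate (antiquark) representation: Fbar = -F^T.
# (F3 and F8 are real and diagonal, so here Fbar is simply -F.)
F3bar, F8bar = -F3.T, -F8.T

# Antiquark colour vectors: same column vectors, charges measured with Fbar.
rbar = np.array([1.0, 0.0, 0.0])
gbar = np.array([0.0, 1.0, 0.0])
bbar = np.array([0.0, 0.0, 1.0])

assert np.isclose(rbar @ F3bar @ rbar, -0.5)                      # F3 charge of rbar
assert np.isclose(gbar @ F3bar @ gbar, 0.5)                       # F3 charge of gbar
assert np.isclose(rbar @ F8bar @ rbar, -1.0 / (2.0 * np.sqrt(3.0)))
assert np.isclose(bbar @ F8bar @ bbar, 1.0 / np.sqrt(3.0))        # F8 charge of bbar
```

The point is that the column vectors themselves were never the problem; the question's contradiction came from acting with the quark generators instead of their conjugate-representation counterparts.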
{ "domain": "physics.stackexchange", "id": 96232, "tags": "particle-physics, standard-model, antimatter, quarks, color-charge" }
Collision of a ball with wedge
Question: Question:- A ball of mass $m$ with velocity $u_{0}$ collides perpendicularly with a smooth wedge of mass $M$. Taking the coefficient of restitution as $e$, find the velocity gained by the wedge after the collision. Let the velocity of the ball after the collision be $v_{2}$ (perpendicular to the wedge) and the velocity of the wedge after the collision be $v_{1}$. Applying conservation of momentum along the $x$-axis: $mu_{0}\sin\alpha=Mv_{1}-mv_{2}\sin\alpha$. If we apply conservation of momentum along the $y$-axis we get $u_{0}=v_{2}$, which would imply an elastic collision, and that is wrong. I have learnt that we can neglect gravitational forces during a collision due to the small impact time ($\Delta T\to 0$), which leads to no change in momentum since $\Delta P=F \Delta T$. So what goes wrong here in this case? Thank you in advance! Answer: Yes, it is true that for calculating the speed of the individual elements of the system moments after the collision the gravitational forces can be neglected, since gravity did not have time to have an effect. But the normal force from the ground cannot be neglected: its effect is much more appreciable. When the ball hits the incline, the incline gets a velocity component along the $y$-axis, but it is immediately decelerated to zero. Thus the force must be so large that we cannot ignore its effect: $$\frac{dp}{dt}=F,\qquad dt\approx0 \implies F \to \infty$$ Therefore, even for that small interval of time, the force changes the momentum of the system significantly, and thus momentum is not conserved along the $y$-axis. In simple terms, the normal force is more of an external force than gravity is, moments after the collision.
{ "domain": "physics.stackexchange", "id": 76895, "tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics" }
How is $\sum_i\langle i|M|i\rangle$ correlated to $\mathrm{tr}(M)$?
Question: In the book Quantum computation and quantum information, it says to evaluate $tr(A|\psi\rangle\langle\psi|)$ using the Gram-Schmidt procedure to extend $|\psi\rangle$ to an orthonormal basis $|i\rangle$ which includes $|\psi\rangle$ as the first element. Then: $$tr(A|\psi\rangle\langle\psi|)=\sum_i\langle i|A|\psi\rangle\langle\psi|i\rangle\tag{2.60}$$ $$=\langle\psi|A|\psi\rangle\tag{2.61}$$ I understood equation 2.61, which uses the special basis $|i\rangle$ described. But in equation 2.60, how is $\sum_i\langle i|M|i\rangle$ related to $tr(M)$? Can you help me with a more detailed description of it? Answer: Thanks for the comments; $tr(M)$ is exactly $\sum_i\langle i|M|i\rangle$, as pointed out. In any orthonormal basis $|i\rangle$, the diagonal matrix elements of $M$ are $M_{ii}=\langle i|M|i\rangle$, so the trace — the sum of the diagonal elements — is $tr(M)=\sum_i M_{ii}=\sum_i\langle i|M|i\rangle$, and this value is independent of which orthonormal basis you choose. I just discovered the same question answered with a proof in the physics stackexchange: https://physics.stackexchange.com/a/104155/273977
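Both facts — that $\sum_i\langle i|M|i\rangle$ is just the sum of the diagonal elements of $M$ in any orthonormal basis, and that $tr(A|\psi\rangle\langle\psi|)=\langle\psi|A|\psi\rangle$ — are easy to verify numerically. A small NumPy sketch (an illustration, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)  # normalize |psi>

# tr(A|psi><psi|) equals <psi|A|psi>  (equations 2.60-2.61)
M = A @ np.outer(psi, psi.conj())
assert np.isclose(np.trace(M), psi.conj() @ A @ psi)

# The trace is basis-independent: conjugating by any unitary Q
# (i.e. re-expressing M in another orthonormal basis) leaves it unchanged.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
assert np.isclose(np.trace(Q.conj().T @ M @ Q), np.trace(M))
```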
{ "domain": "quantumcomputing.stackexchange", "id": 2374, "tags": "mathematics, textbook-and-exercises, linear-algebra" }
If the entire potential energy of the charge is lost within the circuit, what happens to the charge?
Question: This is a really confusing question for me. Suppose there is an electric circuit, and right near the negative terminal of the battery I place 3 bulbs, which would take almost all the energy of the electrons (energised charge). Now, if the charge loses all of its energy, will it not stop? But if it stops, it is still under the influence of the electric field, so it should move towards the positive terminal, and because it is in the field, it still has some potential energy. So my question: If what I just said is right, will the charge not have infinite energy until it moves towards the positive terminal (because of the electric force)? But if this is true, shouldn't infinitely many bulbs light up in the circuit? If I am wrong, then what happens to the charge when it loses its energy? What happens to the electric field? Answer: The motion of the charges is totally unlike you setting off from home, walking 10 km and feeling a little tired and slowing down, then walking another 10 km and feeling even more tired and slowing down, and then walking a final 10 km and arriving home at a very slow pace, indeed just reaching your front door and stopping. Here is a simple model of what happens. The battery sets up an electric field inside the wires. The mobile charge carriers, the free electrons which are responsible for electrical conduction, are accelerated by this electric field and gain kinetic energy. Whilst in a bulb, any one of the three, the free electrons collide with the bound ions and give those bound ions some kinetic energy. Then, before another collision with a bound ion, the free electron gains some more kinetic energy from the electric field to then give some to the next bound ion it meets. The net effect of all this is that the free electrons have an average velocity (drift velocity) and the bound ions vibrate more; their "temperature" increases. The battery is the source of the energy which ultimately resides with the bound ions.
The free electrons are just the mechanism by which the energy is transferred. Even if a free electron lost all of its kinetic energy, it could then take more from the electric field, so it will never be marooned.
{ "domain": "physics.stackexchange", "id": 28298, "tags": "electrostatics" }
Lysis step during DNA purification
Question: I am currently studying DNA extraction from various bio-originated samples. I am new to this field, and have learned how commercial DNA extraction kits work for bacteria. I understand that the very first step is called 'lysis', and that this is supposed to break cells to pull out all the stuff inside them, including DNA. I am studying cell-free DNA extraction procedure from human blood, and I don't understand why the lysis step is necessary in this context. If lysis is to break cells, and cell-free DNA is already out of the cell and freely circulating, such a step seems unnecessary. Is my understanding about lysis right? Then, why is the lysis step necessary during cfDNA isolation? Answer: You want to isolate the cfDNA, and leave anything else in the plasma out of your analysis — essentially, the lysis step is part of the purification. (See here) (I'll elaborate with edits shortly) Edit: Actually, it looks like lysis is not a good idea in cfDNA isolation — lysing cells in your sample will result in contamination by genomic DNA, as discussed here. It would be better to stabilize/fix cells to prevent them from lysing, to avoid this risk. In other words, your initial instincts were correct!
{ "domain": "biology.stackexchange", "id": 6619, "tags": "dna, dna-isolation" }
In which representation are monopoles of grand unifying theories classified?
Question: In the context of grand unification theories A.Zee's book states that $SU(5)$ (or $SO(10)$ if $SU(5)$ is considered as outdated as GUT candidate) as GUT and as spontaneously broken non-abelian gauge theory contains the monopole. I am wondering in which representation of $SU(5)$ it should be classified, may be as a singlet? Or is this question not meaningful since the monopole is perhaps a soliton? So if it were a soliton, how would it be classified? Answer: Monopoles are somewhat subtle, and there are different layers towards their classification, each being more correct than the previous one. Quick remark: what is usually called the $\text{SO}(10)$ model should more properly be called the $\text{Spin}(10)$ model, because it contains spinors. One has $\pi_1\text{Spin}(n)=0$ and $\text{Spin}(2n)^\vee=\text{PSO}(2n)$ and $\text{Spin}(2n+1)^\vee=\text{PSp}(2n)$. Also, one should point out that the correct GUT embedding reads $\text{SU}(3)\times \text{SU}(2)\times \text U(1)/\mathbb Z_6\hookrightarrow \text{SU}(5)$ and so, strictly speaking, the formulas below only make sense for $n=6$. The ABC of magnetic monopoles. Goddard-Nuyts-Olive: the monopoles of a gauge theory with gauge group $G$ are classified by the representations of the GNO/Langlands dual group $G^\vee$ (cf. Ref. 1). For example, the dual of $\text{SU}(N)$ is $\text{PSU}(N)$, whose representations are basically the adjoint and its tensor powers. Similarly, the dual of $\text{SO}(2n)$ is $\text{SO}(2n)$ itself, and that of $\text{SO}(2n+1)$ is $\text{Sp}(2n)$. The representations of these are all well-known. Lubkin: the GNO monopoles are typically unstable unless there is some topological charge that protects them. This topological charge takes values in $\pi_1G$ (cf. Ref. 2), and so one may have non-trivial monopoles if and only if $G$ is not simply-connected. For $\text{SU}(N)$ one has $\pi_1=0$ and so there are no monopoles. 
On the other hand, $\pi_1 \text{SO}(n)=\mathbb Z_2$, so here one does expect a (unique) non-trivial monopole. The SSB monopole: if the theory undergoes a Higgs mechanism $G\to H$ one has slightly more freedom in making stable monopoles (cf. Refs. 3,4). In particular, these are classified not by $\pi_1G$ or $\pi_1H$ but by $\pi_2(G/H)=\operatorname{ker}(\pi_1 H\to\pi_1 G)$. For example, if $G$ is simply-connected one has $\operatorname{ker}(\pi_1 H\to\pi_1 G)=\pi_1H$ and so the classification reduces to that of Lubkin. Indeed, if $$ \text{SU}(5)\to \text{SU}(3)\times \text{SU}(2)\times \text U(1)/\mathbb Z_n,\qquad n\in\{1,2,3,6\} $$ one has a $$ \operatorname{ker}(\mathbb Z\times\mathbb Z_n\to 0)=\mathbb Z\times\mathbb Z_n $$ classification. Similarly, for an $\text{SO}(10)$ model one has a $$ \operatorname{ker}(\mathbb Z\times\mathbb Z_n\to \mathbb Z_2)=\mathbb Z\times\mathbb Z_{\lceil n/2\rceil} $$ classification (this last equality is an educated guess; ask your friend the mathematician to be sure). References. Gauge theories and magnetic charge, P.Goddard, J.Nuyts, D.Olive. Geometric definition of gauge invariance, Elihu Lubkin. Magnetic monopoles in unified gauge theories, G.'t Hooft. Particle Spectrum in the Quantum Field Theory, Alexander M. Polyakov.
{ "domain": "physics.stackexchange", "id": 62469, "tags": "field-theory, group-representations, magnetic-monopoles, solitons, grand-unification" }
Intuition behind the construction of an ansatz circuit
Question: I'm learning about the VQE algorithm. When I looked at the declaration in Qiskit I saw you need to pass an ansatz which prepares the state. I looked at some commonly used ansatz functions, e.g. EfficientSU2 of Qiskit, and I saw many of them use $R_y$ and $X$ gates. I was wondering why exactly this structure was chosen. What's the physical logic behind it that makes it so convenient to use for many different Hamiltonians? How can the structure of the Hamiltonian affect the construction of the ansatz? Thank you Answer: Interesting question! An ansatz circuit is a parameterized circuit, say $V(\theta)$ where $\theta$ is a set of parameters, used to prepare a trial state for your problem: $$ |\Psi(\theta)\rangle = V(\theta)|0\rangle $$ In a variational algorithm, such as VQE, the trial state encodes your solution and is iteratively updated until some termination criterion is met. $$ |\Psi(\theta_0)\rangle \rightarrow |\Psi(\theta_1)\rangle \rightarrow \dots \rightarrow |\Psi(\theta_n)\rangle $$ Therefore the first question you must ask when looking for an ansatz is: Can the trial state prepared by my ansatz circuit encode my solution? For example: Does your solution contain complex amplitudes? If yes, you need a circuit that contains complex amplitudes (such as EfficientSU2). If no, you could use one that has only real amplitudes (such as RealAmplitudes). Apart from that, I think we can distinguish two different categories of ansatz circuits: physically motivated ones and heuristic ones. Physically motivated ansatz circuits are based on some knowledge of the problem we want to solve. For example the UCCSD ansatz prepares a state where tuning the parameters turns excitations on and off. A potential drawback here is that the circuits can get massive! Go ahead and check out the size of a UCCSD ansatz. For the order of 10 parameters your circuit can already have 1000s of gates. 
That's not in reach of today's hardware and cannot be run meaningfully on an actual quantum computer. Heuristically motivated ansatz circuits are essentially circuits that we tested and that turned out to work well. An interesting class is hardware-efficient circuits (usually circuits with 1- and 2-qubit gates) which we can implement efficiently on hardware. EfficientSU2 also falls into this category. Then there are mixtures between these categories. For instance, Qiskit's ExcitationPreserving circuit prepares a trial wave function that preserves the particle number if you solve a molecular ground-state calculation and use a Jordan-Wigner mapping to get the qubit operator. This notebook, among other things, discusses this topic.
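To make "parameterized trial state" concrete, here is a deliberately tiny toy sketch in plain NumPy (not Qiskit, and with the classical optimizer replaced by a crude parameter sweep): a single $R_y$ rotation prepares a real-amplitude trial state $|\Psi(\theta)\rangle=R_y(\theta)|0\rangle$, whose energy under a toy Pauli-$Z$ Hamiltonian is then minimized over $\theta$:

```python
import numpy as np

def trial_state(theta):
    # |psi(theta)> = Ry(theta)|0> = (cos(theta/2), sin(theta/2))
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

H = np.diag([1.0, -1.0])  # Pauli-Z as a toy one-qubit Hamiltonian

def energy(theta):
    psi = trial_state(theta)
    return float(psi @ H @ psi)  # <psi|H|psi>, amplitudes are real here

# Stand-in for the classical optimization loop: a parameter sweep.
thetas = np.linspace(0.0, 2.0 * np.pi, 1001)
best = min(thetas, key=energy)
assert abs(energy(best) - (-1.0)) < 1e-6  # ground-state energy of Z is -1
```

A real VQE replaces the sweep with a gradient-based or gradient-free optimizer and evaluates $\langle\Psi(\theta)|H|\Psi(\theta)\rangle$ from measured expectation values, but the structure — ansatz in, energy out, parameters updated — is the same.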
{ "domain": "quantumcomputing.stackexchange", "id": 2015, "tags": "qiskit, programming, hamiltonian-simulation, vqe" }
Question Regarding the conservation of momentum in an inelastic collision of two rods
Question: I am tasked with solving this question but am facing some difficulty with intuition. Consider this system: The empty circle signifies a nail that is stuck in the wall. I am unsure if there is conservation of angular momentum around the nail. On the one hand, it seems the nail's force is parallel to the force that will cause rotational motion after the collision of the bodies. On the other hand, if we consider the CoM, which would be located between the nail and the bottom rod, the force acting on it seems to create torque, seeing as $r$ has a component which is perpendicular to the force. Am I missing something? Where should my intuition come from? (Side note: I believe that the angular momentum around the CoM is NOT conserved, but am not quite sure, so any input on that would be welcome.) Answer: In an isolated system, angular momentum is always conserved. The nail provides a connection between your system of two rods and the outside world. So your system of two rods is not isolated. You can't guarantee conservation of energy (you said already that the collision is not elastic). The nail will exert an external force on the system of two rods, so you should not expect conservation of linear momentum. And you have already intuited that the nail will exert a torque on the system, so you can't rely on conservation of angular momentum, either. However, the total angular momentum in a system depends on your choice of origin. (An exercise that you should do if you haven't: prove to yourself that an isolated object moving in a straight line at constant speed has constant angular momentum, by choosing some centers of rotation both on and off of its line of motion.) The same holds for an external torque: the magnitude of the torque $\vec \tau = \vec r \times \vec F$ depends on the displacement $\vec r$ between the point of application of the external force $\vec F$ and your arbitrary choice of a center of rotation. 
If you can find a point where the torque $\vec\tau$ from the nail is guaranteed to be zero, the angular momentum in that frame will be the same before and after the collision. You have correctly identified this privileged center of rotation as the nail, which sets $\vec r = 0$.
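The suggested exercise takes one line. For a free particle, $\vec r(t)=\vec r_0+\vec v t$, so about any fixed origin:

```latex
\vec L \;=\; m\,\vec r(t)\times\vec v
       \;=\; m\,(\vec r_0+\vec v t)\times\vec v
       \;=\; m\,\vec r_0\times\vec v \;=\; \text{const},
```

since $\vec v\times\vec v=0$. The value of $\vec L$ depends on the choice of origin through $\vec r_0$ (it vanishes for origins on the line of motion), but for every choice it is constant in time.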
{ "domain": "physics.stackexchange", "id": 75363, "tags": "homework-and-exercises, newtonian-mechanics, angular-momentum, collision, rigid-body-dynamics" }
Finding electron configuration by following the Aufbau principle
Question: In the question, "Electron Configuration of Tellurium", there is mention of the 'follow yellow brick road' method of finding electron configuration. What I'd learned in the past was to find the nearest noble gas (in the row above the element of interest) and count from there. I searched in Google for this method and see that it also appears to be called the Aufbau principle & diagonal method. However I'm not getting a good grasp on how it works. Could someone explain this method to me? Also, would this method work for something with a weird electron configuration, like copper? Answer: The so-called rule is to fill the subshells in increasing order of $n+l$. For subshells with the same value of $n+l$, the subshell with the lowest $n$ is filled first. $$\begin{array}{cc} \hline n+l & \text{Orbitals} \\ \hline 1 & \mathrm{1s} \\ 2 & \mathrm{2s} \\ 3 & \mathrm{2p,3s} \\ 4 & \mathrm{3p,4s} \\ 5 & \mathrm{3d,4p,5s} \\ \vdots & \vdots \\ \hline \end{array}$$ For all intents and purposes, this is exactly equivalent to your heuristic of "go from the nearest noble gas", since to get the configuration of the noble gas itself you have to go through this same ordering of subshells. You can easily verify that the table above reproduces the order of filling of the Periodic Table. Chromium and copper are exceptions to the rule (obviously there are many more).
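The $n+l$ ordering can be generated mechanically. A short Python sketch (illustrative, using the standard $\mathrm{s,p,d,f}$ subshell letters): list all $(n,l)$ pairs and sort by $n+l$, breaking ties by lower $n$:

```python
letters = "spdf"  # subshell letters for l = 0, 1, 2, 3

# All subshells with n up to 5 (l < n, and l <= 3 so the letters suffice)
subshells = [(n, l) for n in range(1, 6) for l in range(0, min(n, 4))]

# The Aufbau rule: increasing n + l, ties broken by lower n
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

print(" ".join(f"{n}{letters[l]}" for n, l in order))
# -> 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 4f 5d 5f
```

The output through $n+l=5$ matches the table in the answer; as noted there, real configurations such as chromium and copper deviate from this ideal ordering.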
{ "domain": "chemistry.stackexchange", "id": 6922, "tags": "electronic-configuration, periodic-trends" }
Tools to extract information
Question: I have a set of technical sentences extracted from a few research papers. e.g., Via analyzing the peer feedback content, it was found that the feedback provided by the mixed mode group was of better quality than that provided by the "peer comments" group; that is, the former provided more detailed feedback to individuals than the latter. I want to extract the entities in the sentences and the relationships between the identified entities (e.g., SOV relationships). What tools can I use to extract them efficiently? Answer: Open Calais is a free-to-use tool for entity recognition and relationship mapping. It's from Thomson Reuters, so it may not be wholly suitable for technical language, but it's worth a try. It has Python bindings.
{ "domain": "datascience.stackexchange", "id": 1808, "tags": "machine-learning, data-mining, nlp, stanford-nlp" }