| anchor | positive | source |
|---|---|---|
Identify My Spider | Question:
I live in Littleton Colorado and caught this spider because I haven't seen it before. Any idea what it is?
Answer: That's a male Dysdera crocata, a.k.a. the "Woodlouse Hunter". They can be a little aggressive and might try to bite you if handled, but the bite is no more harmful than a common bee sting (even less so, actually, since it won't trigger an allergic reaction). | {
"domain": "biology.stackexchange",
"id": 9797,
"tags": "species-identification, arachnology"
} |
How to publish transform from odom to base_link? | Question:
I am running a p3at in Gazebo with ROS plugins. I need to subscribe to the odometry pose and broadcast the updated tf between odom and base_link. How do I do it? Should I define an "odom" link in the .xacro file used for simulation? What about the joint type?
Originally posted by balakumar-s on ROS Answers with karma: 137 on 2013-11-22
Post score: 1
Original comments
Comment by MartinW on 2013-11-22:
To my knowledge you don't. As long as you have those frames defined in your urdf/xacro, you shouldn't have to broadcast this information again. It would be redundant if you did!
Answer:
The base_link frame is the root of your robot kinematic tree. Given a particular kinematic model and a particular configuration of your robot you can determine the relative position of all your robot bodies.
When your robot is made of solid bodies linked together by joints whose parameters are accurately known, this problem is quite trivial. In particular, it is solved by the robot_state_publisher node. However, when it comes to the "map" and "odom" frames, the problem is totally different. The meaning of these two frames is defined in REP 105: they give the position of the robot in the "world" frame.
Here you have two cases:
your robot position is fixed all the time and you can use (for instance) a TF static_transform_publisher to stream this relative position.
your robot position evolves over time and you will need to run a localisation component which will estimate your robot position (using the robot commands and the robot sensors, for instance). See the robot_pose_ekf node from the navigation stack. Other options include directly using the ground truth coming from your simulator, or using an external system such as motion capture to localise your robot. Regarding embedded approaches, you can use SLAM, integrate your command and fuse it (or not) with the IMU information, etc.
You usually have two "levels" of localisation:
a local method which drifts (integrating the command for instance)
a global one which tries to estimate the robot position from exterior information such as the environment (SLAM algorithms, GPS may enter this category).
The local method will define the odom frame (this frame drifts, but the position never jumps from one value to another, which is good when you need a reference value for your controller), whereas the map frame should be accurate at any time but may jump from one position to another, which can produce very high apparent velocities if you try to close the loop on this data.
One should always have: map -> odom -> base_link
If you don't have components to estimate all the frames, it is ok to set some of these relative transforms to the identity. At this point the best strategy really depends on your robot and your application.
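If you do end up bridging an odometry topic to tf yourself (for instance, when the simulator plugin does not publish the transform), the heart of the callback is converting the pose into a translation plus a yaw quaternion. A minimal sketch of just that conversion in plain Python; the frame semantics follow REP 105, and the actual broadcast call is omitted:

```python
import math

def odom_pose_to_tf(x, y, yaw):
    """Turn a planar odometry pose into the (translation, quaternion)
    pair a tf broadcaster would send for odom -> base_link."""
    translation = (x, y, 0.0)
    # Pure yaw rotation about the z axis, as an (x, y, z, w) quaternion.
    rotation = (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))
    return translation, rotation

# Robot 1 m ahead of the odom origin, turned 90 degrees left:
t, q = odom_pose_to_tf(1.0, 0.0, math.pi / 2.0)
print(t, q)  # translation (1.0, 0.0, 0.0), quaternion roughly (0, 0, 0.707, 0.707)
```

In a real node this pair would be handed to a tf TransformBroadcaster inside the odometry subscriber callback.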
To conclude, tf is not a tool which only broadcasts frame positions estimated from the robot model and its configuration; that is only one possibility offered by ROS. It makes sense for most robots to estimate the relative positions of the robot bodies this way, but it makes no sense for the other frames. Therefore, adding odom or map to your robot model is a bad idea unless it is a fixed-base robot. Even in that case, IMHO it is better to use a static_transform_publisher, as you may at some point want to relocate your robot w.r.t. your world frame. Finally, if you use a model from a third-party ROS package, editing its xacro to fit your own needs is bad practice.
Originally posted by Thomas with karma: 4478 on 2013-11-24
This answer was ACCEPTED on the original site
Post score: 14
Original comments
Comment by balakumar-s on 2013-11-25:
Thank you! I solved the odom to base_link tf by setting publishTF to TRUE in the skid-steer ROS Gazebo plugin. I then published map to odom using a tf publisher. | {
"domain": "robotics.stackexchange",
"id": 16246,
"tags": "ros, navigation, odometry, xacro, map-to-odom"
} |
ethzasl_ptam installation | Question:
Hi,
I am new to the ROS domain and am trying to install ethzasl_ptam with Fuerte. Any help please. OS: Ubuntu 10.04, Lenovo i7 core, 64-bit.
Originally posted by houssamou on ROS Answers with karma: 16 on 2013-04-10
Post score: 0
Answer:
I think the problem is with the Fuerte version or something like that; rosdep never works for me:
ERROR: Rosdep cannot find all required resources to answer your query
Missing resource ptam
ROS path [0]=/opt/ros/fuerte/share/ros
ROS path [1]=/opt/ros/fuerte/share
ROS path [2]=/opt/ros/fuerte/stacks
Originally posted by houssamou with karma: 16 on 2013-04-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13764,
"tags": "ros"
} |
Phonebook implementation in C | Question: I have implemented a phonebook in C using a sorted singly linked list data structure. This is my first moderately large project.
I need some reviews so that I can improve my coding standards. The program consists of 3 files:
header.h: custom header
/*
* File : header.h
* Custom Header Definition
*/
#include<stdio.h>
#include<string.h>
#include<stdlib.h>
#include<ctype.h>
#ifdef __linux__
#define CLEAR_SCREEN system("clear")
#elif _WIN32
#define CLEAR_SCREEN system("cls")
#endif
#define MAX_NAME 26
#define MAX_NO 11
typedef struct phonebook {
char * name ;
char * no ;
struct phonebook * next ;
} phonebook ;
/* Root Node */
extern phonebook * head ;
/* Temporary Storage */
extern char temp_name[] ;
extern char temp_no[] ;
/* Serial no while printing */
extern int START ;
/* Gets Input values From User , Returns 1 if it Fails */
extern int get_details() ;
/* Flushing stdin */
extern void input_flush() ;
/* basic verbs */
extern void add_contact ( ) ;
extern void print_book () ;
extern void search_contact ( ) ;
extern void delete_contact ( ) ;
extern void format_book ( struct phonebook ** ) ;
/* File Operations */
extern void file_read() ;
extern void file_write() ;
el.c: execution logic
#include "header.h"
char temp_name [ MAX_NAME ] ;
char temp_no [ MAX_NO ] ;
int START = 1 ;
phonebook * head = NULL ;
int run ()
{
char ch ;
char c ; /* flush */
int m =0 ;
CLEAR_SCREEN ;
printf ("\n\n\t PHONEBOOK MANAGER \n") ;
printf ("\n\n\n 1 . %-12s " , "Add Contact") ;
printf ("\n\n 2 . %-12s " , "Multiple") ;
printf ("\n\n 3 . %-12s " , "Search") ;
printf ("\n\n 4 . %-12s " , "Delete") ;
printf ("\n\n 5 . %-12s " , "Format") ;
printf ("\n\n 6 . %-12s " , "Print All") ;
printf ("{ Choice : }\b\b\b") ;
ch = getchar() ;
input_flush() ;
CLEAR_SCREEN ;
switch ( ch )
{
case '1':
if ( ! get_details() ) add_contact();
break ;
case '2':
printf (" \n How many Contacts ? : ") ;
if ( scanf ("%d" , &m ) !=1 )
break ;
else
input_flush() ;
while ( m>0 )
{
if ( ! get_details() ) add_contact() ;
m-- ;
}
break ;
case '3':
printf (" \n Enter Part/Full Contact Name/Number : ") ;
gets(temp_name ) ;
search_contact() ;
break ;
case '4':
printf (" \n Enter Full Contact Name : ") ;
gets(temp_name ) ;
delete_contact() ;
break ;
case '5':
printf (" \n Entire Data Will Be Lost ! Continue ?? { Y/other } : ") ;
ch = getchar () ; getchar() ;
if ( ch == 'Y' || ch == 'y' )
{
format_book( &head ) ;
printf ("\n Successful ! ") ;
}
else
printf ("\n Aborted ! ") ;
break ;
case '6':
print_book() ; break ;
default :
printf ("\n\n\nThank You ! \n") ;
return 0;
}
input_flush() ;
return 1 ;
}
int main ()
{
file_read() ;
while ( run () ) ;
file_write() ;
return 0 ;
}
bl.c: contains the implementations of the subroutines
#include"header.h"
/* helping subroutines */
void new_node ( phonebook ** ) ;
void fill_details () ;
void print_record ( phonebook * ) ;
void header () ;
void footer () ;
int duplicate_check() ;
int error_check () ;
void split (char * ) ;
void lowercase (char * ) ;
void input_flush() ;
void add_contact ( )
{
/* Traversing */
phonebook *temp = head ;
/* Storing Temporary Address */
phonebook * address = NULL ;
/* Adding First Contact */
if ( temp == NULL )
{
new_node ( &head ) ;
fill_details ( head ) ;
return ;
}
/* Modifying Root Node */
if ( strcmp ( temp->name, temp_name ) >0 )
{
new_node ( &head ) ;
fill_details ( head ) ;
head->next = temp ;
return ;
}
/* Adding Upcoming */
while ( temp->next !=NULL )
{
if ( strcmp( temp->next->name , temp_name ) < 0 )
temp=temp->next ;
else
break ;
}
/* Contact to add in the middle of two nodes */
if ( temp->next != NULL )
{
address = temp->next ;
new_node ( &temp->next ) ;
fill_details ( temp->next ) ;
temp->next->next = address ;
return ;
}
/* Adding contact at the end ( appending node at the end ) */
else
{
new_node ( &temp->next ) ;
fill_details ( temp->next ) ;
temp->next->next = NULL ;
return ;
}
}
void search_contact ()
{
phonebook * temp = head ;
/* How many contacts matched */
int cnt =0 ;
header () ;
START = 1 ;
if ( head ==NULL)
{
footer () ;
printf("\n\n Phone Book is Empty ..! ") ;
return ;
}
/* String to be searched is read to ' temp_name' */
while ( temp!= NULL )
{
if ( strstr (temp->name , temp_name ) !=NULL || strstr(temp->no , temp_name )!=NULL )
{
print_record( temp ) ; /* Detail Matched */
cnt++ ;
}
temp=temp->next ;
}
footer () ;
printf ("\n\n %d Contact%s have been Matched ! \n " , cnt , cnt==1?"":"s" ) ;
}
void delete_contact( )
{
/* Traversing */
phonebook * temp = head ;
/* Storing Temporary Address */
phonebook * address = NULL ;
if ( head ==NULL)
{
printf("\n\n Phone Book is Empty ..! ") ;
return ;
}
if ( strcmp( head->name , temp_name ) == 0 )
{
address = head ->next ;
free( head ) ;
head = address ;
printf("\n\n Contact deleted successfully...!! ") ;
return ;
}
while ( temp->next !=NULL )
{
if ( strcmp (temp->next->name , temp_name ) == 0 )
{
address = temp ->next->next ;
free ( temp->next ) ;
printf("\n\n Contact deleted successfully...!! ") ;
temp->next = address ;
return ;
}
else
temp=temp->next ;
}
printf ("\n\n Contact Does not Exist ! \n ") ;
}
void print_book ( )
{
phonebook *temp = head ;
/* Serial no is reset to 1 */
START = 1 ;
printf ("\n Complete List \n") ;
header () ;
if ( head ==NULL)
{
footer () ;
printf("\n\n Phone Book is Empty ..! ") ;
return ;
}
while ( temp!= NULL)
{
print_record( temp ) ;
temp = temp->next ;
}
footer () ;
}
void file_read ()
{
char data[44] ;
int i=-1 ;
char ch ;
FILE * fp1 ;
fp1 = fopen ("contacts.txt" , "r") ;
while ( ( ch = getc (fp1) ) != EOF )
{
if ( ch == '\n')
{
data[++i] ='\0', split (data) ;
add_contact() ;
i = -1 ;
}
else
{
data[++i] = ch ;
}
}
fclose( fp1 ) ;
remove ( "contacts.txt") ;
}
/* Separate Name and No. from "Data" store them in temp_name , temp_no */
void split (char * str )
{
int i=-1 ;
while ( *str!='=' )
{
temp_name[++i] = *str ;
str++ ;
}
temp_name[++i] = '\0' ;
str++ ;
i=-1 ;
while ( temp_no[++i] = *str )
str++ ;
}
void file_write ()
{
phonebook * temp = head ;
char data[40] ;
FILE * fp1 ;
fp1 = fopen ("contacts.txt" , "w") ;
while ( temp != NULL )
{
strcpy ( data , temp->name ) ;
strcat ( data , "=") ;
strcat ( data , temp->no ) ;
fprintf ( fp1 , "%s\n" , data) ;
temp=temp->next ;
}
fclose ( fp1 ) ;
}
void format_book ( phonebook ** temp )
{
if ( *temp == NULL )
return ;
format_book ( &( (*temp)->next ) ) ;
free ( *temp ) ;
*temp=NULL ;
}
int get_details ()
{
int response = 0 ;
printf("\n\n Enter Contact Name : " ) ;
gets(temp_name) ;
printf("\n Enter Mobile Number : " ) ;
gets(temp_no) ;
return error_check() ;
}
void new_node ( phonebook ** temp )
{
*temp = ( phonebook * ) malloc ( sizeof ( head ) ) ;
(*temp)->name = NULL ;
(*temp)->no = NULL ;
(*temp)->next = NULL ;
return ;
}
void fill_details ( phonebook * temp )
{
temp->name = ( char* ) malloc ( sizeof (char) * MAX_NAME ) ;
temp->no = ( char* ) malloc ( sizeof (char) * MAX_NO ) ;
strcpy ( temp->name , temp_name ) ;
strcpy ( temp->no , temp_no ) ;
return ;
}
void input_flush()
{
int c ;
while((c = getchar()) != '\n' && c != EOF) ;
}
void header ()
{
printf("\n %-50s" , "------------------------------------------------------- " ) ;
printf("\n %-8s %-29s %-11s" , "SL.NO" ,"CONTACT NAME " , "MOBILE NUMBER " ) ;
printf("\n %-50s" , "------------------------------------------------------- " ) ;
}
void footer ()
{
printf("\n %-50s" , "------------------------------------------------------- " ) ;
}
void print_record ( phonebook * temp )
{
printf ("\n %2d %-29s %-11s" , START , temp->name , temp->no ) ;
START++ ;
}
/* Arbitrary for strlwr */
void lowercase ( char * temp )
{
while ( *temp )
{
if (*temp>='A' && *temp<='Z' )
*temp= *temp+32 ;
temp++ ;
}
}
int error_check ( )
{
char * name = temp_name ;
char * no = temp_no ;
if ( strlen ( temp_name ) > MAX_NAME || strlen( temp_name ) <1 )
{
printf("\n\n Failed !\n Length exceeded/ No input for name { max 25 } \n") ;
return 1;
}
for ( ; *name ; name ++ )
if ( !isalpha (*name ) && *name != ' ' )
{
printf( "\n\n Failed !\n Invalid characters used for name { only alphabet , space allowed } \n" ) ;
return 1;
}
if ( strlen ( temp_no)!=10 )
{
printf("\n\n Failed !\n Invalid ten digit number { Ten Digits Please }.\n") ;
return 1;
}
if ( *temp_no <'7' )
{
printf("\n\n Failed !\n Currently No Numbers Exist on \'%c\' Series .\n" , *temp_no ) ;
return 1;
}
for ( ; *no ; no++ )
if ( ! isdigit (*no ) )
{
printf( "\n\n Failed !\n Invalid characters used for number { only digits allowed } \n" ) ;
return 1;
}
lowercase ( temp_name ) ;
if ( !duplicate_check() )
{
printf ("\n\n Successful ! \n ") ;
return 0;
}
}
int duplicate_check( )
{
phonebook * temp = head ;
while (temp!= NULL )
{
if ( strcmp( temp->name , temp_name )==0 )
{
printf ("\n\n Failed\n Contact Exists with Same Name ! ") ;
return 1 ;
}
if ( strcmp( temp->no , temp_no )==0 )
{
printf ("\n\n Failed\n Number Associated with \" %s \". " , temp->name ) ;
return 1 ;
}
temp=temp->next ;
}
return 0 ;
}
Answer: I see a number of things that could help you improve your code.
Comment unusual code
The code currently contains this loop:
while ( temp_no[++i] = *str )
str++ ;
This code might do what you intend, but at first glance, it looks like it might be an error because it's = instead of ==. It's probably a good idea to add a comment there explaining what's being done and why.
Don't use gets
Using gets is not good practice because it can lead to buffer overruns. It has been removed from the C11 standard and marked "obsolete" in POSIX 2008. Use fgets instead, which requires the specification of a buffer size.
Avoid the use of global variables
It's generally better to explicitly pass variables your function will need rather than using the vague implicit linkage of a global variable. The temp_name and temp_no variables in particular, are especially worthy of being eliminated. By their names, they should likely have been local to particular functions that need them rather than global.
Eliminate unused variables
This code declares a variable response within get_details() but then does nothing with it. The same is true of c within run(). Your compiler is smart enough to help you find this kind of problem if you know how to ask it to do so.
Use const where practical
The current print_record() routine does not (and should not) modify the passed phonebook struct, and so it should be declared const:
void print_record(const phonebook *);
Don't use system("cls")
I usually give two reasons not to use system("cls") or system("pause"). The first is that it is not portable to other operating systems, but I see that you have attempted to address that one. However, the second is more important: it's a security hole, which you absolutely must care about. Specifically, if some program is defined and named cls or pause, your program will execute that program instead of what you intend, and that other program could be anything. Rewrite the function to do what you want using C and avoiding system. For example, if your terminal supports ANSI Escape sequences, you could use this:
void cls()
{
printf("\x1b[2J");
}
Don't put code in headers
The current CLEAR_SCREEN macro exists solely in a header, but that's not the right place. Headers should only contain declarations of functions, variables, and structures, not code. In this case, it's a simple matter to move that macro to the el.c file, which is the only place it's used.
Make sure all paths return a value
The error_check() routine checks for various errors and returns a value, but there is a control path that falls off the bottom of the function without returning anything (when duplicate_check() finds a duplicate). The function should return a value no matter which control path is executed.
Use consistent formatting
The code as posted has inconsistent indentation which makes it hard to read and understand. Pick a style and apply it consistently.
Be careful with signed versus unsigned numbers
If you only want to allow positive numbers for the number of contacts in run(), which makes sense in this context, that should be declared as unsigned rather than int. Similarly, you can use %u in printf and scanf statements rather than %d.
Choose better names
The variable m within run() is used as the number of contacts. Nothing about the name m suggests that meaning; it would be better to call it contact_count or similar. Also the header really should be named something better than header.h.
Check your return values
The code in file_read() calls fopen and then immediately does a getc. What if either of those calls fails? The code doesn't currently check the return value, which could indicate a failure. Better would be to check the return value and do the appropriate thing -- possibly by returning early. Similarly, calls to malloc can fail and that situation should be handled gracefully.
Avoid being overly restrictive of inputs
The current code requires exactly ten digits for a mobile phone number and only allows numbers starting with the digit '7', '8', or '9'. This makes the program very specific and unusable in most of the world. Rather than rejecting other phone numbers, it might be better to accept them. If you need a message, make it a warning rather than an error.
Think of the user
If I decide to put in the number of my friend Jürgen, I find that even though I get no error message, his number is not put into the phonebook. You should investigate why that is and fix it. | {
"domain": "codereview.stackexchange",
"id": 17763,
"tags": "c"
} |
Are the physical processes behind electrical resistance and electrical heat generation different? | Question: I asked a similar question in electronics stackexchange and was told to come here.
I have been told seemingly conflicting information about electrical resistance and heat generation and am looking for guidance.
I have been told that:
An electrical resistor works by applying friction to the electrons passing through it and radiating the removed energy as heat, so the higher the resistance of the resistor, the more heat is generated.
The more resistance a resistor has, the less current will flow through a circuit. Imagine a simple circuit with a 9v battery and a single resistor, the higher the resistance of that resistor, the less current will flow through that circuit and the longer that battery will last.
So in one case more friction equals less current and in the other case, more friction equals more heat.
Something about this seems contradictory to me.
I imagine two circuits. Each with a resistor and a 9v battery.
Circuit #2 has a resistor with 2x the resistance of Circuit #1
I imagine that Circuit #2 will NOT generate 2x as much heat as Circuit #1.
If that is the case, the stronger resistor is slowing the flow of the current without converting the difference entirely into heat.
If that is the case, then the physical process behind electrical resistance must be different than the physical process behind electrical heat generation.
What am I missing here?
Thank you for your time!
Answer:
An electrical resistor works by applying friction to the electrons passing through it and radiating the removed energy as heat, so the higher the resistance of the resistor, the more heat is generated.
This is essentially true. The power loss through a resistor (that is, the rate at which thermal energy is generated) can be written $P = I^2 R$, where $I$ is the current through the resistor and $R$ is the resistance. If the current through the resistor is held fixed, then the power loss is proportional to the resistance. However, if by adding more resistance you also change the current, then this will no longer be true.
The more resistance a resistor has, the less current will flow through a circuit. Imagine a simple circuit with a 9v battery and a single resistor, the higher the resistance of that resistor, the less current will flow through that circuit and the longer that battery will last.
This is also true. The current being drawn from the battery in this case will be $I = \frac{V}{R}$, where $V$ is 9 volts. Increasing the resistance while holding $V$ fixed will decrease the amount of current being drawn from the battery, which will prolong its life.
In your example, you have a constant 9 volt battery and two circuits with resistors $R_1 < R_2$. The current flowing through the first circuit is $I_1 = V/R_1$, and the current flowing through the second circuit is $I_2 = V/R_2$. Therefore, the power loss through the first circuit is
$$P_1 = I_1^2 R_1 = \left(\frac{V}{R_1}\right)^2 R_1 = \frac{V^2}{R_1}$$
and similarly $P_2 = V^2 / R_2$. Since $R_1 < R_2$, it follows that $P_1 > P_2$, meaning that the first circuit (with a smaller resistor) generates more heat.
This isn't a contradiction. Remember that $P=I^2 R$; the second circuit has a higher $R$ but a smaller $I$, and the latter has a larger effect on the power loss. As a result, in this particular example increasing the resistance has the effect of reducing the rate at which heat is generated.
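To make the comparison concrete, take the 9 V battery with (hypothetical, chosen for illustration) resistances R1 = 100 Ω and R2 = 200 Ω:

```python
V = 9.0                  # battery voltage, volts
R1, R2 = 100.0, 200.0    # circuit #2 has twice the resistance

P1 = V ** 2 / R1         # heat generated in circuit #1, watts
P2 = V ** 2 / R2         # heat generated in circuit #2, watts

print(P1, P2)            # 0.81 W versus 0.405 W: doubling R halves the heat
```

Doubling the resistance halves the current, so the I²R product drops even though R itself went up.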
To answer your question about the mechanism being the same, consider what happens when you rub your hands together for warmth. If you press your hands together with more force, then the frictional force (which is responsible for the heat generation) will go up. As a result, as long as you keep rubbing your hands together with the same speed, the amount of heat generated will also increase.
However, if by increasing the frictional force you also decrease the speed with which you rub your hands together, then the net effect might be a reduction in generated heat rather than an increase. This is what happens in your example of the two simple circuits. | {
"domain": "physics.stackexchange",
"id": 74013,
"tags": "electricity, electric-circuits, electric-current, electrical-resistance"
} |
Optional argument for which None is a valid value | Question: I have the following function:
def item_by_index(iterable, index, default=Ellipsis):
"""Return the item of <iterable> with the given index.
Can be used even if <iterable> is not indexable.
If <default> is given, it is returned if the iterator is exhausted before
the given index is reached."""
iterable_slice = itertools.islice(iterable, index, None)
if default is Ellipsis:
return next(iterable_slice)
return next(iterable_slice, default)
It should (and does) throw an exception if the default is not given and the iterator is exhausted too soon. This is very similar to what the built-in function next does.
Usually, one would probably use None as default value to make an argument optional. In this case, however, None is a valid value for default and should be treated differently from no argument. This is why I used Ellipsis as a default value. It sort of feels like an abuse of the Ellipsis object, though.
Is there a better way of doing this or is Ellipsis a good tool for the job?
Answer: I would argue the best option is to make an explicit sentinel value for this task. The best option in a case like this is a simple object() - it will only compare equal to itself (aka, x == y only when x is y).
NO_DEFAULT = object()
def item_by_index(iterable, index, default=NO_DEFAULT):
...
This has some semantic meaning, and means there is no potential overlap between a value a user wants to give and the default (I imagine it's highly unlikely they'd want to use Ellipsis, but it's better to make as few assumptions as possible about use cases when writing functions). I'd also recommend documenting and exposing the sentinel, since that allows the user to supply a default (or not) through a variable, rather than having to change the call:
try:
default = get_some_value()
except SomeException:
default = yourmodule.NO_DEFAULT
item_by_index(some_iterable, index, default)
As opposed to having to do:
try:
default = get_some_value()
except SomeException:
item_by_index(some_iterable, index)
else:
item_by_index(some_iterable, index, default)
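Putting it together, a runnable sketch of the sentinel version of the question's function:

```python
import itertools

NO_DEFAULT = object()  # module-level sentinel; only compares equal to itself

def item_by_index(iterable, index, default=NO_DEFAULT):
    """Return the item of `iterable` at `index`.  If `default` was
    supplied (even None), return it when the iterable is too short."""
    it = itertools.islice(iterable, index, None)
    if default is NO_DEFAULT:
        return next(it)        # raises StopIteration when exhausted
    return next(it, default)

print(item_by_index("abc", 1))        # b
print(item_by_index("abc", 5, None))  # None, a perfectly valid default here
```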
The only downside is that it's not particularly clear what it is if you are looking at the object itself. You could create a custom class with a suitable __repr__()/__str__() and use an instance of it here.
As of Python 3.4, enums are available. The default type of enum only compares equal on identity, so a simple one with only the one value would also be appropriate (and is essentially just an easy way to do the above):
from enum import Enum
class Default(Enum):
no_default = 0
Thanks to metaclass magic, Default.no_default is now an instance of Default, which will only compare equal with itself (not with 0; by default, enum members don't compare equal to their values). | {
"domain": "codereview.stackexchange",
"id": 3826,
"tags": "python"
} |
Box Sliding Down an Inclined Plane - Underlying Assumption | Question: The classic introductory mechanics problem considers the motion of a box sliding down an inclined plane. As I'm reviewing the early chapters in Taylor's Classical Mechanics, I was struck by a question: in such a problem, we always say that the box has no motion in the y-direction (as is standard for such problems our coordinate system has axes parallel and perpendicular to the plane, the x- and y-axis, respectively) and therefore we can equate the normal force with the perpendicular component of gravity so that we can proceed with the analysis. For instance, in solving the problem, Taylor waves his hand, saying
Since the block does not jump off the incline, we know there is no motion in the y-direction[...]
What allows us, a priori, to say that the box will stay on the plane. Is it simply common sense, or is there some other way of showing it that is more rigorous?
Answer: The forces on the box in such questions are gravity, friction, and the reaction of the plane. The reaction of the plane is the key. Gravity has a component in the y direction, and would cause the object to fall through the plane if not for the reaction.
The plane is a rigid object. That means when a force tries to deform it, it pushes back with just enough force to prevent the deformation. An object sitting on it will not penetrate the surface. On the other hand, the plane will not push back strongly enough to lift the object off the surface.
This doesn't explain the reaction force. It is more or less a restatement of the fact that the plane is rigid. And that is as far as this kind of problem usually takes it.
To explain the normal force you need to explain atomic bonds. They have a more or less fixed length. They can stretch, but less than the separation between atoms. So you can model them as rigid links, or perhaps very stiff springs, and leave it at that.
Or you can look at the orbitals of molecules and calculate the configuration that results in the lowest energy. In spirit, this is like calculating orbitals of an atom. But instead of one nucleus, there are two for a diatomic molecule. Or a periodic array of them for a crystal.
There is a lowest energy at a particular distance. If the nuclei get too close, potential energy increases because they repel each other. If they are too far apart, energy increases because the electrons are not close enough to both nuclei. | {
"domain": "physics.stackexchange",
"id": 66842,
"tags": "classical-mechanics"
} |
What is full AM used for? | Question: What are some applications where full AM is used nowadays? Is it used at all considering the fact that it is less power-efficient than, say, AM with suppressed carrier?
Answer: AM is still used worldwide in civil aviation, where there is a large installed base of radios (in small planes, etc.) that cannot be easily updated (due to multiple national and international regulatory hurdles), and where you want one transmission to be able to talk over another signal on the same channel in an emergency.
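On the power-efficiency point raised in the question: for single-tone full AM with modulation index m, only the sidebands carry information, so the useful fraction of transmitted power is m²/(2 + m²), at most one third, reached at m = 1. A quick check:

```python
def am_efficiency(m):
    """Sideband (information-carrying) power fraction for single-tone
    full AM with modulation index m (0 < m <= 1)."""
    return m ** 2 / (2 + m ** 2)

print(am_efficiency(1.0))  # 0.3333...: two thirds of the power goes into the carrier
```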
AM radio receivers are simple and cheap using primitive technology (1 discrete diode, etc.), and there are still a few parts of the world where people can barely afford an old one of those. | {
"domain": "dsp.stackexchange",
"id": 5080,
"tags": "modulation, analog, signal-power"
} |
What is inverse depth (in odometry) and why would I use it? | Question: Reading some papers about visual odometry, many use inverse depth. Is it only the mathematical inverse of the depth (meaning 1/d) or does it represent something else. And what are the advantages of using it?
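For intuition before the answer below: inverse depth is literally the parameterisation ρ = 1/d, and a toy sketch shows why it tames far-away features:

```python
import math

def inverse_depth(d):
    """Parameterise a feature by rho = 1/d instead of its depth d;
    points at infinity map to a well-behaved rho = 0."""
    return 0.0 if math.isinf(d) else 1.0 / d

# A nearby feature, a distant building, and a cloud "at infinity":
for d in (2.0, 500.0, math.inf):
    print(d, "->", inverse_depth(d))
```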
Answer: Features like the sun, clouds, and other things that are very far off would have a distance estimate of infinity. This can cause a lot of problems. To get around this, the inverse of the distance is estimated instead: all of the infinities become zeros, which tend to cause fewer problems. | {
"domain": "robotics.stackexchange",
"id": 1586,
"tags": "slam, computer-vision, odometry"
} |
Is this reaction with H+ and OH− a possible reaction? | Question: I am wondering about the reaction $\ce{H+ + O^2- -> OH-}$. Is it possible?
I think this is not a possible reaction, because here a strong acid plus another particle reacts to a strong base. If it is possible, what would this mean for $\ce{O^2-}$ in the acid-base system? Would it be a very, very strong base?
Answer: You can try this in the lab: Take some $\ce{CaO}$, add $\ce{HCl}$ and see what happens. Actually, no don’t. Don’t try this unless you know exactly what you are doing.
A single proton does not exist on its own, especially not in aqueous solution. Using $\ce{H3O+}$ as the proton-donating species is a much better way to formulate reactions. A professor once said that the smallest actually proven proton equivalent is $\ce{H9O4+}$, i.e. four water molecules that shift the additional proton between them, but since it was first-year, he didn't go into details. Also in organic or other solutions, protons will always be bound to something, however weakly.
Now that I got that out of the way, let’s take a look at the $\ce{H3O+}$ particle I just postulated. It has three hydrogens connected to an oxygen and in a very, very, very broad way of arguing, every hydrogen can be considered a potentially acidic proton. Therefore, we can think of hydronium as a triprotic acid with the following conjugate bases:
$$\ce{H3O+ ->[-H+] H2O ->[-H+] HO- ->[-H+] O^2-}$$
Aha! Oxide ions can be considered as fully deprotonated water. Of course, the entire reaction series is reversible, so add ‘protons’ to oxide ions (whatever the source) and find that they get protonated to hydroxide and finally to water. You knew of the second step (hydroxide to water). And here’s the big surprise: The first one is identical, only the equilibrium is much, much further on the hydroxide’s side.
Adding excess acid to oxides would just create water. However, the acid-base neutralisation reaction is exothermic. Very exothermic. Which is precisely why I wrote do not try this unless you know exactly what you are doing at the top. If you have ever neutralised semi-concentrated hydrochloric acid and sodium hydroxide, you know how much heat can be created. Think more than twice that.
In fact, water has what is termed a levelling effect: water itself is too strong an acid to allow oxide bases to exist in its vicinity; so $\ce{O^2-}$ cannot be dissolved as is, it will immediately get protonated to hydroxide. Find some $\ce{CaO}$, dissolve it in water, and you would immediately get $\ce{Ca(OH)2}$. Oxides are only able to exist in aqueous solutions if they are either complexed entirely by cations ($\ce{Al}$ can form oxido-hydroxido multi-core complexes at pH 7 which include internal oxide ions) or if they are in a bound state, so not technically oxides, such as in the vanadyl cation $\ce{(VO)^2+}$.
Finally, you stated something in your question saying that a strong acid would react with something to a strong base, which you think is impossible. That is not the case; if the product of the reaction of something with a strong acid is a strong base, then ‘something’ just needs to be an even stronger base. Which is precisely the case of oxide ions. Just to recall for a second that strong acids and even stronger bases react very, very exothermically. I feel I have not stressed it enough: Do not try this unless you know exactly what you are doing! | {
"domain": "chemistry.stackexchange",
"id": 4254,
"tags": "acid-base, reaction-mechanism"
} |
Using JS Template Literals as a simple Templating Engine | Question: There are many template engines but my question is: Why do they still exist? Can we not achieve the same job with readily available JavaScript Template Literals?
So, influenced by this blog post, I have come up with something as follows. Do you think this is reasonable in daily JavaScript?
The following code is pure JavaScript. Let's start with some proper data.
var data = { labels: ['ID','Name']
, rows : [ {"id": "1", 'name': "richard"}
, {"id": "2", 'name': "santos"}
]
};
Now let's define a tableTemplate function which takes the data and gives us an HTML table.
function tableTemplate(data){
return `
<table>
<tr> ${data.labels.map(label => html`
<th>&${label}</th>`).join("")}
</tr>${data.rows.map(row => html`
<tr>
<td>&${row.id}</td>
<td>&${row.name}</td>
</tr>`).join("")}
</table>`;
}
So what is this html` thingy and also that &$?
First of all html` is just a generic tag function that we define only once and without modifying it, we can use it for all of our template functions.
&$ on the other hand is simple. & is just a simple string character and the second one is the escaping $ character for variables in template strings such as ${x}. We can use a tag function to validate our HTML code by escaping invalid HTML characters like &, >, <, ", ' or `. The version I give below looks a little convoluted but it's really simple and very useful. It basically escapes invalid HTML characters and provides a "raw" version of the template string.
Please follow above links for further information. Here it is:
function html(lits,...vars){
var esc = { "&": "&amp;"
          , ">": "&gt;"
          , "<": "&lt;"
          , '"': "&quot;"
          , "'": "&#39;"
          , "`": "&#96;"
          };
return lits.raw
.reduce( (res,lit,i) => (res += lit.endsWith("&") ? lit.slice(0,-1) + [...vars[i].toString()].reduce((s,c) => s += esc[c] || c, "")
: lit + (vars[i] || ""))
, ""
);
}
Now, it's time to put everything together for a working sample. Let's also add <thead> and <tbody> tags and some CSS this time.
function html(lits,...vars){
var esc = { "&": "&amp;"
          , ">": "&gt;"
          , "<": "&lt;"
          , '"': "&quot;"
          , "'": "&#39;"
          , "`": "&#96;"
          };
return lits.raw
.reduce( (res,lit,i) => (res += lit.endsWith("&") ? lit.slice(0,-1) + [...vars[i].toString()].reduce((s,c) => s += esc[c] || c, "")
: lit + (vars[i] || ""))
, ""
);
}
function tableTemplate(data){
return `
<table>
<thead>
<tr>${data.labels.map(label => html`
<th>&${label}</th>`).join("")}
</tr>
</thead>
<tbody>${data.rows.map(row => html`
<tr>
<td>&${row.id}</td>
<td>&${row.name}</td>
</tr>`).join("")}
</tbody>
</table>`;
}
var data = { labels: ['ID','Name']
, rows : [ {"id": 1, 'name': "richard"}
, {"id": 2, 'name': "santos"}
, {"id": 3, 'name': "<ömer & `x`>"}
]
};
document.write(tableTemplate(data));
table {
background-color: LightPink;
border : 0.2em solid #00493E;
border-radius : 0.4em;
padding : 0.2em
}
thead {
background-color: #00493E;
color : #E6FFB6
}
tbody {
background-color: #E6FFB6;
color : #00493E
}
Obviously you can now insert <style> tag or class="whatever" attribute into your quasi-HTML template literal in the tableTemplate function just the way you would normally do in an HTML text file.
Also please note that if you console.log the resulting HTML text string, thanks to the raw conversion, it would beautifully display (according to your indenting in the tableTemplate function) just like below:
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>richard</td>
</tr>
<tr>
<td>2</td>
<td>santos</td>
</tr>
<tr>
<td>3</td>
<td>&lt;ömer &amp; &#96;x&#96;&gt;</td>
</tr>
</tbody>
</table>
Answer:
There are many template engines but my question is: Why do they still exist? Can we not achieve the same job with readily available JavaScript Template Literals?
Sure, that seems like it would work ok for smaller projects, especially if you remember to keep to the string encoding guidelines to prevent XSS attacks. From my quick glance over your function, I don't see any security issues with how you're encoding strings.
Many templating libraries were popularized before template literals existed. Had template literals existed earlier, I wouldn't be surprised if these libraries would have been made to support them from the start, though, perhaps they continue to not support them because they like keeping the HTML outside of your business logic.
I'll go ahead and point out some potential design considerations you can think about.
It's pretty easy to accidentally forget to precede an interpolated value with an "&" character, which could cause the interface to be buggy, or worse, open up XSS issues. One option would be to make string-escaping the default behavior and require putting a special character before an interpolated value to explicitly state that you trust the interpolated content, and are ok with it getting inserted into the template literal untouched.
Another option would be to have the interpolated values take object literals, which explains what kind of encoding you want to be performed. This could help make things more readable, especially if you want to support other forms of encoding in the future. For example:
html`
<div id="${{ attr: 'All attr-unsafe characters will be encoded' }}">
<p>${{ content: 'All content-unsafe characters will be encoded' }}</p>
${{ raw: 'This string will be added, unchanged' }}
</div>
`
I'm not saying you should or should not switch to one of these options, just pointing out some alternative options to consider.
One more option (this one's my favorite), is to not ever use template literals. For small projects, I like to use this very simple helper function:
function el(tagName, attrs = {}, children = []) {
const newElement = document.createElement(tagName)
for (const [key, value] of Object.entries(attrs)) {
newElement.setAttribute(key, value)
}
newElement.append(...children)
return newElement
}
This allows you to use function calls to construct your HTML content. Because it's using native browser APIs, you won't have to worry about hand-writing escaping logic either, since the browser will automatically escape your data (even still, don't do something stupid like putting untrusted data into a script tag).
function el(tagName, attrs = {}, children = []) {
const newElement = document.createElement(tagName)
for (const [key, value] of Object.entries(attrs)) {
newElement.setAttribute(key, value)
}
newElement.append(...children)
return newElement
}
const renderTable = data =>
el('table', { id: 'my-table' }, [
el('thead', {}, [
el('tr', {}, [
...data.labels.map(renderTableHeader),
]),
]),
el('tbody', {}, [
...data.rows.map(renderRow)
]),
]);
const renderTableHeader = label =>
el('th', {}, [label])
const renderRow = ({ id, name }) =>
el('tr', {}, [
el('td', {}, [id]),
el('td', {}, [name]),
])
var data = { labels: ['ID','Name']
, rows : [ {"id": 1, 'name': "richard"}
, {"id": 2, 'name': "santos"}
, {"id": 3, 'name': "<ömer & `x`>"}
]
};
document.getElementById('main')
.appendChild(renderTable(data))
#my-table {
background-color: LightPink;
border : 0.2em solid #00493E;
border-radius : 0.4em;
padding : 0.2em
}
thead {
background-color: #00493E;
color : #E6FFB6
}
tbody {
background-color: #E6FFB6;
color : #00493E
}
<div id="main"></div> | {
"domain": "codereview.stackexchange",
"id": 42496,
"tags": "javascript, template"
} |
Performance of row- vs. column-wise matrix traversal | Question: Scott Meyers describes here that traversing a symmetric matrix row-wise performs significantly better than traversing it column-wise - which is also quite counter-intuitive.
The reasoning is connected with how the CPU-caches are utilized. But I do not really understand the explanation and I would like to get it because I think it is relevant to me.
Is it possible to put it in more simple terms for somebody not holding a PhD in computer architecture and lacking experience in hardware-level programming?
Answer: In today's standard architectures, the cache uses what is called "spatial-locality". This is the intuitive idea that if you call some cell in the memory, it is likely that you will want to read cells that are "close by". Indeed, this is what happens when you read 1D arrays.
Now, consider how a matrix is represented in memory: a 2D matrix is simply encoded as a 1D array, row by row. For example, the matrix $\begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix}$ is represented as $2,3,4,5$.
When you start reading the matrix at cell $(0,0)$, the CPU automatically caches the cells that are close by, starting with the first row (and if there is enough cache, it may also go on to the next row, etc.).
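This caching behaviour can be sketched in code. The Python sketch below is my own illustration, not from the original answer; note the effect is far weaker in Python than in C, because a list of lists stores pointers rather than one contiguous block, but the two traversal orders are the point:

```python
import time

def sum_row_major(matrix):
    """Visit cells row by row: consecutive reads follow the memory layout."""
    total = 0
    for row in matrix:
        for value in row:
            total += value
    return total

def sum_col_major(matrix):
    """Visit cells column by column: each read jumps a whole row ahead."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    total = 0
    for j in range(n_cols):
        for i in range(n_rows):
            total += matrix[i][j]
    return total

n = 500
matrix = [[i * n + j for j in range(n)] for i in range(n)]

t0 = time.perf_counter()
by_rows = sum_row_major(matrix)
t1 = time.perf_counter()
by_cols = sum_col_major(matrix)
t2 = time.perf_counter()

assert by_rows == by_cols  # same answer, different access order
print(f"row-major: {t1 - t0:.4f}s  column-major: {t2 - t1:.4f}s")
```

In C, with a contiguous `double[n][n]` array, the same comparison shows a dramatic gap for large `n`, because every row-major read after a cache miss is served from the already-fetched cache line.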
If your algorithm works row-by-row, then the next call will be to an element still in this row, which is cached, so you will get a fast response. If, however, you call an element in a different row (albeit in the same column), you are more likely to get a cache miss, and you will need to fetch the correct cell from a higher memory. | {
"domain": "cs.stackexchange",
"id": 1832,
"tags": "cpu-cache, performance"
} |
What is the crystal structure of bismuth oxyhydroxyphosphate (BOHP)? | Question: The recent Chemical & Engineering News article Photocatalyst shreds drinking water contaminant PFOA report the following:
Cates and his colleagues made an accidental breakthrough while testing the photocatalyst bismuth phosphate. After finding that it could break down PFOA as effectively as a previously reported photocatalyst, gallium oxide, they tried to improve its performance by varying the pH of the reaction used to make the catalyst particles. The researchers had hoped this would change the particles’ size and shape, but the process inadvertently produced bismuth oxyhydroxyphosphate (BOHP). This new catalyst has the highest photocatalytic activity for PFOA degradation ever reported.
In lab tests under UV light, BOHP degraded all the PFOA in a 130 μM solution in about an hour, roughly 15 times faster than did bismuth phosphate or gallium oxide. BOHP performed almost as well over five repetitions of the experiment, suggesting it can be reused. To simulate PFOA-contaminated groundwater, the researchers then tested a more dilute solution of 1.2 μM PFOA accompanied by 500 μg dissolved organic carbon/L. In this case, the photocatalyst also degraded all the PFOA, though it took two hours. Strathmann calls it a “promising development” that advances the application of photocatalytic treatment of PFOA.
It also includes this SEM image of microcrystals of the "inadvertently produced bismuth oxyhydroxyphosphate (BOHP)."
The accidentally synthesized BOHP (shown) can break down the toxic industrial contaminant PFOA faster than any other photocatalyst. Credit: Environ. Sci. Technol. Lett.
A quick check with google doesn't send me to the materials Wikipedia page, and I'm wondering if that is because this is an unusual substance or the crystal structure hasn't been solved yet.
Question: What is the crystal structure of bismuth oxyhydroxyphosphate (BOHP)?
Answer:
What is the crystal structure of bismuth oxyhydroxyphosphate (BOHP)?
Not exactly sure what an answer looks like so here is everything.
First off, looking at the diffraction pattern from the paper, bismuth oxyhydroxyphosphate matches that of petitjeanite, so that is definitely the prototype structure.
Using the diffraction information from the ICDD Database (PDF 01-082-8058), we find that petitjeanite is a triclinic system of the $\mathrm P\bar 1$ space group (#2), with lattice parameters of:
$\require{mediawiki-texvc}$
$$a = 9.798~\AA;~ b = 7.250~\AA;~ c = 6.866~\AA; \qquad \alpha = 88.280^\circ;~ \beta = 115.270^\circ;~ \gamma = 110.700^\circ$$
and atomic positions of:
$$\begin{array}{c|lll}
\text{Atom}&x&y&z\\
\hline
\ce{Bi}&~~~0.52327&~~~0.41421&~~~0.7523\\
\ce{Bi}&~~~0.29838&~~~0.74044&~~~0.55549\\
\ce{Bi}&~~~0.07735&~~~0.1609&~~~0.28045\\
\ce{P}&~~~0.6952&~~~0.0172&~~~0.9215\\
\ce{P}&~~~0.0961&~~~0.372&~~~0.7862\\
\ce{O}&~~~0.3777&~~~0.5382&~~~0.4759\\
\ce{O}&~~~0.3066&~~~0.1549&~~~0.5579\\
\ce{O}&~~~0.859&~~~0.0275&~~~0.8943\\
\ce{O}&~~~0.5569&-0.2084&~~~0.8602\\
\ce{O}&~~~0.621&~~~0.1596&~~~0.7526\\
\ce{O}&~~~0.7684&~~~0.1052&~~~1.1811\\
\ce{O}&~~~0.2584&~~~0.5101&~~~0.7497\\
\ce{O}&-0.0468&~~~0.4637&~~~0.6902\\
\ce{O}&~~~0.0209&~~~0.1498&~~~0.6362\\
\ce{O}&~~~0.1695&~~~0.3727&~~~1.0506\\
\end{array}
$$
and last but not least this renders to an image of:
Purple: Bi
Orange: P
Red: O | {
"domain": "chemistry.stackexchange",
"id": 11786,
"tags": "inorganic-chemistry, crystal-structure, catalysis, nanotechnology, ceramics"
} |
What is the Technique to Find Variance of Estimation Error | Question: Given an $n$-vector $y$ (responses) and a design matrix $X$, I wish to fit them with a simple linear regression model
$$y=X\beta+e,$$ or,
$y_t = x_t'\beta_0 + e_t$
where $e\sim\mathcal{N}(0, \sigma^2I)$ and $\beta_0$ is the true parameter. Then, we have
$$y\sim\mathcal{N}(X\beta, \sigma^2I).$$
Then the maximum likelihood estimate (MLE) of $\beta$, which is the ordinary least squares estimator, is
$$\hat\beta_n=(X_n^TX_n)^{-1}X_n^TY_n.$$ How does one then calculate the expectation of the variance of the estimation error, viz. $E[(\hat{\beta_n} - \beta_0){(\hat{\beta_n} - \beta_0)}^T]$? I am struggling to understand what to substitute for $\beta_0$ in this calculation and how to find the expression. Any help will be extremely useful. Thank you.
Answer: I don't understand the subscript $n$ notation; however, in the least squares problem given by:
\begin{equation}
{\bf{y}}={\bf{H}}{\theta}+\bf{n},
\end{equation}
where ${\bf{n}}\sim\mathcal{N}(\bf{0}, \sigma^2I_N)$ is a zero mean additive white Gaussian noise and $I_N$ is the $N \times N$ identity matrix, the maximum likelihood and the least squares estimators are equivalent and are efficient estimators (i.e. unbiased and attains the Cramer Rao bound) and are given by
\begin{equation}
\widehat \theta_{ML}=\left ( \bf{H^TH} \right )^{-1} \bf{H^Ty} .
\end{equation}
Now, let's calculate the first two moments of the estimator:
\begin{equation}
E\left \{ \widehat \theta_{ML} \right \} = E\left \{ \left ( \bf{H^TH} \right )^{-1} \bf{H^Ty} \right \} = \\
\left ( \bf{H^TH} \right )^{-1} \bf{H^T} \underset{=\bf{H}\theta}{\underbrace{E\left \{ \bf{y}\right \}}} = \theta,
\end{equation}
where $\theta$ is the true parameter value.
Thus, the estimator is unbiased and its Mean Square Error(MSE) will equal its covariance.
Now for the covariance:
Remember that if $\bf{z}=\bf{Ax}$ then $cov\left ( \bf{z} \right ) = \bf{A} cov\left ( x \right ) \bf{A^T}$.
Since the mean doesn't change the covariance in our problem $\bf{y} \sim \mathcal{N}\left ( \bf{H\theta}, \sigma^2I_N \right )$, thus,
\begin{equation}
cov\left ( \widehat \theta_{ML} \right )=\left ( \bf{H^TH} \right )^{-1} \bf{H^T} cov\left ( \bf{n} \right ) \bf{H}\left ( \bf{H^TH} \right )^{-1} = \\
= \left ( \bf{H^TH} \right )^{-1} \bf{H^T} \sigma^2 \bf{I_N} \bf{H}\left ( \bf{H^TH} \right )^{-1} = \\
= \sigma^2 \left ( \bf{H^TH} \right )^{-1} \bf{H^T} \bf{H}\left ( \bf{H^TH} \right )^{-1} = \sigma^2 \left ( \bf{H^TH} \right )^{-1}.
\end{equation}
Note that $\bf{H^TH}$ is symmetric so it equals its transpose. Same is correct for its inverse.
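A quick Monte Carlo sanity check of $cov(\widehat\theta_{ML}) = \sigma^2\left(\bf{H^TH}\right)^{-1}$ can be run in a few lines. The sketch below is my own illustration in pure Python, with an arbitrary two-parameter (intercept and slope) design, not part of the original answer:

```python
import random

random.seed(0)
N, sigma, trials = 50, 2.0, 4000
theta = (1.0, 3.0)                      # true parameters (intercept, slope)
x = [i / N for i in range(N)]           # fixed design: rows of H are [1, x_i]

# (H^T H) and its 2x2 inverse, computed once since the design is fixed
s1, sx, sxx = N, sum(x), sum(v * v for v in x)
det = s1 * sxx - sx * sx
inv = [[sxx / det, -sx / det], [-sx / det, s1 / det]]
theory_var_slope = sigma**2 * inv[1][1]  # sigma^2 (H^T H)^{-1}, slope entry

slopes = []
for _ in range(trials):
    y = [theta[0] + theta[1] * v + random.gauss(0, sigma) for v in x]
    sy, sxy = sum(y), sum(v * w for v, w in zip(x, y))
    # theta_hat = (H^T H)^{-1} H^T y; keep only the slope component
    slope_hat = inv[1][0] * sy + inv[1][1] * sxy
    slopes.append(slope_hat)

mean = sum(slopes) / trials
empirical_var = sum((s - mean) ** 2 for s in slopes) / (trials - 1)
print(theory_var_slope, empirical_var)  # these should agree closely
```

The empirical mean of the slope estimates lands on the true value (unbiasedness), and the empirical variance matches $\sigma^2(\bf{H^TH})^{-1}$ up to Monte Carlo noise.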
I believe that this answers your question completely. | {
"domain": "dsp.stackexchange",
"id": 3150,
"tags": "estimation, least-squares, covariance, self-study"
} |
Densest k subgraph problem for outerplanar graphs? | Question: The densest $k$ subgraph problem aims to find a subgraph $H$ of a graph $G$ with exactly $k$ vertices that maximizes the number of edges $|E(H)|$.
Does anyone know if there exists a polynomial-time algorithm for this problem under the restriction that $G$ is outerplanar? (Note: I am specifically asking for an algorithm, not a PTAS, and I want $G$ to be outerplanar, not $b$-outerplanar for some $b > 1$).
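For concreteness, the objective being maximized can be stated as a brute-force sketch (my own illustration; it is exponential in $k$ and only meant to pin down the problem definition on tiny examples):

```python
from itertools import combinations

def densest_k_subgraph_bruteforce(vertices, edges, k):
    """Try every k-subset and count induced edges; exponential, tiny graphs only."""
    best_set, best_edges = None, -1
    for subset in combinations(vertices, k):
        s = set(subset)
        induced = sum(1 for u, v in edges if u in s and v in s)
        if induced > best_edges:
            best_set, best_edges = subset, induced
    return best_set, best_edges

# Outerplanar example: a 6-cycle with one non-crossing chord
vertices = range(6)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 2)]
best, m = densest_k_subgraph_bruteforce(vertices, edges, 3)
print(best, m)  # the triangle {0, 1, 2} with 3 induced edges
```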
Answer: It can be solved in linear time in an even more general class of graphs: As shown in
N. Bourgeois, A. Giannakos, G. Lucarelli, I. Milis, V.T. Paschos
Exact and approximation algorithms for densest $k$-subgraph
WALCOM’13, LNCS, vol. 7748, Springer-Verlag (2013), pp. 114-125
the Densest-$k$-Subgraph problem can be solved in $O(2^{\mathrm{tw}(G)}\cdot k \cdot (\mathrm{tw}(G)^2+k)\cdot |X|)$ time when a tree decomposition $(X,T)$ of the input graph $G$ is given. Since outerplanar graphs have treewidth at most two, the Densest-$k$-Subgraph problem can be solved in $O(k^2 n)$ time on outerplanar graphs with $n$ vertices. | {
"domain": "cstheory.stackexchange",
"id": 4692,
"tags": "graph-theory, graph-algorithms, planar-graphs"
} |
What is the formal description of a Turing machine? | Question: I was asked to give a formal description of a Turing machine. I have no experience with this, and was wondering what "formal description" entails.
Answer: In class you must have seen a formal definition of a Turing machine, such as the one given by Wikipedia. Formally, a Turing machine is given by a bunch of sets and functions. A formal description of a Turing machine is just this data. You can check out some examples on Wikipedia.
The definition in Wikipedia is only one possible definition — you will need to use the formal definition given in class. While the different formal definitions give rise to slightly different Turing machine models, these models are always equivalent in power (up to some small differences), in particular they are equivalent in terms of which functions can be computed in them.
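As a toy illustration (my own hypothetical example, not from any particular course), the "bunch of sets and functions" can even be written down directly in code: a state set, a blank symbol, and a transition function, here for a machine that flips every bit of its input and halts:

```python
# A toy formal description: the state set, blank symbol, and transition
# function are exactly the kind of data a "formal description" lists.
Q = {"scan", "halt"}
blank = "_"
delta = {  # (state, symbol read) -> (new state, symbol to write, head move)
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", blank): ("halt", blank, 0),
}

def run(tape_string, state="scan", max_steps=1000):
    """Simulate the machine on a one-way-infinite tape stored as a dict."""
    tape = dict(enumerate(tape_string))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, symbol, move = delta[(state, tape.get(head, blank))]
        tape[head] = symbol
        head += move
    cells = [tape[i] for i in sorted(tape) if tape[i] != blank]
    return "".join(cells)

print(run("0110"))  # -> "1001"
```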
Programming a Turing machine "formally" is like writing a program in a programming language (or assembly language). In contrast, informal descriptions of Turing machines are similar to pseudocode or to informal descriptions of algorithms. Programming Turing machines is very messy, and the point of this exercise is just to show you that it is possible in principle, and to help you understand how a Turing machine actually "works". | {
"domain": "cs.stackexchange",
"id": 2704,
"tags": "terminology, turing-machines"
} |
How to find the longest ORF in a protein? | Question: I have a protein sequence, and I want to know how to find the longest ORF in this protein. Is there any tutorial for this process?
Answer: Proteins don't have ORFs; DNA sequences do. What DNA sequences do you have? If cDNA (i.e. transcript/mRNA), then an ORF finder will give multiple ORFs and the coordinates on the full-length input DNA, which indicate their length. | {
"domain": "bioinformatics.stackexchange",
"id": 1094,
"tags": "proteins, orf"
} |
Replacing fermionic operators with their Fourier transform and boundary conditions | Question: In the section 4.1 of Quantum Computation by Adiabatic Evolution, Farhi et al proposes a quantum adiabatic algorithm to solve the $2$-SAT problem on a ring. To compute the complexity of the algorithm the authors computed the energy gap between the ground and first excited states of the
adiabatic Hamiltonian.
The adiabatic Hamiltonian is defined as
$$
\tilde{H} (s) = (1-s) \sum^n_{j=1}(1-\sigma^{(j)}_x) + s \sum^n_{j=1}\frac{1}{2} (1-\sigma^{(j)}_z \sigma^{(j+1)}_z )
$$
Then the adiabatic Hamiltonian is reexpressed using fermionic operators as follows.
$$
\tilde{H}(s) = \sum^n_{j=1} \left\{2 (1-s)b^\dagger_j b_j + \frac{s}{2}(1-(b^\dagger_j - b_j)(b^\dagger_{j+1} + b_{j+1}))\right\}
$$
Then the authors takes the Fourier transform of the fermionic operators,
$$\beta_p = \frac{1}{\sqrt{n}} \sum^n_{j=1} e^{i\pi p j/n} b_j$$
where $p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)$,
and rewrite the adiabatic Hamiltonian as
$$\tilde{H}(s) = \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} A_p (s)$$
where
$$
A_p (s) = 2 (1-s)[\beta^\dagger_p \beta_p + \beta^\dagger_{-p} \beta_{-p}] + s \left\{1 - \cos\frac{\pi p}{n} [\beta^\dagger_p \beta_p - \beta_{-p} \beta^\dagger_{-p}] + i \sin \frac{\pi p}{n}[\beta^\dagger_{-p} \beta^\dagger_{p} - \beta_{p} \beta_{-p}]\right\}.
$$
My question:
How can I derive the second part i.e. $s \left\{1 - \cos\frac{\pi p}{n} [\beta^\dagger_p \beta_p - \beta_{-p} \beta^\dagger_{-p}] + i \sin \frac{\pi p}{n}[\beta^\dagger_{-p} \beta^\dagger_{p} - \beta_{p} \beta_{-p}]\right\}$?
My attempt:
We compute few quantities.
$$
\beta^\dagger_p \beta_p = \left(\frac{1}{\sqrt{n}}\sum^n_{k=1}e^{-i\pi p k/n}b^\dagger_k\right)\left(\frac{1}{\sqrt{n}}\sum^n_{j=1}e^{i\pi p j/n}b_j\right)
\\
=\frac{1}{n} \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b^\dagger_k b_j
$$,
$$
\beta^\dagger_{-p} \beta_{-p} = \left(\frac{1}{\sqrt{n}} \sum^n_{k=1} e^{i\pi p k/n} b^\dagger_k\right)\left(\frac{1}{\sqrt{n}} \sum^n_{j=1} e^{-i\pi p j/n} b_j\right)
\\
=\frac{1}{n} \sum^n_{k,j=1}e^{i\pi p (k-j)/n} b^\dagger_k b_j
$$,
$$
\beta_{-p} \beta^\dagger_{-p} = \left( \frac{1}{\sqrt{n}} \sum^n_{j=1} e^{-i\pi p j/n} b_j\right)\left( \frac{1}{\sqrt{n}} \sum^n_{k=1} e^{i\pi p k/n} b^\dagger_k\right)
\\
=\frac{1}{n} \sum^n_{k,j=1}e^{i\pi p (k-j)/n} b_j b^\dagger_k
$$,
$$
\beta^\dagger_{-p} \beta^\dagger_{p} = \left(\frac{1}{\sqrt{n}} \sum^n_{j=1} e^{i\pi p j/n} b^\dagger_j\right)\left(\frac{1}{\sqrt{n}} \sum^n_{k=1} e^{-i\pi p k/n} b^\dagger_k\right)
\\
=\frac{1}{n} \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b^\dagger_j b^\dagger_k
$$,
and
$$
\beta_{p} \beta_{-p} = \left(\frac{1}{\sqrt{n}} \sum^n_{j=1} e^{i\pi p j/n} b_j\right)\left(\frac{1}{\sqrt{n}} \sum^n_{k=1} e^{-i\pi p k/n} b_k\right)
\\
=\frac{1}{n} \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b_j b_k
$$.
We also compute two linear combinations of these quantities.
\begin{align}
\beta^\dagger_{p} \beta_{p} - \beta_{-p} \beta^\dagger_{-p} &= \frac{1}{n} \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b^\dagger_k b_j - \frac{1}{n} \sum^n_{k,j=1}e^{i\pi p (k-j)/n} b_j b^\dagger_k
\nonumber\\
&= \frac{1}{n} \left( \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b^\dagger_k b_j - \sum^n_{k,j=1}e^{i\pi p (k-j)/n} b_j b^\dagger_k\right)
\nonumber\\
&= \frac{1}{n} \left( \sum^n_{k,j=1}\left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_k b_j \right.
\nonumber\\
& \left. - \sum^n_{k,j=1} \left(\cos \left(\pi p (k-j)/n\right) + i \sin \left(\pi p (k-j)/n\right) \right) b_j b^\dagger_k\right)
\nonumber\\
&= \frac{1}{n} \sum^n_{k,j=1} \left(\left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_k b_j \right.
\nonumber\\
& \left. - \left(\cos \left(\pi p (k-j)/n\right) + i \sin \left(\pi p (k-j)/n\right) \right) b_j b^\dagger_k\right)
\end{align}
and
\begin{align}
\beta^\dagger_{-p} \beta^\dagger_{p} - \beta_{p} \beta_{-p} &= \frac{1}{n} \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b^\dagger_j b^\dagger_k - \frac{1}{n} \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b_j b_k
\nonumber\\
&= \frac{1}{n} \left( \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b^\dagger_j b^\dagger_k - \sum^n_{k,j=1}e^{-i\pi p (k-j)/n} b_j b_k\right)
\nonumber\\
&= \frac{1}{n} \left( \sum^n_{k,j=1}\left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_j b^\dagger_k \right.
\nonumber\\
& \left.- \sum^n_{k,j=1}\left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b_j b_k\right)
\nonumber\\
&= \frac{1}{n} \sum^n_{k,j=1} \left( \left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_j b^\dagger_k \right.
\nonumber\\
& \left.- \left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b_j b_k\right)
\end{align}.
So,
\begin{align}
1 - \cos \frac{\pi p}{n}\left[\beta^\dagger_p \beta_p - \beta_{-p} \beta^\dagger_{-p}\right] + i \sin \frac{\pi p}{n} \left[\beta^\dagger_{-p} \beta^\dagger_p - \beta_p \beta_{-p}\right] =
\nonumber\\
1 - \cos \frac{\pi p}{n}\left[\frac{1}{n} \sum^n_{k,j=1} \left(\left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_k b_j \right. \right.
\nonumber\\
- \left. \left. \left(\cos \left(\pi p (k-j)/n\right) + i \sin \left(\pi p (k-j)/n\right) \right) b_j b^\dagger_k\right)\right]
\nonumber\\
+ i \sin \frac{\pi p}{n} \left[\frac{1}{n} \sum^n_{k,j=1} \left( \left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_j b^\dagger_k \right. \right.
\nonumber\\
- \left. \left. \left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b_j b_k\right)\right]
\nonumber\\
= 1 - \frac{1}{n} \cos \frac{\pi p}{n}\left[ \sum^n_{k,j=1} \left(\left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_k b_j \right. \right.
\nonumber\\
- \left. \left. \left(\cos \left(\pi p (k-j)/n\right) + i \sin \left(\pi p (k-j)/n\right) \right) b_j b^\dagger_k\right)\right]
\nonumber\\
+ \frac{1}{n}i \sin \frac{\pi p}{n} \left[ \sum^n_{k,j=1} \left( \left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_j b^\dagger_k \right. \right.
\nonumber\\
- \left. \left. \left(\cos \left(\pi p (k-j)/n\right) - i \sin \left(\pi p (k-j)/n\right)\right) b_j b_k\right)\right]
\nonumber\\
= 1 - \frac{1}{n} \left[ \sum^n_{k,j=1} \left(\left(\cos \frac{\pi p}{n} \cos \left(\pi p (k-j)/n\right) - i\cos \frac{\pi p}{n} \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_k b_j \right. \right.
\nonumber\\
- \left. \left. \left(\cos \frac{\pi p}{n} \cos \left(\pi p (k-j)/n\right) + i \cos \frac{\pi p}{n} \sin \left(\pi p (k-j)/n\right) \right) b_j b^\dagger_k\right)\right]
\nonumber\\
+ \frac{1}{n}i \left[ \sum^n_{k,j=1} \left( \left(\sin \frac{\pi p}{n} \cos \left(\pi p (k-j)/n\right) - i \sin \frac{\pi p}{n} \sin \left(\pi p (k-j)/n\right)\right) b^\dagger_j b^\dagger_k \right. \right.
\nonumber\\
- \left. \left. \left(\sin \frac{\pi p}{n} \cos \left(\pi p (k-j)/n\right) - i \sin \frac{\pi p}{n} \sin \left(\pi p (k-j)/n\right)\right) b_j b_k\right)\right]
\end{align}
I am not sure how to get to $1-(b^\dagger_j - b_j)(b^\dagger_{j+1} + b_{j+1})$ from here.
Update 1:
Following the comment by @mas, I am starting with Eq. 4.14 i.e. the inverse Fourier transform.
\begin{align}
b_j &= \frac{1}{\sqrt{n}} \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p j/n} \beta_p
\end{align}
So,
$$
\sum^n_{j=1} (b^\dagger_j - b_j)(b^\dagger_{j+1} + b_{j+1}) = \\
\sum^n_{j=1} (\frac{1}{\sqrt{n}} \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi p j/n} \beta^\dagger_p - \frac{1}{\sqrt{n}} \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p j/n} \beta_p)(\frac{1}{\sqrt{n}} \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi p (j+1)/n} \beta^\dagger_p + \frac{1}{\sqrt{n}} \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p (j+1)/n} \beta_p)
\\
=\sum^n_{j=1} \frac{1}{n} ( \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi p j/n} \beta^\dagger_p - \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p j/n} \beta_p)( \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi p (j+1)/n} \beta^\dagger_p + \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p (j+1)/n} \beta_p)
\\
=\sum^n_{j=1} \frac{1}{n} ( \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi p j/n} \beta^\dagger_p \sum_{q = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi q (j+1)/n} \beta^\dagger_q - \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p j/n} \beta_p \sum_{q = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi q (j+1)/n} \beta^\dagger_q + \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi p j/n} \beta^\dagger_p \sum_{q = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi q (j+1)/n} \beta_q - \sum_{p = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p j/n} \beta_p \sum_{q = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi q (j+1)/n} \beta_q)
\\
=\sum^n_{j=1} \frac{1}{n} ( \sum_{p,q = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi p j/n} e^{i\pi q (j+1)/n} \beta^\dagger_p \beta^\dagger_q - \sum_{p,q = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p j/n} e^{i\pi q (j+1)/n} \beta_p \beta^\dagger_q + \sum_{p,q = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{i\pi p j/n} e^{-i\pi q (j+1)/n} \beta^\dagger_p \beta_q - \sum_{p,q = \pm 1, \pm 3, \ldots, \pm \left(n-1\right)} e^{-i\pi p j/n} e^{-i\pi q (j+1)/n} \beta_p \beta_q)
$$
I am still stuck.
Answer: \begin{equation}
b_{j} = \frac{1}{\sqrt{n}}\sum_{p}e^{-i\pi pj/n}\beta_{p}\qquad b_{j+1} = \frac{1}{\sqrt{n}}\sum_{p}e^{-i\pi q(j+1)/n}\beta_{q}
\end{equation}
Then
\begin{equation}
\sum_{j}b_{j}^{\dagger}b_{j+1}^{\dagger} = \frac{1}{n}\sum_{j}\sum_{p,q}e^{\pi i(p+q)j/n}e^{\pi iq/n}\beta_{q}^{\dagger}\beta_{p}^{\dagger} = \sum_{p} e^{-\pi ip/n}\beta_{-p}^{\dagger}\beta_{p}^{\dagger}
\end{equation}
Likewise $\sum_{j}b_{j}b_{j+1}= \sum_{p} e^{\pi ip/n}\beta_{-p}\beta_{p}$. Then
\begin{eqnarray}
\sum_{j}\left[b_{j}^{\dagger}b_{j+1}^{\dagger}-b_{j}b_{j+1}\right] & = & \sum_{p} [e^{-\pi ip/n}\beta_{-p}^{\dagger}\beta_{p}^{\dagger}-e^{\pi ip/n}\beta_{-p}\beta_{p}] \\
& = & \sum_{p}e^{-\pi ip/n}[\beta_{-p}^{\dagger}\beta_{p}^{\dagger}-e^{2\pi ip/n}\beta_{-p}\beta_{p}]\\
& = & \sum_{p}e^{-\pi ip/n}[\beta_{-p}^{\dagger}\beta_{p}^{\dagger}-\beta_{-p}\beta_{p}]\quad\boxed{\text{using}~e^{2\theta}=1}\\
& = & \sum_{p}(\cos\frac{\pi p}{n}-i\sin\frac{\pi p}{n})[\beta_{-p}^{\dagger}\beta_{p}^{\dagger}-\beta_{-p}\beta_{p}]\\
& = & -2i\sum_{p}\sin\left(\frac{\pi p}{n}\right)[\beta_{-p}^{\dagger}\beta_{p}^{\dagger}-\beta_{-p}\beta_{p}] \\
\Rightarrow -\frac{1}{2}\sum_{j}\left[b_{j}^{\dagger}b_{j+1}^{\dagger}-b_{j}b_{j+1}\right] & = & i\sum_{p}\sin\left(\frac{\pi p}{n}\right)[\beta_{-p}^{\dagger}\beta_{p}^{\dagger}-\beta_{-p}\beta_{p}] \\
\end{eqnarray}
Which is the desired expression. Let's check the term $\sum_{p}(\cos\frac{\pi p}{n}-i\sin\frac{\pi p}{n})[\beta_{-p}^{\dagger}\beta_{p}^{\dagger}-\beta_{-p}\beta_{p}]$ for $p=\pm 1$ (setting $\frac{1}{n}=m$)
\begin{eqnarray}
(\cos\pi m-i\sin\pi m)[\beta_{-1}^{\dagger}\beta_{1}^{\dagger}-\beta_{-1}\beta_{1}]+[\cos(-\pi m)-i\sin(-\pi m)][\beta_{1}^{\dagger}\beta_{-1}^{\dagger}-\beta_{1}\beta_{-1}] & = & -2i\sin(\pi m)[\beta_{-1}^{\dagger}\beta_{1}^{\dagger}-\beta_{-1}\beta_{1}]
\end{eqnarray}
(Here $\beta_{-p}\beta_{p}=-\beta_{p}\beta_{-p}$ has been used.) Therefore no contribution comes from the cosine terms. Likewise, the rest of the expression follows. | {
"domain": "physics.stackexchange",
"id": 31711,
"tags": "quantum-mechanics, condensed-matter, quantum-information, fourier-transform, fermions"
} |
Suggestions on using irobot create with hokuyo urg 04lx / turtlebot? | Question:
I was wondering if it's possible to use the Hokuyo URG-04LX lidar sensor with the TurtleBot or iRobot Create. The Kinect has a limited scan range of 50 degrees. I want to use the Hokuyo for larger scans (180-240 degrees). How do we use the same TurtleBot power and sensor board to connect the Hokuyo laser? I have seen that people have successfully used an Arduino with a Roomba and a Hokuyo, but has anyone been able to use the Create / TurtleBot successfully and build 2D maps?
Originally posted by Vegeta on ROS Answers with karma: 340 on 2012-09-27
Post score: 0
Answer:
Sure, that should work fine. The only thing you need to do is use 5V instead of the 12V for the Kinect.
Originally posted by dornhege with karma: 31395 on 2012-09-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11154,
"tags": "turtlebot, hokuyo"
} |
Predicting the effect of a resistive force on a satellite in orbit around a planet | Question: I'm asked to predict the effect of a resistive force acting on a satellite of mass $m$ in orbit of radius $r$ around a planet of mass $M$. I have come up with the following equations:
Kinetic energy = $\frac{GMm}{2r}$ (Derived by equating the equation for centripetal force to the gravitational force acting on the satellite)
Potential energy = $\frac{-GMm}{r}$ (Multiplied the gravitational potential equation by $m$)
Total energy = $\frac{-GMm}{2r}$ (Added equations 1 and 2)
What I predict will change:
The kinetic energy will decrease because the resistive force will decrease the angular velocity
From equation 1, the radius should increase
From equation 2, the gravitational potential energy will increase since radius increases
From equation 3, the total energy will increase since the radius increases
They are all wrong and the opposites are true. Where did I go wrong? I wrote all that since I feel I must be missing something very fundamental.
Answer: The error is right here:
The kinetic energy will decrease because the resistive force will decrease the angular velocity
This would happen only if the friction force was localized on a short segment of the orbit, such as if suddenly the satellite got hit by a dense cloud of dust or meteorites. And even then, the decrease in velocity would be only temporary, because the orbit has been altered and the satellite will start falling towards the attraction center and will increase its speed on the other side of the orbit.
Usually, the friction on satellites is due to atmosphere and is steady and very weak.
Let us assume initially circular orbit.
If there is no friction at all, the circular orbit will remain stable and the satellite will keep the same distance from the attracting center.
If, however, a weak friction opposed to velocity is present, what will happen is that the friction will slowly steal mechanical energy of the system satellite-planet. It will slowly put the satellite down. Because it is a slow process, the shape of the orbit will not change much. It will remain almost circular, but the circle will shrink in time (because mechanical energy is decreasing in time).
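This energy bookkeeping is easy to spot-check numerically. A quick sketch with assumed Earth-like values (the numbers are illustrative, not from the question):

```python
from math import sqrt

# Assumed illustrative values: Earth's mass and a 1000 kg satellite
G, M, m = 6.674e-11, 5.972e24, 1000.0

def circular_orbit(r):
    v = sqrt(G * M / r)            # circular-orbit speed
    ke = 0.5 * m * v**2            # kinetic energy, equals +G M m / (2 r)
    e_tot = -G * M * m / (2 * r)   # total mechanical energy
    return v, ke, e_tot

v_hi, ke_hi, e_hi = circular_orbit(7.0e6)   # before drag
v_lo, ke_lo, e_lo = circular_orbit(6.8e6)   # after drag has lowered the orbit

# Friction removes total energy, yet speed and kinetic energy both increase
assert e_lo < e_hi and v_lo > v_hi and ke_lo > ke_hi
```

This makes the sign bookkeeping concrete: total energy goes down as the orbit shrinks, while speed and kinetic energy go up.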
So the satellite, while orbiting, is also falling down towards the center. And since circular-orbit speed $v=\sqrt{GM/r}$ increases as the radius shrinks, the more the satellite falls, the higher its speed and kinetic energy become. | {
"domain": "physics.stackexchange",
"id": 52686,
"tags": "newtonian-mechanics"
} |
CMAKE_OSX_DEPLOYMENT_TARGET and CMAKE_OSX_SYSROOT | Question:
Hi possible lifesavers,
I am just getting started with ROS, and now having trouble with rosmake.
When I try to do rosmake, it will give:
CMake Error at /usr/local/Cellar/cmake/2.8.10.1/share/cmake/Modules/Platform/Darwin.cmake:190 (message):
CMAKE_OSX_DEPLOYMENT_TARGET is '10.6' but CMAKE_OSX_SYSROOT:
" "
is not set to a MacOSX SDK with a recognized version. Either set
CMAKE_OSX_SYSROOT to a valid SDK or set CMAKE_OSX_DEPLOYMENT_TARGET to
empty.
Call Stack (most recent call first):
/usr/local/Cellar/cmake/2.8.10.1/share/cmake/Modules/CMakeSystemSpecificInformation.cmake:36 (include)
-- Configuring incomplete, errors occurred!
Then I configured CMAKE_OSX_SYSROOT to
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk
Now, I am getting:
CMake Error at /usr/local/Cellar/cmake/2.8.10.1/share/cmake/Modules/Platform/Darwin.cmake:190 (message):
CMAKE_OSX_DEPLOYMENT_TARGET is '10.6' but CMAKE_OSX_SYSROOT:
"10.7"
is not set to a MacOSX SDK with a recognized version. Either set
CMAKE_OSX_SYSROOT to a valid SDK or set CMAKE_OSX_DEPLOYMENT_TARGET to
empty.
Call Stack (most recent call first):
/usr/local/Cellar/cmake/2.8.10.1/share/cmake/Modules/CMakeSystemSpecificInformation.cmake:36 (include)
-- Configuring incomplete, errors occurred!
which does not really make sense to me now.
So, any ideas?
Originally posted by gegao884 on ROS Answers with karma: 1 on 2012-12-15
Post score: 5
Original comments
Comment by Kevin on 2013-01-01:
I have the same issue too ... trying to figure it out.
Answer:
OK, this is what I did: go to /opt/ros/groovy/share/ros/core/rosbuild/public.cmake and comment out line 164 (with the # symbol) so that it says:
if(APPLE)
#SET(CMAKE_OSX_DEPLOYMENT_TARGET "10.8" CACHE STRING "Deployment target for OSX" FORCE)
endif(APPLE)
That seems to work for me so far. Maybe @WilliamWoodall might have a better idea.
I have OSX 10.8, ROS Groovy Beta 3, cmake 2.8.10.1, and XCode 4.5.2
Originally posted by Kevin with karma: 2962 on 2013-01-01
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Kevin on 2013-01-01:
Note, I tried changing the 10.6 to a 10.8 since I am using OSX 10.8 and it didn't help. Your line 164 of public.cmake will say 10.6.
Comment by WilliamWoodall on 2013-01-02:
I didn't add this line, I'll have to look at the history to see when/why it was introduced. I haven't run into this problem yet.
Comment by Kevin on 2013-01-21:
That should be correct if you did not install it (for some reason) to /opt/ros/groovy. Make sure you do a make clean, that may help.
Comment by WilliamWoodall on 2013-01-21:
Just look for the one which is under the path returned by rospack find rosbuild | {
"domain": "robotics.stackexchange",
"id": 12120,
"tags": "macos, rosmake, macos-mountain-lion, macos-lion, osx"
} |
Bash script to send emails when web server does not respond | Question: I've made a simple bash script to check if a web server responds, and to send emails to a list of addresses if the website is down. Any suggestions as to how to improve it/ edge cases that I missed/ better styling/ etc? I'm not very familiar with idiomatic bash.
#!/bin/sh
output=$(wget http://lon3213:8111 2>&1)
pattern="connected"
path="/remote/users/my_user/projects/cmb_pinger/"
# File where I check and write the status of the server
# so I don't end up sending more e-mails for the same
# downtime. It writes 0 if no email has been sent yet
# and 1 if the email has been already sent.
log="check"
emails="email1@example.com,email2@example.com"
# Create file if it doesn't exist
if ! [[ -f "$path$log" ]]
then
echo "0" > "$path$log"
fi
# Website cannot be reached
if [[ ! "$output" =~ "$pattern" ]]
then
# Emails have not yet been sent
if [[ $(cat "$path$log") == "0" ]]; then
echo "$output" | mail -s "Dashboard is down" $emails
echo "1" >| "$path$log"
fi
else
echo "0" >| "$path$log"
fi
Answer: I am impressed with the consistency of your code. You have obviously put effort in to making sure it is well presented, etc. It makes it easy to read.
The first thing I notice is the she-bang line: you have #!/bin/sh, but this should be #!/bin/bash, right? The script uses Bash-specific features that don't exist in classic /bin/sh, such as [[. Although it may appear to work with #!/bin/sh on your system, chances are that your /bin/sh is symlinked to /bin/bash (typical on Linux). This is not something you can rely on (it will break spectacularly on Solaris, for example), so when using Bash-specific features you should use #!/bin/bash in the she-bang.
Then, there are some shortcuts you are missing, and these are standard facilities available in most Unix commands... wget has an exit code! Use it.
wget saves the web content to a file in the local directory. It will not overwrite files, so you may end up with many copies of the same content. I recommend redirecting the URL contents to stdout by using the -O - argument to wget.
Error handling is a critical part of any script. It's not always easy in bash, but it can be done.
You have used the redirection >| in a few places. This caught me by surprise. I admit I looked it up. In my experience the noclobber setting is seldom set, but that may just be for me. That's just an observation. Your site may be set up differently.
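For readers who, like me, had to look it up: >| only differs from > when the shell's noclobber option is on. A quick sketch (using a throwaway file under /tmp):

```shell
set -o noclobber
f=/tmp/clobber_demo.$$
echo first > "$f"                         # creates the file
if echo second > "$f" 2>/dev/null; then   # plain > now refuses to overwrite
    blocked=no
else
    blocked=yes
fi
echo third >| "$f"                        # >| overrides noclobber
contents=$(cat "$f")
rm -f "$f"
set +o noclobber
```

With noclobber on, the second plain redirection fails and the file keeps its old contents until >| forces the write.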
Finally, bash has function support, which allows you to simplify the logic substantially
Consider the following:
#!/bin/bash
url="http://lon3213:8111"
path="/tmp/cmb_pinger/"
# File where I check and write the status of the server
# so I don't end up sending more e-mails for the same
# downtime. It writes 0 if no email has been sent yet
# and 1 if the email has been already sent.
log="check"
emails="email1@example.com,email2@example.com"
error() {
local message="$@"
>&2 echo $message
exit 1
}
setFlag() {
local flag=$1
echo $flag >| "$path$log" || error "Unable to set flag to $flag"
}
# Create file if it doesn't exist
[[ -d "$path" ]] || mkdir -p "$path" || error "Unable to create directory $path"
[[ -f "$path$log" ]] || setFlag 0
# Website cannot be reached
output=$(wget -O - $url 2>&1)
success=$?
if [[ $success -eq 0 ]]
then
setFlag $success
else
# Emails have not yet been sent
if [[ $(cat "$path$log") == "0" ]]; then
        echo "$output" | mail -s "Dashboard is down" $emails || error "Unable to send mail to $emails"
setFlag $success
fi
fi | {
"domain": "codereview.stackexchange",
"id": 12107,
"tags": "bash, linux, email"
} |
joint_trajectory_controllers for ROS Melodic on Arch Linux | Question:
Hi there,
After a new fresh install of ROS Melodic on our Arch machine, I noticed that upon installing deps for the new ur_robot_driver, the package joint_trajectory_controller is not available on arch, at least for Melodic. For Kinetic and Indigo, I could find packages in the AUR, but not for Melodic.
Does anyone know how to get this running on Arch ?
Because I would like to not have to switch to an Ubuntu VM just for ROS development.
Cheers
PS: Error message from rosdep install --from-paths src --ignore-src -y:
ERROR: the following packages/stacks could not have their rosdep keys
resolved to system dependencies:
ur_controllers: No definition of [joint_trajectory_controller] for OS [arch]
ur_robot_driver: Cannot locate rosdep definition for [ur_msgs]
Originally posted by ramdambo on ROS Answers with karma: 3 on 2020-05-14
Post score: 0
Answer:
Not being able to install a package using your package manager is one of the valid reasons for building them from source (#q320046: it mentions Ubuntu in the title, but the same reasoning applies on other OSes; see the points in the No release available section).
For a description of the general workflow when building packages from source, you could refer to #q252478.
If you really need to build much more from source (rosdep will tell you if it cannot find the dependencies of the packages in ros_controllers in your package manager), then you would probably have to resort to rosinstall_generator. Before doing that, though, you may want to contact whoever is maintaining the Arch repositories, as ros_control is a pretty common dependency and I would expect it to be available.
Originally posted by gvdhoorn with karma: 86574 on 2020-05-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by ramdambo on 2020-05-27:
Hi there, sorry for the delay.
I tried to install ros_controllers from source but as you suspected, I had to install even more dependencies after that, so I resorted to rosinstall_generator.
Problem is that even here, I am missing dependencies, namely libgazebo9-dev, libboost-thread-dev and libboost-filesystem-dev even though boost and gazebo are installed.
I'll try to contact the maintainer/s. I suppose you mean the maintainer of ros-melodic-desktop-full right ?
Comment by gvdhoorn on 2020-05-27:
I'll try to contact the maintainer/s. I suppose you mean the maintainer of ros-melodic-desktop-full right ?
No.
The people doing the Arch linux packaging.
Comment by ramdambo on 2020-05-27:
Yeah that's what I meant, should have been clearer. But apparently an issue addressing the missing dependencies has already been opened.
Comment by gvdhoorn on 2020-05-27:
Please link that issue here, so we can keep things connected. | {
"domain": "robotics.stackexchange",
"id": 34956,
"tags": "ros, ros-melodic, rosdep, joint-trajectory-controller"
} |
The storage of kinetic energy in a flywheel? | Question: I am reading a book on physics demonstrations and problems, and one of the problems deals with a flywheel which rotates at maximum angular speed. The density of the flywheel is uniform and the question asks for the kinetic energy per kilogram stored in the flywheel. I am confused about the effect of the physical dimensions. Will the thickness $h$ and the radius $r$ affect the storage of kinetic energy?
Answer: There are two things at play: the energy stored in the flywheel, and the stresses due to the rotation that threaten to pull it apart. The question is really asking about their relationship:
The stored energy is given as $$E = \frac12 I \omega^2$$
Where the moment of inertia for a solid disk is given by
$$I = \frac12 m r^2$$
For a given density of material $\rho$ and thickness $t$, the mass is given by $m = \pi r^2 t \rho$ so we can write the energy as
$$E = \frac14 \pi \rho t r^4 \omega^2$$
Now the hoop stress is given by
$$\sigma = \rho r^2 \omega^2$$
When that value is limited by material properties, we see that the energy stored in the flywheel is given by
$$E = \frac14 \pi r^2 t \sigma$$
In other words, for a given stress level, the maximum stored energy goes up with the thickness of the flywheel (no surprise there, since making it thicker is like increasing the number of flywheels - so of course it should scale with that) and also with the radius - bigger flywheels store more energy before they break. That's perhaps not quite so intuitive, but it clearly follows from the equations.
Interestingly, the density of the disk does not appear in the final expression - it contributes to the stored energy and the hoop stress in the same way, so when you express energy in terms of the strength of the material, the density disappears.
Finally - looking at the above expression you can see that most of the energy term now relates to the volume of the disk, so we can get the energy per kilogram by dividing by the mass ($\pi r^2 t \rho$) on both sides:
$$\frac{E}{m} = \frac{\sigma}{4\rho}$$
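The two routes to the specific energy can be cross-checked numerically. A sketch; the steel-like numbers are assumptions for illustration, not from the problem:

```python
from math import pi, sqrt

# Assumed illustrative values: a steel-like disk
rho = 7800.0       # density, kg/m^3
sigma = 400e6      # allowable hoop stress, Pa
r, t = 0.5, 0.1    # radius and thickness, m

omega = sqrt(sigma / (rho * r**2))          # max speed from sigma = rho r^2 omega^2
E = 0.25 * pi * rho * t * r**4 * omega**2   # stored energy at that speed
m = pi * r**2 * t * rho                     # disk mass

# Specific energy matches sigma / (4 rho): no dependence on the disk's r or t
assert abs(E / m - sigma / (4 * rho)) < 1e-9 * (E / m)
```

Changing r or t changes the total stored energy, but the assertion holds for any disk size, since E/m depends only on the material.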
The energy stored per unit mass does not, in fact, depend on the physical dimensions. Unexpected and interesting result. | {
"domain": "physics.stackexchange",
"id": 19389,
"tags": "homework-and-exercises, energy, rotational-dynamics"
} |
What are $\partial_t$ and $\partial^\mu$? | Question: I'm reading the Wikipedia page for the Dirac equation:
$\rho=\phi^*\phi\,$
......
$J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$
with the conservation of probability current and density following
from the Schrödinger equation:
$\nabla\cdot J + \frac{\partial\rho}{\partial t} = 0.$
The fact that the density is positive definite and convected according
to this continuity equation, implies that we may integrate the density
over a certain domain and set the total to 1, and this condition will
be maintained by the conservation law. A proper relativistic theory
with a probability density current must also share this feature. Now,
if we wish to maintain the notion of a convected density, then we must
generalize the Schrödinger expression of the density and current so
that the space and time derivatives again enter symmetrically in
relation to the scalar wave function. We are allowed to keep the
Schrödinger expression for the current, but must replace by
probability density by the symmetrically formed expression
$\rho = \frac{i\hbar}{2m}(\psi^*\partial_t\psi - \psi\partial_t\psi^*).$
which now becomes the 4th component of a space-time vector, and the
entire 4-current density has the relativistically covariant expression
$J^\mu = \frac{i\hbar}{2m}(\psi^*\partial^\mu\psi - \psi\partial^\mu\psi^*)$
What exactly are $\partial_t$ and $\partial^\mu$?
Are they tensors?
If they are, how are they defined?
Answer: $\partial_t\equiv\frac\partial{\partial t}$ and $\partial^\mu\equiv g^{\mu\nu}\frac\partial{\partial x^\nu}=\left(\sum_{\nu=0}^3g^{\mu\nu}\frac\partial{\partial x^\nu}\right)_{\mu=0}^3$ are differential operators. $\partial^\mu$ is formally contravariant (upper index) and obeys the corresponding transformation laws. $\partial_t$ has a lower index and is (up to a constant factor) a component of the formally covariant operator $\partial_\mu$ via $\partial_0=\frac1c\partial_t$, which, in general, is not equal to $\partial^0$, the zeroth component of $\partial^\mu$.
The differential operator $\partial^\mu$ is known as the gradient, which derives vector fields from potential functions. The gradient is not a natural operation on arbitrary manifolds and only available if there's a metric. Its dual $\partial_\mu\equiv\frac\partial{\partial x^\mu}$ on the other hand is a natural operation corresponding to the differential $\mathrm d$, taking potentials to 1-forms (covector fields).
As a side note, $\partial_t$ can also be understood as a local vector field, as one of the intrinsic definitions of vectors on manifolds is via their directional derivatives. In mathematical literature, it is common to write the basis of the tangent space as $\{\frac\partial{\partial x^\mu}\}$ and its dual space as $\{\mathrm dx^\mu\}$. | {
"domain": "physics.stackexchange",
"id": 4889,
"tags": "special-relativity, notation, tensor-calculus, differentiation"
} |
Statistics about gaps in DNA sequences | Question: Noobie to Numba here, I'm trying to get faster code from existing function but the result is not faster. 10 times faster would be heaven, but I know nothing about optimization.
This is code for parsing gaps in pairwise alignments of DNA sequences and computing statistics about them.
The code looks like this:
import re
import time
import numpy as np
from numba import autojit, int32, complex64
sstart = 10
send = 52
absoluteRoiStart = 19 #rank of the first nucleotide in the ROI
absoluteRoiStop = 27 #rank of the first nucleotide after the ROI
#ROI is here 'TATCGA---CAG|TA-----TACTA-C|G--TTGAGAGAGAC-CCCA'
#between | 'T--CGACCAC--|-GATCGAG---ATC|GGCTT--------CTC--A'
source = 'TATCGA---CAGTA-----TACTA-CG--TTGAGAGAGAC-CCCA'
sequence = 'T--CGACCAC---GATCGAG---ATCGGCTT--------CTC--A'
realSource = 'AAGGTTCCAATATCGACAGTATACTACGTTGAGAGAGACCCCACATGACTGACTACGT'
tresholds = {
"DEL" : {
"other" : 2,
"slippage": 2,
"quantity" : 7
},
"INS" : {
"other" : 3,
"slippage": 3,
"quantity" : 7
},
"MUT" : {
"other" : 3,
"slippage": 3,
"quantity" : 7
},
"NA" : {
"other" : 3,
"slippage": 3,
"quantity" : 7
}
}
def getAllGaps(sequence1, sequence2):
starts = []
stops = []
lengths = []
types = []
locations = []
gap = '(\-)+'
x = re.compile(gap)
for m in x.finditer(sequence1):
    #Get gap start, stop and length
start,stop = m.span()
    #Test if gap is slippage (compression or extension)
if start > 1 and stop < len(sequence2):
h = sequence2[start-1:stop+1].upper()
i = sequence1[start-1:stop+1]
repetitions = i.replace('-', h[0]).upper(), i.replace('-', h[-1]).upper()
if h == repetitions[0] or h == repetitions[1]:
slippage = True
else:
slippage = False
else:
slippage = False
starts.append(start)
stops.append(stop)
lengths.append(stop-start)
if slippage:
types.append(2)
else:
types.append(1)
locations += range(start, stop)
d = [starts, stops, lengths, types]
return {'locations': locations, 'bounds': d}
def getAlignmentData(source, sequence, sstart, tresholds):
insertionData = getAllGaps(source, sequence)
alignmentLength = len(source)
oneArray = np.ones(alignmentLength)
oneArray[insertionData['locations']] = 0
absoluteIndex = oneArray.cumsum()-1+sstart
relativeIndex = np.arange(alignmentLength)
tf = (absoluteIndex >= absoluteRoiStart) & (absoluteIndex < absoluteRoiStop)
absoluteBounds = absoluteIndex[tf]
relativeBounds = relativeIndex[tf]
relativeRoiStart = int(relativeBounds.min())
relativeRoiStop = int(relativeBounds.max())
events = np.array(insertionData['bounds'], dtype=np.int32)
insertionStartingInRoi = events[:,(events[0] >= relativeRoiStart) & (events[1] <= relativeRoiStop)]
print(insertionStartingInRoi)
deletionData = getAllGaps(sequence, source)
events = np.array(deletionData['bounds'], dtype=np.int32)
deletionOverlappingRoiOrStartingInRoi = events[:,((events[0] <= relativeRoiStart) & (events[1] >= relativeRoiStart)) | ((events[0] >= relativeRoiStart) & (events[1] <= relativeRoiStop))]
print(deletionOverlappingRoiOrStartingInRoi)
t0 = time.time()
getAlignmentData(source, sequence, sstart, tresholds)
t1 = time.time()
getAlignmentData(source, sequence, sstart, tresholds)
t2 = time.time()
print(str(t1-t0)+' to first try')
print(str(t2-t1)+' to second try')
When I add the @jit decorator on the two functions, I get slower code. Do I need to do something special, like signatures? Can Numba make this code faster or do I need to use Cython?
Answer: After profiling your code as:
import cProfile, pstats, StringIO
pr = cProfile.Profile()
pr.enable()
for it in range(0,10000):
getAlignmentData(source, sequence, sstart, tresholds)
pr.disable()
s = StringIO.StringIO()
sortby = 'cumulative'
ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
ps.print_stats()
print s.getvalue()
You will get:
1370129 function calls (1370122 primitive calls) in 3.249 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
10000 1.003 0.000 3.249 0.000 test.py:71(getAlignmentData)
20000 1.071 0.000 1.684 0.000 test.py:38(getAllGaps)
20000 0.171 0.000 0.171 0.000 {numpy.core.multiarray.array}
20000 0.160 0.000 0.160 0.000 {method 'reduce' of 'numpy.ufunc' objects}
10000 0.018 0.000 0.121 0.000 {method 'min' of 'numpy.ndarray' objects}
400022 0.119 0.000 0.119 0.000 {method 'append' of 'list' objects}
180000 0.107 0.000 0.107 0.000 {method 'replace' of 'str' objects}
100000 0.106 0.000 0.106 0.000 {range}
You can use Cython or f2py to optimise memory management starting with getAllGaps, which is simpler, and then go for getAlignmentData.
Keep in mind that you need to deactivate outputs and take long runs to measure real speedup.
If you compile your code using nuitka you can get 9x speedup.
110128 function calls (110121 primitive calls) in 0.355 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
20000 0.158 0.000 0.158 0.000 {method 'reduce' of 'numpy.ufunc' objects}
10000 0.011 0.000 0.101 0.000 /usr/local/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/_methods.py:28(_amin)
10000 0.022 0.000 0.096 0.000 /usr/local/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/numeric.py:141(ones)
20000 0.025 0.000 0.081 0.000 /usr/local/Frameworks/Python.framework/Versions/2.7/lib/python2.7/re.py:192(compile)
10000 0.008 0.000 0.076 0.000 /usr/local/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/_methods.py:25(_amax)
20000 0.056 0.000 0.056 0.000 /usr/local/Frameworks/Python.framework/Versions/2.7/lib/python2.7/re.py:230(_compile)
10000 0.037 0.000 0.037 0.000 {numpy.core.multiarray.copyto}
10000 0.037 0.000 0.037 0.000 {numpy.core.multiarray.empty}
All you need to do is install nuitka and run:
$ nuitka mycode.py
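If you still want to try Numba itself rather than Nuitka: Numba shines on plain numeric loops over NumPy arrays, not on regex, string, or dict-heavy code, so the trick is to isolate the numeric part and jit only that. Below is a sketch of a hypothetical helper (gap_bounds is my name, not part of the original code) that finds gap runs from a boolean array; it falls back to pure Python when Numba is not installed:

```python
import numpy as np

try:
    from numba import njit
except ImportError:       # run unjitted if Numba is absent
    njit = lambda f: f

@njit
def gap_bounds(is_gap):
    """Return the [start, stop) bounds of each run of True values."""
    n = is_gap.size
    starts = np.empty(n, np.int64)
    stops = np.empty(n, np.int64)
    count = 0
    in_gap = False
    for i in range(n):
        if is_gap[i] and not in_gap:
            starts[count] = i
            in_gap = True
        elif not is_gap[i] and in_gap:
            stops[count] = i
            count += 1
            in_gap = False
    if in_gap:            # sequence ends inside a gap
        stops[count] = n
        count += 1
    return starts[:count], stops[:count]

# Bytes make the sequence a numeric array Numba can loop over
seq = np.frombuffer(b'TAT--CGA---C', dtype=np.uint8)
starts, stops = gap_bounds(seq == ord('-'))
```

Note that the first call includes compilation time, which ties in with the advice above about taking long runs to measure real speedup.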
You also need a larger dataset to run a proper benchmark. Since it's easy to keep small data in cache, the profiler can provide misleading results. | {
"domain": "codereview.stackexchange",
"id": 14258,
"tags": "python, performance, numpy, bioinformatics, numba"
} |
Is there direct verification or proof of the orbital probability clouds of Hydrogen using Schrodinger wave function? | Question: The different probability cloud shapes of the Hydrogen atom (consisting of one proton and one electron) can be solved using the spherical wave function. The answer is often summarized as follows:
When plotted with 3D graphics, they look something like:
My question is about the direct verification of these "strange" (so to speak) shapes. I have searched and found almost no experimental results that directly verify all these shapes, for example by taking many separate measurements and tabulating the results into a 3D cloud. There is an experiment that purportedly shows the (1,0,0), (2,0,0) and (3,0,0) shapes of the probability cloud, but what about the exotic shapes like (4,1,1) and (4,2,0)?
If we show these pictures to a 6-year-old, the first thing they will probably say is: "I don't believe it. Prove it and show me!" Great claims require great experimental proofs. The greater the claim, the greater the burden of proof, right? :)
EDIT: I did read the related post Is there experimental verification of the s, p, d, f orbital shapes? and followed the links given in the answers. It is one experiment where it looks like the top-most (1,0,0), (2,0,0) and (3,0,0) shapes are reproduced. Unfortunately, only the spherical ones are reproduced, and arguably it would be easier to approximately reproduce a general "round" spherical shape compared to a very strange characteristic one like, for example, (4,2,0), which, if it can be reproduced very accurately, is a good omen that the Schrodinger function produces this "shape", as it is not easy to produce this exact specific probability cloud with some other function...
Answer: I would tell the six year old that these angular shapes are not weird at all. They are the functional forms of irreducible representations of SO(3)...at least in the angular department. The fact that:
$$ \sum_{m=-l}^l|\psi_{nlm}(\rho,\theta, \phi)|^2 \propto \frac {f(\rho)} {4\pi} $$
i.e., that full shells are spherically symmetric, is to be expected. Moreover, should he complain that the $z$-axis is arbitrary, I'd remind him these are irreducible representations. They are closed under rotations:
$$\psi_{nl'm'}(\rho',\theta', \phi') = \sum_{m=-l}^lc^{lm}_{l'm'}\psi_{nlm}(\rho,\theta, \phi)$$
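The spherical-symmetry claim for full shells, $\sum_m|Y_l^m|^2=\text{const}$ (Unsold's theorem), is easy to spot-check numerically. A sketch using the standard $Y_l^m$ tables for $l=1,2$ (only the angular factors matter here):

```python
from math import sin, cos, pi

def shell_density(l, theta):
    """Sum of |Y_l^m(theta, phi)|^2 over m = -l..l (phi drops out)."""
    c, s = cos(theta), sin(theta)
    if l == 1:
        return 3/(4*pi)*c**2 + 2 * 3/(8*pi)*s**2
    if l == 2:
        return (5/(16*pi))*(3*c**2 - 1)**2 \
             + 2*(15/(8*pi))*s**2*c**2 + 2*(15/(32*pi))*s**4
    raise ValueError("only l = 1, 2 tabulated here")

# Unsold's theorem: the sum is the constant (2l+1)/(4*pi) at every angle
for l in (1, 2):
    for theta in (0.3, 1.0, 2.2):
        assert abs(shell_density(l, theta) - (2*l + 1)/(4*pi)) < 1e-12
```

The individual $|Y_l^m|^2$ lobes are strongly angle-dependent, yet their sum over a shell is flat, exactly as claimed above.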
Regarding the radial wave function, I would hope he expects more nodes as the energy increases.
My only concern would be if this precocious little six-year-old noticed that the energies aren't just degenerate in $m$...which is mandatory based on the conservation of angular momentum, but also in $l$ (and similarly, planetary orbital energies are independent of eccentricity). This means there is a hidden symmetry: the Newton/Coulomb potential is separable in parabolic coordinates, and atoms don't care about our coordinates. They have no obligation to be in the standard $\psi_{nlm}$ orbitals, and could be in orbitals labeled by different quantum numbers (sometimes $n_1,n_2,m$)...now those are "strange". | {
"domain": "physics.stackexchange",
"id": 83075,
"tags": "quantum-mechanics, experimental-physics, wavefunction, atomic-physics, orbitals"
} |
Why do we keep orbiting through the Perseid meteors? | Question: I understand that the meteors originated from the comet Swift–Tuttle. Presumably they left the comet with some velocity relative to the solar system. As comets follow elliptical orbits, presumably the meteors are generally also following elliptical orbits. And as the Earth follows a circular orbit, why do we keep meeting them every year despite being on different orbits?
Answer: This is probably best understood by picture.
Comet Swift-Tuttle is both on a highly elliptical orbit, and a highly inclined orbit, though the inclination of the orbit doesn't matter to your specific question, but it helps with the overall picture. What really matters for Earth getting a meteor shower is that the two orbital paths intersect.
What makes the periodic meteor shower is the dust trail that follows (and precedes) the comet (not to be confused with the dust tail - that's something else). Any solid debris that gets blown off the comet at low velocity enters an orbit with a very similar orbital path to the comet's, and over hundreds or thousands of orbits, as the comet effectively gets broken up each time it passes the sun, it generates a dust or debris trail that largely follows its entire orbit; at least, that's where the greatest concentration of particles is. Not all meteor showers are periodic, but all periodic meteor showers happen in this way, with a semi-permanent orbiting ring of dust that the Earth flies through once a year.
Dust trail (generally not visible, too spread out, unless heated by a close pass to the sun)
Dust tail (can be visible), only exists as the comet passes close to the sun.
Every short period comet, unless it's very young, has a debris or dust trail that roughly spans its entire orbit, though the trail is most dense near the comet. In fact, the density of the dust trail can tell us how long the comet has been on its current orbit. Most short period comets were deflected into their current orbits within the last 100,000 years or so. Short period comets have relatively short life-spans.
Long period comets or near-parabolic comets are different because their orbits are much longer, though they can still generate trails during a pass close to the sun, they tend to be much more spread out given the enormous size of their orbits.
Because the Earth's orbit intersects Swift-Tuttle's debris trail once a year and the debris trail stays mostly in the same place, we get the Perseid meteor shower about the same time every year, and the meteors always come from the same direction in the sky.
Not every short period comet has an orbit that intersects with the Earth's orbit, which is why not all comets are associated with meteor showers; in fact, the majority aren't. 3 dimensions are hard to draw in 2-D pictures, but Pluto and Neptune can be used as an example. It's often said that they cross orbits, but they actually don't. Pluto gets closer to the sun than Neptune, but only when Pluto is near its perihelion, at which point it's well below (or well above, depending on orientation) Neptune's orbital plane.
An interesting sidebar to this is that comets that give us meteor showers are also comets that are potential earth impacters. | {
"domain": "astronomy.stackexchange",
"id": 1743,
"tags": "orbit, meteor"
} |
Decimal to binary transformation in C | Question: Input:
Enter a decimal number: 101
Output:
The binary number is : 1100101
I have solved this way->
#include <stdio.h>
int main(void){
int input=get_input();
long long complete_integer=binary_conversion(input);
reverse_digits(complete_integer);
}
int get_input() {
int number = 0;
printf("Enter a decimal number: ");
scanf("%d",&number);
return number;
}
int binary_conversion(int number) {
long long complete_integer = 0;
int base=2;
while (number != 0) {
//Take the last digit from the integer and push the digit back of complete integer
complete_integer = complete_integer * 10 + number%base;
number /= base;
}
printf("Completed integer number : %d\n",complete_integer);
return complete_integer;
}
int reverse_digits(long long complete_integer) {
int binary = complete_integer;
int last_digit = 0;
while(binary != 0) {
last_digit = last_digit * 10 + binary % 10;
binary /= 10;
}
printf("The binary number is: %d",last_digit);
}
I am here for: What do you think about my comments and indentation? Will I face any problems for some inputs? Would you propose a way to simplify my program?
Answer: Indent your code!
Most of your code is completely unindented, which makes it very hard to read.
While the compiler can easily scan your code and count { and } signs to tell where each function and code block begins and ends, for humans this is much more difficult and prone to mistakes. That's why it's a good idea to indent the lines inside each function and block, so that the structure of the code is apparent to a human eye.
Does your code really look like something you'd want to read in six months, after you've forgotten what it looks like? If not, please make it cleaner. (If you think it is, please actually try this experiment!)
Turn on compiler warnings!
Compiling your code with warnings enabled (Clang with -Weverything and --std=c17) would alert you to several serious issues in it that you should fix. These include:
the lack of prototypes for your helper functions,
the unused value parameter to binary_conversion,
the incorrect use of the %d format specifier instead of %lld for printing a long long int value,
the fact that the return value of binary_conversion is truncated from a long long int to an int, and
the fact that reverse_digits is defined to return an int even though it actually doesn't.
Actually, it's quite surprising that your code even compiles and (sort of) works with all these bugs in it.
Incorrect output:
You left in the line:
printf("Completed integer number : %d\n",complete_integer);
that causes your code to print something that it's not supposed to. A simple and minor issue, really, but possibly enough to fail your assignment.
A bigger problem is that, because you forgot to change the return type of binary_conversion to long long int, your code will produce incorrect output for inputs larger than 1023 (= 1111111111 in binary). For example, the input 1025 causes your code to generate the nonsense output:
The binary number is: 455665549
(Even if you did fix this bug, your code would still fail for inputs larger than about 2^19 − 1, since a 64-bit signed long long int cannot store a decimal number with more than 19 digits.)
An even bigger problem is that your algorithm doesn't work for even numbers at all! For example, the input 1024 (or indeed any other power of 2) causes your program to print:
The binary number is: 1
Honestly, did you even test the code at all?
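The root cause of the even-number bug is worth spelling out: the first loop packs the binary digits into a decimal number with the least significant bit first, so the trailing zeros of an even input become leading zeros of that intermediate number, where they simply cease to exist. A quick re-implementation of the same arithmetic (sketched in Python for brevity) makes this visible:

```python
# Re-implementation (in Python, for illustration only) of the reviewed
# C algorithm, to show why it silently fails for even inputs.
def buggy_convert(n):
    # Pack the binary digits of n into a decimal number, least significant bit first
    acc = 0
    while n:
        acc = acc * 10 + n % 2
        n //= 2
    # Reverse the decimal digits of that intermediate number
    rev = 0
    while acc:
        rev = rev * 10 + acc % 10
        acc //= 10
    return rev

print(buggy_convert(101))   # 1100101 -- happens to be right
print(buggy_convert(1024))  # 1 -- the ten trailing zeros of 10000000000 vanish
```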
A better solution:
The computer already stores the input number read by scanf in binary. Instead of converting that binary number to a decimal number that looks the same when printed with printf (and reversing its decimal digits twice in the process), you should instead try to find some way of printing the bits of the binary number one by one directly.
Here's a quick sketch of something that should work:
#include <stdio.h>
#include <stdint.h>
void print_binary(uint32_t number) {
uint32_t bit = (uint32_t)1 << 31;
while (bit > number && bit > 1) {
bit /= 2; // skip leading zeros
}
while (bit > 0) {
putchar(bit & number ? '1' : '0');
bit /= 2;
}
}
(Note the use of the fixed-length unsigned integer type uint32_t to avoid making unnecessary assumptions about the number of bits an int can store — which might be only 16 on some low-end non-POSIX systems — and the use of the bit shift operator << to easily calculate the constant 2^31. Also, if you're not familiar with the bitwise AND operator & or the ternary conditional operator ?…:, this could be a good time to learn about them.) | {
"domain": "codereview.stackexchange",
"id": 39567,
"tags": "c"
} |
Reactivity of thioesters with respect to nucleophilic attack | Question: Why are thioesters relatively reactive with regard to nucleophilic attack? Prof says to wait until pchem 3 when we learn about orbital symmetry. He also said that sulfur’s d-orbitals (?!) don't have the correct symmetry to participate in resonance with the carbonyl carbon.
Wait. I thought that sulfur didn’t utilize its d-orbitals. Was it previously taught that sulfur’s lone pairs were in d-orbitals? Why would sulfur even need d-orbitals when in a thioester? Can't we just explain this based on the poor overlap between carbon's 2p and sulfur's hypothetical 3p orbitals?
Answer: According to these lecture notes from Gonzaga University:
The question arises over why thioesters occupy a prominent place in metabolism while oxygen esters play a relatively minor role. One answer is thermodynamic. The sulfur in the thioester linkage is less able to participate in electron delocalization through the acyl group (the usual explanation given for this is poor overlap between the 2p orbital of carbon and the 3p orbital of sulfur), and this makes the thioester bond less stable than the ester bond. Hence, the thioester bond has a more negative standard free energy of hydrolysis (−7.5 kcal/mole vs. about −5 kcal/mole for most oxygen esters). In many cases, it appears that thioesters are more reactive than oxygen esters, undergoing more facile nucleophilic displacement reactions at the acyl group. The reactivity of a thioester at the alpha carbon compares favorably with that of ketones.
...
Because resonance structure 2c [see web page - CF] makes a more significant contribution for a thioester than an analogous structure for an ester, the acyl carbon is more positive, hence more susceptible to nucleophilic attack. An attacking nucleophile would be more readily acylated by a thioester than it would be by an ester. Furthermore, the C-S bond is weaker than the C-O bond, and the thiolate (or thiol, if protonated) is a better leaving group than alkoxide (or alcohol). These factors all make a thioester in general a better acylating agent than an ester.
So the answer doesn't seem to involve d orbitals explicitly, but instead the difference between 2p and 3p orbitals (as well as other factors). | {
"domain": "chemistry.stackexchange",
"id": 2944,
"tags": "organic-chemistry, resonance"
} |
What do operations on single Qubits of Unfactorable Superpositions Do? | Question: So suppose I have the following Quantum Circuit:
A
---- |Control| -----|Hadamard|----
B
---- |xxxxxxx|------------------------
Which is a 2 input Controlled Gate (applying some gate of two choices to Qubit B, depending on the value of Qubit A) followed by a single Hadamard Gate acting on Qubit A.
Initially the Qubits are in states
$$a_0\left| 0 \right> + a_1\left| 1 \right> $$
$$b_0\left| 0 \right> + b_1\left| 1 \right> $$
Respectively. So the combined system is in state
$$ a_0 b_0 \left| 00 \right> +a_0 b_1 \left| 01 \right> + a_1 b_0 \left| 10 \right> + a_1 b_1 \left| 11 \right>$$
At the beginning.
After the application of the controlled Gate, the combined superposition can easily be in a state that CANNOT be factored into a tensor product of two states. Any superposition of the form
$$ q_0 \left| 00 \right> +q_1 \left| 01 \right> + q_2 \left| 10 \right> + q_3 \left| 11 \right>$$
Where $q_0/q_1 \ne q_2/q_3$ for example couldn't be factored into a tensor product.
But now when we apply that Hadamard gate, it is applied onto a single Qubit. What is it doing then? Given that the "state" of a single qubit cannot be independently factored and considered, how does the hadamard gate now affect the state of system?
How this is different than:
Help on applying a Hadamard gate and CNOT to two single q-bits
In the linked question, we are dealing with a factorable state, that then is given a CNOT transform. That computation is obvious since the factorable state (post Hadamard) can be expressed as:
$$a_0\left| 0 \right> + a_1\left| 1 \right> $$
$$b_0\left| 0 \right> + b_1\left| 1 \right> $$
yielding superposition state:
$$ a_0 b_0 \left| 00 \right> +a_0 b_1 \left| 01 \right> + a_1 b_0 \left| 10 \right> + a_1 b_1 \left| 11 \right>$$
Which can now be easily computed with the $4 \times 4$ CNOT operator.
In our question we go the other way. We start off with the application of a $4 \times 4$ controlled operator to generate an entangled superposition
$$ q_0 \left| 00 \right> +q_1 \left| 01 \right> + q_2 \left| 10 \right> + q_3 \left| 11 \right>$$
And now am attempting to determine how the behavior of a gate acting on a SINGLE Qubit affects the whole superposition.
The link is irrelevant here since our system is no longer factorable as a tensor product of independent superpositions.
What I'm asking can be summarized succinctly as: How can I write a single Qubit operator $O$ (given as a $2 \times 2$ matrix) as a multiQubit operator $O'$ (given as a $2^k \times 2^k$ matrix) that acts as the identity on all inputs except the first where it acts as $O$ traditionally would.
To that end, the question offers no hint of how to go about it.
Work so far
My intuition suggests given the system:
$$ q_0 \left| 00 \right> +q_1 \left| 01 \right> + q_2 \left| 10 \right> + q_3 \left| 11 \right>$$
We can artificially believe that the first qubit is in the state
$$ (q_0 + q_1) \left| 0 \right> + (q_2 + q_3) \left| 1 \right> $$
And that the entire superposition is in:
$$ (q_0 + q_1) \frac{q_0}{q_0 + q_1}\left| 00 \right> +(q_0 + q_1) \frac{q_1}{q_0 + q_1} \left| 01 \right> + (q_2 + q_3) \frac{q_2}{q_2 + q_3} \left| 10 \right> + (q_2 + q_3) \frac{q_3}{q_2 + q_3} \left| 11 \right>$$
So when we apply the Hadamard to the Qubit we send:
$$ (q_0 + q_1) \left| 0 \right> + (q_2 + q_3) \left| 1 \right> $$
To
$$ \frac{q_0 + q_1+q_2 + q_3}{\sqrt{2}} \left| 0 \right> + \frac{q_0 + q_1-q_2 - q_3}{\sqrt{2}} \left| 1 \right> $$
And thus the entire system now is in:
$$ \frac{q_0 + q_1+q_2 + q_3}{\sqrt{2}} \frac{q_0}{q_0 + q_1}\left| 00 \right> +\frac{q_0 + q_1+q_2 + q_3}{\sqrt{2}} \frac{q_1}{q_0 + q_1} \left| 01 \right> + \frac{q_0 + q_1-q_2 - q_3}{\sqrt{2}} \frac{q_2}{q_2 + q_3} \left| 10 \right> + \frac{q_0 + q_1-q_2 - q_3}{\sqrt{2}} \frac{q_3}{q_2 + q_3} \left| 11 \right>$$
But I'm not sure why I would rigorously believe this.
(Renormalize where necessary)
Answer:
Given that the "state" of a single qubit cannot be independently factored and considered, how does the hadamard gate now affect the state of system?
To apply a single-qubit operation $M$ to an n-qubit system you hit the system with $I \otimes M \otimes I \otimes ... \otimes I$. The position of $M$ within that tensor product determines which qubit you hit.
You can also use simple equivalent rules, like these:
Group the states by the uninvolved qubits.
$|\psi\rangle = a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩$
$= \Big(a|0⟩ + c|1⟩\Big)|0⟩ + \Big(b|0⟩ + d|1⟩\Big)|1⟩$
Apply the operation within each group.
$H_0 |\psi\rangle = \Big(H(a|0⟩ + c|1⟩)\Big)|0⟩ + \Big(H(b|0⟩ + d|1⟩)\Big)|1⟩$
$= \Big(\frac{a+c}{\sqrt 2}|0⟩ + \frac{a-c}{\sqrt 2}|1⟩\Big)|0⟩ + \Big(\frac{b+d}{\sqrt 2}|0⟩ + \frac{b-d}{\sqrt 2}|1⟩\Big)|1⟩$
Ungroup
$=\frac{a+c}{\sqrt 2}|00⟩ + \frac{b+d}{\sqrt 2}|01⟩ + \frac{a-c}{\sqrt 2}|10⟩ + \frac{b-d}{\sqrt 2}|11⟩$
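The grouping rules can also be checked numerically; here is a quick sketch in Python/NumPy (assuming the qubit ordering $|q_0 q_1\rangle$, so that a Hadamard on the first qubit means applying $H \otimes I$):

```python
import numpy as np

# An arbitrary, generally entangled 2-qubit state a|00> + b|01> + c|10> + d|11>
a, b, c, d = 0.1, 0.2, 0.3, np.sqrt(1 - 0.1**2 - 0.2**2 - 0.3**2)
psi = np.array([a, b, c, d])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Hadamard on the first qubit of |q0 q1>: tensor H with the identity
psi_out = np.kron(H, np.eye(2)) @ psi

# Group/apply/ungroup gives the same amplitudes
expected = np.array([a + c, b + d, a - c, b - d]) / np.sqrt(2)
assert np.allclose(psi_out, expected)
```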
Sometimes it makes sense to just keep the grouping around. For example, when the system is spread across two parties you might as well keep it laid out in a grid where columns are the Alice groups and rows are the Bob groups. Gives you a kind of "matrix of amplitudes", which is convenient because Alice's operations correspond to left-multiplying that matrix and Bob's operations correspond to right-transpose-multiplying it. | {
"domain": "physics.stackexchange",
"id": 30116,
"tags": "quantum-information, wavefunction, quantum-computer, superposition"
} |
What is the motivation behind restore model of computation? | Question: The memory that stores the input is called the input memory. The memory that an algorithm additionally occupies during the computation is called the working memory.
$\textit{Model of Computations}$
Word RAM : This a very well known model of computation.
Read-only word RAM: Where input memory is assumed to be read-only.
Permuted word RAM: Allow input memory to be permuted but not destroyed.
Restore model: This is a variant of permuted word RAM model, where the input memory is allowed to be modified during the process of answering a query, but it has to be restored to its original state afterward.
I know why the word RAM is an interesting model of computation in many cases, but I have no idea what the motivation is behind studying problems in models 3 and 4. Let us take the example of DFS (depth-first search). I know that the problem has been studied in the word RAM model, and I also know its space complexity in that model. I recently came across one paper in which DFS (see this) has been studied in the 4th model of computation. The result is that DFS and BFS can be performed in-place in linear time.
Question: What is the motivation behind restore model of computation?
Answer: The restore model is semantically identical to the read-only model. Both models are natural since you expect operations on objects to not alter them. For example, imagine running a query on a database. You don't want the query to modify the database, but you might not mind if the database is temporarily changed while running the query, as long as it is restored to its original state in the end.
(In fact, in this particular case you might care, for many reasons: perhaps several processes access the database in parallel; or perhaps you are worried that a failure will stop the process in the middle, thus leaving the database in a modified state. These are reasons why you'd sometimes prefer the read-only model.)
The restore model is more powerful than the read-only model, in the sense that it potentially requires less auxiliary memory. In recent times, this model is known as catalytic computation. See for example Koucký's expository article, which provides further motivations for this model. | {
"domain": "cs.stackexchange",
"id": 13111,
"tags": "algorithms, computation-models"
} |
Isn't $E-U = K$ in Schrodinger's Equation? | Question: I'm studying quantum mechanics at its most basic level (I don't even know if physicists would already call this quantum mechanics) and I have one doubt about Schrodinger's equation. The book presents the equation for the special case where the solution is of the form $\Psi(x,t)=\psi(x)e^{-i\omega t}$ and says that Schrodinger's equation (in one dimension) is
$$\dfrac{d^2\psi}{dx^2}+\dfrac{8\pi^2 m}{h^2}(E-U(x))\psi(x)=0$$
Where $m$ is the mass of the particle, $E$ it's total energy and $U$ it's potential energy function.
The first doubt that arises is the following: the book says that $E$ is the "constant total energy" of the particle. But wait, since $E = K + U$ and $U$ varies, clearly $E$ should vary. How can $E$ be constant if $U$ is not?
Also, when we write $E-U(x)$, isn't this simply $K$, the kinetic energy? Why do we bother then writing $E-U$ explicitly?
I feel that the potential $U(x)$ on the equation and the one that is part of $E$ are different, but I'm not understanding how.
Answer: Note that the equation you give can be trivially restated as
$$\left[-\frac{\hbar^2}{2m}\frac{\mathrm{d^2}}{\mathrm{d}x^2}\right]\psi(x) = \left[E-U(x)\right]\psi(x)\text{,}$$
where $\hbar = h/2\pi$. So if in this situation we interpret kinetic energy as the difference between total and potential energy, this tells us that the kinetic energy operator must in the position basis be:
$$\hat{T} = -\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2}\text{.}$$
We can also define a potential energy operator, which in this basis is just trivial multiplication: $\hat{U}\left[\psi(x)\right] = U(x)\psi(x)$.
The first doubt that arises is the following: the book says that $E$ is the "constant total energy" of the particle. But wait, since $E = K + U$ and $U$ varies, clearly $E$ should vary. How can $E$ be constant if $U$ is not?
What the books means by "constant total energy" is that the total energy has a definite value, i.e., the given wavefunction is an eigenstate of the total energy:
$$\left[\hat{T} + \hat{U}\right]\psi = E\psi\text{.}$$
If one measures the energy of the particle correctly, one will get this definite value $E$. This won't happen for either the kinetic or potential energies, except in some trivial cases: measuring them will give a random eigenvalue of the corresponding operators.
However, given an energy eigenstate as we have here, the kinetic and potential energies are time-independent in the sense that the probability distribution of the possible measurement results does not vary in time. The same holds for any observable that's not explicitly dependent on time.
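A standard concrete illustration of this point (a textbook example, not specific to the book being quoted) is the harmonic-oscillator ground state:

$$U(x) = \tfrac{1}{2}m\omega^2 x^2, \qquad \psi_0(x) \propto e^{-m\omega x^2/2\hbar}, \qquad \hat{H}\psi_0 = \tfrac{1}{2}\hbar\omega\,\psi_0.$$

Here the total energy has the definite value $E_0 = \hbar\omega/2$, but $\psi_0$ is an eigenstate of neither $\hat{T}$ nor $\hat{U}$: individual measurements of kinetic or potential energy scatter, even though their expectation values are constant in time and, by the virial theorem, satisfy $\langle\hat{T}\rangle = \langle\hat{U}\rangle = \hbar\omega/4$.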
Also, when we write $E-U(x)$, isn't this simply $K$, the kinetic energy? Why do we bother then writing $E-U$ explicitly?
That wouldn't make sense. If we simply write $K$ as a constant in the equation you give, that would suggest that the kinetic energy takes on a definite value, and this is not true for the wavefunction given. To have a definite value of kinetic energy, the state $\psi$ must be an eigenstate of kinetic energy, $\hat{T}\psi = K\psi$. | {
"domain": "physics.stackexchange",
"id": 13919,
"tags": "quantum-mechanics, schroedinger-equation"
} |
Tunneling for hydrogen fusion | Question: So I'm trying to find a rough estimate of the temperature required for Hydrogen to turn into Helium through quantum tunneling. In the lecture we were presented with the following:
$$\frac{p^{2}}{2\mu}=\frac{Z_{1}Z_{2}e^{2}}{r}$$
For $p=h/\lambda$ and $\lambda\approx r$ we get:
$$\lambda=\frac{h^{2}}{Z_{1}Z_{2}e^{2}2\mu}$$
Now $$(3/2)k_{B}T=\frac{Z_{1}Z_{2}e^{2}}{\lambda}$$ and if I replace $\lambda$ I get:
$$T=\frac{4\mu Z_{1}^{2}Z_{2}^{2}e^{4}}{3h^{2}k_{B}}$$
If I now replace the constants for the p-p cycle ($Z_{1}=Z_{2}=1, \mu=m_{p}/2$) I get a number which is totally wrong and not the $T=10^{7}K$ that is expected. Can anyone tell me where the mistake is?
Answer: You are doing your calculations not entirely in the SI system, where the Coulomb potential is $$\frac{1}{4\pi\varepsilon_0}\frac{Z_1Z_2 e^2}{r}.$$
You can see that your $\lambda$ doesn't have units of length, but rather something like $\text{m}^2\cdot\text{kg}\cdot\text{s}^{-4}\cdot\text{A}^{-2}$. Luckily, $\varepsilon_0$ has units of $\frac{\text{F}}{\text{m}}=\text{m}^{-1}\cdot\text{kg}^{-1}\cdot\text{s}^{4}\cdot\text{A}^{2}$ so that obviously fixes the units.
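As a quick numeric sanity check of that correction (a sketch using CODATA-rounded SI constants; small deviations are just rounding):

```python
# SI constants (rounded)
m_p = 1.67262e-27   # proton mass [kg]
mu  = m_p / 2       # reduced mass of the p-p system [kg]
e   = 1.60218e-19   # elementary charge [C]
h   = 6.62607e-34   # Planck constant [J s]
k_B = 1.38065e-23   # Boltzmann constant [J/K]
k_e = 8.98755e9     # Coulomb constant 1/(4 pi eps0) [N m^2 C^-2]

# T = 4 mu Z1^2 Z2^2 e^4 / (3 h^2 k_B), with Z1 = Z2 = 1 and the
# missing SI factor (1/(4 pi eps0))^2 restored
T = 4 * mu * e**4 * k_e**2 / (3 * h**2 * k_B)
print(f"T = {T:.3g} K")  # ~ 9.79e6 K
```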
If you include a factor of $(\frac{1}{4\pi\varepsilon_0})^2$ (since you use the non-SI Coulomb energy twice) you get $9.79 * 10^6 \text{K} $, which is what you want to have. | {
"domain": "physics.stackexchange",
"id": 37727,
"tags": "temperature, sun, fusion, quantum-tunneling, nucleosynthesis"
} |
Are all T Tauris on the Hayashi track? | Question: Reading the definitions of T Tauri stars and the Hayashi track one can gather that:
T Tauri stars are pre-main-sequence stars in the process of contracting to the main sequence along the Hayashi track
and that:
After a protostar ends its phase of rapid contraction and becomes a T Tauri star, it is extremely luminous. The star continues to contract, but much more slowly. While slowly contracting, the star follows the Hayashi track
Can we then say that all stars on the Hayashi track are T Tauri stars? And moreover, are all T Tauris on the Hayashi track?
Answer: Very much a case of semantics and what exactly you mean by a T Tauri star.
If a requirement is for the star to have a significant disc, then no, a fair fraction of young stars, particularly those of low mass, have lost their discs before they reach the end of the Hayashi track. The exponential decay timescale for losing a disc is around 3 Myr. The timescale to get to the bottom of the Hayashi track can be 10-100 Myr for stars of 0.5-0.1 solar masses.
If the definition encompasses both "classical" T Tauri stars and "weak-lined" T Tauri stars (those without an active accretion disc), then yes, all young stars on the Hayashi track are T-Tauri stars.
Edit: I realised I hadn't completed the answer. It is also true that many higher mass T-Tauri stars (more than 0.5 solar masses) have left the Hayashi track and still have discs whilst they are on their radiative Henyey tracks. | {
"domain": "astronomy.stackexchange",
"id": 6688,
"tags": "star, stellar-evolution, t-tauri-stars"
} |
Spawning a cube in gazebo | Question:
I tried to spawn a cube in gazebo with the following URDF.
<robot name="simple_box">
  <link name="my_box">
    <inertial>
      <origin xyz="2 0 0" />
      <mass value="1.0" />
      <inertia ixx="1.0" ixy="0.0" ixz="0.0" iyy="100.0" iyz="0.0" izz="1.0" />
    </inertial>
    <visual>
      <origin xyz="0.7 0 0.56"/>
      <geometry>
        <box size="0.02 0.02 0.02" />
      </geometry>
    </visual>
    <collision>
      <origin xyz="0.7 0 0.56"/>
      <geometry>
        <box size="0.02 0.02 0.02" />
      </geometry>
    </collision>
  </link>
  <gazebo reference="my_box">
    <material>Gazebo/Blue</material>
  </gazebo>
</robot>
with the following entry in my launch file:
<!-- spawn a cube for pick and place -->
<node name="spawn_cube" pkg="gazebo_ros" type="spawn_model" args="-file $(find ur5_pnp)/urdf/object.urdf -urdf -model cube" />
But I see some strange behavior of the cube as it never stops moving.
https://youtu.be/HPc2xJXtPhc
I am guessing it has something to do with inertia / mass?
Originally posted by ipa-hsd on ROS Answers with karma: 150 on 2016-06-06
Post score: 0
Original comments
Comment by JohnDoe2991 on 2016-06-06:
Should your mass origin and visual/collision origin not be the same? I'm not an expert with URDF, I always solve this by try and error. But this movement looks like your mass center is wrong.
Comment by ipa-hsd on 2016-06-06:
Perfect! Works fine
Comment by longforgotten on 2016-06-06:
@JohnDoe2991 not always. But for the cube, you're right.
Comment by longforgotten on 2016-06-06:
@intelharsh, your inertia tensor is miles away from a real tensor, given that your cube mass is 1 kg.
You can use this xacro to compute a viable inertia tensor to the cube.
Comment by ipa-hsd on 2016-06-07:
@emersonfs can I find a documentation somewhere regarding this?
Comment by ipa-hsd on 2016-06-07:
@emersonfs I used the macro you mentioned and it seems to work fine, though I'll need to read about it to understand what's happening.
Comment by longforgotten on 2016-06-07:
This xacro file implements some known moment of inertia tensors for certain shapes. To understand why inertia tensors are so importants to physics, you can read the chapter 6 "Manipulator dynamics", section 3 "Mass distribution", from Craig's book, "Introduction to Robotics".
Comment by ipa-hsd on 2016-06-08:
@emersonfs thanks! Will go through it
Answer:
I can confirm from a lot of tests that the ratio between mass and inertia is crucial in the Gazebo physics.
I had a problem with my robot collapsing, it also ended up being caused by inertia issues.
The answer here covers a lot of solutions.
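For reference, a uniform solid cube of mass m and side s has moment of inertia m * s^2 / 6 about each axis through its centre; for the 1 kg, 0.02 m cube above, that is roughly 6.7e-5 kg m^2. A plausible inertial block (with the inertial origin matched to the visual/collision origin, as suggested in the comments) might therefore look like:

```xml
<inertial>
  <origin xyz="0.7 0 0.56" />
  <mass value="1.0" />
  <inertia ixx="6.7e-5" ixy="0.0" ixz="0.0" iyy="6.7e-5" iyz="0.0" izz="6.7e-5" />
</inertial>
```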
Originally posted by Laurens Verhulst with karma: 167 on 2016-06-06
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 24830,
"tags": "gazebo"
} |
Return row count from mysqli prepared statement | Question: I want to know if it is okay/safe to use a prepared statement and mysqli_num_rows like this:
public function getUnreadNumber() {
$userLoggedIn = $this->user_obj->getUsername();
// get_result for * and bind for select columns
// bind_result Doesn't work with SQL query that use *
$query = $this->con->prepare('SELECT * FROM notifications WHERE viewed="0" AND user_to = ? ');
$query->bind_param("s", $userLoggedIn);
$query->execute();
$query_result = $query->get_result();
return mysqli_num_rows($query_result);
}
Or should I do this?
$query = $this->con->prepare('SELECT * FROM notifications WHERE viewed="0" AND user_to = ? ');
$query->bind_param("s", $userLoggedIn);
$query->execute();
$query_result = $query->get_result();
$numRows = $query_result->num_rows;
return $numRows;
Answer: You should definitely not be mixing procedural and object-oriented syntax.
Although it works with un-mixed syntax, the process is working harder than it needs to and delivering more result set data than you intend to use.
I would add COUNT(1) or COUNT(*) to the sql like this:
$sql = "SELECT COUNT(1) FROM notifications WHERE viewed='0' AND user_to = ?";
$query = $this->con->prepare($sql);
$query->bind_param("s", $userLoggedIn);
$query->execute();
$query->bind_result($count);
$query->fetch();
return $count;
Assuming the sql is not broken due to syntax errors, this will always return a number. | {
"domain": "codereview.stackexchange",
"id": 38904,
"tags": "php, comparative-review, mysqli"
} |
Spinors, Spacetime and Clifford algebra | Question: I'm looking to understand the intrinsic connection that Clifford algebra allows one to make between spin space and spacetime. For a while now I've trying to wrap my head around how the Clifford algebra fits into this story, with the members of my department consistently telling me "not to worry about it". However, I think there's something deep to be uncovered.
The gamma matrices present in the Dirac equation generate a Clifford algebra:
$\{\gamma^{\nu}, \gamma^{\mu}\} = 2\eta^{\nu \mu} I$. It is argued in the gamma matrices wikipedia page that this algebra is the complexification of the spacetime algebra: $Cl_{1,3}(\mathcal{C})$ is the complexification of $Cl_{1,3}(\mathcal{R})$. The answer given here (What is the role of the spacetime algebra?) would seem to suggest that this complex structure falls out naturally from the decomposition into degrees of $Cl_{1,3}(\mathcal{R})$. Is this the case?
Furthermore, is it the case that one can then use the gamma matrices that generate $Cl_{1,3}(\mathcal{C})$ to form the Lie Algebra of the Lorentz group, which to me gives the picture that these constructions in spin space can form spacetime transformations (as outlined here: Relation between the Dirac Algebra and the Lorentz group)?
Essentially (I think) the question I'm asking is: does the Clifford algebra encapsulate some global space to which both spacetime and the space of spinors belong, and if so, how do the gamma matrices present in the Dirac equation respect and link these two spaces? We're dealing with algebras, but does the isomorphism $SO(1,3)$~$SU(2)$ x $SU(2)$ come into play here?
I'm not a mathematician by trade, but I think technical answers will naturally come into play here - if people could try and hold onto some physical intuition it would be much appreciated.
Best wishes to all.
Answer:
The reason we usually complexify the Clifford algebra is mostly convenience: The representation theory of complex algebras is simpler in general, and if we want to restrict to real representations for some reason later on we can always do that. In particular, Dirac spinors at least exist in all dimensions, while the "real" Majorana spinors are dependent on the number of dimensions and even on the signature (depending on what, exactly, you mean by "Majorana"), see also this Q&A of mine.
The second degree of the Clifford algebra (complex or real doesn't matter here) is isomorphic as a Lie algebra to the Lorentz algebra (or, in the generalized version, the generalized Clifford algebra for a metric $\eta$ has the isometry algebra for that metric as its second degree). It is not the "gamma matrices" (= generators of the Clifford algebra, hence in particular first degree elements of it) that generate the Lorentz algebra, but their commutators $\sigma^{ij} = [\gamma^i, \gamma^j]$. (It may be that you are already aware of this, but this is a common point of confusion)
I'm not quite sure what your "does the Clifford algebra encapsulate some global space of which spacetime and the space of spinors belong" question is trying to ask, but let me point out that four dimensions - where one could identify the first degree of the Clifford algebra with both spacetime and the four-dimensional Dirac spinors - are an "accident". The Dirac spinor representation in $d$ dimensions is $2^{\lfloor d/2 \rfloor}$-dimensional, which you cannot identify with the $d$-dimensional first degree of the algebra in most other dimensions. Therefore, the Clifford algebra does not, in a general sense, "contain" spinors.
Lastly and most tangentially, there is no isomorphism $\mathrm{SO}(1,3)\cong \mathrm{SU}(2)\times \mathrm{SU}(2)$, regardless of how often you will read this lie in physics-oriented texts. See e.g. this answer by Qmechanic and the linked questions for details on the relation between the two groups and their algebras. The nutshell is that $\mathrm{su}(2)\oplus\mathrm{su}(2)$ is the compact real form of the complexification of $\mathrm{so}(1,3)$, hence the complex finite-dimensional representations of these algebras are equivalent, hence the projective representations of the group $\mathrm{SO}(1,3)$ are given equivalently by $\mathrm{su}(2)\oplus\mathrm{su}(2)$ representations. (For why projective representations matter, see this Q&A of mine) | {
"domain": "physics.stackexchange",
"id": 98402,
"tags": "spacetime, lorentz-symmetry, dirac-equation, dirac-matrices, clifford-algebra"
} |
rospy Subscriber object not getting destroyed? | Question:
This just came up in an assignment that I'm grading, and I can't figure out why it works. In the constructor of a class, there is:
self.sub = rospy.Subscriber('/odom', Odometry, self.seek_goal)
followed immediately by
self.sub = rospy.Subscriber('/base_scan', LaserScan, self.laser_agg)
My understanding is that, when self.sub is reassigned, the original Subscriber should go out of scope, get destroyed, and get garbage-collected. However, both callbacks seem to run happily for the duration of the node's existence.
Any Python gurus out there with any insight? I could guess about the object not getting reaped, but it should still go out of scope (and get destroyed), right?
Originally posted by Bill Smart on ROS Answers with karma: 1263 on 2012-11-16
Post score: 1
Answer:
After looking at rospy's source code, it looks like no __del__ method is implemented for subscribers. That means, topics are just not cleaned up when a subscriber object is destroyed. The user needs to manually unregister topics.
I'm not sure if this was intended behavior but I would consider filing an enhancement ticket.
Originally posted by Lorenz with karma: 22731 on 2012-11-18
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by joq on 2012-11-24:
I recently noticed a similar issue with rospy service advertisements. | {
"domain": "robotics.stackexchange",
"id": 11779,
"tags": "rospy"
} |
Gaia Gbp - Grp: why does it get larger as star gets redder? | Question: As an enthusiastic amateur scientist, I'm plowing my way through a cool Gaia DR2-based paper, "An Empirical Measurement of the Initial-Final Mass Relation with Gaia White Dwarfs". I'm doing all right, but have hit a roadblock: understanding the meaning of the stellar color value GBP - GRP.
GBP is the blue photometry value, and GRP is the red photometry value, so GBP - GRP is the difference between the two, giving a number broadly encoding the star's color. Here's a graph from the paper that uses this value:
It's a graph of white dwarf ("WD") color versus absolute magnitude. On the left, the absolute magnitude MG increases (downwards) as the star grows dimmer, and across the bottom the color GBP - GRP increases (rightwards) as the star grows redder. Hence the slope of the curve; as a WD cools, it gets smoothly dimmer and redder.
But wait! As a star gets redder, the "blue" brightness should become relatively smaller than the "red" brightness, right? So, the GBP - GRP value should get smaller and smaller, right? But that's the opposite of what this graph (and other graphs in the paper) apparently shows.
Should GBP - GRP get larger as a star cools and gets redder? Where am I confused? (I'm assuming the professionals aren't all confused...)
Answer: But you demonstrate that you know the answer to this question! Fainter objects have larger magnitudes. So as the white dwarf becomes cooler and redder, its blue magnitude $(G_{BP})$ grows by more than its red magnitude $(G_{RP})$ and the colour-index $(G_{BP} - G_{RP})$ becomes larger. This is very similar to the commonly used $B-V$ colour, which is larger for cooler stars. | {
"domain": "astronomy.stackexchange",
"id": 6273,
"tags": "temperature, white-dwarf, gaia, absolute-magnitude"
} |
Why should tf2_ros::TransformBroadcaster be defined as static? | Question:
Hello everyone, this is a followup question to this one.
To cut a long story short, I found out (the hardway!) that if I do not use a static/single instance of broadcaster, the turtles will act erratically, going in circles.
If I only use a static instance, it works just fine. I read the tutorials about tf2, but there is no explanation about this.
Why is that? Could someone kindly explain whats happening here?
Thanks a lot in advance
Originally posted by Rika on ROS Answers with karma: 72 on 2021-07-07
Post score: 2
Answer:
Why should tf2_ros::TransformBroadcaster be defined as static?
the answer to this would be: it doesn't need to be.
But this seems to be the longer version of the question you include at the end of #q381845:
I'd be grateful if someone could explain why it has to be one instance of the broadcaster for this to work and what happens when its not. (why the eratic behavior?)
This is most likely not "special" to TF/TF2, but an instance of the creating-publishers-in-short-lived-scopes anti-pattern.
This has been discussed many times on ROS Answers.
As a summary: it takes time to set up pub-sub connections between nodes. If you (re)create your publishers at high rates, you're essentially forcing your subscribers to (re)subscribe at that same rate, or risk losing messages. As in most cases it will be impossible to (re)subscribe fast enough, messages are lost and all sorts of "unexplainable things" start to happen.
tf2_ros::TransformBroadcaster creates ros::Publishers internally, so what I wrote above also applies to it.
If I only use a static instance, it works just fine.
that's because variables declared static are initialised (ie: created) only once (for a function-local static, the first time control passes over the declaration) and then live until the program exits. This will immediately solve the problem, as now subscribers have all the time they need to set up the subscription connections (or: as long as your node is running).
While static works (in this case), it's not too nice, and it's also not needed.
All you need to do is make sure you create your tf2_ros::TransformBroadcaster in a scope which lives longer than that of the callback which is using it. For the code you show in #q381845, that would basically mean the scope of main(..) (as you are using lambdas to implement your callbacks). In more traditional code, you'd make the broadcaster a member variable of a class, or keep it in global scope (although that comes with its own disadvantages).
Edit: the comments in #q381845 are also not completely correct:
//Its very important that broadcaster is shared between all turtles
//that is if you define broadcaster in this function/it must be static
//otherwise, you will endup with multiple broadcasters, sending coords
// which will mess everything up. previously I forgot about this
as I now assume you understand, there is no real requirement to only have a single instance, nor must it be shared necessarily. It's perfectly possible to have multiple broadcasters in a single process.
What is important is to make sure it has a sane lifecycle (ie: it does not get created and destroyed every few milliseconds).
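To make the lifecycle point concrete, here is a minimal Python sketch. The Broadcaster class below is a stand-in that only counts constructions (it is not the real tf2_ros API), but it shows the difference between a per-callback local and a long-lived member variable:

```python
class Broadcaster:
    """Stand-in for tf2_ros.TransformBroadcaster: it only counts how
    often it is constructed (the real one sets up a publisher here)."""
    constructions = 0

    def __init__(self):
        Broadcaster.constructions += 1

    def send_transform(self, tf):
        pass  # the real broadcaster would publish on /tf here

# Anti-pattern: a fresh broadcaster (hence a fresh publisher) per callback
def bad_callback(tf):
    Broadcaster().send_transform(tf)

# Sane lifecycle: one broadcaster created once, outliving every callback
class TurtleNode:
    def __init__(self):
        self.broadcaster = Broadcaster()

    def callback(self, tf):
        self.broadcaster.send_transform(tf)

for tf in range(3):
    bad_callback(tf)      # 3 constructions: subscribers must keep reconnecting
node = TurtleNode()
for tf in range(3):
    node.callback(tf)     # 1 construction, reused on every call
print(Broadcaster.constructions)  # 4
```

In a real node, each extra construction also means tearing down and re-creating the underlying publisher, which is exactly what forces subscribers to reconnect.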
Originally posted by gvdhoorn with karma: 86574 on 2021-07-07
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Rika on 2021-07-07:
Thanks a lot. really appreciate your kind and thorough explanation :)
yes, I'll update my previous comment there as well and add this as a reference there too. | {
"domain": "robotics.stackexchange",
"id": 36659,
"tags": "ros, transform, ros-kinetic, tf2"
} |
How can I compare two different neural networks, from a theoretical point of view? | Question: Let's say I have a problem (i.e. Given f(x), find x) and two neural networks (i.e. feedforward and recurrent). I would like to know if one works better than the other one. I could run the two on a computer, but other programs might interfere and I wouldn't know if the implementations I'm running are really the best ones humankind could create. Moreover, how could I be sure that the feedforward network worked better than the recurrent, when it might have just been "lucky"?
So, here is the question: can I compare the efficiency of two neural networks (with known sizes, structures and functions) from a theoretical point of view? And if the answer is yes, how?
Thank you in advance.
Answer: Essentially, no. The only way to know which neural network is going to give you better accuracy is to try them on a realistic data set. The theory we have is not well-enough developed to allow us to reliably predict which will do better on a particular data set.
A secondary remark. When you remark "other programs might interfere", that's not correct. Even if other programs are running (on a multi-tasking machine), they won't affect the accuracy. They might make the process of running or training the neural network on your data set take longer, but they won't affect the results.
"domain": "cs.stackexchange",
"id": 7459,
"tags": "machine-learning, artificial-intelligence, neural-networks"
} |
Complexity in creating transgenic animals (e.g., mice) | Question: Many papers I have seen describing transgenic rodent models (and presumably applicable to other model organisms) involve the knock-in, or modification to, a single gene, possibly two genes. With respect to recombineering techniques, what prevents targeting multiple genes in a single organism? For instance, if I wanted to simultaneously knock-in some genes and knock-out others within the same mouse, would I be forced to generate individually modified transgenic lines and then do some "fancy" breeding to generate the multiple-modified mice?
Answer: One reason is the low likelihood of success. Modifying a gene almost always involves a recombination event of plasmid DNA with a target site in the genome (and I say almost just because there may be some method that I don't know about, but all the ones I'm familiar with do). The likelihood of that decreases exponentially with the number of genes you're trying to modify. If you're trying to make several mutants of individual genes the likelihood of success decreases only linearly.
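The exponential-versus-linear contrast can be made concrete with toy numbers (the per-gene success rate below is made up purely for illustration):

```python
# Toy numbers only: p is a hypothetical per-gene targeting success rate.
p = 0.001          # made-up probability that one recombination event succeeds
genes = 3

# All genes modified in one animal: success probabilities multiply,
# so the expected number of attempts grows exponentially with the gene count.
p_all_at_once = p ** genes
attempts_all_at_once = 1 / p_all_at_once

# One mutant line per gene (then crossing): expected attempts grow linearly.
attempts_one_by_one = genes * (1 / p)

print(attempts_one_by_one, attempts_all_at_once)
```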
Another reason is having more knowledge and experimental power. You can learn little from a double mutant if you don't also have the individual mutants to compare. In fact, most reviewers would ask for individual mutant data if you've made a double mutant in your paper. This is especially true with flies and worms, as crosses take less time with them.
Also, the more mutant genes you have, the weaker the animal. Your mutants may not be viable at all with too many mutations. | {
"domain": "biology.stackexchange",
"id": 4430,
"tags": "dna, gene-expression"
} |
Why is this algorithm $O(n^3)$? | Question: In a programming book that I'm currently reading it's stated that
$$\sum\limits_{i=1}^{n}i^2$$ is $O(n^3)$. My understanding was that $i\times i$ is a primitive operation and the complexity would be $O(n)$. What am I missing?
Answer: The expression
$$ \sum_{i=1}^n i^2 $$
is not an algorithm. It is just a number. In fact, what the book really means is the function
$$ n \mapsto \sum_{i=1}^n i^2. $$
Since
$$ \sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{2n^3+3n^2+n}{6}, $$
we see that this sum is indeed $O(n^3)$, in fact, even $\Theta(n^3)$.
We often use big O notation in analyzing the running time of an algorithm. Quicksort runs in time $O(n\log n)$ (on average) since the function $T(n)$ measuring its average running time on inputs of length $n$ is $O(n\log n)$ — the function $T(n)$, not the algorithm itself. | {
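A quick numerical check of both the closed form and the cubic growth:

```python
def sum_of_squares(n):
    """The quantity in question: 1^2 + 2^2 + ... + n^2."""
    return sum(i * i for i in range(1, n + 1))

# Matches the closed form n(n+1)(2n+1)/6 exactly ...
for n in (1, 10, 1000):
    assert sum_of_squares(n) == n * (n + 1) * (2 * n + 1) // 6

# ... and grows like n^3: the ratio to n^3 settles near 1/3
print(sum_of_squares(10**4) / 10**12)  # about 0.33338
```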
"domain": "cs.stackexchange",
"id": 2994,
"tags": "time-complexity"
} |
Is this interpretation of Bell's theorem correct? | Question: A popular interpretation of the Bell's theorem is like this:
Take a set of objects with three boolean properties, say, $x$, $y$, and $z$.
For each object, randomly pick two of the three properties and compare their values.
Classical objects will show that with probability (at most) $\frac{2}{3}$ these two randomly picked properties will have the same values. This part is usually clearly explained in most of the interpretations I have read.
But when it comes to the quantum objects, it is often explained like this:
“The punchline is that according to the hidden stamps idea, the odds of both boxes flashing the same color when we open the doors at random is at least 55.55%. But according to quantum mechanics, the answer is 50%. The data agrees with quantum mechanics, and it rules out the ‘hidden stamps’ theory”.
Where do these 50% come from?
I suspect the author means that measuring a property of a quantum particle changes the property, so reading two boolean properties of the same particle is like flipping two coins - with the probability of $\frac{1}{2}$ there will be two heads or two tails.
This contradicts my impression that measuring the same property twice gives the same result (but kind of resets the other properties). At least, this is how it is explained in Susskind's "Quantum Mechanics: The Theoretical Minimum".
Are there any mistakes in the interpretation?
What are my mistakes?
What, in your opinion, is the best illustration of Bell's theorem, that does not involve too much math but helps to understand what is the theorem about?
Answer: Let me remark that this is not the typical setting of Bell inequalities. In a more standard Bell scenario, the two parties/boxes can choose between two different measurement settings, and the violation is observed in the amount of correlation between the observed outcomes (quantified by a specific quantity).
This is not to say that the framework discussed in the article is not interesting, but just that I find it a bit disingenuous to sell it as a standard Bell scenario (unless this is remarked in the article, which in fairness I haven't read in full).
Where do these 50% come from?
One way to describe the quantum version of this setup is by characterising each measurement choice with a pair of orthonormal vectors/pair of states. Write the state corresponding to the $i$-th outcome for the $j$-th measurement setting on the $k$-th box as $|u_i^{j;k}\rangle$ (the possible values of the indices are $i\in\{1,2\}$, $j\in\{1,2,3\}$, $k\in\{A,B\}$).
Suppose that the measurement operators correspond to the Pauli matrices, which is a canonical set of orthogonal measurement operators. Then, e.g., $|u_i^{1;A}\rangle$ are the eigenstates of the Pauli $X$ operator on the first box. More explicitly,
$$
\begin{array}{cc}
|u_1^{1;A}\rangle=|+\rangle_A\quad & |u_2^{1;A}\rangle=|-\rangle_A \\
|u_1^{2;A}\rangle=|L\rangle_A\quad & |u_2^{2;A}\rangle=|R\rangle_A \\
|u_1^{3;A}\rangle=|0\rangle_A\quad & |u_2^{3;A}\rangle=|1\rangle_A.
\end{array}$$
We can use the same measurement operators/states for the second box (so the corresponding table would be the same modulo $A$ becoming $B$).
We are here using the standard convention to denote the eigenstates of the Pauli matrices: $|\pm\rangle\equiv\frac{1}{\sqrt2}(|0\rangle\pm|1\rangle)$, $|L,R\rangle\equiv\frac{1}{\sqrt2}(|0\rangle\pm i|1\rangle)$.
Ok, so what are then the possible outcomes when we press the same button on both boxes? These are obtained from the overlaps $\langle u_i^{1;A},u_i^{1;B}|\Psi\rangle$, and it is then easy to check that
$$
|\langle u_i^{j;A},u_i^{j;B}|\Psi\rangle|^2 = \frac12,
$$
for all $i=1,2$ and $j=1,2,3$.
For example,
$$\langle u_1^{1;A},u_1^{1;B}|\Psi\rangle = \langle+,+|\Psi\rangle = \frac1{\sqrt2},$$
and the other cases proceed similarly. Note how this implies that
$|\langle u_i^{j;A},u_{1-i}^{j;B}|\Psi\rangle|^2=0$ for all $i,j$, which means that we must always find the same outcome on both boxes when the same measurement is used, consistently with the conditions given by the problem.
What about the probability of getting the same outcome when different measurements are used? These are given by
$$
|\langle u_i^{k;A},u_i^{\ell;B}|\Psi\rangle|^2 = \frac14
$$
for $k\neq \ell$ and all $i=1,2$. Actually, you can verify that the following more general result holds:
$$
|\langle u_i^{k;A},u_j^{\ell;B}|\Psi\rangle|^2 = \frac14,
$$
for all $i,j$ and $k\neq \ell$. This means that, when different buttons are pressed, the four possible outcomes are all equally probable (which in particular implies that the probability of getting the same outcome on both boxes is $1/2$).
I suspect the author means that measuring a property of a quantum particle changes the property, so reading two boolean properties of the same particle is like flipping two coins - with the probability of $\frac12$ there will be two heads or two tails.
Well, this turns out to be true (in the sense that the output probabilities do indeed turn out to work like that when different buttons are pressed), but I don't think it's fair to say that this is what the author "means". I don't see any "intuitive" way to see that this happens without passing through some of the math. Quantum mechanics is weird like that.
This contradicts my impression that measuring the same property twice gives the same result (but kind of resets the other properties). At least, this is how it is explained in Susskind's "Quantum Mechanics: The Theoretical Minimum".
I don't see the contradiction with the previous statement. This is indeed true, measuring the same property twice (at infinitesimally close times) gives the same outcome, but this is not relevant in the kind of setting we are dealing with here. There is an implicit assumption in these kinds of setup that every time you press the buttons the underlying quantum state is somehow "reset" to its initial state.
What, in your opinion, is the best illustration of Bell's theorem, that does not involve too much math but helps to understand what is the theorem about?
I'm afraid I don't know any way to understand the result without passing through some of the math. | {
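For readers who like to check numbers, here is a stdlib-only sketch of two of the overlap claims. It assumes the shared state is the maximally entangled $|\Psi\rangle=(|00\rangle+|11\rangle)/\sqrt2$ (the text above does not spell the state out) and, to keep the amplitudes real, restricts to the $X$ and $Z$ settings:

```python
import math

s = 1 / math.sqrt(2)
ket0, plus, minus = [1, 0], [s, s], [s, -s]   # Z and X eigenstates

def kron(a, b):
    """Tensor product of two single-qubit state vectors."""
    return [x * y for x in a for y in b]

def prob(a, b, psi):
    """|<a, b | psi>|^2; all amplitudes here are real, so no conjugation."""
    return abs(sum(x * y for x, y in zip(kron(a, b), psi))) ** 2

# Assumed shared state: |Psi> = (|00> + |11>) / sqrt(2)
psi = [s, 0, 0, s]

print(prob(plus, plus, psi))   # same setting (X,X), same outcome: ~1/2
print(prob(ket0, ket0, psi))   # same setting (Z,Z), same outcome: ~1/2
print(prob(plus, minus, psi))  # same setting, different outcomes: 0
print(prob(plus, ket0, psi))   # different settings (X,Z): ~1/4
```

Same setting gives matching outcomes with probability 1/2 each (and never mismatched ones), while different settings spread the probability evenly at 1/4.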
"domain": "physics.stackexchange",
"id": 64044,
"tags": "quantum-mechanics, bells-inequality"
} |
How is three-dimensional laser cooling possible? | Question: I am a bit confused about laser cooling an atom in all three dimensions. I think I have understood the one-dimensional case: The atom absorbs doppler-shifted laser light and the momentum in this direction is reduced by the photon's momentum $p_{\gamma}$. When the excited state decays, the atom re-emits the photon in a random direction and the atom receives a momentum kick of $p_{\gamma} \cos\theta$ only.
However, in the 2D-case the atom also receives a momentum kick of $p_{\gamma} \sin\theta$ upon re-emission in the other coordinate as well, so the total energy did not change.
What am I missing?
Answer: In laser cooling, the laser light is red detuned, meaning it has lower energy than the atomic transition. An atom at rest cannot absorb it:
However, a moving atom now sees Doppler shifted laser beams. The light coming towards it from the right will be on resonance (for some velocity class) and will be absorbed:
The photon emitted by the laser has energy $\hbar \omega < \hbar \omega_0$. But when it is re-emitted by the atom, it will have energy $\hbar \omega_0 > \hbar \omega$!
So energy is conserved, but the emitted photon takes away some energy from the atomic cloud.
What about momentum?
The atom receives a momentum kick upon absorbing the photon, and another one (essentially of equal magnitude) when spontaneously re-emitting it. BUT the absorbed photons always come from the same direction (the laser beams) whereas the spontaneously emitted photon is random. Over time, the random spontaneous emission averages to zero, only giving you a decrease in the momentum along each laser beam direction.
So for laser beams in $6$ orthogonal directions ($\pm x, \pm y,$ and $\pm z$) you get cooling in all directions.
Limit of the above
This kind of "simple" laser cooling works down to the Doppler temperature, set by the natural linewidth $\Gamma$ of the atomic transition: when the Doppler shift between the right- and left-going photons is smaller than $\Gamma$, the atom cannot resolve which beam is closer to resonance, so it does not know which one to absorb.
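As an order-of-magnitude illustration, the Doppler temperature is $T_D=\hbar\Gamma/2k_B$; plugging in a typical alkali linewidth (the Rb-87 D2 value of $\Gamma=2\pi\times 6.07\,$MHz is assumed here) gives roughly 146 microkelvin:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
k_B = 1.380649e-23       # Boltzmann constant, J / K

# Assumed natural linewidth: Rb-87 D2 line, Gamma = 2*pi * 6.07 MHz
Gamma = 2 * math.pi * 6.07e6   # rad / s

T_doppler = hbar * Gamma / (2 * k_B)
print(T_doppler)   # about 1.46e-4 K, i.e. roughly 146 microkelvin
```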
Eventually, the spontaneously emitted photon and the resulting momentum kick does limit the temperature you can reach, and that is called the recoil limit. Which is why, to get colder with light, you need to use conservative potentials and hence not rely on scattering.
Applications to cold atoms
One of the main applications of laser cooling is to reach quantum degeneracy.
The degeneracy parameter $D$ goes as $\exp(-S)$ where $S$ is the entropy. To get quantum ($D \sim 1$), it is not enough to lose energy, you also need to lose entropy.
The incoming photon from the laser is coherent, hence has low entropy. The spontaneously emitted photon is random, hence has higher entropy. So you are also extracting entropy from the cold atomic gas. | {
"domain": "physics.stackexchange",
"id": 72690,
"tags": "atomic-physics, laser, laser-interaction, cold-atoms"
} |
Ricci scalars for space and spacetime, local and global curvature | Question:
If Ricci scalar describes the full spacetime curvature, then what do we mean by $k=0,+1,-1$ being flat, positive and negative curved space?
Is $k$ special version of a constant "3d-Ricci" scalar?
What is the difference between the local and global spacetime curvature?
Answer: The $k$ notation is generally used to describe Friedmann Robertson Walker cosmological models. These are built on the assumptions of homogeneity and isotropy. The spacetime can be described as being foliated by spatial slices of constant curvature. The k value is the sign of this spatial curvature if the {-1, 0, +1} convention is adopted. As the curvature is a constant, it makes sense to talk of its sign. Further details here. | {
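Concretely, $k$ appears in the spatial part of the FRW line element (standard form, with $a(t)$ the scale factor):

```latex
ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2}
     + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right],
\qquad k \in \{-1, 0, +1\}.
```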
"domain": "physics.stackexchange",
"id": 7315,
"tags": "general-relativity, cosmology, curvature"
} |
Location of Source code for Plugins on our local machine | Question:
Where are the source files for plugins located in the Gazebo source on our machines?
For example, what is the location of this file https://bitbucket.org/osrf/gazebo/src/8479ce0c42565f05c6ad33bbcaaea0b72ab6f815/plugins/RandomVelocityPlugin.cc?fileviewer=file-view-default ?
Originally posted by meha on Gazebo Answers with karma: 13 on 2015-11-12
Post score: 0
Answer:
Hi,
depending on how you installed gazebo:
a) If from source, then the file is where you downloaded the repository:
/MyPath/gazebo/plugins/RandomVelocityPlugin.cc
b) If from debian (sudo apt-get install gazebo..):
you can find only the header file by running $ locate RandomVelocityPlugin.hh, it should give you a path similar to:
/usr/include/gazebo-5.1/gazebo/plugins/
Originally posted by AndreiHaidu with karma: 2108 on 2015-11-12
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by meha on 2015-11-12:
Thanks, I had found the header, but wasn't able to figure out why the source c++ file isn't there.
Is there any way I can add it to my source? OR In which folder can I try out writing new plugins?
Comment by AndreiHaidu on 2015-11-12:
If you install from debian you don't get the source files, you need to download the repository. If you want to start writing plugins you do it differently. Check out the plugin tutorials: http://gazebosim.org/tutorials?cat=write_plugin | {
"domain": "robotics.stackexchange",
"id": 3831,
"tags": "gazebo"
} |
Noether's theorem for time dependent non-cyclic Lagrangian | Question: I am asked to find the symmetries and conserved quantities for a system with the following Lagrangian:
$$\mathscr{L}=\frac{1}{2}m\dot{q}^2-af(t)q,$$
where $a$ is some constant and $f(t)$ is an arbitrary (but integrable) function of time.
I find this problem non trivial because the Lagrangian has no cyclic coordinates and it is a function of time, so neither the conjugate momentum $p$ or the energy $E$ are conserved quantities.
I proceed trying to find some symmetry such that $\delta\mathscr{L}=dg/dt$ (or maybe $=0$, the idea is that this condition is such that the Euler-Lagrange equations obtained via variational principle are left invariant). Then applying Noether's theorem, the conserved quantity would be:
$$C=\bigg(\frac{\partial\mathscr{L}}{\partial\dot{q}}\dot{q}-\mathscr{L}\bigg)\delta{t}-\frac{\partial\mathscr{L}}{\partial\dot{q}}\delta{q}-g,$$
where $g$ may, or may not be zero. So, for the Lagrangian in consideration:
$$\begin{align}\delta\mathscr{L}&=\frac{\partial\mathscr{L}}{\partial\dot{q}}\delta\dot{q}+\frac{\partial\mathscr{L}}{\partial q}\delta q+\frac{\partial\mathscr{L}}{\partial t}\delta t\\
&=(m\dot{q})\delta\dot{q}+(-af(t))\delta q+(-a\frac{\partial f}{\partial t}q)\delta t\end{align}$$
The problem here is that I can't think of any symmetry that can satisfy Noether's condition. Is there any other test that can give me the correct symmetries? Or maybe I can know the conserved quantities by looking at the form of the Lagrangian but I lack the intuition?
Answer: The conserved quantity that Frotaur got does not come from a symmetry. The way to get it is the following. We consider a transformation of the coordinates:
$$q \rightarrow q + \epsilon $$
The transformed Lagrangian is:
$$L'(q+\epsilon,\dot{q})=\dfrac{1}{2}m\dot{q}^2-af(t)q - af(t)\epsilon=L(q,\dot{q})-af(t)\epsilon$$
Taking the derivative with respect to $\epsilon$ at $\epsilon=0$ we have :
$$\dfrac{dL'}{d\epsilon}\Bigr|_{\epsilon=0}=-af(t)$$
With a little work you can show from Taylor's theorem and the Euler-Lagrange equations that in the general family of transformations $$q\rightarrow q + \epsilon K(q,\dot{q})$$
you get:
$$\dfrac{dL'}{d\epsilon}\Bigr|_{\epsilon=0}=\dfrac{d}{dt}\left( \dfrac{\partial L}{\partial \dot{q}}K(q,\dot{q})\right)$$
In our case $K(q,\dot{q})=1$ so we get the final result that Frotaur got:
$$\dfrac{d}{dt}\left( \dfrac{\partial L}{\partial \dot{q}}\right)=-af(t)$$
$$m\dot{q}=-a\int f(t)\,dt +C$$
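As a sanity check, the conservation law can be verified numerically (a sketch with the arbitrary choice $f(t)=\sin t$; the equation of motion is $m\ddot q=-af(t)$):

```python
import math

# C(t) = m*qdot(t) + a * Integral_0^t f(s) ds should stay constant
# along any trajectory of m*qddot = -a*f(t).
m, a = 2.0, 3.0
f = math.sin
F = lambda t: 1.0 - math.cos(t)  # antiderivative of sin with F(0) = 0

v, t, dt = 1.0, 0.0, 1e-5        # qdot(0) = 1, integrate up to t = 2
C0 = m * v + a * F(0)
for _ in range(200000):
    v += -(a / m) * f(t) * dt    # explicit Euler step for qdot
    t += dt
drift = abs(m * v + a * F(t) - C0)
print(drift)                     # small discretization error only
```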
"domain": "physics.stackexchange",
"id": 40607,
"tags": "homework-and-exercises, lagrangian-formalism, symmetry, noethers-theorem"
} |
Iterating and fetching values from dictionary | Question: Here is my dictionary:
c_result1 = {"1": {"time": "89.46%"}, "2": {"date": "10.54%"}}
c_result2 = {"1": {"money": "89.46%"}, "2": {"finance": "10.54%"}}
I want to extract the first key of the inner dictionary under "1", i.e. time. The length of the dictionary may vary.
My code:
c_result1 = {"1": {"time": "89.46%"}, "2": {"date": "10.54%"}}
c_result2 = {"1": {"money": "89.46%"}, "2": {"date": "10.54%"}}
flag = 0
result = []
result1 = ''
result2 = ''
for k,v in c_result1.iteritems():
for k1,v1 in v.iteritems():
result1 = k1
result.append(result1)
print 'result = ', result1
flag = 1
if flag == 1:
break
if flag == 1:
break
for k,v in c_result2.iteritems():
for k1,v1 in v.iteritems():
result2 = k1
result.append(result2)
print 'result = ', result2
flag = 1
if flag == 1:
break
if flag == 1:
break
print 'REsult is : , ', result
What is the standard way to do this, rather than using a flag?
After that I want to extract time and money from each result and pack them into a list like ['time','money'], but due to the flag I have used here, I was not able to do this.
Answer: In your code you write
flag = 1
if flag == 1:
break
The if will always be True, so this is just
flag = 1
break
You only use flag for another if that also will always be True, so make this
break
break
Simplifying this to
c_result1 = {"1": {"time": "89.46%"}, "2": {"date": "10.54%"}}
c_result2 = {"1": {"money": "89.46%"}, "2": {"date": "10.54%"}}
result = []
result1 = ''
result2 = ''
for k,v in c_result1.iteritems():
for k1,v1 in v.iteritems():
result1 = k1
result.append(result1)
print 'result = ', result1
break
break
for k,v in c_result2.iteritems():
for k1,v1 in v.iteritems():
result2 = k1
result.append(result2)
print 'result = ', result2
break
break
print 'REsult is : , ', result
Removing the inner prints gives
c_result1 = {"1": {"time": "89.46%"}, "2": {"date": "10.54%"}}
c_result2 = {"1": {"money": "89.46%"}, "2": {"date": "10.54%"}}
result = []
for k,v in c_result1.iteritems():
for k1,v1 in v.iteritems():
result.append(k1)
break
break
for k,v in c_result2.iteritems():
for k1,v1 in v.iteritems():
result.append(k1)
break
break
print 'REsult is : , ', result
This can be simplified by using itervalues and not calling iteritems on the inner loop:
c_result1 = {"1": {"time": "89.46%"}, "2": {"date": "10.54%"}}
c_result2 = {"1": {"money": "89.46%"}, "2": {"date": "10.54%"}}
result = []
for v in c_result1.itervalues():
for k in v:
result.append(k)
break
break
for v in c_result2.itervalues():
for k in v:
result.append(k)
break
break
print 'REsult is : , ', result
This is actually just the same as calling next twice:
help(next)
#>>> Help on built-in function next in module builtins:
#>>>
#>>> next(...)
#>>> next(iterator[, default])
#>>>
#>>> Return the next item from the iterator. If default is given and the iterator
#>>> is exhausted, it is returned instead of raising StopIteration.
#>>>
However this is only simpler if you know it's going to succeed:
c_result1 = {"1": {"time": "89.46%"}, "2": {"date": "10.54%"}}
c_result2 = {"1": {"money": "89.46%"}, "2": {"date": "10.54%"}}
result = []
inner = next(c_result1.itervalues())
result.append(next(iter(inner)))
inner = next(c_result2.itervalues())
result.append(next(iter(inner)))
print 'REsult is : , ', result
Either way, one should make a function or use a loop to avoid repeating oneself:
c_result1 = {"1": {"time": "89.46%"}, "2": {"date": "10.54%"}}
c_result2 = {"1": {"money": "89.46%"}, "2": {"date": "10.54%"}}
result = []
for c_result in (c_result1, c_result2):
for v in c_result.itervalues():
for k in v:
result.append(k)
break
break
print 'REsult is : , ', result | {
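As a Python 3 aside (not part of the original review): iteritems/itervalues no longer exist there, and since dicts preserve insertion order from 3.7 on, the whole extraction collapses to one comprehension:

```python
c_result1 = {"1": {"time": "89.46%"}, "2": {"date": "10.54%"}}
c_result2 = {"1": {"money": "89.46%"}, "2": {"date": "10.54%"}}

# First key of the first inner dict, for each outer dict
result = [next(iter(next(iter(d.values())))) for d in (c_result1, c_result2)]
print(result)  # ['time', 'money']
```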
"domain": "codereview.stackexchange",
"id": 11554,
"tags": "python, hash-map"
} |
Counteracting gravity in vertical circular motion | Question: When you have a mass moving on a rope in a vertical circular motion, at the top and the bottom of the swing, the force of tension cleanly takes care of gravity and creates a centripetal force. However, when the mass in the middle of a cycle like so:
How does the force of gravity get counteracted? The force of tension is the only force that can provide the centripetal force since the only other force, gravity, is acting perpendicular to the radius. So, what force takes care of gravity? If gravity isn't somehow counteracted, then the net force would be the vector addition of the gravity and the force of tension which would be somewhere in between the two vectors and would not suffice as a centripetal force since it needs to be parallel to the radius. There is no way I can see a normal force of some sort to exist since the mass is not pushing off any object. So, how is the force of gravity taken care of? Thanks to anyone who can help.
Answer: System: point mass, $m$
External forces: gravitational attraction $\vec F_{\rm g} = F_{\rm g,r}\,\hat r + F_{\rm g,\theta}\,\hat \theta$ and tension in string $\vec F_{\rm T} = -F_{\rm T}\,\hat r$.
As long as the string is in tension, $\vec F_{\rm g} + \vec F_{\rm T} = m\,\vec a$ which can be divided into a radial component and a tangential component as shown in the right-hand diagram.
Radial: $-F_{\rm T} + F_{\rm g,r} = m \,a_{\rm centripetal}$
Tangential: $F_{\rm g,\theta} = m\,a_{\rm tangential}$
Note that the tension force is always radially inwards and always contributes to the centripetal acceleration but never affects the tangential acceleration.
Except at positions $A$ and $C$, the gravitational force produces a tangential acceleration and except at positions $B$ and $D$ the gravitational force has a contribution to the centripetal acceleration.
"domain": "physics.stackexchange",
"id": 91912,
"tags": "newtonian-mechanics, forces, orbital-motion, centripetal-force"
} |
Complexity of finding Exact Size Cut-Sets in Bipartite Graphs | Question: I am interested in the problem of deciding if a cut-set of a given size $k$ (i.e. the number of edges crossing the partitions is $k$) exists in a given bipartite graph (both the graph and $k$ are part of the input). Note that this is different from the problems of MINCUT and MAXCUT which simply ask to find the minimum and maximum sized cutsets.
For general graphs, the MINCUT problem can be solved in polynomial time while the MAXCUT problem is NP-Hard. It follows that deciding if a cutset of a particular size exists in a general graph is also NP-Hard since we could use it to find the maximum sized cutset.
For bipartite graphs however, the MAXCUT problem is trivial -- all the edges in the graph constitute the MAXCUT. Moreover, if it helps, I think I can show that for bipartite graphs, the edge - complement of a cutset is also a cutset. That is if $E_c \subseteq E$ is a cutset of a bipartite graph $G = (U\cup V, E)$ then $E-E_c$ is also a cutset of $G$.
However, I have not been able to determine if deciding if a cutset of some size $k$ exists is in P or is NP-Complete or something else. If it is Np-Complete, is there a family of graphs for which it is in P? Any pointers will be helpful.
Answer: This is a partial answer that proves the NP-completeness of your problem, by a reduction from the EXACT-CUT problem for general graphs1.
Given an instance $(G=(V,E),k)$ of the EXACT-CUT problem (that asks whether there is a cut of size $k$ in the graph $G$), we construct a new graph $G'$ as follows.
For each vertex $v\in V$, construct vertices $v_0^0,\ldots,v_m^0,v_0^1,\ldots,v_m^1$ as well as edges $(v_i^0, v_j^1)$ for all $i,j\in\{0,\ldots,m\}$ (so that they form a biclique). Denote $V_v=\{v_0^0,\ldots,v_m^0,v_0^1,\ldots,v_m^1\}$.
For each edge $(u,v)\in E$, construct two edges $(u_0^0, v_0^1)$ and $(u_0^1, v_0^0)$.
Here $m=2k$. Note $G'$ is a bipartite graph.
Now we can see if $(A,B)$ is a cut of size $k$ in $G$, then $(\cup_{v\in A}V_v,\cup_{v\in B}V_v)$ is a cut of size $2k$ in $G'$.
On the other hand, suppose there is a cut $(A',B')$ of size $2k$ in $G'$. Now consider a vertex $v$. Assume $|A'\cap\{v_0^0,\ldots,v_m^0\}|=a$ and $|A'\cap\{v_0^1,\ldots,v_m^1\}|=b$, then we have
\begin{align}
m=2k&\ge a(m+1-b)+b(m+1-a)\\
&=(m+1)(a+b)+\frac{1}{2}(a-b)^2-\frac{1}{2}(a+b)^2\\
&\ge(m+1)(a+b)-\frac{1}{2}(a+b)^2,
\end{align}
which implies $a+b<1$ or $a+b>2m+1$, i.e. $a=b=0$ or $a=b=m+1$. This means all vertices in $V_v$ must belong to the same cut-set. Hence $(\{v\mid V_v\subseteq A'\}, \{v\mid V_v\subseteq B'\})$ is a cut of size $k$ in $G$.
Now we have shown that there is a cut of size $k$ in $G$ if and only if there is a cut of size $2k$ in $G'$, which concludes the proof of the NP-completeness of the EXACT-CUT problem on bipartite graphs.
1 Your reasoning for the NP-completeness of the EXACT-CUT problem is incorrect because you used a Turing reduction while a many-one reduction is required. However, it is not hard to find such a polynomial-time many-one reduction. | {
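To make the problem statement concrete, here is a tiny brute-force enumerator of achievable cut sizes (exponential in the vertex count, so only for toy instances):

```python
from itertools import product

def cut_sizes(n, edges):
    """All achievable cut sizes over the 2^n vertex bipartitions."""
    return {sum(1 for u, v in edges if side[u] != side[v])
            for side in product((0, 1), repeat=n)}

# The 4-cycle 0-1-2-3-0 is bipartite; note the sizes come in
# complementary pairs (c and |E| - c), matching the observation in the
# question that the edge-complement of a cut-set is again a cut-set.
print(sorted(cut_sizes(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))  # [0, 2, 4]
```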
"domain": "cs.stackexchange",
"id": 12435,
"tags": "complexity-theory, graphs, np-complete, np, max-cut"
} |
Is there a simple way to represent this concept? | Question: If something is represented in meters per second, it means the object is changing by x amount of meters every y amount of seconds. But when you have a unit of something similar to, "Meters · Seconds," what exactly does this mean logically?
Answer: Say you grow bananapples in your tropical garden. Bananapples are continuously produced by bananapple trees. All your trees are disposed along a 100 meter long line. Say it takes one week to get 100 kilograms of mature bananapples from your trees.
Now you can go to the market and sell your bananapples for 10 euros the $km.hour$
Indeed 100 $meters$ times 604800 $seconds$ (that's a week) is equivalent to 100 $kilograms$, which means that saying you have 1 $kilogram$ of bananapples is the same as saying you have the production of 604800 $m.s$, that is the amount produced after 604800 $seconds$ by one $meter$ of your aligned trees. Then, if I get the math right, 1 $km.h$ is equivalent to 5.95 $kilograms$ (cheap by weight, then; in any case you do not eat them, you smoke the leaves).
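Checking the arithmetic (production rate: 100 kg per week from 100 m of trees):

```python
week = 7 * 24 * 3600           # 604800 seconds
rate = 100.0 / (100.0 * week)  # kg produced per metre of trees per second

m_s_per_km_h = 1000 * 3600     # 1 km*h corresponds to 3.6e6 m*s
kg_per_km_h = rate * m_s_per_km_h
print(kg_per_km_h)             # about 5.95 (kilograms per km*h)
```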
So: what the unit means depends on what you are talking about. What it is proportional to. | {
"domain": "physics.stackexchange",
"id": 45867,
"tags": "units"
} |
Visual Programming languages | Question: Most of us learned programming using "textual" programming languages like Basic, C/C++, and Java. I believe it is more natural and efficient for humans to think visually. Visual programming allows developers to write programs by manipulating graphical elements. I guess using visual programming should improve the quality of code and reduce programming bugs. I'm aware of a few visual languages such as App Inventor, Scratch, and LabView.
Why are there no mainstream, general-purpose visual languages for developers? What are the advantages and disadvantages of visual programming?
Answer: In general, there is a trade-off in programming language design between ease of use and expressiveness (power). Writing a simple "Hello, world" program in a beginner language, such as Scratch or App Inventor, is generally easier than writing it in a general-purpose programming language such as Java or C++, where you might have a choice of several streams to output to, different character sets, the opportunity to change the syntax, dynamic types, etc.
During the creation of App Inventor (which I was part of), our design philosophy was to make programming simple for the beginner. A trivial example was basing our array indices at 1, rather than 0, even though that makes calculations likely to be performed by advanced programmers slightly more complex.
The main way, however, that visual programming languages tend to be designed for beginners is by eliminating the possibility of syntax errors by making it impossible to create syntactically invalid programs. For example, the block languages don't let you make an rvalue the destination of an assignment statement. This philosophy tends to yield simpler grammars and languages.
When users start building more complex programs in a blocks language, they find that dragging and dropping blocks is slower than typing would be. Would you rather type "a*x^2+b*x+c" or create it with blocks?
Justice can't be given to this topic (at least by me) in a few paragraphs, but some of the main reasons are:
Block languages tend to be designed for beginners so are not as powerful by design.
There is no nice visual way of expressing some complex concepts, such as type systems, that you find in general-purpose programming languages.
Using blocks is unwieldy for complex programs. | {
"domain": "cs.stackexchange",
"id": 7591,
"tags": "programming-languages"
} |
If a tree falls in the forest | Question: The question of whether or not a tree that falls in the forest makes a sound - if there is nothing or no one around to hear it - comes up frequently at my house.
So, my question is: is there any way to "prove" or "dis-prove" this using physics? If it can be proven what is the answer?
My idea is yes, of course it makes a sound even if there is nothing to sense it! However, my parents seem to think that if there is nothing to "take in" the sound waves, there is no sound.
Answer: It depends on your definition of "a sound". If a sound is not a sound unless it is perceived as a sound (that is, processed in the auditory system of a sentient being), then the answer is "no". If a sound is a coherent disturbance in the pressure distribution of the air, and this disturbance propagates through the medium "at the speed of sound", then the answer is "yes".
The fall of the tree causes vibrations: the vibrating tree / branches / ground interact with the air (their movement results in a change in momentum of the air molecules that hit the surface - if the surface is moving towards the air, the pressure increases; and if it's moving away, it decreases). This mechanism is independent of an observer, and thus when a tree falls, sound (definition 2) is created. | {
"domain": "physics.stackexchange",
"id": 17805,
"tags": "experimental-physics, waves, acoustics"
} |
How would I drop something directly in the middle of a vertical pipe? | Question: Not sure if this is the right place to ask, but it is for a physics experiment.
I'm trying to find out how the internal diameter of a copper pipe affects the time it takes for a magnet to drop through. Dropping the magnet in the centre would allow me to ensure consistency for each drop.
Answer: Illustration following up from the comments
We used vacuum to hold the ball bearing being dropped, from above. The ball was hand-placed up against the output port of a manifold. Around it was open air. The port holding the ball connected to the COMMON port of a 3/2 solenoid valve: NC to the vacuum regulator, NO vented to atmosphere. When the valve vented the output, the part went into free fall.
There wasn't much to it, really, although we had the benefit of symmetry, and the sphere-circle contact made great repeatability in horizontal position. | {
"domain": "engineering.stackexchange",
"id": 4076,
"tags": "experimental-physics"
} |
Avoiding label in a loop to sort links | Question: How can I refactor my code to remove the label and the need for it? I am aware that a loop with label is "forbidden". But I can not find a way to rewrite this without the label.
private List<myList> sortLinks(SegmentType s, Set<myList> LinkSet) {
List<myList> LinkList = new LinkedList<myList>();
String dep = s.getDep().toString();
mainLoop: for (int index = 0; !LinkSet.isEmpty(); index++) {
for (Iterator<myList> iterator = LinkSet.iterator(); iterator.hasNext(); ) {
myList link = iterator.next();
if (link.getLegDep().toString().equals(dep)) {
iterator.remove();
link.setLine(s.getLineCode());
link.setNb(s.getNb());
link.setSuff(s.getSuff());
link.setIndex(index);
LinkList.add(link);
dep = link.getDest().toString();
continue mainLoop;
}
}
return Collections.emptyList();
}
return LinkList;
}
Answer: In general, labelled code is uncommon, but 'forbidden' is a bit harsh. break and continue have better characteristics than goto, and should not be 'painted with the same brush'.
The logic in your code is convoluted though.... you are 'searching' the input set, and ensuring that all members of the input set match a condition. As you search, if the member matches, you remove the member. If one of the members does not match, you immediately return an empty result.
Note, that if a 'middle' member fails to match, you have already removed the first members, and yet you return an empty set.
Your logic could be significantly simplified if you extracted part of your method as a 'helper' function:
private myList searchAndRemove(String dep, Set<myList> candidates) {
for (Iterator<myList> iterator = candidates.iterator(); iterator.hasNext(); ) {
myList link = iterator.next();
if (link.getLegDep().toString().equals(dep)) {
iterator.remove();
return link;
}
}
return null;
}
private List<myList> sortLinks(SegmentType s, Set<myList> LinkSet) {
List<myList> matched = new LinkedList<myList>();
String dep = s.getDep().toString();
int index = 0;
while (!LinkSet.isEmpty()) {
myList link = searchAndRemove(dep, LinkSet);
if (link == null) {
return Collections.emptyList();
}
matched.add(link);
link.setLine(s.getLineCode());
link.setNb(s.getNb());
link.setSuff(s.getSuff());
link.setIndex(index++);
dep = link.getDest().toString();
}
return matched;
}
In addition to the above, note that your class names and variable names are really, really bad....
myList is a class, and should be CapitalCamelCase naming style, and have a different name.
LinkList is a variable name, and should be lowerCamelCase, and have a name like matchedSegments
LinkSet is also a variable name, and should be lowerCamelCase.
In general, beware the name-pollution of having a class with 'Link' in the name (probably chosen because it's not a horrible name, but it conflicts with LinkedList), so try to avoid objects with 'Link' as the name.
Additionally, note that your index variable was not an index in to the source data, but an index of the output segment. Including it as part of the original 'for' loop implied that it was used as an index in to that. The reality is that it is independent. I have edited it to make that clear. | {
"domain": "codereview.stackexchange",
"id": 10468,
"tags": "java, sorting"
} |
Lens metrology: How to measure a double-sided thick aspherical lens optically? | Question: In lens metrology, how do people measure a double-sided thick aspherical lens optically? By "optically" I mean for example, using wavefront sensing, interferometry or the interferogram methods.
I searched a little but only found mirror surfaces measurement.
To me, the difficulty is, there are two sides to be measured, but we only get one "accumulated phase" from our measurement. For example, from Shack-Hartmann wavefront sensors we only get one single phase for the lens, but we need the surfaces from both sides. It is mixed.
So is it possible to do double-side lenses metrology optically?
Answer: Measuring the surface of both sides of an aspheric lens is difficult. You don't say if both sides are aspheric, or only a single side is aspheric. Whether it is one or both sides, many of the problems are identical.
There are two primary methods that are used in the industry by people who make such lenses: 1) surface profilometer, 2) aspheric interferometer.
1) A surface profilometer uses a precision stylus to physically trace over the part. Typically two scans are done at 90 degrees to each other, and software fits the result. See, for example, www.mahr.com.
2) Aspheric interferometer - Zygo and others make interferometers that can measure the surface of an asphere. Most of these work by taking multiple measurements, then "stitching" the results together to construct the surface. There are limitations on the asphericity departure that they can measure.
Neither of these methods address possible phase errors within the lens (caused by inhomogeneity or birefringence). To measure those, you need an optical measurement that looks at the final output wavefront. If your lens does not form a good image by itself, you can construct a null lens that, when added to your biasphere, produced a good image. The good image can be tested interferometrically or via incoherent methods (star test, for example). Note you have to make the null lens very well, or you will incorrectly test the asphere.
With any lens, relating the opposite surfaces to each other can be tricky. For a biasphere lens (aspheres on both sides), each surface has an optical axis, which represents a line in space. These two axes can be displaced with respect to each other or not parallel, or some combination thereof. To quantify how the two surfaces relate to each other is likely beyond the scope of your question. It requires measurements on both surfaces that can somehow relate the two measurements in a shared coordinate system. For molded optics, a flange can often be molded into the part that makes this relative measurement easier. | {
"domain": "physics.stackexchange",
"id": 48851,
"tags": "optics, lenses"
} |
Why is the Master there? Can't there be communication without a Master? | Question:
Hi,
I have gone through almost all of the ROS-1 wiki pages. And so, I am aware that in ROS-1 (I am yet to look into ROS-2), before 2 nodes start to communicate via the pub-sub feature, both need to register themselves with the Master. After which, both can then negotiate and establish one-to-one communication using, say, TCPROS.
But a question that has been doing rounds in my mind is: "Why is there a Master? What is the need for a Master? Why couldn't things have been done without a Master?"
Let's say I am going to work with 4 robots. So, I am aware about each others' IP addresses.
Therefore, I can design things such that the 4 robots (NODES) can establish one-to-one communication with each other whenever they want. So, why should a Master come in between? Why is it that, to establish a connection among the 4 robots via topics, these robots need to first register themselves with the Master, and then go on to negotiate and establish one-to-one TCPROS connections? Is the concept of a Master really helpful in such cases where, say, we have only 4 robots to deal with, and we know the IP addresses of all of them?
Originally posted by Aakashp on ROS Answers with karma: 41 on 2020-12-23
Post score: 2
Original comments
Comment by gvdhoorn on 2020-12-23:
Your question is somewhat a duplicate of #q218783.
Comment by Aakashp on 2020-12-24:
I feel this question is a bit different than the tagged question #q218783 in the sense that I am aware that Master provides a nameservice (a lookup feature about the nodes in the ROS graph), but my question is about its necessity and why can't we have a ROS application without a Master.
Answer:
But a question that has been doing rounds in my mind is: "Why is there a Master? What is the need for a Master? Why couldn't things have been done without a Master?"
As the wiki/Master page explains, the Master provides name based lookup services. It functions like a switchboard operator or DNS system.
Its main function is to provide a mapping between names of nodes, topics, services and actions (also topics) and their IPs and TCP or UDP ports.
Let's say I am going to work with 4 robots. So, I am aware about each others' IP addresses.
This is a strong assumption: only if you've configured all robots yourself with a known IP -- and that IP address never changes will this work.
Therefore, I can design things such that the 4 robots (NODES) can establish one-to-one communication with each other whenever they want.
Of course. In this situation everything needed to directly communicate is known. Or at least, you imply everything is known in the nodes which need it. Note: just an IP address would not be sufficient. You'd also need TCP and/or UDP port numbers. But I assume that is what you meant to write.
So, why should a Master come in between?
In this case? There would be no need for a Master.
Why is it that, to establish a connection among the 4 robots via topics, these robots need to first register themselves with the Master, and then go on to negotiate and establish one-to-one TCPROS connections? Is the concept of a Master really helpful in such cases where, say, we have only 4 robots to deal with, and we know the IP addresses of all of them?
If you keep within the confines of the hypothetical situation you sketch, there would be no need for a Master: nodes could open sockets between each of them and use those to exchange data. Provided of course that you'd also know on which ports certain types of information is provided and services are offered.
Now if we leave your hypothetical situation, and:
accept that in most cases, IPs do change
IPs are not always known for all robots (e.g. on networks with DHCP or on cellular networks)
some applications have robots leaving and joining the ROS network while the application is running
we'd like to be able to start nodes on whichever robot without having to update the code of all the others
we'd like to be able to publish and subscribe to multiple topics, offer and use multiple services and offer and use multiple actions per node and robot
we'd like to use multiple PCs (or multiple NICs) per robot, and would not want to have to manually keep track of where (ie: on which PC) nodes are run (or via which NIC they connect to a network)
we may want to be able to start multiple instances of the same node (under different names of course)
and so on, the need for (or utility of) a Master should become clear.
It essentially functions as a central source of information about the running ROS application. And this information is updated during the runtime of the application as well (so is not static, as in your hypothetical setup).
So if a node creates a subscriber which wants to receive messages of type JointState on a topic called joint_states, it will somehow have to figure out where publishers are which publish JointState messages on the topic called joint_states. That is the kind of information the Master possesses (and is willing to share with nodes, if they ask nicely).
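The name-based lookup role can be pictured with a toy registry (a minimal Python sketch; the class, topic name and addresses are invented for illustration — the real ROS 1 Master is an XML-RPC server that tracks much more than this):

```python
class ToyMaster:
    """A toy stand-in for a name-lookup service (illustration only)."""

    def __init__(self):
        # topic name -> list of (host, port) endpoints of its publishers
        self.publishers = {}

    def register_publisher(self, topic, host, port):
        self.publishers.setdefault(topic, []).append((host, port))

    def lookup(self, topic):
        # A subscriber asks by *name*; it never needs to know IPs up front.
        return self.publishers.get(topic, [])


master = ToyMaster()
# A (hypothetical) node on one robot announces it publishes joint_states:
master.register_publisher("joint_states", "10.0.0.7", 49152)
# A subscriber elsewhere only knows the topic name:
endpoints = master.lookup("joint_states")
```

Take the registry away and every node would need the (host, port) pairs hard-coded, which is exactly the brittle hypothetical situation sketched in the question.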
I am yet to look into ROS-2
One of the main differences between ROS 1 and ROS 2 is the absence of a Master(-like) entity in ROS 2 (see also #q287470).
So is a Master absolutely necessary? No. Not at all. See ROS 2. Is it needed in ROS 1: yes, as that's how ROS 1 has been designed and implemented.
Originally posted by gvdhoorn with karma: 86574 on 2020-12-23
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by gvdhoorn on 2020-12-23:
Related Q&A would be #q203129, which discusses how decoupled nodes really are in ROS.
Something else related to this subject: multimaster systems. An example packages would be multimaster_fkie.
Comment by gvdhoorn on 2020-12-23:
Could you please mark the question as answered by clicking the checkmark to the left of the answer if you feel it has been answered? It should turn green.
Comment by Aakashp on 2020-12-24:
@gvdhoorn: Thanks for your reply. However, I request you to clarify the below points:
So if a node creates a subscriber which wants to receive messages of type JointState on a topic called joint_states, it will somehow have to figure out where publishers are which publish JointState messages on the topic called joint_states. That is the kind of information the Master possesses (and is willing to share with nodes, if they ask nicely).
This kind of information, i.e. who is publishing what and who is subscribing to what, can be maintained by the designer of the system (i.e. the ROS application), as he/she would be aware of how many nodes/robots there would be. Let's say the designer knows that at most 100 nodes would be there, and so he can maintain the information about which node would be publishing what on which topic (same for subscribers). Please correct my understanding if I am missing something.
Comment by Aakashp on 2020-12-24:
Now, as ROS-2 does away with Master, could you please share a link to some information as to how ROS-2 manages without a Master ? @gvdhoorn
Comment by Aakashp on 2020-12-24:
@gvdhoorn: If we can have one-to-one communication between 2 nodes, then can we not have the below (taken from your response) without a Master ? :--
we'd like to be able to publish and subscribe to multiple topics, offer and use multiple services and offer and use multiple actions per node and robot
And how does Master help in this ? :--
we'd like to be able to start nodes on whichever robot without having to update the code of all the others
Comment by Aakashp on 2020-12-24:
@gvdhoorn: As you replied :
So is a Master absolutely necessary? No. Not at all. See ROS 2. Is it needed in ROS 1: yes, as that's how ROS 1 has been designed and implemented.
My question is: why is the Master needed in ROS 1? Should I conclude that though the design of ROS 1 has provision for a Master, we can still have a ROS-1 application without any Master in it, as in the example where we just have 4 robots?
Comment by gvdhoorn on 2020-12-24:
I think I can answer all your questions with a single comment: we have a Master as we don't want to maintain all that information ourselves, and we want to be able to create applications which change at runtime.
How would your designer put information in your system about nodes being created in response to certain events? How would your designer deal with changing IP addresses? How would your designer deal with there suddenly being 5 robots? Or 3?
Would you want to shutdown the complete system, change the code, and then start it again?
The reason the master exists is because it offers the advantage of being able to deal with all of the above, while still also being able to cater to situations where nothing changes during runtime.
Why would I manually keep track of on which IP address+TCP port a particular service server can be found, if I can have a machine do that for me? Do you open https://172.217.17.110:443 when browsing to Google, or https://google.com?
Comment by gvdhoorn on 2020-12-24:
And no, a Master is not strictly needed, but it's a design decision to include one. Just as it was a design decision to setup the Domain Name System. If the internet never changed, we could just hand out printed A4 papers with IP addresses to everyone who wants to browse the internet, and they'd lookup everything using those lists.
But the internet does change, so such printouts would go stale almost immediately.
ROS applications can be like that (and on the level of individual topics, services and actions they are like that, ie: often changing). So to cope with that you add a system which automatically tracks all those changes.
As to ROS 2: it was a design decision to work without a Master and make everything peer-to-peer, even discovery.
But such a design comes with its own set of trade-offs, see New Discovery Server for an interesting thread about that.
Comment by gvdhoorn on 2020-12-24:
Finally: no, you cannot really run a ROS 1 application without a Master.
I write "not really", as this is software, and anything is possible. But it'll be so much work that it won't be worth it. And it'll also be very brittle, as the system you describe would not be able to deal with any runtime changes at all.
Comment by Aakashp on 2020-12-24:
@gvdhoorn: Thank you very much for your detailed response to each and every query of mine. The answer is clear to me now. | {
"domain": "robotics.stackexchange",
"id": 35903,
"tags": "ros, communication, master"
} |
Missing outputs in multiple-output neural net | Question: I am looking at a task where I want to predict multiple things from an image (an animal's breed [categorical], age [continuous number] and gender [categorical]). Unsurprisingly, my first thought was to use a neural network (e.g. adding multiple outputs to a pre-trained convnet) and I would try keras first (having used it before). I assume it would be useful to try to do this all in a single convnet (especially based on a TWiML talk on this paper, where the author suggested that in general one prediction target will help to improve features for another and given that e.g. how one would recognise gender will differ by breed etc.).
My question arises, because for some training/validation/test images some (but not all) of the multiple prediction targets may be missing. Sometimes we will just know the breed, for some images we will not know the gender and the exact age may often be unavailable (but only that the animal is adult versus younger than about 4 months, which a human can easily label by just looking at the animal).
If I had to guess, I would expect the missingness to be missing at random (but likely not missing completely at random). For example, for some breeds and ages it is harder to tell the gender by just looking at the animal (perhaps the neural network will manage to do it). So, someone who just takes a photo of an animal may not know the gender and humans are no good at labelling gender from photos. Another example: age might be unavailable, because someone just took a photo of an animal they do not have detailed information on or because someone bought an animal secondhand and never found out.
Is there some standard way of handling this? My intuition is that even records with one or two missing prediction target still should provide useful information for the other ones. Thus, listwise deletion of all records with any missing target information would be extremely inefficient (and possibly even otherwise problematic?!).
Answer: Since we are talking about multiple different types of targets (classes versus numerical for example) we already need a composite loss function. I will consider how to balance the different composite parts of the loss function outside of the scope of this answer but if you look into multi-task learning there are solutions to this. What you could do (both in training and evaluation) is to mask the parts of the loss function that are unknown.
You can do this fairly easily (but hacky) by passing, for each output of each sample, a weight of 1 or 0 as an extra input during training or prediction (label available or not), and multiplying each loss component by its weight. For analysis of the training and prediction you need to make sure to take the mean over only the available labels, because otherwise your loss values are too optimistic.
I do think this approach is very valuable in a lot of cases, this can also help with semi-supervised learning where you do have labels available for a similar task. | {
"domain": "datascience.stackexchange",
"id": 2908,
"tags": "neural-network, keras, cnn, missing-data, multitask-learning"
} |
Question about the linearity of wave functions | Question: For piece-wise constant potential, the potential energy is constant so the time dependent wave function can take the form $\psi(x,t)=C_1e^{i(kx- \omega t)}+C_2e^{i(-kx-\omega t)}$ where $k=\frac{p}{\hbar}=\frac{\sqrt{2mE}}{\hbar}$ and $E$ is just the difference between the energy of the particle and the constant potential.
When testing $\psi(x,t)$ as an eigenfunction of the momentum operator, the result is that $-i\hbar \frac{\partial \psi(x,t)}{\partial x}=\hbar k(C_1e^{i(kx- \omega t)}-C_2e^{i(-kx-\omega t)}) \not= p\psi(x,t)$
But when testing the two components of the wave function separately, it is found that they are eigenfunctions.
$-i\hbar \frac{\partial \psi_1(x,t)}{\partial x}=-i\hbar \frac{\partial}{\partial x} (C_1e^{i(kx-\omega t)})=-i \hbar (ik\psi_1)=\hbar k\psi_1=p_x\psi_1$
$-i\hbar \frac{\partial \psi_2(x,t)}{\partial x}=-i\hbar \frac{\partial}{\partial x} (C_2e^{i(-kx-\omega t)})=-i \hbar (-ik\psi_2)=-\hbar k\psi_2=-p_x\psi_2$
$\psi_1$ describes the particle moving in the $+x$ direction and $\psi_2$ describes the particle moving in the $-x$ direction.
So if two solutions to the S.E. decribes definite states of momentum, the sum $\psi(x,t)$ does not necessarily have to describe a definite state of momentum?
The functions $\psi(x,t),\psi_1(x,t), \psi_2(x,t)$ are eigenfunctions of the energy operator $i\hbar \frac{\partial}{\partial t}$, and the commutator $[H,P]=i\hbar \frac{\partial V(x)}{\partial x} $ which means that definite states of energy and momentum can be found simultaneously as long as we have a constant potential. (However, $H$ can only be used in the time-independent case right? So I'm not sure if this applies.)
It's evident from the math above that $\psi(x,t)$ is not an eigenfunction of the momentum operator, but is there a physically reason for why it shouldn't be?
Answer: Correct. The state $\psi$ you've described is a superposition of two momentum eigenstates, with momentum $p$ and $-p$. So $\psi$ does not itself have a definite momentum - if you measured the momentum of a particle in this state you'd get either $p$ or $-p$ with a probability distribution that depends on the $C_1, C_2$.
The energy of a free particle depends only on its speed, not its direction of movement, so the energy basis is degenerate. Here your state has a definite value of energy because in all cases, the particle is moving with the same speed. However, momentum is a vector quantity and does depend on the direction of movement.
In general, a linear combination of two eigenstates of a given operator is not going to be another eigenstate of that operator - unless the two eigenstates also have the same eigenvalue. For example, here $\psi_1$ and $\psi_2$ are both eigenstates of $H$ with the same eigenvalue (energy) $E$, so $\psi$ is still an eigenstate of $H$. But since $\psi_1$ and $\psi_2$ are eigenstates of $P$ with different eigenvalues (momenta) $p$ and $-p$, then $\psi$ is not an eigenstate of $P$.
(By the way, you wrote: "$H$ can only be used in the time-independent case right?" It requires a time-independent potential. It's fine to use it with time-dependent wavefunctions.) | {
"domain": "physics.stackexchange",
"id": 6883,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, operators"
} |
How to source the file permanently | Question:
Every time I open the terminal and try to run my package's launch file, I always have to source the setup file first before launching it.
Is there any way to source the file permanently?
Originally posted by Zero on ROS Answers with karma: 104 on 2016-04-19
Post score: 0
Answer:
This is explained in the Installation tutorial, section 1.6 - Environment setup (here from the Jade tutorial):
It's convenient if the ROS environment variables are automatically added to your bash session every time a new shell is launched:
echo "source /opt/ros/jade/setup.bash" >> ~/.bashrc
source ~/.bashrc
Note also the warning:
If you have more than one ROS distribution installed, ~/.bashrc must only source the setup.bash for the version you are currently using.
If you just want to change the environment of your current shell, you can type:
source /opt/ros/jade/setup.bash
Btw: sourcing is never 'permanent'. It will always only update the shell in which it was invoked.
Originally posted by gvdhoorn with karma: 86574 on 2016-04-19
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Zero on 2016-04-19:
Thank you! it work for me.
Comment by ranjeet on 2021-01-31:
It does not work, there must be some error in the syntax.
Comment by gvdhoorn on 2021-01-31:
If you're using a different ROS release, you'll have to update the command and replace jade with whatever ROS release you're using. | {
"domain": "robotics.stackexchange",
"id": 24400,
"tags": "linux"
} |
Application of CPT invariance : some trivial algebra | Question: I am having some problem in understanding one step in the following algebra.
Consider an interaction where initial state is defined as $ \left|i\right> $ and final state by $ \left|f\right> $. Now,
$$ \left|i\right> = \mathcal{ CPT}\left | \bar{i}\right> $$
$$ \left|f\right> = \mathcal{ CPT} \left| \bar{f}\right> $$
Using the CPT invariance condition,
$ \left(\mathcal{ CPT} \right)T \left(\mathcal{ CPT}\right)^{-1}= T^{\dagger}$,
where $T$ is the transition matrix;
$$ \left<f|T^{\dagger}|i\right> = \left<\bar{f}|T|\bar{i}\right>^{*}$$
Please show explicitly how to derive the last equation.
Answer: So let's start from the relations you gave and transform one of them from ket to bra.
$$ \left|i\right> = \mathcal{ CPT}\left | \bar{i}\right> $$
$$ \left<f\right| = \left< \bar{f}\right| (\mathcal{ CPT})^{\dagger} $$
Using the CPT invariance condition,
$ \left(\mathcal{ CPT} \right)T \left(\mathcal{ CPT}\right)^{-1}= T^{\dagger}$,
It is easy to show that:
$$ \left<f|T^{\dagger}|i\right> = \left<\bar{f}|(\mathcal{CPT})^{\dagger}(\mathcal{ CPT})T|\bar{i}\right>
$$
Then by the anti-linearity of $\mathcal{CPT}$,
$\left<\bar{f}|(\mathcal{CPT})^{\dagger}(\mathcal{ CPT})T|\bar{i}\right> = \left<\bar{f}|(\mathcal{CPT})(\mathcal{ CPT})T|\bar{i}\right>^* $
Since $\mathcal{CPT}^2$ = $\mathcal{I}$
$\left<\bar{f}|(\mathcal{CPT})(\mathcal{ CPT})T|\bar{i}\right>^* = \left<\bar{f}|T|\bar{i}\right>^* $
The key step is the anti-linearity condition.
"domain": "physics.stackexchange",
"id": 25249,
"tags": "homework-and-exercises, particle-physics, cpt-symmetry"
} |
If boolean function $f$ is computable by a k-CNF and an l-DNF then it can be computed by a decision tree of depth at most kl | Question: I have seen it stated that if boolean function $f$ is computable by a $k$-CNF and an $l$-DNF then it can be computed by a decision tree of depth at most $kl$. However, I am not able to see why this is the case.
Answer: Observe that if a $k$-CNF $\Phi$ is equivalent to an $l$-DNF $\Psi$, then every term of $\Psi$ implies every clause of $\Phi$, i.e., they share a literal. (If a term $D$ and a clause $C$ shared no literal, we could satisfy all literals of $D$ while falsifying all literals of $C$, making $\Psi$ true but $\Phi$ false.)
If the Boolean function is not constant, pick a clause $C$ in $\Phi$ and query all its variables. This is a decision tree of depth at most $k$. For each branch, restrict $\Phi$ and $\Psi$ by the partial assignment determined by the branch.
Each such restriction turns $\Psi$ into an $(l-1)$-DNF: if $D$ is any term of $\Psi$, let $x$ be a literal shared by $C$ and $D$; if $x$ is false, then $D$ is falsified, otherwise it is shortened by at least one literal.
Thus, the function becomes constant on each branch after at most $l$ iterations of this process. | {
"domain": "cstheory.stackexchange",
"id": 5659,
"tags": "boolean-functions, boolean-formulas, decision-trees"
} |
What effect does calibrating the Kinect have? | Question:
How much of a difference does it make to calibrate the Kinect using the procedure described in http://www.ros.org/wiki/kinect/Tutorials/Calibrating%20the%20Kinect ? How much of an improvement is there with the new parameters versus just using the default parameters?
Are there any other kinect calibration routines I should know about?
Originally posted by Deon Joubert on ROS Answers with karma: 33 on 2011-02-21
Post score: 3
Answer:
As has been mentioned, the OpenNI drivers give you access to the factory calibration, which means you don't need to calibrate manually.
However, if you want to anyway (for example, to use libfreenect on systems that don't have ROS), it makes a huge difference in the alignment of the points and RGB values.
An easy way to demonstrate this is to fire up rviz, and visualize the PointCloud2 that's published by the kinect_camera driver; if you haven't calibrated, you'll notice a serious mismatch between the points and colors. If you then switch to the openni_camera driver, you'll notice how much better things are. Calibrating the sensor manually (kinect_calibration) will get alignment performance on par with openni_camera.
Originally posted by Mac with karma: 4119 on 2011-02-22
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 4819,
"tags": "kinect"
} |
When the sun's magnetic field reverses, how can "the north pole (have) already (...) reversed, and (we are) waiting for the south pole to catch-up"? | Question: I was reading a popular news account of the sun's upcoming magnetic field reversal at http://www.csmonitor.com/Science/2013/0807/Sun-s-magnetic-reversal-means-big-changes-for-the-solar-system-video, and was confused already in the second paragraph:
Data from NASA-supported observatories indicate that the next flip will happen in just three to four months – the north pole has already jumped the gun and reversed, and scientists are now just waiting for the south pole to catch-up. The completed flip will herald changes throughout the entire solar system, according to a NASA video.
This doesn't make any sense to me. Given that there are no magnetic monopoles, there is no way for the sun to have two "south" magnetic poles. The field lines have to close somehow. (Granted, I don't for the life of me understand what a flux rope is, although the videos are cool.) I conclude that one of the following must be true:
Most likely: the reporter misinterpreted / mistranslated-into-colloquial something a scientist said.
Reasonably likely: my understanding of magnetism is completely wrong.
Unlikely: all of the known physics of magnetism is completely wrong.
The reason I think 3. is unlikely is that it would have been the headline, not "Sun's magnetic reversal..."
So which is it? More generally, if it's possible to give a somewhat-technical description of "Sun's magnetic reversal", that would be awesome (but I realize that most likely that would require a book chapter, and is outside the scope of this forum). Feel free to gear your answer at the level of an incoming physics PhD student.
Answer: The news release from NASA does mention this. Apparently in this transitional phase the field is not well represented as a dipolar field. Right now it may be a quadrupolar or higher (even-numbered) multipole field. Think of the total field as the superposition of multiple non-axial, non-centered dipoles. The field is weakening in intensity too, and may become weaker still before it strengthens into a reversed dipole field. | {
"domain": "physics.stackexchange",
"id": 9005,
"tags": "astrophysics, solar-system, popular-science, magnetic-fields, magnetic-monopoles"
} |
Does mean solar time and sidereal time sometimes indicate the same time? | Question: I'm trying to wrap my head around the different definitions of time. Since mean solar time depends on the Sun, and sidereal time depends on the stars, and since the position of the Sun relative to the stars changes over the course of the year, does this mean that the difference between the two times increases and decreases over the course of the year?
Once a year, mean solar time and sidereal time will be the same. Is that right?
Answer: The 3m56s mean difference between solar and sidereal days is due to the Earth's orbital motion around the Sun.
This adds up to 1 day per year; for every 365¼ solar days there are 366¼ sidereal days.
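A quick check of that bookkeeping (assuming a 24-hour mean solar day):

```python
# 365.25 mean solar days equal 366.25 sidereal days, so a sidereal day
# is shorter by roughly the 3 m 56 s quoted above.
SOLAR_DAY_S = 24 * 3600
sidereal_day_s = SOLAR_DAY_S * 365.25 / 366.25
difference_s = SOLAR_DAY_S - sidereal_day_s
print(round(difference_s))  # ≈ 236 s, i.e. about 3 m 56 s
```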
Sidereal time equals the right ascension of what's on the celestial meridian at that time.
The Sun is on the meridian at 12:00 apparent solar time.
These align at the September equinox, when the Sun is at RA 12h.
Mean solar time is ~7.5 minutes behind apparent solar time on that date and aligns with sidereal time ~1.9 days earlier. | {
"domain": "astronomy.stackexchange",
"id": 4189,
"tags": "time, standards"
} |
Matsubara frequencies as poles of distribution function | Question: Is there any deeper meaning to why the bosonic/fermionic Matsubara frequencies appear as poles of their corresponding distribution functions (with an additional $i$)?
For example in the bosonic case we have:
$$\omega_n=\frac{2n\pi}{\beta}$$
which are the poles of
$$n_B=\frac{1}{e^{\beta \hbar \omega}-1}$$
if we say that $\omega = i\omega_n$
I know this is sometimes exploited when evaluating sums of functions of $\omega_n$.
See Matsubara frequency.
Answer: The point is that to achieve the sum over Matsubara frequencies
$$\sum_{n} g(i\omega_n)$$
we can use a contour integral
$$\oint_C g(z) f(z)\,dz$$
with the contour described in fig 1 here, so long as we choose an $f(z)$ with simple poles exactly at the Matsubara frequencies $\omega_n$. This determines $f(z)$ to be proportional to the Bose-Einstein distribution. In fact, if we think about the sum as computed in a correlation function, we could have derived the integral above instead by considering computing the expectation value in the thermal density matrix $\int dE n(\beta,E) |E\rangle\langle E|$ determined by the Bose-Einstein distribution $n(\beta,E)$. I think this is an equally good starting point of the logic, then deriving the Matsubara frequencies as the poles of the distribution function.
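As a numerical sanity check (my own sketch, not from the original answer), the standard bosonic result obtained from this contour trick, $T\sum_n (\omega_n^2+\varepsilon^2)^{-1} = \coth(\beta\varepsilon/2)/(2\varepsilon)$, can be verified by truncating the sum:

```python
import math

# Truncated Matsubara sum over bosonic omega_n = 2*pi*n*T, compared with
# the closed form produced by the contour/distribution-pole argument.
T, eps, N = 1.0, 1.0, 100_000
partial = sum(1.0 / ((2 * math.pi * n * T) ** 2 + eps ** 2)
              for n in range(-N, N + 1))
numeric = T * partial
analytic = 1.0 / (2 * eps * math.tanh(eps / (2 * T)))
print(abs(numeric - analytic))  # small: just the truncation error
```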
I'd say the whole point of the Matsubara frequencies is that they are the poles of the distribution function. | {
"domain": "physics.stackexchange",
"id": 45946,
"tags": "thermal-field-theory"
} |
Covariance specified for measurement on topic gps is zero | Question:
When I try to integrate GPS data (from the gps_common package) with robot_pose_ekf, the above is the error I am facing.
Can anyone help me?
I am also facing this error: "filter time older than gps message buffer".
Note: I had set the covariance matrix values in the gps_common package to 99999 (all diagonal values)
Originally posted by new_forROS on ROS Answers with karma: 31 on 2015-05-29
Post score: 1
Answer:
Hi new_forROS, I had the same error; you have two options to solve it.
Option 1: using robot_pose_ekf + utm (with changes) + ublox, like this:
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_pose_gps_ekf" output="screen">
<remap from="imu_data" to="/imau/imu" />
<remap from="odom" to="/gps_odom" />
<param name="output_frame" value="gps_odom_pose"/>
<param name="freq" value="30.0"/>
<param name="sensor_timeout" value="1.0"/>
<param name="odom_used" value="true"/>
<param name="imu_used" value="true"/>
<param name="vo_used" value="false"/>
<param name="debug" value="false"/>
<param name="self_diagnose" value="false"/>
<param name="publish_tf" value="false"/>
</node>
/gps_odom must be given to robot_pose_ekf as a nav_msgs/Odometry message (using gps_common's utm node), and you must set the covariance matrix not only for x, y and z but also for rxx (rotation on the x axis), ryy and rzz. I initially copied the covariance of the IMU to the GPS. If you look closely, utm sets rxx, ryy and rzz; I re-coded the source and did this:
<node name="utm_odometry_inipos_node" pkg="gps_common" type="utm_odometry_inipos_node">
<remap from="fix" to="/gps/fix" /> <!--input-->
<remap from="odom" to="gps_odom" /> <!--ouput-->
<!--<param name="rot_covariance" value="1000000000000.0" />-->
<param name="rot_covariance_x" value="1000000000000.0" />
<param name="rot_covariance_y" value="1000000000000.0" />
<param name="rot_covariance_z" value="0.001" />
<param name="frame_id" value="world" />
<param name="child_frame_id" value="base_footprint" />
</node>
In this way I can control the values of rxx, ryy and rzz.
But as you know this approach isn't the best, because with one GPS you can't obtain the rotation of the robot; still, it works.
Option 2: robot_pose_ekf (new version from jade, works in indigo) + utm (with changes) + ublox (with changes):
The new robot_pose_ekf treats the GPS values differently; it doesn't use rxx, ryy and rzz.
<node pkg="robot_pose_ekf" type="robot_pose_ekf" name="robot_gps_pose_ekf" output="screen">
<remap from="imu_data" to="/imu" />
<remap from="gps" to="/gps_odom"/>
<param name="freq" value="30.0"/>
<param name="sensor_timeout" value="1.0"/>
<param name="odom_used" value="false"/>
<param name="imu_used" value="true"/>
<param name="vo_used" value="false"/>
<param name="gps_used" value="true"/>
<param name="debug" value="false"/>
<param name="self_diagnose" value="false"/>
</node>
The difference is here <remap from="gps" to="/gps_odom"/> <param name="gps_used" value="true"/>.
Use the utm node as in the first option; the big difference in option 2 is that you make some changes to ublox.
Open node.cpp from the ublox package and go to the function void publishNavPosLLH(const ublox_msgs::NavPOSLLH& m)
and add this, after fix.altitude = m.height*1e-3;:
fix.position_covariance[0] = m.hAcc*1e-3; // Horizontal Accuracy Estimate [mm]->[m]
fix.position_covariance[4] = m.hAcc*1e-3; // Horizontal Accuracy Estimate [mm]->[m]
fix.position_covariance[8] = m.vAcc*1e-3; // Vertical Accuracy Estimate [mm]->[m]
Doing this you are setting the x, y and z covariance of the GPS. The utm node will still add rxx, ryy and rzz by default, but they will be ignored by the new robot_pose_ekf.
I think this approach is more correct than the other.
It worked for me.
If you want some utm code to help, ask me.
Originally posted by Raul Gui with karma: 66 on 2015-08-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21793,
"tags": "ros, navigation, gps-common, robot-pose-ekf"
} |
Does light have mass as it is attracted through gravitational force? | Question: I have heard that no one escape from the intense gravitational field of a black hole (obviously that's why it is black). And gravitational force is due to the mass having the body, no mass no gravity, as
$$F=GMm/r^2$$
Putting $0$ in one of the two bodies' masses, gives no gravitational force.
But light cannot escape from black hole due to its strong gravity, does that mean that light has mass? please correct me.
Answer: Light does not have mass and Newton's theory of gravity doesn't work here. Any massive object, like the Sun, locally deforms the flat fabric of spacetime into a curved one. Light bends because it has to follow the geodesic (shortest distance between two points in a curved space) as Einstein taught us in his general theory of relativity. | {
"domain": "physics.stackexchange",
"id": 34742,
"tags": "gravity, visible-light, mass"
} |
Calibrating an electronic temperature sensor based on power consumption | Question: I'm working with an electronic temperature logger that is being affected by heat generated internally.
How does one come up with a calibration equation to calculate a more accurate reading of ambient temperature based on what the temperature sensor reads, taking into account its own power consumption?
details:
After a few hours and in equilibrium, the sensor reports values that are actually 1 degree Celsius higher than the ambient room temperature (22C) measured by a calibrated device. The sensor is accurate to 0.1 degree C at reporting the temperature of the device itself (which due to heat generated by the electronics has gotten warmer)
The device consumes ~0.1 watts of power, weighs about 200 g and has an average specific heat capacity of 1.0 J/(g·°C) (weighted mix of glass, ABS, FR-4, copper). Dimensions are 1"x 3"x 4".
What I've got so far is this heating calculation: 200 g × 1 °C × 1.0 J/(g·°C) / 0.1 W / 60 s/min ≈ 33 minutes to heat up 1 degree.
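The same arithmetic with explicit units, as a check of that estimate (assuming lumped heating with no losses):

```python
# Energy needed to warm the whole device by 1 °C, divided by the power
# it dissipates, then converted from seconds to minutes.
mass_g, c_J_per_gC, power_W = 200.0, 1.0, 0.1
energy_J = mass_g * c_J_per_gC * 1.0   # J to raise the device 1 °C
time_min = energy_J / power_W / 60.0   # s -> min
print(time_min)  # ≈ 33.3 minutes
```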
I'm assuming what we need is to figure out sensor value − heat generated + heat dissipated to arrive at the actual temperature. Which will require measuring the k in Newton's law of cooling? Then what?
I'd really appreciate your help here.
Answer: Typically you would attempt to measure rather than calculate the effect.
Perhaps by having a second, calibrated device with a long probe that provides an independent measurement of the temperature. You do this in situ if possible, or in some reasonable test stand (which might be as simple as a disposable cooler filled with your working fluid).
The only real alternative is to read the data sheet; either for the whole device (if it is an off-the-shelf instrument), or for the particular chip (if it is something that you manufactured to spec).
As a desperate fall-back position you might be able to find a rule-of-thumb for devices in the same class, but those are unlikely to be centrally tabulated. Ask around is my best suggestion. | {
"domain": "physics.stackexchange",
"id": 5024,
"tags": "thermodynamics, experimental-physics, experimental-technique"
} |
Is ReduceLROnPlateau() Schedule too aggressive by default in pytorch? | Question: Hi, I'm asking about the learning rate schedule ReduceLROnPlateau(). I've noticed that it decreases the learning rate by a factor of 10 each time, and by default the patience is 10. Given phenomena like double descent, its default parameters are very concerning.
Not to mention that in practice I find it to be very common that learning might stall for 10 epochs (even if not truly stuck). And for example this LR schedule (with patience=6) gives me learning_rate=2e-9 after just 100 epochs!! Presumably it would further decrease it if not for the epsilon parameter, which isn't intuitive as you would think it should decrease gradually...
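The decay rate described here can be reproduced with a minimal sketch of the plateau rule (a simplified model; PyTorch's threshold, cooldown and min_lr options are ignored):

```python
# Cut the learning rate by `factor` once the monitored loss has failed
# to improve for more than `patience` consecutive epochs.
def simulate(losses, lr=1e-3, factor=0.1, patience=10):
    best, bad = float("inf"), 0
    for loss in losses:
        if loss < best:
            best, bad = loss, 0
        else:
            bad += 1
            if bad > patience:
                lr *= factor
                bad = 0
    return lr

# 100 epochs of a completely stalled loss trigger 9 reductions with the
# defaults above: 1e-3 * 0.1**9 ≈ 1e-12.
print(simulate([1.0] * 100))
```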
Are they basing these defaults off of some paper? Can anyone speak to their experience using this schedule & the performance of its default settings? And/or share relevant papers.
Answer: I'll start by saying that optimization is as much art as it is science.
ReduceLROnPlateau is my favourite scheduler and I've found it performs quite well out of the box. Of course it's good to play with it to achieve optimal results but for what you experience there might be other reasons.
Depending on the size of your data, 100 epochs might be more than enough so this might not be a problem to begin with.
In case this is indeed a problem, since you don't give many details of your training, I can only provide some generic suggestions.
Check if your model learns without a scheduler. Find some good hyperparameters for:
The starting learning rate. It might be too small to begin with, or even too big (this could mess up the network initialization).
The batch size, because it could greatly affect training dynamics.
Any other parameter you think is important.
After you have these, then add the scheduler. If you still have issues try one of the following:
Check if the metric that you're monitoring is the right one, i.e. a loss on a validation set, and that its values make sense.
Check the iterations per epoch. A very small number of iterations per epoch could give noisy validation metrics. | {
"domain": "ai.stackexchange",
"id": 4191,
"tags": "learning-rate"
} |
Which is the real reason why we don't see light interference patterns? | Question: There are many questions like mine on the web, but I think I cannot fully understand them.
Let's look at this question. The first answer states that we cannot see light interference patterns because most light sources are incoherent. This implies that, if observed on a rough time scale (microseconds), the interference pattern disappears. This is shown in the following pictures.
Now, I have at least three questions about this answer:
Is it saying that if I use an extremely good coherent laser (for instance two of them), I could see interference patterns?
Is it saying that our nervous system performs a time average on a microsecond scale?
Why is non-coherence the cause of our not seeing the interference pattern? Let's consider a couple of pure, ideal coherent sources, for instance green at 550 nm. I think we cannot see their interference pattern because the shadow-light zones are too close to be seen by our eye. What does that have to do with coherence?
Answer:
If the pattern moves (so that bright regions become dark, and dark become bright) on a timescale small compared to human eye response time, then we won't see the interference pattern (it will be averaged out).
If the distance between dark and light regions is too small for our eyesight to resolve, then we won't see the interference pattern.
If either or both of (1) and (2) occur, we don't see the pattern. In practice you can get (1) alone, or (2) alone, or both together.
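A back-of-envelope check of point (2), with assumed numbers that are not in the original answer: two coherent sources a distance d apart give fringes of angular spacing roughly λ/d, while the eye resolves about one arcminute.

```python
import math

# Fringes are only resolvable when the source separation d is small
# enough that lambda/d exceeds the eye's angular resolution.
wavelength_m = 550e-9                  # green light
eye_limit_rad = math.radians(1 / 60)   # ~1 arcminute
d_max_m = wavelength_m / eye_limit_rad
print(d_max_m * 1e3)  # ≈ 1.9 mm
```

So for sources more than a couple of millimetres apart, condition (2) alone already hides the pattern, regardless of coherence.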
With an ordinary light source (not a laser) you can see interference of the type coming about by partial reflection from a pair of close surfaces. This contributes to the colourful patterns you can see on a film of oil, for example.
With a diffraction grating you can easily see the first (and higher) order diffraction beam, which is owing to interference. You can see this sort of thing in light reflected off a CD (compact disc).
Finally, if you want to interfere light coming from two entirely different sources then here, yes, you do need very special sources such as lasers, and indeed most lasers will not have a sufficiently well-defined frequency for it to work. Their frequency varies with time enough to wash the fringes out on a timescale of microseconds (or faster). But with very narrow-band lasers and a stable set-up you can get the fringes to stay still enough for human vision. | {
"domain": "physics.stackexchange",
"id": 90913,
"tags": "optics, waves, visible-light, interference, vision"
} |
How does upstream dilation of a pipe affect downstream velocity? | Question: I do not frequently post on this community, so hopefully my question fits within the community guidelines. Below is a picture of two pipe systems. All values that are color coded in green are identical between the two pipe systems (e.g. $v_o$, the inflow velocity, in the upper system is equivalent to $v_o$ in the bottom system).
The primary parameter I am interested in is how a change in the middle chamber's diameter ultimately affects the final velocity $v_f$ exiting the smallest chamber of the pipe system. For clarification, $D_1 \lt D_2$. I wanted to confirm that, correspondingly, $v_{f-1} \lt v_{f-2}$.
If this is correct, I would greatly appreciate any intuitive idea as to why this is the case. I assume it has something to do with the kinetic energy of fluid molecules in the top system being lower than the kinetic energy of the fluid molecules in the bottom system after transiting through the middle chamber (perhaps as a consequence of experiencing more total resistance during the passage through the pipe system).
In the event that any further assumptions are required to best answer this question, assume the following:
the length of each chamber is the same between the two systems
the length of the connecting pieces (i.e. the sloped transition sections) are the same between the two systems. However, these transition sections are obviously increasing their diameter at different rates (but I don't think this really affects the answer)
gravity is either acting downwards OR right-to-left (but I don't think this really affects the answer)
the fluid is water (so it can be assumed to be incompressible)
Thank you! Cheers~
The following are orders of magnitude that the parameters referenced in the above picture will tend to float around:
diameter $\lt 4$mm
pressure $\lt 80$ mmHg
velocity $\lt 20$ cm/sec
Re $\lt 20$
Answer: The Change $D_1<D_2$ causes the exit velocity to be lowered $v_{f-1} \lt v_{f-2}$ through higher velocity stack which simple means smaller effective $D_c$ for the flow.
This higher velocity stack is caused by the velocity difference $V_{D1}>V_{D2}$
It should be noticed that there is no possibility for velocity stack after $D_A$ because there is no "defining nozzle".
The maximum flow velocity (pressure is zero) is thus created at the entrance of $D_C$. This is the defining nozzle for the whole system, and all other velocities can be calculated through it. More information is available in this answer:
Air core Vortex; Physical explanation of the "air Entrainment Hook" at $F_{co}=0.7$ | {
"domain": "physics.stackexchange",
"id": 65187,
"tags": "fluid-dynamics, flow"
} |
How does one evaluate the amount of spilled water of an open revolving cylindrical tank? | Question: For quite a while I've been working on the following problem:
An open cylindrical tank $2R$ meters in diameter and $H$ meters tall contains $h$ meters of water. The tank is revolved around its own vertical axis in such a way that the free surface's paraboloid touches the tank's base. How much water will be spilled?
I came up with some initial conditions, but I still cannot think of a way to find the minimum omega for which the paraboloid will touch the base. I'm new to fluid mechanics - any help is appreciated.
Answer: I don't believe that any information about fluid dynamics is needed. This seems to be purely a mathematics problem.
You have enough information to calculate the volume of water originally in the cylinder.
If you assume that after spin-up the resultant water surface touches both the centre of the cylinder bottom and the circular rim at the top of the cylinder then again you have enough information to calculate the volume of this assumed final shape.
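With assumed example numbers (the question gives no specific values), the comparison can be sketched as follows; the key fact is that a paraboloid of revolution encloses half of its bounding cylinder, so when the free surface touches both the base and the top rim, half the tank's volume is air.

```python
import math

R, H, h = 1.0, 2.0, 1.5                   # radius, tank height, initial depth [m] (assumed)
v_initial = math.pi * R ** 2 * h          # water before spin-up
v_remaining = 0.5 * math.pi * R ** 2 * H  # water left: half the tank volume
spilled = max(v_initial - v_remaining, 0.0)
print(spilled)  # pi * R^2 * (h - H/2) ≈ 1.571 m^3 for these numbers
```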
Deal with this second volume being less than, equal to, or greater than the original volume... | {
"domain": "physics.stackexchange",
"id": 73413,
"tags": "homework-and-exercises, fluid-statics"
} |
Building an SQL query string | Question: How can I write this PHP code better? It puts together an SQL query string from user input. The task is to return search results based on one or more text fields. The user can determine if partial matches are allowed.
$compop = $allowpartial ? "LIKE" : "="; // use LIKE for wildcard matching
$any = $allowpartial ? "%" : ""; // SQL uses % instead of * as wildcard
$namecondstr = ($name === "") ? "TRUE" : ("Name $compop '$any$name$any'");
$citycondstr = ($city === "") ? "TRUE" : ("City $compop '$any$city$any'");
$itemnocondstr = ($itemno === "") ? "TRUE" : ("ItemNo $compop '$any$itemno$any'");
$ordernocondstr = ($orderno === "") ? "TRUE" : ("OrderNo $compop '$any$orderno$any'");
$serialcondstr = ($serial === "") ? "TRUE" : ("Serial $compop '$any$serial$any'");
$sortstr = ($name !== "") ? "Name" :
(($city !== "") ? "City" :
(($itemno !== "") ? "ItemNo" :
(($orderno !== "") ? "OrderNo" :
"Serial")));
$query = "SELECT * From Licenses
LEFT JOIN Items
ON Licenses.LicenseID = Items.LicenseID
WHERE $namecondstr
AND $citycondstr
AND $itemnocondstr
AND $ordernocondstr
AND $serialcondstr
ORDER BY $sortstr, Licenses.LicenseID";
Answer: Use Prepared Statements
Your code is vulnerable to SQL injection attacks. Use PDO or mysqli prepared statements to avoid this. See this answer for how to use PDO for this. If you are using mysql_* you should know that it is already in the deprecation process.
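For illustration only (the answer recommends PDO on the PHP side), here is the same placeholder idea sketched with Python's built-in sqlite3; the `build_query` helper is hypothetical. Note that column names cannot be bound as parameters, so they must come from a fixed whitelist — only the values are bound.

```python
import sqlite3

def build_query(filters, allow_partial):
    """Build a WHERE clause dynamically, binding every user value."""
    conditions, params = [], []
    for column, value in filters.items():   # keys must be whitelisted names
        if value == "":
            continue
        if allow_partial:
            conditions.append(f"{column} LIKE ?")
            params.append(f"%{value}%")
        else:
            conditions.append(f"{column} = ?")
            params.append(value)
    where = " AND ".join(conditions) or "1"
    return f"SELECT * FROM Licenses WHERE {where}", params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Licenses (Name TEXT, City TEXT)")
conn.execute("INSERT INTO Licenses VALUES ('smith', 'Oslo')")
sql, params = build_query({"Name": "smith", "City": ""}, allow_partial=True)
print(sql, conn.execute(sql, params).fetchall())
```

The user input never touches the SQL string itself, so a value like `'; DROP TABLE Licenses; --` is treated as plain data.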
Miscellaneous
Consider using empty to check for empty strings (see comment from Corbin below).
Personally I would use an if else rather than have two identical ternary conditions. | {
"domain": "codereview.stackexchange",
"id": 2768,
"tags": "php, sql"
} |
missing run.py from rosjava_bootstrap | Question:
Hi,
I'm trying to do the "Getting started with NXT leJOS, using nxt_lejos_proxy" tutorial (http://www.ros.org/wiki/nxt_lejos_proxy/Tutorials/Getting%20Started#Create_a_Scratch_Package), but when i run "roslaunch proxy.launch" I get this error:
ERROR: cannot launch node of type [rosjava_bootstrap/run.py]: Cannot locate node of type [run.py] in package [rosjava_bootstrap].
Looking into my rosjava_bootstrap folder there isn't any run.py file.
I've seen that this file is present in the rosjava electric-tag release (http://code.google.com/p/rosjava/source/browse/?name=electric-tag) but I don't know how to download it.
Can someone give a tip?
Thank you in advance,
Camilla
Originally posted by camilla on ROS Answers with karma: 255 on 2012-07-12
Post score: 0
Answer:
leJOS had been updated to use the latest version of rosjava, but the tutorial had not been. The tutorial has now been updated.
Originally posted by LawrieGriffiths with karma: 131 on 2012-07-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10166,
"tags": "rosjava"
} |
Notation of beta decay | Question: Is it an ion or a neutral atom that is created through beta decay?
For example: $_6^{14}\mathrm C \to ~ _7^{14}\mathrm N + \mathrm e^- + \bar{\nu}_e$
Isn't $_7^{14}\mathrm N$ an ion, since the neutron gives us only a proton and an electron, not a proton and 2 electrons? If so, why isn't it stated in the example above (it's from Wikipedia)?
Answer: An ion is created. Usually this is not noted down in the nuclear reaction because while talking of nuclear reactions we only concern ourselves about nuclei, not the entire atom. The reaction is to be read as "A 14-$\ce{C}$ nucleus decays into a 14-$\ce{N}$, an electron, and an antineutrino."
Also, note that the neutrino has no charge. | {
"domain": "chemistry.stackexchange",
"id": 788,
"tags": "ions, nuclear"
} |
Scherk-Schwarz and other compactifications? | Question: I have been thinking about various types of compactifications and have been wondering if I have been understanding them, and how they all fit together, correctly.
From my understanding, if we want to compactify spacetime down from $D$ to $d$ dimensions by writing $M_D = \mathbb{R}^d \times K_{D-d}$. We can do this the following way:
"General" compactification:
Find the universal cover of $K$, and call it $C$. $G$ is a group that acts freely on $C$, and $K = C/G$.
Then, the $D$-dimensional Lagrangian only depends on orbits of the group action: $\mathcal{L}_D[\phi(x,y)] = \mathcal{L}[\phi(x,\tau_g y)]$, $\forall g \in G$.
A necessary and sufficient condition for this is to require that the field transform under a global symmetry:
$\phi(x,y) = T_g \phi(x,\tau_g y)$.
"General" compactifications seem to also be called Scherk-Schwarz compactifications (or dimensional reductions if we only keep the zero modes). An "ordinary" compactification has $T_g = Id$, and an orbifold compactification has a group action with fixed points.
Assuming this is correct, is this the most general definition of a compactification?
Is it reasonable to introduce gauge fields by demanding that the $T_g$ action be local instead of global? I thought we should generally not expect quantum theories to have global symmetries, but any reference I've seen seems to use only global symmetries in the Lagrangian.
Answer: The Scherk-Schwarz compactification is just an extremely special kind of compactification of one dimension in which the spacetime fermions are chosen antiperiodic along one circle of the compactification manifold. It's extremely far from a "general compactification" of string/M-theory.
General compactifications of string/M-theory allow many features that can't be discussed by the simple formulae above; in some sense, all of string/M-theory is needed to answer the question of what a general compactification can and cannot be. A general compactification may involve a manifold, but it doesn't have to be a manifold; it may be an orbifold with orbifold singularities. In fact, orbifold singularities are not the only allowed ones; one may have conifold singularities and probably much more general ones, too. Various fields may have nontrivial monodromies. One may wrap branes, try to incorporate boundaries at the "end of the world", and add many kinds of Wilson lines and fluxes, generalizing the electromagnetic fluxes, with various quantization conditions, and other features, some of which may even be unknown at the present moment.
The compactifications may get strongly coupled at various loci and interpolate between totally different descriptions such as type II string theory and M-theory, too. Moreover, one may have non-geometric compactifications, too. The question as formulated seems to be too broad.
Moreover, I also have to disagree with the first comment written under the question. It's a string/M-compactification, not a dimensional reduction, which is a consistent theory. The simple dimensional reduction is just an approximate effective theory at distances much longer than the size of the compactification manifold and such an effective theory almost always suffers from some kind of an inconsistency. To fix these inconsistencies, one needs to consider the fully consistent theory, namely the string/M-compactification.
There is no "universal algorithm" to "upgrade" a dimensionally reduced theory to a compactification (the term "upgrade" may mean either "dimensional oxidation" if we just try to add dimensions to a theory; or "UV completion" which means finding the precise compactification including all the short-distance physics that may be approximated by a given effective theory). Many compactifications – as many as the notorious number $10^{500}$ – of compactifications may lead to very similar effective theories in the large dimensions. There's no easy way to find the "correct one" among them.
It's also hard to understand in what sense "most of literature" discusses the dimensional reductions rather than compactifications. I don't think it's the case. Of course, if one looks at literature about dimensional reductions only, one may get this conclusion. However, true literature on string theory doesn't agree with the statement. It's mostly about the full physics of compactifications, not just the reductions – otherwise it wouldn't really be a stringy literature. This is true pretty much by definition. | {
"domain": "physics.stackexchange",
"id": 3346,
"tags": "string-theory, research-level, compactification"
} |
Echo server with CompletableFuture | Question: I recently wrote a simple echo server in Java 7 using the NIO APIs so it was asynchronous and non-blocking. Then I decided, as a learning experience, to redo it with Java 8 hoping to use a more functional style and not have nested callbacks. I'm struggling to understand how to use the new CompletableFuture class with the Futures returned by AsynchronousSocketChannel.
This code currently works but is slower than the Java 7 version. If anyone can point out ways to improve it (or maybe that I'm going about it completely wrong) it would be appreciated. The main problem, to me at least, is that I now have to call get() on three Futures, whereas before I didn't have any blocking operations.
try (final AsynchronousServerSocketChannel listener =
AsynchronousServerSocketChannel.open()) {
listener.setOption(StandardSocketOptions.SO_REUSEADDR, true);
listener.bind(new InetSocketAddress("localhost", 8080));
while (true) {
AsynchronousSocketChannel connection = listener.accept().get();
CompletableFuture<AsynchronousSocketChannel> connectionPromise =
CompletableFuture.completedFuture(connection);
CompletableFuture<ByteBuffer> readerPromise = CompletableFuture.supplyAsync(() -> {
ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
try {
connection.read(buffer).get();
return (ByteBuffer) buffer.flip();
} catch (InterruptedException | ExecutionException e) {
connectionPromise.completeExceptionally(e);
}
return null;
});
readerPromise.thenAcceptAsync((buffer) -> {
if (buffer != null) {
try {
connection.write(buffer).get();
connection.close();
} catch (InterruptedException | ExecutionException | IOException e) {
readerPromise.completeExceptionally(e);
}
}
});
}
} catch (IOException | InterruptedException | ExecutionException e) {
e.printStackTrace();
}
Answer: What to do with checked Exceptions thrown within a CompletableFuture?
Let's look at what CompletableFuture does with unchecked exceptions.
As @palacsint noted:
In all other cases, if a stage's computation terminates abruptly with
an (unchecked) exception or error, then all dependent stages requiring
its completion complete exceptionally as well, with a
CompletionException holding the exception as its cause.
That is to say: if completableFuture.get() would throw, say, an ExecutionException:NullPointerException, then so would future.thenApply(funcOne).get() and future.thenApply(funcOne).thenApply(funcTwo).get() and so on. At each stage a layer of CompletionException is peeled off and the cause is wrapped in a fresh one; .get() peels that layer and wraps the cause in an ExecutionException, so that the contents of the ExecutionException are not drowned in layers and layers of CompletionExceptions.
So what happens if you wrap a checked exception in a CompletionException? It percolates through the chain and you get an ExecutionException:IOException, as if you could throw a checked exception from a Supplier<T>, which you can handle differently than if you had just wrapped it in a RuntimeException. This is what you are trying to do with connectionPromise.completeExceptionally(e), as far as I can understand, and it is what would happen if someone else called completeExceptionally on the enclosing CompletableFuture. I am proposing this because you might prefer unrecoverable RuntimeExceptions to indicate program errors, as much as is prudent and no more: if I am ignoring fifty RuntimeExceptions because some clients have a sketchy wireless connection, then I might also ignore some other RuntimeException that is indicative of a grave programming error. To be fair, this may be considered as depending on undocumented behavior. If that troubles you, you can define your own unchecked wrapper for checked exceptions.
Finally, on this point: looking at the API, CompletableFutures are clearly meant to be chained.
Accepting Connections Synchronously
You need not really do this. You can either start the CompletableFuture chain with a call to supplyAsync, giving it a suitable Executor such as a fixed thread pool, a cached pool, or a custom ThreadPoolExecutor. Or, similarly, you can accept connections in a callback using an AsynchronousChannelGroup. Since I do not know much about NIO2, I am copying that part from an online source.
public static void serverLoop() throws IOException {
// These can be injected and thus configured from without the application
ExecutorService connectPool = Executors.newFixedThreadPool(10);
Executor readPool = Executors.newCachedThreadPool();
Executor writePool = Executors.newCachedThreadPool();
AsynchronousChannelGroup group = AsynchronousChannelGroup.withThreadPool(connectPool);
try (AsynchronousServerSocketChannel listener =
AsynchronousServerSocketChannel.open(group)) {
listener.setOption(StandardSocketOptions.SO_REUSEADDR, true);
listener.bind(new InetSocketAddress("localhost", 8080));
while (true) {
listener.accept(null, handlerFrom(
(AsynchronousSocketChannel connection, Object attachment) -> {
assert attachment == null;
CompletableFuture.supplyAsync(() -> {
try {
ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
connection.read(buffer).get();
return (ByteBuffer) buffer.flip();
} catch (InterruptedException | ExecutionException ex) {
throw new CompletionException(ex);
}}, readPool)
.thenAcceptAsync((buffer) -> {
try {
if (buffer == null) return;
connection.write(buffer).get();
} catch (InterruptedException | ExecutionException ex) {
throw new CompletionException(ex);
}
}, writePool)
.exceptionally(ex -> {
handleException(ex);
return null;
});
}));
}
}
}
private static <V, A> CompletionHandler<V, A> handlerFrom(BiConsumer<V, A> completed) {
return handlerFrom(completed, (Throwable exc, A attachment) -> {
assert attachment == null;
handleException(exc);
});
}
private static <V, A> CompletionHandler<V, A> handlerFrom(
BiConsumer<V, A> completed,
BiConsumer<Throwable, A> failed) {
return new CompletionHandler<V, A>() {
@Override
public void completed(V result, A attachment) {
completed.accept(result, attachment);
}
@Override
public void failed(Throwable exc, A attachment) {
failed.accept(exc, attachment);
}
};
}
Escaping from Callback Hell
This is more a comment than a review item. As far as I have looked, there is no fluent interface yet for general Futures. You could roll your own, but you would have to implement dozens of methods yourself. Because of the peculiarities of Java's type system, one cannot have real monads, traits, extension methods, and similar facilities that other languages provide to help library developers with this burden.
EDIT
I realized that it is possible to do this:
hoping to use a more functional style and not have nested callbacks.
But I do not claim for one moment this is better than the usual callback hell.
CompletableFuture.completedFuture(
(TriConsumer<CompletionHandler<Integer, ByteBuffer>, ByteBuffer, AsynchronousSocketChannel>)
(self, bbAttachment, scAttachment) -> {
if (bbAttachment.hasRemaining()) {
scAttachment.write(bbAttachment, bbAttachment, self);
} else {
bbAttachment.clear();
}
})
.thenApply((consumer) -> (
(TriConsumer<Integer, ByteBuffer, AsynchronousSocketChannel>)
(result, buffer, scAttachment) -> {
if (result == -1) {
try {
scAttachment.close();
} catch (IOException e) {
e.printStackTrace();
}
}
scAttachment.write((ByteBuffer) buffer.flip(), buffer, handlerFrom(
(Integer result2, ByteBuffer bbAttachment, CompletionHandler<Integer, ByteBuffer> self) -> {
consumer.accept(self, bbAttachment, scAttachment);
}));
}))
.thenApply((consumer) -> (Consumer<AsynchronousSocketChannel>) connection -> {
final ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
connection.read(buffer, connection, handlerFrom(
(Integer result, final AsynchronousSocketChannel scAttachment) -> {
consumer.accept(result, buffer, scAttachment);
}));
})
.thenAccept((consumer)->
listener.accept(null, handlerFrom(
(AsynchronousSocketChannel connection, Void v, CompletionHandler<AsynchronousSocketChannel, Void> self) -> {
listener.accept(null, self); // get ready for next connection
consumer.accept(connection);
})));
Where:
@FunctionalInterface
interface TriConsumer<T1, T2, T3> {
void accept(T1 t1, T2 t2, T3 t3);
}
private static <V, A> CompletionHandler<V, A> handlerFrom(TriConsumer<V, A, CompletionHandler<V, A>> completed) {
return handlerFrom(completed, (Throwable exc, A attachment) -> {
assert attachment == null;
handleException(exc);
});
}
private static <V, A> CompletionHandler<V, A> handlerFrom(
TriConsumer<V, A, CompletionHandler<V, A>> completed,
BiConsumer<Throwable, A> failed) {
return new CompletionHandler<V, A>(){
@Override
public void completed(V result, A attachment) {
completed.accept(result, attachment, this);
}
@Override
public void failed(Throwable exc, A attachment) {
failed.accept(exc, attachment);
}
};
}
Remarks
Since nio2 methods are already async, we stick with the sync methods of CompletableFuture.
Although the callbacks are not nested anymore, they are upside down. | {
"domain": "codereview.stackexchange",
"id": 7216,
"tags": "java, asynchronous, networking, socket, java-8"
} |
Allowing query handlers to use other query handlers | Question: I recently started looking into CQRS when using Entity Framework and the impact it has had on my systems has been overwhelming.
I implemented the following patterns described in these blog posts:
Query implementation
Command implementation
I have implemented these with Autofac as dependency injection, and Entity Framework with code first.
Now for the question. I have decided to let my query handlers use each other, and build upon other query handlers results. Is that considered okay? What issues may I run into in the future (if any)? Is the code SOLID?
Here's an example of my GetEmployeesQueryHandler:
class GetEmployeesQueryHandler : IQueryHandler<GetEmployeesQuery, IQueryable<Employee>>
{
private readonly DatabaseContext database;
public GetEmployeesQueryHandler(DatabaseContext database)
{
this.database = database;
}
public IQueryable<Employee> Handle(GetEmployeesQuery query)
{
return database.Employees;
}
}
My Entity Framework DatabaseContext gets injected into the constructor, and I simply return all employees.
Then I have a more specific query, which bases itself on this query. It does that by first fetching all employees, and then filtering them based on the Id of the employee.
class GetEmployeeByIdQueryHandler : IQueryHandler<GetEmployeeByIdQuery, Employee>
{
private readonly IQueryProcessor processor;
public GetEmployeeByIdQueryHandler(IQueryProcessor container)
{
this.processor = container;
}
public Employee Handle(GetEmployeeByIdQuery query)
{
var employees = processor.Process(new GetEmployeesQuery());
return employees.SingleOrDefault(e => e.Id == query.EmployeeId);
}
}
And that's it. What do you think? Is it okay for my GetEmployeesByIdQueryHandler to use the GetEmployeesQueryHandler to get all employees first, for better code reuse?
Note that I return an IQueryable<Employee> when I fetch all employees, so the SQL query will still be lazily generated, and evaluated in an optimized way.
For those who want a TL;DR and don't want to read the blog posts, below are the interfaces and the rest of the classes backing up my code.
Implementations
GetEmployeesQuery
public class GetEmployeesQuery : IQuery<IQueryable<Employee>>
{
}
GetEmployeeByIdQuery
public class GetEmployeeByIdQuery : IQuery<Employee>
{
public Guid EmployeeId { get; set; }
}
QueryProcessor
/// <summary>
/// Represents a class which automatically instantiates a QueryHandler that corresponds to a given Query, and processes the result of running the Query on that QueryHandler.
/// </summary>
sealed class QueryProcessor : IQueryProcessor
{
private readonly IComponentContext context;
public QueryProcessor(IComponentContext context)
{
this.context = context;
}
/// <summary>
/// Automatically figures out which QueryHandler belongs to the given Query, instantiates it, and returns the result of running the Query on that QueryHandler.
/// </summary>
/// <typeparam name="TResult">The type of result to return. This can be infered from Query given.</typeparam>
/// <param name="query">The query to return to use as basis for finding a suitable QueryHandler and returning the result.</param>
/// <returns></returns>
public TResult Process<TResult>(IQuery<TResult> query)
{
var handlerType = typeof(IQueryHandler<,>).MakeGenericType(query.GetType(), typeof(TResult));
dynamic handler = context.Resolve(handlerType);
return handler.Handle((dynamic)query);
}
}
Interfaces
IQueryHandler
/// <summary>
/// Defines a test-friendly query handler that can be automatically instantiated and populated with dependency injection.
/// </summary>
/// <typeparam name="TQuery">The type of query that this query handler should handle. For example, GetEmployeeByIdQuery.</typeparam>
/// <typeparam name="TResult">The type of result that the query returns. Must be the same as defined in the query itself. For example, Employee.</typeparam>
public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
TResult Handle(TQuery query);
}
IQueryProcessor
public interface IQueryProcessor
{
TResult Process<TResult>(IQuery<TResult> query);
}
IQuery
public interface IQuery<TResult>
{
}
Answer:
I have decided to let my query handlers use each other, and build upon other query handlers results. Is that considered okay? What issues may I run into in the future (if any)? Is the code SOLID?
You are letting components be built up by other components. This is completely SOLID and is a good approach.
What I would like to warn about is returning IQueryable<T>. This is fine as long as the query handler is used by other query handlers (since this enables composable queries with good performance), but don't return an IQueryable<T> to the presentation layer. That makes the system unreliable and hard to test. It means you will have to implement sorting and paging inside your business layer, but this is actually quite easy.
Note that your components should not dispose any dependencies that are injected into them. The reason is that such a component doesn't own the dependency and has no idea what the lifetime of the dependency is. Disposing that dependency can make the application break, and this is something you already noted in the comments.
As a matter of fact, you are violating SOLID by injecting a dependency that implements IDisposable; you are violating the Dependency Inversion Principle. This principle states that "abstractions should not depend on details", but IDisposable is an implementation detail. If you prevent having IDisposable on the abstraction, but instead place it on the implementation, you'll see that it becomes impossible for the consumer to call Dispose() (which is good, because it has no idea whether or not it should call it), and now only the one who created that dependency can dispose that dependency (which is your composition root). This makes your application code much simpler, because you will hardly ever need to implement IDisposable at all.
In your case, however, you can't remove IDisposable from the DatabaseContext, because you inherit from DbContext. But injecting a DbContext is itself a DIP violation. Although not all DIP violations are bad (and you will always have DIP violations somewhere in your application), I'd rather hide the DbContext from my code, for instance by using an IUnitOfWork abstraction with a single IQueryable<T> Set<T>() method. The advantage of this is that the DbContext can be resolved at runtime instead of injected into consumers, and it allows you to easily wrap the IUnitOfWork with some sort of security decorator that filters the results based on the user's rights.
Do note that the part of the system that created a disposable component typically holds the ownership, and is therefore responsible for disposing it. Although this ownership can be transferred, you should typically not do this, because it complicates your application (I think you already noticed this). So your composition root creates this dependency and should dispose it. In case you use a DI library, the library will create that instance for you; in that case, the library is also responsible for disposing it for you. Although you can view this as 'something magical', IMO it's simply a basic feature of the library you are using. You should understand the libraries you use. In the case of Autofac, you can be pretty sure that Autofac handles this correctly for you. In case you switch back to Pure DI (formerly known as poor man's DI), your code will obviously again be in control of that dependency, and you will have to implement this disposal manually again. There's no design smell here. That said, although disposing is not the problem, scoping might actually be. Please read this answer of mine.
Although the query handler pattern might seem over-engineered at first, if you read the article closely, you'll see that it is simply an implementation of the SOLID principles. IMO, you should always strive to adhere to the SOLID principles in the core parts of your application, and querying is obviously a core part of every application. My experience is that query handlers work well even on small projects. They allow adding cross-cutting concerns with such ease that they can really boost productivity and flexibility in smaller applications.
Using IQueryProcessor instead of injecting IQueryHandlers directly has some downsides, such as the fact that handlers get resolved lazily at runtime. This makes it harder to verify the complete object graph, and it makes it hard to see which queries a component executes. On the other hand, it makes your code cleaner, because the generic types and often long names of query classes can add noise to your code (that's more a limitation of C# than a limitation of the pattern). In that case the IQueryProcessor can help.
Since your IQueryProcessor implementation will be part of your composition root, it is completely fine for it to have a dependency on the container, since the rest of your composition will have a dependency on the container as well (although it would be good to call it AutofacQueryProcessor). It is incorrect to assume that this is an implementation of the Service Locator anti-pattern; this is clearly explained by Mark Seemann here. In case you swap your DI library, you will have to change your complete composition root, including the IQueryProcessor implementation (because it is part of the composition root). There's nothing wrong with that.
It is no problem that your container is stored inside your graph. Although you could try making the process generic by injecting a Func, that would still mean that the injected Func would depend on the container, making the container still part of the object graph. This is actually what dependency inversion is all about. Components can use other components and code at runtime that they don't have a compile time dependency on. | {
"domain": "codereview.stackexchange",
"id": 27495,
"tags": "c#, entity-framework, dependency-injection, autofac"
} |
To which representation of the Lorentz group do the objects of the Duffin-Kemmer-Petiau (DKP) equation belong? | Question: The DKP equation is, allegedly, a relativistic spin-0 or spin-1 equation. In the spin-0 case the defining algebra has a 5-dimensional representation; for spin-1 the four matrices are 10-dimensional. In either case the E.O.M. looks superficially just like the Dirac equation.
Resources online about the equation are scarce. Is it an irreducible representation of the Lorentz group? If not, how is it possibly Lorentz invariant? I.e., how does a real scalar field have more than a single component?
Answer: After some research I was able to answer this question. These references (1), (2) were extremely useful.
The elements $\beta$ belong to what is known nowadays as the meson algebra, and these are studied more contemporaneously alongside the usual Clifford algebras by mathematicians. Very loosely one may think of them as being 'reducible' generalizations of the usual Clifford algebra.
Speaking of, from the fact that $[\beta^\mu,\beta^\nu] = M^{\mu\nu}$ are generators of the Lorentz Lie algebra, we may by direct computation show the following facts about the two cases:
Spin-$0$:
In the $5$ dimension spin-$0$ DKE-equation, the wavefunction belongs in the reducible representation of the Lorentz group labelled $$(0,0)\oplus(\frac{1}{2},\frac{1}{2}).$$ That is to say, a scalar stacked atop of a four-vector, as guessed by user Buzz. Let us abuse notation and write the wavefunction via
$$
\psi = \begin{pmatrix}
\phi \\
A_\mu
\end{pmatrix},
$$with $\phi$ the Lorentz scalar field and $A_{\mu}$ the components of the vector field. In more familiar notation the DKP-equation can be written as:
$$
\begin{pmatrix}
\partial_\mu A^\mu \\
\partial_\mu \phi
\end{pmatrix} =
\frac{mc}{i\hbar}\begin{pmatrix}
\phi \\
A_\mu
\end{pmatrix}.
$$
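As a concrete sanity check, the defining trilinear relation of the meson algebra, $\beta^\mu \beta^\nu \beta^\lambda + \beta^\lambda \beta^\nu \beta^\mu = \eta^{\mu\nu}\beta^\lambda + \eta^{\lambda\nu}\beta^\mu$, can be verified numerically in the 5-dimensional spin-0 representation. The explicit matrices below are one simple choice consistent with the scalar-plus-vector stacking above; this particular construction is my own sketch, not taken from the cited references:

```python
import numpy as np

# Metric signature (+, -, -, -); this sign convention is an assumption of the sketch.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# One explicit 5x5 spin-0 construction acting on (phi, A_0, ..., A_3):
# (beta^mu)[0, mu+1] = eta^{mu mu} and (beta^mu)[mu+1, 0] = 1.
beta = []
for mu in range(4):
    b = np.zeros((5, 5))
    b[0, mu + 1] = eta[mu, mu]
    b[mu + 1, 0] = 1.0
    beta.append(b)

# Verify the trilinear meson-algebra relation for every index triple.
for mu in range(4):
    for nu in range(4):
        for lam in range(4):
            lhs = beta[mu] @ beta[nu] @ beta[lam] + beta[lam] @ beta[nu] @ beta[mu]
            rhs = eta[mu, nu] * beta[lam] + eta[lam, nu] * beta[mu]
            assert np.allclose(lhs, rhs)
print("meson algebra satisfied in the 5-dimensional representation")
```

The same check, run with the 10-dimensional matrices instead, would cover the spin-1 case below.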
Spin-$1$:
In the $10$ dimensional spin-$1$ DKE-equation, the wavefunction belongs in the reducible representation of the Lorentz group labelled $$(1,0)\oplus(0,1)\oplus(\frac{1}{2},\frac{1}{2}).$$ I.e. the sum of the adjoint (field strength tensor) and fundamental (vector) rep's. Let us abuse notation and write the wavefunction via
$$
\psi = \begin{pmatrix}
\vec{E} \\
\vec{B} \\
A^\mu
\end{pmatrix},
$$with $\vec{E},\vec{B}$ the timelike and spacelike $3$-vectors associated to an anti-symmetric tensor of rank-$2$ (E&M fields from a typical Faraday tensor $F^{\alpha \beta}$), and $A^{\mu}$ the components of a vector field. In the familiar notation the DKP-equation can be re-written as:
$$
\begin{pmatrix}
-\partial_t\vec{A}-\vec{\nabla}A^0 \\
\vec{\nabla}\times \vec{A} \\
\vec{\nabla}\cdot \vec{E} \\
\vec{\nabla}\times \vec{B} - \partial_t \vec{E}
\end{pmatrix} = -\frac{mc}{i\hbar}\begin{pmatrix}
\vec{E} \\
\vec{B} \\
A^\mu
\end{pmatrix}.
$$
(Despite the rather gross mismatch of notation it should be clear how the components line up if one counts the arguments dimensions.)
Thus we see the meson algebra is a clever way to embed the structure of coupled 'Proca' like equations into a single equation superficially identical to the Dirac equation. | {
"domain": "physics.stackexchange",
"id": 96962,
"tags": "special-relativity, field-theory, representation-theory, covariance"
} |
Magnetizing inductance in transformer | Question: I asked this question in electronics stack exchange as well but I thought it would also be applicable here as my question revolves largely around Maxwell's equations.
I am learning about transformers and in the equivalent circuit the magnetizing inductance of the excitation branch is supposed to represent the current needed to "set up" the flux in the core due to finite permeability of iron.
However, this does not explain to me why flux needs to be "set up" to begin with. The voltage induced is not a function of the magnitude of the flux but rather the rate of change of flux. The flux could at one instant be zero and changing quickly and the same voltage would be induced in a winding as if the magnitude were larger with the same rate of change.
This seems to be one of the biggest problems when analyzing "magnetic circuits" to me as there is a clear discrepancy in the duality with respect to Maxwell's equations. The rate of change of magnetic flux would seem to be the "magnetic current" especially for energy conservation and duality for Faraday's law with Ampere's law.
Can someone explain why there needs to be flux "set up" in the core and how finite permeability would change the relations for an inductor.
Answer: For a physical transformer, the self-inductance of the primary and secondary are both finite since the magnetic core has finite permeability.
Assuming perfect coupling and zero winding resistance, the coupled inductor equations are (in the phasor domain)
$$V_1 = j\omega L_1 I_1 + j\omega MI_2$$
$$V_2 = j\omega MI_1 + j\omega L_2 I_2$$
where
$$M = k\sqrt{L_1L_2}$$
and $k=1$ for perfect coupling.
When the secondary is left open (no load attached to the secondary), the (phasor) current $I_2 = 0$ thus
$$I_1 = \frac{V_1}{j\omega L_1}$$
For an ideal transformer, $I_1$ should be zero when $I_2 = 0$ and so this is a departure from ideal transformer behavior. Clearly, placing an inductor with inductance $L_1$ in parallel with the primary of an ideal transformer models this departure and this parallel inductance is known as the magnetization inductance.
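As a numeric illustration of that last formula, here is a small sketch computing the magnetizing current drawn with the secondary open (all component values are made up for the example):

```python
import cmath
import math

# Hypothetical example values, chosen only for illustration.
V1 = 230.0   # primary voltage phasor, taken as real (V)
f = 50.0     # line frequency (Hz)
L1 = 10.0    # primary self-inductance (H), finite because core permeability is finite

omega = 2 * math.pi * f
I1 = V1 / (1j * omega * L1)   # I1 = V1 / (j*omega*L1) with the secondary open

print(abs(I1))           # magnitude of the magnetizing current (A)
print(cmath.phase(I1))   # -pi/2: the current lags the voltage by 90 degrees
```

A higher core permeability means a larger L1 and hence a smaller magnetizing current, approaching the ideal-transformer limit.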
This magnetization inductance is due to the finite permeability of the core. If the core permeability were 'infinite', the primary (and secondary) inductance would be 'infinite', and then there would be no need for a parallel magnetization inductance. | {
"domain": "physics.stackexchange",
"id": 52936,
"tags": "electromagnetism, magnetic-fields, electric-circuits, maxwell-equations, electromagnetic-induction"
} |
Does the Earth's magnetic field sort cosmic rays by charge? | Question: The Earth is struck by both negative and positively charged particles. As these particles interact with the Earth's magnetic field they become deflected. Do the magnetic poles of the Earth receive correspondingly higher doses of negative particles at one pole and positive at the other?
Answer: Since the cosmic rays come from all over and the magnetic field only applies force perpendicular to the direction of the particles' travel, there isn't any polar sorting in the sense of "positive to the north, negative to the south" or vice versa. Magnetic fields are related to electric fields, but they're not electric and don't cause electrical polarization.
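That perpendicular-only force is the magnetic part of the Lorentz force, $\vec{F} = q\vec{v}\times\vec{B}$. A small numeric sketch (with arbitrary example values) shows that opposite charges feel opposite forces, which is what produces the opposite senses of gyration:

```python
import numpy as np

def lorentz_force(q, v, B):
    # Magnetic part of the Lorentz force: F = q v x B (no electric field here).
    return q * np.cross(v, B)

v = np.array([1.0, 0.0, 0.0])   # particle moving along +x (arbitrary example)
B = np.array([0.0, 0.0, 1.0])   # field along +z (arbitrary example)

F_pos = lorentz_force(+1.0, v, B)
F_neg = lorentz_force(-1.0, v, B)

# The forces are equal and opposite, so the two charges gyrate in opposite senses,
# and the force is always perpendicular to v: it never sorts charges along the field.
assert np.allclose(F_pos, -F_neg)
assert np.dot(F_pos, v) == 0.0
```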
However, the magnetic field does alter the path of incoming rays, causing them to spiral in toward the poles; and depending on which pole and which charge are involved, the particles will spiral in opposite directions: negative charges spiral clockwise in the same field in which positive charges spiral counterclockwise. So there is a kind of sorting going on, in that sense. | {
"domain": "physics.stackexchange",
"id": 24653,
"tags": "magnetic-fields, earth, cosmic-rays"
} |
how to use Timer | Question:
I have read the tutorial on the webpage, but I still don't know how to use it; in particular, I don't know how to write the function void callback(). I hope someone could tell me how to write and use it in more detail. It would be better if there were some examples.
Originally posted by cros on ROS Answers with karma: 3 on 2014-12-18
Post score: 0
Original comments
Comment by BennyRe on 2014-12-18:
Which timer and which tutorial do you mean?
Answer:
Let's say you know how a function call works as in the following example:
int main(int argc, char **argv) {
ros::init(argc, argv, "example"); // needed before any other ROS call
ros::Rate loop(50);
while(ros::ok()) {
callmyfunction(); // <= do that until the end of the world or ros kaputt
loop.sleep();
}
}
where
void callmyfunction() {
// function body
}
is your function defined somewhere in your code. That's the most usual way
Ok, now let's say you have a situation where you need to do other things in your main while() loop, or for some reason you don't want to call that function every loop (maybe because it has another rate or frequency). An example could be a function that publishes points to RViz: it does so every 0.5 seconds, but your main routine (your robot) runs much faster (with a frequency of 100 Hz, for instance).
In this case you can use timers and your code changes like following:
ros::Timer my_desired_frequency;
void callmyfunction(const ros::TimerEvent& event); // forward declaration so main can reference it
int main(int argc, char **argv) {
ros::init(argc, argv, "my_node"); // initialize ROS before creating handles
ros::NodeHandle nh_;
my_desired_frequency = nh_.createTimer(ros::Duration(2.0), callmyfunction); // Automatically calls the function every 2.0 seconds
ros::Rate loop(50);
while(ros::ok()) {
// Here the main code for your Terminator T1000
/* callmyfunction(); */ // Not needed anymore since it runs independently
ros::spinOnce(); // NEEDED since callbacks for timers, publishers and listeners must be executed
loop.sleep();
}
}
void callmyfunction(const ros::TimerEvent& event) { // <= New signature here... for the same function
// function body same as above
}
Using that timer, the function is called independently and at a different (or, if you want, the same) frequency from your main routine. It doesn't need to be synchronized with the main loop. Keep it simple: you can think of it as running in the background, so you don't need to call the function directly in your code.
UPDATE: this is not the only way to use a timer. You can create a timer as a member function/variable in a class or as a functor. For the right syntax you can check the following link, and ask if you have questions.
Hope my explanation helps.
Regards
Originally posted by Andromeda with karma: 893 on 2014-12-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by kramer on 2014-12-18:
One note, for clarity: since it's dependent on the callback mechanism, the timer cannot run faster than the spin rate. Slower, as you show, or the same rate, but not faster.
Comment by Andromeda on 2014-12-18:
Thanks kramer! | {
"domain": "robotics.stackexchange",
"id": 20368,
"tags": "timer"
} |
Imaginary part of refractive index = absorption or extinction? | Question: I know that the imaginary part of the complex refractive index results in a damping of the amplitude of an electromagnetic wave. I wonder whether this is due to absorption alone, or whether it also comprises scattering of photons out of the beam, which in the case of exponential laws (Beer-Lambert) is known as the extinction coefficient (absorption plus losses due to scattering out of the beam).
And if the imaginary part of the refractive index does not account for scattering, but describes only absorption... what is then describing scattering? The real part of the refractive index?
Answer: Refractive index doesn't describe scattering at all. The real part indicates deflection of an incident ray after transmission, while the imaginary part describes extinction due to absorption.
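Quantitatively, the imaginary part $\kappa$ of $\tilde{n} = n + i\kappa$ enters the Beer-Lambert law through the intensity absorption coefficient $\alpha = 4\pi\kappa/\lambda_0$. A short sketch, with made-up numbers chosen only for illustration:

```python
import math

def absorption_coefficient(kappa, vacuum_wavelength):
    # Intensity absorption coefficient from the imaginary index:
    # alpha = 4*pi*kappa / lambda_0.
    return 4 * math.pi * kappa / vacuum_wavelength

# Hypothetical example values.
kappa = 0.01    # imaginary part of the refractive index
lam0 = 500e-9   # vacuum wavelength (m)
z = 10e-6       # propagation depth (m)

alpha = absorption_coefficient(kappa, lam0)
transmittance = math.exp(-alpha * z)   # Beer-Lambert: I/I0 after depth z
print(alpha, transmittance)
```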
Scattering is a large problem that has a bunch of input parameters like e.g. size distribution of scatterers, their (complex) refractive indices. Sometimes also shapes and orientations of scatterers are significant (e.g. when describing halos).
There's no such thing as a refractive index that would by itself characterize extinction due to scattering. What does characterize relation between extinction and absorption of a medium is single scattering albedo. | {
"domain": "physics.stackexchange",
"id": 89344,
"tags": "optics, electromagnetic-radiation"
} |
Can we measure the speed of light in one direction? | Question: I'm sorry if this is a dumb question. Recently I was learning that it's impossible for us to measure the speed of light in one direction. We can only measure it in two directions and we assume that it's speed is the same in both directions by convention.
https://en.wikipedia.org/wiki/One-way_speed_of_light
But we have been able to directly observe the effects of the finite speed of light since the 1600s, when astronomers tried to use Io as a way to measure time without a clock and found that it appeared to orbit slightly faster while the Earth was moving towards it and slightly slower while moving away from it, for a total maximum discrepancy of 16 minutes. This delay is not from light bouncing off of Io's surface, since it does so in both situations. The delay is only from the light needing to travel an extra 186 million miles between the two measurements.
So my question is, is this second example an actual measurement of the speed of light in one direction? I don't see why one source says we can't measure light in one direction while the second source suggests that we can. Please help me clear up this contradiction.
Answer: Yes, it is often assumed that Rømer measured the speed of light in one direction. It may seem strange, but the Rømer velocity is also the velocity obtained under the tacit assumption of the equality of the speeds of light in opposite directions. The fact of the matter is that Rømer and Cassini were speculating about the movement of Jupiter's satellites, automatically assuming that the observers' space was isotropic.
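Incidentally, the roughly 16-minute discrepancy quoted in the question is just the light-travel time across the diameter of Earth's orbit; a quick back-of-the-envelope check (which necessarily uses the conventional two-way value of $c$):

```python
# Light-travel time across the diameter of Earth's orbit (constants approximate).
AU = 1.496e11   # astronomical unit (m)
c = 2.998e8     # speed of light (m/s); note this is the conventional two-way value

travel_time_min = 2 * AU / c / 60
print(travel_time_min)   # roughly 16.6 minutes, matching the quoted ~16 minutes
```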
The Estonian - Australian physicist Leo Karlov showed that Rømer actually measured the speed of light by implicitly making the assumption of the equality of the speeds of light back and forth.
L. Karlov, “Does Roemer's method yield a unidirectional speed of light?” Australian Journal of Physics 23, 243-258 (1970)
Also:
L. Karlov “Fact and Illusion in the speed of light determination of the Roemer type” American Journal of Physics, 49, 64-66 (1981)
Some reflections on the one-way speed of light are here.
Another interesting method to measure the one-way speed of light that you may discover sooner or later is the so-called double Fizeau toothed wheel: two toothed wheels attached to opposite ends of a long rotating shaft, with a beam of light passing between the teeth. This method was employed (probably without proper due diligence) by S. Marinov and M. D. Farid Ahmet.
However, Herbert Ives in his 1939 article "Theory of Double Fizeau toothed wheel" predicted that the outcome of the measurement would be exactly c due to the relativistic twist of the rotating shaft. | {
"domain": "physics.stackexchange",
"id": 80368,
"tags": "special-relativity, visible-light, speed-of-light"
} |
Python tkinter Snake | Question: I have made a little Snake game with Python tkinter. It's my first Python program with one GUI. What can I improve?
from tkinter import *
import random
boardWidth = 30
boardHeight = 30
tilesize = 10
class Snake():
def __init__(self):
self.snakeX = [20, 20, 20]
self.snakeY = [20, 21, 22]
self.snakeLength = 3
self.key = "w"
self.points = 0
def move(self): # move and change direction with wasd
for i in range(self.snakeLength - 1, 0, -1):
self.snakeX[i] = self.snakeX[i-1]
self.snakeY[i] = self.snakeY[i-1]
if self.key == "w":
self.snakeY[0] = self.snakeY[0] - 1
elif self.key == "s":
self.snakeY[0] = self.snakeY[0] + 1
elif self.key == "a":
self.snakeX[0] = self.snakeX[0] - 1
elif self.key == "d":
self.snakeX[0] = self.snakeX[0] + 1
self.eatApple()
def eatApple(self):
if self.snakeX[0] == apple.getAppleX() and self.snakeY[0] == apple.getAppleY():
self.snakeLength = self.snakeLength + 1
x = self.snakeX[len(self.snakeX)-1] # Snake grows
y = self.snakeY[len(self.snakeY) - 1]
self.snakeX.append(x+1)
self.snakeY.append(y)
self.points = self.points + 1
apple.createNewApple()
def checkGameOver(self):
for i in range(1, self.snakeLength, 1):
if self.snakeY[0] == self.snakeY[i] and self.snakeX[0] == self.snakeX[i]:
return True # Snake eat itself
if self.snakeX[0] < 1 or self.snakeX[0] >= boardWidth-1 or self.snakeY[0] < 1 or self.snakeY[0] >= boardHeight-1:
return True # Snake out of Bounds
return False
def getKey(self, event):
if event.char == "w" or event.char == "d" or event.char == "s" or event.char == "a" or event.char == " ":
self.key = event.char
def getSnakeX(self, index):
return self.snakeX[index]
def getSnakeY(self, index):
return self.snakeY[index]
def getSnakeLength(self):
return self.snakeLength
def getPoints(self):
return self.points
class Apple:
def __init__(self):
self.appleX = random.randint(1, boardWidth - 2)
self.appleY = random.randint(1, boardHeight - 2)
def getAppleX(self):
return self.appleX
def getAppleY(self):
return self.appleY
def createNewApple(self):
self.appleX = random.randint(1, boardWidth - 2)
self.appleY = random.randint(1, boardHeight - 2)
class GameLoop:
def repaint(self):
canvas.after(200, self.repaint)
canvas.delete(ALL)
if snake.checkGameOver() == False:
snake.move()
snake.checkGameOver()
canvas.create_rectangle(snake.getSnakeX(0) * tilesize, snake.getSnakeY(0) * tilesize,
snake.getSnakeX(0) * tilesize + tilesize,
snake.getSnakeY(0) * tilesize + tilesize, fill="red") # Head
for i in range(1, snake.getSnakeLength(), 1):
canvas.create_rectangle(snake.getSnakeX(i) * tilesize, snake.getSnakeY(i) * tilesize,
snake.getSnakeX(i) * tilesize + tilesize,
snake.getSnakeY(i) * tilesize + tilesize, fill="blue") # Body
canvas.create_rectangle(apple.getAppleX() * tilesize, apple.getAppleY() * tilesize,
apple.getAppleX() * tilesize + tilesize,
apple.getAppleY() * tilesize + tilesize, fill="green") # Apple
else: # GameOver Message
canvas.delete(ALL)
canvas.create_text(150, 100, fill="darkblue", font="Times 20 italic bold", text="GameOver!")
canvas.create_text(150, 150, fill="darkblue", font="Times 20 italic bold",
text="Points:" + str(snake.getPoints()))
snake = Snake()
apple = Apple()
root = Tk()
canvas = Canvas(root, width=300, height=300)
canvas.configure(background="yellow")
canvas.pack()
gameLoop = GameLoop()
gameLoop.repaint()
root.title("Snake")
root.bind('<KeyPress>', snake.getKey)
root.mainloop()
Answer: Review
Naming conventions
In Python, methods and variables should be named the following way: https://stackoverflow.com/questions/159720/what-is-the-naming-convention-in-python-for-variable-and-function-names
So in your case, it would be:
self.snakeLength > self.snake_length
def eatApple(self) > def eat_apple(self)
Choose better names for variables
Snake.snakeX seems to be a little bit too much. Why don't you rename them:
Snake.snakeX > Snake.x
Snake.snakeLength > Snake.length
Apple.appleX > Apple.x
Use class constants instead of global variables
Usually you want to have as few global variables / constants as possible. Instead of boardWidth = 30 you can create a constant and move it to some class, like BOARD_WIDTH = 30 (see example).
Wrap the main code
Create a new class which wraps the code your script runs. You could, for example, use the Tk class (see example)
Use main
If someone wants to import your code, he will instantly run the whole script, creating a Tkinter widget and running the game loop infinitely. Instead, wrap it with an if __name__ == "__main__": statement.
Prevent the snake from moving backwards
In your implementation it is possible to do a 180° turn, which kills your snake instantly. You could always save the last move, e.g. "w", which blocks all incoming "s", so only "w", "a" and "d" are valid for the next move (see example).
Reduce visibility
You can reduce the visibility of class members by placing __ in front of them. For example
self.apple > self.__apple
Looks ugly, but this way these fields can't be accessed from outside your class.
Use @property
Instead of defining getters manually, use properties (see example).
More information: https://www.smallsurething.com/private-methods-and-attributes-in-python/
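As a minimal standalone illustration (not taken from the game code), a manual getter collapses into a read-only attribute:

```python
class Score:
    def __init__(self):
        self._points = 0  # single leading underscore: "internal" by convention

    @property
    def points(self):
        # Read-only: no setter is defined, so `score.points = 5`
        # raises AttributeError instead of silently corrupting state.
        return self._points

    def add(self):
        self._points += 1

score = Score()
score.add()
print(score.points)  # accessed like an attribute, not score.getPoints()
```

Callers use plain attribute syntax, and you can later add validation in the getter without changing any calling code.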
Code improvements
Instead of event.char == "w" or event.char == "d" or event.char == "s" or event.char == "a" you can define a list of all keys KEYS = ["w", "a", "s", "d"] and then do event.char in Snake.KEYS
Do not write if snake.checkGameOver() == False, instead do if not snake.checkGameOver()
You might use type hinting
Since Python 3.5 there is type hinting (see example).
More information: https://blog.jetbrains.com/pycharm/2015/11/python-3-5-type-hinting-in-pycharm-5/
Code example
snake.py
from tkinter import *
import random
from typing import List
class Apple:
def __init__(self):
self.__x = random.randint(1, App.BOARD_WIDTH - 2)
self.__y = random.randint(1, App.BOARD_HEIGHT - 2)
def create_new_apple(self) -> None:
self.__x = random.randint(1, App.BOARD_WIDTH - 2)
self.__y = random.randint(1, App.BOARD_HEIGHT - 2)
@property
def x(self) -> int:
return self.__x
@property
def y(self) -> int:
return self.__y
class Snake:
KEYS = ["w", "a", "s", "d"]
MAP_KEY_OPP = {"w": "s", "a": "d", "s": "w", "d": "a"}
def __init__(self, apple):
self.__apple = apple
self.__x = [20, 20, 20]
self.__y = [20, 21, 22]
self.__length = 3
self.__key_current = "w"
self.__key_last = self.__key_current
self.__points = 0
def move(self) -> None: # move and change direction with wasd
self.__key_last = self.__key_current
for i in range(self.length - 1, 0, -1):
self.__x[i] = self.__x[i - 1]
self.__y[i] = self.__y[i - 1]
if self.__key_current == "w":
self.__y[0] = self.__y[0] - 1
elif self.__key_current == "s":
self.__y[0] = self.__y[0] + 1
elif self.__key_current == "a":
self.__x[0] = self.__x[0] - 1
elif self.__key_current == "d":
self.__x[0] = self.__x[0] + 1
self.eat_apple()
def eat_apple(self) -> None:
if self.__x[0] == self.__apple.x and self.__y[0] == self.__apple.y:
self.__length = self.__length + 1
x = self.__x[len(self.__x) - 1] # snake grows
y = self.__y[len(self.__y) - 1]
self.__x.append(x + 1)
self.__y.append(y)
self.__points = self.__points + 1
self.__apple.create_new_apple()
@property
def gameover(self) -> bool:
for i in range(1, self.length, 1):
if self.__y[0] == self.__y[i] and self.__x[0] == self.__x[i]:
return True # snake ate itself
if self.__x[0] < 1 or self.__x[0] >= App.BOARD_WIDTH - 1 or self.__y[0] < 1 or self.__y[0] >= App.BOARD_HEIGHT - 1:
return True # snake out of bounds
return False
def set_key_event(self, event: Event) -> None:
if event.char in Snake.KEYS and event.char != Snake.MAP_KEY_OPP[self.__key_last]:
self.__key_current = event.char
@property
def x(self) -> List[int]:
return self.__x.copy()
@property
def y(self) -> List[int]:
return self.__y.copy()
@property
def length(self) -> int:
return self.__length
@property
def points(self) -> int:
return self.__points
class App(Tk):
BOARD_WIDTH = 30
BOARD_HEIGHT = 30
TILE_SIZE = 10
COLOR_BACKGROUND = "yellow"
COLOR_SNAKE_HEAD = "red"
COLOR_SNAKE_BODY = "blue"
COLOR_APPLE = "green"
COLOR_FONT = "darkblue"
FONT = "Times 20 italic bold"
FONT_DISTANCE = 25
TEXT_TITLE = "Snake"
TEXT_GAMEOVER = "GameOver!"
TEXT_POINTS = "Points: "
TICK_RATE = 200 # in ms
def __init__(self, screenName=None, baseName=None, className='Tk', useTk=1, sync=0, use=None):
Tk.__init__(self, screenName, baseName, className, useTk, sync, use)
self.__apple = Apple()
self.__snake = Snake(self.__apple)
self.__canvas = Canvas(self, width=App.BOARD_WIDTH * App.TILE_SIZE, height=App.BOARD_HEIGHT * App.TILE_SIZE)
self.__canvas.pack()
self.__canvas.configure(background=App.COLOR_BACKGROUND)
self.title(App.TEXT_TITLE)
self.bind('<KeyPress>', self.__snake.set_key_event)
def mainloop(self, n=0):
self.__gameloop()
Tk.mainloop(self, n)
def __gameloop(self):
self.after(App.TICK_RATE, self.__gameloop)
self.__canvas.delete(ALL)
if not self.__snake.gameover:
self.__snake.move()
x = self.__snake.x
y = self.__snake.y
self.__canvas.create_rectangle(
x[0] * App.TILE_SIZE,
y[0] * App.TILE_SIZE,
x[0] * App.TILE_SIZE + App.TILE_SIZE,
y[0] * App.TILE_SIZE + App.TILE_SIZE,
fill=App.COLOR_SNAKE_HEAD
) # Head
for i in range(1, self.__snake.length, 1):
self.__canvas.create_rectangle(
x[i] * App.TILE_SIZE,
y[i] * App.TILE_SIZE,
x[i] * App.TILE_SIZE + App.TILE_SIZE,
y[i] * App.TILE_SIZE + App.TILE_SIZE,
fill=App.COLOR_SNAKE_BODY
) # Body
self.__canvas.create_rectangle(
self.__apple.x * App.TILE_SIZE,
self.__apple.y * App.TILE_SIZE,
self.__apple.x * App.TILE_SIZE + App.TILE_SIZE,
self.__apple.y * App.TILE_SIZE + App.TILE_SIZE,
fill=App.COLOR_APPLE
) # Apple
else: # GameOver Message
x = App.BOARD_WIDTH * App.TILE_SIZE / 2 # x coordinate of screen center
y = App.BOARD_HEIGHT * App.TILE_SIZE / 2 # y coordinate of screen center
self.__canvas.create_text(x, y - App.FONT_DISTANCE, fill=App.COLOR_FONT, font=App.FONT,
text=App.TEXT_GAMEOVER)
self.__canvas.create_text(x, y + App.FONT_DISTANCE, fill=App.COLOR_FONT, font=App.FONT,
text=App.TEXT_POINTS + str(self.__snake.points))
if __name__ == "__main__":
App().mainloop()
As always, no warranty. As there are many different styles for programming Python, these are some suggestions for how you COULD improve your code. Many things I mentioned are optional, not necessary. | {
"domain": "codereview.stackexchange",
"id": 32226,
"tags": "python, object-oriented, python-3.x, tkinter, snake-game"
} |
Why is the boiling point of ethyl fluoride lower than that of hydrogen fluoride? | Question: The book, Solomons' Organic Chemistry (for JEE Mains and Advance), contains the following question:
Hydrogen fluoride has a dipole moment of $\pu{1.82 D}$; its boiling point is $\pu{19.34 ^{\circ} C}$. Ethyl fluoride ($\ce{CH3CH2F}$) has an almost identical dipole moment and has a larger molecular weight, yet its boiling point is $\pu{-37.7 ^{\circ} C}$. Explain.
Now while explaining the (very) concept of boiling point and why it differs for different substances the book considers:
Dipole-dipole forces
London dispersion forces
Molecular weight
But when I tried to explain the phenomenon stated in the question, I found it hard to explain the lower boiling point. Following is my approach to the explanation:
Since both have the same (similar) dipole moments, there would be only a negligible difference in their boiling points from this contribution.
London dispersion forces increase in magnitude as the surface area of the molecule increases. So contrary to what was said $\ce{CH3CH2F}$ should have stronger dispersion force interaction (due to larger surface area) and hence higher boiling point as compared to $\ce{HF}$ (given other properties were same).
Given that $\ce{CH3CH2F}$ has higher mass than $\ce{HF}$, therefore it should lead to higher boiling point (as is stated in the question itself).
Now it can be clearly seen that the above stated arguments are against the phenomenon, so I believe that I am missing something in my explanation.
So
What is (if any) wrong/missing in my explanation?
Why is the boiling point of $\ce{CH3CH2F}$ lower than that of $\ce{HF}$?
Answer: The three reasons that you present are not sufficient to explain the high boiling point of pure $\ce{HF}$. There is a fourth reason, which you have not mentioned: the hydrogen bond. $\ce{HF}$ molecules interact with one another through $\ce{H}$-bonds, exactly like $\ce{H_2O}$ molecules attract one another because of $\ce{H}$-bonds. In the liquid state, $\ce{HF}$ behaves as if it is made of heavier molecules, something like $\ce{(HF)_n}$, with $n$ being between about $4$ and $7$. The boiling points of $\ce{HF}$ and $\ce{H_2O}$ are extremely high compared to substances of nearly the same molar weight, like $\ce{CH_4}$ or $\ce{Ne}$. The boiling points of $\ce{H_2O}$ and $\ce{HF}$ do not depend significantly on the van der Waals forces. $\ce{H}$-bonds are much more important than van der Waals forces.
And there are practically no $\ce{H}$-bonds in ethyl fluoride, because $\ce{H}$-bonds occur only when an $\ce{H}$ atom is attached to a strongly electronegative atom. | {
"domain": "chemistry.stackexchange",
"id": 13492,
"tags": "organic-chemistry, intermolecular-forces, boiling-point"
} |
Is it feasible for an unmanned vehicle to travel from outside the atmosphere of one planet to another without additional propulsion? | Question: Within the Solar System (or any other system for that matter) -
Is it feasible for an unmanned vehicle to travel from outside the atmosphere of one planet to another without additional propulsion? That is to say
Given a vehicle in orbit around a planet, say Mercury, suppose a body were launched toward Earth with the caveat that it has sufficient velocity to travel the distance, so that it reaches Earth at the velocity necessary to be in geosynchronous orbit.
Could this be achieved merely by using launch velocity so that the launched body does not need to carry its own propulsion?
Answer: Without a third body to interact with at the arrival point, the answer is "no": it is not possible to take up orbit when you get there unless you have some thrust (but I'll say a little more about the weasel words in a minute).
The basic problem is that you arrive (by definition, from far away) on a hyperbolic orbit and you will, therefore, leave on a hyperbolic orbit.
Exceptions:
You hit the target (that's Richard's "lithobraking") or at least its atmosphere (aerobraking) and lose a lot of energy that way.
You are going "uphill" and can lose the hyperbolic energy to the Sun's gravity. (I've not actually seen a complete calculation for this, but I think that the mean value theorem requires it to be possible).
There is a third (or more) body nearby and you can trade some energy with it (them). Again, I haven't seen any calculation and they will be tedious and exacting.
You have some kind of thrust that is discounted by the question. Perhaps a solar sail. | {
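The "arrive hyperbolic, leave hyperbolic" point can be made concrete with the vis-viva relation: a body coasting in from far away with any leftover excess speed v_inf arrives faster than the local escape speed. A rough numeric sketch for an Earth-like target (the 3 km/s excess speed is an illustrative assumption, not a computed Mercury-to-Earth transfer value):

```python
import math

MU_EARTH = 3.986e14  # m^3/s^2, Earth's gravitational parameter

def speed_at(r, v_inf):
    """Vis-viva for a hyperbolic approach: v^2 = v_inf^2 + 2*mu/r."""
    return math.sqrt(v_inf**2 + 2 * MU_EARTH / r)

r_geo = 4.2164e7                            # m, geosynchronous orbit radius
v_circ = math.sqrt(MU_EARTH / r_geo)        # speed needed for circular GEO
v_esc = math.sqrt(2 * MU_EARTH / r_geo)     # local escape speed at GEO radius
v_arrival = speed_at(r_geo, v_inf=3000.0)   # arriving with 3 km/s excess speed

# Since v_arrival^2 = v_inf^2 + v_esc^2, any nonzero v_inf puts the
# arrival speed above escape speed -- without thrust (or a third body)
# the probe cannot stay in orbit.
print(v_circ, v_esc, v_arrival)
```

The gap between the arrival speed and the circular-orbit speed is exactly the delta-v that some braking mechanism would have to remove.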
"domain": "physics.stackexchange",
"id": 4148,
"tags": "celestial-mechanics"
} |
Calculating real or more accurate specific impulse | Question: This may fall a little under chemistry processes but I felt it has enough pertaining to aerospace to put it here. Basically, I'm in the process of attempting to develop a way to derive $I_{sp}$ as pertaining to rocket engines rather than rely on charted information.
Since specific impulse is essentially the exhaust gas velocity divided by the standard gravitational acceleration,
$$
I_{sp} = \frac{v_e}{g_0}
$$
...it stands to reason that the ideal exhaust gas velocity equation can be substituted in here, giving something like
$$
I_{sp} = \frac{\sqrt{\frac{TR}{M}\cdot\frac{2\gamma}{\gamma-1}\cdot\left(1-\left(\frac{p_e}{p}\right)^{\frac{\gamma - 1}{\gamma}}\right)}}{g_0}
$$
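As a numeric sanity check, the ideal expression can be evaluated directly (the chamber values below are illustrative assumptions for a hydrogen/oxygen-like engine, not data from any specific motor):

```python
import math

G0 = 9.80665   # m/s^2, standard gravity
R = 8.314462   # J/(mol*K), universal gas constant

def ideal_isp(T, M, gamma, pressure_ratio):
    """Ideal specific impulse from chamber temperature T (K), exhaust
    molar mass M (kg/mol), ratio of specific heats gamma, and the
    exit-to-chamber pressure ratio p_e/p."""
    v_e = math.sqrt((T * R / M)
                    * (2 * gamma / (gamma - 1))
                    * (1 - pressure_ratio ** ((gamma - 1) / gamma)))
    return v_e / G0

# Illustrative H2/O2-like numbers: hot chamber, light exhaust (mostly steam),
# a 70:1 chamber-to-exit pressure ratio.
print(ideal_isp(T=3500.0, M=0.016, gamma=1.2, pressure_ratio=1/70))
```

With these assumed inputs the formula lands in the few-hundred-seconds range typical of chemical rockets, which is why it is a useful first-order check even before real-gas corrections.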
The obvious problem here is that this is the ideal exhaust gas velocity, so this is a sort of "perfect-universe" $I_{sp}$. Because most rocket engines use either hydrocarbon or hydrogen/oxygen propellants, water vapor is a major fraction of this exhaust. And as is usually taught early on in chemistry courses, water vapor is a textbook failure of ideal gas behaviors because of its intermolecular forces.
So my question is - is there a "real" specific impulse formula; something like the van der Waals equation for this application?
Answer: There is no general formula for Isp that would provide accurate values for all the propellant combinations and nozzle expansion ratios one might use. The NASA Chemical Equilibrium with Applications (CEA) program can provide the information you are looking for and would be an answer to your question. It is useful for any of the common propellant systems and many exotic propellant systems, e.g., fluorine, metals, etc.
Extracted from nasa CEA webpage:
CEA is a program which calculates chemical equilibrium product concentrations from any set of reactants and determines thermodynamic and transport properties for the product mixture. Built-in applications include calculation of theoretical rocket performance (isp at any nozzle expansion ratio), Chapman-Jouguet detonation parameters, shock tube parameters, and combustion properties.
The code in Fortran is freely available (check the web site). This code is the "bible" for the rocket industry. I used it 50 years ago while working on the Apollo program and on other rocket systems.
The program is very fast. You could set CEA up to be called from your application, or you could run a bunch of cases with the propellants you are most interested in and fit curves to the data for the specific situations you are looking at. | {
"domain": "engineering.stackexchange",
"id": 187,
"tags": "aerospace-engineering"
} |
Junction conditions in GR including electromagnetism | Question: I have recently learned about the Israel junction conditions in GR (as explained in for example Gravitation by MTW). I then tried to generalize it when including Electromagnetism, i.e. matching two spacetimes over a junction but when there is some non-zero electromagnetic content in, but I ran into difficulties. Does anyone know where this has been studied in the literature?
Basically, the problem I ran into was that the Einstein equations are schematically of the form
$$G=F^2+T$$
Here $T$ is a matter stress-energy tensor for some delta-function matter at the junction, $F$ is the electromagnetic field tensor and $G$ is the Einstein tensor. When deriving the Israel junction conditions we integrate over a small slice across the junction, and it turns out that the Einstein tensor has a delta function which comes from a discontinuity in the extrinsic curvature, so we obtain relations between the jump of the extrinsic curvature and the stress-energy tensor of the shell. However, no such delta function can appear in $F^2$ if we assume a discontinuity in e.g. the four-potential. Differentiating $A$ then gives a delta function in $E$ and/or $B$, but $F^2$ is then a delta function squared and contributes nothing.
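For reference, the thin-shell integration described above yields the usual Israel condition relating the jump in extrinsic curvature to the surface stress-energy (sign conventions vary between references):
$$\left[K_{ab}\right] - h_{ab}\left[K\right] = -8\pi S_{ab}$$
where $h_{ab}$ is the induced metric on the junction hypersurface, $S_{ab}$ is the surface stress-energy tensor, and $[\,\cdot\,]$ denotes the jump across the shell.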
If we argue that it is really $B$ and $E$ that are the physical quantities of interest and that they cannot possess a delta function, then $F^2$ would contribute zero to the junction integral since there are no derivatives with respect to $E$ or $B$. Thus the Israel junction conditions should be unchanged by adding electromagnetism, but I don't think this is the right answer?
Answer: I believe I found the answer to my own question. My reasoning seems to be right: the junction conditions obtained by a small integration over the shell are not altered by adding electromagnetism. However, to find the dynamics, these equations must be supplemented by at least one more equation, for example the Hamiltonian constraint, and these are indeed changed when there is a background stress-energy tensor for the matter content. | {
"domain": "physics.stackexchange",
"id": 15857,
"tags": "electromagnetism, general-relativity, boundary-conditions"
} |
Onion Architecture | Question: After doing a whole bunch of research on Onion Architecture, I have made an attempt at implementing this in a new system that we are developing.
We have the Layers as per below:
Domain
Domain.Entities
Domain.Interfaces
Infrastructure
Infrastructure.Data
Infrastructure.DependencyResolution
Infrastructure.Interfaces
Infrastructure.Logging
Services
Services.Interfaces
Tests
Tests.Core
Web
Web.UI
Domain.Entities - All the domain models are kept here.
Base Entity
namespace Domain.Entities
{
public class BaseEntity
{
public virtual Guid ID { get; set; }
public virtual DateTime DateCreated { get; set; }
public virtual DateTime? DeletedDate { get; set; }
public override bool Equals (object obj)
{
if (obj == null)
return false;
var t = obj as BaseEntity;
if (t == null)
return false;
if (ID == t.ID)
return true;
return false;
}
public override int GetHashCode ()
{
int hash = GetType ().GetHashCode ();
hash = (hash * 397) ^ ID.GetHashCode ();
return hash;
}
}
}
Company Entity
namespace Domain.Entities
{
public class Company : BaseEntity
{
public virtual string Name { get; set; }
public virtual IEnumerable<User> Users { get; set; }
public virtual IEnumerable<Branch> Branches { get; set; }
public virtual IEnumerable<Department> Departments { get; set; }
public class CompanyCollection : List<Company>
{
}
}
}
Domain.Interfaces - I have a bunch of generic interfaces programmed here.
IReadRepository
IReadWriteRepsitory
IUnitOfWork
IWriteRepository
IReadRepository
namespace Domain.Interfaces
{
public interface IReadRepository<TEntity> where TEntity : class
{
IQueryable<TEntity> All ();
TEntity FindBy (Expression<Func<TEntity, bool>> expression);
TEntity FindBy (object id);
IQueryable<TEntity> FilterBy (Expression<Func<TEntity, bool>> expression);
}
}
IWriteRepository
namespace Domain.Interfaces
{
public interface IWriteRepository<TEntity> where TEntity : class
{
bool Add (TEntity entity);
bool Add (IEnumerable<TEntity> entities);
bool Update (TEntity entity);
bool Update (IEnumerable<TEntity> entities);
bool Delete (TEntity entity);
bool Delete (IEnumerable<TEntity> entities);
}
}
IReadWriteRepository
namespace Domain.Interfaces
{
public interface IReadWriteRepository<TEntity> :
IReadRepository<TEntity>, IWriteRepository<TEntity> where TEntity : class
{
}
}
IUnitOfWork
namespace Domain.Interfaces
{
public interface IUnitOfWork : IDisposable
{
void Commit ();
void Rollback ();
}
}
Infrastructure.Interfaces - Store all infrastructure service interfaces here
namespace Infrastructure.Interfaces
{
public interface IConfigService
{
string Connection{ get; }
}
}
namespace Infrastructure.Interfaces
{
public interface ILoggingService
{
bool IsDebugEnabled { get; }
bool IsErrorEnabled { get; }
bool IsFatalEnabled { get; }
bool IsInfoEnabled { get; }
bool IsTraceEnabled { get; }
bool IsWarnEnabled { get; }
void Debug (Exception exception);
void Debug (string format, params object[] args);
void Debug (Exception exception, string format, params object[] args);
void Error (Exception exception);
void Error (string format, params object[] args);
void Error (Exception exception, string format, params object[] args);
void Fatal (Exception exception);
void Fatal (string format, params object[] args);
void Fatal (Exception exception, string format, params object[] args);
void Info (Exception exception);
void Info (string format, params object[] args);
void Info (Exception exception, string format, params object[] args);
void Trace (Exception exception);
void Trace (string format, params object[] args);
void Trace (Exception exception, string format, params object[] args);
void Warn (Exception exception);
void Warn (string format, params object[] args);
void Warn (Exception exception, string format, params object[] args);
}
}
Infrastructure.Data - I'm using Fluent NHibernate to talk to a postgresql database.
In here I keep my Mapping Files, Unit of work implementation and the Generic repository implementation.
Hibernate Helper
namespace Infrastructure.Data.Helpers
{
public class NHibernateHelper
{
private ISessionFactory _sessionFactory;
private readonly string _connectionString;
public NHibernateHelper (string connectionString)
{
if (string.IsNullOrEmpty (connectionString))
throw new HibernateConfigException ("ConnectionString in Web.config is not set.");
_connectionString = connectionString;
}
public ISessionFactory SessionFactory {
get {
return _sessionFactory ?? (_sessionFactory = InitializeSessionFactory ());
}
}
private ISessionFactory InitializeSessionFactory ()
{
return Fluently.Configure () . . . . . . .;
}
}
}
Unit Of Work
namespace Infrastructure.Data.Helpers
{
public class UnitOfWork : IUnitOfWork
{
private readonly ISessionFactory _sessionFactory;
private readonly ITransaction _transaction;
public ISession Session { get; private set; }
public UnitOfWork (ISessionFactory sessionFactory)
{
_sessionFactory = sessionFactory;
Session = _sessionFactory.OpenSession ();
Session.FlushMode = FlushMode.Auto;
_transaction = Session.BeginTransaction (IsolationLevel.ReadCommitted);
}
public void Commit ()
{
if (!_transaction.IsActive) {
throw new InvalidOperationException ("Oops! We don't have an active transaction");
}
_transaction.Commit ();
}
public void Rollback ()
{
if (_transaction.IsActive) {
_transaction.Rollback ();
}
}
public void Dispose ()
{
if (Session.IsOpen) {
Session.Close ();
Session = null;
}
}
}
}
Base Mapping
namespace Infrastructure.Data.Mapping
{
public class BaseMapping : ClassMap<BaseEntity>
{
public BaseMapping ()
{
UseUnionSubclassForInheritanceMapping ();
Id (x => x.ID);
Map (x => x.DateCreated);
Map (x => x.DeletedDate);
}
}
}
Company Mapping
namespace Infrastructure.Data.Mapping
{
public class CompanyMapping : SubclassMap<Company>
{
public CompanyMapping ()
{
Abstract ();
Map (x => x.Name);
HasManyToMany<User> (x => x.Users).Table ("CompanyUser").Inverse ();
HasMany<Branch> (x => x.Branches).Inverse ().Cascade.All ();
HasMany<Department> (x => x.Departments).Inverse ().Cascade.All ();
Table ("Company");
}
}
}
Repository Implementation
namespace Infrastructure.Data.Repositories
{
public class Repository<TEntity> : IReadWriteRepository<TEntity>
where TEntity : class
{
private readonly ISession _session;
public Repository (ISession session)
{
_session = session;
}
#region IWriteRepository
public bool Add (TEntity entity)
{
_session.Save (entity);
return true;
}
public bool Add (System.Collections.Generic.IEnumerable<TEntity> entities)
{
foreach (TEntity entity in entities) {
_session.Save (entity);
}
return true;
}
public bool Update (TEntity entity)
{
_session.Update (entity);
return true;
}
public bool Update (System.Collections.Generic.IEnumerable<TEntity> entities)
{
foreach (TEntity entity in entities) {
_session.Update (entity);
}
return true;
}
public bool Delete (TEntity entity)
{
_session.Delete (entity);
return true;
}
public bool Delete (System.Collections.Generic.IEnumerable<TEntity> entities)
{
foreach (TEntity entity in entities) {
_session.Delete (entity);
}
return true;
}
#endregion
#region IReadRepository
public System.Linq.IQueryable<TEntity> All ()
{
return _session.Query<TEntity> ();
}
public TEntity FindBy (System.Linq.Expressions.Expression<System.Func<TEntity, bool>> expression)
{
return FilterBy (expression).SingleOrDefault ();
}
public TEntity FindBy (object id)
{
return _session.Get<TEntity> (id);
}
public System.Linq.IQueryable<TEntity> FilterBy (System.Linq.Expressions.Expression<System.Func<TEntity, bool>> expression)
{
return All ().Where (expression).AsQueryable ();
}
#endregion
}
}
Finally, the service for dependency injection:
namespace Infrastructure.Data.Services
{
public class ConfigService : IConfigService
{
#region IConfigService implementation
public string Connection {
get {
string strConnectionString = null;
var connectionSettings = ConfigurationManager.ConnectionStrings ["Connection"];
if (connectionSettings != null) {
strConnectionString = connectionSettings.ConnectionString;
}
return strConnectionString;
}
}
#endregion
}
}
Infrastructure.DependencyResolution - Using Simple Injector
In here I store all my registration packages / modules
ConfigPackage
namespace Infrastructure.DependecyResolution
{
public class ConfigPackage : IPackage
{
#region IPackage implementation
public void RegisterServices (SimpleInjector.Container container)
{
container.Register<IConfigService,ConfigService> ();
}
#endregion
}
}
RepositoryPackage
namespace Infrastructure.DependecyResolution
{
public class RepositoryPackage : IPackage
{
#region IPackage implementation
public void RegisterServices (SimpleInjector.Container container)
{
container.RegisterPerWebRequest<ISessionFactory> (() => {
var configPackage = container.GetInstance<IConfigService> ();
NHibernateHelper objNHibernate = new NHibernateHelper (configPackage.Connection);
return objNHibernate.SessionFactory;
});
container.RegisterPerWebRequest<IUnitOfWork, UnitOfWork> ();
container.RegisterPerWebRequest<ISession> (() => {
UnitOfWork unitOfWork = (UnitOfWork)container.GetInstance<IUnitOfWork> ();
return unitOfWork.Session;
});
container.RegisterOpenGeneric (typeof(IReadWriteRepository<>), typeof(Repository<>));
}
#endregion
}
}
LoggingPackage
namespace Infrastructure.DependecyResolution
{
public class LoggingPackage : IPackage
{
#region IPackage implementation
public void RegisterServices (SimpleInjector.Container container)
{
ILoggingService logger = GetLoggingService ();
container.Register<ILoggingService> (() => logger);
}
#endregion
private ILoggingService GetLoggingService ()
{
ConfigurationItemFactory.Default.LayoutRenderers.RegisterDefinition ("utc_date", typeof(UtcDateRenderer));
ConfigurationItemFactory.Default.LayoutRenderers.RegisterDefinition ("web_variables", typeof(WebVariablesRenderer));
ILoggingService logger = (ILoggingService)LogManager.GetLogger ("NLogLogger", typeof(LoggingService));
return logger;
}
}
}
Infrastructure.Logging - using NLog
In here is just the implementation of the logging interface.
Services.Interfaces - Applicaiton Service interfaces gets stored here
namespace Services.Interfaces
{
public interface ICompanyService
{
IQueryable<Company> GetCompanies ();
Company GetCompany (Guid guidId);
void CreateNewCompany (Company company);
void UpdateExistingCompany (Company company);
void DeleteCompany (Company company);
}
}
Tests.Core
Repository and Service Tests are in here at this stage, sure there will be more tests going in here.
Web.UI - ASP.NET MVC 4 Project as user interface application
Implementation of the Company Service Interface.
namespace Web.UI.Services
{
public class CompanyService : ICompanyService
{
private IUnitOfWork _unitOfWork;
private IReadWriteRepository<Company> _companyRepository;
public CompanyService (IUnitOfWork unitOfWork, IReadWriteRepository<Company> companyRepository)
{
_unitOfWork = unitOfWork;
_companyRepository = companyRepository;
}
#region ICompanyService implementation
public IQueryable<Company> GetCompanies ()
{
return _companyRepository.All ().Where (x => !x.DeletedDate.HasValue);
}
public Company GetCompany (Guid guidId)
{
return _companyRepository.FindBy (guidId);
}
public void CreateNewCompany (Company company)
{
if (_companyRepository.Add (company))
_unitOfWork.Commit ();
else
_unitOfWork.Rollback ();
}
public void UpdateExistingCompany (Company company)
{
if (_companyRepository.Update (company))
_unitOfWork.Commit ();
else
_unitOfWork.Rollback ();
}
public void DeleteCompany (Company company)
{
if (_companyRepository.Update (company))
_unitOfWork.Commit ();
else
_unitOfWork.Rollback ();
}
#endregion
}
}
The Inject the ICompanyService into the Controller. Create a view model for Company which the view can manipulate.
Simple Injector Service also gets initialized in this project. While it gets initialized I register the Services.Interfaces to Web.UI.Services implementation.
That is pretty much how I have implemented the Onion Architecture. Let me know if I went wrong somewhere. This is the first time I set up an application with a proper architecture.
Answer:
Let me know if I went wrong somewhere. This is the first time I set up an application with a proper architecture.
I'm afraid there is no such thing as general "proper" architecture. Relevant architecture is the one that enables/assists developers in implementing new functionality or adjusting your solution to new requirements.
In your implementation I don't see the reason to define your own logging and repository/UoW patterns:
logging - there is NLog/log4net/whatever, why abstract from them? Even if you may want to switch the implementation, use Common.Logging. All these frameworks do not (and should not) use dependency injection, so having ILoggingService injected will only make your life harder.
repository/UnitOfWork pattern - It's a long-running discussion between those who think a repository is a must, and those who see it as a redundant layer that only makes your life harder and leads to leaky abstractions in all but the most simple scenarios. I'm in the latter camp, so here are a couple of links from one of the NHibernate core devs:
Repository is the new Singleton
Refactoring toward frictionless & odorless code: The baseline
Refactoring toward frictionless & odorless code: Hiding global state
Refactoring toward frictionless & odorless code: Limiting session scope
Refactoring toward frictionless & odorless code: A broken home (controller)
Refactoring toward frictionless & odorless code: The case for the view model
Refactoring toward frictionless & odorless code: Getting rid of globals
Refactoring toward frictionless & odorless code: What about transactions?
Note that the series of articles "Refactoring toward frictionless & odorless code" actually shows how you can move the transaction/session management to the infrastructure level, leaving your business logic clean and tidy.
And a couple of small comments:
I don't see any reason to have public class CompanyCollection : List<Company> {}
Repository - why do you return true in Update methods? If there is a failure - throw exception, not return false.
IConfigService/ConfigService breaks the Open/Closed principle, as you would need to edit this class (add properties) whenever a new configuration parameter is needed. Classes that require configuration should expect specific configuration values in their constructors instead of consuming the general IConfigService. Inject these parameters at DI registration time. If you need dynamic configuration that can be changed during software execution, then some sort of IConfigService can be implemented, but I would suggest having a general T GetValue<T>(string configName) in that case.
"domain": "codereview.stackexchange",
"id": 12474,
"tags": "c#, design-patterns, asp.net-mvc-4, repository, hibernate"
} |
Non-invertibility of system $y[n]=x[n]-x[n-1]$ using transform method? | Question: For the system to be invertible, we should have different outputs for different inputs.
In terms of constant functions say,
$$X_1[n]=3 \quad \forall n \in \mathbb{Z}$$ and
$$X_2[n]=4 \quad \forall n \in \mathbb{Z}$$
It is perfectly clear that the above system gives 0 for both inputs, hence it can be concluded to be non-invertible. But how can this be explained in terms of the transform method (bilateral or unilateral z-transform)?
Answer: The impulse response of your system is $h[n] = \delta[n] - \delta[n-1]$ and its Z-transform is accordingly
$$H(z)=1-z^{-1}$$
Now, you can evaluate the frequency response of this system:
$$
|H(z)|_{z=\exp(j\omega)}=|1-\exp(-j\omega)|
$$
You see that $H(z)|_{z=\exp(j0)}=0$ (i.e., it vanishes for $\omega=0$). Now, in order for a filter to be invertible, its Z-transform on the unit circle (i.e., when setting $z=\exp(j\omega)$) must not vanish. If it does not vanish, the inverse is given by $\frac{1}{H(z)}$. If it vanishes, the inverse does not exist.
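As a quick numeric sanity check (a sketch assuming NumPy is available; the function name `H` is just for illustration), one can evaluate the frequency response on the unit circle and confirm it vanishes at $\omega=0$, which is exactly why the constant inputs $X_1$ and $X_2$ from the question collapse to the same output:

```python
import numpy as np

# Frequency response of h[n] = delta[n] - delta[n-1] on the unit circle:
# H(e^{jw}) = 1 - e^{-jw}
def H(omega):
    return 1 - np.exp(-1j * omega)

# The response vanishes at omega = 0 (DC), so 1/H blows up there
# and no inverse filter exists.
print(abs(H(0.0)))      # -> 0.0
print(abs(H(np.pi)))    # -> approximately 2.0: fine away from DC

# Constant (DC) inputs of different amplitude map to the same zero output:
x1, x2 = 3 * np.ones(8), 4 * np.ones(8)
y1, y2 = np.diff(x1), np.diff(x2)   # y[n] = x[n] - x[n-1]
print(np.allclose(y1, y2))          # -> True: two distinct inputs, one output
```

Since $H$ has a zero on the unit circle, the DC content of any input is destroyed by the filter, and no post-processing can recover it.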
"domain": "dsp.stackexchange",
"id": 4670,
"tags": "linear-systems, z-transform"
} |
Turning simple Node.JS Postgres query and logging into most-functionally-pure-possible code | Question: After reading "How to deal with dirty side effects in your pure functional JavaScript," I've decided to take something I do on a regular basis ("Original approach" below)—connect to a Postgres database from Node.JS, perform some operations, and log what is going on—and try to make my code much closer to "purely functional" ("Functional approach" below) by using "Effect functors."
Am I on the right track?
Beyond functional-vs-declarative, are there other ways my code could be better-written?
Original approach - declarative
const {user, database, host} = require('./mydatabaseconnection') ;
const pg = require('pg') ;
const logging = require('./logging') ;
const pglogging = require('./pglogging') ;
// MODIFIED FROM https://medium.com/@garychambers108/better-logging-in-node-js-b3cc6fd0dafd
["log", "warn", "error"].forEach(method => {
const oldMethod = console[method].bind(console) ;
console[method] = logging.createConsoleMethodOverride({oldMethod, console}) ;
}) ;
const pool = new pg.Pool({
user
, database
, host
, log: pglogging.createPgLogger()
}) ;
pool.query('SELECT 1 ;').then(result => console.log(result)).then(()=>pool.end()) ;
Functional approach
const {user, database, host} = require('./mydatabaseconnection') ;
const pg = require('pg') ;
const logging = require('./logging') ;
const pglogging = require('./pglogging') ;
// MODIFIED FROM https://medium.com/@garychambers108/better-logging-in-node-js-b3cc6fd0dafd
["log", "warn", "error"].forEach(method => {
const oldMethod = console[method].bind(console) ;
console[method] = logging.createConsoleMethodOverride({oldMethod, console}) ;
}) ;
function Effect(f) {
return {
map(g) {
return Effect(x => g(f(x)));
}
, runEffects(x) {
return f(x) ;
}
}
}
function makePool() {
return {pool: new pg.Pool({
user
, database
, host
, log: pglogging.createPgLogger()
})} ;
}
function runQuery({pool}) {
return {query: pool.query('SELECT 1 ;'), pool} ;
}
function logResult({query, pool}) {
return {result: query.then(result => console.log(result)), pool} ;
}
function closePool({pool}) {
return pool.end() ;
}
const poolEffect = Effect(makePool) ;
const select1Logger = poolEffect.map(runQuery).map(logResult).map(closePool) ;
select1Logger.runEffects() ;
Common to both
logging.js
const moment = require('moment-timezone') ;
function simpleLogLine({source = '', message = '', objectToLog}) {
const objectToStringify = {date: moment().toISOString(true), source, message} ;
if (typeof objectToLog !== 'undefined') objectToStringify.objectToLog = objectToLog ;
return JSON.stringify(objectToStringify) ;
}
module.exports = {
createConsoleMethodOverride: ({oldMethod, console}) => function() {
oldMethod.apply(
console,
[(
arguments.length > 0
? (
(
arguments.length === 1
&& typeof arguments[0] === 'object'
&& arguments[0] !== null
)
? (
['message', 'objectToLog'].reduce((accumulator, current) => accumulator || Object.keys(arguments[0]).indexOf(current) !== -1, false)
? simpleLogLine(arguments[0])
: simpleLogLine({objectToLog: arguments})
)
: (
arguments.length === 1 && typeof arguments[0] === 'string'
? simpleLogLine({message: arguments[0]})
: simpleLogLine({objectToLog: arguments})
)
)
: simpleLogLine({})
)]
) ;
}
} ;
pglogging.js
module.exports = {
createPgLogger: () => function() {
console[
typeof arguments[0] !== 'string'
? 'log'
: (
arguments[0].match(/error/i) === null
? 'log'
: 'error'
)
](
arguments.length === 1
? (
typeof arguments[0] !== 'string'
? ({source: 'Postgres pool', object: arguments[0]})
: ({source: 'Postgres pool', message: arguments[0]})
)
: (
typeof arguments[0] !== 'string'
? ({source: 'Postgres pool', object: arguments[0]})
: (
arguments.length === 2
? ({source: 'Postgres pool', message: arguments[0], object: arguments[1]})
: ({source: 'Postgres pool', message: arguments[0], object: [...arguments].slice(1)})
)
)
) ;
}
} ;
Sample output (from either approach):
{"date":"2019-02-23T12:31:02.186-05:00","source":"Postgres pool","message":"checking client timeout"}
{"date":"2019-02-23T12:31:02.193-05:00","source":"Postgres pool","message":"connecting new client"}
{"date":"2019-02-23T12:31:02.195-05:00","source":"Postgres pool","message":"ending"}
{"date":"2019-02-23T12:31:02.196-05:00","source":"Postgres pool","message":"pulse queue"}
{"date":"2019-02-23T12:31:02.196-05:00","source":"Postgres pool","message":"pulse queue on ending"}
{"date":"2019-02-23T12:31:02.203-05:00","source":"Postgres pool","message":"new client connected"}
{"date":"2019-02-23T12:31:02.203-05:00","source":"Postgres pool","message":"dispatching query"}
{"date":"2019-02-23T12:31:02.207-05:00","source":"Postgres pool","message":"query dispatched"}
{"date":"2019-02-23T12:31:02.208-05:00","source":"Postgres pool","message":"pulse queue"}
{"date":"2019-02-23T12:31:02.209-05:00","source":"Postgres pool","message":"pulse queue on ending"}
{"date":"2019-02-23T12:31:02.209-05:00","source":"","message":"","objectToLog":{"0":{"command":"SELECT","rowCount":1,"oid":null,"rows":[{"?column?":1}],"fields":[{"name":"?column?","tableID":0,"columnID":0,"dataTypeID":23,"dataTypeSize":4,"dataTypeModifier":-1,"format":"text"}],"_parsers":[null],"RowCtor":null,"rowAsArray":false}}}
Answer: Not at all functional.
Functional means no side effects.
You have console[method], new pg.Pool, and pool.query, each of which is a side effect.
Functional means using pure functions.
A pure function must be able to do its thing with only its arguments.
logResult and makePool require global references.
A pure function must always do the same thing for the same input.
runQuery, closePool, and makePool depend on the database state not the arguments for the result and are not pure.
Because you pass impure functions to Effect you can not predict its behavior and thus it is impure as well.
Now you may wonder how to fix these problems and make your code functional.
The best way to picture the problem is to imagine that each function must run on an isolated thread and rely only on its arguments to create a result. The arguments must be transportable and not rely on any external state (remember, the function is totally isolated).
If you break these rules even once in a mile of code you break the tenet that must be maintained to get the benefit that functional programming offers.
There is no halfway functional; it's all or nothing. | {
"domain": "codereview.stackexchange",
"id": 33679,
"tags": "javascript, node.js, functional-programming, postgresql"
} |
Degenerate states, Boltzmann factor and statistical mechanics | Question: The probability of finding a particle with energy $E$ according to Maxwell-Boltzmann distribution is:
$$ P(E) =\frac{1}{Z}g(E)e^{\frac{-E}{k_BT}} \qquad eq(1)$$
where g(E) is the degeneracy of the energy level $E$.
However, the deduction of this formula is the following:
-Using the following definition of entropy: $S=k_Bln(\Omega(s))$, where $\Omega$ is the multiplicity of a macrostate $s$.
-Considering also a system in contact with a large reservoir, and two possible states: $s_1$ and $s_2$. Then, the ratio of their probabilities is the ratio of the multiplicities of the reservoir that make the system be in that state.
$$ \frac{P(s_1)}{P(s_2)} = \frac{\Omega_R(s_1)}{\Omega_R(s_2)}=\frac{e^{S_R(s_1)/k_B}}{e^{S_R(s_2)/k_B}}$$
For example, if $\Omega_R(s_1)=10$ and $\Omega_R(s_2)=5$, then its twice as likely to find the system in $s_1$ than $s_2$.
Then by using thermodynamic:
$$ dS_R = \frac{1}{T}dU_R$$
$$ dU_R = -dE$$
where $E$ is the energy of the system and $U_R$ that of the reservoir. Using these equations one can easily get to equation (1).
But, my question is the following:
Why in equation (1) there is the degeneracy term $g(E)$? Isn't that being accounted in the deduction when they count the multiplicity?
Answer: P(s), the probability of being in state s, isn't the same as P(E), the probability of having energy E. Suppose you knew P(s) perfectly; how would you get P(E)? You'd transform the probability densities, which gets you a factor of ds/dE, which maps onto the multiplicity.
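A tiny numeric sketch (an illustrative toy system; the state labels and energies are made up for the example) makes the distinction concrete: summing the per-state probabilities $e^{-E/k_BT}/Z$ over the states sharing an energy level reproduces equation (1), with the degeneracy factor $g(E)$ appearing automatically:

```python
import math
from collections import Counter

# Toy system (made-up energies): level E = 1.0 is doubly degenerate.
state_energies = {'s0': 0.0, 's1a': 1.0, 's1b': 1.0, 's2': 2.0}
kT = 1.0  # k_B * T, in the same units as E

# Per-STATE probabilities carry no degeneracy factor:
Z = sum(math.exp(-E / kT) for E in state_energies.values())
P_state = {s: math.exp(-E / kT) / Z for s, E in state_energies.items()}

# Per-ENERGY probabilities: sum P(s) over the states at that energy...
P_energy = Counter()
for s, E in state_energies.items():
    P_energy[E] += P_state[s]

# ...which is exactly P(E) = g(E) e^{-E/kT} / Z from equation (1).
g = Counter(state_energies.values())  # degeneracies g(E)
for E in g:
    assert abs(P_energy[E] - g[E] * math.exp(-E / kT) / Z) < 1e-12
print({E: round(p, 4) for E, p in P_energy.items()})
```

The degeneracy only shows up once you ask about energies rather than states: it is the count of microstates collapsed into each energy bin.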
"domain": "physics.stackexchange",
"id": 54116,
"tags": "quantum-mechanics, statistical-mechanics"
} |
Taking the two qubit reduced density matrix on a 5 qubit system | Question: I am wanting to find the two qubit reduced density matrix on a 5 qubit system. I have the 5 qubit state in the form of a tensor product and I want to find the reduced density matrix of qubits 1 and 3. I have the following formula for the reduced density matrix for qubit 3:
\begin{align}
\mathrm{Tr}_1\left|h\right>_{13}\left<h\right| = \sum_{i=0}^{1}\left<i|h\right>_{13}\left<h|i\right>
\end{align}
I am unsure of how to extract $\left|h\right>_{13}$ from the 5 qubit state. Please help!
Answer: Doing a 5 to 2-qubit reduction is a little tedious, but we can illustrate how it works with a simpler example.
Let's take a 3-qubit state $\lvert\psi_{ABC}\rangle$ and boil it down to $\lvert\psi_{BC}\rangle$, say
$$\lvert\psi_{ABC}\rangle = \alpha\lvert 001\rangle + \beta\lvert 101\rangle + \gamma\lvert 011\rangle.$$
First of all, the density operator for the state $\lvert\psi_{ABC}\rangle$ is given by
$$\begin{align}
\rho_{ABC} &= \lvert\psi_{ABC}\rangle\langle\psi_{ABC}\rvert \\
&= \left(\alpha\lvert 001\rangle + \beta\lvert 101\rangle + \gamma\lvert 011\rangle\right)\otimes\left(\alpha^*\langle 001\rvert + \beta^*\langle 101\rvert + \gamma^*\langle 011\rvert\right)
\end{align}$$
Expanding this out, we get
$$
\begin{align}
\rho_{ABC} &= \alpha^*\alpha \lvert 001\rangle\langle 001\rvert + \beta^*\alpha \lvert 001\rangle\langle 101\rvert + \gamma^*\alpha \lvert 001\rangle\langle 011\rvert \\
&+\alpha^*\beta \lvert 101\rangle\langle 001\rvert + \beta^*\beta \lvert 101\rangle\langle 101\rvert + \gamma^*\beta \lvert 101\rangle\langle 011\rvert \\
&+\alpha^*\gamma \lvert 011\rangle\langle 001\rvert + \beta^*\gamma \lvert 011\rangle\langle 101\rvert + \gamma^*\gamma \lvert 011\rangle\langle 011\rvert
\end{align}
$$
The idea of the reduced density operator is to trace over the particles that you don't care about. For example, to find $\rho_{BC}$, we would trace over particle A:
$$\rho_{BC} = \text{Tr}_{A}\left(\rho_{ABC}\right)$$
Before we write out the whole thing, in symbols the trace over $A$ is:
$$\text{Tr}_{A}\left(\rho_{ABC}\right) = \langle 0_A\rvert\rho_{ABC}\lvert 0_A\rangle + \langle 1_A\rvert\rho_{ABC}\lvert 1_A\rangle.$$
Taking the trace will eliminate all terms where particle $A$ is not in the same state in the bra and ket. For example, the term $\beta\alpha^*\lvert101\rangle\langle001\rvert$ will disappear.
Carrying out the process, the terms that survive in the reduced density operator are
$$\begin{align}
\rho_{BC} &= \alpha^*\alpha \lvert 01\rangle\langle 01\rvert +
\gamma^*\alpha \lvert 01\rangle\langle 11\rvert \\
&+
\beta^*\beta \lvert 01\rangle\langle 01\rvert +
\alpha^*\gamma \lvert 11\rangle\langle 01\rvert \\
&+
\gamma^*\gamma \lvert 11\rangle\langle 11\rvert
\end{align}$$
If you want to further reduce to the $\rho_B$ density operator, then just trace over $C$, i.e.
$$\begin{align}
\rho_B &= \text{Tr}_C \left(\rho_{BC}\right) \\
&= \langle 0_C\lvert\rho_{BC}\rvert 0_C\rangle + \langle 1_C\lvert\rho_{BC}\rvert 1_C\rangle
\end{align}$$
In this case, all of the terms survive and the resulting density operator is
$$
\begin{align}
\rho_B &= \left(\alpha^*\alpha + \beta^*\beta\right)\lvert0\rangle\langle0\rvert + \gamma^*\gamma\lvert 1\rangle\langle 1\rvert \\
&+ \gamma^*\alpha\lvert 0\rangle\langle 1\rvert + \alpha^*\gamma\lvert 1\rangle\langle 0\rvert
\end{align}
$$
(If we were being strict with notation, then we should really write $\lvert 1_A\rangle$ as $\lvert 1\rangle\otimes I^{\otimes 2}$ where $I^{\otimes 2}$ is the identity operator on the $BC$ subspace. But we'll understand that when we write something like $\langle 1_A\rvert001\rangle$ it really means $\left(\langle 1_A\rvert 0_A\rangle\right)\otimes\lvert 01\rangle = \lvert 01\rangle$. Similarly, $\langle 001\rvert 1_A\rangle = \left(\langle 0_A\rvert 1_A\rangle\right)\langle01\rvert = 0$.) | {
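The whole derivation can be checked numerically. The sketch below (assuming NumPy; the amplitudes 0.6, 0.48, 0.64 are arbitrary real values chosen only so the state is normalized) builds $\rho_{ABC}$ as a tensor with one index per ket/bra qubit, traces out qubits by contracting index pairs, and confirms the closed form for $\rho_B$:

```python
import numpy as np

# Numeric check of the worked example; qubit order is (A, B, C).
a, b, c = 0.6, 0.48, 0.64   # arbitrary reals with a^2 + b^2 + c^2 = 1

psi = np.zeros(8, dtype=complex)
psi[0b001] = a   # |001>
psi[0b101] = b   # |101>
psi[0b011] = c   # |011>

# One tensor index per ket/bra qubit: (A, B, C, A', B', C')
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)

# rho_BC = Tr_A(rho): contract the A ket index with the A bra index
rho_BC = np.einsum('abcaef->bcef', rho).reshape(4, 4)

# rho_B: contract both the A and C ket/bra pairs in one shot
rho_B = np.einsum('abcaec->be', rho)

# Closed form derived above:
# rho_B = (|a|^2+|b|^2)|0><0| + |c|^2 |1><1| + c*a |0><1| + a*c |1><0|
expected = np.array([[a * a + b * b, c * a],
                     [a * c,         c * c]], dtype=complex)
print(np.allclose(rho_B, expected))   # -> True
print(np.trace(rho_B).real)           # -> 1.0 (up to float rounding)
```

The same pattern extends to the original 5-qubit question: reshape the density matrix into ten binary indices and contract the ket/bra index pairs of qubits 2, 4, and 5 to obtain the reduced state of qubits 1 and 3.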
"domain": "quantumcomputing.stackexchange",
"id": 853,
"tags": "quantum-state, density-matrix"
} |
Why if I raise my hand over the water while being underwater, my whole body goes down? | Question: Another weird question, I know, and may sound simple but I'm now trying to see why a lot of the things that we usually do without thinking, have some sense in physics, like this one.
Let's say you are in a deep pool, entirely covered by water, but you raise your hand above the surface. What happens is that your body actually goes down and sinks. Why is that? Why does putting some parts of our body above the water level make us go down in the water? I know it may be simple, but I want to know: is it because we are, in some sense, "lifting" our own arm, and that makes us go down?
Answer: The Archimedes' principle can be used to explain this. When the arms are removed from the water, the buoyant force, which is proportional to the submerged volume, decreases. As a result, the balance of forces is changed in favor of the forces that push the object toward sinking. | {
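A back-of-the-envelope sketch (all numbers are illustrative assumptions, not measured values) makes this concrete: a swimmer whose average density is just below that of water floats, but removing one arm's worth of volume from the water flips the sign of the net force:

```python
# Illustrative numbers (assumptions for the example, not measured values)
RHO_WATER = 1000.0    # kg/m^3
G = 9.81              # m/s^2

body_mass = 70.0      # kg
body_volume = 0.071   # m^3 -> average density ~986 kg/m^3: barely floats
arm_volume = 0.0035   # m^3, roughly one arm's worth of displaced water

def net_upward_force(submerged_volume):
    """Archimedes: buoyancy (rho * g * V_submerged) minus total weight."""
    return RHO_WATER * G * submerged_volume - body_mass * G

fully_submerged = net_upward_force(body_volume)            # ~ +9.8 N: pushed up
arm_raised = net_upward_force(body_volume - arm_volume)    # ~ -24.5 N: sinks
print(fully_submerged, arm_raised)
```

The arm still weighs the same when raised, but it no longer displaces water, so the buoyant force shrinks while the weight does not, and the balance tips toward sinking.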
"domain": "physics.stackexchange",
"id": 96769,
"tags": "water, free-body-diagram, fluid-statics"
} |
Does general relativity fail in conditions with very large gravitational forces? | Question: It is said in this wikipedia article (in the 7th paragraph) that where there exists huge masses and very large gravitational forces (like around binary pulsars), general relativistic effects can be observed better:
By cosmic standards, gravity throughout the solar system is weak. Since the differences between the predictions of Einstein's and Newton's theories are most pronounced when gravity is strong, physicists have long been interested in testing various relativistic effects in a setting with comparatively strong gravitational fields. This has become possible thanks to precision observations of binary pulsars.
But here in whystringtheory.com (in the last paragraph), it is said that
For small spacetime volumes or large gravitational forces Einstein has little to offer
I know that in singularities, GR fails and this is a motivation for quantum gravity theories. But the second quote above says in small spacetime volumes or large gravitational forces.
Is there any problem with general relativity in conditions with very large gravitational forces (in big enough volumes of spacetime)?
Answer: The whystringtheory page is written in a popularized style that makes it impossible to tell what they really have in mind. Their statement doesn't make sense if interpreted according to the standard technical definitions of the terms. GR doesn't describe gravity as a force. In the system of units normally used in GR, with G=c=1, force and power are unitless, so there is also no natural motivation for defining something like a Planck force by analogy with the Planck length, etc. Possibly their "force" really means curvature, in which case this could be interpreted as a correct statement that GR breaks down when the Riemann tensor corresponds to a radius of curvature comparable to the Planck length. | {
"domain": "physics.stackexchange",
"id": 8522,
"tags": "general-relativity, quantum-gravity"
} |
Can the universe build itself back together after the Heat Death? | Question: We all know that due to the 2nd Law of Thermodynamics the universe after a certain long period of time will eventually have all its energy spread out homogeneously and reach a Heat Death where nothing interesting ever happens anymore. However, given that the universe's lifespan is infinite, and the 2nd Law of Thermodynamics is a statistical law (meaning that decreases in entropy are low in probability but still possess nonzero chance), isn't it reasonable to assume that at some point, very far down the line in the Heat Death, the universe puts itself back together and forms stars, galaxies and everything interesting again? Similar to how a monkey on a typewriter, given an infinite amount of time, will type out the complete works of Shakespeare, isn't the large- scale reduction in entropy just an extremely, extremely rare, but inevitable event which will eventually occur given an infinite timespan?
If so, the universe can never really die, can it?
Answer: This was originally proposed by Boltzmann, who suggested that the entire universe could be a low-probability low-entropy fluctuation in a high-entropy world:
We assume that the whole universe is, and rests for ever, in thermal
equilibrium. The probability that one (only one) part of the universe
is in a certain state, is the smaller the further this state is from
thermal equilibrium; but this probability is greater, the greater is
the universe itself. If we assume the universe great enough, we can
make the probability of one relatively small part being in any given
state (however far from the state of thermal equilibrium), as great as
we please. We can also make the probability great that, though the
whole universe is in thermal equilibrium, our world is in its present
state. It may be said that the worlds is so far from thermal
equilibrium that we cannot imagine the improbability of such a state.
But can we imagine, on the other side, how small a part of the whole
universe this world is? Assuming the universe great enough, the
probability that such a small part of it as our world should be in its
present state, is no longer small.
If this assumption were correct, our world would return more and more
to thermal equilibrium; but because the whole universe is so great, it
might be probable that at some future time some other world might
deviate as far from thermal equilibrium as our world does at present.
Are Boltzmann worlds plausible? Current cosmological models have unbounded futures and are at finite temperature due to horizon radiation, so it looks like they are a prediction. This doesn't sit well with some physicists, of course. As Susskind points out:
This would mean not that the future is totally empty space but that
the world will have all the features of an isolated finite thermal
cavity with finite temperature and entropy. Thermal equilibrium for
such a system is not completely featureless. On short time scales not
much can be expected to happen but on very long time scales everything
happens.
Smaller fluctuations are exponentially more likely than larger fluctuations. That means that occasionally field fluctuations will not just randomly produce particle-antiparticle pairs, but atoms, molecules, and entire organisms that briefly persist. This includes randomly generated observers that can have brains in arbitrary states ("Boltzmann brains", a headache to some philosophers and physicists) and domains that have low entropy (the Boltzmann worlds). But note the probability distribution: finding ourselves in a big universe with lots of remote galaxies is far, far less likely than being in a single solar system floating in nothingness. So our observations seem to rule out that we live in a Boltzmann world.
So if our observations rule out living in a Boltzmann world but theory predicts that most observers live in them, there is something awry. But there is no general agreement on whether anthropic arguments, the unboundedness of the future, the way we measure probability across "big worlds" or something else is the source of the contradiction.
A somewhat related, but distinct, phenomenon is Poincare recurrence. In systems whose dynamics map their bounded phase spaces to themselves and conserve phase-space volume, the system state will, after a sufficiently long but finite time, return to a state very close to the initial state. It may hence seem plausible that the universe will eventually return to a state close to the initial state and history will repeat.
Contrarily, the expansion of the universe means the phase space is expanding and the theorem does not apply. In a very real sense the expansion makes the evolution of the universe irreversible even if local regions may experience Boltzmann recurrences due to random chance and relative isolation.
But see also Lubos Motl's answer to this question and Susskind's paper on de Sitter space. Motl claims the recurrence time is $\exp(10^{120})$ because by the holographic principle we have a dynamics on the horizon - which is unchanging in size.
Whether Poincare recurrence will happen hence appears to hinge on somewhat unsettled theoretical issues. | {
"domain": "physics.stackexchange",
"id": 52749,
"tags": "thermodynamics, universe, probability"
} |