| text | source |
|---|---|
java, performance, algorithm, logging, aspect-oriented
PluggableLogger pluggableLogger;
public static ConcurrentMap<Class<? extends PluggableLogger>, PluggableLogger> cachedLoggers = new ConcurrentHashMap<>();
static{
cachedLoggers.putIfAbsent(DefaultPluggableLoggerIfNotInjected.class, new DefaultPluggableLoggerIfNotInjected());
}
/**
 * Captures methods annotated with {@link LoggableObject}
 *
 * and applies the logic that decides how to log the information, based on
 * the LogModes.
 *
 * @param proJoinPoint the join point wrapping the advised method
 * @return the value returned by the advised method
 * @throws Throwable any exception thrown by the advised method
 */
@Around("execution(* *(..)) && @annotation(LoggableObject)")
public Object aroundObjects(ProceedingJoinPoint proJoinPoint)
throws Throwable {
Signature methodSignature = proJoinPoint.getSignature();
String declaringClass = methodSignature.getDeclaringTypeName();
String methodName = methodSignature.getName();
Object[] args = proJoinPoint.getArgs();
LoggableObject loggObject = getLoggableObjectAnnt(args, methodSignature);
if (loggObject.disable()) {
return proJoinPoint.proceed();
}
Class<? extends PluggableLogger> clazzPluggLogg = loggObject.pluggableLoggerClass(); | {
"domain": "codereview.stackexchange",
"id": 13985,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, performance, algorithm, logging, aspect-oriented",
"url": null
} |
php, wordpress
$output = '<h2 class="nav-tab-wrapper">';
foreach ( $this->pages as $page )
{
$currentTab = strtolower(str_replace('_', '-', $page['id']));
$activeClass = $activeTab == $currentTab ? ' nav-tab-active' : '';
$output .= sprintf(
'<a href="?page=%1$s&tab=%2$s" class="nav-tab%4$s" id="%2$s-tab">%3$s</a>',
$page['id'],
$currentTab,
$page['title'],
$activeClass
);
}
$output .= '</h2>';
echo $output;
}
}
/**
* Display the sections and fields according to the page
*
* @param string $activeTab;
* @since 1.0
*/
private function displayOptionsSettings( $activeTab = '' )
{
$currentTab = strtolower( str_replace('-', '_', $activeTab) );
settings_fields( $currentTab );
do_settings_sections( $currentTab );
}
/**
* HTML output for text field
*
* @param array $option;
* @since 1.0
*/
public function displayTextField( $option = array() )
{
$value = get_option( $option['option'] );
printf(
'<input type="text" id="%1$s" class="regular-text" name="%2$s[%3$s]" value="%4$s">',
str_replace('_', '-', $option['optionName']),
$option['option'],
$option['optionName'],
sanitize_text_field($value[$option['optionName']])
);
} | {
"domain": "codereview.stackexchange",
"id": 7638,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, wordpress",
"url": null
} |
javascript, jquery
Title: Appending messages to hidden input fields
This code contains an array of objects that I loop over and append to hidden input fields in the DOM. As these inputs have no ID, I first select them by name using $.each and then check the value by looping over the array using another $.each. If there is a match, I then append some HTML to the DOM.
It works fine, but I have doubts about the code, especially where I am using two $.each. Is there a better way of doing this?
var messages = [
{
"code":"203294641",
"message":"A night costs 332 miles"
},
{
"code":"203294642",
"message":"The night is dark and full of terrors. Costs 32 miles."
},
{
"code":"203294643",
"message":"The night is dark and full of terrors. Costs 67 miles."
},
{
"code":"203294644",
"message":"The night is dark and full of terrors. Costs 423 miles."
},
{
"code":"203294645",
"message":"The night is dark and full of terrors. Costs 431 miles."
},
{
"code":"203294646",
"message":"The night is dark and full of terrors. Costs 76 miles."
}
]; | {
"domain": "codereview.stackexchange",
"id": 21210,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery",
"url": null
} |
# find the number of possible positive integer solutions when the inequality is $a \times b \times c \lt 180$
As most of you know, there is a classical question in elementary combinatorics: if $$a \times b \times c = 180$$, then how many positive integer solutions are there for the equation?
The solution is easy: $$180=2^2 \times 3^2 \times 5^1$$, and so we write $$a=2^{x_1} \times 3^{y_1} \times 5^{z_1}$$ , $$b=2^{x_2} \times 3^{y_2} \times 5^{z_2}$$ , $$c=2^{x_3} \times 3^{y_3} \times 5^{z_3}$$ .
Then: $$x_1+x_2+x_3=2$$ where $$x_i \geq0$$ , and $$y_1+y_2+y_3=2$$ where $$y_i \geq0$$ and $$z_1+z_2+z_3=1$$ where $$z_i \geq0$$.
So , $$C (4,2) \times C(4,2) \times C(3,1)=108$$.
Everything is clear up to now. However, I wondered how to find the number of positive integer solutions when the condition is $$a \times b \times c \lt 180$$ instead of $$a \times b \times c = 180$$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9802808718926534,
"lm_q1q2_score": 0.8014526247175453,
"lm_q2_score": 0.8175744806385543,
"openwebmath_perplexity": 206.83051354817093,
"openwebmath_score": 0.9289675354957581,
"tags": null,
"url": "https://math.stackexchange.com/questions/4000405/find-the-number-of-possible-positive-integer-solutions-when-the-inequality-is-a"
} |
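The stars-and-bars count above, and the harder inequality version, can both be checked by brute force. This is a quick sketch (the 108 matches the derivation; the strict count is left for the script to report):

```python
def count_triples(limit, strict):
    """Ordered positive-integer triples (a, b, c) with a*b*c == limit,
    or with a*b*c < limit when strict=True."""
    count = 0
    for a in range(1, limit + 1):
        for b in range(1, limit // a + 1):
            if strict:
                # c may be anything from 1 up to floor((limit-1)/(a*b))
                count += (limit - 1) // (a * b)
            elif limit % (a * b) == 0:
                # exactly one c works: c = limit // (a*b)
                count += 1
    return count

print(count_triples(180, strict=False))  # 108, matching C(4,2)*C(4,2)*C(3,1)
print(count_triples(180, strict=True))
```

The strict case has no single stars-and-bars formula; it is a sum of the equality counts over every product value below 180, which is why enumeration is the simplest check.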
recursion, tail-recursion
The same thing applies to Tail-Recursion Elimination (TRE). TRE is an optimization that is not guaranteed to happen. As far as I know, there is no common name for the language feature that corresponds to the optimization (like there is with PTC / Proper Tail-Calls for TCO). It is typically just called Tail-Recursion, although I call it Proper Tail-Recursion in analogy to Proper Tail-Calls. | {
"domain": "cs.stackexchange",
"id": 19453,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "recursion, tail-recursion",
"url": null
} |
coordination-compounds, color, crystal-field-theory
$\ce{Cu(NCS)_2}$ is black probably because of ligand to metal charge transfer (the same reason $\ce{Fe(III)NCS}$ is blood red) - i.e. on absorption you transiently form $\ce{Cu(I)}$ and $\ce{NCS}$ from $\ce{Cu(II)}$ and $\ce{NCS–}$. It's more complex than that for sure, but I think it's fair to say no one knows yet, because we only worked out the structure two years ago.
If you want a lot more information about $\ce{Cu(NCS)_2}$ we published on it here https://journals.aps.org/prb/abstract/10.1103/PhysRevB.97.144421 (on the arxiv at https://arxiv.org/abs/1710.04889). | {
"domain": "chemistry.stackexchange",
"id": 10848,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "coordination-compounds, color, crystal-field-theory",
"url": null
} |
neural-networks, machine-learning
Title: Confusing Matlab Artificial Neural Toolbox script
I'm working on a project which uses an artificial neural network. I looked at the Matlab Neural Network Toolbox and got a generated script from it. When looking at this script, it is confusing because the toolbox seems to use the same data for both testing and training. Could you explain the reason?
The script is given below:
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,inputs,targets);
% Test the Network
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs);
valPerformance = perform(net,valTargets,outputs);
testPerformance = perform(net,testTargets,outputs);
Also is it right to split the data set as below for training and testing?
trainData = inputData(:,1:213);
trainTargetData = targetData(:,1:213);
validationData = inputData(:,214:258);
testData = inputData(:,259:end);
testTargetData = targetData(:,259:end);
validationTargetData = targetData(:,214:258);
[net,tr] = train(net,trainData,trainTargetData);
% Validation
outputs = net(validationData);
errors = gsubtract(validationTargetData,outputs);
performance = perform(net,validationTargetData,outputs); | {
"domain": "ai.stackexchange",
"id": 190,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-networks, machine-learning",
"url": null
} |
trees, python
Title: Computing all paths from root to the leaf nodes in a tree
I have this tree, and I want to print out all paths from the root to all leaf nodes:
NOTE: I wanted to come up with a solution that does not involve passing state between recursive calls.
a
/ | \
b c d
/ \ \
e f g
/ / | \
h i j k
This is the my code to create and print the paths.
class Node:
    def __init__(self, data, children=None):
        # avoid sharing one mutable default list between instances
        self.data = data
        self.children = children if children is not None else []
tree = Node(
"A",
children=[
Node("B", children=[Node("E"), Node("F")]),
Node("C"),
Node("D", children=[Node("G", [Node("H"), Node("I"), Node("J"), Node("K")])]),
],
)
I want to enumerate all the paths from the root node to all leaf nodes. This is what I have come up with.
def paths(root):
    x = []
    if root.children:
        for c in root.children:
            for el in paths(c):
                x.append(c.data + el)
    else:
        x.append("")
    return x
a = paths(tree)
print(a)
I get this output:
['BE', 'BF', 'C', 'DGH', 'DGI', 'DGJ', 'DGK'] | {
"domain": "cs.stackexchange",
"id": 15877,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "trees, python",
"url": null
} |
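A variant that includes the root's label in each path, and still passes no state between recursive calls, might look like this. The Node class is restated (without the mutable-default pitfall) so the snippet is self-contained:

```python
class Node:
    def __init__(self, data, children=None):
        self.data = data
        self.children = children if children is not None else []

def paths(root):
    # a leaf contributes the single path consisting of its own label
    if not root.children:
        return [root.data]
    # otherwise prefix this node's label onto every path of every child
    return [root.data + p for child in root.children for p in paths(child)]

tree = Node("A", children=[
    Node("B", children=[Node("E"), Node("F")]),
    Node("C"),
    Node("D", children=[Node("G", [Node("H"), Node("I"), Node("J"), Node("K")])]),
])

print(paths(tree))  # ['ABE', 'ABF', 'AC', 'ADGH', 'ADGI', 'ADGJ', 'ADGK']
```

Treating the leaf as the base case (rather than returning `""`) is what makes the root letter appear in the output.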
ros
Title: Need correct syntax to call an actionlib service from a class member function
What is the correct syntax to call an actionlib service from a class member function? As defined below, I believe the variable ac goes out of scope, but my attempts to move it outside have failed, my C++ being weak.
void handelerSimpleGoal(const geometry_msgs::PoseStamped msg)
{
typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;
// find the coke can and use it as goal
ros::ServiceClient client = n.serviceClient<gazebo_msgs::GetModelState>("/gazebo/get_model_state");
gazebo_msgs::GetModelState getmodelstate;
getmodelstate.request.model_name ="coke_can";
client.call(getmodelstate);
// send the new goal
MoveBaseClient ac("move_base", true);
move_base_msgs::MoveBaseGoal goal;
goal.target_pose.header.frame_id = "map";
goal.target_pose.header.stamp = ros::Time::now();
goal.target_pose.pose = getmodelstate.response.pose;
ac.sendGoal(goal);
}
Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-10-15
Post score: 0
I would imagine at some point you want to know the status of this issued request. The tutorial on using this in the navigation stack suggests that you wait for the result (which will block), then check the result to see what happened. "ac" will still go out of scope once the function finishes but at least this way you'll have all the information you need. Depending on your exact use case needs you can also increase the scope of that MoveBaseClient variable and avoid this issue. Anyway according to http://wiki.ros.org/navigation/Tutorials/SendingSimpleGoals :
ac.waitForResult(); | {
"domain": "robotics.stackexchange",
"id": 15876,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
filters
See also this ipython notebook http://nbviewer.ipython.org/4393835/ for a little analysis.
Since it is entirely possible that the timestream filtering process cannot be represented by any filter in image space, an answer addressing "how do I approximate these filter functions" is appropriate. I would approximate the filter using a finite impulse response (FIR) design where the weights of the FIR filter are computed based on the spectrum that the filter passes. You can do this by choosing samples of the desired frequency response (from the graphs above) and taking the inverse discrete Fourier transform of the samples. The result will be the impulse response of the desired filter. The filter is then realized as a weighted sum using the impulse response sample values as the weights. So your filter will be $A_0 + A_1Z^{-1} + A_2Z^{-2}...$ where $A_0$ is the first value of the impulse response, $A_1$ is the second and so on. $Z^{-n}$ represents a sample delay of $n$ (coming from a Z-domain representation of the filter structure). This approach is sometimes referred to as the "frequency sampling" approach to FIR filter design. I have used this approach to create time sequences that contain specific spectral content. Google FIR filter design, frequency sampling. | {
"domain": "dsp.stackexchange",
"id": 618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters",
"url": null
} |
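The frequency-sampling recipe described above can be sketched in a few lines of plain Python. This is a toy illustration of the idea, not a production filter design:

```python
import cmath

def fir_from_freq_samples(H):
    """Inverse DFT of desired frequency-response samples H -> FIR taps A_n."""
    N = len(H)
    return [sum(H[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

def apply_fir(taps, x):
    """y[n] = A_0 x[n] + A_1 x[n-1] + ... (weighted sum of delayed samples)."""
    return [sum(a * x[n - k] for k, a in enumerate(taps) if n - k >= 0)
            for n in range(len(x))]

# pass DC only: H[0] = 1, everything else 0 -> every tap equals 1/N
taps = fir_from_freq_samples([1] + [0] * 7)
```

Filtering a constant signal with these taps returns the constant once the delay line fills, which is the expected behavior of a filter whose response is 1 at DC.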
As evidence for Model B over Model A, I give you the famous Monty Hall Problem, with which I hope you are familiar. As an example, say you choose door #1 and Monty Hall opened door #3. Most explanations go something like “Door #1 had a 1/3 chance to be right originally. Since Monty Hall can always open a door without the prize, that can’t change. So you still have a 1/3 chance, and door #2 has a 2/3 chance.” While this seems valid, and does get the right answer, it is incorrect reasoning.
If the prize is behind door #2 (P(D2)=1/3), Monty Hall must open door #3 (P(O3|D2)=1).
If the prize is behind door #3 (P(D3)=1/3), Monty Hall can’t open door #3 (P(O3|D3)=0).
If the prize is behind door #1 (P(D1)=1/3), Monty Hall must choose between door #3 (P(O3|D1)=X) and door #2 (P(O2|D1)=1-X).
Given that he opened door #3, the probability the prize is behind door #1 is:
P(D1|O3) = P(O3|D1)*P(D1)/[P(O3|D1)*P(D1)+P(O3|D2)*P(D2)+P(O3|D3)*P(D3)]
= (1/3)*(X)/[(1/3)*(X)+(1/3)*(1)+(1/3)*(0)]
= X/(1+X).
Similarly, P(D1|O2)=(1-X)/(2-X). Model A says X=1 and so P(D1|O3)=1/2. But then P(D1|O2)=0; and in fact, they are different for most X’s. The reasoning I quoted above is incorrect because it can’t reproduce this difference. | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.985496419030704,
"lm_q1q2_score": 0.8057167185797394,
"lm_q2_score": 0.8175744761936437,
"openwebmath_perplexity": 520.8433932804764,
"openwebmath_score": 0.5667750239372253,
"tags": null,
"url": "https://jmanton.wordpress.com/2010/06/07/boy-tuesday/"
} |
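The Bayes computation above is easy to verify exactly with rational arithmetic; this snippet evaluates P(D1|O3) for two host strategies X:

```python
from fractions import Fraction

def p_d1_given_o3(X):
    """Bayes' rule with priors 1/3 and likelihoods
    P(O3|D1) = X, P(O3|D2) = 1, P(O3|D3) = 0."""
    prior = Fraction(1, 3)
    weights = [prior * X, prior * 1, prior * 0]  # doors 1, 2, 3
    return weights[0] / sum(weights)

print(p_d1_given_o3(Fraction(1)))      # 1/2  (Model A: host always opens #3)
print(p_d1_given_o3(Fraction(1, 2)))   # 1/3  (unbiased host, the usual answer)
```

The function reproduces the closed form P(D1|O3) = X/(1+X) for any X, which is the difference the quoted "it can't change" reasoning fails to capture.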
(2) Thus, YES! SUFFICIENT. See attachment.
Attachments: photo.JPG
Last edited by mbaiseasy on 20 Sep 2012, 07:21, edited 2 times in total.
Re: If a, b, c, and d, are positive numbers, is a/b < c/d? [#permalink]
20 Sep 2012, 01:51
I agree with your approach to the problem.
But isn't it easier to use the rule about a proper fraction raised to the power of 2, instead of manipulating the equation in statement 2?
racingip wrote:
I agree with your approach to the problem.
But isn't it easier to use the rule about a proper fraction raised to the power of 2, instead of manipulating the equation in statement 2?
You are right, that's what I did: I cancelled out the powers. Sorry my explanation is not clear, haha!
I just summarized that when you start cancelling out, it's like multiplying by that fraction I put up.
| {
"domain": "gmatclub.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9849273708061969,
"lm_q1q2_score": 0.8249948202358408,
"lm_q2_score": 0.8376199552262967,
"openwebmath_perplexity": 4299.89453809255,
"openwebmath_score": 0.6671302914619446,
"tags": null,
"url": "https://gmatclub.com/forum/if-a-b-c-and-d-are-positive-numbers-is-a-b-c-d-135521.html"
} |
wifi, urg-node, hector, network, hokuyo
Original comments
Comment by SQ on 2016-03-29:
Hi, I have the exact same issue here. I tried your method but it did not help. I'm using a UST-20LX. Any help? Thanks! :)
Comment by psprox96 on 2016-03-29:
Hi there! It's been quite some time since I did this project, but I hope I can help you. I am using the UST-20LX too. First of all, you can actually connect the laser to your computer through a wired network. If I'm not wrong, you can't use the internet at the same time.
Comment by psprox96 on 2016-03-29:
In order to connect to the internet, you need to disconnect the wired network and connect to wireless. That's a bit time consuming, so I decided to do the above method. The Hokuyo UST-20LX static IP is 192.168.0.10 while my WiFi connection is 192.168.0.10x, so they clash.
Comment by psprox96 on 2016-03-29:
That's why I needed to create a new static IP of 192.168.0.15, leaving aside the WiFi connection of 192.168.0.10x. May I know your internet connection IP?
Comment by SQ on 2016-03-29:
Hi thanks for replying! My IP address is 192.168.0.100 for WiFi and 192.168.0.15 for the UST
Comment by psprox96 on 2016-03-29:
If you follow this:
auto eth0
iface eth0 inet dhcp
auto eth0:0
iface eth0:0 inet static
address 192.168.0.15
netmask 255.255.255.0
there should be no problem.
After setting this network interface, you need to type sudo ifup eth0:0 to use laser and sudo ifdown eth0:0 to use WiFi.
Comment by psprox96 on 2016-03-29:
Or is it that you are facing other errors?
Comment by SQ on 2016-03-30: | {
"domain": "robotics.stackexchange",
"id": 21939,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "wifi, urg-node, hector, network, hokuyo",
"url": null
} |
newtonian-mechanics, forces, work, potential-energy, definition
The potential energy comes from the negative work done by gravity. The work done by gravity is negative because the direction of the force is opposite to the displacement. When a force does negative work on an object it takes energy away from that object. In this case, gravity removes energy you gave the object due to your positive work, and stores it as gravitational potential energy of the earth-object system.
The work you do on the object is positive because your force is in the same direction as the displacement. If the negative work done by gravity exactly equals the positive work you did, the net work is zero. This would occur if the object began at rest on the ground and you brought it to rest at the height $h$. Since the object begins and ends at rest, the change in its kinetic energy ($\Delta KE$) is zero and from the work energy principle that means the net work done is zero.
Now, since gravity took the energy $mgh$ away from the object, what did it do with it? It stored it as a change in gravitational potential energy ($\Delta U$) of the earth-object system. Note that potential energy $U$ is a system property. The object alone does not possess it and the earth alone does not possess it. It only exists when both the object and the earth are present. Thus it "belongs" to both.
Suppose that instead of you bringing it to rest at $h$, you kept lifting it so that it had a vertical velocity $v$ and kinetic energy of $\frac{1}{2}mv^2$ at $h$. In that case at $h$ there would be a change in potential energy ($\Delta U$) and a change in kinetic energy ($\Delta KE$). The $\Delta KE$ then represents the net work done on the object per the work-energy principle, or
$$W_{net}=W_{you}+W_{gravity}=\Delta KE$$
why it is defined as work done by conservative force | {
"domain": "physics.stackexchange",
"id": 83254,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, work, potential-energy, definition",
"url": null
} |
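As a numeric illustration of the bookkeeping above (the mass, height, and final speed are arbitrary example values, not from the text):

```python
# lifting a mass m from rest at the ground to height h, arriving with speed v
m, g, h, v = 2.0, 9.81, 3.0, 1.5   # arbitrary example values (SI units)

W_gravity = -m * g * h             # gravity's force opposes the displacement
delta_KE = 0.5 * m * v ** 2        # object starts at rest, ends at speed v
W_you = delta_KE - W_gravity       # from W_net = W_you + W_gravity = delta_KE
delta_U = -W_gravity               # stored in the earth-object system: m*g*h

print(W_you)  # equals m*g*h + 0.5*m*v**2
```

Setting v to zero recovers the first case in the text: your work exactly cancels gravity's negative work and the net work is zero.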
glaciology, antarctic, sea-ice
Title: Is Antarctic sea ice at record levels?
Every time I read a news article about Antarctic ice extent, I don't seem to get a clear answer as to what the deal is. If I look at the February sea ice extent from the National Snow and Ice Data Center (NSIDC), I see no trend:
If I look at the September data from NSIDC, then I get a clear increase. NSIDC even tells you what the trend is:
Monthly Antarctic September ice extent for 1979 to 2014 shows an increase of 1.3% per decade relative to the 1981 to 2010 average.
Finally, if I look at the monthly values it is even trickier:
What I can tell is that the minimum extent seems to be unchanged, while there is an increase in the maximum extent. If I were to analyze the trend using all the data, I wonder if any significant trend would be extracted.
I don't want to create any controversies, as I realize that the gains in the Antarctic (small increase of ~100,000 km2 per decade) are much smaller than the loss in the Arctic (decrease of ~500,000 km2 per decade) (NSIDC).
My questions are: Considering the level of variability in coverage, can the Antarctic sea ice extent be considered at "record levels"? If so, what is the consensus on the causes of this increase? There have been a number of studies based on observations and modelling of the Antarctic sea ice trends. One major observation is that since continuous satellite coverage began in late 1978 (3), there has been an increase in annual Antarctic sea ice extent (1)(2)(3)(4), reaching ~28% of the Arctic sea ice loss (1).
Critically, according to Simmonds (2015) (1) (and Fan et al. 2014 (3)), despite interannual, sub-decadal and multidecadal variations, there is a statistically significant increasing trend in sea ice extent for all seasons since late 1978 (as shown in the image below). | {
"domain": "earthscience.stackexchange",
"id": 446,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "glaciology, antarctic, sea-ice",
"url": null
} |
javascript, performance, jquery
Title: Creating dynamic table rows
I would like to know how to make this faster. It's okay for 100 rows, but 1k, 10k, 100k is different. | {
"domain": "codereview.stackexchange",
"id": 32241,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, performance, jquery",
"url": null
} |
arduino, imu, rosserial
Original comments
Comment by tonybaltovski on 2015-01-31:
I've used the i2c library without any issues. Try publishing slower on the arduino.
Comment by nvoltex on 2015-02-01:
Could you check my edit? I still don't understand what's happening.
Comment by tonybaltovski on 2015-02-01:
Can you post your code? Did you try slowing down the publishing rate?
Comment by nvoltex on 2015-02-02:
I have added the code, but note that it's the code before adding the subscriber. By slowing down the publishing rate are you talking about the baudrate used on the node? (thanks for helping!)
Comment by tonybaltovski on 2015-02-02:
I meant adding delay(15); or publishing at a certain rate. You may be overfilling the serial buffer.
Comment by nvoltex on 2015-02-02:
At the time I tried using different delays; however, it didn't help. With the addition of Serial.begin() it started working, although I don't know why that would help. However, when I try to add a subscriber to the node, the IMU stops responding (I think it fails to initialize).
Comment by tonybaltovski on 2015-02-02:
Try initializing the node before you start the i2c comm. Also, try manually setting the baud nh.getHardware()->setBaud(BAUD); before the node initializes.
Comment by nvoltex on 2015-02-02:
I tried your suggestions and still got the same problem. I made an edit with some new information and the code I'm using right now.
Comment by tonybaltovski on 2015-02-02:
Did you create the ros_lib machine that is connecting to the Arduino?
Comment by nvoltex on 2015-02-02:
I didn't understand your question. The arduino is connected to a computer running ubuntu 14.04 and I installed the ros library on the arduino IDE following the tutorial: http://wiki.ros.org/rosserial_arduino/Tutorials/Arduino%20IDE%20Setup. | {
"domain": "robotics.stackexchange",
"id": 20744,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "arduino, imu, rosserial",
"url": null
} |
general-relativity, differential-geometry, operators, commutator, vector-fields
Let's consider the two terms separately. Recall that in the definition of the action of the product $\hat{A}\hat{B}$, we have to act with $\hat{B}$ first. With $\hat{B} = f(x)$, we have $\hat{B} v(x) = f(x) v(x)$. Then we act with $\hat{A}$ on the result of this action. Since the result of the action of $\hat{B}$ is the product $f(x) v(x)$, we act with $\hat{A}$ on this product. With $\hat{A} = \frac{\partial}{\partial x}$, this means that
$$\hat{A}\hat{B} v(x) = \hat{A} ( f(x) v(x) ) = \frac{\partial}{\partial x} ( f(x) v(x) ).$$
Thus, we can see that in the first term of the action of the commutator, a derivative acts on the product $f(x) v(x)$. | {
"domain": "physics.stackexchange",
"id": 42900,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, differential-geometry, operators, commutator, vector-fields",
"url": null
} |
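The point is easy to check numerically: the commutator $[\partial/\partial x, f]$ acting on $v(x)$ should equal $f'(x)\,v(x)$. A quick finite-difference sketch (the test functions are arbitrary choices):

```python
h = 1e-5                      # step for central differences

f = lambda x: x ** 2          # the multiplication operator B = f(x)
v = lambda x: 3.0 * x + 1.0   # an arbitrary test function

def d(g, x):
    """Central-difference approximation of dg/dx."""
    return (g(x + h) - g(x - h)) / (2 * h)

x0 = 1.7
AB = d(lambda x: f(x) * v(x), x0)   # A acts on the product f(x) v(x)
BA = f(x0) * d(v, x0)               # B multiplies after A differentiates
print(AB - BA)                       # approximately f'(x0) * v(x0)
```

The nonzero difference AB − BA is exactly the extra term produced by the derivative acting on the product, as described above.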
poynting-vector
Title: Poynting vector direction
Just a quick question if I may.
The Poynting vector, or the energy flux density, is given by:
$\mathbf{S} = \frac{1}{\mu_{0}}(\mathbf{E} \times \mathbf{B})$
So it's the cross product between the $\mathbf{E}$-field and $\mathbf{B}$-field. So depending on the direction of the fields, the Poynting vector will point in some direction. So let's say the $\mathbf{E}$-field has the direction $\mathbf{e}_{y}$ and the $\mathbf{B}$-field has the direction $\mathbf{e}_{z}$; then the resulting direction for $\mathbf{S}$ will be $\mathbf{e}_{x}$.
So my question is, is that the direction of which the energy is flowing, or is there some fancy thing I need to know, like it's the opposite or something like that ?
Thanks in advance :) The Poynting vector was defined as directional energy flux density. Therefore, it naturally shows the way energy flows and you do not have to switch the direction or anything. So, if you have an $\mathbf{E}$-field in the direction $\mathbf{e}_y$ and $\mathbf{B}$-field in the direction $\mathbf{e}_z$, Poynting vector is in the direction $\mathbf{e}_x$ and that is the direction in which energy flows. | {
"domain": "physics.stackexchange",
"id": 69816,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "poynting-vector",
"url": null
} |
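The direction claim is just the right-hand rule of the cross product; here is a minimal check in plain Python ($\mu_0$ scales the magnitude but not the direction):

```python
MU_0 = 4e-7 * 3.141592653589793   # vacuum permeability, T*m/A

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def poynting(E, B):
    """S = (1/mu_0) E x B, the directional energy flux density."""
    return tuple(c / MU_0 for c in cross(E, B))

S = poynting((0.0, 1.0, 0.0), (0.0, 0.0, 1.0))  # E along e_y, B along e_z
print(S)  # points along +e_x
```

Because S was defined as the energy flux density, the +e_x result is also the direction of energy flow, with no sign flip needed.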
quantum-mechanics, angular-momentum, wavefunction, schroedinger-equation, multipole-expansion
Title: Question from Weinberg Lectures on quantum mechanics
On page 37 of Weinberg's Lectures on Quantum Mechanics (2nd edition), after Eq. 2.1.17, he states the following:
The Schrödinger equation (2.1.3) then takes the form
$E \psi(x) = -\frac{\hbar^2}{2\mu r^2} \frac{\partial}{\partial r} (r^2 \frac{\partial \psi(x) }{\partial r} ) + \frac{1}{2\mu r^2} L^2\psi(x) + V(x) \psi(x)$. (2.1.17)
Now let us consider the spectrum of the operator $L^2$. As long as
$V(r)$ is not extremely singular at $r = 0$, the wave function $\psi$ must be a
smooth function of the Cartesian components $x_i$ near $x = 0$, in the
sense that it can be expressed as a power series in these components.
Suppose that, for some specific wave function, the terms in this power
series with the smallest total number of factors of $x_1$, $x_2$, and $x_3$
have $l$ such factors. Here $l$ can be $0$, $1$, $2$, etc. The sum of all these terms forms
what is called a homogeneous polynomial of order $l$ in $x$. | {
"domain": "physics.stackexchange",
"id": 83110,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, angular-momentum, wavefunction, schroedinger-equation, multipole-expansion",
"url": null
} |
graphs, logic
Title: How to express a structure of a 5-colorable graph in logic
A graph can be expressed as a structure $G = \langle A, R \rangle$ satisfying the axioms
$ \forall xy R(x,y) \rightarrow R(y,x)$ and $ \forall x \lnot R(x,x)$.
How can the structure and/or axioms be extended to express a 5-colourable graph? General ideas:
Require that each vertex ($\forall x \ldots$) satisfies at least one color predicate (Use $\lor$ among five color predicates).
Also require that no vertex ($\lnot\exists x \ldots$) has two distinct colors (you can enumerate all the ten distinct pairs: $\lnot({\sf red}(x)\land{\sf blue}(x))\land \lnot(\ldots)\land \cdots$).
For each color (repeat this axiom five times, once for each color), require that if $x$ has that color and $R(x,y)$ then $y$ has not that color (e.g. $\forall x y.\ {\sf red}(x)\land R(x,y) \implies \lnot{\sf red}(y)$). | {
"domain": "cs.stackexchange",
"id": 11379,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "graphs, logic",
"url": null
} |
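The three axiom families translate directly into a brute-force check. This small sketch enumerates explicit color assignments: each vertex gets exactly one of five colors (axiom families 1 and 2 by construction), and no edge may be monochromatic (family 3):

```python
from itertools import product

COLORS = range(5)

def five_colorable(vertices, edges):
    """True iff some assignment of 5 colors makes every edge bichromatic."""
    for assignment in product(COLORS, repeat=len(vertices)):
        color = dict(zip(vertices, assignment))       # exactly one color each
        if all(color[u] != color[v] for u, v in edges):
            return True                               # axiom family 3 holds
    return False

K5_edges = [(u, v) for u in range(5) for v in range(u + 1, 5)]
K6_edges = [(u, v) for u in range(6) for v in range(u + 1, 6)]
print(five_colorable(range(5), K5_edges))  # True
print(five_colorable(range(6), K6_edges))  # False
```

The complete graphs K5 and K6 are the natural sanity checks: K5 needs exactly five colors, while K6 cannot be 5-colored.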
molecular-genetics, mutations, biophysics
Title: What is the statistical relationship between radioactivity and mutation rate?
This question tries to narrow down the scope of that question.
What is the statistical relationship between radioactivity and mutation rate? By how much would the mutation rate be lowered in an idealized world where radioactivity is absent? According to this wiki page, the average background radiation is 3.01 millisievert per year (including natural and artificial sources). This equals 0.301 rad.
I found a short letter to Nature that says the average forward mutation rate in humans is 2.6 * 10^-7 per locus per rad = 2.6 mutations per ten million bases. It also says that this mutation rate is quite uniform among species. The size of the human genome is approximately 3.2 gigabases.
So doing some quick maths: (2.6 * 3.2 * 100) * 0.301 = 250.432 mutations per year per human. This is an approximation because I rounded the size of the human genome to 3.2 gigabases. Also, this doesn't mean that we actually acquire this many mutations, because we have repair mechanisms. | {
"domain": "biology.stackexchange",
"id": 3713,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "molecular-genetics, mutations, biophysics",
"url": null
} |
geophysics, sedimentology, glaciology, topography, isostasy
Are there any other reasons? What are the relative proportions in magnitude of these factors? Forming of coastline
During the last ice age, the North Sea was dry. When the ice melted sea levels slowly started to rise again and due to tides and currents
a barrier of dunes was formed along what approximately is today's coast line. This created an area of land that fell dry during ebb-tide and flooded during high tide (this can still be seen in the 'Waddenzee' in the North of the Netherlands where you can walk to some of the islands during low tide). The big rivers that flow through the Netherlands brought in more sand, slowly keeping larger parts of the land dry.
Isostatic rebound
During the ice age, the surface of Scandinavia was pushed down. After the ice melted it started to rise again and pushed the Western and Northern part of Netherlands down. Strangely enough the Southern and Eastern parts of the Netherlands are rising, so it seems the Netherlands is tilting.
I'm not sure how large the isostatic effect has been, but we know that the North of the Netherlands is still sinking by about 2 cm per century (source in Dutch).
Human influence
I know you've asked for non-anthropogenic causes, but I'm going to include this anyway because it seems human influence has had a much bigger effect on elevation than the isostatic rebound.
In the 11th century the Dutch started to actively shape the land by building dikes and later also by using wind mills to pump out water. The Flevopolder is an example of a large part of land that has been created by the Dutch in the 1950s and 60s. When groundwater levels became lower the moors settled and started oxidizing. Researchers think that the settling and oxidation of moors today is responsible for up to 15mm decline per year (source in Dutch).
Additionally in the 16th and 17th century a lot of peat was removed from the moors and used as fuel. Peat removal created new lakes, but some of those lakes were pumped dry later and were used as farmland. Also, the weight of dikes and houses on moors still cause subsidence today in areas in the West.
In the Northeastern part of the Netherlands gas extraction has also caused local elevation drops of up to 30cm (source in Dutch) in the last 40 years.
Other sources (all in Dutch): | {
"domain": "earthscience.stackexchange",
"id": 336,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "geophysics, sedimentology, glaciology, topography, isostasy",
"url": null
} |
• By the converse you mean Kuratowski's theorem? (I.e., the result that closedness of the projection characterizes compact spaces.) May 28, 2015 at 5:12
• BTW the proof of Kuratowski's theorem in Engleking (Theorem 3.1.16) does not use filters. Although it is very similar to Henno Brandsma's proof, so you will probably find that they are essentially the same. May 28, 2015 at 5:18
• @EricAuld I have tried to give a proof using nets. I hope I did not make some mistakes there. May 28, 2015 at 13:02
• Can you explain why $\bigcap \limits _{d \in D} p_Y[\overline{A_d}] = p_Y[\bigcap\limits _{d \in D} \overline{A_d}]$? Jun 27, 2015 at 4:43
• @EricAuld Thanks for noticing it. It was quite an embarrassing mistake. I have tried to post a new proof. Let us hope that this one is correct (fingers crossed). Jun 27, 2015 at 11:08 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.983342959746696,
"lm_q1q2_score": 0.8084938887946651,
"lm_q2_score": 0.8221891261650247,
"openwebmath_perplexity": 98.59585764790005,
"openwebmath_score": 0.9260526299476624,
"tags": null,
"url": "https://math.stackexchange.com/questions/22697/projection-map-being-a-closed-map/673505"
} |
slam, navigation, encoders, robot-localization, rtabmap
Title: Generate odometry from encoders, cmd_vel, differential velocity
Hi to all,
I'm trying to mix /imu/data with my robot (Clearpath Husky) odometry by using the robot_localization package in order to use them with the rtabmap_ros package with my RealSense and SICK data.
I'm doing this because I want to improve my 3D map by adding IMU information in RTAB-Map.
I already have the /imu/data information, but I don't have the /odometry topic.
During my field tests, I only recorded encoders, differential_velocity and cmd_vel.
Is it possible to use these topics to generate the /odometry message needed by the robot_localization package?
I hope you can help me!
Thank you!
Originally posted by Marcus Barnet on ROS Answers with karma: 287 on 2016-07-20
Post score: 0
The EKF in r_l supports multiple input message types. See this section of the wiki. You aren't required to have any set of input topics; you can have as many or as few as you want (though more is usually better).
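For the encoder side, an odometry source can be dead-reckoned from the per-wheel distances of a differential-drive base. The sketch below is not from the original thread and the wheel separation is a placeholder, not a vouched-for Husky spec; in a real node you would stamp the resulting pose into a nav_msgs/Odometry message and feed it to robot_localization:

```python
import math

# Assumed wheel separation in metres (placeholder -- check your platform).
WHEEL_SEPARATION = 0.555

def integrate_odometry(pose, d_left, d_right, wheel_sep=WHEEL_SEPARATION):
    """Advance (x, y, theta) given the distance travelled by each wheel
    since the last update (derived from encoder tick deltas)."""
    x, y, th = pose
    d = 0.5 * (d_left + d_right)          # distance of the base centre
    dth = (d_right - d_left) / wheel_sep  # change in heading
    # Midpoint (second-order) integration of the planar pose.
    x += d * math.cos(th + 0.5 * dth)
    y += d * math.sin(th + 0.5 * dth)
    return (x, y, th + dth)

# Example: 1 m straight ahead, then a quarter-turn in place.
pose = (0.0, 0.0, 0.0)
pose = integrate_odometry(pose, 1.0, 1.0)
quarter = math.pi / 2 * WHEEL_SEPARATION / 2  # arc each wheel travels
pose = integrate_odometry(pose, -quarter, quarter)
```

Each update also yields the twist (d and dth over the time step), which is what the EKF actually fuses when only velocities are trustworthy.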
Originally posted by Tom Moore with karma: 13689 on 2016-08-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 25295,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "slam, navigation, encoders, robot-localization, rtabmap",
"url": null
} |
quantum-mechanics, hilbert-space, wavefunction, differentiation, quantum-chemistry
Title: What is $\frac{d}{d\psi}\langle\psi| \hat{O} | \psi\rangle$? I would like to know what is the derivative of an expectation value with respect to the molecular state
$$\frac{d}{d\psi}\langle\psi| \hat{\mathbf{O}} | \psi\rangle$$
Note that here $|\psi\rangle$ is a complex column vector of length $S$ where each of its components depend on the space $q$ and $\hat{\mathbf{O}}$ is a complex $S \times S$ matrix which also depends on the space.
In other words, I want to know what is
$$\frac{d}{d\mathbf{\Psi}(q)}\int dq \mathbf{\Psi}^{\dagger}(q)\hat{\mathbf{O}}(q) \mathbf{\Psi}(q)$$
Is the answer
$$\frac{d}{d\mathbf{\Psi}(q)}\int dq \mathbf{\Psi}^{\dagger}(q)\hat{\mathbf{O}}(q) \mathbf{\Psi}(q) = \int dq \mathbf{\Psi}^{\dagger}(q)\hat{\mathbf{O}}(q) $$
correct? What you have is called a functional derivative. The notation is usually somewhat different. It is defined by the relationship
$$ \frac{\delta \psi(q)}{\delta\psi(q')} = \delta(q-q') . $$
In your example, we have
$$ \langle\psi|\hat{O}|\psi\rangle = \int \psi^*(q)O(q,q')\psi(q') dqdq' . $$
Note that I have changed the way the $q$'s appear to make the operator a more general kernel function. Now we can apply the functional derivative: | {
"domain": "physics.stackexchange",
"id": 60385,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, hilbert-space, wavefunction, differentiation, quantum-chemistry",
"url": null
} |
atmosphere, moon, sun
Title: Why were both the sun and the moon red today? Today was a normal day, except the sun and moon colors were strange. After 5pm, the sky was covered with cirrostratus-like translucent clouds and the sky was a blend of blue and grey.
Everything would be fine, except that the sun was orange between 4-5pm. Then by around 5-6pm, the sun was completely red like blood even though it was still high up, and it was 1+ more hour to sunset.
Then, I didn't look at the sky until 8pm when it was already dark. When I went out, the moon was red just like the sun couple hours before.
The whole thing I saw from around the south side of Chicago on US Labor Day (4 Sep, 2017). I didn't take pictures of the sun unfortunately because I disregarded its color, but I took pictures of the moon.
Here's how the moon looked through my phone camera, through binoculars:
Here is a similar picture but edited so that the moon looks exactly like I saw it with naked eye:
What can this be caused by? (As of writing this at 10:06pm the moon is still red, and it is 6 hours since both the sun and the moon were red/orange)
EDIT: I also didn't see anything about this phenomenon in the media which is strange, and once again, this was seen from Chicago. Smoke. There was significant smoke across the USA, which attenuated the light from the sun/moon due to increased scattering. The smoke particles effectively cause the light to reflect in different directions, so you see more colors.
See below for the HMS Smoke Polygons for the day, which clearly shows smoke over your region from the intense smoke/wildfire activity in the Pacific Northwest. You can also see the NASA Worldview composite of VIIRS visible imagery for the day, with fire locations in red. | {
"domain": "earthscience.stackexchange",
"id": 1193,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "atmosphere, moon, sun",
"url": null
} |
c, serial-port, raspberry-pi
arrow_points[0][4] = Point( 50*mp, 100*mp );
arrow_points[0][5] = Point( 70*mp, 60*mp );
arrow_points[0][6] = Point( 60*mp, 60*mp );
};
const Point* ppt[1] = { arrow_points[0] };
int npt[] = { 7 };
fillPoly( img,
ppt,
npt,
1,
Scalar( 250, 0, 0 ),
lineType );
} Nice. I am working on a similar project. In order to make your code more modular and object-oriented, you can write a Message class and a SerialPort class. It makes it much easier to restructure your code when you want to change the behavior, e.g.:
SerialPort sp("COM8"); // or "/dev/ttyUSB0" for linux
Message msg("alskadflaskjfd");
Message rx; | {
"domain": "codereview.stackexchange",
"id": 3093,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, serial-port, raspberry-pi",
"url": null
} |
quantum-algorithms, quantum-state, mathematics, measurement
Title: Understanding a quantum algorithm to estimate inner products While reading the paper "Compiling basic linear algebra subroutines for quantum computers", here, in the Appendix, the author/s have included a section on quantum inner product estimation.
Consider two vectors $x,y \in \mathbb{C}^n, x= (x_1, \dots , x_n), y= (y_1, \ldots, y_n)$, we want to estimate the inner product $\langle x | y \rangle$. Assume we are given a state $|\psi \rangle = \frac {1} {\sqrt 2} \big(|0 \rangle |x \rangle + |1 \rangle |y \rangle \big)$, after applying a Hadamard transform to the first qubit, the result is:
$$|\psi \rangle = \frac {1} {2} \big(|0 \rangle (|x \rangle + |y \rangle) + |1 \rangle(|x \rangle - |y \rangle) \big).$$
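For normalized states the two ways of writing the $|0\rangle$-branch probability coincide: $\frac14\big\||x\rangle+|y\rangle\big\|^2=\frac14\big(\langle x|x\rangle+\langle y|y\rangle+2\,\mathrm{Re}\langle x|y\rangle\big)=\frac12\big(1+\mathrm{Re}\langle x|y\rangle\big)$. A quick numerical sanity check of this identity with random normalized vectors (not code from the paper):

```python
import random

def rand_state(n):
    """A random normalized complex vector of length n (hypothetical test data)."""
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    norm = sum(abs(a) ** 2 for a in v) ** 0.5
    return [a / norm for a in v]

x = rand_state(8)
y = rand_state(8)

inner = sum(a.conjugate() * b for a, b in zip(x, y))        # <x|y>
p_norm = 0.25 * sum(abs(a + b) ** 2 for a, b in zip(x, y))  # ||x+y||^2 / 4
p_re = 0.5 * (1 + inner.real)                               # (1 + Re<x|y>) / 2
```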
The author then states that after measuring the first qubit in the computational basis, the probability to measure $|0 \rangle$ is given by $p = \frac {1} {2} \big(1 + \mathrm{Re}(\langle x | y \rangle) \big)$. I do not understand this statement. From what I understand, after applying a partial measurement to the first qubit, the probability of measuring $|0 \rangle$ is given by
$\frac {1} {4} \sum_{i=1}^{n}\overline{(x_i+y_i)}(x_i+y_i)$ (in other words, the squared norm of the vector $|x \rangle + |y \rangle$, divided by 4), so I am not sure why these formulas are equivalent, or if I am mistaken. You just need to do a bit more algebra: Note that | {
"domain": "quantumcomputing.stackexchange",
"id": 785,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-algorithms, quantum-state, mathematics, measurement",
"url": null
} |
homework-and-exercises, thermodynamics, orbital-motion, estimation
And now we will assume that all of this comes from the Earth's orbit changing: nothing else changes. This assumption should fill you with fear, but all we're after is a ball-park figure remember. So what we have is
$$\Delta F_S \approx 1.9\,\mathrm{W}/\mathrm{m}^{2}$$
So now we can just replicate some of the stuff we did for the black-body model:
$$S = \frac{\alpha}{R^{2}}$$
And we know that $S_0 = 1370\,\mathrm{W}/\mathrm{m}^{2}$, and $R_0 = 1.5\times 10^{11}\,\mathrm{m}$ (I am using $0$ subscripts for 'initially', and note that this $\alpha$ is not the same as the $\alpha$ in the previous section, sorry). So we can calculate $\alpha$:
$$\begin{align}
\alpha &= 1370\times\left(1.5\times 10^{11}\right)^{2}\\
&\approx 3.1\times 10^{25}
\end{align}$$
And now we know that $S = F_S\times 4/0.7$, so
$$\begin{align}
\Delta S &= 1.9\times 4/ 0.7\\
&\approx 11\,\mathrm{W}/\mathrm{m}^{2}
\end{align}$$
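Both intermediate numbers can be reproduced in a couple of lines, using the same figures as above:

```python
S0 = 1370.0              # W/m^2, solar constant
R0 = 1.5e11              # m, Earth-Sun distance
alpha = S0 * R0 ** 2     # ~3.1e25
delta_S = 1.9 * 4 / 0.7  # ~11 W/m^2
```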
And finally we can calculate $R_w$:
$$\begin{align}
R_w &= \sqrt{\frac{\alpha}{S + \Delta S}}\\
&= \sqrt{\frac{3.1\times 10^{25}}{1370 + 11}}\\
&= 1.49\times 10^{11}\,\mathrm{m}
\end{align}$$ | {
"domain": "physics.stackexchange",
"id": 34032,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, thermodynamics, orbital-motion, estimation",
"url": null
} |
beginner, programming-challenge, rust, roman-numerals
Title: Number to Roman in Rust I solved LC12 (Integer to Roman) in Rust.
I am a beginner with Rust, so I translated my previous solution from C++ to Rust.
I am looking for feedback on how I could improve the following Rust code.
The make_digit function is nested because I got an error saying it's not available in the current scope.
I could find the following cases:
thousands: just add M for how many thousands are in the number;
non-thousand digit:
9: <Digit_1><Digit_10> (c1 and c10 in the code)
4: <Digit_1><Digit_5>
5 --> 8: <Digit_5><Digit_1 * digit % 5>
1 --> 3: <Digit_1 * digit> | {
"domain": "codereview.stackexchange",
"id": 43959,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, programming-challenge, rust, roman-numerals",
"url": null
} |
algorithms, graphs, shortest-path, weighted-graphs
to see that the maximum distance of a node to the new node C, as a
function of the length of $(A,C)$, is given by the superposition of
roof shaped (inverted V) curves, characterized by their top,
associated to nodes of the graph.
to see that irrelevant curves can be eliminated efficiently by a
scan in monotonic order of abscissa of tops (with minimum backtrack).
then minimal points can be enumerated simply by intersecting the remaining
curves two by two, again in monotonic order of abscissa of tops. | {
"domain": "cs.stackexchange",
"id": 3102,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs, shortest-path, weighted-graphs",
"url": null
} |
Find the square root of a matrix
For the example here, ■■■using the ideal I■■■:
I=<(w-4) (w+4) (11 w^2-192),11 w^3-272 w+128 z,11 w^3-272 w+192 y,-11 w^3+144 w+128 x>
(first prove that this identity holds)
from which the four solution matrices are easily obtained, so please work them out;
– robjohn Jun 14 '13 at 10:21 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9653811571768047,
"lm_q1q2_score": 0.8487117344454347,
"lm_q2_score": 0.8791467785920306,
"openwebmath_perplexity": 202.73818572443764,
"openwebmath_score": 0.9421183466911316,
"tags": null,
"url": "http://math.stackexchange.com/questions/59384/find-the-square-root-of-a-matrix/59396"
} |
0 is the first number for which the statement is true. So, by the principle of mathematical induction, P(n) is true for all natural numbers n.
Use induction to prove that $10^n + 3 \times 4^{n+2} + 5$ is divisible by 9 for all natural numbers $n$. $P(1)$: $10 + 3 \times 64 + 5 = 207 = 9 \times 23$. That is, $6^k + 4 = 5M$, where $M \in \mathbb{Z}$.
Induction proof - divisibility by 3. I need to prove that $7^n + 4^n + 1$ is divisible by 6 using induction; I have gotten as far as the last step of $n = k+1$, which I am stuck on. Step 1: Show it is true for $n = 0$.
Now we have to prove that $P(k+1)$ is true. That is, $(k+2)(k+4)$ is divisible by 4.
$$\begin{aligned} (k+2)(k+4) &= (k+2)k + (k+2)4 \\ &= 4M + 4(k+2) && \text{by the assumption at Step 2} \\ &= 4\big[M + (k+2)\big], \text{ which is divisible by 4} \end{aligned}$$
Therefore it is true for $n = k+2$, assuming that it is true for $n = k$. Therefore $n(n+2)$ is always divisible by 4 for any even number $n$.
Step 2: Let us assume that $P(n)$ is true for some natural number $n = k$, i.e. $k^3 - 7k + 3 = 3m$, $m \in \mathbb{N}$ (i).
Base case: $2(2+2) = 8$, which is divisible by 4; therefore it is true for $n = 2$. | {
"domain": "businessclassof.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731126558705,
"lm_q1q2_score": 0.8083893201841182,
"lm_q2_score": 0.8397339756938818,
"openwebmath_perplexity": 801.9683889375204,
"openwebmath_score": 0.7480453848838806,
"tags": null,
"url": "https://shop.businessclassof.com/blog/site/843l7.php?e6dca3=community-action-beaverton"
} |
entropy, mathematical-physics, probability
\frac{1}{R}\int_{\frac{x}{R}}^{+\infty}dt\frac{e^{-t}}{t}=
\frac{E_1(\frac{x}{R})}{R},
$$
where $E_1(z)$ is an exponential integral.
Remark
If one wished to approach this problem from the point of view of Lagrange multipliers, one could maximize the entropy for the joint distribution function $p(x,r)$ subject to the constraint that $p(r)$ is given by the required form:
$$
S = -\int_0^{+\infty}dr\int_0^rdxp(x,r)\log p(x,r) +
\int_0^{+\infty}dr\lambda(r)\left[\int_0^rdxp(x,r)-p_0(r)\right],\\
p_0(r)=\frac{1}{R}e^{-\frac{r}{R}}
$$
where differentiation in respect to a Lagrange multiplier is replaced by functional variation in respect to $\lambda(r)$. | {
"domain": "physics.stackexchange",
"id": 82861,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "entropy, mathematical-physics, probability",
"url": null
} |
complexity-theory, space-complexity, context-sensitive
Title: Membership problem for context sensitive languages PSPACE-complete I have read that the membership problem for CSL is PSPACE-complete but I couldn't find the proof anywhere. So I tried it myself.
Let's mark the membership problem for CSL as MEM.
First I have to prove that $ MEM \in PSPACE $.
This should be easy: just take a Turing machine that generates the words of L in lexicographic order and checks whether any of them equals the input word w. We can stop the machine once we reach length |w|+1.
Second, make a reduction from some PSPACE language. The Quantified Boolean Formula problem (QBF) seems suitable for this reduction. I have seen how to make a reduction from MEM to QBF, but here I need the opposite. If I had a word w, I could make a formula based on the configurations I must go through to get w, and all those configurations would make the QBF true. The representation could be just going from the binary code to some formulas.
But when going in the opposite direction, I don't know how to construct a context-sensitive language from a given formula. I'm assuming that the context-sensitive language is given to you as a context-sensitive grammar. Summarizing the Wikipedia article, here is how to show that the word problem for context-sensitive grammars is $\mathsf{PSPACE}$-complete. | {
"domain": "cs.stackexchange",
"id": 6925,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, space-complexity, context-sensitive",
"url": null
} |
1.11.16
$$\begin{aligned} g(w) &= w^{4}+pw^{2}+qw+r, \\ p &= (-3a^{2}+8b)/8, \\ q &= (a^{3}-4ab+8c)/8, \\ r &= (-3a^{4}+16a^{2}b-64ac+256d)/256. \end{aligned}$$
The discriminant of $g(w)$ is
1.11.17
$$D=16p^{4}r-4p^{3}q^{2}-128p^{2}r^{2}+144pq^{2}r-27q^{4}+256r^{3}.$$
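Formula (1.11.17) agrees with the defining property of the discriminant of a monic quartic, $D=\prod_{i<j}(\alpha_i-\alpha_j)^2$ over its roots $\alpha_i$. A spot check (not from DLMF) with a depressed quartic of known roots:

```python
from itertools import combinations
from math import prod

roots = [1, 2, -1, -2]  # sum is zero, so the cubic term vanishes

def esym(rs, k):
    # k-th elementary symmetric polynomial of the roots
    return sum(prod(c) for c in combinations(rs, k))

# g(w) = w^4 + p w^2 + q w + r with these roots
p, q, r = esym(roots, 2), -esym(roots, 3), esym(roots, 4)

D = (16 * p**4 * r - 4 * p**3 * q**2 - 128 * p**2 * r**2
     + 144 * p * q**2 * r - 27 * q**4 + 256 * r**3)
prod_sq = prod((a - b) ** 2 for a, b in combinations(roots, 2))
```

Here $g(w)=(w^2-1)(w^2-4)=w^4-5w^2+4$, and both computations give $D=5184$.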
For the roots $\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}$ of $g(w)=0$ and the roots $\theta_{1},\theta_{2},\theta_{3}$ of the resolvent cubic equation | {
"domain": "nist.gov",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9927672353405446,
"lm_q1q2_score": 0.8272973484650266,
"lm_q2_score": 0.8333245891029456,
"openwebmath_perplexity": 13535.686312856615,
"openwebmath_score": 0.9937611222267151,
"tags": null,
"url": "https://dlmf.nist.gov/1.11"
} |
human-biology, chromosome
Title: Is it possible to correctly identify presence of Y chromosome with external physical test only? I asked a question related to the third sex, and I came to know that it's always possible to categorize a human as male or female by the presence of a Y chromosome.
Now, I have another question. Is there a way to say if someone has Y chromosome only with external physical test? More specifically:
Someone has a penis; can we say that s/he will have a Y chromosome?
Someone has a Y chromosome; can we say that he will have a penis? No, an external physical examination would be inconclusive. The reason is the TDF
gene. To be more specific, if a person is XY and the gene is not active, then the subject
would have a female appearance. Also, we cannot conclude that a person has a Y
chromosome even if they have a penis, because that gene could be translocated onto the X chromosome.
Here is a reference: http://en.wikipedia.org/wiki/Testis_determining_factor. | {
"domain": "biology.stackexchange",
"id": 528,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "human-biology, chromosome",
"url": null
} |
electromagnetism, electricity, electric-circuits, electric-current, electromagnetic-induction
Title: Coil Inducing a Back Emf in its Own Circuit
For the above question, how is a back emf even induced in the circuit because of the coil? Doesn't Faraday's law say that a change in flux threading an external coil will induce an emf? So then how can the coil induce a back emf in its own circuit?
Below is the solution, which looks right, assuming a back emf can be produced in the first place. The circuit has an EMF $\mathcal E_0$ in the form of the battery of $12~\mathrm V\,.$
The current $I$ through the circuit is not constant right from the beginning.
It was zero when the circuit was open.
After a sufficient amount of time-interval, $I$ would attain a steady value $I_0\,.$
Prior to that $\dot I \ne 0\,.$
It can't go from $0$ to $I_0$ at an instant.
So, as the current $I$ changes at the rate $\dot I(t),$ there then arises the induced electromotive force which would tend to run the current in such a direction so as to oppose the flux change.
So, applying the law of conservation of energy, we get $$\mathcal E_0 + \underbrace{\left(-~ \mathrm L~\dot I(t)\right)}_\textrm{Back EMF} = RI(t);\tag I $$ assuming the direction of the current driven by the battery as positive. | {
"domain": "physics.stackexchange",
"id": 43306,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, electricity, electric-circuits, electric-current, electromagnetic-induction",
"url": null
} |
friction, everyday-life, inertia
Title: Why do lighter bottles stay on better than heavy bottles? I was watching a Mythbusters video of them doing the tablecloth trick and Adam says that the lighter bottle worked better. Don't heavy bottles (more mass) have more inertia, and therefore shouldn't they work better? Can someone explain? I couldn't find an explanation. You are correct in saying that heavy bodies do have a larger inertia; however, heavier bodies also exert a larger normal force on the table and thus the friction force acting on them is also comparatively larger (since frictional force is proportional to the normal force, $f=\mu N$).
This means that while pulling, the larger frictional force will generate a larger torque and thus the heavier bottle will not let the table cloth slip beneath it, rather the bottle itself will start rotating. Thus it is due to this aspect that heavier bottles don't work well for this trick. But if the bottle is light enough, then the friction force will also correspondingly be lower (due to a smaller normal force) and thus the bottle will easily slip when we pull the table cloth.
However, do note that there are many other factors also at play here. You might observe completely different results if you repeat this trick with different cloth or a differently shaped bottle. So this observation isn't universal. | {
"domain": "physics.stackexchange",
"id": 66783,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "friction, everyday-life, inertia",
"url": null
} |
ros, roomba
All this is based off of my own understanding and experience with ROS, and ROS can certainly be more complex or simple depending on the application. But hopefully this is a starting point for you.
Cheers,
Dulluhan
Originally posted by dulluhan with karma: 26 on 2016-06-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by snakeninny on 2016-06-19:
In my understanding, nodes are like processes, performing various operations. According to your description, ROS seems like a commcenter of all nodes/processes inside a robot, may I say so? If there's such a commcenter in Roomba (or any other advanced robots), that's the equivalent of ROS, right?
Comment by dulluhan on 2016-06-19:
Yes, however it is important to note that most of ROS's functionalities are passive. ROS is a facilitator of information exchange between nodes. Just like how your ISP connects you to other people, servers, and computers but does not actually do computation for you, the nodes produce the information
Comment by snakeninny on 2016-06-19:
So actually ROS defines a universal standard for processes on robot operating systems to work with each other. Other vender OSes have done this too, but each in their own ways. Is that right?
Comment by snakeninny on 2016-06-20:
Found a definition from here:
At the core of ROS is an anonymous publish-subscribe middleware system
So I think that's it ;)
Comment by Icehawk101 on 2016-06-21:
That's not entirely accurate. In addition to anonymous publish-subscribe communications there is also services and action servers. | {
"domain": "robotics.stackexchange",
"id": 24974,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, roomba",
"url": null
} |
nuclear-engineering, nuclear-technology
Summary
So, our typical PWR unit might have on the order of 10-15,000 kg of U-235. A nuclear site may have 2 or more units (1- and 2-unit sites being predominant), so you'd have to multiply the above numbers depending on your definition of "one place", but to answer the question, our PWR has >>65 kg U-235.
References | {
"domain": "engineering.stackexchange",
"id": 3783,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nuclear-engineering, nuclear-technology",
"url": null
} |
# 12.6 Outliers
## Try it
Identify the potential outlier in the scatter plot. The standard deviation of the residuals or errors is approximately 8.6.
The outlier appears to be at (6, 58). The expected y value on the line for the point (6, 58) is approximately 82. Fifty-eight is 24 units from 82. Twenty-four is more than two standard deviations (2s = (2)(8.6) = 17.2). So 58 is more than two standard deviations from 82, which makes (6, 58) a potential outlier.
## Numerical identification of outliers
In [link], the first two columns are the third-exam and final-exam data. The third column shows the predicted ŷ values calculated from the line of best fit: ŷ = –173.5 + 4.83 x . The residuals, or errors, have been calculated in the fourth column of the table: observed y value − predicted y value = y − ŷ .
s is the standard deviation of all the y − ŷ = ε values, where n = the total number of data points. If each residual is calculated and squared, and the results are added, we get the SSE. The standard deviation of the residuals is calculated from the SSE as:
$s=\sqrt{\frac{SSE}{n-2}}$
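With the rounded residuals listed further below in this section, the formula can be evaluated directly. The result, about 16.5, differs slightly from the calculator's 16.4 because the listed residuals are rounded; this is a sketch, not the textbook's own computation:

```python
import math

# Residuals (y - y-hat) from the example's table, as listed in this section.
residuals = [35, -17, 16, -6, -19, 9, 3, -1, -10, -9, -1]
n = len(residuals)                    # 11 data points
sse = sum(e ** 2 for e in residuals)  # sum of squared errors
s = math.sqrt(sse / (n - 2))          # ~16.5 with these rounded residuals
```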
## Note
We divide by ( n – 2) because the regression model involves two estimates.
Rather than calculate the value of s ourselves, we can find s using the computer or calculator. For this example, the calculator function LinRegTTest found s = 16.4 as the standard deviation of the residuals:
• 35
• –17
• 16
• –6
• –19
• 9
• 3
• –1
• –10
• –9
• –1
. | {
"domain": "jobilize.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9822877044076956,
"lm_q1q2_score": 0.828947777264442,
"lm_q2_score": 0.8438950966654772,
"openwebmath_perplexity": 899.0178928982239,
"openwebmath_score": 0.6024207472801208,
"tags": null,
"url": "https://www.jobilize.com/course/section/how-does-the-outlier-affect-the-best-fit-line-by-openstax?qcr=www.quizover.com"
} |
# Is the empty set homeomorphic to itself?
Consider the empty set $$\emptyset$$ as a topological space. Since the power set of it is just $$\wp(\emptyset)=\{\emptyset\}$$, this means that the only topology on $$\emptyset$$ is $$\tau=\wp(\emptyset)$$.
Anyway, we can make $$\emptyset$$ into a topological space and therefore talk about its homeomorphisms. But here, we seem to have an annoying pathology: is $$\emptyset$$ homeomorphic to itself? In order for this to be true, we need to find a homeomorphism $$h:\emptyset \to \emptyset$$. It would be very unpleasant if such a homeomorphism did not exist.
I was tempted to think that there are no maps from $$\emptyset$$ into $$\emptyset$$, but consider the following definition of a map:
Given two sets $$A$$ and $$B$$, a map $$f:A\to B$$ is a subset of the Cartesian product $$A\times B$$ such that, for each $$a\in A$$, there exists exactly one pair $$(a,b)\in f\subset A\times B$$ (obviously, we denote such a unique $$b$$ by $$f(a)$$; $$A$$ is called the domain of the map $$f$$ and $$B$$ is called the codomain of the map $$f$$).
Thinking this way, there is (a unique) map from $$\emptyset$$ into $$\emptyset$$! This is just $$h=\emptyset\subset \emptyset\times \emptyset$$. This is in fact a map, since I can't find any element in $$\emptyset$$ (domain) which contradicts the definition; the condition holds vacuously. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812309063186,
"lm_q1q2_score": 0.8131826190742959,
"lm_q2_score": 0.8397339736884712,
"openwebmath_perplexity": 286.3293608402492,
"openwebmath_score": 0.886424720287323,
"tags": null,
"url": "https://math.stackexchange.com/questions/1948742/is-the-empty-set-homeomorphic-to-itself"
} |
Apply $(1)$ to the product of $$(1+x)^m=\sum_{k=0}^\infty\binom{m}{k}x^k\tag{2}$$ and $$(1+x)^n=\sum_{k=0}^\infty\binom{n}{k}x^k\tag{3}$$ which is $$(1+x)^{m+n}=\sum_{k=0}^\infty\binom{m+n}{k}x^k\tag{4}$$ I extended the indices in the sums to $\infty$ since for $k>n$, $\binom{n}{k}=0$.
For the product of $(2)$ and $(3)$, we get $$(1+x)^m(1+x)^n=\sum_{k=0}^\infty\left(\sum_{j=0}^k \binom{m}{j}\binom{n}{k-j}\right)x^k\tag{5}$$ Comparing the coefficients of $x^k$ in $(4)$ and $(5)$ yields $$\binom{m+n}{k}=\sum_{j=0}^k \binom{m}{j}\binom{n}{k-j}\tag{6}$$ as desired.
-
that was most helpful, thank you! – FRD Aug 29 '12 at 21:44
For understanding the first part, see this question . – Yam Marcovic Jul 22 '13 at 12:15
@YamMarcovic: that formula is known as the Cauchy Product. – robjohn Jul 22 '13 at 13:23
@robjohn Cool, thanks. – Yam Marcovic Jul 22 '13 at 19:02 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9867771770811146,
"lm_q1q2_score": 0.8223056856272843,
"lm_q2_score": 0.8333245891029456,
"openwebmath_perplexity": 123.95310936973571,
"openwebmath_score": 0.9469272494316101,
"tags": null,
"url": "http://math.stackexchange.com/questions/188252/spivaks-calculus-exercise-4-a-of-2nd-chapter"
} |
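Identity $(6)$ in the row above (the Vandermonde identity) can be spot-checked numerically; a small sketch in plain Python (not part of the original answer):

```python
from math import comb  # comb(n, k) returns 0 when k > n, matching the convention used above

def vandermonde_lhs(m, n, k):
    return comb(m + n, k)

def vandermonde_rhs(m, n, k):
    return sum(comb(m, j) * comb(n, k - j) for j in range(k + 1))

# exhaustive check over a small grid of m, n, k
for m in range(8):
    for n in range(8):
        for k in range(m + n + 1):
            assert vandermonde_lhs(m, n, k) == vandermonde_rhs(m, n, k)
print("identity (6) verified for all m, n < 8")
```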
roslaunch, raspberrypi
NODES
/
tracker_command (pi_tracker/tracker_command.py)
tracker_base_controller (pi_tracker/tracker_base_controller.py)
tracker_joint_controller (pi_tracker/tracker_joint_controller.py)
robotis_joint_controller (pi_tracker/robotis_joint_controller.py)
ROS_MASTER_URI=http://nutan-desktop:11311/
core service [/rosout] found
process[tracker_command-1]: started with pid [6185]
process[tracker_base_controller-2]: started with pid [6186]
process[tracker_joint_controller-3]: started with pid [6187]
process[robotis_joint_controller-4]: started with pid [6188]
Traceback (most recent call last):
File "/home/nutan/pi/pi_tracker/bin/robotis_joint_controller.py", line 103, in
jc = joint_controller()
File "/home/nutan/pi/pi_tracker/bin/robotis_joint_controller.py", line 58, in init
usb2dynamixel = USB2Dynamixel_Device(self.port)
File "/home/nutan/pi/robotis/src/robotis/lib_robotis.py", line 54, in init
self._open_serial( baudrate )
File "/home/nutan/pi/robotis/src/robotis/lib_robotis.py", line 86, in _open_serial
raise RuntimeError('lib_robotis: Serial port not found!\n')
RuntimeError: lib_robotis: Serial port not found!
[INFO] 1300435773.701176: Shutting down joint command controller node...
[INFO] 1300435773.706859: Initializing Tracker Command Node...
[INFO] 1300435773.712698: Initializing Base Controller Node...
[INFO] 1300435773.726363: Initializing Joint Controller Node... | {
"domain": "robotics.stackexchange",
"id": 5117,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "roslaunch, raspberrypi",
"url": null
} |
python, python-3.x, file-system, serialization
I don't understand what this does.
It appears to be a no-op.
There's a bunch of seeking here.
It's not obvious to me why that's more convenient
than repeatedly stat'ing the file to see if things changed.
def finished(self):
This is nice enough.
Consider using a prefix with such a boolean predicate: is_finished()
documentation
Tell us about zombies and their use cases.
Tell us about racy behavior,
such as when we serialize /var/log/messages and syslogd is appending to it.
tests
Supply unit tests that exercise the code.
algorithm
Consider relying on stat for current size, rather than seek.
(Calling stat obviously doesn't change your current location.)
You have quite a few ifs that can raise.
It seems "hard" to trigger some of those.
The caller still has to expect that a Bad Thing can happen at any point,
lightning might strike, the filesystem could explode.
Rather than doing checks and raising your own error,
consider letting some FS errors just naturally propagate up,
and the caller can figure out which ones are important for him to catch.
Or, perhaps you could wrap such an error handler around your layer
so several trys become just one.
One is much easier to test! | {
"domain": "codereview.stackexchange",
"id": 35481,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, file-system, serialization",
"url": null
} |
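The stat-based suggestion in the review above can be sketched as follows; the helper name and the polling pattern are mine, not from the reviewed code:

```python
import os
import tempfile

def file_grew(path, last_size):
    """Check growth via os.stat, without touching any open file's seek position."""
    size = os.stat(path).st_size
    return size > last_size, size

# demonstrate with a throwaway file (binary mode so byte counts are exact)
with tempfile.NamedTemporaryFile(delete=False, suffix=".log") as f:
    path = f.name
    f.write(b"hello\n")

grew, size = file_grew(path, 0)
print(grew, size)   # → True 6
with open(path, "ab") as f:
    f.write(b"more\n")
grew, size = file_grew(path, size)
print(grew, size)   # → True 11
os.remove(path)
```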
php, html, sql, mysql, pdo
}
?>
Then I apply this SQL Query to INSERT the data:
INSERT INTO `posts` (`post`, `ref1`, `ref1id`, `ref2`, `ref2id`, `ref3`, `ref3id`) VALUES (:post, :ref1, :ref1id, :ref2, :ref2id, :ref3, :ref3id)
And this one to UPDATE the data:
UPDATE `posts` SET `post` = :post, `ref1` = :ref1, `ref1id` = :ref1id, `ref2` = :ref2, `ref2id` = :ref2id, `ref3` = :ref3, `ref3id` = :ref3id
in/of this MYSQL Table: http://sqlfiddle.com/#!9/7d22d8/1/0
The idea is that the user can make a private post for 3 people only: he writes their names in ref[x] and their passcodes in ref[x]id. Consider the following table:
CREATE TABLE IF NOT EXISTS `posts` (
`id` int(6) unsigned NOT NULL,
`post` varchar(24) NOT NULL,
PRIMARY KEY (`id`)
) DEFAULT CHARSET=utf8;
We've stripped out the ref columns. Now we add a new table:
CREATE TABLE IF NOT EXISTS `post_refs` (
`posts_id` int(6) unsigned NOT NULL,
`ref` varchar(24) NOT NULL,
`ref_id` varchar(24) NOT NULL,
PRIMARY KEY (`posts_id`, `ref`)
) DEFAULT CHARSET=utf8; | {
"domain": "codereview.stackexchange",
"id": 29768,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, html, sql, mysql, pdo",
"url": null
} |
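The normalized schema above can be exercised end to end; a sketch using Python's stdlib sqlite3 (the MySQL DDL is approximated for SQLite, and the names/passcodes are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE posts (
    id   INTEGER PRIMARY KEY,
    post VARCHAR(24) NOT NULL
);
CREATE TABLE post_refs (
    posts_id INTEGER     NOT NULL,
    ref      VARCHAR(24) NOT NULL,
    ref_id   VARCHAR(24) NOT NULL,
    PRIMARY KEY (posts_id, ref)
);
""")

# one post, three recipients -- one row per recipient instead of ref1..ref3 columns
cur.execute("INSERT INTO posts (id, post) VALUES (1, 'secret note')")
cur.executemany(
    "INSERT INTO post_refs (posts_id, ref, ref_id) VALUES (?, ?, ?)",
    [(1, "alice", "pc-1"), (1, "bob", "pc-2"), (1, "carol", "pc-3")],
)

rows = cur.execute(
    "SELECT ref FROM post_refs WHERE posts_id = 1 ORDER BY ref"
).fetchall()
print([r[0] for r in rows])  # → ['alice', 'bob', 'carol']
conn.close()
```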
This is not the answer of your problem Mr. Ron Gordon but it only the other way to prove the integral. You may also find the other ways, really beautiful methods, to prove the integral here, my OP. If you don't mind, I would like to present an alternative approach that makes use of the fact that $$\int^\infty_0\frac{x^{p-1}}{1+x}dx=\frac{\pi}{\sin{p\pi}}$$ Simply factorise the denominator and decompose the integrand into partial fractions. \begin{align} \int^\infty_0\frac{x^\alpha}{x^2+2(\cos{\pi\beta})x+1}dx &=\int^\infty_0\frac{x^\alpha}{(x+e^{i\pi\beta})(x+e^{-i\pi\beta})}dx\\ &=\frac{1}{-e^{i\pi\beta}+e^{-i\pi\beta}}\int^\infty_0\frac{x^\alpha}{e^{i\pi\beta}+x}dx+\frac{1}{-e^{-i\pi\beta}+e^{i\pi\beta}}\int^\infty_0\frac{x^\alpha}{e^{-i\pi\beta}+x}dx\\ &=\frac{1}{-2i\sin{\pi\beta}}\int^\infty_0\frac{(e^{i\pi\beta}u)^\alpha}{1+u}du+\frac{1}{2i\sin{\pi\beta}}\int^\infty_0\frac{(e^{-i\pi\beta}u)^\alpha}{1+u}du\\ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9799765569561563,
"lm_q1q2_score": 0.8269974208809936,
"lm_q2_score": 0.8438951064805861,
"openwebmath_perplexity": 983.3417609269127,
"openwebmath_score": 0.9992042183876038,
"tags": null,
"url": "https://math.stackexchange.com/questions/268789/symmetry-of-function-defined-by-integral/268960"
} |
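For completeness, the calculation quoted above can be finished in one more step (my continuation, assuming $-1<\alpha<0$ so the quoted formula applies with $p=\alpha+1$, and using $\sin\pi(\alpha+1)=-\sin\pi\alpha$):

```latex
\begin{aligned}
&=\frac{-e^{i\pi\alpha\beta}+e^{-i\pi\alpha\beta}}{2i\sin\pi\beta}
   \int^\infty_0\frac{u^\alpha}{1+u}\,du
 =\frac{-\sin\pi\alpha\beta}{\sin\pi\beta}\cdot\frac{\pi}{\sin\pi(\alpha+1)}\\
&=\frac{\pi\sin\pi\alpha\beta}{\sin\pi\alpha\,\sin\pi\beta}.
\end{aligned}
```

As a sanity check, setting $\beta=\tfrac12$ gives $\int^\infty_0 x^\alpha/(x^2+1)\,dx=\pi/(2\cos(\pi\alpha/2))$, the known result.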
java, cryptography
// System.out.println(Arrays.toString(encryption));
long decrypt_start_time = System.currentTimeMillis();
System.out.println(StringEncrypter.decrypt(encryption));
long decrypt_end_time = System.currentTimeMillis();
System.out.println("Decryption speed: "+(decrypt_end_time - decrypt_start_time)+ " Milliseconds");
}
} | {
"domain": "codereview.stackexchange",
"id": 21555,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, cryptography",
"url": null
} |
c++, beginner, c++11, hash-map, homework
class CovidDB {
MyHashTable<std::vector<DataEntry>> HashTable;
…
};
Uninitialized value
If you use the default constructor of CovidDB, the member variable HashTable is initialized, but size is not.
However, instead of initializing size, I would just remove this member variable: you can just use HashTable.size() to get the size of the table. This avoids you having to keep your own size in sync.
Unnecessary use of final and inline
The keyword inline does nothing for member functions defined in the class declaration, those are already implicitly inline.
While marking a class as final might be useful in some cases, why do it here? There is no inheritance being used anywhere. And what if you really do want to inherit from CovidDB at some point? Would that really be a problem?
Avoid calling std::exit() from class member functions
By calling std::exit() from class member functions, you take away the possibility of the caller recovering from the error. Instead, consider throwing an exception (preferably one of the standard exceptions or a custom class derived from them). If the exceptions are not caught, this will still cause the program to exit with a non-zero exit code, but now it allows the caller to try-catch and deal with the problem.
Missing error checking
You are doing a little bit of error checking on input/output, but there is a lot more that can go wrong than just opening the file: there could be a read or write error at any point for example.
When reading a file, when you think you have reached the end, check if file.eof() == true. If not, then an error occurred.
When writing a file, check that file.good() == true after calling file.close(). If not, an error occurred writing any part of it.
"domain": "codereview.stackexchange",
"id": 44705,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, c++11, hash-map, homework",
"url": null
} |
1.0,0.5190217391304348,0.13369565217391305,-0.14728260869565218,-0.31521739130434784,-0.36141304347826086,-0.27717391304347827,-0.24945652173913044,-0.1608695652173913,-0.002717391304347826,0.23369565217391305,0.14402173913043478,0.06304347826086956,-5.434782608695652E-4,-0.03804347826086957,-0.04076086956521739, 1.0,0.5189630085503281,-0.34896021596534504,-0.8000624914835336,-0.5043545150938301,0.16813498364430499,0.5761216033068776,0.41692503347430215,-0.06371622277688614,-0.38966662981297634,-0.3246273969517782,-0.031970253360281406,0.16771278110458265,0.13993946271399282,0.012475144157765343,-0.036914291507522644. Autocorrelation, also known as serial correlation, is the correlation of a signal with a delayed copy of itself as a function of delay. Property 3 (Bartlett): In large samples, if a time series of size n is purely random then for all k. Example 3: Determine whether the ACF at lag 7 is significant for the data from Example 2. | {
"domain": "konwakai.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9693242036063215,
"lm_q1q2_score": 0.8098544221703694,
"lm_q2_score": 0.8354835452961425,
"openwebmath_perplexity": 966.9755097158313,
"openwebmath_score": 0.786353588104248,
"tags": null,
"url": "http://konwakai.com/c1xr1e/how-to-calculate-autocorrelation-447e26"
} |
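The sample ACF and Bartlett's large-sample significance band from Property 3 above can be sketched in plain Python (illustrative white-noise data, not the series quoted in the row):

```python
import random

def sample_acf(x, max_lag):
    """Biased sample ACF: r_k = sum_t (x_t - m)(x_{t+k} - m) / sum_t (x_t - m)^2."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return [sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / c0
            for k in range(max_lag + 1)]

random.seed(0)
series = [random.gauss(0, 1) for _ in range(200)]
r = sample_acf(series, 10)

n = len(series)
bound = 2 / n ** 0.5   # Bartlett's large-sample 95% band for a purely random series
print(r[0])            # → 1.0 (lag 0 is always 1 by construction)
print(round(bound, 4)) # → 0.1414
```

Lags whose |r_k| exceed the band are the candidates for significant autocorrelation.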
ros, usb-cam, camera-drivers, camera
Title: usb_cam: VIDIOC_S_FMT error 22
When using usb_cam package, I get the following error message:
VIDIOC_S_FMT error 22, Invalid argument
Originally posted by fergs on ROS Answers with karma: 13902 on 2011-02-16
Post score: 4
Original comments
Comment by mmwise on 2011-02-17:
when you see an answer you like, mark it as an accepted answer
I've seen this error on a lot of my student's computers -- try setting the "pixel_format" parameter to "yuyv"-- this fixed it on all computers I've seen.
Originally posted by tfoote with karma: 58457 on 2011-02-16
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by tfoote on 2011-02-16:
reposting so you can accept the answer. Working around bug http://askbot.org/en/question/293/how-can-an-admin-answer-own-question-and-accept-it
Comment by autonomy on 2013-01-16:
Did not work for me, but that is because my QuickCam outputs in the YUV420 format, which usb_cam does not support. So if anyone else is trying to use a Logitech QuickCam with ROS, you can get usb_cam working by implementing a YUV420 to RGB conversion (not trivial) and adding a YUV420 parameter.
Comment by blubbi321 on 2018-08-06:
For my Microsoft LifeCam VX-3000 I actually had to set the pixel format BACK to "mjpeg". After that the node starts to spam "[swscaler @ 0x15a5e00] deprecated pixel format used, make sure you did set range correctly" but the output works as expected .. | {
"domain": "robotics.stackexchange",
"id": 4772,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, usb-cam, camera-drivers, camera",
"url": null
} |
Test the null hypothesis that the steel beams are equal in strength to the beams made of the two more expensive alloys. Turn the figure display off and return the ANOVA results in a cell array.
`[p,tbl] = anova1(strength,alloy,'off')`
```p = 1.5264e-04 ```
```
tbl=4×6 cell array
  Columns 1 through 5
    {'Source'}    {'SS'       }    {'df'}    {'MS'       }    {'F'        }
    {'Groups'}    {[184.8000] }    {[ 2]}    {[ 92.4000] }    {[ 15.4000] }
    {'Error' }    {[102.0000] }    {[17]}    {[  6.0000] }    {0x0 double }
    {'Total' }    {[286.8000] }    {[19]}    {0x0 double }    {0x0 double }
  Column 6
    {'Prob>F'    }
    {[1.5264e-04]}
    {0x0 double  }
    {0x0 double  }
```
The total degrees of freedom is total number of observations minus one, which is $20-1=19$. The between-groups degrees of freedom is number of groups minus one, which is $3-1=2$. The within-groups degrees of freedom is total degrees of freedom minus the between groups degrees of freedom, which is $19-2=17$.
`MS` is the mean squared error, which is `SS/df` for each source of variation. The F-statistic is the ratio of the mean squared errors. The p-value is the probability that, under the null hypothesis, the test statistic takes a value greater than or equal to the observed value. The p-value of 1.5264e-04 suggests rejection of the null hypothesis.
You can retrieve the values in the ANOVA table by indexing into the cell array. Save the F-statistic value and the p-value in the new variables `Fstat` and `pvalue`. | {
"domain": "mathworks.cn",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9888419671077918,
"lm_q1q2_score": 0.8303641864572137,
"lm_q2_score": 0.8397339656668286,
"openwebmath_perplexity": 779.1603126949233,
"openwebmath_score": 0.7590203285217285,
"tags": null,
"url": "https://ww2.mathworks.cn/help/stats/anova1.html"
} |
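The degree-of-freedom and F-statistic arithmetic described above can be reproduced outside MATLAB; a plain-Python sketch using the sums of squares reported in the table (values taken from the row, not recomputed from raw data):

```python
# Recompute the ANOVA-table quantities from the reported sums of squares.
n_obs, n_groups = 20, 3
ss_groups, ss_error = 184.8, 102.0

df_total = n_obs - 1              # 20 - 1 = 19
df_groups = n_groups - 1          # 3 - 1 = 2
df_error = df_total - df_groups   # 19 - 2 = 17

ms_groups = ss_groups / df_groups  # SS/df for the between-groups source
ms_error = ss_error / df_error     # SS/df for the within-groups source
F = ms_groups / ms_error           # ratio of mean squared errors

print(df_groups, df_error, round(ms_groups, 4), round(ms_error, 4), round(F, 4))
# → 2 17 92.4 6.0 15.4
```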
python, python-3.x, game, tic-tac-toe
for i in range(0, 3):
if board[i] == board[i + 3] == board[i + 6]:
if board[i] != None: return board[i]
if board[0] == board[4] == board[8] or board[2] == board[4] == board[6]:
if board[4] != None: return board[4]
return None
def computer_turn(self):
position = None
def get_line(cell):
if cell <= 3: return 1
if cell <= 6: return 2
return 3
if self.turn == 1:
self.turn = 2
oponent_choice = self.board.index("X")
if oponent_choice + 1 in [1, 3, 7, 9]:
position = 5 - 1
else:
position = choice([1, 3, 7, 9]) - 1
elif self.turn == 2:
self.turn = 3
for i in [1, 3, 7, 9]:
if self.board[i - 1] == "X":
if self.board[5 - 1] == "X":
position = 10 - i - 1
if self.board[position] == "O":
position = choice([item - 1 for item in [1, 3, 7, 9] if item not in [position + 1, i]])
break
for j in [1, 3, 7, 9]:
if i != j and self.board[j - 1] == "X":
position = i + (j - i) // 2 - 1
if self.board[position] == "O":
position = choice([2, 4, 6, 8]) - 1
break | {
"domain": "codereview.stackexchange",
"id": 28378,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, game, tic-tac-toe",
"url": null
} |
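The win check excerpted at the top of the row covers columns and the two diagonals; a full check also needs rows. A standalone sketch (names and board layout are mine, not from the post):

```python
# index layout:  0 1 2 / 3 4 5 / 6 7 8
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if some line is complete, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

board = ["X", "O", None,
         None, "X", "O",
         None, None, "X"]
print(winner(board))  # → X (main diagonal)
```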
ros, ros2, xacro, robot-description, robot-state-publisher
But, after saving these three files from https://github.com/joshnewans/urdf_example/tree/main/description in the same folder, I could use xacro to get a normal URDF from the xacro URDF; it was parsed correctly too:
ljaniec@ljaniec-PC:~/Desktop$ xacro example_robot.urdf.xacro > test_robot.urdf
ljaniec@ljaniec-PC:~/Desktop$ check_urdf test_robot.urdf
robot name is: robot
---------- Successfully Parsed XML ---------------
root Link: world has 1 child(ren)
child(1): base_link
child(1): slider_link
child(1): arm_link
child(1): camera_link
child(1): camera_link_optical
The different behavior in the terminal is surprising for sure.
Originally posted by ljaniec with karma: 3064 on 2022-08-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by manish.nayak on 2022-08-03:
I see. It indeed seems like a very old issue which has been fixed and even I believe that step 2 which converts to XML does the trick. What I don't understand is that I have been following the tutorials where they do the same. Also, this is the command and output where the exact same input works, albeit for a different urdf:-
ros2 run robot_state_publisher robot_state_publisher --ros-args -p robot_description:="$(xacro ros_articubot_tutorial/my_first_cpp_pkg/urdf/example_robot.urdf)"
Parsing robot urdf xml string.
Link base had 1 children
Link l3 had 1 children
Link l2 had 1 children
Link l1 had 1 children
Link grip had 0 children
Link camera had 0 children
[INFO] [1659520565.730462926] [robot_state_publisher]: got segment base ..... | {
"domain": "robotics.stackexchange",
"id": 37898,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros2, xacro, robot-description, robot-state-publisher",
"url": null
} |
kinematics, acceleration, velocity, differentiation, calculus
Title: Expressing acceleration in terms of velocity and derivative of velocity with respect to position we know that
$$a = \dfrac{dv}{dt}$$
dividing numerator and denominator by $dx$, we get $$a=v\dfrac{dv}{dx}$$ provided that $dx$ is not zero, i.e. the instantaneous velocity is not zero.
When I questioned my teacher that this formula implies the instantaneous velocity should not be zero, i.e. there should be no turnaround points, and asked why we still use it to derive the equations for the position and velocity of a particle performing SHM, I got no satisfactory answer.
What is wrong in my argument, and what are the conditions under which the above equation is not true? What is wrong is assuming that $dv/dx$ is finite when $v=0$. Try this for motion with uniform acceleration, for example an object thrown upwards with some initial velocity. At the top turning point the expression for $dv/dx$ tends to infinity.
"domain": "physics.stackexchange",
"id": 78509,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "kinematics, acceleration, velocity, differentiation, calculus",
"url": null
} |
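A worked SHM check (my sketch, not part of the quoted exchange) shows why the formula still produces the right answer: away from the turning points $v\neq 0$, and the resulting expression extends to $x=\pm A$ by continuity of the acceleration.

```latex
\begin{aligned}
v^2 &= \omega^2\!\left(A^2 - x^2\right) && \text{(SHM energy relation)}\\
2v\,\frac{dv}{dx} &= -2\omega^2 x       && \text{(differentiate with respect to } x\text{)}\\
a = v\,\frac{dv}{dx} &= -\omega^2 x     && \text{valid wherever } v\neq 0,
\end{aligned}
```

and $a=-\omega^2 x$ then holds at the turning points $x=\pm A$ as well, by continuity; there $v\,dv/dx$ is the indeterminate product $0\cdot\infty$, not a contradiction.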
# The sum of two independent variables following the Binomial Distributions
I would have thought the answer to the following was another Binomial distribution, but I can't seem to get Mathematica to output that fact:
PDF[TransformedDistribution[x1 + x2, {x1, x2} \[Distributed] BinomialDistribution[n, p]], y]
Your syntax is slighlty off. The way you wrote it, {x1, x2} \[Distributed] BinomialDistribution[n, p]] indicates that the vector variable {x1, x2} follows the multivariate distribution BinomialDistribution[n, p], which of course does not work.
Instead, you need to indicate the distribution for each variable:
PDF[TransformedDistribution[
x1 + x2, {x1 \[Distributed] BinomialDistribution[n, p],
x2 \[Distributed] BinomialDistribution[n, p]}], y]
This is shown in the second syntax example in the documentation for TransformedDistribution.
Bob Hanlon also pointed out that a more readable result can be obtained by evaluating the TransformedDistribution itself:
TransformedDistribution[x1 + x2,
{x1 \[Distributed] BinomialDistribution[n, p],
x2 \[Distributed] BinomialDistribution[n, p]}
]
(* Out: BinomialDistribution[2 n, p] *) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9458012655937034,
"lm_q1q2_score": 0.8130710666178536,
"lm_q2_score": 0.8596637541053281,
"openwebmath_perplexity": 4021.20492733541,
"openwebmath_score": 0.46731293201446533,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/178389/the-sum-of-two-independent-variables-following-the-binomial-distributions"
} |
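The closed form `BinomialDistribution[2 n, p]` can be double-checked numerically by convolving the two pmfs; a plain-Python sketch (not from the original answer):

```python
from math import comb

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 5, 0.3
for y in range(2 * n + 1):
    # convolution: P(X1 + X2 = y) = sum_k P(X1 = k) P(X2 = y - k)
    conv = sum(binom_pmf(n, p, k) * binom_pmf(n, p, y - k)
               for k in range(max(0, y - n), min(n, y) + 1))
    assert abs(conv - binom_pmf(2 * n, p, y)) < 1e-12
print("X1 + X2 ~ Binomial(2n, p) confirmed numerically")
```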
virology, coronavirus
Title: Triangulation number of the SARS CoV-19 virus Is the triangulation number of the SARS CoV-19 virus capsid, in the sense of the Caspar-Klug theory, known? In case the Caspar-Klug theory does not apply to it, is it known what is its tiling, in the sense of the viral tiling theory of Twarock (and coauthors)?
I am a mathematician, trying to learn the geometric and group-theoretic aspects of virus structures. Coronaviruses, like many viruses with a lipid membrane, are pleomorphic. The individual particles don't have a consistent shape. Here is a cryo-electron micrograph of SARS-CoV-1:
It might not be obvious, but each particle seen here is unique, they are not different rotations of identical copies. Since there is no consistent symmetry, there can be no triangulation number.
The image is from the following reference: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1563832/ | {
"domain": "biology.stackexchange",
"id": 11671,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "virology, coronavirus",
"url": null
} |
c, database, circular-list, embedded
/****************************************************************
* Function Name : BucketGetReadData
* Description : Reads the key and value for that feed/slot.
* Returns : false error, true on success.
* Params @key: key to be populated(static).
@value: value read from the bucket.
****************************************************************/
static bool BucketGetReadData(char *key, char *value){
uint8_t slotIdx;
bool status = true;
//Check if the tail is pointing to the end of the bucket or beyond, wrap around to start of bucket to continue reading
BucketTailWrapAroundToStart();
//Read the slot index for the feed
slotIdx = *(uint8_t*)cBucketBufTail; //this is an int8_t type, if greater, will lead to undefined behavior.
*cBucketBufTail++ = 0;
if(slotIdx > BUCKET_MAX_FEEDS){
printf("[BucketGetReadData], Error, Slot[%u] index is out of bounds\r\n", slotIdx);
return(false);
}else{
printf("[BucketGetReadData], Slot[%d] = %p\r\n", slotIdx, (void *)registeredFeed[slotIdx]->key);
}
//Copy the key for the corresponding slot
strncpy(key, registeredFeed[slotIdx]->key, strlen(registeredFeed[slotIdx]->key));
//Read data based on type
switch(registeredFeed[slotIdx]->type){
case UINT16:{ | {
"domain": "codereview.stackexchange",
"id": 40955,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, database, circular-list, embedded",
"url": null
} |
quantum-mechanics, homework-and-exercises, angular-momentum, quantum-spin, hilbert-space
What is wrong? Are you sure that's what the book is asking you to find?
$\hbar/2$ is the eigenvalue of the $S_{x}$ operator corresponding to spin up, but it is not part of the state vector. If the question is really asking you to express the $\mid S_{x};+\rangle$ ket in the $S_{z}$ basis, then you're nearly correct, just a minor sign error:
$$\mid S_{x};+\rangle = \frac{1}{\sqrt{2}}\mid+\rangle + \frac{1}{\sqrt{2}}\mid-\rangle$$ | {
"domain": "physics.stackexchange",
"id": 22789,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, homework-and-exercises, angular-momentum, quantum-spin, hilbert-space",
"url": null
} |
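The corrected state can be verified as an eigenvector of $S_x$ in the $S_z$ basis; a plain-Python sketch (working in units where $\hbar=1$; the matrix representation is the standard one, not quoted from the answer):

```python
from math import sqrt

hbar = 1.0
# S_x in the S_z basis: (hbar/2) * [[0, 1], [1, 0]]
Sx = [[0.0, hbar / 2], [hbar / 2, 0.0]]

# candidate state |S_x;+> = (|+> + |->)/sqrt(2)
ket = [1 / sqrt(2), 1 / sqrt(2)]

# apply S_x to the ket
Sx_ket = [sum(Sx[i][j] * ket[j] for j in range(2)) for i in range(2)]

eigenvalue = hbar / 2
assert all(abs(Sx_ket[i] - eigenvalue * ket[i]) < 1e-12 for i in range(2))
print("S_x |S_x;+> = (hbar/2) |S_x;+> verified")
```

Repeating this with a minus sign on the second component shows that state is instead the eigenvector with eigenvalue $-\hbar/2$.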
dark-matter, beyond-the-standard-model, elementary-particles
Title: Classification of elementary particles that have been proposed to explain dark matter I'd like to write a paragraph about elementary particles that have been proposed to explain dark matter, but I don't know exactly how to classify these particles or arrange them:
Scaler field
-- Standard model
Axions
Neutrinos
Sterile Neutrinos
Dark photons
-- supersymmetric model (SUSY)
-- Weakly Interacting Massive Particles (WIMPs)
Neutralino
Higgsino You will need to pick some property that all models share, and then sort by that. There is no universal rule, and you can choose any property convenient to your situation. Some examples:
The year in which the model was first proposed. Good for historical reviews.
Alphabetical by name. You can never go wrong with that. Or by the third letter of the second author in the first paper, because why not.
Coupling strength under some very specific condition to some very specific target. I'd not recommended that... most have couplings that others don't have so it will be hard to find a common ground.
By popularity, i.e. WIMPs, Axions, then the rest.
My suggestion: sort them by mass. That makes much sense because mass is a simple scalar number, and because detection techniques greatly vary depending of mass. At the sub-eV scale you're looking for waves, above Planck scale for composite partcles, and even heavier and one has to resort to astrophysical measurements. You will find reviews that are doing just that, notably from the recent SNOWMASS process. | {
"domain": "physics.stackexchange",
"id": 90458,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "dark-matter, beyond-the-standard-model, elementary-particles",
"url": null
} |
newtonian-mechanics, classical-mechanics, angular-momentum, rotational-dynamics, precession
Title: Can precession happen with no external forces? I wanted to ask the following question:
Can a body that experiences no forces whatsoever precess?
Let's say I have a body in space - no gravity or anything - can I make it precess without applying any forces or torques? If so, how? Under what conditions? What would its movement look like? Could you give an example of something like this? Allow me to rephrase your question; it seems to me that the following formulation is closer to the case you are thinking about:
In the absence of any external force, can a rigid, axially symmetric body move in a way so that its motion is not axially symmetric?
(Of course I phrased the question that way specifically for the answer to be 'yes'.)
A spinning axially symmetric object can have a sustained wobble. The nature of this wobble is that the symmetry axis sweeps out a cone. This wobble can just as well occur on top of a precessing motion, in which case the wobble is called 'nutation'. When a gyroscope wheel is in a combined precession and nutation motion the amplitude of the nutation motion is smaller than the amplitude of the precession motion, and the frequency of the nutation is higher (in most cases far higher) than the frequency of the precession motion.
For the wobble in the no-external-force case:
One might have the expectation that when you throw some object, giving it a spin, then at the instant that you let go the object settles to a non-wobbling spin. But in fact an object, when thrown with spin, can and will have a sustained wobble if that is how it happened to be thrown.
There is a story known as 'Feynmans wobbling plate'. Feynman was sitting in a cafeteria of the University where he had a teaching position, and as Feynman recounted:"some guy, fooling around, throws a plate in the air. As the plate went up in the air I saw it wobble, and I noticed the red medallion of Cornell on the plate going around. It was pretty obvious to me that the medallion went around faster than the wobbling." | {
"domain": "physics.stackexchange",
"id": 47068,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, classical-mechanics, angular-momentum, rotational-dynamics, precession",
"url": null
} |
quantum-mechanics, superfluidity
Title: What causes liquid helium to climb walls? It is a phenomenon which can be observed if your search on the web. Apparently liquid Helium can crawl up through walls. Does every superfluid do this crawling action or is it just special for liquid helium? How is the motion of liquid Helium described and modeled in quantum mechanics? When helium, which turns liquid at about 4.2 Kelvin, is cooled further to below approximately 2 Kelvin, it undergoes a phase change. This phase of helium, referred to as Helium II, is a superfluid. What this means is that the liquid's viscosity becomes nearly zero. At the same time, its thermal conductivity becomes infinite.
Because the viscosity is almost zero, the fluid flows very easily as a result of the smallest pressure or change in temperature. The response is so strong that even the smallest forces will help the light-weight liquid climb against the force of gravity.
If you have liquid helium inside and outside your system, the liquid inside will flow till it matches the level of the liquid outside and the temperature | {
"domain": "physics.stackexchange",
"id": 38388,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, superfluidity",
"url": null
} |
electronic-configuration, transition-metals, organometallic-compounds
Title: Why does mercury in mercuric acetate have a lone pair? The configuration of mercury ends with $6s^{2}$. In mercuric acetate, mercury is in the +2 oxidation state, so does it still have a lone pair? TLDR:
Mercury in its (+2) salts does not have lone pairs readily accessible for covalent bonding. However, its 5d electrons do affect its properties, resulting in a relatively high polarisability.
The long story:
Mercury is the last 5d element and has electron configuration $[\ce{Xe}] 4f^{14} 5d^{10} 6s^2$. The $6s$ electrons are the outermost electrons and readily available for covalent bonding. $4f$ electrons are virtually inert since they are very deep. $5d$ electrons belong to the previous shell, but are the outermost of them, and once mercury loses its $6s$ electrons, the $5d$ electrons become the outermost and define its properties. They are still too strongly bound to the nucleus to participate in covalent bonds under ordinary conditions ($\ce{HgF4}$ has only been observed spectroscopically in an inert gas matrix), but they are far enough out to wiggle around and participate in non-covalent interactions, such as van der Waals interactions. $6p$ orbitals are also available for covalent bonding, though the extent of their use is often debatable and/or unobvious.
"domain": "chemistry.stackexchange",
"id": 6122,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electronic-configuration, transition-metals, organometallic-compounds",
"url": null
} |
Hello Matty R!
No, that doesn't mean anything, does it?
Hint: what will Bea's age be when Claire is as old as Dawn is now?
3. Feb 27, 2010
### HallsofIvy
Staff Emeritus
"When Claire is as old as Dawn is now, Bea will be twice as old as Ann currently is.
Claire is older than Bea."
Claire will be as old as Dawn is now in d- c years. Bea's age then will be b+ (d- c) and that will be twice Ann's current age: b+ d- c= 2a or 2a- b+ c- d= 0.
You have four equations:
The sum of their ages is exactly 100 years.
a+ b+ c+ d= 100
The sum of Ann's and Dawn's ages is the same as the sum of Bea's and Claire's.
a- b- c+ d= 0
The difference between the ages of Claire and Bea is twice Ann's age.
2a+ b- c= 0
("Claire is older than Bea" tells you that the difference between the ages of Claire and Bea is c- b, not b- c).
When Claire is as old as Dawn is now, Bea will be twice as old as Ann currently is.
2a- b+ c- d= 0
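The four equations above form a linear system that is easy to sanity-check numerically (my own illustrative check, not part of the original thread):

```python
import numpy as np

# Rows are the four equations above, unknowns ordered (a, b, c, d).
A = np.array([
    [1,  1,  1,  1],   # a + b + c + d = 100
    [1, -1, -1,  1],   # a - b - c + d = 0
    [2,  1, -1,  0],   # 2a + b - c = 0
    [2, -1,  1, -1],   # 2a - b + c - d = 0
], dtype=float)
rhs = np.array([100, 0, 0, 0], dtype=float)

ages = np.linalg.solve(A, rhs)
# ages come out as Ann = 10, Bea = 15, Claire = 35, Dawn = 40
```

One can verify these against the word problem directly: in 5 years Claire (35) will be as old as Dawn is now (40), and Bea will then be 20, twice Ann's current age of 10.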
4. Feb 28, 2010
### Matty R
Thanks for the replies.
I'd never have got that. I completely see how to get it now, but I just couldn't understand it before. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9724147153749276,
"lm_q1q2_score": 0.8082157262517872,
"lm_q2_score": 0.8311430436757312,
"openwebmath_perplexity": 1712.8431602420803,
"openwebmath_score": 0.4360642433166504,
"tags": null,
"url": "https://www.physicsforums.com/threads/gaussian-elimination-translating-text-into-equations.382148/"
} |
involved in the optimization, it was challenging to converge to a. In this post, we are going to share with you, an implementation of nonlinear regression using ANFIS in MATLAB. Examples: There are examples included with TomSym for all areas of optimization. Interactively define the variables, objective function, and constraints to reflect the mathematical statement of the nonlinear program. Write Objective Function. PENLAB is an open source software package for nonlinear optimization, linear and nonlinear semidefinite optimization and any combination of these. Nonlinear Optimization Solve constrained or unconstrained nonlinear problems with one or more objectives, in serial or parallel To set up a nonlinear optimization problem for solution, first decide between a problem-based approach and solver-based approach. A general purpose solver for mixed-integer nonlinear optimization problems. Local minimum found that satisfies the constraints. Developer-oriented modeling language that facilitates custom applications. However, both problems can be approached by using least squares minimization, in the Optimization Toolbox. m for the objective function. The original code has been extended by a density filter, and a considerable improvement in efficiency has been achieved, mainly by preallocating arrays and vectorizing loops. Learn more about nonlinear optimization Optimization Toolbox, Global Optimization Toolbox. A differential and algebraic modeling language for mixed-integer and nonlinear optimization. It has been developed using the 99 line code presented by Sigmund (Struct Multidisc Optim 21(2):120–127, 2001) as a starting point. A non-linear optimization problem includes an objective function (to be minimized or maximized) and some number of equality and/or inequality constraints where the objective or some of the constraints are non-linear. The nonlinear solvers that we use in this example are fminunc and fmincon. 
Ordinarily, minimization routines use numerical gradients calculated by finite-difference approximation. Configure Optimization Solver for Nonlinear MPC. The software includes functions for many types of optimization including † Unconstrained nonlinear minimization † Constrained nonlinear minimization, including semi-infinite minimization problems † Quadratic and linear programming. Secant Method for Solving non-linear equations in Newton-Raphson Method for Solving non-linear equat Unimpressed face in MATLAB(mfile) Bisection Method for Solving non-linear equations Gauss-Seidel method using | {
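The list above mentions classic root-finding routines (secant, Newton–Raphson, bisection). As a minimal illustration of one of them — unrelated to the MATLAB code the page describes — here is a pure-Python secant iteration:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Find a root of f using the secant method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            break
        # Intersect the chord through (x0, f0), (x1, f1) with y = 0
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

root = secant(lambda x: x**3 - 2, 1.0, 2.0)   # converges to 2**(1/3)
```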
"domain": "rsalavis.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9840936110872273,
"lm_q1q2_score": 0.8113477122095918,
"lm_q2_score": 0.8244619242200082,
"openwebmath_perplexity": 1403.5366973196892,
"openwebmath_score": 0.4197305142879486,
"tags": null,
"url": "http://hmhi.rsalavis.it/matlab-nonlinear-optimization.html"
} |
general-relativity, cosmology, vacuum, stress-energy-momentum-tensor, cosmological-constant
Title: Absorbing the Cosmological "Constant" in the standard Energy-Stress Tensor
Recently I found some publications on cosmologies with a variable cosmological constant. The Bianchi identity then implies that the divergence of the modified energy-stress tensor $$\hat{T}_{ab}=T_{ab}+\Lambda g_{ab}$$ has to vanish.
I asked my professor about this and he said that variable cosmological constants don't really make sense, because one could always absorb the lambda term into the standard energy stress tensor and not be able to distinguish between the two.
I thought about a counter argument. The only thing I could come up with was that if you consider an ideal fluid $$T_{ab}=(\rho+p)u_au_b-pg_{ab}$$ then the modified energy-stress tensor will be: $$\hat{T}_{ab}=(\rho+p)u_au_b-(p-\Lambda)g_{ab}$$
At this point I could relabel $p'=p-\Lambda$. At first glance the $\Lambda$ term would then reappear in the first term: $$\hat{T}_{ab}=(\rho+p'+\Lambda)u_au_b-p'g_{ab}$$
One might be tempted to think that this is a counter argument however I could make another change and label $\rho'=\rho+\Lambda$ such that I recover the standard form of the Energy stress tensor of an ideal fluid. | {
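Writing the final relabelling out explicitly (my own one-line algebra, not in the original post):

$$\hat{T}_{ab}=\big(\underbrace{\rho+\Lambda}_{\rho'}+\underbrace{p-\Lambda}_{p'}\big)u_au_b-\underbrace{(p-\Lambda)}_{p'}g_{ab}=(\rho'+p')u_au_b-p'g_{ab},$$

which is again exactly the ideal-fluid form, so at this level the $\Lambda$ term cannot be distinguished from a shift in $\rho$ and $p$.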
"domain": "physics.stackexchange",
"id": 53696,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, cosmology, vacuum, stress-energy-momentum-tensor, cosmological-constant",
"url": null
} |
homework-and-exercises, electromagnetism, special-relativity, tensor-calculus, maxwell-equations
Title: Simple derivation of Maxwell's equations from the Electromagnetic Tensor
Let's start by considering the electromagnetic tensor $F^{\mu \nu}$:
$$F^{\mu \nu}=\begin{bmatrix}0 & -E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & -B_z & B_y \\ E_y/c & B_z & 0 & -B_x \\ E_z/c & -B_y & B_x & 0\end{bmatrix}$$
And now consider Maxwell's equation:
$$\nabla \cdot \vec{E}=\frac{\rho}{\varepsilon _0}$$
$$\nabla \cdot \vec{B}=0$$
$$\nabla \times \vec{E}=-\frac{\partial \vec{B}}{\partial t}$$
$$\nabla \times \vec{B}=\mu _0 \vec{j}+\mu _0 \varepsilon _0 \frac{\partial \vec{E}}{\partial t}$$
The claim is that the first and the fourth equations are equivalent to the following tensor equation:
$$\partial _{\mu}F^{\mu \nu}=\mu _0 j^{\nu}$$
(where: $j^{\nu}=(c\rho , \vec{j})$) and that the second and the third equations are also equivalent to:
$$dF=0$$
where the $dF$ is simply a shortcut to write:
$$\partial _{\lambda}F_{\mu \nu}+\partial _{\nu}F_{\lambda \mu}+\partial _\mu F_{\nu \lambda}$$
My objective is to prove, using tensor algebra, that this statement is indeed correct. Let's begin: the first part of the statement is easy; if we think about the first term:
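Before doing the index gymnastics by hand, the $\nu=0$ component of $\partial_\mu F^{\mu\nu}=\mu_0 j^\nu$ can be sanity-checked symbolically. A sketch using sympy (field names and the helper `d` are my own):

```python
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c', positive=True)
coords = (t, x, y, z)

Ex, Ey, Ez = (sp.Function(n)(t, x, y, z) for n in ('E_x', 'E_y', 'E_z'))
Bx, By, Bz = (sp.Function(n)(t, x, y, z) for n in ('B_x', 'B_y', 'B_z'))

# The contravariant field tensor F^{mu nu} from the question
F = sp.Matrix([
    [0,     -Ex/c, -Ey/c, -Ez/c],
    [Ex/c,   0,    -Bz,    By ],
    [Ey/c,   Bz,    0,    -Bx ],
    [Ez/c,  -By,    Bx,    0  ],
])

# partial_mu = ((1/c) d/dt, d/dx, d/dy, d/dz), since x^0 = c t
def d(mu, expr):
    return expr.diff(coords[mu]) / (c if mu == 0 else 1)

lhs_nu0 = sum(d(mu, F[mu, 0]) for mu in range(4))
# lhs_nu0 simplifies to (1/c) * div(E); equating it to mu_0 j^0 = mu_0 c rho
# and using c^2 = 1/(mu_0 eps_0) gives Gauss's law, div(E) = rho / eps_0.
```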
"domain": "physics.stackexchange",
"id": 69063,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, electromagnetism, special-relativity, tensor-calculus, maxwell-equations",
"url": null
} |
propositional-logic, model-checking, software-verification
Make a formal model of the real-world system. (The formal model might be a state-transition graph or automaton, for example.)
Write down a specification that hopefully captures the real-world requirements.
Verify that the formal model of the system meets the specification.
Formal methods provides guarantees about step 3; if step 3 succeeds, then we know that the formal model meets the spec. It doesn't provide any guarantees about steps 1 or 2. If there is a bug in the model, that causes it to fail to accurately capture the behavior of the real-world system, then the result of the verification process becomes meaningless and all guarantees are off. So, you need some separate way to gain assurance in those steps.
The atomic propositions are part of that modelling process. Usually, there are some parts of the real-world system that we model in the state-transition graph and others that we don't. For instance, consider a traffic light. It has some sensors to sense presence of cars and pedestrians; some logic to determine what color light to show in each direction; and some actuators (light bulbs) to actually display those chosen colors. We might choose to model the logic in a formal model, and verify that the logic works properly. This verification process might not verify anything about the sensors or the actuators; it might just assume they are correct. That might be justified if, for instance, we think the logic is the part that is most likely to have subtle errors. Then you might have an atomic proposition for "sensor #1 detected a pedestrian at such-and-such cross-walk" ($p_1$) and an atomic proposition for "turn on the green light facing in the N direction" ($q_N$). Those atomic propositions represent the inputs or outputs to the logic, or facts about the state of the system. The spec then has to be stated in terms of the atomic propositions, e.g., "it should be impossible to have the green light turned on in the N direction and in the E direction" ($G (\neg q_N \lor \neg q_E)$). | {
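As a tiny illustration of step 3 for the traffic-light example (my own toy model, not from the text), the spec $G(\neg q_N \lor \neg q_E)$ can be checked by explicit-state reachability:

```python
from collections import deque

# Toy traffic-light logic: a state is (n_green, e_green).
# The controller gives green to at most one direction at a time.
def successors(state):
    n_green, e_green = state
    if n_green or e_green:
        yield (False, False)            # back to all-red
    else:
        yield (True, False)             # green for North
        yield (False, True)             # green for East

# Check that an invariant holds in every reachable state.
def holds_globally(init, invariant):
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return False
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Spec G(not q_N or not q_E): never green in both directions at once.
safe = holds_globally((False, False), lambda s: not (s[0] and s[1]))
```

Note that this only verifies the model; whether the model matches the real intersection is exactly the step-1/step-2 problem discussed above.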
"domain": "cs.stackexchange",
"id": 11623,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "propositional-logic, model-checking, software-verification",
"url": null
} |
noise, infinite-impulse-response
If we want to find the sum of all the terms squared in the sequence:
$$\begin{align}
\sum_{n=-\infty}^{\infty}h^2[n]
&=\sum_{n=-\infty}^{\infty}\left[(0.1)^{n-5} u[n-5]\right]^2\\
&=\sum_{n=5}^{\infty}\left[(0.1)^{n-5}\right]^2\\
&=\sum_{m=0}^{\infty}(0.1)^{2m}\\
&=\sum_{m=0}^{\infty}(0.01)^{m}\\
&=\frac{1}{1-0.01}\\
&=\frac{100}{99}\\
\end{align}$$
Where I've used the formula for the infinite geometric sum.
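A quick numerical check of the closed form (illustrative only):

```python
# Partial sum of h[n]^2 for h[n] = 0.1**(n-5) * u[n-5].
# The terms decay so fast that a couple hundred of them already
# match the closed form 100/99 to machine precision.
partial = sum((0.1 ** (n - 5)) ** 2 for n in range(5, 205))
closed_form = 100 / 99
```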
EDIT:
Note that a smaller NRR implies a worse performance of the filter at reducing noise. In this case, NRR turned out to be almost $1$ (or $0 \ \mathrm{dB}$), so the variance of the noise at the output is practically the same as in the input. As Fat32 pointed out in the comments, this depends on the position of the pole of your filter. The noise is white (as far as I understood from your comments), so it has infinite bandwidth. If the filter's bandwidth is too large, then "a lot" of noise will pass through it, appearing at the output. On the contrary, if it is narrow, then it will "filter out" most of it, leading to a decrease in the variance at the output. In this case, the filter's bandwidth is rather wide due to the pole being near the origin, and then the NRR approaches $1$: not a really good filter to reduce noise. | {
"domain": "dsp.stackexchange",
"id": 6660,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "noise, infinite-impulse-response",
"url": null
} |
ompl, ros-indigo
<rosparam command="load" file="$(find clam_moveit_config)/config/kinematics.yaml"/>
<rosparam command="load" file="$(find clam_moveit_config)/config/ompl_planning.yaml"/>
</launch>
ompl_planning.yaml :
planner_configs:
SBLkConfigDefault:
type: geometric::SBL
LBKPIECEkConfigDefault:
type: geometric::LBKPIECE
RRTkConfigDefault:
type: geometric::RRT
RRTConnectkConfigDefault:
type: geometric::RRTConnect
LazyRRTkConfigDefault:
type: geometric::LazyRRT
ESTkConfigDefault:
type: geometric::EST
KPIECEkConfigDefault:
type: geometric::KPIECE
RRTStarkConfigDefault:
type: geometric::RRTstar
BKPIECEkConfigDefault:
type: geometric::BKPIECE
arm:
planner_configs:
- SBLkConfigDefault
- LBKPIECEkConfigDefault
- RRTkConfigDefault
- RRTConnectkConfigDefault
- ESTkConfigDefault
- KPIECEkConfigDefault
- BKPIECEkConfigDefault
- RRTStarkConfigDefault
projection_evaluator: joints(shoulder_pan_joint,shoulder_pitch_joint)
longest_valid_segment_fraction: 0.05
gripper_group:
planner_configs:
- SBLkConfigDefault
- LBKPIECEkConfigDefault
- RRTkConfigDefault
- RRTConnectkConfigDefault
- ESTkConfigDefault
- KPIECEkConfigDefault
- BKPIECEkConfigDefault
- RRTStarkConfigDefault | {
"domain": "robotics.stackexchange",
"id": 19348,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ompl, ros-indigo",
"url": null
} |
The scalar product $\vec x\cdot\vec s$ of vectors $\vec x$ and $\vec s$ with coordinates $(x_1,x_2)$ and $(s_1,s_2)$, respectively, is $x_1s_1+x_2s_2$, so e.g. the coordinates of $\Delta\vec a-(\Delta\vec a\cdot\vec m)\vec m$ are $\Delta a_1-(\Delta a_1 m_1+\Delta a_2m_2)m_1$ and $\Delta a_2-(\Delta a_1 m_1+\Delta a_2m_2)m_2$. The norm (length) $|\vec x|$ of a vector $\vec x$ with coordinates $(x_1,x_2)$ is given by $\sqrt{x_1^2+x_2^2}$, so the first coordinate of the first of the last two equations is
$$b_{11}=b_1+\frac{\sqrt{\Delta a_1^2+\Delta a_2^2}}6m_{a1}\;.$$
I hope that's roughly what you were looking for. | {
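The scalar product, the norm, and the projection-removal expression $\Delta\vec a-(\Delta\vec a\cdot\vec m)\vec m$ translate directly into code. A sketch for 2-D vectors (the example values are made up):

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def norm(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

# Coordinates of  da - (da . m) m , as written out in the text
def remove_projection(da, m):
    s = dot(da, m)
    return (da[0] - s * m[0], da[1] - s * m[1])

da, m = (3.0, 4.0), (1.0, 0.0)        # m is a unit vector
r = remove_projection(da, m)
# r is orthogonal to m whenever |m| = 1
```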
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534349454032,
"lm_q1q2_score": 0.8135648449341729,
"lm_q2_score": 0.8289388125473628,
"openwebmath_perplexity": 263.6457109901968,
"openwebmath_score": 0.8442350029945374,
"tags": null,
"url": "http://math.stackexchange.com/questions/51827/finding-two-b%c3%a9zier-control-points-given-three-points"
} |
ros, c++
Originally posted by dornhege with karma: 31395 on 2013-06-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by mateo_7_7 on 2013-06-12:
actually i receive the name of the topic (the string), which i have to read from, from another subscriber (and for sure the updating of this string is performed correctly)....but, as I said, i cannot read from this new topic"/vrep/visionSensorData"+targetII_id because the topic remains:
Comment by mateo_7_7 on 2013-06-12:
"/vrep/visionSensorData"+NULL even if the string is updated. The updating of this string (apart from the first time that is performed after 1 loop) is performed very rarely ....how can i do???
Comment by dornhege on 2013-06-12:
You have to give the subscribers some time to connect and receive messages in your code design. If you really just want to get something quickly, a service might be better suited. | {
"domain": "robotics.stackexchange",
"id": 14528,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, c++",
"url": null
} |
machine-learning, neural-network, deep-learning
# Return The Error Of The Current Layer
return weightedError
def calculateError(self, weightedError):
# The Container For The Error Of The Current Layer
errorOfCurrentLayer = []
# The Weighted Error For The Neuron With The Neuron
for wE, n in zip(weightedError, self.neurons):
# Add The Product Of The Weighted Error With The Z Of The Current Neuron Run To Sigmoid Prime
errorOfCurrentLayer.append(wE * sigmoidPrime(n.getZ()))
# Return The Error Of The Current Layer
return errorOfCurrentLayer
def updateWeightsAndBiases(self, errorOfCurrentLayer, aOfPrevLayer):
# The Error For The Neuron With The Neuron
for e, n in zip(errorOfCurrentLayer, self.neurons):
# Error Of Current Layer Is Equal To The Delta Of The Bias So Apply That
n.updateBias(e)
# Forward The Error And All The Activity Of The Previous Layer To The Current Neuron To Update Its Weights
n.updateWeights(e, aOfPrevLayer)
def setInput(self, x):
# Set It To Every Neuron
for neuron, val in zip(self.neurons, x):
neuron.setInput(val)
def getA(self):
aOfLayer = []
for neuron in self.neurons:
aOfLayer.append(neuron.getA())
return aOfLayer
def getNeuronNum(self):
return len(self.neurons)
def getLayerInfo(self):
return "Layer( %i ), has %i Neurons" % (self.layerNum, len(self.neurons))
----------
class Neuron:
def __init__(self, numNeuronsPrevLayer):
self.a = 0
self.z = 0
self.b = 0.5
if numNeuronsPrevLayer != 0:
self.w = np.random.uniform(low = 0, high = 0.5, size=(numNeuronsPrevLayer,)) | {
"domain": "datascience.stackexchange",
"id": 2517,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, neural-network, deep-learning",
"url": null
} |
ros, arduino, rosserial-arduino, rosserial-python
Title: Why would rosserial_python fail when it connects to arduino?
Hi! I'm running 14.04 indigo, and I need a hand to fix this problem:
I followed the Arduino tutorial in ROS and tried to create a publisher, but it failed:
$rosrun rosserial_python serial_node.py /dev/ttyACM0
[ERROR] [WallTime: 1509590980.692109] Creation of publisher failed: line:
yaml https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml osx
unsupported pickle protocol: 4
I tried to use $rosdep update to fix but nothing happend
Arduino code is here:
#include <SPI.h>
#include <MFRC522.h>
/////setup/////
#include <ros.h>
#include <std_msgs/String.h>
#include <std_msgs/Byte.h>
#include <std_msgs/Char.h>
#include <std_msgs/Int32.h>
ros::NodeHandle nh;
std_msgs::String str_msg;
ros::Publisher chatter("RFID", &str_msg);
char hello[13] = "hello world!";
char RFID[32];
////////////////
constexpr uint8_t RST_PIN = 9; // Configurable, see typical pin layout above
constexpr uint8_t SS_PIN = 10; // Configurable, see typical pin layout above
MFRC522 rfid(SS_PIN, RST_PIN); // Instance of the class
MFRC522::MIFARE_Key key;
// Init array that will store new NUID
byte nuidPICC[4]; | {
"domain": "robotics.stackexchange",
"id": 29256,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, arduino, rosserial-arduino, rosserial-python",
"url": null
} |
vba, excel
'offset target by 1 to capture change
ColumnCount = 1
'Set target offset to new cell
Set target = target.Offset(, ColumnCount)
'Generate key and pass target offset value to table update module
modTableUpdate.TableUpdate target, destsheet, sourcesheet, ColumnHeader, ColumnFormula, Celladdress, Key, cell, result
'Set target offset back to original cell
ColumnCount = -1
Set target = target.Offset(, ColumnCount)
'Set column count to 0 to maintain alignment when loop repeats
ColumnCount = 0
End If
Next RowCount

I'm doing a little guesswork here because you've not shown us the method's signature. I apologize if I've made some bad assumptions.
Typically, when I see a Target Range, it means we're working inside of a worksheet's OnChange event. Assuming that, I would recommend introducing a new Range variable into the mix so you're not accidentally modifying the incoming range. Adding an extra variable would make it clear and explicit when/if you're programmatically changing the same range the user modified.
This snippet makes me believe it's the right thing to do even if my assumption is wrong.
'Set target offset to new cell
Set target = target.Offset(, ColumnCount)
'Generate key and pass target offset value to table update module
modTableUpdate.TableUpdate target, destsheet, sourcesheet, ColumnHeader, ColumnFormula, Celladdress, Key, cell, result
'Set target offset back to original cell
ColumnCount = -1
Set target = target.Offset(, ColumnCount)
'Set column count to 0 to maintain alignment when loop repeats
By introducing an offsetTarget variable, you should be able to avoid tracking the column count and all this setting and resetting because you've never modified target to begin with.
The other thing I would be careful of is detailed at the end of this answer and again assumes this code resides in the Worksheet_Change event. Target could possibly be a multi-cell Range. Double check your code to make sure it can deal with that edge case.
One last thing. This method takes in an awful lot of parameters. | {
"domain": "codereview.stackexchange",
"id": 12323,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vba, excel",
"url": null
} |
electrostatics, electrons, atomic-physics, physical-chemistry, dielectric
Title: What force creates electronegativity?
Electric charge is used to describe the behavior of electrons which seek to counterbalance the positive charge of protons.
But I have read about other forces which also attract electrons to atoms; could someone explain the mechanism behind electronegativity? Or why an electron might seek to enter the orbit of an atom which was already electrically neutral?
The short answer is: electromagnetism also causes electronegativity.
The longer answer is: what electron distribution develops in equilibrium around a set of positive nuclei (and hence, in a molecule) depends on the distribution of nuclei and their charges. It is quite intuitive that the electrons will feel more attracted to higher charged nuclei, just as in some circles, attractive women will feel more attracted to the guys with the most muscle mass, whereas the beardless beanpole will sip alone on his glass of orange juice at the party. But like in the mating case, the situation is a little more complicated with electronegativity, where the radius of orbitals also plays a role as to what nucleus attracts electrons the most.
Independent of the specific reasons for preferred attraction, a resulting charge distribution could be naively described by tabulating the charge density at regular picometer intervals in x, y, z direction, or it can be described by something called a multipole expansion. There is quite some math involved in multipole expansion, but the intuition behind it is basic. It describes the angular distribution of charge with respect to a certain reference point (for example the center of gravity of a molecule). | {
"domain": "physics.stackexchange",
"id": 86849,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, electrons, atomic-physics, physical-chemistry, dielectric",
"url": null
} |
ros, ros2, colcon
Title: Colcon build fail - notify2
OS: Pop OS 20.04
I was running the command colcon build and receive the following output:
Starting >>> turtlesim
Finished <<< turtlesim [0.64s] | {
"domain": "robotics.stackexchange",
"id": 36803,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros2, colcon",
"url": null
} |
java, swing, audio
public UIBuilder withDefaultCloseOperation(int windowConstant) {
frame.setDefaultCloseOperation(windowConstant);
return this;
}
public UIBuilder withMenuBar(@NotNull Supplier<JMenuBar> menuBarSupplier) {
Objects.requireNonNull(menuBarSupplier);
frame.setJMenuBar(menuBarSupplier.get());
return this;
}
public UIBuilder withResizable(boolean isResizable) {
frame.setResizable(isResizable);
return this;
}
public UIBuilder withLocationRelativeTo(JComponent component) {
frame.setLocationRelativeTo(component);
return this;
}
public void visualize() {
frame.setVisible(true);
}
}
public static class ComponentBuilder {
private JComponent component;
public ComponentBuilder(@NotNull Supplier<JComponent> panelSupplier) {
Objects.requireNonNull(panelSupplier);
this.component = panelSupplier.get();
}
public ComponentBuilder withBorder(@NotNull Supplier<Border> borderSupplier) {
Objects.requireNonNull(borderSupplier);
component.setBorder(borderSupplier.get());
return this;
}
public ComponentBuilder withSize(int width, int height) {
checkBounds(width, height);
component.setSize(width, height);
return this;
}
public ComponentBuilder withComponent(@NotNull String position, @NotNull Supplier<JComponent> componentSupplier) {
Stream.of(position, componentSupplier).forEach(Objects::requireNonNull);
component.add(position, componentSupplier.get());
return this;
}
public ComponentBuilder withComponent(@NotNull Supplier<JComponent> componentSupplier) {
Objects.requireNonNull(componentSupplier);
component.add(componentSupplier.get());
return this;
} | {
"domain": "codereview.stackexchange",
"id": 44996,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, swing, audio",
"url": null
} |
openni, xtion
Title: Asus Xtion Not enough bandwidth
I'm trying to use the Asus Xtion Pro camera with openni on Ubuntu 12.04. It does not work. Running dmesg in the terminal gives:
[15844.380294] usb 3-9: new high-speed USB device number 13 using xhci_hcd
[15844.400319] usb 3-9: New USB device found, idVendor=1d27, idProduct=0600
[15844.400327] usb 3-9: New USB device strings: Mfr=2, Product=1, SerialNumber=0
[15844.400333] usb 3-9: Product: PrimeSense Device
[15844.400337] usb 3-9: Manufacturer: PrimeSense
[15844.401720] usb 3-9: Not enough bandwidth for new device state.
[15844.401738] usb 3-9: can't set config #1, error -28
I've tried all the USB ports on the computer. It always gives the same error.
Originally posted by atp on ROS Answers with karma: 529 on 2013-11-30
Post score: 0
I've solved this problem by deactivating xHCI and EHCI Hand-off in the bios of the motherboard.
Originally posted by atp with karma: 529 on 2013-12-03
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Martin Günther on 2013-12-04:
Yup, that seems like a good solution. The Xtion and Kinect don't work on USB 3.0 (xhci) ports. If you're lucky, your PC has at least one USB 2.0 (ehci) port; otherwise, you have to disable xhci (either in the bios, or by blacklisting the kernel module). | {
"domain": "robotics.stackexchange",
"id": 16306,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "openni, xtion",
"url": null
} |
complexity-theory, time-complexity, machine-learning, classification, pattern-recognition
k-nearest neighbors (https://stats.stackexchange.com/questions/219655/k-nn-computational-complexity),
Naive Bayes is linear for those PDFs that can be estimated in linear time (e.g. Poisson and Multinomial PDFs).
Approximate SVM (https://stats.stackexchange.com/questions/96995/machine-learning-classifiers-big-o-or-complexity)
Logistic Regression can be linear (https://cstheory.stackexchange.com/questions/4278/computational-complexity-of-learning-classification-algorithms-fitting-the-p)
AdaBoost and Gradient Boosting require at least a weak learner and are linear only if their weak learner is linear (f m n log(n)) (https://stackoverflow.com/questions/22397485/what-is-the-o-runtime-complexity-of-adaboost)
Neural Networks can be linear (https://ai.stackexchange.com/questions/5728/time-complexity-for-training-a-neural-network)
The following classifiers are nonlinear either in the training or in the testing phase:
SVM requires polynomial training with respect to the number of samples (https://stackoverflow.com/questions/16585465/training-complexity-of-linear-svm)
Decision tree and correspondingly Random forest are loglinear (m n log(n)) (https://stackoverflow.com/questions/34212610/why-is-the-runtime-to-construct-a-decision-tree-mnlogn)
Linear (and Quadratic) Discriminant Analysis (LDA and QDA) are polynomial (n m^2) (https://stats.stackexchange.com/questions/211177/comparison-of-lda-vs-knn-time-complexity)
Restricted Boltzmann Machine (RBM) is polynomial with respect to m (http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.BernoulliRBM.html) | {
"domain": "cs.stackexchange",
"id": 11737,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, time-complexity, machine-learning, classification, pattern-recognition",
"url": null
} |
filters, discrete-signals, infinite-impulse-response, digital-filters
Title: Coefficient Scaling of IIR filter to obtain unity gain response
I am designing a 2nd order IIR digital filter:
My tf equation with coefficients is
b = [0 1.209e09]
a= [9.2175 -2.6952 1.0000]
sys=tf(b,a,0.1,'Variable','z^-1')
bode(sys)
I have couple of questions :
How do I get unity scaling, as a formula, so that I can always multiply it with the 'sys' tf?
I have usually seen that coefficients are less than 1; is there a way I can make them so?
My coefficients are calculated from some adaptive filter optimization algorithm, so they keep on changing; is there a way I can keep the filter coefficient changes to a minimum?

It's important to specify at which frequency you want unity gain. But assuming you mean DC ($\omega=0$), because that filter has a low pass characteristic, the DC gain of an IIR filter is given by
$$G_{DC}=\frac{\sum_kb[k]}{\sum_ka[k]}\tag{1}$$
It's also common to normalize the denominator coefficients such that $a[0]=1$. In your example that would give
a = [1.00000 -0.29240 0.10849]
and
b = [0.00000 131163547.59967]
Finally, normalizing by the DC gain $(1)$ will give you a filter with unity gain at DC. This leaves the denominator coefficients unchanged, and the new normalized numerator coefficients are given by
b = [0.00000 0.81609] | {
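The two normalization steps can be scripted directly. A small pure-Python sketch using the coefficient values from the question:

```python
b = [0.0, 1.209e9]
a = [9.2175, -2.6952, 1.0]

# 1) Normalize the denominator so that a[0] == 1
a_n = [c / a[0] for c in a]          # -> [1.0, -0.29240..., 0.10849...]
b_n = [c / a[0] for c in b]          # -> [0.0, 131163547.6...]

# 2) DC gain G = sum(b) / sum(a); divide the numerator by it
g_dc = sum(b_n) / sum(a_n)
b_unity = [c / g_dc for c in b_n]    # -> [0.0, 0.81609...]

# The filter now has unity gain at z = 1 (DC):
h_dc = sum(b_unity) / sum(a_n)       # equals 1.0
```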
"domain": "dsp.stackexchange",
"id": 9026,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, discrete-signals, infinite-impulse-response, digital-filters",
"url": null
} |
terminology, sea-ice, cryosphere
h_2 &= 0.5 \times 0.5 \text{m} + 0.5 \times 0.3 \text{m} = 0.4 \text{m}
\end{align}$$
Both variables can be useful. As I have seen conflicting names, I am a bit puzzled by the naming for $h_1$ and $h_2$. What are the standard names for $h_1$ and $h_2$ used in the sea ice community?

It would appear that you are spot on with the commonly used terms in your question. A well-referenced post on the Arctic forums (not by me) entitled 'Average sea ice thickness vs effective sea ice thickness' provides information about this very question. Specifically:
Average Sea Ice Thickness = Volume of Sea Ice/Ice Covered Area
Effective Sea Ice Thickness = Volume of Sea Ice/Grid Cell Area
According to the author of the post (an Administrator of that site), these terms are embedded in a lot of the published literature - for example, in the article "Uncertainties in Arctic sea ice thickness and volume: new estimates and implications for trends".
"domain": "earthscience.stackexchange",
"id": 960,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "terminology, sea-ice, cryosphere",
"url": null
} |
ubuntu-precise, ubuntu
Title: lwheel and rwheel frame to link up to URDF
So, this seem to be the final thing I am stuck on, please help
My wheels in my URDF is lwheel and rwheel
When I list topics, the wheels are listed
/rwheel
/rwheel_vel
/rwheel_vtarget
When I do rosrun tf view_frames
Exception thrown:"odom" passed to lookupTransform argument source_frame does not exist.
The current list of frames is:
Frame laser exists with parent base_frame.
Frame base_frame exists with parent map.
Frame scanmatcher_frame exists with parent
What the hell do I need to map these so the URDF maps to the publisher? I think it's something to do with a TF?
Originally posted by burf2000 on ROS Answers with karma: 202 on 2017-02-12
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26996,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ubuntu-precise, ubuntu",
"url": null
} |
quantum-state, linear-algebra, terminology-and-notation
Title: Notation for two qubit composite product state
In my lecture notes on quantum information processing my lecturer gives an example of composite systems as $|\phi\rangle=|0\rangle |0\rangle=|00\rangle$. I understand that if we have $n$ qubits then their product state will be in a $2^n$-dimensional Hilbert space, and I understand the 2 qubit state $|00\rangle$ to be represented in matrix representation as $\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}$ (if that is wrong please do correct my misunderstanding though). My question is about the notation $|0\rangle|0\rangle=|00\rangle$: how can we calculate this with matrices? On the left-hand side we have a 2 by 1 matrix multiplied by a 2 by 1 matrix, which cannot be calculated. I thought perhaps it was a matter of direct products, but my calculation led to an incorrect result there too.
Could anyone clarify this for me, please?
Edit: It occurred to me that I think I'm mistaken about the matrix representation of $|00\rangle$, I think it would make more sense to be $\begin{pmatrix} 1 \\ 0\\0\\0 \end{pmatrix}$ in which case the direct product does work and I should take the notation $|0\rangle|0\rangle$ to be a shorthand for the direct product not the multiplication of two matrices, is that correct? $|0\rangle|0\rangle$ is actually a shorthand for $|0\rangle \otimes |0\rangle$ or $\begin{bmatrix} 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} 1 \\ 0\end{bmatrix} $ where $\otimes$ stands for the tensor product or essentially the Kronecker product. To quote Wikipedia: | {
"domain": "quantumcomputing.stackexchange",
"id": 698,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-state, linear-algebra, terminology-and-notation",
"url": null
} |
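The tensor-product identity from the record above — $|0\rangle\otimes|0\rangle$ giving the 4-component vector $(1,0,0,0)^T$ — can be checked with a few lines of Python (a minimal sketch; the `kron` helper is written out by hand rather than taken from a library):

```python
def kron(u, v):
    """Kronecker (tensor) product of two vectors given as flat lists."""
    return [a * b for a in u for b in v]

ket0 = [1, 0]             # |0> in the computational basis
ket00 = kron(ket0, ket0)  # |0> (x) |0>  =  |00>
print(ket00)              # [1, 0, 0, 0]
```

The same helper gives $|11\rangle$ as `kron([0, 1], [0, 1])`, i.e. `[0, 0, 0, 1]`, matching the usual ordering of the two-qubit basis.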
game, vba, excel, tetris
Private Function getX(ByVal ID As Long, ByVal Z As Long, ByVal X As Long, ByVal Index As Long) As Long
    'Data is a flat lookup of X offsets: 16 entries per piece ID, indexed by rotation Z (0-3) and cell Index (1-4)
Dim Data As Variant: Data = Array(1, 1, 1, 1, 0, 1, 2, 3, 2, 2, 2, 2, 0, 1, 2, 3, 1, 1, 0, 1, 0, 0, 1, 2, 1, 2, 1, 1, 0, 1, 2, 2, 1, 1, 1, 2, 0, 1, 2, 0, 0, 1, 1, 1, 2, 0, 1, 2, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 2, 0, 1, 0, 0, 1, 1, 1, 2, 0, 1, 1, 1, 2, 2, 0, 1, 2, 1, 1, 0, 1, 1, 1, 0, 1, 2, 1, 1, 2, 1, 0, 1, 1, 2, 1, 0, 1, 0, 0, 1, 1, 2, 2, 1, 2, 1)
getX = Data((ID - 1) * 16 + Z * 4 + Index - 1) + X
End Function | {
"domain": "codereview.stackexchange",
"id": 29783,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "game, vba, excel, tetris",
"url": null
} |
Joined: Dec 2006
From: Lexington, MA
Posts: 3,267
Thanks: 408
Re: sum of integers squared
Hello, gelatine1!
Quote:
I have been searching for a formula for this, but I could only find one for the sum of the first $n$ integers squared, which was: $\frac{n(n+1)(2n+1)}{6}$. Now I have been producing a formula that can calculate the sum of integers squared, no matter where you start or what the difference between the integers is.
With a constant difference between the integers, we have an arithmetic sequence.
$\text{Then: }\:S \;=\;a^2\,+\,(a+d)^2\,+\,(a+2d)^2\,+\,(a+3d)^2\,+\,(a+4d)^2 \,+\,\cdots\:+\,(a+nd)^2$
$\;\;\;\text{where }a\text{ is the first term, }d\text{ is the common difference,}
\;\;\;\text{and there are }n+1\text{ terms.}$
$\text{Expand each square: }\:(a+kd)^2 \;=\;a^2\,+\,2akd\,+\,k^2d^2$
[color=beige]. . . [/color]$\text{and sum for }k = 0\text{ to }n,\;\text{using }\sum_{k=0}^n k \,=\,\frac{n(n+1)}{2}\,\text{ and }\sum_{k=0}^n k^2 \,=\,\frac{n(n+1)(2n+1)}{6}.$
$\text{And we have: }\:S \;=\;(n+1)a^2\,+\,n(n+1)ad\,+\,\frac{n(n+1)(2n+1)} {6}d^2$
[color=beige]. . . . [/color]$\text{ta-}DAA!$
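A quick numerical check of the closed form above against the direct sum (Python sketch; the test values of $a$, $d$, $n$ are arbitrary):

```python
# Compare S = sum of (a + k*d)^2 for k = 0..n with the closed form
# S = (n+1)a^2 + n(n+1)ad + n(n+1)(2n+1)d^2/6.
def direct_sum(a, d, n):
    return sum((a + k * d) ** 2 for k in range(n + 1))

def closed_form(a, d, n):
    # n(n+1)(2n+1) is always divisible by 6, so integer division is exact
    return (n + 1) * a * a + n * (n + 1) * a * d + n * (n + 1) * (2 * n + 1) * d * d // 6

for a, d, n in [(3, 5, 7), (2, 2, 10), (-4, 3, 12)]:
    assert direct_sum(a, d, n) == closed_form(a, d, n)
print("closed form verified")
```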
October 15th, 2012, 06:23 PM #4
Math Team
Joined: Jul 2011
From: North America, 42nd parallel
Posts: 3,372
Thanks: 233 | {
"domain": "mymathforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9678992914310606,
"lm_q1q2_score": 0.8168054756046985,
"lm_q2_score": 0.843895106480586,
"openwebmath_perplexity": 2622.527159287881,
"openwebmath_score": 0.5939618349075317,
"tags": null,
"url": "http://mymathforum.com/number-theory/30941-sum-integers-squared.html"
} |
ros, logger, dwa-local-planner
And I have set the ROSCONSOLE_CONFIG_FILE environment variable to point to the file in the original post. My bad, I should have mentioned that.
Also I have tried the rqt gui with the logger settings but I can't find the settings for the DWALocalPlanner (not the DWAPlannerRos code).
Comment by petrik on 2022-03-17:
Ok, I think I figured it out. I was thinking that the missing log messages from the DWALocalPlanner were due to me not setting the log level correctly; however, it looks like the problem (in my case) is that the planner doesn't generate any plans until a recovery rotate happens. After that it finds a viable path and uses it. So the missing log messages weren't actually missing, the code that generates the log messages was never hit. | {
"domain": "robotics.stackexchange",
"id": 37490,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, logger, dwa-local-planner",
"url": null
} |
radial-velocity
Title: Radial velocities - wings and core Why are radial velocities measured at the core and at wings of a line profile? I mean interactive measurement -- comparing the direct and flipped images of the line profiles.
What is the difference of these radial velocities (I do not mean values)? For instance, why is it better to measure radial velocities of wings for H alpha emission line for Be stars? Does it make sense to measure RVs on a blue wing and a red wing? Or are radial velocities of a core measured for absorption lines and radial velocities of wings for emission lines?
Source: https://slideplayer.com/slide/9358196/ A line profile may be formed in material that has a range of line of sight velocities. The answer to your question depends on what radial velocity you are trying to measure.
For example, if the line broadening is dominated by macroscopic plasma motions, then the wavelength of any particular point in the line profile corresponds to a different line of sight velocity. The core of the line will correspond to the line of sight velocity to the region with the greatest optical depth. The wings of the line correspond to material which is blueshifted or redshifted with respect to that, with correspondingly different line of sight velocities.
If you are trying to measure the line of sight velocity of the star as a whole, then you wouldn't want to include any wings of a line that are caused by inflow or outflow of material.
On the other hand, the core of the line might be corrupted by emission from a structure that doesn't correspond to the photosphere (e.g. chromospheric emission-line cores in cool stars), in which case finding the average radial velocity by assuming that the wings of the line are symmetric might be the way to go.
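As a concrete example of the measurement itself: a shift of the fitted line position is converted to a line-of-sight velocity with the non-relativistic Doppler relation $v = c\,\Delta\lambda/\lambda_\mathrm{rest}$. A sketch (the observed wavelength below is illustrative, not taken from any real spectrum):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler shift; positive = receding (redshifted)."""
    return C_KM_S * (lambda_obs - lambda_rest) / lambda_rest

# H-alpha rest wavelength is 6562.8 Angstroms; a line centre measured
# at 6563.9 Angstroms corresponds to roughly +50 km/s.
v = radial_velocity(6563.9, 6562.8)
print(f"{v:.1f} km/s")
```

Measuring the core versus the wings simply amounts to feeding a different `lambda_obs` (the fitted core position, or the mean of the blue- and red-wing positions) into the same relation.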
You mention Be stars. These are usually stars that have some kind of disc. The wings of the line will reflect motion towards and away from the observer. In a Keplerian disc, this motion will be symmetric, so the line wings will be symmetrically placed on either side of the true velocity of the star. The line core might be produced by infalling material, so would be redshifted with respect to the true stellar velocity. | {
"domain": "astronomy.stackexchange",
"id": 5688,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "radial-velocity",
"url": null
} |
organic-chemistry, conformers, cyclohexane
For example, I have put structures of $\alpha$- ($\bf{1}$) and $\beta$-hexoses ($\bf{2}$) in the same image. Just look at the orientations of 2,3-dihydroxy-groups in the chair conformation of $\bf{1}$. They are in 1,2-ax,eq-orientation, thus are in cis-orientation (see point (3) above). You can confirm that by looking at the planar version of the molecule.
In addition, if you look at the orientations of 2,4-dihydroxy-groups in the same chair conformation, you'd see they are in 1,3-ax,eq-orientation as well. Thus they are in trans-orientation (see point (6) above). You can again confirm that by looking at the planar version of the molecule. | {
"domain": "chemistry.stackexchange",
"id": 14400,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, conformers, cyclohexane",
"url": null
} |
So let's math up. (The local equivalent of Barney Stinson's "Suit up!")
### Particular Case
If both $X$ and $Y$ were dichotomous, then you can assume, without loss of generality, that both assume only the values $0$ and $1$ with arbitrary probabilities $p$, $q$ and $r$ given by \begin{align*} P(X=1) = p \in [0,1] \\ P(Y=1) = q \in [0,1] \\ P(X=1,Y=1) = r \in [0,1], \end{align*} which completely characterize the joint distribution of $X$ and $Y$. Taking on @DilipSarwate's hint, notice that those three values are enough to determine the joint distribution of $(X,Y)$, since \begin{align*} P(X=0,Y=1) &= P(Y=1) - P(X=1,Y=1) = q - r\\ P(X=1,Y=0) &= P(X=1) - P(X=1,Y=1) = p - r\\ P(X=0,Y=0) &= 1 - P(X=0,Y=1) - P(X=1,Y=0) - P(X=1,Y=1) \\ &= 1 - (q - r) - (p - r) - r = 1 - p - q + r. \end{align*} (On a side note, of course $r$ is bound to respect both $p-r\in[0,1]$, $q-r\in[0,1]$ and $1-p-q+r\in[0,1]$ beyond $r\in[0,1]$, which is to say $r\in[\max(0,\,p+q-1),\min(p,q)]$.) | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9697854103128328,
"lm_q1q2_score": 0.8408386417374486,
"lm_q2_score": 0.867035771827307,
"openwebmath_perplexity": 485.78910135418664,
"openwebmath_score": 0.9997827410697937,
"tags": null,
"url": "https://stats.stackexchange.com/questions/258704/does-covariance-equal-to-zero-implies-independence-for-binary-random-variables"
} |
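As a numerical companion to the record above: for binary variables, $\operatorname{Cov}(X,Y) = r - pq$, so zero covariance forces $r = pq$, and then every cell of the joint table factorises into the product of marginals — i.e. independence. A sketch (the values of $p$ and $q$ are arbitrary; the joint table uses $P(X=0,Y=0) = 1-p-q+r$):

```python
def joint(p, q, r):
    """Joint pmf of two binary variables with P(X=1)=p, P(Y=1)=q, P(X=1,Y=1)=r."""
    return {(1, 1): r, (1, 0): p - r, (0, 1): q - r, (0, 0): 1 - p - q + r}

p, q = 0.4, 0.3
r = p * q                      # this choice makes Cov(X, Y) = r - p*q = 0
pmf = joint(p, q, r)

# With r = p*q, every cell equals the product of the marginals:
for (x, y), prob in pmf.items():
    px = p if x == 1 else 1 - p
    py = q if y == 1 else 1 - q
    assert abs(prob - px * py) < 1e-12
print("cov = 0 implies independence for binary variables")
```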
deep-learning, reference-request, long-short-term-memory, applications
Title: Does LSTM provide any unique value or advantages compared to other algorithms, including "vanilla" RNN? I have heard a lot of hype around LSTM for all kinds of time-series based applications including NLP. Despite this, I haven't seen many (if any) applications of LSTM where LSTM performs uniquely well compared to other types of deep learning, including the more vanilla RNN.
Are there any examples where LSTM does significantly better on a particular task, compared to other modern algorithms and architectures? LSTMs were the state-of-the-art (SOTA) in many cases (e.g. machine translation) until transformers came along - now I don't really know the SOTA or where LSTMs still perform better than e.g. transformers. LSTMs were introduced to solve the vanishing and exploding gradient problems. Even the LSTM paper tells you that
In comparisons with RTRL, BPTT, Recurrent Cascade-Correlation,
Elman nets, and Neural Sequence Chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long time lag tasks that have never been solved by previous recurrent network algorithms.
For a specific case where LSTM achieved SOTA (if I remember correctly), you can check the neural machine translation paper. Google used LSTMs for some time in Google Translate. See this paper for more details. | {
"domain": "ai.stackexchange",
"id": 3811,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, reference-request, long-short-term-memory, applications",
"url": null
} |
pcl, arm-navigation, pr2
#7 0x0000000000000000 in ?? ()
Thread 4 (Thread 0x7fffe611c700 (LWP 25339)):
#0 0x00007ffff5500d84 in pthread_cond_wait@@GLIBC_2.3.2 ()
from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007ffff7708a32 in ros::ROSOutAppender::logThread() ()
from /opt/ros/fuerte/lib/libroscpp.so
#2 0x00007ffff4f29ce9 in thread_proxy () from /usr/lib/libboost_thread.so.1.46.1
#3 0x00007ffff54fce9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#4 0x00007ffff5229cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#5 0x0000000000000000 in ?? () | {
"domain": "robotics.stackexchange",
"id": 13041,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "pcl, arm-navigation, pr2",
"url": null
} |
fluid-dynamics, angular-momentum, momentum, vortex
Can anyone help me understand this mysterious passage? It's all about velocity induction in ideally irrotational flows (irrotational except for an infinitely small region of the domain, like vortex tubes).
I haven't read the page you're referring to, so I'll leave the first point for a further edit of the answer
The circulation of a vortex ring is defined as the line integral of the velocity field on a path that forms a loop around an infinitely small section of the vortex ring (it's "bound" to the vortex ring, passing one time through the hole of the vortex ring). Helmholtz's theorems state properties of the circulation (roughly speaking, it is constant in time for every path winding once around the vortex ring). This circulation can be interpreted as a result of the viscous, rotational effects inside the core of the vortex ring.
Since the circulation is constant along the vortex ring, every section of a circular vortex ring induces velocity in the same direction on all the other regions of the vortex ring. Let's use cylindrical coordinates to describe the vortex ring and the velocity field, with circulation $\mathbf{\Gamma}(\mathbf{r}_1) = \Gamma \mathbf{\hat{\theta}}(\mathbf{r}_1)$. The velocity induced by the circulation at the point $\mathbf{r}_1$ on the point $\mathbf{r}_2$ has direction determined by the cross product $\mathbf{\Gamma}(\mathbf{r}_1) \times (\mathbf{r}_2 - \mathbf{r}_1) \propto d\mathbf{u}_1(\mathbf{r}_2)$. The overall induced velocity at the point $\mathbf{r}_2$ is the integral of all the contributions from the points of the vortex ring (integral over the coordinate $\mathbf{r}_1$). It should not be hard to prove that all these contributions are positive in the $\mathbf{\hat{z}}$-direction (try to draw it and use the right-hand rule for the cross product). | {
"domain": "physics.stackexchange",
"id": 90137,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, angular-momentum, momentum, vortex",
"url": null
} |
# Simple and Compound Interest Problem
• January 14th 2011, 01:41 AM
dumluck
Simple and Compound Interest Problem
Hi All,
Q: Shawn invested one half of his savings in a bond that paid simple interest for 2 years and received $550 as interest. He invested the remaining in a bond that paid compound interest, interest being compounded annually, for the same 2 years at the same rate of interest and received $605 as interest. What was the value of his total savings before investing in these two bonds?
1. $5500 2. $11000
3. $22000 4. $2750
5. $44000
Answer Explanation...
1. Interest for the first year of the simple interest bond is 550/2 = $275.
2. So we need to determine the rate of interest based on this so...
605 - 550 = 55. That's the difference between the interest earned on the simple vs compound interest bonds.
55/275 * 100/1 = 11/55 * 100/1 = 20% Interest
3. 275 represents 20% interest of a number
275/20 * 100/1 = 55/4 * 100/1 = $1375.
4. This represents half the money, so 1375*2 = $2750. (Option 4.)
My questions is: Why are we using 55. I.E. The difference between the two interest to determine the interest in 2. What does this 55 represent (besides the difference between the two?)
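A quick numerical check of the worked solution above (values taken from the explanation): the 55 is the extra interest compound earns in year 2 on year 1's interest, which is exactly $P r^2$, and dividing it by the year-1 interest $Pr = 275$ recovers the rate $r$.

```python
def simple_interest(P, r, years):
    return P * r * years

def compound_interest(P, r, years):
    return P * ((1 + r) ** years - 1)

P, r = 1375.0, 0.20          # half the savings and the 20% rate from the solution
si = simple_interest(P, r, 2)
ci = compound_interest(P, r, 2)

# ci - si = P*(1+r)^2 - P - 2*P*r = P*r^2, and 55/275 = (P*r^2)/(P*r) = r.
assert abs(si - 550.0) < 1e-9
assert abs(ci - 605.0) < 1e-9
assert abs((ci - si) - P * r * r) < 1e-9
print(si, ci, ci - si)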
• January 14th 2011, 09:20 AM
Soroban
Hello, dumluck!
I'm not impressed with their explanation.
Quote:
Q: Shawn invested one half of his savings in a bond
that paid simple interest for 2 years and received $550 as interest. He invested the remaining in a bond that paid compound interest, compounded annually, for the same 2 years at the same rate of interest and received$605 as interest.
What was the value of his total savings before investing in these two bonds?
. . $1.\;\$5500 \qquad 2.\;\$11000 \qquad 3.\;\$22000 \qquad 4.\;\$2750 \qquad 5.\;\$44000$
Let $\,r$ be the annual interest rate for both accounts.
Let $\,P$ be the amount invested in each account. | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9697854164256366,
"lm_q1q2_score": 0.8143617516361387,
"lm_q2_score": 0.839733963661418,
"openwebmath_perplexity": 3076.916411170652,
"openwebmath_score": 0.7637285590171814,
"tags": null,
"url": "http://mathhelpforum.com/business-math/168305-simple-compound-interest-problem-print.html"
} |
supernova, stellar-evolution, metallicity
Title: Why do type Ia supernovas produce more iron than type II My course book on astronomy states the following.
Older stars seem to have higher oxygen abundances than iron. The explanation is that back in the days when these older stars were being formed, type II supernovae were common, while type Ia were not. So later, when type Ia supernovae were becoming more common, the - now younger - stars were formed with higher iron abundances.
Why are type Ia supernovae better for iron enrichment than type II, and were these type II supernovae in any way better for higher oxygen abundances - or just less good at producing iron (and why)? Context
Iron has the highest nuclear binding energy per nucleon of all the elements (not completely true, but sufficiently accurate in an astronomical context). So, fusion of light elements into iron or something lighter is an exothermic process - you gain energy doing it, allowing the star to function. This is what happens in the last stages of a type II supernova. The core of a massive star in its last moments of life is hot and dense enough to fuse silicon into iron. Just before the supernova explosion, there is an iron ball of about 1.4 solar masses at the centre.
The progenitor of a supernova type Ia is a binary system where a "normal" star loses mass to a compact stellar remnant (a white dwarf). Once the white dwarf has accreted enough mass to be above a limit of 1.4 solar masses, fusion starts again, completely disintegrating the compact object.
Explosion
A SN Ia completely destroys the white dwarf progenitor in a runaway fusion process.
In a SN II, the pressure on the central iron ball exceeds the degeneracy pressure exerted by the electrons in the iron atoms' electron shells. The Pauli exclusion principle in quantum mechanics states that no fermion (such as an electron) may occupy the same quantum mechanical state as another. The pressure exerted here is so large that the electrons of the iron atoms can no longer obey it and are forced into the nucleus, where they react with the protons to form neutrons.
Iron abundance | {
"domain": "astronomy.stackexchange",
"id": 2305,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "supernova, stellar-evolution, metallicity",
"url": null
} |
field-theory, hamiltonian-formalism, solitons
Polyakov (1976), "Isomeric states of quantum fields," Sov. Phys. JETP, 41:988-995 (http://jetp.ac.ru/cgi-bin/dn/e_041_06_0988.pdf)
Bougie et al (2012), "Supersymmetric Quantum Mechanics and Solvable Models," Symmetry 4:452-473 (https://www.mdpi.com/2073-8994/4/3/452/htm)
Flügge (1999), Practical Quantum Mechanics, Springer (https://books.google.com/books?id=VpggN9qIFUcC&pg=PA94)
Alvarez-Castillo (2007), "Exactly Solvable Potentials and Romanovski Polynomials in Quantum Mechanics" (https://arxiv.org/abs/0808.1642)
Dereziński and Wrochna (2010), "Exactly solvable Schrödinger operators" (https://arxiv.org/abs/1009.0541) | {
"domain": "physics.stackexchange",
"id": 73739,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "field-theory, hamiltonian-formalism, solitons",
"url": null
} |
homework-and-exercises, solid-mechanics
Title: What machine element can we estimate a bicycle rim to be? In mechanical terms, what machine element can we consider a bicycle rim to be?
For example, can we design it based on the assumption that it is a curved beam or a hoop? In the mechanical engineering world, it is modeled as a hoop. There is a set of equations and analytical tools that deal with hoop stresses which have been applied to spoked wheels, and a large body of literature exists on them. In particular, see the papers by Jobst Brandt. | {
"domain": "physics.stackexchange",
"id": 72959,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, solid-mechanics",
"url": null
} |
javascript, jquery, optimization
Title: Setting variables inside if / else-if statement blocks I have written the code below and I am trying to work out if there is a more efficient way of doing it, i.e. fewer lines of code, quicker, etc.
I am also wondering whether it is OK to declare variables inside of an if ... else if statement.
function test() {
var x = 3;
if (x > 5) {
var msg = "m", state = "d";
} else if (x < 5) {
var msg = "a", state = "d";
} else {
var msg = "e", state = "n";
}
$('#id1').removeClass().addClass(state);
$('#id2').html("Some text " + msg + " and more text.");
$('#id3').attr('title', "Different text " + msg + " still different text.");
}
As background, the following code is the original code I had before refactoring/rewriting: | {
"domain": "codereview.stackexchange",
"id": 483,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, optimization",
"url": null
} |