The next figure shows one realization of a Poisson process $$(N_t)$$, with jumps at each new arrival.
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1234)
T = 5
Ws = np.random.exponential(size=T)
Js = np.cumsum(Ws)
Ys = np.arange(T)
fig, ax = plt.subplots()
ax.plot(np.insert(Js, 0, 0)[:-1], Ys, 'o')
ax.hlines(Ys, np.insert(Js, 0, 0)[:-1], Js, label='$N_t$')
ax.vlines(Js[:-1], Ys[:-1], Ys[1:], alpha=0.25)
ax.set(xticks=[],
yticks=range(Ys.max()+1),
xlabel='time')
ax.grid(lw=0.2)
ax.legend(loc='lower right')
plt.show()
## 2.3. Stationary Independent Increments
One of the defining features of a Poisson process is that it has stationary and independent increments.
This is due to the memoryless property of exponentials.
It means that
1. the variables $$\{N_{t_{i+1}} - N_{t_i}\}_{i \in I}$$ are independent for any strictly increasing finite sequence $$(t_i)_{i \in I}$$ and
2. the distribution of $$N_{t+h} - N_t$$ depends on $$h$$ but not $$t$$.
A detailed proof can be found in Theorem 2.4.3 of .
Instead of repeating this, we provide some intuition from a discrete approximation.
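As a concrete illustration of that discrete approximation, one can check numerically that the Binomial$(n, \lambda/n)$ distribution (one possible arrival per time slot) approaches the Poisson$(\lambda)$ distribution as $n$ grows. This is a sketch using scipy; the rate $\lambda = 2$ and the range of counts are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import binom, poisson

lam = 2.0           # arrival rate (arbitrary illustrative value)
n = 1000            # number of discrete time slots
ks = np.arange(11)  # counts at which to compare the two pmfs

# In each slot an arrival occurs with probability lam/n, so the total
# number of arrivals is Binomial(n, lam/n); it should be close to Poisson(lam).
binom_pmf = binom.pmf(ks, n, lam / n)
poisson_pmf = poisson.pmf(ks, lam)

print(np.abs(binom_pmf - poisson_pmf).max())  # small, and shrinks as n grows
```

Increasing `n` while holding `lam` fixed drives the maximum discrepancy toward zero, which is the content of the discrete approximation.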
In the discussion below, we use the following well known fact: if $$(\theta_n)$$ is a sequence such that $$n \theta_n$$ converges, then $$(1 - \theta_n)^n \to \exp\left(- \lim_{n \to \infty} n \theta_n \right)$$. | {
"domain": "quantecon.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9830850852465429,
"lm_q1q2_score": 0.8451226167552972,
"lm_q2_score": 0.8596637559030338,
"openwebmath_perplexity": 719.5748317107717,
"openwebmath_score": 0.9237521886825562,
"tags": null,
"url": "https://continuous-time-mcs.quantecon.org/poisson.html"
} |
orbital-mechanics, orbital-elements, positional-astronomy, python
# Setup assumed for this snippet (Skyfield library; the Mumbai coordinates are illustrative):
# from skyfield.api import load, wgs84
# ts = load.timescale(); time = ts.utc(2023, 1, 1)
# planets = load('de421.bsp')
# sun, earth, jupiter = planets['sun'], planets['earth'], planets['jupiter barycenter']
# mumbai = earth + wgs84.latlon(19.0760, 72.8777)
things = (sun, earth, jupiter, mumbai)
names = ('Sun', 'Earth', 'Jupiter', 'Mumbai')
positions = [thing.at(time).position.km for thing in things]
velocities = [thing.at(time).velocity.km_per_s for thing in things]
for name, position, velocity in zip(names, positions, velocities):
print(name, position, velocity)
print("relative to Earth's Geocenter")
for name, position, velocity in zip(names, positions, velocities):
print(name, position-positions[1], velocity-velocities[1])
print("relative to Mumbai")
for name, position, velocity in zip(names, positions, velocities):
print(name, position-positions[3], velocity-velocities[3]) | {
"domain": "astronomy.stackexchange",
"id": 3984,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "orbital-mechanics, orbital-elements, positional-astronomy, python",
"url": null
} |
special-relativity, approximations, differentiation, calculus
Well it all depends on precisely what you are doing. Examples:
If you are attempting to determine what the Lie algebra $\mathfrak{so}(3,1)$ of the Lorentz group $\mathrm{SO}(3,1)$ is, then you are doing something exact because the Lie algebra of the Lorentz group is obtained precisely by considering a general one-parameter family $\Lambda(\epsilon)$ of Lorentz transformations that "begin" at the identity, $\Lambda(0) = I$, and then determining the derivative of such a family at $\epsilon = 0$. The result $X$ is an element of the Lie algebra (up to a factor of $i$ depending on your conventions):
\begin{align}
\Lambda'(0) = X
\end{align}
How is this related to neglecting second-order terms and higher? Well recall that we can Taylor expand the family $\Lambda(\epsilon)$ as follows:
\begin{align}
\Lambda(\epsilon) = I + \epsilon\Lambda_1 + O(\epsilon^2)
\end{align}
And now, if we take the derivative with respect to $\epsilon$ and then set $\epsilon$ to zero, we obtain precisely the coefficient of the term first order in $\epsilon$: $\Lambda'(0) = \Lambda_1$, which is therefore an element of the Lie algebra of the Lorentz group.
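As a numerical sanity check (not part of the original argument), one can differentiate a one-parameter family of boosts at the identity and verify that the resulting $X$ satisfies the infinitesimal version of metric preservation, $X^{T}\eta + \eta X = 0$, which follows from $\Lambda^{T}\eta\Lambda = \eta$. The 1+1-dimensional case is used to keep the matrices small.

```python
import numpy as np

eta = np.diag([-1.0, 1.0])  # 1+1-dimensional Minkowski metric, signature (-, +)

def boost(eps):
    # One-parameter family of Lorentz boosts with boost(0) = identity
    return np.array([[np.cosh(eps), np.sinh(eps)],
                     [np.sinh(eps), np.cosh(eps)]])

# Each member preserves the metric: Lambda^T eta Lambda = eta
L = boost(0.3)
assert np.allclose(L.T @ eta @ L, eta)

# X = Lambda'(0), approximated here by a central difference
h = 1e-5
X = (boost(h) - boost(-h)) / (2 * h)

# Infinitesimal metric preservation: X^T eta + eta X = 0
print(np.abs(X.T @ eta + eta @ X).max())  # essentially zero
```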
If you are attempting to determine what happens if you perform a "small" Lorentz transformation, then considering a family $\Lambda(\epsilon)$ as above and only keeping the terms up to first order is an approximation, but it can be viewed as a more precise definition of "small."
If you are attempting to show that some object, like a term in a Lagrangian, or an action etc., is Lorentz-invariant, then often you only care to show that this invariance holds "infinitesimally," namely to first order. One reason for this is that such infinitesimal invariance is sufficient for the conclusions of Noether's theorem to hold, so in this context you don't so much care if the object being considered is invariant under a full Lorentz transformation. | {
"domain": "physics.stackexchange",
"id": 10332,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, approximations, differentiation, calculus",
"url": null
} |
navigation, exploration, turtlebot, gmapping, explore
/move_base
Outbound:
/rviz_1370012116847082561
/move_base
/rosout
/rviz_1370012116847082561
/rviz_1370012116847082561
/move_base
/navigation_velocity_smoother :
Inbound:
/navigation_velocity_smoother
/mobile_base_nodelet_manager
/mobile_base
/cmd_vel_mux
/bumper2pointcloud
Outbound:
/mobile_base_nodelet_manager
/mobile_base
/bumper2pointcloud
/navigation_velocity_smoother
/cmd_vel_mux
/rosout
/bumper2pointcloud :
Inbound:
/bumper2pointcloud
/mobile_base_nodelet_manager
/mobile_base
/cmd_vel_mux
/navigation_velocity_smoother
Outbound:
/rosout
/mobile_base_nodelet_manager
/mobile_base
/bumper2pointcloud
/navigation_velocity_smoother
/cmd_vel_mux
/master_sync_turtlebot_0196_3193_1407679111 :
Inbound:
Outbound:
/slam_gmapping :
Inbound:
/camera/camera_nodelet_manager
/slam_gmapping
/robot_state_publisher
/robot_pose_ekf
/mobile_base_nodelet_manager
Outbound:
/rosout
/rviz_1370012116847082561
/robot_pose_ekf
/move_base
/slam_gmapping
/explore
/rviz_1370012116847082561
/move_base
/mobile_base :
Inbound:
/mobile_base
/mobile_base_nodelet_manager
/cmd_vel_mux
/bumper2pointcloud
/navigation_velocity_smoother
Outbound:
/mobile_base_nodelet_manager
/mobile_base
/bumper2pointcloud
/navigation_velocity_smoother
/cmd_vel_mux
/rosout
/rosout :
Inbound:
/zeroconf/zeroconf_avahi
/robot_state_publisher
/diagnostic_aggregator
/mobile_base | {
"domain": "robotics.stackexchange",
"id": 14378,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, exploration, turtlebot, gmapping, explore",
"url": null
} |
c++, converting
// Split
vow_v = split(vow_tmp.c_str(),'*');
con_v = split(con_tmp.c_str(),'*');
// Put consonants and vowels (form is CVCV for each 3 bytes of data)
uint32_t np = vow_v.size()+con_v.size();
for (i=0;i<np;i++)
{
if (i%2 == 0) all_v.push_back(con_v[i/2]);
if (i%2 == 1) all_v.push_back(vow_v[i/2]);
}
// Put padding
for (i=0;i<padc;i++)
all_v.push_back("?");
uint32_t vlen = all_v.size();
i = 0;
while (vlen-- && (all_v[si] != "?"))
{
// Find which position is the one that corresponds to cons. and vow.
if (si%2 == 0) ca4[i++] = (uint8_t)string_find(cons,con_v[si/2],128);
if (si%2 == 1) ca4[i++] = (uint8_t)string_find(vows,vow_v[si/2], 32);
si++;
if (i == 4)
{
ca4_to_ca3(ca4,ca3);
for (i=0;i<3;i++)
{
if (i == 0) out += ca3[0];
if (i == 1) out += ca3[1];
if (i == 2) out += ca3[2];
}
i = 0;
}
}
if (i)
{
for (j=i;j<4;j++)
ca4[j] = 0;
ca4_to_ca3(ca4,ca3); | {
"domain": "codereview.stackexchange",
"id": 22622,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, converting",
"url": null
} |
catkin, dependencies, publisher, linking, subscribe
geometry_msgs::Twist getCommandMsg() {
geometry_msgs::Twist msg;
// body coordinates
// x is forward linear motion
msg.linear.x = DefaultVelocityCmd;
// rotation around the Z axis
msg.angular.z = DefaultTurnRateCmd;
// all other parameters are ignored in the Twist message
// NOTE:: args[0] is the name of the
if (incoming.linear.x) {
try {
msg.linear.x = incoming.linear.x;
msg.linear.x = std::max(msg.linear.x, -MaxVelocityCommand);
msg.linear.x = std::min(msg.linear.x, MaxVelocityCommand);
}
catch (...) {
// could not parse the input, use default value
msg.linear.x = DefaultVelocityCmd;
}
}
if (incoming.angular.z) {
try {
msg.angular.z = incoming.angular.z;
msg.angular.z = std::max(msg.angular.z, -MaxTurnRateCommand);
msg.angular.z = std::min(msg.angular.z, MaxTurnRateCommand);
}
catch (...) {
// could not parse the input, use default value
msg.angular.z = DefaultTurnRateCmd;
}
}
return msg;
}
Sorry about the wonky formatting... I'm still trying to get used to this posting stuff. Please let me know if you need more info! And thanks for the help!!!
Here is my CMake file, as requested!
cmake_minimum_required(VERSION 2.8.3)
project(vel_cmd_filter)
find_package(catkin REQUIRED COMPONENTS
j5_msgs
roscpp
rospy
std_msgs
genmsg
)
catkin_package(
INCLUDE_DIRS include
LIBRARIES vel_cmd_filter
CATKIN_DEPENDS j5_msgs roscpp rospy std_msgs
DEPENDS system_lib
)
include_directories(
${catkin_INCLUDE_DIRS}
) | {
"domain": "robotics.stackexchange",
"id": 23514,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "catkin, dependencies, publisher, linking, subscribe",
"url": null
} |
c++, beginner, object-oriented, datetime, homework
std::cout << "Enter the year: ";
year = getSanitizedNum();
}
This should not be a member function of the Date class. Objects should have a single responsibility (this is the Single Responsibility Principle). Objects with only a single responsibility are easier to maintain in the long run. Here, your Date class is concerned with representing a date and gathering inputs. The second responsibility should be split out. This would be better as a stand-alone function that created and returned a new Date. This means that you are going to need a constructor too (will show the code for it below).
/**
* Makes sure data isn't malicious, and signals user to re-enter proper data if invalid
*/
unsigned getSanitizedNum()
{
unsigned input = 0;
while(!(std::cin >> input))
{
// clear the error flag that was set so that future I/O operations will work correctly
std::cin.clear();
// skips to the next newline
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
std::cout << "Invalid input. Please enter a positive number: ";
}
return input;
}
This function is not particularly flexible, and is not necessarily offering that much safety. For example, it is used to read in a month, which should be between 1 and 12, but it will accept any positive integer. Also, it does not repeat the prompt on invalid input. Repeating the prompt is more user friendly so that a user who may have mis-typed doesn't have to look several lines up to remember what they are entering. To make this more flexible, as well as safer, we can introduce some parameters:
unsigned int getSanitizedNum(std::string prompt, unsigned int min, unsigned int max)
{
unsigned int input = 0; | {
"domain": "codereview.stackexchange",
"id": 11034,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, object-oriented, datetime, homework",
"url": null
} |
quantum-mechanics, quantum-spin, interference, measurement-problem
Title: Stern Gerlach and interference
I recently came across this experiment: a beam of spin 1/2 particles passes through a Stern Gerlach apparatus oriented in the z direction. After passing through it and splitting, the beams are again merged into one with another magnetic field. First part of the experiment in the z direction. | {
"domain": "physics.stackexchange",
"id": 81759,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-spin, interference, measurement-problem",
"url": null
} |
datetime, typescript, browser-storage
localStorage
? NotificationsStorage
? (NotificationsStorage.lastCheck && NotificationsStorage.lastCheck > today)
? (
this.lastCheck = NotificationsStorage.lastCheck,
localStorage.setItem('NotificationsStorage', JSON.stringify({ lastCheck: now }))
)
: (
localStorage.setItem('NotificationsStorage', JSON.stringify({ lastCheck: today })),
this.lastCheck = today
)
: (
localStorage.setItem('NotificationsStorage', JSON.stringify({ lastCheck: today })),
this.lastCheck = today
)
: console.warn('Current browser doesn\'t support Local Storage API');
Whether you use ternaries or classic if/else you can often make your nesting easier to read by inverting your conditions, i.e.
! window.localStorage
? console.warn('Current browser doesn\'t support Local Storage API')
: NotificationsStorage
&& NotificationsStorage.lastCheck
&& NotificationsStorage.lastCheck > today
? (
this.lastCheck = NotificationsStorage.lastCheck,
localStorage.setItem('NotificationsStorage', JSON.stringify({ lastCheck: now }))
)
: (
localStorage.setItem('NotificationsStorage', JSON.stringify({ lastCheck: today })),
this.lastCheck = today
)
Although you can't do it with a ternary operator, I also usually prefer to use early returns to limit nesting. This also avoids the issue that you access localStorage in your first line even though it might not exist.
if (! window.localStorage) {
console.warn('Current browser doesn\'t support Local Storage API');
return;
} | {
"domain": "codereview.stackexchange",
"id": 30507,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "datetime, typescript, browser-storage",
"url": null
} |
c#, bitwise, .net-2.0
if (bitIndex == 7)
{
return value.IsBitSetAtIndex(7) == second.IsBitSetAtIndex(7);
}
var maskIndex = (byte) (6 - bitIndex);
return value.MaskRight(maskIndex) == second.MaskRight(maskIndex);
}
/// <summary>
/// Determine if the byte arrays share a common prefix from left-to-right. Indexing is 0-based.
/// </summary>
/// <param name="first">The first byte array.</param>
/// <param name="second">The byte array to compare against.</param>
/// <param name="bitIndex">The bit index to check up to.</param>
/// <returns></returns>
/// <exception cref="IndexOutOfRangeException">
/// Thrown when the bit index is greater than the number of bits in the smallest byte array.
/// </exception>
public static bool HasSamePrefix(
this byte[] first
, byte[] second
, int bitIndex
)
{
var maxIndex = Math.Min(first.Length, second.Length) - 1;
var endByteIndex = bitIndex / 8;
var byteIndex = 0;
if (endByteIndex > maxIndex)
{
throw new IndexOutOfRangeException("Bit index exceeds bit index of smallest array.");
}
for (; byteIndex < endByteIndex; byteIndex++)
{
if (first[byteIndex] != second[byteIndex])
{
return false;
}
}
return first[byteIndex].HasSamePrefix(
second[byteIndex]
, (byte) (7 - bitIndex % 8)
);
}
}
}
ShiftRight does not need to be a series of shift-by-1 operations.
Another approach is:
shift whole bytes, by a distance of distance / 8
shift the bytes by distance % 8, while shifting in bits from the next byte | {
"domain": "codereview.stackexchange",
"id": 36072,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, bitwise, .net-2.0",
"url": null
} |
A: the die shows 5.
$P(A \mid E_1) = \frac16$
$P(A \mid E_2) = \frac18$
By Bayes Theorem,
$P(E_1 \mid A) = \frac{P(E_1)\,P(A \mid E_1)}{P(E_1)\,P(A \mid E_1) + P(E_2)\,P(A \mid E_2)}$
Put values to get the answer.
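Assuming the unstated prior is a fair coin, $P(E_1) = P(E_2) = \frac12$ (an assumption not given in the excerpt), the arithmetic can be checked with exact fractions:

```python
from fractions import Fraction

p_e1 = Fraction(1, 2)    # assumed prior: fair coin picks the 6-sided die
p_e2 = Fraction(1, 2)    # assumed prior: fair coin picks the 8-sided die
p_a_e1 = Fraction(1, 6)  # P(A|E1): rolling a 5 on the 6-sided die
p_a_e2 = Fraction(1, 8)  # P(A|E2): rolling a 5 on the 8-sided die

# Bayes' theorem
p_e1_a = (p_e1 * p_a_e1) / (p_e1 * p_a_e1 + p_e2 * p_a_e2)
print(p_e1_a)  # 4/7
```

With equal priors they cancel, so the posterior is just $\frac{1/6}{1/6 + 1/8} = \frac{4}{7}$.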
It is important to know whether ${\tt H}$ chooses the 6-sided or the 8-sided die. $$\begin{cases} \Pr(5|{\tt H})=\frac{1}{6}\\ \Pr(5|{\tt T})=\frac{1}{8} \end{cases}$$ in the former case, and $$\begin{cases} \Pr(5|{\tt H})=\frac{1}{8}\\ \Pr(5|{\tt T})=\frac{1}{6} \end{cases}$$ in the latter.
From the Bayes rule: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9719924802053235,
"lm_q1q2_score": 0.8078647884277755,
"lm_q2_score": 0.8311430436757313,
"openwebmath_perplexity": 953.4520843817892,
"openwebmath_score": 0.5460915565490723,
"tags": null,
"url": "https://math.stackexchange.com/questions/2085953/conditional-probability-of-flipping-a-coin-and-throw-a-dice"
} |
ros, turtlebot-calibration
Title: turtlebot_calibration isn't working as expected
My robot isn't a turtlebot, but it's very close / uses many turtlebot parts.
I'm trying to run the turtlebot calibration routine. On the first step, it rotates 360 degrees. On the second, it only makes it half way, 180 degrees.
Alternatively, I can set odom_angular_scale_correction so that it goes 2 full circles on the first step while the other speeds only go one.
Any idea what would cause the lower speeds vs. the higher speeds to be completely off?
[edit] I replaced the create and the power/gyro board without any changes, so it's probably software [/edit]
Originally posted by Murph on ROS Answers with karma: 1033 on 2012-04-01
Post score: 1
Original comments
Comment by Murph on 2012-04-02:
Dropping the update_rate of turtlebot_node to 10 instead of 30 made it perform a lot better in the calibration, but the rotation still seems way off when I actually drive the robot around..
It is supposed to turn 720 degrees the first time, then 360 degrees the following times.
Originally posted by tfoote with karma: 58457 on 2012-04-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8822,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, turtlebot-calibration",
"url": null
} |
python, beginner, python-3.x, machine-learning, neural-network
@staticmethod
def ReLU_deriv(Z):
return Z > 0
@staticmethod
def one_hot(Y):
one_hot_Y = np.zeros((Y.size, Y.max() + 1))
one_hot_Y[np.arange(Y.size), Y] = 1
return one_hot_Y.T
@staticmethod
def calc_diff(dz, t):
return MINV * dz @ t, MINV * np.sum(dz, axis=1).reshape(-1, 1)
def __init__(self, X, Y, a):
self.X = X
self.Y = Y
self.a = a
self.init_parameters()
def init_parameters(self):
self.weights = [
np.random.normal(size=size) * scale for size, scale in PARAMETERS
]
def forward_prop(self):
Z1 = self.W1 @ self.X + self.b1
A1 = Neural_Network.ReLU(Z1)
Z2 = self.W2 @ A1 + self.b2
A2 = Neural_Network.softmax(Z2)
self.corrections = [Z1, A1, Z2, A2]
def backward_prop(self):
dZ2 = self.A2 - Neural_Network.one_hot(self.Y)
dW2, db2 = Neural_Network.calc_diff(dZ2, self.A1.T)
dZ1 = self.W2.T @ dZ2 * Neural_Network.ReLU_deriv(self.Z1)
dW1, db1 = Neural_Network.calc_diff(dZ1, self.X.T)
self.deltas = [self.a * i for i in (dW1, db1, dW2, db2)]
def update_parameters(self):
self.weights = [a - b for a, b in zip(self.weights, self.deltas)]
def get_predictions(self):
self.predictions = np.argmax(self.A2, 0) | {
"domain": "codereview.stackexchange",
"id": 44691,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-3.x, machine-learning, neural-network",
"url": null
} |
javascript, jquery, canvas
const zoomSettings = [
{widths: [400, 600, 1400, 1600, 1800], zoom: 1.1},
{widths: [800, 1000, 1200], zoom: 1.2}
];
const getZoom = (width) => {
var zoom = zoomSettings.find(arr => {
return arr.widths.some(val => width >= val && width <= val + 30);
});
if (zoom) { return zoom.zoom }
return 1;
}
function query (query) { return document.querySelector(query) }
const eLetter = query("#Eletter");
const letteringL = query("#LetteringL");
eLetter.addEventListener("keyup", keyUpEvent);
function keyUpEvent(event) {
const obj = canvas.getActiveObject();
if (obj) {
const widc = Math.round(obj.getWidth());
if (event.target.value.length > letteringL.value) {
canvas.setZoom(canvas.getZoom() / getZoom(widc));
} else {
canvas.setZoom(canvas.getZoom() * getZoom(widc));
}
}
canvas.getActiveObject().setText(event.target.value);
canvas.renderAll();
letteringL.value = event.target.value.length;
}
<script src = "https://cdnjs.cloudflare.com/ajax/libs/fabric.js/1.7.20/fabric.min.js" ></script>
<div class="wrapper-canvas">
<canvas class="" id="editor" width="1000" height="auto"></canvas>
</div>
<input class="text-input" type="text" id="Eletter" value="Your Text">
<input id="LetteringL" type="hidden" value=""> | {
"domain": "codereview.stackexchange",
"id": 28360,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, canvas",
"url": null
} |
homework-and-exercises, special-relativity, dirac-matrices, clifford-algebra
\end{align*}
When taking into consideration that $\omega$ is infinitesimal, the second identity is trivial. Neglecting higher-order terms in $\omega$ and using $(1)$ yields
\begin{align*}
\left(1+\frac{i}{2}\omega_{\rho\sigma}~S^{\rho\sigma}\right)\gamma^\mu\left(1-\frac{i}{2}\omega_{\rho\sigma}~S^{\rho\sigma}\right) &= \gamma^\mu + \frac{i}{2}\omega_{\rho\sigma}[S^{\rho\sigma}, \gamma^\mu]\\
&= \left(1-\frac{i}{2}\omega_{\rho\sigma}~\mathcal{J}^{\rho\sigma}\right)^{\mu}{}_{\nu}~\gamma^{\nu}
\end{align*} | {
"domain": "physics.stackexchange",
"id": 31790,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, special-relativity, dirac-matrices, clifford-algebra",
"url": null
} |
bioinformatics, molecular-genetics, genomics
Title: How can I download a gene sequence from GenBank (NCBI)?
Could you tell me the steps to find and download a gene sequence from GenBank? I would appreciate your help.
Go to the NCBI website, fill in the search field with relevant information in the upper part of the page, select "nucleotide" from the drop-down menu just to the left of the search field, and click search. You will get a list of items; clicking the "FASTA" link below any of them will bring up the corresponding sequence.
Depending on your interest, you can choose other databases from that menu, and some of them are interlinked. For example you can do the search by selecting "gene" instead of "nucleotide" and when displaying a selected gene info, you can see the links that lead to nucleotide database for getting the sequence. | {
"domain": "biology.stackexchange",
"id": 2787,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bioinformatics, molecular-genetics, genomics",
"url": null
} |
harmonic-oscillator, differential-equations, models, coupled-oscillators
Edit 1:
I have anchored the second point mass to the leftmost point L as suggested. Therefore my equations looks like such:
$$x_1''(t) = \frac{-C_1}{M_1}*x_1(t) + \frac{C}{M_1}*(x_2(t)-x_1(t)) - \frac{B}{M_1}*\frac{x_1'(t)}{|x_1'(t)|}$$
$$x_2''(t) = \frac{-C_2}{M_2}*(x_2(t)-L) - \frac{C}{M_2}*(x_2(t)-x_1(t)) - \frac{B}{M_2}*\frac{x_2'(t)}{|x_2'(t)|}$$
Under the influence of sliding friction alone, should my masses come to rest eventually? When I solve the system for B>>1 the masses oscillate as if they were undamped -- why might this be?
You need to relate the $C_2$ spring force to the position of mass 2 relative to the anchoring point on the right wall. Currently it is anchored to the origin. So, try
$$
(C_2/M_2) (L-x_2(t))
$$
where $L$ is the coordinate of the right wall. | {
"domain": "physics.stackexchange",
"id": 52434,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "harmonic-oscillator, differential-equations, models, coupled-oscillators",
"url": null
} |
machine-learning, neural-network, time-series, rnn
Title: EEG data layout for RNN
How should one structure an input data matrix (containing EEG data) for an RNN?
Normally, RNNs are presented as language models where you have a one hot vector indicating the presence of a word. So if your input was the sentence "hello how are you", you would have 4 one hot vectors (I think):
[1, 0, 0, 0]
[0, 1, 0, 0]
[0, 0, 1, 0]
[0, 0, 0, 1] | {
"domain": "datascience.stackexchange",
"id": 1547,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, neural-network, time-series, rnn",
"url": null
} |
rust, numerical-methods
for i in 1..NT {
// RK4 scheme
k1 = g[i - 1];
l1 = -f[i - 1];
k2 = k1 + l1 * dt / 2.0;
l2 = l1 - k1 * dt / 2.0;
k3 = k1 + l2 * dt / 2.0;
l3 = l1 - k2 * dt / 2.0;
k4 = k1 + l3 * dt;
l4 = l1 - k3 * dt;
// next values
f[i] = -l1 + (k1 + 2.0 * k2 + 2.0 * k3 + k4) * dt / 6.0;
g[i] = k1 + (l1 + 2.0 * l2 + 2.0 * l3 + l4) * dt / 6.0;
}
let mut file_path = PathBuf::from("results");
fs::create_dir_all(&file_path)?; //note that the `?` postfix operator is for error propagation, see the previously linked book chapter for details
file_path.push("RK4.dat");
let mut my_file = fs::File::create(&file_path)?;
for j in 0..NT {
writeln!(my_file, "{} {} {}", t[j], fe[j], f[j])?;
}
my_file.flush()?;
Ok(())
}
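As a quick cross-check of the scheme above, the same RK4 update can be written in Python. This is a sketch: the initial conditions f(0) = 1, g(0) = 0 are assumed (they are not shown in the excerpt), which makes the exact solution f = cos t, g = -sin t.

```python
import numpy as np

def rk4_oscillator(nt=1000, dt=0.01):
    # Same system as the Rust code: f' = g, g' = -f (harmonic oscillator),
    # with assumed initial conditions f(0) = 1, g(0) = 0.
    f = np.empty(nt)
    g = np.empty(nt)
    f[0], g[0] = 1.0, 0.0
    for i in range(1, nt):
        k1 = g[i - 1];           l1 = -f[i - 1]
        k2 = k1 + l1 * dt / 2;   l2 = l1 - k1 * dt / 2
        k3 = k1 + l2 * dt / 2;   l3 = l1 - k2 * dt / 2
        k4 = k1 + l3 * dt;       l4 = l1 - k3 * dt
        f[i] = f[i - 1] + (k1 + 2*k2 + 2*k3 + k4) * dt / 6
        g[i] = g[i - 1] + (l1 + 2*l2 + 2*l3 + l4) * dt / 6
    return f, g

f, g = rk4_oscillator()
t = np.arange(1000) * 0.01
print(np.abs(f - np.cos(t)).max())  # on the order of the RK4 truncation error
```

The maximum deviation from cos t stays at the fourth-order truncation-error level, which confirms the k/l staging used in the loop.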
Playground
Your code is pretty good overall, especially considering you've only been using Rust for a day; the issues primarily come from the fact that file systems suck and code examples love to avoid proper error handling. Anyway, since you're interested in numerical modeling, here are a few crates that might be of interest: | {
"domain": "codereview.stackexchange",
"id": 43921,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rust, numerical-methods",
"url": null
} |
ros, inverse-kinematics, moveit, ros-melodic
Title: UR5e + 2F_85 gripper moveit configuration
Hi everyone,
I am using the default setup of universal robots repo (calibration-devel branch of fmauch) and the ur_robot_driver (melodic branch) of Universal_Robots_ROS_Driver.
As I have a gripper (Robotiq 2F-85 model) attached to my robot and I need the robot to do some tasks with Inverse Kinematics, it would be perfect to have a MoveIt configuration with the merge of both robot and gripper.
I saw that the universal_robot repo unfortunately doesn't provide any configuration like this; can you please explain how to do it properly, or share an already existing package in case there is one?
Or do you know a way to properly modify the URDF file with a new frame for the gripper? In such a way that the IK works with respect to that frame instead of tool0?
Thank you very much!! I would appreciate a lot your help!!!
Originally posted by francirrapi on ROS Answers with karma: 18 on 2022-06-18
Post score: 0
This repository uses two UR5e with Robotiq grippers to perform assembly tasks. You can check its moveit_config and description package to see how to connect the robot and gripper.
The steps in short:
Add the robot and gripper into a URDF scene
(Optional) In the URDF, define a frame called gripper_tip as a child of the robot's end-effector or flange link, offset it to where the tip of the gripper would be when closed. This is useful to define your IK chain later.
Run MoveIt Setup Assistant, following the tutorial and setting the end effector link to your gripper_tip frame
Originally posted by fvd with karma: 2180 on 2022-06-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37777,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, inverse-kinematics, moveit, ros-melodic",
"url": null
} |
dna, molecular-genetics, mutations
Title: gene inversion and DNA directionality The directionality of the DNA goes from the 3-prime end to the 5-prime end.
Thus, the inversion of a gene would connect a 5-prime to a 5-prime. How could that be?
Maybe inverting a gene also switches between the two strands of the DNA?
Thus, if one stand goes from 5 to 3, and the other goes from 3 to 5,
Inversion would make the first go from 3 to 5, and the other from 5 to 3,
And then each strand would be able to connect to the other?
Does inversion both change direction and switch strands?
If not, then how is gene inversion possible?
Thanks!
edit:
My main question arises from the contradiction between:
1) The DNA has a direction (3-prime to 5-prime).
2) inversion inverts the direction.
Thus, my question is: How can an inversion mutation happen?
How can a piece of the DNA, after being inverted (i.e., it is now in the wrong direction), connect back to the DNA?
As you know, chromosomal DNA has two strands, with each strand running in opposite 5' <-> 3' directions. When a chromosomal segment is inverted, both the 5'->3' sequence and its complementary 3'->5' sequence from the other strand are inverted together. When reattached to the two strands it came from, the segment will have effectively switched its orientation on both strands by switching strands. This happens exactly because a 5' end must mate with a 3' end. This diagram should help make this clear (although it shows a duplication with inversion instead of an inversion in place):
Source: Wikipedia Commons | {
"domain": "biology.stackexchange",
"id": 10649,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "dna, molecular-genetics, mutations",
"url": null
} |
quantum-mechanics, energy, hilbert-space, operators, wavefunction
Title: Why does applying the kinetic energy operator to a free particle result in a divergent integral?
The wavefunction of a free particle is just
$$\psi = Ae^{i(kx-\omega t)}$$
and when you plug this into the Schrodinger equation you get the dispersion relation
$$E = \frac{\hbar^2 k^2}{2m}$$
However, using the kinetic energy operator to get the expected value for the kinetic energy leads to a divergent integral
$$\hat{T} = \frac{-\hbar^2}{2m} \frac{\partial^2}{\partial x^2}$$
$$\left< T \right> = \int_{-\infty}^{\infty}\psi^*\hat{T}\psi dx$$
$$\left< T \right> = \frac{A^2\hbar^2k^2}{2m} \cdot \int_{-\infty}^{\infty} dx$$
Why doesn't this approach work?
The equation for the expectation value of an operator:
$$ \langle\hat A\rangle = \int \psi^* \hat A \psi ~ d^3x $$
assumes that the wavefunction $\psi$ is normalised. If it is not normalised you need to use:
$$ \langle\hat A\rangle = \frac{\int \psi^* \hat A \psi ~ d^3x}{\int \psi^* \psi ~ d^3x} $$
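To see that a normalizable state does give a finite answer, here is a sketch with sympy (assuming $\hbar = m = 1$ and a normalized Gaussian packet of width $\sigma$; this example is not from the original answer):

```python
import sympy as sp

x = sp.symbols('x', real=True)
k, sigma = sp.symbols('k sigma', positive=True)

# Normalized Gaussian wave packet with mean momentum k (hbar = m = 1)
psi = (1 / (sp.pi * sigma**2))**sp.Rational(1, 4) \
    * sp.exp(-x**2 / (2 * sigma**2)) * sp.exp(sp.I * k * x)

# <T> = integral of psi* (-1/2 d^2/dx^2) psi over the whole line
T_psi = -sp.Rational(1, 2) * sp.diff(psi, x, 2)
integrand = sp.simplify(sp.conjugate(psi) * T_psi)
expect_T = sp.simplify(sp.integrate(integrand, (x, -sp.oo, sp.oo)))

print(expect_T)  # equals k**2/2 + 1/(4*sigma**2): finite for every finite sigma
```

As $\sigma \to \infty$ the packet approaches the plane wave and the width-dependent term vanishes, leaving the expected $k^2/2$; the divergence in the original calculation comes purely from the non-normalizable limit.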
The problem is that the infinite plane wave is not a physical state and cannot be normalised, so you can't simply calculate its kinetic energy. This isn't a problem because the infinite plane wave would describe an infinitely delocalised particle and in real life no particles are infinitely delocalised. A real particle would have a wavefunction that is a wave packet of some form. | {
"domain": "physics.stackexchange",
"id": 87938,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, energy, hilbert-space, operators, wavefunction",
"url": null
} |
$$\log_c(a^b) = b\log_c(a)$$
This property is known as the “logarithm power rule”.
Your question is about the specific case where $$b = \dfrac12$$. You can see that it’s true by rewriting $$\ln(\sqrt{a})$$ and then using the logarithm property, like this:
$$\ln(\sqrt{a}) = \ln\left(a^{1/2}\right) = \frac{\ln a}{2}$$
The proof of the rule is as follows:
$$a = c^{\log_c(a)} \tag*{Exponentiation as inverse of \log}$$ $$a^b = \left(c^{\log_c(a)}\right)^b \tag*{Each side to the power of b}$$ $$a^b = c^{b\log_c(a)} \tag*{Power rule of exponentiation}$$ $$\log_c\left(a^b\right) = \log_c\left(c^{b\log_c(a)}\right) \tag*{\log_c of both sides}$$ $$\boxed{\log_c\left(a^b\right) = b\log_c(a)} \tag*{\log as inverse of exponentiation}$$
$$\ln x = \log_e x$$. Multiply $$\frac{\ln(a)}{2} = \ln(\sqrt{a})$$ by $$2$$ to get $$\ln(a) = 2\ln(\sqrt{a})$$, and by the log rule $$x\log_a b = \log_a b^x$$, we get $$\ln(a) = \ln(\sqrt{a}^2)$$, which is obviously true.
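As a quick numerical sanity check of the special case and of the general power rule (the sample values below are arbitrary):

```python
import math

a = 7.3

# Special case b = 1/2: ln(sqrt(a)) == ln(a)/2
assert math.isclose(math.log(math.sqrt(a)), math.log(a) / 2)

# General power rule log_c(a**b) == b*log_c(a), here with c = 10
for b in (0.5, 2, 3.7, -1.25):
    assert math.isclose(math.log10(a**b), b * math.log10(a))

print("log power rule verified")
```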
-FruDe | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9780517475646369,
"lm_q1q2_score": 0.8332539341605781,
"lm_q2_score": 0.851952809486198,
"openwebmath_perplexity": 187.4747005732582,
"openwebmath_score": 0.9344634413719177,
"tags": null,
"url": "https://math.stackexchange.com/questions/3761608/is-it-true-that-frac-lna2-ln-sqrta-for-a0-in-particular-is-fr/3761619"
} |
javascript
}
function offHover2()
{
$(".img-op1").attr('src', 'http://reneinla.com/tasha/style/images/stills/FORD-SUPER-BOWL.jpg');
}
function onHover3()
{
$(".img-ford1").attr('src', 'http://reneinla.com/tasha/style/images/gifs/giphy.gif');
}
function offHover3()
{
$(".img-ford1").attr('src', 'http://reneinla.com/tasha/style/images/stills/OPERATOR.jpg');
}
UPDATE
Thanks for the help!
I was able to do some research and found something that works.
new fiddle I think you could reduce your code a lot and make it clearer and more effective.
First of all this part you could just remove without concerns:
$(document).ready(function () {
onHover();
offHover();
onHover1();
offHover1();
onHover2();
offHover2();
onHover3();
offHover3();
});
The reason is that those functions are intended to be called when events trigger. That is the logical reason.
The functional reason is that at the end you have the same situation as at the beginning.
If your intention was to provide some sort of effect here, it is better to have a dedicated function for this. First of all because those kinds of effects could change over time for reasons different from the event handlers.
The second reason is that the meaning is much more evident.
I think you could put all the URLs in your HTML using data attributes:
<figure>
<img src="#" alt="" class="img-ford"
data-over="/tasha/style/images/gifs/giphy.gif"
data-out="/tasha/style/images/stills/FORD-SUPER-BOWL.jpg"/>
</figure> | {
"domain": "codereview.stackexchange",
"id": 26541,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript",
"url": null
} |
c++, algorithm, security, cryptography, aes
for (size_t i=0; i<4; ++i){
for (size_t j=0; j<4; ++j){
byte = state[i][j];
result[i][j] = lookupByte(byte);
}
}
return result;
}
Generally, writing naked for loops in modern C++ is frowned upon. Instead, you should use algorithms.
Let’s assume the state is a flat array<int, 16> instead of a 2D array. In that case, this function could be:
constexpr auto subBytes(std::array<int, 16> state)
{
std::ranges::transform(state, state.begin(), lookupByte);
return state;
}
Obviously, things are more complicated with a 2D array… but that’s really a hint that you shouldn’t be using 2D arrays.
On to the row shift:
array<int,4> shiftRow(const array<int,4>& row, const int shift){
//Recursive function to shft a row left by given value
array<int,4> result = {};
result = row;
if(shift){
//Shift by 1
int temp = result[0];
for (size_t i=0; i<3; ++i){
result[i] = result[i+1];
}
result[3] = temp;
//reduce shift and perform again
result = shiftRow(result, shift -1);
}
else{
return result;
}
}
This is a place where using an algorithm would really help, for a couple of reasons.
First, the simplicity. Check it out:
constexpr auto shiftRow(array<int, 4> row, int const shift) noexcept
{
std::ranges::rotate(row, row.begin() + shift);
return row;
} | {
"domain": "codereview.stackexchange",
"id": 43329,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, security, cryptography, aes",
"url": null
} |
# Problem with definition of complexification $V \otimes_{\mathbb{R}}\mathbb{C}$
Let $V$ be a finite dimensional real vector space. According to Wikipedia, the complexification of $V$ is defined to be $$V \otimes_{\mathbb{R}}\mathbb{C}$$ This can be made into a complex vector space by defining complex scalar multiplication by $$\lambda( v \otimes z) := v\otimes (\lambda z) \qquad v \in V,\lambda,z \in \mathbb{C}$$ Hence we define it on simple tensors only. So one has to check that it fulfills the axioms for scalar multiplication. Take another tensor $u \otimes w$, where $u \in V$ and $w \in \mathbb{C}$. It is well known that $v \otimes z + u \otimes w$ need not be simple. So how can one compute $$\lambda(v \otimes z + u \otimes w)$$ ?
I mean, $V \otimes_{\mathbb{R}}\mathbb{C}$ is a real vector space, and with the above we want to make it into a complex one. How exactly does one show the distributive property? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9835969684454967,
"lm_q1q2_score": 0.8300526626278274,
"lm_q2_score": 0.8438951005915208,
"openwebmath_perplexity": 129.95350032957975,
"openwebmath_score": 0.9514215588569641,
"tags": null,
"url": "https://math.stackexchange.com/questions/2414774/problem-with-definition-of-complexification-v-otimes-mathbbr-mathbbc/2414796"
} |
c#, object-oriented, generics, classes
What changes have been made to the code since the last question?
The previous question focused on the concatenation methods, and the main idea here is to implement a SubPlane method which can extract a specific region of the data block from given parameters.
Why is a new review being asked for?
Although the above version of the SubPlane method seems to work well, I am not sure whether copying the data element-by-element with a for loop is efficient, and I am not sure how to construct the output sub-plane with Array.Copy in this situation. If there is any better idea, please let me know. A few points:
The default value of int is zero, so public int Width { get; } = 0; is unnecessary.
Width = Math.Max(width, 0); — what is the reason for doing this? Isn't it going to give the same result as Width = width;, or are there cases where the values would differ?
In the third constructor, if (sourceGrid == null) { return; } is not recommended; either throw an ArgumentNullException or invoke the default constructor instead.
public T this[int x, int y]
{
get
{
if (x < 0 || x >= Width)
{
throw new ArgumentOutOfRangeException(nameof(x));
}
if (y < 0 || y >= Height)
{
throw new ArgumentOutOfRangeException(nameof(y));
}
return Grid[x, y];
}
set
{
if (x < 0 || x >= Width)
{
throw new ArgumentOutOfRangeException(nameof(x));
}
if (y < 0 || y >= Height)
{
throw new ArgumentOutOfRangeException(nameof(y));
}
Grid[x, y] = value;
}
}
is equal to this :
public T this[int x, int y] { get; set; }
Unless you implement a custom validation, your current indexer will be redundant, because it replicates the indexer default behavior. Mostly, you need to implement your requirements within the setter, but the getter in most cases doesn't need more than default behavior. As you're required to handle what's going inside your class (setter), but not what goes out of it (getter). | {
"domain": "codereview.stackexchange",
"id": 39787,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, object-oriented, generics, classes",
"url": null
} |
newtonian-mechanics, reference-frames, definition
Title: Two definitions for a reference frame My textbook defines a reference frame in two different ways:
A collection of at least 3 non-collinear points that are rigidly connected.
A reference frame is defined by three orthogonal unit vectors and one point (the origin). | {
"domain": "physics.stackexchange",
"id": 66066,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, reference-frames, definition",
"url": null
} |
c++, random, numerical-methods
int main()
{
// Test Parameters
const int M = 10; // Simulations per program execution
const int N = 2'000'000; // Points per simulation
unsigned long long overallPointsInCircle = 0;
unsigned long long overallPoints = 0;
using Real = double;
Real actualPI = 4 * std::atan(Real(1));
for (int i = 0; i < M; ++i) {
int pointsInCircle = 0;
int totalPoints = 0;
using Rand = std::mt19937_64;
Seed<Rand> seed;
Rand rng(seed);
std::uniform_real_distribution<Real> uid(-1.0, 1.0);
for (int j = 0; j < N; ++j) {
Point<Real> p{ uid(rng), uid(rng) };
++totalPoints;
++overallPoints;
if (p.mag() <= 1.0) {
++pointsInCircle;
++overallPointsInCircle;
}
}
std::cout << "\nIteration: " << i + 1 << std::endl;
report<Real>(pointsInCircle, totalPoints);
}
std::cout << "\n\nFinal values:\n";
report<Real>(overallPointsInCircle, overallPoints);
auto computedPi = approximatePI<Real>(overallPointsInCircle, overallPoints);
std::cout << "Percent Error: " << std::setprecision(10) << percentError(computedPi, actualPI);
} | {
"domain": "codereview.stackexchange",
"id": 26147,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, random, numerical-methods",
"url": null
} |
ros, navigation, tutorials, dependencies
Title: navigation tutorial dependencies
I have done the URDF tutorial and have a model of my robot. I also have a neato lidar sensor publishing data using http://www.ros.org/wiki/xv_11_laser_driver. Odometry is being published by the Serializer package.
My question is in the Navigation tutorial it says to do a
"roscreate-pkg my_robot_name_2dnav move_base my_tf_configuration_dep my_odom_configuration_dep my_sensor_configuration_dep"
What do I put in for
"my_tf_configuration_dep"
"my_odom_configuration_dep"
and "my_sensor_configuration_dep"
Thanks in advance
Originally posted by ringo42 on ROS Answers with karma: 55 on 2011-07-31
Post score: 1
Hi Ringo,
I believe these refer to packages you already use for your sensors (e.g. your xv11 laser), odometry data (your serializer) and your URDF description which may or may not be in its own package (e.g. my_robot_description). So one possibility in your case would be:
$ roscreate-pkg my_robot_name_2dnav move_base xv_11_laser_driver serializer my_robot_description
If you already have a working package for your robot (e.g my_robot_package) that itself depends on the xv_11_laser_driver and serializer packages as well as your URDF description, you could use:
$ roscreate-pkg my_robot_name_2dnav move_base my_robot_package
--patrick
Originally posted by Pi Robot with karma: 4046 on 2011-07-31
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by ringo42 on 2011-08-04:
Thanks. Do you happen to have a launch file that works for the navigation tutorial I can use for a template. I'm not sure exactly what to put for things like odom_node_type, transform_configuration_type, etc. | {
"domain": "robotics.stackexchange",
"id": 6302,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, navigation, tutorials, dependencies",
"url": null
} |
machine-learning, computational-learning-theory, bias-variance-tradeoff, approximation-error
Title: What's the difference between estimation and approximation error? I'm unable to find online, or to work out from context, the difference between estimation error and approximation error in the context of machine learning (and, specifically, reinforcement learning).
Could someone please explain with the help of examples and/or references? Section 5.2 Error Decomposition of the book Understanding Machine Learning: From Theory to Algorithms (2014) gives a description of the approximation error and estimation error in the context of empirical risk minimization (ERM) and, in particular, in the context of the bias-complexity tradeoff (which is strictly related to the bias-variance tradeoff).
Error/risk decomposition
The expected risk (error) of a hypothesis $h_S \in \mathcal{H}$ selected based on the training dataset $S$ from a hypothesis class $\mathcal{H}$ can be decomposed into the approximation error, $\epsilon_{\mathrm{app}}$, and the estimation error, $\epsilon_{\mathrm{est}}$, as follows
\begin{align}
L_{\mathcal{D}}\left(h_{S}\right)
&=
\epsilon_{\mathrm{app}}+\epsilon_{\mathrm{est}} \\
&=
\epsilon_{\mathrm{app}}+ \left( L_{\mathcal{D}}\left(h_{S}\right)-\epsilon_{\mathrm{app}} \right) \\
&=
\left( \min _{h \in \mathcal{H}} L_{\mathcal{D}}(h)\right) + \left( L_{\mathcal{D}}\left(h_{S}\right)-\epsilon_{\mathrm{app}} \right) \label{1}\tag{1}
\end{align}
Approximation error
The approximation error (AE), aka inductive bias, is defined as
$$\epsilon_{\mathrm{app}} = \min _{h \in \mathcal{H}} L_{\mathcal{D}}(h) $$ | {
"domain": "ai.stackexchange",
"id": 3995,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, computational-learning-theory, bias-variance-tradeoff, approximation-error",
"url": null
} |
c++, algorithm, matrix, c++20
MatrixView<T> submatrix(std::size_t r1, std::size_t c1, std::size_t r2, std::size_t c2) const {
assert(r1 <= r2 && c1 <= c2 && r2 < R && c2 < C);
std::size_t RV = r2 - r1 + 1;
std::size_t CV = c2 - c1 + 1;
std::unique_ptr<std::size_t[]> index(new std::size_t[RV * CV]);
for (std::size_t r = 0; r < RV; r++) {
for (std::size_t c = 0; c < CV; c++) {
index.get()[r * CV + c] = (r1 + r) * C + (c1 + c);
}
}
MatrixView<T> sub(RV, CV, const_cast<T*>(&data.get()[0]), std::move(index));
return sub;
}
Matrix& operator+=(T val);
Matrix& operator-=(T val);
Matrix& operator*=(T val);
Matrix& operator/=(T val);
template <Scalar T2>
Matrix& operator+=(const Matrix<T2>& rhs);
template <Scalar T2>
Matrix& operator+=(const MatrixView<T2>& rhs);
template <Scalar T2>
Matrix& operator-=(const Matrix<T2>& rhs);
template <Scalar T2>
Matrix& operator-=(const MatrixView<T2>& rhs); | {
"domain": "codereview.stackexchange",
"id": 40258,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, matrix, c++20",
"url": null
} |
=> T = 49/36 + 7/216
=> T = (294 + 7)/216
=> T = 301/216
-
Regarding this problem, the question states: "I got the solution by dividing by 7 and subtracting it from original sum. Repeated for two times.(Suggest me if any other better way of doing this)." This answer appears to suggest the same method. – Jonas Meyer Jul 31 '12 at 16:03
Sorry, didn't read that thing! – Ashwyn Jul 31 '12 at 16:24
(Edit: Oops, I see now this is essentially solution (2) of Peter Tamaroff's answer, but because it's much shorter I'll just leave it here)
Your sequence can be separated into 2 sequences which are added term by term:
$\begin{eqnarray} &1&2&4&7&11&16&\cdots & = a_k\\ \hline =&1&1&1&1&1&1&\cdots \\ +&0&1&3&6&10&15&\cdots \\ \hline \end{eqnarray}$
Then the partial sums are, beginning the index k at 1:
$\begin{eqnarray} &1&3&7&14&25&41&\cdots &=&s_k\\ \hline =&1&2&3&4&5&6&\cdots &= &&=&k\\ +&0&1&4&10&20&35&\cdots &=&\binom{1+k}{3}&=&{(k+1)!\over 3! (k-2)!}\\ \hline =&1&3&7&14&25&41&\cdots &=&s_k&=& k+ {(k+1)!\over 3! (k-2)!}\\ \end{eqnarray}$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9777138190064204,
"lm_q1q2_score": 0.831028562339694,
"lm_q2_score": 0.8499711737573762,
"openwebmath_perplexity": 919.3021631325508,
"openwebmath_score": 0.932900071144104,
"tags": null,
"url": "http://math.stackexchange.com/questions/171754/sum-of-the-series-1-2-4-7-11-cdots"
} |
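The closed form for the partial sums derived above, $s_k = k + \binom{k+1}{3}$, can be checked against the sequence directly (a quick sketch):

```python
from itertools import accumulate
from math import comb

# a_k = 1, 2, 4, 7, 11, 16, ... (successive differences 1, 2, 3, ...)
a = [1 + n * (n + 1) // 2 for n in range(10)]   # a_1..a_10, with n = k - 1
s = list(accumulate(a))                          # partial sums 1, 3, 7, 14, ...

for k, s_k in enumerate(s, start=1):
    assert s_k == k + comb(k + 1, 3)

print(s[:6])  # [1, 3, 7, 14, 25, 41]
```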
java, beginner, natural-language-processing, telegram
return st1.toLowerCase().contains(st2.toLowerCase())
&& st1.toLowerCase().contains(st3.toLowerCase());
}
@Override
public void onUpdateReceived(Update update) {
// TODO Auto-generated method stub
if (update.hasMessage()) {
Message message = update.getMessage();
if (message.hasText()) {
SendMessage sendMessageRequest = new SendMessage();
sendMessageRequest.setChatId(message.getChatId()
.toString());
sendMessageRequest.enableMarkdown(true);
ArrayList<String> nouns = new ArrayList<String>();
// ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Initialize the tagger
MaxentTagger tagger = new MaxentTagger(
"taggers/english-left3words-distsim.tagger");
// The sample string
String sample1 = message.getText();
// tokenize the sentence
String delimeter2 = " ";
// split the string using the delimeter and the parameter
String[] words1 = sample1.split(delimeter2);
System.out.println(words1);
for (String word : words1) {
System.out.println(word);
// The tagged string
String tagged = tagger.tagString(word); | {
"domain": "codereview.stackexchange",
"id": 22336,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, beginner, natural-language-processing, telegram",
"url": null
} |
Eliminating the Parameter from a Pair of Trigonometric Parametric Equations. The equation is the general form of an ellipse that has a center at the origin and a horizontal major axis of length 14. The rectangular coordinates (x, y) and polar coordinates (R, t) are related as follows. The student is expected to: (A) graph a set of parametric equations. y = x − 3 is equivalent to r sin θ = r cos θ − 3, i.e. r(cos θ − sin θ) = 3, so r = 3/(cos θ − sin θ). Which equation should be solved for the parametric variable depends on the problem -- whichever equation can be most easily solved for that parametric variable is typically the best choice. x = t − 1. Check these with yesterday's graphs: B. Write the complex number in trigonometric form, using degree measure for the argument. In many cases, we may have a pair of parametric equations but find that it is simpler to draw a curve if the equation involves only two variables, such as x and y. x = 7 sin θ and y = 2 cos θ: solve the equations for sin θ and cos θ. Let us just look at a simple example. Clearly, both forms produce the same graph. Then, we do this substitution into the function: x → x c − y d, y → x d + y c. Eliminate the parameter from the given pair of trigonometric equations where $$0≤t≤2\pi$$ and sketch the graph. For the first case we need to supplement the equations by two inequalities: 0 <= t <= 4 Pi && 0 < x < 4 Pi. Eliminate the parameter and find a corresponding rectangular equation. For instance, you can eliminate the parameter from the set of parametric equations in Example 1 as follows. Construct a table of values for the given parametric equations and sketch the graph: the data from the parametric equations and the rectangular equation are plotted together. Converting Polar To Rectangular. A cartesian equation gives a direct relationship between x and y. x = ½t + 4. x = cos 2t, y = sin t, for t in [−π, π]. In rectangular coordinates, each point (x, y) has a unique representation. ACE TRIG Final EXAM REVIEW.
After going through these three problems can you reach any conclusions on how the argument of the trig | {
"domain": "outdoortown.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9780517456453798,
"lm_q1q2_score": 0.8646361687069855,
"lm_q2_score": 0.8840392878563336,
"openwebmath_perplexity": 665.260664865313,
"openwebmath_score": 0.822030246257782,
"tags": null,
"url": "http://aipc.outdoortown.it/parametric-to-rectangular-with-trig.html"
} |
spacetime, field-theory, dirac-equation, spinors, clifford-algebra
I'm wondering, first, if that dubious route is valid. If it is valid, I'm wondering if, as a consequence, all relativistic fields on spacetime, even for bosons, take values in some subspace of $\mathbb{C}^4$ spinors. I think you are mixing up some basic facts about four-vectors and $\mathbb{C}^4$ spinors. So let us review some of them here (this is more of a long comment than an answer): | {
"domain": "physics.stackexchange",
"id": 92148,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "spacetime, field-theory, dirac-equation, spinors, clifford-algebra",
"url": null
} |
newtonian-mechanics, forces, free-body-diagram
Title: I am pushing a block with my hands with a force $F$; since the block is in contact with my hand, my hand will apply a normal force $N_1$ on the block
$N_1$ here is the normal force my hand exerts on the block due to contact between the block and my hand.
$F$ is the external force which I apply.
$N$ is the normal force which the surface applies on the block.
So my question is: is the FBD right or wrong, and why? The $F_{ext}$ is the normal force. Both are the same: when your hand is in contact with the block's surface, the normal interaction is the external force, because the particles in the block repel your hand and the same repulsion is felt by the block, which creates the external force you mentioned!
So there is really only one force acting on the block | {
"domain": "physics.stackexchange",
"id": 96535,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, free-body-diagram",
"url": null
} |
-
Before I accept your answer, could you please give me a reference for the fact stated in the first paragraph ? (do you mean with "summable" that $\sum_{i\in I} ||x_i||^2$ converges?) – user38525 Aug 23 '12 at 10:26
I provided some definitions and results about summable families. I also improved the very first statement in order to correctly cover the case you're considering. – Ahriman Aug 23 '12 at 12:29
Wow, that was very detailed. But I'm not sure how you arrived at proving that the sequence of the RHS is summable. The Bessel inequality tells me that $$\sum_k | \langle x,f_k \rangle |^2 \leq ||x||^2.$$ On the other hand I know that $$\sum_k \alpha_k \langle x,f_k \rangle f_k$$ is summable iff $$\sum_k ||\alpha_k \langle x,f_k \rangle f_k||=\sum_k |\alpha_k| | \langle x,f_k \rangle |$$ is summable. But how do I combine these two to get what I want ?[...] – user38525 Aug 23 '12 at 15:14 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9724147169737825,
"lm_q1q2_score": 0.8359496774003746,
"lm_q2_score": 0.8596637451167997,
"openwebmath_perplexity": 181.4821597672043,
"openwebmath_score": 0.9623497724533081,
"tags": null,
"url": "http://math.stackexchange.com/questions/185794/rearranging-the-spectral-theorem"
} |
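The Bessel inequality quoted in the comment above, $\sum_k |\langle x, f_k\rangle|^2 \le \|x\|^2$, can be illustrated numerically with a random orthonormal family in $\mathbb{R}^n$ (a sketch using NumPy; the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 8, 5  # ambient dimension, size of the orthonormal family
# QR decomposition of a random matrix gives orthonormal columns f_1..f_m
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
x = rng.standard_normal(n)

coeffs = Q.T @ x                 # the inner products <x, f_k>
bessel_sum = np.sum(coeffs**2)   # sum_k |<x, f_k>|^2

# Bessel: the sum is bounded by ||x||^2 (with equality only if x
# lies in the span of the family)
assert bessel_sum <= np.dot(x, x) + 1e-12
print(float(bessel_sum), float(np.dot(x, x)))
```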
Figuring out if a function is real analytic is a pain; figuring out whether a complex function is analytic is much easier. First, understand that a real function can be analytic on an interval, but not on the entire real line.
So what I try to do is consider f as a function of a complex variable in a neighborhood of the point, say x = a, in question. If f(z) (z complex) = f(x) when y = 0 (z = x + iy), then f(z) is an extension of f to a neighborhood of a. To show that f(z) is analytic you need only show that it is complex-differentiable in a neighborhood of a. If so, then its Taylor series will converge to f(z) in some neighborhood of a. f as a real function is analytic on the interval that this neighborhood covers.
This approach does show that sin and cos are analytic in the entire plane, and thus on the x-axis. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9674102542943774,
"lm_q1q2_score": 0.8082553410880018,
"lm_q2_score": 0.8354835371034368,
"openwebmath_perplexity": 144.7468817010959,
"openwebmath_score": 0.9568079113960266,
"tags": null,
"url": "http://math.stackexchange.com/questions/590455/how-to-check-the-real-analyticity-of-a-function"
} |
runtime-analysis, loops
I know that the first two for-loops will have n(n+1)/2 iterations. My problem is with the third for-loop: by what factor am I supposed to multiply n(n+1)/2 to get the total number of iterations?
Any help would be appreciated. Thanks! Instead of a general "formula", you should try to work it out from first principles, at least in the beginning. You can see that the number of executions is:
$$\sum\limits_{i=1}^{n} \sum\limits_{j=1}^{i} \sum\limits_{k=j}^{i} 1$$
We try solving them one step at a time. The innermost (right-most) summation can be solved as:
$$\sum\limits_{k=j}^{i} 1 = i - j + 1$$ (how many times will the loop execute from $j$ to $i$, inclusive).
Going on to the next level, by substituting the above result, we get:
$$\sum\limits_{j=1}^{i} \sum\limits_{k=j}^{i} 1 = \sum\limits_{j=1}^{i} (i - j + 1) = i\sum\limits_{j=1}^{i} 1 -\sum\limits_{j=1}^{i}j + \sum\limits_{j=1}^{i}1$$
$$ = i^2 - \frac{i(i+1)}{2} + i = \frac{i(i+1)}{2}$$
Notice that when summation is over the variable $j$, $i$ behaves like a constant, and so could be taken out of the summation.
The above expression gives the number of times the inner two loops are executed for every value $i$ that the outer loop takes. Solving the outermost summation with this expression ($i$ runs from $1$ to $n$), we have: | {
"domain": "cs.stackexchange",
"id": 768,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "runtime-analysis, loops",
"url": null
} |
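The derivation above stops just before the outermost sum; carrying it out gives $\sum_{i=1}^n \frac{i(i+1)}{2} = \frac{n(n+1)(n+2)}{6}$. A brute-force count confirms this closed form (a small sketch):

```python
def triple_loop_count(n):
    """Count iterations of: for i in 1..n: for j in 1..i: for k in j..i."""
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(j, i + 1):
                count += 1
    return count

# The brute-force count matches n(n+1)(n+2)/6 for every n
for n in range(1, 20):
    assert triple_loop_count(n) == n * (n + 1) * (n + 2) // 6

print(triple_loop_count(10))  # 220
```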
javascript, pagination, ecmascript-6
/**
* Strategy Interface
*
* The logic to decide which action to take
* - In other words which display logic to choose
*
* Notes: Break point refers to an ellipsis Ex: (1 2 3 ... 5).
* Scope is the list of pages including ellipsis, as well
* as the focus which is the page which the user is on
*/
get strategyInterface() {
if (this.hasBreakPoint) {
if (this.isFocusInbetweenBreakPoints) {
return this.actions.focusBetweenEllipsis;
} else {
return this.actions.focusBeforeOrAfterEllipsis;
}
} else {
return this.actions.noEllipsis;
}
}
// Does the paginator have a break point (ellipsis)?
get hasBreakPoint() {
const { totalPages } = this.params;
const { unbrokenPoint } = this.constants;
return (totalPages > unbrokenPoint)
}
// Is the focus between break points (ellipsis)?
get isFocusInbetweenBreakPoints() {
return this.isFocusAfterFirstBreakpoint && this.isFocusBeforeEndBreakpoint
}
get isFocusAfterFirstBreakpoint() {
const { currentPage } = this.params;
const { breakPoint } = this.constants;
return (currentPage > breakPoint)
}
get isFocusBeforeEndBreakpoint() {
const { currentPage, totalPages} = this.params;
const { breakPoint } = this.constants;
return currentPage <= (totalPages - breakPoint)
} | {
"domain": "codereview.stackexchange",
"id": 17666,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, pagination, ecmascript-6",
"url": null
} |
Think first how the lack of divisors of zero is stated for $A/J$. The product of two cosets $J+a$ and $J+b$ in $A/J$ is $(J+a)(J+b)=J+ab$. Moreover, the zero element of $A/J$ is the coset $J=J+0$, so to say that $A/J$ has no divisors of zero means that $J+ab=J\Rightarrow J+a=J$ or $J+b=J$.
Recall also that for any $x\in A$, $x\in J \Leftrightarrow J+x=J$. It then becomes obvious that the property $J+ab=J\Rightarrow J+a=J$ or $J+b=J$ in $A/J$ corresponds to the property $ab\in J\Rightarrow a\in J$ or $b\in J$, which is the defining property of a prime ideal $J$.
So it appears that in some cases it is easy to choose an ideal $J$ whose properties are conveyed to a quotient ring $A/J$.
Proof: any element of a group of prime order generates the group
The following self-contained post clarifies the proof of theorem 4, chapter 13, p. 129, of the book “A book of abstract algebra” by Charles C. Pinter.
Let $G$ be a group of prime order $|G|=p$. It will be shown that for any $a\in G$ with $a\ne e$ it holds that $\langle a\rangle=G,$ where $\langle a\rangle$ is the set generated by $a$ and $e$ is the identity element of $G$.
Since the order of group $G$ is $|G|=p<\infty$, the order $\mbox{ord}(a)$ of element $a$ is also finite, i.e. $\mbox{ord}(a)=n<\infty$ for some positive integer $n$. | {
"domain": "papamarkou.blog",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9875683506103591,
"lm_q1q2_score": 0.8186337275174852,
"lm_q2_score": 0.8289388040954683,
"openwebmath_perplexity": 90.78749345088153,
"openwebmath_score": 0.968115508556366,
"tags": null,
"url": "https://papamarkou.blog/category/mathematics/abstract-algebra/"
} |
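The theorem proved above can be checked concretely in $(\mathbb{Z}_p, +)$, addition mod $p$: every non-identity element generates the whole group (a quick sketch for small primes):

```python
def generated_subgroup(a, p):
    """Subgroup of (Z_p, +) generated by a: repeatedly add a mod p."""
    seen, x = set(), 0
    while x not in seen:
        seen.add(x)
        x = (x + a) % p
    return seen

# For prime p, every a != e (here e = 0) generates all of Z_p
for p in (2, 3, 5, 7, 11, 13):
    for a in range(1, p):
        assert generated_subgroup(a, p) == set(range(p))

print("every non-identity element generates Z_p")
```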
javascript, jquery
CSS:
/*since styles are common, you can stack them to avoid repeating*/
/*success styles*/
#NewCustomerSubmitStatusPopUp.success{
background: #FFF;
border: 10px solid #9cc3f7;
}
#NewCustomerSubmitStatusPopUp.success h1,
#ctl00_ContentPlaceHolder1_NewCustomerlblSubmitStatusMsg.success{
color: #9cc3f7;
}
/*error and uerror styles*/
#NewCustomerSubmitStatusPopUp.error,
#NewCustomerSubmitStatusPopUp.uerror{
background: #F7DFDE;
border: 10px solid #BD494A;
}
#NewCustomerSubmitStatusPopUp.error h1,
#NewCustomerSubmitStatusPopUp.uerror h1,
#ctl00_ContentPlaceHolder1_NewCustomerlblSubmitStatusMsg.error,
#ctl00_ContentPlaceHolder1_NewCustomerlblSubmitStatusMsg.uerror{
color: #BD494A;
} | {
"domain": "codereview.stackexchange",
"id": 1874,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery",
"url": null
} |
ds.algorithms, graph-algorithms, directed-acyclic-graph
Now suppose we want to find the shortest $s$-$t$ path for a given pair of vertices $(s, t) \in V^2$. Can precomputing all-pairs shortest paths in $H_1, \dots, H_k$ be used to speed up the algorithm?
Any references to papers that use similar ideas would be helpful. Yes, you certainly can (based on the fact that any subpath of a minimal path must also be minimal). That is, any shortest path entering $H_i$ at $u$ and leaving at $v$ must follow the shortest path from $u$ to $v$ in $H_i$.
Basically, you can compute the shortest-path distance matrix $D$ for any $H_i$ (it would be the same for all of them, of course), and replace every subgraph $H_i$ by one consisting only of the in- and out-nodes (that is, the nodes connected to other subgraphs; presumably fewer than $|V_i|$), and use only direct edges from the in-nodes to the out-nodes, with weights given by $D$.
You don't need to explicitly construct this new graph, of course. If you have the macrostructure of $G$ available in implicit form, you can compute $D$, and use that together with the macrostructure of $G$ in a (slightly customized) DP algorithm for finding the shortest path. | {
"domain": "cstheory.stackexchange",
"id": 973,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ds.algorithms, graph-algorithms, directed-acyclic-graph",
"url": null
} |
# Magnetic moment μ approximation
I've been reading a bit about the magnetic moment (spin-only) $\mu_{s.o}$ where they give a formula relating this to the number of unpaired electrons
$$\mu_{s.o}=\sqrt{n(n+2)}$$
where $n$ is the number of unpaired electrons.
However, in our lecture today we were using the approximation $\mu_{s.o} \approx n+1$. Is this an acceptable approximation for the magnetic moment, or should I stick to using the previous one?
Obviously using $\mu_{s.o} \approx n+1$ is easier to use for calculations but I would like someone's opinion on this.
I think you can come at this approximation in two ways. Using more advanced methods, the approximation is obtained as a truncation of the Laurent series of $\sqrt{x(x+2)}$ about $x=\infty$.
This is possible, but I think needlessly complex in this case. Using just algebra, we can note $$\sqrt{n(n+2)}=\sqrt{n^2+2n}\approx\sqrt{n^2+2n+1}=\sqrt{(n+1)^2}=n+1$$ By looking at a plot, we can see this approximation is very good, giving essentially the exact result at $n=10$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731083722525,
"lm_q1q2_score": 0.8494895726492212,
"lm_q2_score": 0.8824278618165526,
"openwebmath_perplexity": 383.649416931835,
"openwebmath_score": 0.8450661897659302,
"tags": null,
"url": "https://chemistry.stackexchange.com/questions/84684/magnetic-moment-%CE%BC-approximation"
} |
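The two formulas in the exchange above are easy to compare numerically. A quick sketch (the range of $n$ and the printed format are my own choices, not from the post):

```python
import math

def mu_exact(n):
    """Spin-only magnetic moment sqrt(n(n+2)), in Bohr magnetons."""
    return math.sqrt(n * (n + 2))

def mu_approx(n):
    """The n+1 shortcut: sqrt(n^2+2n) ~ sqrt(n^2+2n+1) = n+1."""
    return n + 1

# The absolute error shrinks as n grows, as the answer's plot shows.
for n in range(1, 6):
    print(n, round(mu_exact(n), 3), mu_approx(n))
```

At $n=1$ the gap is about $0.27\ \mu_B$, already below $0.05\ \mu_B$ by $n=10$.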
ros, moveit, planning-scene, collision-object, movegroup
Title: MoveIt Attach Object Error
Here is my node to add an object and attach it to the robot :
#include <moveit/move_group_interface/move_group.h>
#include <moveit/planning_scene_interface/planning_scene_interface.h>
#include <geometric_shapes/shape_operations.h>
using namespace Eigen;
int main(int argc, char **argv)
{
ros::init(argc, argv, "add_workpiece_wall");
ros::NodeHandle nh;
ros::AsyncSpinner spin(1);
spin.start();
moveit::planning_interface::PlanningSceneInterface current_scene;
sleep(2.0);
Vector3d b(0.001, 0.001, 0.001);
moveit_msgs::CollisionObject co;
co.id = "workpiece_wall";
shapes::Mesh* m = shapes::createMeshFromResource("package://mitsubishi_rv6sd_support/meshes/workpiece_wall.stl",b);
ROS_INFO("Workpiece Wall mesh loaded");
shape_msgs::Mesh mesh;
shapes::ShapeMsg mesh_msg;
shapes::constructMsgFromShape(m, mesh_msg);
mesh = boost::get<shape_msgs::Mesh>(mesh_msg);
co.meshes.resize(1);
co.mesh_poses.resize(1);
co.meshes[0] = mesh;
co.header.frame_id = "base_link"; | {
"domain": "robotics.stackexchange",
"id": 26053,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, moveit, planning-scene, collision-object, movegroup",
"url": null
} |
c++, optimization, performance, c++11
It should be obvious that this algorithm is in O(n²). To improve the performance of your program, you should try to come up with an algorithm that's asymptotically faster. A first improvement can be achieved by noticing that the correlation is symmetric in i and j. I used a std::vector to store the sentences alongside the weighting and slightly changed the sentence_intersection algorithm; the new name is intersection_weight:
for (std::size_t i = 0; i < sentLen; i++)
{
sentences[i].w += 2 * intersection_weight(sentencesC[i].begin(),
sentencesC[i].end(),
titles.begin(), titles.end());
for (auto j = i+1; j < sentLen; j++)
{
auto const res = intersection_weight(sentencesC[i].begin(),
sentencesC[i].end(),
sentencesC[j].begin(),
sentencesC[j].end());
sentences[i].w += res;
sentences[j].w += res;
}
}
As this is the most time-consuming part, this optimization halves the total run-time of the program.
Size of the intersection
double SmartAnalyzer::sentence_intersection(std::set<std::string> const& a,
std::set<std::string> const& b)
{
std::vector<std::string> common;
std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
std::back_inserter(common));
return (double)common.size() / ((a.size() + b.size()) / 2);
} | {
"domain": "codereview.stackexchange",
"id": 7321,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, optimization, performance, c++11",
"url": null
} |
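The symmetry trick the review describes (evaluate each unordered pair once and credit both endpoints) can be illustrated standalone. This sketch is not the reviewed C++: `pair_weight` is a stand-in for `intersection_weight`, and the sentence data is made up.

```python
def pair_weight(a, b):
    # Stand-in for intersection_weight: size of the set intersection.
    return len(set(a) & set(b))

def weights_naive(items):
    # Full double loop: n^2 evaluations, including the i == j terms.
    return [sum(pair_weight(x, y) for y in items) for x in items]

def weights_symmetric(items):
    # Exploit pair_weight(x, y) == pair_weight(y, x): evaluate each
    # unordered pair once and add the result to both rows.
    n = len(items)
    w = [pair_weight(x, x) for x in items]   # the diagonal terms
    for i in range(n):
        for j in range(i + 1, n):
            r = pair_weight(items[i], items[j])
            w[i] += r
            w[j] += r
    return w

sentences = [["a", "b", "c"], ["b", "c"], ["c", "d"], ["a", "d"]]
print(weights_naive(sentences))      # [7, 5, 5, 4]
print(weights_symmetric(sentences))  # same totals, half the pair evaluations
```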
do i= n,1,-1
s = i
sum = sum + 1.0/s
end do
write(*,*) 'The sum of series is = ', sum
end program harmonic
now for n=10000000, I get the sum as 16.695311365859890, which is very close
to the Mathematica answer, 16.6953. I don't know how to get more precise answers
in Mathematica. I used the command
Code (Text):
Sum[1.0/i, {i,10000000}]
Next, I tried to alter gsal's code as
Code (Text):
program harmonic
implicit none
integer :: i,n
real*8 :: s,sum=0.0
write(*,*)'How many terms you want to sum ?'
read(*,*) n
do i= 1, n
s = i
sum = sum + 1.0/s
end do
write(*,*) 'The sum of series is = ', sum
end program harmonic
Now here, for n=10000000, I get the sum as 16.695311365856710, which is different in
last 4 digits from the answer from gsal's code.
So using double precision variables certainly improves the precision , but the order in which
we sum the series seems to do some funny things......As commented by alephzero,
summing smallest terms first is probably more accurate. He has asked me to think over it
as an exercise. I found something on yahoo answers
Here, its suggested that
So how would I do this kind of splitting. Also I want to time my code to get the time the compiler takes to compute the result. I have Win XP as OS and I am using Ubuntu inside
VirtualBox. Compiler is gfortran......
thanks
6. Jul 21, 2012
### gsal
If you are going to get into timing the program, that's another matter which I personally am not that good at...I am talking about setting things up according to what task takes longer, moving things in and out of the register, vectorizing loops, etc. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9136765281148513,
"lm_q1q2_score": 0.807725943058572,
"lm_q2_score": 0.8840392832736084,
"openwebmath_perplexity": 2872.515356559849,
"openwebmath_score": 0.47182798385620117,
"tags": null,
"url": "https://www.physicsforums.com/threads/fortran-program-for-harmonic-series-sum.622296/"
} |
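The summation-order effect debated in the thread can be reproduced in a few lines. This sketch deliberately uses NumPy's float32 so the rounding is large enough to see at a modest n; the thread's Fortran code used double precision, where the effect is smaller but analogous.

```python
import math
import numpy as np

n = 100_000
ref = math.fsum(1.0 / i for i in range(1, n + 1))  # high-accuracy reference

fwd = np.float32(0.0)   # largest terms first, like the do i = 1, n loop
for i in range(1, n + 1):
    fwd += np.float32(1.0) / np.float32(i)

bwd = np.float32(0.0)   # smallest terms first, like the do i = n, 1, -1 loop
for i in range(n, 0, -1):
    bwd += np.float32(1.0) / np.float32(i)

# Adding the tiny terms first lets them accumulate before meeting a large
# partial sum, so the backward sum lands closer to the reference.
print(fwd, bwd, ref)
```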
Dividing by $4$: $$n(n+1)=\frac{z-x}{2}\frac{z+x}{2}$$
So we just need a factorization, $n(n+1)=UV$ with $U<V$. Then $z=V+U$ and $x=V-U$.
The most basic answer is $V=n(n+1)$ and $U=1$. Then $x=n(n+1)-1, y=2n+1, z=n(n+1)+1$.
The "obvious" answer, $U=n$, $V=n+1$ just gets you $x=1, y=2n+1, z=2n+1$.
Since $n(n+1)$ is even, we can write $U=2$, $V=\frac{n(n+1)}{2}=T_n$. Then $$x=T_n-2, y=2n+1, z=T_n+2$$
When $n>0$ we see that the number of distinct positive solutions $(x,2n+1,z)$ is $$\frac{\tau(n(n+1))}{2}=\frac{\tau(n)\tau(n+1)}2$$
If $UV=n(n+1)$ we can get $a,b,c,d$ from the first solution by defining: $$a=(V,n), b=(U,n+1), c=(V,n+1), d=(U,n)$$
then $$bc=n+1, ad=n, ac=V, bd=U$$
So $bc-ad=1$, $bc+ad=2n+1=y$, $V-U=ac-bd=x$ and $V+U=ac+bd=z$.
So any result from the second method is a result from the first method, which means the first method gets all positive solutions, too. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9780517424466174,
"lm_q1q2_score": 0.8063664300011887,
"lm_q2_score": 0.824461932846258,
"openwebmath_perplexity": 282.48952596415694,
"openwebmath_score": 0.9247227311134338,
"tags": null,
"url": "https://math.stackexchange.com/questions/351491/integral-solutions-of-hyperboloid-x2y2-z2-1"
} |
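The counting claim above, that $y = 2n+1$ admits $\tau(n)\tau(n+1)/2$ distinct positive solutions $(x, 2n+1, z)$ of $x^2+y^2=z^2+1$, can be brute-force checked for small $n$. A sketch (the helper names are mine, not from the answer):

```python
from math import isqrt

def tau(m):
    """Divisor count of m."""
    c = 0
    for d in range(1, isqrt(m) + 1):
        if m % d == 0:
            c += 1 if d * d == m else 2
    return c

def count_solutions(n):
    """Brute-force count of positive (x, y, z) on x^2 + y^2 = z^2 + 1 with y = 2n+1."""
    y = 2 * n + 1
    count = 0
    for x in range(1, n * (n + 1)):        # x = V - U is at most n(n+1) - 1
        zz = x * x + y * y - 1
        z = isqrt(zz)
        if z * z == zz:
            count += 1
    return count

# tau(n(n+1)) = tau(n) * tau(n+1) because n and n+1 are coprime.
for n in range(1, 8):
    print(n, count_solutions(n), tau(n) * tau(n + 1) // 2)
```

For $n=3$ ($y=7$) the three solutions are $(1,7,7)$, $(4,7,8)$, $(11,7,13)$, matching $\tau(3)\tau(4)/2 = 3$.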
You're almost there. But you need to think about lines with angle approaching $\pi$. Maybe you want to think about $[0,\pi]$ and a new quotient topology question :)
What I thought about that was that the line $r_\pi$ is the same line as the line $r_0$. And so the only possible discontinuity that could appear, which would be at the limits of the interval, disappears because the two points are actually the same. What I didn't know how to formalize was this: how to send open sets to open sets when the set contains the limit angle $r_0$. What exactly do you mean by a new quotient topology question? Just taking $\pi$ as another line and defining the new quotient space with a partition containing both $r_0$ and $r_\pi$? Like gluing the extremes of a line? – MyUserIsThis May 8 '13 at 19:41
You're right there! You want to see that if you take $[0,\pi]$ and identify $0$ and $\pi$, the resulting topological space is homeomorphic to $S^1$. – Ted Shifrin May 8 '13 at 19:47
Well, conceptually, you're thinking of the space of lines as being homeomorphic to $[0,\pi]$ with the endpoints identified, and then you compose with the homeomorphism to $S^1$. Your original attempt was flawed because you didn't know how to make lines whose angles were close to $\pi$ be close to lines whose angles were close to $0$. There are other ways around this, but I was suggesting the way I thought best fit the way you were thinking. – Ted Shifrin May 8 '13 at 22:54 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.986979508737192,
"lm_q1q2_score": 0.8092179240686171,
"lm_q2_score": 0.8198933381139645,
"openwebmath_perplexity": 208.2649354405427,
"openwebmath_score": 0.8741921186447144,
"tags": null,
"url": "http://math.stackexchange.com/questions/385814/quotient-space-homeomorphism"
} |
quantum-mechanics, nuclear-physics, heisenberg-uncertainty-principle, quantum-tunneling
Certainly if you were to make many position measurements and many momentum measurements of similarly prepared systems like this you would find that $\Delta x\Delta p\geq\hbar/2$, but I am not sure if that means this relation is what caused the tunneling. I suppose the most you could do is use the HUP to make an argument that if you know $\Delta p$, then you could make an argument as to how small $\Delta x$ could be. If this smallest value (given the mean position $\langle x\rangle$) still allows for the possibility of finding a particle in an classically forbidden region, then you could predict that tunneling is possible for the system. But just because you are using the consistency of the HUP with the rest of quantum mechanics doesn't necessarily mean the HUP caused the tunneling.
Additionally, a decrease in $\Delta p$ does not necessarily mean an increase in $\Delta x$. The only time you can say this for sure is if your state is already at the limit $\Delta x\Delta p=\hbar/2$. Then decreasing $\Delta p$ necessitates an increase in $\Delta x$ because the uncertainty principle must apply.
I would just explain quantum tunneling as an effect of quantum superposition. The probability of finding a particle somewhere can be expressed as a linear combination of position states. Quantum tunneling occurs because, according to Schrodinger's equation (at least non-relativistically) certain position states in the superposition corresponding to classically inaccessible positions will pick up non-zero probability amplitudes, and hence there is a probability of observing tunneling.
Of course, I may be completely missing some other way to look at QM here. Since a lot of the intuition of QM comes from the mathematical formalism, sometimes you can look at things differently and it still be ok. So, I hope I have at least provided an additional way to look at things here. | {
"domain": "physics.stackexchange",
"id": 66602,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, nuclear-physics, heisenberg-uncertainty-principle, quantum-tunneling",
"url": null
} |
Improved intermediate value theorem
Suppose $f\colon [a,b] \to \mathbb{R}$ is a continuous function with $f(a)<0$, $f(b)>0$. Can it be proved that there exist $s_1\leq s_2$ and $\epsilon>0$ such that $f(s)=0$ for all $s\in[s_1,s_2]$, whilst $f(s)<0$ for all $s\in [s_1-\epsilon, s_1)$ and $f(s)>0$ for all $s\in (s_2,s_2+\epsilon]$?
If not, what about in the case that one assumes $f$ is $C^1$, or smooth? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812299938005,
"lm_q1q2_score": 0.800572188127008,
"lm_q2_score": 0.8267117983401364,
"openwebmath_perplexity": 105.92063157566784,
"openwebmath_score": 0.8739438652992249,
"tags": null,
"url": "https://math.stackexchange.com/questions/2905088/improved-intermediate-value-theorem/2905219"
} |
kinect
Title: How to get the topics data from kinect?
I use ubuntu12.04 64 bits with ROS fuerte.
I follow BagRecordingPlayback tutorial.
I run:
roslaunch openni_launch openni.launch depth_registration:=true
optirun rosrun rviz rviz
I set the frame to /camera_link and add point cloud2 section to visualize /camera/depth_registered/points topic but get nothing.
I try to figure out what happened.
I follow several link of kinect problems, and I find some thread also have the similar situations.
I couldn't find an exact, working solution.
Can anyone help me step by step to solve this problem?
Thank you very much~
=========================
When I run:
sam@sam:~$ rosrun image_view disparity_view image:=/camera/depth_registered/disparity
Inconsistency detected by ld.so: dl-close.c: 759: _dl_close: Assertion `map->l_init_called' failed!
sam@sam:~$
It shows the window with /camera/depth_registered/disparity title but no image on it.
I check the lib version without problems:
sam@sam:~$ apt-cache policy libopenni-dev
libopenni-dev:
Installed: (none)
Candidate: 1.5.4.0-3+precise1
Version table:
1.5.4.0-3+precise1 0
500 http://packages.ros.org/ros/ubuntu/ precise/main amd64 Packages
sam@sam:~$ apt-cache policy libopenni-sensor-primesense-dev
libopenni-sensor-primesense-dev:
Installed: (none)
Candidate: 5.1.0.41-2+precise1
Version table:
5.1.0.41-2+precise1 0
500 http://packages.ros.org/ros/ubuntu/ precise/main amd64 Packages
sam@sam:~$ | {
"domain": "robotics.stackexchange",
"id": 12344,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "kinect",
"url": null
} |
navigation, odometry, turtlebot, quaternion, yaw
Hope this helps.
Originally posted by vinjk with karma: 96 on 2015-07-14
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 22094,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, odometry, turtlebot, quaternion, yaw",
"url": null
} |
genomics, variation, human-genome, data-management
Anyone with experience give a review or high-level guide to this platform space? An epic question. Unfortunately, the short answer is: no, there are no widely used solutions.
For several thousand samples, BCF2, the binary representation of VCF, should work well. I don't see the need of new tools at this scale. For a larger sample size, ExAC people are using spark-based hail. It keeps all per-sample annotations (like GL, GQ and DP) in addition to genotypes. Hail is at least something heavily used in practice, although mostly by a few groups so far.
A simpler problem is to store genotypes only. This is sufficient to the majority of end users. There are better approaches to store and query genotypes. GQT, developed by the Gemini team, enables fast query of samples. It allows you to quickly pull samples under certain genotype configurations. As I remember, GQT is orders of magnitude faster than google genomics API to do PCA. Another tool is BGT. It produces a much smaller file and provides fast and convenient queries over sites. Its paper talks about ~32k whole-genome samples. I am in the camp who believe specialized binary formats like GQT and BGT are faster than solutions built on top of generic databases. I would encourage you to have a look if you only want to query genotypes.
Intel's GenomicDB approaches the problem from a different angle. It does not actually keep a "squared" multi-sample VCF internally. It instead keeps per-sample genotypes/annotations and generates a merged VCF on the fly (this is my understanding, which could be wrong). I don't have first-hand experience with GenomicDB, but I think something along this line should be the ultimate solution in the era of 1M samples. I know GATK4 is using it at some step.
As to others in your list, Gemini might not scale that well, I guess. It is partly the reason why they work on GQT. Last time I checked, BigQuery did not query individual genotypes. It only queries over site statistics. Google genomics APIs access individual genotypes, but I doubt it can be performant. Adam is worth trying. I have not tried, though. | {
"domain": "bioinformatics.stackexchange",
"id": 23,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "genomics, variation, human-genome, data-management",
"url": null
} |
c, random, image, animation
The others could easily be replaced with inline functions which are much easier to debug. Like this:
inline int MAZE_I (const int x, const int y)
{
return y * MAZE_W + x;
}
Now you can actually step into the function to debug it! And you don't end up with any of the weird side effects of using macros.
Use Whitespace
You also say:
Here's the program (136 code, 17 blank and 7 comment lines, all < 80 characters)
Is there a reason why those numbers are important to you? I find the code to be rather cramped and hard to read. For example, I would never put the body of a for loop on the same line as the conditions as you do in main(). Same with an if statement. (Further, I would always put brackets around the body of an if, for, or while, even if it's only a single line.)
Of all your functions, only the last one, solve() has any blank lines in it. The blank lines really make it easier to read. You should add them between logical parts of your other functions.
Use The Right Type
I don't see much point in making a 1-dimensional array to represent a 2-dimensional array. First, it means you have to use the funky macro to get a value into or out of it. Second, it doesn't really save you anything. A 2D array of ints is going to be the same size.
Furthermore, rather than making it an array of ints then assigning enums to each member of the array, why not just make it an array of enums? That makes the code easier to understand and makes your intent more clear. Doing that requires making a typedef of the enum rather than leaving it anonymous. Something like this:
typedef enum {
COLOR_WALL,
COLOR_HOLE,
COLOR_PATH,
COLOR_ENDS,
COLOR_COUNT
} Color; | {
"domain": "codereview.stackexchange",
"id": 27638,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, random, image, animation",
"url": null
} |
This is an essential point. A mathematical concept typically can be modeled in a number of different ways. We must not confuse the concept with one model. This is another aspect of the abstractness of mathematics.
I continued:
Now let's look back at your problem.
You are trying to answer this question:
Each time Sue rents a bicycle, she pays a fixed base
cost plus an hourly rate for the time the bicycle is
rented. Last Saturday she paid $12.00 to rent a bicycle for 6 hours, and yesterday she paid $9.50 to
rent a bicycle for 4 hours. Which of the following
equations shows the total cost C, in dollars, for
Sue to rent a bicycle for n hours?
That problem is not about a graph, but about rental costs. Your approach to solving it was to represent it as a problem about finding the equation of a line by looking for the slope and intercept. (There are other ways to model the problem, but you chose this because it is familiar. A standard method of problem solving is to model a new problem as one that is familiar, so you can use tools you already have.)
So Sam has shown that he can choose a representation for a problem. Good!
You also recognized that "a fixed base cost plus an hourly rate" means that the cost is a linear function. Suppose that function is
C = mn + b
or, using the variables you are more familiar with,
y = mx + b
Again, you think of it this way just because you want to turn the problem into one you know how to solve. All these variable names -- x, y, m, and b -- are things you chose to introduce, along with the idea of graphing. (By the way, the variables m and b here are parameters, which means they are considered fixed for the sake of the problem, namely graphing a line, whereas x and y are considered to actually vary. But really they are all just variables -- letters representing numbers you don't know. They just play different roles.)
Sam also recognized that the letters n and C played the roles commonly taken by x and y. The concept of a parameter may be the hardest here.
## Parameters transformed | {
"domain": "themathdoctors.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9674102589923635,
"lm_q1q2_score": 0.8082553430316649,
"lm_q2_score": 0.8354835350552604,
"openwebmath_perplexity": 316.17781176263634,
"openwebmath_score": 0.679009735584259,
"tags": null,
"url": "https://www.themathdoctors.org/when-parameters-become-variables/"
} |
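The rental problem quoted above pins down the two parameters from the two observations, and the arithmetic is quick to sketch:

```python
# Two rentals give two equations in the parameters m (hourly rate) and b (base):
#   12.00 = 6m + b
#    9.50 = 4m + b
m = (12.00 - 9.50) / (6 - 4)   # slope: 2.50 / 2 = 1.25 dollars per hour
b = 12.00 - m * 6              # intercept: 12.00 - 7.50 = 4.50 base cost

def C(n):
    """Total cost in dollars for n hours: C = m*n + b."""
    return m * n + b

print(m, b, C(6), C(4))   # recovers both observed payments
```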
In particular, this convergence is true for left- and right-hand sums as well as the average of the two: $$\tag{1}\lim_{N \to \infty}A(P_N,f) = \lim_{N \to \infty} \sum_{k=1}^N \frac{1}{2}(f(x_{k-1}) + f(x_{k}))(x_k - x_{k-1}) = \int_0^1 f(x) \, dx.$$
Note that
$$\tag{2}A(P_N,f_N) = \sum_{k=1}^N \frac{1}{2}(f_N(x_{k-1}) + f_N(x_{k}))(x_k - x_{k-1}) \\= \sum_{k=1}^N \frac{1}{2}\{[f_N(x_{k-1})-f(x_{k-1})] + [f_N(x_{k})-f(x_k)]\}\,(x_k - x_{k-1}) + \sum_{k=1}^N \frac{1}{2}(f(x_{k-1}) + f(x_{k}))(x_k - x_{k-1}).$$
Hence, using (2) and applying the triangle inequality we have | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9820137889851291,
"lm_q1q2_score": 0.8366293972517306,
"lm_q2_score": 0.8519528000888386,
"openwebmath_perplexity": 117.75766733413857,
"openwebmath_score": 0.9793974161148071,
"tags": null,
"url": "https://math.stackexchange.com/questions/2562405/a-trapezoidal-approximation-for-a-sequence-of-uniformly-converging-functions"
} |
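The convergence in (1) can be illustrated numerically. The sketch below uses $f(x)=x^2$ and the uniformly convergent perturbation $f_N = f + 1/N$; both are my own choice of example, not from the original question.

```python
def trapezoid(f, N):
    """Trapezoidal sum A(P_N, f) over the uniform partition of [0, 1]."""
    h = 1.0 / N
    return sum(0.5 * (f((k - 1) * h) + f(k * h)) * h for k in range(1, N + 1))

f = lambda x: x * x                    # limit function; its integral is 1/3

def f_N(N):
    return lambda x: x * x + 1.0 / N   # uniform perturbation, sup-error 1/N

# The perturbation contributes exactly 1/N to the trapezoidal sum here, and
# the trapezoid error for x^2 is O(1/N^2), so A(P_N, f_N) -> 1/3 as N grows.
for N in (10, 100, 1000):
    print(N, trapezoid(f_N(N), N))
```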
inorganic-chemistry, oxidation-state
Title: Oxidation Number What does Oxidation number actually signify?
If it means the number of electrons gained or accepted, why is it a fraction in some cases, like in superoxides? Quoting from Wikipedia:
The oxidation state, often called the oxidation number, is an
indicator of the degree of oxidation (loss of electrons) of an atom in
a chemical compound. Conceptually, the oxidation state, which may be
positive, negative or zero, is the hypothetical charge that an atom
would have if all bonds to atoms of different elements were 100%
ionic, with no covalent component. This is never exactly true for real
bonds.
Coming to fractional oxidation state:
It is not actually a fraction; rather, it is the average of the various oxidation states of the element's atoms present in the compound.
For example:
$\ce{Fe3O4}$
Here the two Iron atoms that are bonded to three oxygen atoms have an oxidation state of +3 each, and the Iron atom that is bonded to two oxygen atoms has an oxidation state of +2.
Thus taking the average: (3+3+2)/3 = 8/3.
You would have got the same if you used the fact that $\ce{Fe3O4}$ is neutral. If you assume the oxidation state of Iron to be x, then 3x+4(-2)=0. Solving, you would get x as 8/3 (however, this oxidation state would be the average oxidation state of Iron).
$\ce{KO2}$
In this case the oxygen on the left has an oxidation state of zero as it is only bonded to itself, and the oxygen on the right-hand side is in an oxidation state of -1 (1 bond to another O and a charge of -1). In reality the -1 charge would be spread over the whole species - oxidation state is only a system of chemical "book-keeping". The oxidation state of -1/2 can be regarded as the mean of zero and -1.
There are many more such examples.
Image courtesy:ChemSpider | {
"domain": "chemistry.stackexchange",
"id": 7109,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, oxidation-state",
"url": null
} |
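The charge-balance arithmetic above can be checked with exact rational numbers; a short sketch using Python's `fractions`:

```python
from fractions import Fraction

# Fe3O4 is neutral: 3x + 4*(-2) = 0, so the average Fe oxidation state is
x = Fraction(-4 * (-2), 3)
print(x)   # 8/3

# The same value as the mean of the individual states +3, +3, +2:
assert x == Fraction(3 + 3 + 2, 3)

# KO2: K contributes +1, so the O2 unit carries -1 spread over two atoms,
# i.e. an average oxidation state of -1/2 per oxygen (mean of 0 and -1).
print(Fraction(0 + (-1), 2))
```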
xacro, ros-indigo
<xacro:Sensor_setup with_Sensor="$(arg with_Sensor)" Sensor_setting="$(arg Sensor_setting)" />
Originally posted by Franzisdrak on Gazebo Answers with karma: 43 on 2016-11-23
Post score: 0
Multiple complex sensors load in Gazebo (take Atlas, Valkyrie, and PR2 as examples).
This is likely a problem with your configuration. I don't think you should have two robot_description elements. Given that ROS is failing, you might want to ask answers.ros.org.
Originally posted by nkoenig with karma: 7676 on 2016-11-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4017,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "xacro, ros-indigo",
"url": null
} |
linear-algebra
Title: How doesn't combining two eigenvectors that have the same eigenvalue for a specific matrix represent every vector left in the plane? If we have a 2D plane and the hermitian matrix $L$ where:
$$L|\lambda_1\rangle=\lambda|\lambda_1\rangle$$
$$L|\lambda_2\rangle=\lambda|\lambda_2\rangle$$
Given that $|\lambda_1\rangle$ and $|\lambda_2\rangle$ are linearly independent, we can make any vector
$$|A\rangle=\alpha|\lambda_1\rangle +\beta|\lambda_2\rangle$$
that will be an eigenvector for $L$ with the same eigenvalue.
Can't I, with the last equation, create every vector in that plane and therefore every vector is an eigenvector for the matrix with the same eigenvalue?
I know this question might sound ridiculous and that there is a mistake in my reasoning, but I can't find where the mistake is. There is no mistake. If you have a $2 \times 2$ matrix with one eigenvalue $\lambda$ and 2 linearly independent eigenvectors, then the whole plane is the eigenspace and the matrix is equal to $\lambda I$. | {
"domain": "quantumcomputing.stackexchange",
"id": 5370,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "linear-algebra",
"url": null
} |
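The answer's conclusion, that two independent eigenvectors sharing one eigenvalue force $L = \lambda I$, can be verified numerically. A sketch (the vectors and $\lambda$ are arbitrary choices of mine):

```python
import numpy as np

lam = 2.5
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])           # linearly independent of v1

# Solve for the matrix L satisfying L v1 = lam v1 and L v2 = lam v2:
# L [v1 v2] = lam [v1 v2]  =>  L = lam * V * V^{-1} = lam * I.
V = np.column_stack([v1, v2])
L = (lam * V) @ np.linalg.inv(V)
print(L)                            # forced to be lam * identity

# Hence every vector in the plane is an eigenvector with eigenvalue lam:
a = 0.7 * v1 - 3.0 * v2
print(L @ a, lam * a)
```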
Posted in the category Judaica articles. This page can be reached via a direct link. | {
"domain": "judaicabennysart.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9715639653084245,
"lm_q1q2_score": 0.8258013602812431,
"lm_q2_score": 0.8499711699569787,
"openwebmath_perplexity": 3834.9378956019077,
"openwebmath_score": 0.9542654752731323,
"tags": null,
"url": "http://judaicabennysart.com/sma-life-ehk/619035-integral-of-e%5E2x"
} |
special-relativity, speed-of-light
Title: The effects of light needing to reach observers from observed objects I know some special relativity, but the material that I've learned has always treated observers as being able to instantly view objects. In reality, the light from an object first needs to travel to the observer. Doesn't this additional time cause some additional effects?
Imagine observer A stands on Earth, while observer B leaves Earth with speed $v$, travels to some point distance $L$ away (distance $L$ according to A), and returns.
Assume that $v$ is very close to $c$. What does A then observe? When he sees B at the midpoint $L/2$, B is actually very nearly at $L$. $B$ then reaches $L$, and starts the return journey. On this return journey, $B$ travels slightly behind the light he emits. Hence A will see the return in a very short burst, and see B travel at a speed far larger than $c$. Also, A will see B travel at a speed of slightly more than $v/2$ during the outbound journey, since he sees $B$ at $L/2$ when he is actually nearly at $L$.
I've tried to put this mathematically: Let $x(t)$ be the distance at which A sees B at time $t$. A sees where B was a period of time of $x/c$ earlier, since this is the time it takes light to traverse the distance. So we get
\begin{align}
x&=v\left(t-\frac{x}{c}\right)=vt-\frac{v}{c}x \\
\Longrightarrow x&=\frac{v}{1+\frac{v}{c}}t \\
\Longrightarrow v'&=\frac{v}{1+\frac{v}{c}}
\end{align}
where $v'$ is the observed velocity. | {
"domain": "physics.stackexchange",
"id": 98305,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, speed-of-light",
"url": null
} |
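The derived $v' = v/(1+v/c)$ is easy to evaluate for a concrete speed. In the sketch below, the approach-speed formula $v/(1-v/c)$ is my extension of the same light-travel-time argument to the return leg; it is not derived in the post itself.

```python
def observed_recession_speed(v, c=1.0):
    """Apparent speed of an object receding at v, from x = v(t - x/c)."""
    return v / (1.0 + v / c)

def observed_approach_speed(v, c=1.0):
    """Apparent speed on the return leg (assumption: same light-delay
    bookkeeping with the emission point approaching, giving v/(1 - v/c))."""
    return v / (1.0 - v / c)

v = 0.9                              # in units of c
print(observed_recession_speed(v))   # ~0.474c: a bit more than v/2
print(observed_approach_speed(v))    # ~9c: the apparently superluminal return
```

This matches the qualitative description above: the outbound trip looks slowed to roughly half speed, while the return is compressed into a short, apparently faster-than-light burst.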
slam, navigation, ros-kinetic, rtabmap, pointcloud
R: [0.9961427784023481, -0.08753401450757768, 0.006112392415202644, 0.08753067792426865, 0.9961614936500219, 0.0008117826527036613, -0.006159988552602093, -0.00027362957528120994, 0.999980989653247]
P: [212.49235049334482, 0.0, 124.71735668182373, 0.0, 0.0, 212.49235049334482, 114.55551147460938, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
x_offset: 0
y_offset: 0
height: 0
width: 0
do_rectify: False
right
header:
seq: 1316
stamp:
secs: 431303367
nsecs: 0
frame_id: "base_link"
height: 250
width: 250
distortion_model: "plumb_bob"
D: [-0.3571397802529545, 0.14349299686495984, 0.0007979733631606835, 0.000265477807614921, 0.0]
K: [217.01144313802783, 0.0, 116.14327950230813, 0.0, 217.53267767947415, 119.10319721480494, 0.0, 0.0, 1.0] | {
"domain": "robotics.stackexchange",
"id": 30315,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "slam, navigation, ros-kinetic, rtabmap, pointcloud",
"url": null
} |
algorithms, greedy-algorithms, correctness-proof, integers
Title: Prove a greedy algorithm that obtains the minimum integer with at most k adjacent swaps is correct This problem is from LeetCode.
You're given a string num representing the digits of a very large integer and an integer k. You are allowed to swap any two adjacent digits of the integer at most k times.
Return the minimum integer you can obtain also as a string.
My question is, why does the following greedy algorithm work?
minimum_integer(num, k):
n <- length of num
i <- 0
while i < n and k > 0:
pos <- position of the first smallest element in num[i..min(n-1, i + k)]
while pos > i:
swap pos and pos-1 of num
decrease pos by 1
decrease k by 1
increase i by 1
return num | {
"domain": "cs.stackexchange",
"id": 19813,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, greedy-algorithms, correctness-proof, integers",
"url": null
} |
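The pseudocode above transcribes directly into Python. A sketch (O(n²) worst case, like the original; the two test inputs are values I checked by hand against the stated problem):

```python
def minimum_integer(num: str, k: int) -> str:
    """Greedy: repeatedly bring the smallest digit reachable within the
    remaining swap budget k to the front of the unfixed suffix."""
    digits = list(num)
    n = len(digits)
    i = 0
    while i < n and k > 0:
        window_end = min(n - 1, i + k)
        # First occurrence of the smallest digit in num[i..i+k]:
        pos = min(range(i, window_end + 1), key=lambda j: digits[j])
        while pos > i:                       # bubble it left with adjacent swaps
            digits[pos - 1], digits[pos] = digits[pos], digits[pos - 1]
            pos -= 1
            k -= 1
        i += 1
    return "".join(digits)

print(minimum_integer("4321", 4))   # "1342": 3 swaps bring '1' front, 1 swap left
print(minimum_integer("100", 1))    # "010"
```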
python, python-3.x
return {str(key): list(value) for key, value in imports.items()}
def get_args() -> Namespace:
"""Parses and returns the command line arguments."""
parser = ArgumentParser(description=DESCRIPTION)
parser.add_argument('path', nargs='*', type=Path, default=[Path.cwd()],
help='the files and folders to scan for imports')
parser.add_argument('-E', '--exclude-stdlib', action='store_true',
help='exclude imports from the standard library')
parser.add_argument('-e', '--exclude-modules', nargs='+', default=(),
help='exclude the specified imports')
parser.add_argument('-i', '--indent', type=int, metavar='spaces',
help='set indentation for JSON output')
parser.add_argument('-r', '--roots', action='store_true',
help='print only distinct root modules and packages')
parser.add_argument('-s', '--stdlib', type=Path, default=STDLIB,
help='specifies the root of the standard library')
parser.add_argument('-v', '--verbose', action='store_true',
help='print verbose messages')
return parser.parse_args()
def main():
"""Runs the script."""
args = get_args()
basicConfig(format=LOG_FORMAT, level=INFO if args.verbose else WARNING)
files = filter(ispyfile, chain(*map(iterfiles, args.path)))
imports = get_imports_from_files(files)
excludes = set(args.exclude_modules)
if args.exclude_stdlib:
excludes |= set(lsstdlib(args.stdlib)) | {
"domain": "codereview.stackexchange",
"id": 40251,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x",
"url": null
} |
python, python-3.x, console
def calc_s():
print ("----------------------------------")
num1 = int(input("Enter a number: "))
num2 = int(input("Enter a number: "))
print ("----------------------------------")
answer = num1-num2
print ("Your answer is:" ,answer)
print ("----------------------------------")
time.sleep(3)
end()
def calc_d():
print ("----------------------------------")
num1 = int(input("Enter a number: "))
num2 = int(input("Enter a number: "))
print ("----------------------------------")
answer = num1/num2
print ("Your answer is:" ,answer)
print ("----------------------------------")
time.sleep(3)
end() | {
"domain": "codereview.stackexchange",
"id": 13350,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, console",
"url": null
} |
java, exception-handling
Here are some examples of what can be done when the above example is not sufficient:
public static class ImageReadException extends Exception {
private static final long serialVersionUID = 1L;
public ImageReadException(String message, Throwable cause) {
super(message, cause);
}
public ImageReadException(String message) {
super(message);
}
}
private static BufferedImage readWrap(String imagePath) throws ImageReadException {
try {
return readPropagate(imagePath);
} catch (IOException e) {
// log with your normal logger
logException(e, "An error occurred while loading image " + imagePath);
// Message should be something meaningful to the calling site
throw new ImageReadException("An error occurred while loading image", e);
}
}
private static BufferedImage readWithDefault(String imagePath) {
try {
return readPropagate(imagePath);
} catch (IOException e) {
// log with your normal logger
logException(e, "An error occurred while loading image " + imagePath);
//default image may be passed as a parameter instead
return IconManager.DEFAULT_IMAGE;
}
}
// YOU SHOULD NOT DO THIS IN LIBRARY CODE
// The decision of whether to crash the program belongs to the top level application
private static BufferedImage readThrowSilently(String imagePath) {
try {
return readPropagate(imagePath);
} catch (IOException e) {
// you can do this, e.g., while loading a desktop application
// you discover a vital resource could not be loaded
// continuing execution does not make sense
throw new RuntimeException("An error occurred while loading image " + imagePath, e);
}
}
private static void logException(IOException e, String message) {
Logger.getLogger(IconManager.class.getName()).log(Level.SEVERE, message, e);
} | {
"domain": "codereview.stackexchange",
"id": 3124,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, exception-handling",
"url": null
} |
• Because $\sin(x)=x-\frac{x^3}6+O(x^5)$, and so $\sin(x)/x = 1-\frac{x^2}6+O(x^4)$. – Glen O May 13 '13 at 5:48
• Nevermind I didn't read carefully enough – name May 13 '13 at 6:15
• @Wishingwell the dummy variable is $i$ and not $n$. $n$ is a constant in the expression. – Milind Hegde May 13 '13 at 6:17
• @Wishingwell : $$\sum_{i=1}^n\frac1n=\frac1n\sum_{i=1}^n1=\frac1n\cdot n=1$$ – DonAntonio May 13 '13 at 6:20 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9802808759252645,
"lm_q1q2_score": 0.821102833335627,
"lm_q2_score": 0.8376199653600372,
"openwebmath_perplexity": 535.1886920590706,
"openwebmath_score": 0.8489954471588135,
"tags": null,
"url": "https://math.stackexchange.com/questions/390115/find-lim-n-to-infty-frac-sin-12-sin-frac12-cdotsn-sin-frac1n/390118"
} |
classical-mechanics, resource-recommendations, education
Title: Landau and Lifshitz or Goldstein classical mechanics For someone who has a PhD in electromagnetic engineering and a good mathematical background, and who wishes to move to physics and start by mastering classical mechanics, would it be better to read Landau & Lifshitz's book first or Goldstein's? And although, ideally, reading both would be best, which would make a more complete reading (commensurate with modern education standards) on the subject and is more self-contained, if only one of them could be read (due to allowed time)? The intention is to have solid foundations in classical mechanics and then move on to other topics in physics (quantum mechanics, relativity, field theory, etc). Goldstein first.
L&L's Mechanics is for sure one of the most beautiful books ever written in physics. Every line you read feels like part of a masterpiece. Some people complain that L&L omits Noether's theorem. That is not quite true: the theorem is never mentioned by name, but the whole book is written on the basis of the relation between symmetry and conservation laws.
L&L is not a textbook in the sense that it cannot be used alone to guide a thorough study of classical mechanics. The book assumes a good background in classical mechanics and does not define elementary concepts. Moreover, it does not follow a historical construction of classical mechanics; it simply starts with Hamilton's principle as a postulate. This is the book to read after you already have a good grasp of what classical mechanics is.
Goldstein is not as beautiful as L&L, but it is a superb textbook. Everything is in there; it is self-contained. It follows a logical and historical construction, starting from d'Alembert's principle. It is a shame, though, that it does not derive Hamilton's principle from d'Alembert's principle. It has very good examples and very good (and difficult) exercises. In my opinion it is the best choice for a standard course on classical mechanics.
"domain": "physics.stackexchange",
"id": 39405,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, resource-recommendations, education",
"url": null
} |
ros, navigation, ekf, navsat-transform, robot-localization
<rosparam param="process_noise_covariance">[0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, | {
"domain": "robotics.stackexchange",
"id": 22344,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, navigation, ekf, navsat-transform, robot-localization",
"url": null
} |
ros, gazebo, spawn
Title: can't spawn model in gazebo
Hello everyone, I start Gazebo with $ roscore & $ rosrun gazebo_ros gazebo, then I want to add a coke_can to it, so I type: $ rosrun gazebo_ros spawn_model -database coke_can -gazebo -model coke_can -y 1 but nothing happens in the Gazebo world. The info in my terminal is below:
spawn_model script started
Deprecated: the -gazebo tag is now -sdf
[INFO] [WallTime: 1420599431.921951] [0.000000] Loading model xml from Gazebo Model Database
[INFO] [WallTime: 1420599431.922563] [0.000000] Waiting for service /gazebo/spawn_sdf_model
[INFO] [WallTime: 1420599431.925847] [0.000000] Calling service /gazebo/spawn_sdf_model
[INFO] [WallTime: 1420599441.933129] [619.918000] Spawn status: SpawnModel: Model pushed to spawn queue, but spawn service timed out waiting for model to appear in simulation under the name coke_can
If anyone knows, please tell me, thx
Originally posted by forinkzan on ROS Answers with karma: 141 on 2015-01-06
Post score: 6
Hi! I had the same problem, with the same output from the terminal.
In my case the problem was due to the URDF file that I was using. It seems the URDF had some deprecated functions from old versions of Gazebo, so the parser just gave an error and didn't spawn the model. The "queue error", I think, was because the spawn node just sends the model and waits for Gazebo to generate it in the simulation. Since Gazebo can't do that, it just says that it is "waiting to be published".
My solution was to just delete those unused old functions from the URDF, and it worked fine! Anyway, if this is your case, you should see some parse error about the URDF in the terminal where you are running Gazebo.
I hope this answer helps you!
"domain": "robotics.stackexchange",
"id": 20494,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo, spawn",
"url": null
} |
homework-and-exercises, pressure, buoyancy, fluid-statics
Title: Difference between water and ethanol density for an object I'm just slightly confused.
Say that I had an object that floated 19.4m in water from the bottom of the object to the surface.
Now I was going to change the fluid to ethanol, which has a density of 789kg $m^{-3}$
Would this mean that the object sinks deeper, so that the calculation to find the new sinking depth is $19.4 \times (\frac{1030}{789}) = 25.3 \ metres$, or the opposite? Archimedes' principle says:
the upward buoyant force that is exerted on a body immersed in a fluid is equal to the weight of the fluid that the body displaces
Or as an equation:
$$ \rho V g = mg $$
where $V$ is the volume displaced and $\rho$ is the density of the liquid. A quick rearrangement of this gives:
$$ V = \frac{m}{\rho} $$
So if you reduce the density $\rho$ the volume displaced $V$ goes up i.e. your object sinks lower.
Relating $V$ to the depth the object sinks depends on the shape of the object. | {
"domain": "physics.stackexchange",
"id": 28612,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, pressure, buoyancy, fluid-statics",
"url": null
} |
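The scaling in the question can be checked numerically. This short sketch assumes a constant horizontal cross-section (a prism), so the immersion depth scales directly with the displaced volume, i.e. as the inverse of the fluid density; the function name and masses are illustrative, not from the original post:

```python
RHO_SEAWATER = 1030.0  # kg/m^3, the density used in the question
RHO_ETHANOL = 789.0    # kg/m^3

def immersion_depth(depth_in_old_fluid, rho_new, rho_old=RHO_SEAWATER):
    """New immersion depth when the fluid density changes.

    Assumes a constant horizontal cross-section, so displaced volume
    (and hence depth) scales as 1/density.
    """
    return depth_in_old_fluid * rho_old / rho_new

new_depth = immersion_depth(19.4, RHO_ETHANOL)  # ~25.3 m: the object sinks deeper
```

The lower-density fluid must be displaced in greater volume to balance the same weight, reproducing the $19.4 \times (1030/789) \approx 25.3$ m figure.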
ros, function
Title: Calling a function in a function
Hi, can help me to check what is my mistake in the following code please? I did try to refer to http://wiki.ros.org/roscpp_tutorials/Tutorials/UsingClassMethodsAsCallbacks but fail. Thank you.
#include "ros/ros.h"
#include <visualization_msgs/Marker.h>
#include <pcl/ros/conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl_ros/point_cloud.h>
ros::Subscriber sub_;
ros::Publisher markerpub_; | {
"domain": "robotics.stackexchange",
"id": 16495,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, function",
"url": null
} |
frequency-spectrum, upsampling
Title: Up-sampling images and the Periodicity of the copies In the three figures below, $H(e^{j\omega})$, $H(e^{2j\omega})$, and $H(e^{4j\omega})$ are shown. I understand that up-sampling shrinks the spectrum. However, I need a few clarifications:
Why does up-sampling result in the creation of new copies (imaging)?
What is the periodicity of the images? In other words, how, for example, in the third figure, the second spectrum is located at $\pi/2$ and the third spectrum is centered at $\pi$? | {
"domain": "dsp.stackexchange",
"id": 12315,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "frequency-spectrum, upsampling",
"url": null
} |
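The imaging described in the question can be seen numerically: zero-insertion upsampling by a factor $L$ gives $Y(e^{j\omega}) = X(e^{jL\omega})$, so the length-$LN$ DFT of the upsampled signal is the length-$N$ DFT repeated $L$ times — copies spaced $2\pi/L$ apart in normalized frequency. A small NumPy check (illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 64, 4
x = rng.standard_normal(N)

# upsample by zero-insertion: keep x at multiples of L, zeros elsewhere
y = np.zeros(L * N)
y[::L] = x

X = np.fft.fft(x)
Y = np.fft.fft(y)

# Y[k] = X[k mod N]: the original spectrum appears L times across [0, 2*pi),
# i.e. the images are spaced 2*pi/L apart
k = np.arange(L * N)
images_match = np.allclose(Y, X[k % N])
```

This also answers the periodicity question for $H(e^{4j\omega})$: with $L=4$, the copies sit at multiples of $\pi/2$.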
waves, causality, spacetime-dimensions, huygens-principle
Title: Regarding the velocity of waves in even dimensions A few years ago I asked on Reddit about the behavior of wave propagation in even and odd dimensions. I received this answer:
"The answer lies in the solutions to the wave equations. Essentially, in odd dimensions a wave will propagate at a single characteristic velocity $v$, while in even dimensions it propagates with all velocities $<v$."
Another user added: "If you interpret the mathematics strictly, the speeds are all strictly less than $v$."
This article, however, says in the second paragraph: "Of course, the leading edge of a wave always propagates at the characteristic speed $c$."
For that reason, I was wondering, is that information on Reddit correct? Does the wave, in even dimensions, propagate with all speeds less than $v$, or does it propagate with all speeds equal or less than $v$?
Edit:
The original comment (which is linked above) refers to the wave equation in this manner (direct quote):
“(I think the wave equation can approximately be written as v2 d2 /dx2 - d2 /dt2 = 0 in terms of v, at least up to some dimensionless constant)” It doesn't make sense to critique statements about wave speed when they don't even specify what speeds they are talking about.
In $n$-spherical waves there are at least three speeds, all of which are generally different: phase speed, group speed and leading edge speed. The latter is obviously always $c_0$, the speed of wave in the medium. The former two are generally different—both from $c_0$ and from each other. Moreover, these speeds also depend on distance from origin, approaching $c_0$ as the distance increases (this is because the wavefronts flatten, become closer to those of plane waves, which always travel at $c_0$).
In ref. 1 we can find the expressions for phase and group speeds of cylindrical and spherical monochromatic waves (expressible in cylindrical and spherical Hankel functions, respectively). The graphs for order $0$ functions can also be found there (figure 6): | {
"domain": "physics.stackexchange",
"id": 73228,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, causality, spacetime-dimensions, huygens-principle",
"url": null
} |
I am not sure how to manipulate it from there. Is that right so far?
7. ## Re: Prove x > ln(1+x) for all x > 0
Originally Posted by vidomagru
To show this let $\displaystyle f(x) = x - \ln(1+x);$ and finding the derivative of $\displaystyle f(x)$ we have: $\displaystyle f'(x) = 1 - \frac{1}{1+x} = \frac{x}{1+x}$ which is clearly positive for all $\displaystyle x > 0.$ We know that $\displaystyle f(0) = 0$ and that $\displaystyle f(x)$ is continuous on $\displaystyle [0,a]$ and differentiable on $\displaystyle (0,a)$ for any $\displaystyle a>0$. So using the mean value theorem we have:
$\displaystyle \frac{f(a) - f(0)}{a-0} = \frac{f(a)}{a} = \frac{a - \ln(1+a)}{a} = \frac{x}{1+x}.$
I am not sure how to manipulate it from there. Is that right so far?
Yes, that is right so far. Next, you know there exists $\displaystyle c\in (0,a)$ such that $\displaystyle f'(c) = \frac{f(a)}{a}$ so $\displaystyle f(a) = a\cdot f'(c)$. You just showed that $\displaystyle f'(c)>0$ and this is true for any $\displaystyle a>0$. The product of two positive numbers is positive. So, $\displaystyle f(a)$ is positive. Again, this is true for any $\displaystyle a>0$. So, you are done.
8. ## Re: Prove x > ln(1+x) for all x > 0
So here is a revamped attempt at the whole underlying question: | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363531263362,
"lm_q1q2_score": 0.8159546014681165,
"lm_q2_score": 0.828938806208442,
"openwebmath_perplexity": 208.99076706309833,
"openwebmath_score": 0.9970565438270569,
"tags": null,
"url": "http://mathhelpforum.com/calculus/222527-prove-x-ln-1-x-all-x-0-a.html"
} |
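The inequality established above, $x > \ln(1+x)$ for $x>0$, is easy to spot-check numerically (a sanity check, not part of the proof):

```python
import math

# f(x) = x - ln(1 + x); the mean value theorem argument shows f(x) > 0 for x > 0
def f(x):
    return x - math.log1p(x)  # log1p avoids rounding error for tiny x

samples = [1e-6, 1e-3, 0.5, 1.0, 10.0, 1e6]
all_positive = all(f(x) > 0 for x in samples)
```

For small $x$, $f(x) \approx x^2/2$, which is why the gap shrinks quadratically near the origin.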
Thus, we have a total of $\frac{(8!)^2}{8!}=40320$ ways.
• And how are you accommodating for the overcounting? For example, putting the first rook in row 1 column 1, the second rook in row 2 column 2, everything else in the same place, we have an identical configuration.) – Cameron Buie May 3 '13 at 2:40
• BTW, $8! = 40320$, not $5040$ (which is $7!$). :-) – ShreevatsaR May 3 '13 at 2:43
• Worth mentioning, then, yes? – Cameron Buie May 3 '13 at 2:43
• Oops...NOW the page updates to show me your edit. My bad. – Cameron Buie May 3 '13 at 2:45
• Thanks for all the help and comments! After this long day ... – Alice May 3 '13 at 2:47
As you have $8$ rows and $8$ rooks and no two rooks can be on the same row, each row should have exactly one rook.
As you have $8$ columns and $8$ rooks and no two rooks can be on the same column, each column should have exactly one rook.
So you can come up with a rook configuration by placing the first rook on some column of the first row, then the second rook on some other column of the second row, and so on. The number of configurations is therefore the number of ways you can list the $8$ different columns such that each of them is covered and none of them repeats. This is the number of permutations of the $8$ columns, which is $$8! = 8 \times 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 40320.$$
For two rooks to be attacking each other, they must either share a row or a column. For two rooks to not attack each other they must not share a row and they must not share a column. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.977022625406662,
"lm_q1q2_score": 0.8284836836736112,
"lm_q2_score": 0.8479677564567913,
"openwebmath_perplexity": 199.35189549394843,
"openwebmath_score": 0.7929572463035583,
"tags": null,
"url": "https://math.stackexchange.com/questions/379882/in-how-many-different-ways-can-we-place-8-identical-rooks-on-a-chess-board-so"
} |
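The permutation argument can be verified by brute force. Checking all placements on an 8×8 board is too expensive, so this sketch uses a 4×4 board, where the same reasoning predicts $4! = 24$ non-attacking placements:

```python
from itertools import combinations
from math import factorial

n = 4  # a small board keeps the brute force cheap; the argument is identical for 8
squares = [(r, c) for r in range(n) for c in range(n)]

def non_attacking(placement):
    # every rook in its own row AND its own column
    rows = {r for r, _ in placement}
    cols = {c for _, c in placement}
    return len(rows) == n and len(cols) == n

count = sum(1 for p in combinations(squares, n) if non_attacking(p))
# count == factorial(n): one rook per row, columns form a permutation
```

The same count for $n=8$ gives $8! = 40320$, matching the answer above.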
For a cantilever beam with a point load $P$ at its tip, the maximum deflection is $$\delta_{max} = \frac {PL^3}{3EI}$$
In this case assuming free sliding between the planks the load P is going to be supported equally between the 3 planks.
So the deflection will be $$\delta_{max} = \frac {(P/3)L^3}{3EI_{\text{single board}}}$$
Because $I$ is proportional to the cube of the board's height (in this case, its thickness), the single board's inertia will be $(1/3)^3=1/27$ that of the bonded boards. Therefore the unbonded deflection will be greater than the bonded boards by a factor of
$$\begin{gather} \dfrac{\left(\frac{1}{3}\right)}{\left(\frac{1}{27}\right)} = \frac{27}{3} = 9 \\ \therefore \delta_{unbonded}= 9\delta_{bonded} \end{gather}$$
• @Wasabi, thanks for the edit. I am liable to make spelling and arithmetic errors because I use my cell phone to write my answers. – kamran Sep 5 '18 at 19:06
• The three beams are subject to a point load at the end, but also act on each other unless the point load is perfectly distributed among perfectly equal beams (even if we ignore friction there will be a vertical component) – mart Aug 7 '20 at 5:23
Thanks to @kamran for his answer.
I simulated the problem in ANSYS Student v19 to verify his approach. In the pictures below, the upper beam is solid, the middle one is split into two segments, and the lower one is split into three segments. Each segment is allowed to slide with respect to its neighbors. It is clear that the deflection of the 3-segment beam is 9 times that of the full one.
In the case where the segments are bonded together (i.e. cannot slip with respect to each other), we get the same result in every case: the beams all act like a single solid body in bending:
Although I agree with @kamran, I have another way of thinking about it | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731158685837,
"lm_q1q2_score": 0.8201520656810721,
"lm_q2_score": 0.851952809486198,
"openwebmath_perplexity": 733.1173525945858,
"openwebmath_score": 0.7078895568847656,
"tags": null,
"url": "https://engineering.stackexchange.com/questions/23612/deflection-of-a-cantilever-beam-composed-of-separate-not-bonded-planks/23619"
} |
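The factor of 9 follows directly from the formulas above. A quick numeric check (the load, length, modulus, and section sizes below are arbitrary placeholders — the ratio is independent of them):

```python
P, L, E = 1000.0, 2.0, 200e9   # tip load (N), length (m), modulus (Pa) -- arbitrary
b, t = 0.1, 0.03               # cross-section width and total thickness (m)

def I_rect(width, height):
    """Second moment of area of a rectangular section about its neutral axis."""
    return width * height**3 / 12

# bonded: one solid section of thickness t
delta_bonded = P * L**3 / (3 * E * I_rect(b, t))
# unbonded: each of the 3 planks carries P/3 and has thickness t/3
delta_unbonded = (P / 3) * L**3 / (3 * E * I_rect(b, t / 3))
ratio = delta_unbonded / delta_bonded  # (1/3) / (1/27) = 9
```

The $1/3$ from load sharing and the $1/27$ from the cubed thickness combine to give exactly 9.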
navigation, ros2
Title: [ROS2][Nav2] Nav2 Planner Metrics
Hello,
I am new to ROS and I don't know if this is the right place to ask about this.
I did research but I couldn't find anything about how to calculate some planner metrics.
I would like to run a wide variety of planners (SmacPlanner, A*, Hybrid-A*), varying some parameters while measuring the time to obtain the global path and its length. For now, I'm clueless about how to do it.
Additionally, I would like to set the initial pose and the goal pose programmatically so I can guarantee that the start and final poses are always the same. I do not have this precision using rviz commands. I tried publishing to the topic /initialpose with the following command but without success.
ros2 topic pub -1 /initialpose geometry_msgs/PoseWithCovarianceStamped '{ header: {stamp: {sec: 0, nanosec: 0}, frame_id: "map"}, pose: { pose: {position: {x: 10, y: 10.0, z: 00.0}, orientation: {w: 0.1}}, } }'
Said that I have the following questions:
Is there a topic that I can subscribe in order to get those metrics (path length and average time to compute global path)?
Is the initialpose the correct topic to publish this pose? Which topic can I use to publish the goal pose? I tried the /goal_pose but also without success. | {
"domain": "robotics.stackexchange",
"id": 36609,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, ros2",
"url": null
} |
# Homework Help: A somewhat simple Probabilty problem
1. Dec 16, 2011
### Whitishcube
1. The problem statement, all variables and given/known data
Suppose two teams are playing a set of 5 matches until one team wins three games (or best of five).
Considering the possible orderings for the winning team, in how many ways could this series end?
2. Relevant equations
3. The attempt at a solution
So I think I have the solution, I just would like my logic to be checked in this.
If we think of the case where the winning team gets three wins first, we can think of the
last two games as losses, so we can essentially think of the number of possible outcomes
as the number of permutations of the set {W, W, W, L, L} (W is win, L is loss). This ends up being the multinomial coefficient:
$$\left( \begin{array}{c} 5\\ 3,2 \end{array} \right)= \frac{5!}{2! 3!} = 10,$$
so there are 10 possible outcomes.
Is this correct? Probability has a strange feel compared to most of the other math I have encountered...
2. Dec 16, 2011
### BruceW
Your answer would be correct if they always played all 5 games. But this isn't true, because they will stop as soon as the winner has won 3 games. So you have over-counted. You've got to think of a different way of counting the possible outcomes.
3. Dec 16, 2011
### Whitishcube
Hmm any pointers besides that? The only things I can think of are permutations and combinations. If they don't always play 5 games though, how can I use these methods here?
4. Dec 16, 2011
### dacruick
Technically you could use the logic that they play all 5 games and subtract from that the number of ways in which 5 games wouldn't be played. You know that a minimum of 3 have to be played.
5. Dec 16, 2011
### Whitishcube
So you're saying I should take the possible number of ways the 5 games could be played, which is 10 ways, and subtract the number of games where they win before all 5 games are played? | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9893474897884492,
"lm_q1q2_score": 0.8265835361401433,
"lm_q2_score": 0.8354835330070838,
"openwebmath_perplexity": 596.8107944920438,
"openwebmath_score": 0.48757830262184143,
"tags": null,
"url": "https://www.physicsforums.com/threads/a-somewhat-simple-probabilty-problem.560724/"
} |
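A brute-force enumeration (not from the thread) settles the debate above: stopping the series early happens to give the same total as the {W, W, W, L, L} permutation count, because each truncated series extends uniquely to a full 5-game pattern by filling the unplayed games with the loser's wins:

```python
from itertools import product

# play out every W/L pattern of 5 games, stopping as soon as either side has 3 wins
series = set()
for outcome in product('WL', repeat=5):
    played = []
    for g in outcome:
        played.append(g)
        if played.count('W') == 3 or played.count('L') == 3:
            break
    series.add(tuple(played))

wins_by_W = sum(1 for s in series if s.count('W') == 3)  # series won by team W
```

This counts 10 distinct series won by a fixed team (1 of length 3, 3 of length 4, 6 of length 5), so the asker's multinomial answer of 10 is in fact correct, and 20 series overall when either team may win.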
- 4 years, 8 months ago
That's really great. Two completely different ways of solving the problem :) I wouldn't know how to prove that $$a,b$$ are finite though.
- 4 years, 8 months ago
If you want a finiteness proof, use first principles. Let $$N_A$$ be the number of tosses until $$A$$ stops. The probability $$P[N_A>n]$$ is the probability that $$A$$ has not thrown HT in the first $$n$$ tosses. This can only happen if $$A$$ has thrown $$j$$ tails followed by $$n-j$$ heads, for some $$0 \le j \le n$$. Thus there are $$n+1$$ possible sequences, all equally likely, and so $P[N_A > n] = (n+1)2^{-n}$ Thus $P[N_A \ge n] \;=\; P[N_A > n-1] \; = \; n2^{1-n}$ Normal probability and series summation tricks give us $E[N_A] \;=\; \sum_{n=1}^\infty P[N_A \ge n] \; = \; \sum_{n=1}^\infty n2^{1-n} \; = \; 4$ The result for $$B$$ is a little more challenging. Let $$N_B$$ be the number of tosses until $$B$$ stops. The probability that $$N_B > n$$ is the probability that $$B$$ does not throw HH in the first $$n$$ tosses. This is the probability that the first $$n$$ tosses are made up of "HT"s and "T"s stuck together, plus the probability that the first $$n-1$$ tosses are made up of "HT"s and "T"s stuck together, followed by a single "H". | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.978712648206549,
"lm_q1q2_score": 0.8571438058606087,
"lm_q2_score": 0.8757869916479466,
"openwebmath_perplexity": 660.6993215232189,
"openwebmath_score": 0.9759904146194458,
"tags": null,
"url": "https://brilliant.org/discussions/thread/puzzle-paradox-in-probability/"
} |
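The two key facts in the finiteness argument above can be checked by machine: the count of length-$n$ sequences avoiding "HT", and the resulting series for $E[N_A]$ (a verification sketch, not part of the original post):

```python
from itertools import product

# Brute-force check of P[N_A > n] = (n+1) / 2^n:
# a toss sequence avoids "HT" iff it is some tails followed by some heads,
# giving exactly n+1 such sequences of length n
counts_match = all(
    sum(1 for s in product('HT', repeat=n) if 'HT' not in ''.join(s)) == n + 1
    for n in range(1, 13)
)

# E[N_A] = sum_{n>=1} P[N_A >= n] = sum_{n>=1} n * 2^(1-n) = 4
expected_tosses = sum(n * 2.0 ** (1 - n) for n in range(1, 200))
```

The truncated series converges geometrically, so 200 terms are far more than enough for the limit of 4.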
• Which is fine until you have an $xy$ term, of course. – Chappers Jun 1 '15 at 12:44
If $a$ is not a “perfect square” there's no problem either. If your equation is $$ax^2+bx+c=0$$ then it's equivalent to $$4a^2x^2+4abx+4ac=0$$ and completing the square is more evident: $$4a^2x^2+4abx+b^2-b^2+4ac=0$$ or $$(2ax+b)^2-(b^2-4ac)=0$$ If you just want to factor the polynomial $ax^2+bx+c$ (with $a\ne0$, of course), just do the same: $$ax^2+bx+c=\frac{1}{4a}(4a^2x^2+4abx+4ac)= \frac{1}{4a}\bigl((2ax+b)^2-(b^2-4ac)\bigr)$$ If $b^2-4ac<0$ there's nothing else to do, because the polynomial is irreducible over the reals; if $b^2-4ac=0$ it is $$\frac{1}{4a}(2ax+b)^2$$ and, if $b^2-4ac>0$ you get $$ax^2+bx+c=\frac{1}{4a}\left(2ax+b-\sqrt{b^2-4ac}\right)\left(2ax+b+\sqrt{b^2-4ac}\right)$$
Actually, the quadratic formula is derived BY completing the square. Yes, any quadratic equation can be solved by completing the square. The only reason to use the quadratic formula is that it might be simpler than completing the square.
One method that avoids fractions until the very end is to multiply through by $4a$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464485047916,
"lm_q1q2_score": 0.8352648927138593,
"lm_q2_score": 0.8558511524823263,
"openwebmath_perplexity": 257.3476980951744,
"openwebmath_score": 0.9940724968910217,
"tags": null,
"url": "https://math.stackexchange.com/questions/1307025/i-think-i-can-complete-the-square-of-any-quadratic-is-it-true-any-reason-to-e/1307043"
} |
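The multiply-by-$4a$ identity can be verified numerically over random coefficients (a sanity check of the algebra, not from the original answer):

```python
import random

random.seed(42)
max_err = 0.0
for _ in range(200):
    a = random.choice([-1, 1]) * random.uniform(0.5, 5.0)  # a != 0
    b = random.uniform(-10.0, 10.0)
    c = random.uniform(-10.0, 10.0)
    x = random.uniform(-10.0, 10.0)
    lhs = a * x**2 + b * x + c
    # completed-square form: ((2ax + b)^2 - (b^2 - 4ac)) / (4a)
    rhs = ((2 * a * x + b) ** 2 - (b**2 - 4 * a * c)) / (4 * a)
    max_err = max(max_err, abs(lhs - rhs))
```

Setting the completed-square form to zero and solving $(2ax+b)^2 = b^2-4ac$ recovers the quadratic formula, which is exactly how it is derived.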
Putting together these facts, we conclude that $$f(a)$$ is the maximum value of $$f$$ for $$x \in (a - \delta, a + \delta)$$, i.e. $$f$$ has a local maximum at $$x=a$$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9879462219033657,
"lm_q1q2_score": 0.8100105235795251,
"lm_q2_score": 0.8198933359135361,
"openwebmath_perplexity": 83.59668776819058,
"openwebmath_score": 0.9322595596313477,
"tags": null,
"url": "https://math.stackexchange.com/questions/3073390/f-differentiable-5-times-around-x-a-fa-fa-fa-0-f4x/3073706#3073706"
} |
c#, .net, console
private static void PrintOutMovies()
{
Console.WriteLine();
Console.WriteLine("Your movies in your list are:");
foreach (var movie in ToBeWatchedMovies)
Console.WriteLine(movie);
}
private static bool WishToContinue()
{
while(true)
{
Console.WriteLine();
Console.WriteLine("Do you want to enter another movie? Y/N?");
var userInput = Console.ReadLine();
if (string.Equals(userInput, AnswerYes, StringComparison.OrdinalIgnoreCase))
return true;
if (string.Equals(userInput, AnswerNo, StringComparison.OrdinalIgnoreCase))
return false;
Console.WriteLine($"Please provide either '{AnswerYes}' or '{AnswerNo}' ");
}
}
}
I hope this helped you a bit. | {
"domain": "codereview.stackexchange",
"id": 38779,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, .net, console",
"url": null
} |
gazebo, navigation, rviz, ros-kinetic, ubuntu
Original comments
Comment by logan.ydid on 2019-07-22:
Yay! Thank you, this works! Now I'm just having trouble with the orientation of the sensor. In Gazebo, if I place something in front of the robot it appears above the robot, rotated 90 degrees about the z axis. I've tried adjusting the pose tag after the first comment in your code, but it didn't seem to make a difference. I've also tried changing the orientation in the joint definition, but that just changes the orientation of the sensor in Gazebo, not in RViz. Any suggestions?
Comment by Solrac3589 on 2019-07-23:
Maybe my error was adding "world" as the main frame. Can you try to add a link instead of a joint as the main frame? I have checked my code and that is what I have.
Anyway I am not quite sure that this is the solution, but lets try xD
Comment by logan.ydid on 2019-07-24:
Haha, I thought that seemed weird. So do you mean to do something like this? <pose frame="link_name">0.0 0.0 1.0 0.0 -1.5708 1.5708</pose>
Comment by Solrac3589 on 2019-07-24:
Yes! That's what I wanted to say! Any news?
Comment by logan.ydid on 2019-07-24:
Hmmm, unfortunately I still have been unable to get it to work. I have tried changing the link name to hokuyo_link, just like my laser scanner from before, and to base_link, but to no avail. I have also tried changing the name of the camera in the plugin to match the sensor name, but that didn't change anything either. When I look at the camera view from RViz, it looks like everything is in the correct orientation, but the point cloud still shows up above the robot. If it helps, I put the URDF I am currently using as an update to my post. Thank you so much for your help!
Comment by logan.ydid on 2019-07-27: | {
"domain": "robotics.stackexchange",
"id": 33477,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo, navigation, rviz, ros-kinetic, ubuntu",
"url": null
} |
newtonian-gravity, acceleration, units, dimensional-analysis, textbook-erratum
Since $m$ has units $\text{kg},$ this constant must have the units $\text m/\text s^2,$ and since $a = F/m,$ it is the acceleration that every object on that surface feels due to gravity. That's actually very important: it means that if you fill up a plastic water bottle 1/4 of the way full, and another all of the way full, even though one of them has 4 times the mass and experiences 4 times the force, those two effects perfectly balance out and they both fall exactly the same. If you have the plastic bottles to spare, feel free to do this experiment several times at home, releasing them side-by-side and confirming that they both hit the ground at the same time.
If we were to modify $g$ to have units $\text m/\text s$ then we would probably have to modify Newton's laws to say not $a = F/m$ but rather $v = F/m,$ and this would be disastrous: it would mean, for example, that you could not throw a ball upwards because the moment it left your hand it would have to have negative velocity. We would then have to postulate new "forces" to account for the fact that balls can be thrown in practice, call it an "inertial force" or whatever, that tries to keep it moving the direction it's moving... that's why Newton's laws are so very useful, because they don't require this and you can just say "once that ball leaves your hand, the only forces on it are gravity and wind resistance." | {
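The claim that force and mass cancel can be checked with a toy calculation (illustrative bottle masses, not from the original answer):

```python
g = 9.81  # m/s^2, the surface gravity constant discussed above

# Hypothetical bottle masses: quarter-full vs. full (kg).
m_quarter, m_full = 0.25, 1.0

# The full bottle feels four times the force...
F_quarter, F_full = m_quarter * g, m_full * g

# ...but a = F/m is identical, so both fall the same way.
a_quarter, a_full = F_quarter / m_quarter, F_full / m_full
print(a_quarter, a_full)  # 9.81 9.81
```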
"domain": "physics.stackexchange",
"id": 40163,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-gravity, acceleration, units, dimensional-analysis, textbook-erratum",
"url": null
} |
c#
switch (layoutAttribute._DataType)
{
case DataType.ALPHANUMERIC:
propertyValue = propertyValue.PadRight((layoutAttribute.EndPosition - layoutAttribute.StartPosition) + 1, ' ');
break;
case DataType.NUMERIC:
propertyValue = propertyValue.PadLeft((layoutAttribute.EndPosition - layoutAttribute.StartPosition) + 1, '0');
break;
}
for (int i = 0, j = layoutAttribute.StartPosition; i < propertyValue.Length; i++, j++)
result[j] = propertyValue[i];
}
break;
}
}
return result.ToString().ToUpper();
}
public abstract bool IsValid();
}
}
Test source file:
Header.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using FlatParser;
namespace FlatFileTest
{
[Layout(lineSize: 1200)]
public class Header : LayoutUtility
{
[LayoutDetails(0, 0, DataType.ALPHANUMERIC)]
public string TIPO_DE_REGISTRO { get; set; }
//[LayoutDetails(1, 16, DataType.ALPHANUMERIC)]
//public string FILLER0 { get; set; }
[LayoutDetails(17, 27, DataType.ALPHANUMERIC)]
public string NOME_DO_ARQUIVO { get; set; }
[LayoutDetails(28, 35, DataType.DATETIME, dateTimeFormat: "yyyyMMdd")]
public DateTime DATA_DE_GRAVACAO { get; set; }
[LayoutDetails(36, 43, DataType.NUMERIC)]
public string NUMERO_DA_REMESSA { get; set; }
//[LayoutDetails(44, 1198, DataType.ALPHANUMERIC)]
//public string FILLER1 { get; set; }
[LayoutDetails(1199, 1199, DataType.ALPHANUMERIC)]
public string FIM { get; set; } | {
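As a rough Python analogue of the padding logic in the switch above (a hypothetical helper for illustration, not part of the C# library):

```python
def pad_field(value: str, start: int, end: int, data_type: str) -> str:
    """Pad a value to its fixed-width slot [start, end] (inclusive)."""
    width = end - start + 1  # mirrors (EndPosition - StartPosition) + 1
    if data_type == "ALPHANUMERIC":
        return value.ljust(width, " ")   # like PadRight with spaces
    if data_type == "NUMERIC":
        return value.rjust(width, "0")   # like PadLeft with zeros
    return value

# NUMERO_DA_REMESSA occupies positions 36..43, so a numeric field is
# left-padded with zeros to 8 characters:
print(pad_field("42", 36, 43, "NUMERIC"))  # 00000042
```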
"domain": "codereview.stackexchange",
"id": 36240,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#",
"url": null
} |
The two concepts match. Let us at first revisit the logarithmic function:
The multivalued logarithm is defined as \begin{align*} \log(z)=\log|z|+i\arg(z)+2k\pi i\qquad\qquad k\in\mathbb{Z}\tag{1} \end{align*} In order to make single-valued branches of $$\log$$ we make a branch cut from $$0$$ to infinity, the most common being the negative real axis. This way we define the single-valued principal branch or principal value of $$\log$$ denoted with $$\mathrm{Log}$$ and argument $$\mathrm{Arg}$$. We obtain \begin{align*} \mathrm{Log}(z)=\log |z|+i\mathrm{Arg}(z)\qquad\qquad -\pi <\mathrm{Arg}(z)\leq \pi\tag{2} \end{align*}
Now let's look at the square root function:
The two-valued square root is defined as \begin{align*} z^{\frac{1}{2}}&=|z|^{\frac{1}{2}}e^{i\frac{\arg(z)+2k\pi}{2}}\\ &=|z|^{\frac{1}{2}}e^{i\frac{\arg(z)}{2}}(-1)^k\qquad\qquad k\in\mathbb{Z}\tag{3} \end{align*} | {
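A numerical sketch of (1)–(3) using Python's `cmath`, which implements the principal branch:

```python
import cmath

z = -4 + 0j

# Principal value (2): Log(z) = log|z| + i*Arg(z), with -pi < Arg(z) <= pi.
Log_z = cmath.log(z)
assert abs(Log_z - (cmath.log(abs(z)) + 1j * cmath.phase(z))) < 1e-12

# The two square-root branches (3): k = 0 and k = 1 differ only by sign.
r, theta = abs(z), cmath.phase(z)
root_k0 = r ** 0.5 * cmath.exp(1j * theta / 2)
root_k1 = -root_k0
print(root_k0, root_k1)  # roughly 2j and -2j for z = -4
```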
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9881308800022472,
"lm_q1q2_score": 0.8008592304625846,
"lm_q2_score": 0.81047890180374,
"openwebmath_perplexity": 158.31567822274167,
"openwebmath_score": 0.9843408465385437,
"tags": null,
"url": "https://math.stackexchange.com/questions/2536235/what-are-the-branches-of-the-square-root-function"
} |
python, recursion, math-expression-eval
def infix_eval(expr):
"""
Reduced infix eval, only works with 2 numbers.
>>> infix_eval('9 + 4')
13.0
>>> infix_eval('2 * -6')
-12.0
"""
a, oper, b = expr.split()
return OP_FOR_SYMBOL[oper](float(a),float(b))
def full_eval(expr, eval_type):
"""
Evals by the rules of eval_type starting from the inner
parenthesis.
>>> full_eval("(* 4 5 (+ 4 1))", polish_eval)
100.0
>>> full_eval("(* 4 (/ 10))", polish_eval)
0.4
>>> full_eval("(1 + (5 * 2))", infix_eval)
11.0
"""
if len(expr.split(' ')) == 1:
return float(expr)
inn = innermost_parens(expr)
new_expr = expr.replace('('+str(inn)+')',str(eval_type(inn)))
return full_eval(new_expr, eval_type)
def interface():
which_expr = input("Polish or infix? ")
if 'polish' in which_expr.lower():
evaller = lambda expr: full_eval(expr, polish_eval)
else:
evaller = lambda expr: full_eval(expr, infix_eval)
while True:
result = evaller(input('> '))
print(result) | {
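The helpers `OP_FOR_SYMBOL`, `innermost_parens`, and `polish_eval` are referenced but not shown in this excerpt; a minimal sketch consistent with the doctests could look like this (assumed, not the original code):

```python
import operator

OP_FOR_SYMBOL = {'+': operator.add, '-': operator.sub,
                 '*': operator.mul, '/': operator.truediv}

def innermost_parens(expr):
    """Return the contents of the innermost parenthesized group."""
    start = 0
    for i, ch in enumerate(expr):
        if ch == '(':
            start = i
        elif ch == ')':
            return expr[start + 1:i]
    return expr

def polish_eval(expr):
    """Reduced prefix eval: an operator symbol followed by numbers."""
    oper, *args = expr.split()
    nums = [float(a) for a in args]
    if len(nums) == 1:
        # Unary form, matching the '(/ 10)' doctest: (/ x) -> 1/x.
        # A unary '-' would need separate handling.
        nums = [1.0] + nums
    result = nums[0]
    for n in nums[1:]:
        result = OP_FOR_SYMBOL[oper](result, n)
    return result
```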
"domain": "codereview.stackexchange",
"id": 12964,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, recursion, math-expression-eval",
"url": null
} |
Since the Kullback-Leibler divergence is an asymmetric measure, an alternative directed divergence can be obtained by reversing the roles of the two models in the definition of the measure; the KL divergence belongs to the broad family of Bregman divergences. The Hellinger distance is another example of a divergence measure, similar to the Kullback-Leibler (KL) divergence, but unlike the KL divergence it is a symmetric metric. Variational inference (VI) converts an intractable inference problem into the minimization of the KL divergence over some simple class of distributions parameterized by variational parameters; a MaxEnt point of view instead further minimizes the Kullback-Leibler information divergence I(f||g) with respect to f. In active learning, the unlabeled point with the maximum expected KL divergence is added to the set of labeled data points. When comparing projected data distributions one may consider only differences in the first two moments, which yields the pairwise KL divergence between Gaussian approximations in closed form. As with discrete distributions, once two Gaussians are far apart the KL divergence grows unbounded, whereas the geodesic distance levels off. See T. van Erven and P. Harremoës, "Rényi divergence and Kullback-Leibler divergence," IEEE Transactions on Information Theory, for background. A common computational task: say I want to compute the pairwise KL divergence between a large number (O(100)) of multivariate Gaussians.
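For the pairwise computation mentioned at the end, the KL divergence between two multivariate Gaussians has a well-known closed form; a minimal NumPy sketch (illustrative parameters):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu0, S0 = np.zeros(2), np.eye(2)
mu1, S1 = np.array([1.0, 0.0]), 2.0 * np.eye(2)

# Asymmetry: the two directions give different values.
print(kl_gaussian(mu0, S0, mu1, S1))
print(kl_gaussian(mu1, S1, mu0, S0))
```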
"domain": "fastandstore.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.987946221548465,
"lm_q1q2_score": 0.8077196058073102,
"lm_q2_score": 0.8175744673038222,
"openwebmath_perplexity": 933.642810380257,
"openwebmath_score": 0.8982875347137451,
"tags": null,
"url": "http://fastandstore.it/wwch/kl-divergence-between-two-gaussians.html"
} |
math-expression-eval, powershell
Problem #0: Not using cmdlet parameter filters, e.g. instead of collecting all the [System.Math]::Pow($operators.Count,$range.count-1) permutations and then narrowing the huge array using the Where-Object cmdlet, I collect only desired values;
Problem #4: Searching text: the original script computes a string in the (commonly usable) ConvertTo-Base function and then casts it as [string[]][char[]]; | {
"domain": "codereview.stackexchange",
"id": 34648,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "math-expression-eval, powershell",
"url": null
} |
### y = fix(x)
Round towards zero means we want floor for positive arguments and ceil for negative arguments. We can write this as
Logical model
Implement the implications with integer $$y$$ and a combined model for ceil and floor
### y = rem(x,m)
$$y = \mathop{rem}(x,m)$$ means $$y = x - nm, n = \mathop{fix}(x/m)$$, meaning we have to implement $$\mathop{fix}(x/m)$$ using the model above (we assume $$m$$ is constant).
### y = mod(x,m)
$$y = \mathop{mod}(x,m)$$ means $$y = x - nm, n = \mathop{floor}(x/m)$$, meaning we have to implement $$\mathop{floor}(x/m)$$ using the model above (we assume $$m$$ is constant).
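Before encoding these in a MILP, the intended semantics of fix, rem, and mod can be sanity-checked in plain Python (an illustrative sketch, independent of the modelling language used above):

```python
import math

def fix(x):
    # Round towards zero: floor for positive, ceil for negative arguments.
    return math.floor(x) if x >= 0 else math.ceil(x)

def rem(x, m):
    return x - fix(x / m) * m         # y = x - n*m with n = fix(x/m)

def mod(x, m):
    return x - math.floor(x / m) * m  # y = x - n*m with n = floor(x/m)

print(fix(-2.7), rem(-7, 3), mod(-7, 3))  # -2 -1 2
```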
### y = sgn(x)
A logical model of $$s = \mathop{sgn}(x)$$ is
This is interpreted as
A big-M representation of the implications, using a margin $$\epsilon$$ around 0 if wanted leads to
### y = nnz(x)
To count elements let $y = \sum_{i=1}^n z_i$. Introduce additional binary vectors $v,u$ and the logical model
Once again standard implications
### y = f(x), x scalar integer
For an arbitrary function defined over a bounded integer set (here for simple notation assumed to be $1\leq x \leq M$), we simply see it as the disjoint logic model
This is compactly written as
### y = piecewise affine function
A typical piecewise affine model is represented as if $$A_ix\leq b_i$$ then $$y = c_i^Tx+d_i$$ where $$i = 1,\ldots,N$$. From above, this is
Standard implication…
### y = piecewise quadratic function | {
"domain": "github.io",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.985496421586532,
"lm_q1q2_score": 0.8212384026206947,
"lm_q2_score": 0.8333245911726382,
"openwebmath_perplexity": 816.054317907762,
"openwebmath_score": 0.7899975776672363,
"tags": null,
"url": "https://yalmip.github.io/tutorial/logicprogramming/"
} |
homework-and-exercises, operators, supersymmetry, commutator
\begin{equation}
\begin{split}
&\langle \ell_{i}, \ell_{j} \rangle = \ell_{(i+j) \bmod (n+1)} \\
&\langle \ell_{i}, \ell_{j} \rangle = (-1)^{i j +1} \langle \ell_{j}, \ell_{i} \rangle
\end{split}
\end{equation}
satisfying the Jacobi identity, which is not needed for the current discussion and we are not writing down here.
Now, the super-Poincare algebra with generators $P^{\mu}, J^{\mu \nu} \in L_0$ and $Q_{\alpha}, \bar{Q_{\dot{\alpha}}} \in L_{1}$ is a graded Lie algebra of grade $n=1$.
We are set up to ask and answer the following question. Which pairs of generators commute and which anticommute?
Case 1: Let us assume that $\ell,m \in L_0$. We have that $\langle \ell,m \rangle = - \langle m, \ell \rangle \in L_0$ and hence the product corresponds to a commutator in this case. Therefore, we have the commutation relations
\begin{equation}
[P,P], \quad [P,J], \quad [J,J]
\end{equation}
Case 2: Let us assume that $\ell \in L_0$ and $m \in L_1$. In this case, we have $\langle \ell,m \rangle = - \langle m, \ell \rangle \in L_1$ and as before we have commutation relations amongst the generators. Specifically, we have
\begin{equation}
[P,Q], \quad [P,\bar{Q}], \quad [J,Q], \quad [J,\bar{Q}]
\end{equation} | {
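Case 3 can be filled in from the same grading rule (a sketch, continuing the pattern above): assume $\ell, m \in L_1$. Then $\langle \ell,m \rangle = (-1)^{1\cdot 1+1} \langle m, \ell \rangle = +\langle m, \ell \rangle \in L_0$, so the product is symmetric and corresponds to an anticommutator. Specifically, we have
\begin{equation}
\{Q,Q\}, \quad \{Q,\bar{Q}\}, \quad \{\bar{Q},\bar{Q}\}
\end{equation}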
"domain": "physics.stackexchange",
"id": 66064,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, operators, supersymmetry, commutator",
"url": null
} |
java, algorithm, pathfinding, sliding-tile-puzzle, priority-queue
Title: Comparing puzzle solvers in Java I have this program that solves a \$(n^2 - 1)\$-puzzles for general \$n\$. I have three solvers:
BidirectionalBFSPathFinder
AStarPathFinder
DialAStarPathFinder
AStarPathFinder relies on java.util.PriorityQueue and DialAStarPathFinder uses so called Dial's heap which is a very natural choice in this setting: all priorities are non-negative integers and the set of all possible priorities is small (should be \$\{ 0, 1, 2, \dots, k \}\$, where \$k \approx 100\$ for \$n = 4\$).
DialHeap.java:
package net.coderodde.puzzle;
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;
/**
* This class implements Dial's heap.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Nov 16, 2015)
* @param <E> the type of the actual elements being stored.
*/
public class DialHeap<E> {
private static final int INITIAL_CAPACITY = 64;
private static final class DialHeapNode<E> {
E element;
int priority;
DialHeapNode<E> prev;
DialHeapNode<E> next;
DialHeapNode(E element, int priority) {
this.element = element;
this.priority = priority;
}
}
private final Map<E, DialHeapNode<E>> map = new HashMap<>();
private DialHeapNode<E>[] table = new DialHeapNode[INITIAL_CAPACITY];
private int size;
private int minimumPriority = Integer.MAX_VALUE;
public void add(E element, int priority) {
checkPriority(priority);
if (map.containsKey(element)) {
return;
}
ensureCapacity(priority);
DialHeapNode<E> newnode = new DialHeapNode<>(element, priority);
newnode.next = table[priority]; | {
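The bucket-array idea behind Dial's heap — an array indexed by integer priority plus a cached minimum — can be sketched in a few lines of Python (a toy version for illustration, not a translation of the Java class above):

```python
class BucketQueue:
    """Toy Dial's heap: priorities are small non-negative integers."""

    def __init__(self, max_priority):
        self.buckets = [[] for _ in range(max_priority + 1)]
        self.min_priority = max_priority + 1
        self.size = 0

    def add(self, element, priority):
        self.buckets[priority].append(element)
        self.min_priority = min(self.min_priority, priority)
        self.size += 1

    def extract_min(self):
        # Advance the cached minimum past any emptied buckets.
        while not self.buckets[self.min_priority]:
            self.min_priority += 1
        self.size -= 1
        return self.buckets[self.min_priority].pop()

q = BucketQueue(100)
q.add('a', 5); q.add('b', 2); q.add('c', 9)
print(q.extract_min())  # 'b'
```

A full implementation also supports removing an arbitrary element in O(1), which is why the Java version keeps doubly linked nodes per bucket and a map from elements to nodes.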
"domain": "codereview.stackexchange",
"id": 18446,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, pathfinding, sliding-tile-puzzle, priority-queue",
"url": null
} |
Here are my questions regarding this problem:
1) Are my sample space and sigma field correct for this experiment?
2) The initial question was a little vague and am unsure of what $P$ I'm looking for here, so I took a guess at that solution. I'm fairly certain my answer for that part is incorrect. From examples I've seen online, a specific event is typically given, and you're required to find the probability of that event. So, I tried to expand that to include any possible rolling combinations.
To clarify some notation just in case, $A_{111}$ is the event that you roll a 1 three times. Similarly, $A_{352}$ would be the event that you roll a 3, then a 5, then a 2.
Thank you!
I think you have a good grasp of the concept. However, things can always be written better.
For example, the sample space for three independent dice, rather than the suggestive $\{(1,1,1),...,(6,6,6)\}$(which is correct, so credit for that) can be written succinctly as $\Xi \times \Xi \times \Xi$, where $\Xi = \{1,2,3,4,5,6\}$. This manages to express every element in the sample space crisply, since we know what elements of cartesian products look like.
The sigma field is a $\sigma$-algebra of subsets of $\Omega$. That is, it is a set of subsets of $\Omega$ which is closed under countable union and complement. Ideally, the sigma field corresponding to a probability space is the set of events which can be "measured" relative to the experiment being performed, i.e. it is possible to assign a probability to each such event with respect to the experiment. What this specifically means is that, based on your experiment, your $\sigma$-field can possibly be a wise choice.
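Since the sample space is finite, all of this can be enumerated directly; a quick illustrative check in Python:

```python
from itertools import product

Xi = range(1, 7)
Omega = list(product(Xi, repeat=3))  # Xi x Xi x Xi, the three-roll sample space
print(len(Omega))                    # 216 = 6^3

# With equally likely outcomes, P(A) = |A| / |Omega| for any event A,
# e.g. the event that all three rolls show the same face (A_111, ..., A_666):
A = [w for w in Omega if w[0] == w[1] == w[2]]
print(len(A) / len(Omega))           # 6/216, i.e. 1/36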
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9693241956308277,
"lm_q1q2_score": 0.8035104497478023,
"lm_q2_score": 0.8289388146603364,
"openwebmath_perplexity": 186.03464242469272,
"openwebmath_score": 0.8870362639427185,
"tags": null,
"url": "https://math.stackexchange.com/questions/2601941/probability-space-of-rolling-a-fair-die-three-times"
} |
newtonian-mechanics, energy, work
The term $M|\vec V|^2/2$ doesn't play a role here because we're considering fixed inertial frames only and $\vec V$ is, much like this whole term, constant. Only $\vec P$ is changing as the object keeps on accelerating. | {
"domain": "physics.stackexchange",
"id": 6990,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, energy, work",
"url": null
} |
deep-learning, aws
Title: AWS : Workflow for deep learning I am using my company computer since I don't have another one or linux. Therefore, I am starting to use cloud resources to perform some tasks.
I have a very simple question: Since most cloud resources don't have a GUI, how can I perform simple checks e.g. visualizing the bounding boxes my algorithm has found on a picture?
How do people performing such tasks usually accomplish this? Is there an easy fix, or do I have to go through installing a GUI on the cloud instance? Or do you usually just download the results and view them locally? My simple suggestion is to install Python via Anaconda on the Linux machine from the command line, as Jupyter Notebook gets installed with the Anaconda package.
Then just give the command
jupyter notebook
Now we can connect to the Jupyter notebook on the virtual machine, using the token generated by the above command, from a browser on any other machine.
"domain": "datascience.stackexchange",
"id": 5971,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, aws",
"url": null
} |
organic-chemistry, molecular-structure
This can be considered as an experimental confirmation of predicted earlier hybridization of cyclobutane ($\psi(\ce{C-C}) = \mathrm{sp^{4.28}}$, $\psi(\ce{C-H}) = \mathrm{sp^{2.19}}$) [3]:
The difference between the bond overlaps of the planar and non-planar models is very small. [...] This clearly indicates that other interactions (even if small) must be dominant in causing non-planarity of the molecule. Non-bonded repulsions make a dominant contribution in determining conformations in acyclic and larger cyclic systems (e.g. ethane, cyclohexane), and they favour staggered arrangements of $\ce{CH}$-bonds. In a planar model of cyclobutane all hydrogen atoms are in eclipsed positions; this is a cause of additional strain and is energetically less favourable. Relief is obtained by bending the skeleton of the molecule, since the $\ce{CH}$ bonds then approach a staggered conformation.
Cyclobutane, just as cyclopropane, is a strained cycle with the bent "banana" bonds with intermediate character between $\sigma$ and $\pi$, which allows for mimicking the behavior of double-bond compounds, such as decolorization of bromine water [4, p. 195], even though protons in cyclobutane are less acidic than in cyclopropane:
Four-membered rings also exhibit angle strain, but much less than three-membered rings, and for that reason are less easily opened. Cyclobutane is more resistant than cyclopropane to bromination, and although it can be hydrogenated to butane, more strenuous conditions are required.
Bibliography | {
"domain": "chemistry.stackexchange",
"id": 8732,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, molecular-structure",
"url": null
} |
quantum-state, entanglement, information-theory, mutual-information, schmidt-decomposition
Title: Schmidt decomposition for tripartite system $ABC$ with vanishing mutual information between $A$ and $C$ Suppose I have a tripartite system $ABC$ in a pure state $|\psi_{ABC}\rangle$ with mutual information $I(A:C)=0$. This implies that the reduced density matrix $\rho_{AC}$ factorizes as $\rho_{AC} = \rho_A \otimes \rho_C$.
How do I show that this implies the existence of a Schmidt decomposition of $|\psi_{ABC}\rangle$ of the form
\begin{align}
|\psi_{ABC}\rangle = \sum_{kl} \sqrt{\lambda_k p_l} |\psi_k\rangle_A \otimes |\phi_{kl}\rangle_B \otimes |\varphi_l\rangle_C
\end{align}
where $|\psi_k\rangle_A$, $|\phi_{kl}\rangle_B$, $|\varphi_l\rangle_C$ are orthonormal states on Hilbert spaces $A$,$B$, and $C$ respectively? TL;DR: The key observation is that Schmidt basis on a subsystem consists of eigenvectors of the reduced state of that subsystem. Consequently, if the reduced state is a product state then its Schmidt basis can be chosen to consist of pure product states. | {
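A quick numerical check of the claimed structure (illustrative dimensions and weights, not from the question): build a state of the stated form and verify that $\rho_{AC} = \rho_A \otimes \rho_C$.

```python
import numpy as np

# Small dimensions chosen so that dB >= dA * dC, which lets the family
# phi_{kl} be orthonormal on B.
dA, dB, dC = 2, 4, 2
lam = np.array([0.7, 0.3])   # Schmidt weights lambda_k on A
p = np.array([0.6, 0.4])     # Schmidt weights p_l on C

# |psi> = sum_{kl} sqrt(lam_k p_l) |k>_A |phi_{kl}>_B |l>_C, with
# computational bases on A, C and phi_{kl} = |2k + l> on B.
psi = np.zeros((dA, dB, dC))
for k in range(dA):
    for l in range(dC):
        psi[k, 2 * k + l, l] = np.sqrt(lam[k] * p[l])

# Reduced state on AC: trace out B.
rho_AC = np.einsum('abc,xbz->acxz', psi, psi).reshape(dA * dC, dA * dC)
rho_A, rho_C = np.diag(lam), np.diag(p)
print(np.allclose(rho_AC, np.kron(rho_A, rho_C)))   # True
```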
"domain": "quantumcomputing.stackexchange",
"id": 3530,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-state, entanglement, information-theory, mutual-information, schmidt-decomposition",
"url": null
} |