anchor | positive | source |
|---|---|---|
Grade of Service Probability Function Python | Question: Consider the following typical probability scenario:
I defined this function to handle that scenario. I'm curious whether Python has a more efficient way to do this, or if this is the best approach:
from scipy.special import binom

def grade_of_service(n, p, c):
    prob = 0
    k = c + 1
    while k <= n:
        prob += binom(n, k)*(p**k)*((1-p)**(n-k))
        k += 1
    return prob
EDIT: Here's an example result from the author of this exercise: "If n = 100, p = 0.1, and c = 15, the probability of interest turns out to be 0.0399."
My result does round to this value, but given the round-off error / numerical precision involved, it doesn't seem to matter whether the upper limit is k = n or k = n+1.
Answer: Pure Python
You should use a for loop rather than a while loop.
You can use a generator expression to build all the terms.
You can use sum to get the sum.
from scipy.special import binom

def grade_of_service(n, p, c):
    return sum(binom(n, k)*(p**k)*((1-p)**(n-k)) for k in range(c+1, n+1))
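As an aside (not part of the original answer), the standard library alone can do this: math.comb (Python 3.8+) gives exact binomial coefficients, and the result matches the author's check value of roughly 0.0399 for n = 100, p = 0.1, c = 15:

```python
from math import comb

def grade_of_service(n, p, c):
    # P(X > c) for X ~ Binomial(n, p), with exact integer binomial coefficients
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1, n + 1))

print(grade_of_service(100, 0.1, 15))  # ~0.0399
```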
Numpy and friends
Use numpy.arange rather than range.
Write out the expression the same as above, just not in a comprehension.
Change sum to numpy.sum.
import numpy
from scipy.special import binom

def grade_of_service(n, p, c):
    k = numpy.arange(c+1, n+1)
    return numpy.sum(binom(n, k)*(p**k)*((1-p)**(n-k)))
This approach has a problem once the numbers exceed a certain size, as numpy works with fixed-size (finite) numeric types rather than Python's arbitrary-precision integers. | {
"domain": "codereview.stackexchange",
"id": 31047,
"tags": "python, python-3.x, mathematics, statistics, scipy"
} |
Kinetic Energy in central force motion | Question: My question is: how can I find the velocity of a particle on an elliptical trajectory if the particle moves in the potential
\begin{equation}
U(r) = -\frac{k}{r}
\end{equation}
I have a particle of mass $m$ moving under the action of $F(r) = -k/r^2$ with $k > 0$, in obvious correspondence with the gravitational force. At a point $P$ at distance $a$ from the origin the speed is $v_0 = \sqrt{\frac{k}{2ma}}$, and the velocity at this instant is perpendicular to the position vector at $P$.
I have found the effective potential
\begin{equation}
U_{eff} = \frac{ka}{4r^2} - \frac{k}{r}
\end{equation}
Here I used the fact that the angular momentum of this system is a constant of the motion. Since the force is independent of time, the total energy is also a constant of the motion, so we can find $E$ using
\begin{equation}
E = \frac{mv_0^2}{2} + U_{eff}(a)
\end{equation}
But then my problem is that I get the same kinetic energy both for $r_{\max} = a$ and for $r_{\min} = a/3$, the apsidal distances of this orbit. Is the assumption that $U_{eff}(r_{\max}) = U_{eff}(r_{\min})$ wrong? I am using it because of the plot of $U_{eff}$: there are two values satisfying $U_{eff}(r) = -3k/4a = U_{eff}(a)$.
What is the kinetic energy at $r_{\min}$?
Answer: You must be mistaken in your calculations.
Let $R$ be the maximum distance from $O$ (according to your diagram), and let $v_0$ be the velocity at $r = R$ ($r$ is the radial distance of a general point $P$ from $O$).
Since energy is conserved, we have $\frac{-k}{R} + \frac{1}{2}m{v_0}^2 = \frac{-k}{r} + K(r)$, where $K(r)$ is the kinetic energy at radial distance $r$.
From this we obtain $K(r) = k(\frac{1}{r} - \frac{1}{R}) + \frac{1}{2}m{v_0^2}$.
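To address the $U_{eff}$ confusion directly (an added remark, using the question's values $v_0^2 = k/2ma$ with $R = a$, so that $L = mv_0R$): the effective potential is indeed equal at the two apsides, but at an apsis the radial velocity vanishes, so the kinetic energy there is purely angular and does depend on $r$:
\begin{equation}
K_{\text{apsis}} = \frac{L^2}{2mr^2} = \frac{kR}{4r^2},
\qquad
K(R) = \frac{k}{4R} = \frac{1}{2}mv_0^2,
\qquad
K(R/3) = \frac{9k}{4R} = \frac{2k}{R} + \frac{1}{2}mv_0^2.
\end{equation}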
Clearly, $K(R) = \frac{1}{2}m{v_0^2} \ne K(R/3) = \frac{2k}{R} + \frac{1}{2}m{v_0^2}$ | {
"domain": "physics.stackexchange",
"id": 34663,
"tags": "newtonian-mechanics, classical-mechanics, orbital-motion, potential-energy"
} |
Mini HTML document builder in Java | Question: (See the next iteration.)
I have this tiny library for building HTML pages in Java code:
HtmlViewComponent.java
package net.coderodde.html.view;

/**
 * This abstract class defines the API for logical HTML view components, which
 * may consist of other view components.
 *
 * @author Rodion "rodde" Efremov
 * @version 1.6 (Mar 18, 2016)
 */
public abstract class HtmlViewComponent {

    @Override
    public abstract String toString();
}
HtmlViewContainer.java
package net.coderodde.html.view;

import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

/**
 * This class defines the API for HTML elements that may contain other HTML
 * elements.
 *
 * @author Rodion "rodde" Efremov
 * @version 1.6 (Mar 18, 2016)
 */
public abstract class HtmlViewContainer extends HtmlViewComponent {

    protected final List<HtmlViewComponent> components = new ArrayList<>();

    public void addHtmlViewComponent(HtmlViewComponent component) {
        Objects.requireNonNull(component, "The input component is null.");
        components.add(component);
    }

    public boolean containsHtmlViewComponent(HtmlViewComponent component) {
        Objects.requireNonNull(component, "The input component is null.");
        return components.contains(component);
    }

    public void removeHtmlViewComponent(HtmlViewComponent component) {
        Objects.requireNonNull(component, "The input component is null.");
        components.remove(component);
    }

    @Override
    public abstract String toString();
}
HtmlPage.java
package net.coderodde.html.view;

/**
 * This class is the top-level container of view components.
 *
 * @author Rodion "rodde" Efremov
 * @version 1.6 (Mar 18, 2016)
 */
public class HtmlPage extends HtmlViewContainer {

    private final String title;

    public HtmlPage(String title) {
        this.title = title != null ? title : "";
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder().append("<!DOCTYPE html>\n")
                                              .append("<html>\n")
                                              .append("<head>\n")
                                              .append("<title>")
                                              .append(title)
                                              .append("</title>\n")
                                              .append("</head>\n")
                                              .append("<body>\n");
        components.stream().forEach((component) -> {
            sb.append(component.toString());
        });
        return sb.append("</body>\n")
                 .append("</html>").toString();
    }
}
DivComponent.java
package net.coderodde.html.view.support;

import net.coderodde.html.view.HtmlViewContainer;

/**
 * This class implements a {@code div} component.
 *
 * @author Rodion "rodde" Efremov
 * @version 1.6 (Mar 18, 2016)
 */
public class DivComponent extends HtmlViewContainer {

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder("<div>\n");
        components.stream().forEach((component) -> {
            sb.append(component.toString());
        });
        return sb.append("</div>\n").toString();
    }
}
TableComponent.java
package net.coderodde.html.view.support;

import java.util.ArrayList;
import java.util.List;
import net.coderodde.html.view.HtmlViewComponent;

/**
 * This class represents the table component.
 *
 * @author Rodion "rodde" Efremov
 * @version 1.6 (Mar 18, 2016)
 */
public class TableComponent extends HtmlViewComponent {

    private final int columns;

    private final List<List<? extends HtmlViewComponent>> table = new ArrayList<>();

    public TableComponent(int columns) {
        checkColumnNumber(columns);
        this.columns = columns;
    }

    public void addRow(List<HtmlViewComponent> row) {
        while (row.size() > columns) {
            row.remove(row.size() - 1);
        }
        table.add(row);
    }

    private void checkColumnNumber(int columns) {
        if (columns <= 0) {
            throw new IllegalArgumentException(
                    "The number of columns must be a positive integer. " +
                    "Received " + columns);
        }
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder().append("<table>\n");

        for (List<? extends HtmlViewComponent> row : table) {
            sb.append("<tr>");

            for (HtmlViewComponent cell : row) {
                sb.append("<td>");
                sb.append(cell.toString());
                sb.append("</td>");
            }

            for (int i = row.size(); i < columns; ++i) {
                sb.append("<td></td>");
            }

            sb.append("</tr>\n");
        }

        return sb.append("</table>\n").toString();
    }
}
TextComponent.java
package net.coderodde.html.view.support;

import net.coderodde.html.view.HtmlViewComponent;

/**
 * This class represents a simple text.
 *
 * @author Rodion "rodde" Efremov
 * @version 1.6 (Mar 18, 2016)
 */
public class TextComponent extends HtmlViewComponent {

    private String text;

    public TextComponent() {
        this("");
    }

    public TextComponent(String text) {
        setText(text);
    }

    public void setText(String text) {
        this.text = text != null ? text : "";
    }

    public String getText() {
        return text;
    }

    @Override
    public String toString() {
        return text;
    }
}
Demo.java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import net.coderodde.html.view.HtmlPage;
import net.coderodde.html.view.HtmlViewComponent;
import net.coderodde.html.view.support.DivComponent;
import net.coderodde.html.view.support.TableComponent;
import net.coderodde.html.view.support.TextComponent;

public class Demo {

    public static void main(String[] args) {
        HtmlPage page = new HtmlPage("FUNKEEH PAGE");
        DivComponent div1 = new DivComponent();
        DivComponent div2 = new DivComponent();
        div1.addHtmlViewComponent(new TextComponent("Hey yo!\n"));

        TableComponent table = new TableComponent(3);

        // Arrays.asList is immutable, so copy to a mutable array list.
        List<HtmlViewComponent> row1 = new ArrayList<>(
                Arrays.asList(new TextComponent("Row 1, column 1"),
                              new TextComponent("Row 1, column 2"),
                              new TextComponent("Row 1, column 3"),
                              new TextComponent("FAIL")));

        List<HtmlViewComponent> row2 = new ArrayList<>(
                Arrays.asList(new TextComponent("Row 2, column 1"),
                              new TextComponent("Row 2, column 2")));

        table.addRow(row1);
        table.addRow(row2);
        div2.addHtmlViewComponent(table);
        div2.addHtmlViewComponent(new TextComponent("Bye, bye!\n"));

        page.addHtmlViewComponent(div1);
        page.addHtmlViewComponent(div2);

        System.out.println(page);
    }
}
I don't think it is of any use, but I want to hear comments on the hierarchy aspect. Please, tell me anything that comes to mind.
Answer: Since HtmlViewComponent has no implementation, only a method declaration, it would be more natural to use an interface instead.
In addRow, when there are more columns than allowed in the table, instead of mutating the input list by deleting elements, it would be better to call subList. That will not modify the input list and is more efficient too.
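A minimal sketch of that suggestion (hypothetical, standalone names; note that subList returns a view backed by the caller's list, so it is wrapped in new ArrayList<>() here so the table owns its rows):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SubListDemo {

    static final int COLUMNS = 3;
    static final List<List<String>> table = new ArrayList<>();

    // Keeps at most COLUMNS cells without mutating the caller's list.
    static void addRow(List<String> row) {
        table.add(row.size() > COLUMNS
                ? new ArrayList<>(row.subList(0, COLUMNS))
                : row);
    }

    public static void main(String[] args) {
        List<String> row = Arrays.asList("a", "b", "c", "extra");
        addRow(row);
        System.out.println(table.get(0));  // [a, b, c]
        System.out.println(row.size());    // 4 -- the input list is untouched
    }
}
```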
You don't need to call .toString() in code like sb.append(obj.toString()), it gets called automatically.
toString is not really designed for display purposes. It's also strange to see abstract String toString declarations. Since your methods are about HTML formatting, toHtml would be a better name. | {
"domain": "codereview.stackexchange",
"id": 19074,
"tags": "java, html, library"
} |
Could a rocky rogue planet get trapped in the orbit between Earth and Mars? | Question: I know it's extremely unlikely for a rogue planet to pass by there, and I know the size of rocky planets can vary a lot (IIRC there is a rocky planet five times the size of Earth), but in any hypothetical case, could a rogue planet be trapped in any part of the orbit between Earth and Mars? If so, what possible effects could it have on Earth and Mars? In which cases would it be expelled from the solar system?
Answer: It is exceptionally unlikely.
Imagine that there was a planet already there. What could cause it to suddenly escape from the solar system? It would have to be a massive event, such as a second passing rogue planet catapulting it out of orbit.
Running time backwards, for a rogue planet to be captured and neatly end up orbiting between Mars and Earth would require an unbelievably unlikely sequence of events, such as
two rogue planets happen to pass between Earth and Mars at the same time, and the interaction between the two results in a transfer of energy and momentum which (by lucky chance) leaves one of the planets in a nearly circular orbit, while the other escapes.
Given how big space is and, in consequence, how rarely a rogue planet enters the inner solar system, the above sequence of events is practically impossible.
In this unbelievable scenario, the effects on Earth and Mars would depend on how closely the new planet passes. If it comes close enough to have significant effects on either planet, we are in a great deal of danger. | {
"domain": "astronomy.stackexchange",
"id": 3477,
"tags": "solar-system, earth, mars, rogue-planet"
} |
Shift property of the Short-term Fourier transform (STFT) | Question: The properties of the STFT (short-term Fourier transform) say that it preserves a time shift up to a modulation. Does this mean it is sensitive to time shifts of the signal?
Answer: Of course the STFT changes when the signal is shifted. According to the definition we have
$$\text{STFT}\{x(t)\}=X(\tau,\omega)=\int_{-\infty}^{\infty}x(t)w(t-\tau)e^{-j\omega t}dt\tag{1}$$
If we define a shifted signal $y(t)=x(t-T)$, the corresponding STFT becomes
$$\begin{align}\text{STFT}\{y(t)\}=Y(\tau,\omega)&=\int_{-\infty}^{\infty}x(t-T)w(t-\tau)e^{-j\omega t}dt\\&=\int_{-\infty}^{\infty}x(u)w(u-(\tau-T))e^{-j\omega u}e^{-j\omega T}du\\&=e^{-j\omega T}X(\tau - T,\omega)\tag{2}\end{align}$$
From $(2)$ you see that time shifting results in a phase factor and a shift in the STFT. The magnitudes of the STFTs are related by just a time shift, which is of course identical to the time shift of the signal:
$$|\text{STFT}\{x(t-T)\}|=|Y(\tau,\omega)|=|X(\tau-T,\omega)|\tag{3}$$ | {
"domain": "dsp.stackexchange",
"id": 4161,
"tags": "stft, time-frequency"
} |
Mock customer DAO using abstract factory pattern in React | Question: I wanted to implement the abstract factory pattern in TypeScript with React, for a SharePoint Framework web part.
So I tried to translate ideas from this Java tutorial. I don't have method implementations yet, but so far it compiles, and I want to know if I am doing it right:
component .tsx file
import * as React from 'react';
import styles from './TypescriptDesignPatterns02AbstractFactory.module.scss';
import { ITypescriptDesignPatterns02AbstractFactoryProps } from './ITypescriptDesignPatterns02AbstractFactoryProps';
import { escape } from '@microsoft/sp-lodash-subset';
import { ITypescriptDesignPatterns02AbstractFactoryState } from './ITypescriptDesignPatterns02AbstractFactoryState';
import SharepointListDAOFactory from './Factory/SharepointListDAOFactory';
import DAOFactory from './Factory/DAOFactory';
import ICustomerDAO from './Factory/ICustomerDAO';
import DataSources from './Factory/DatasourcesEnum';
import Customer from './Factory/Customer';

export default class TypescriptDesignPatterns02AbstractFactory extends React.Component<ITypescriptDesignPatterns02AbstractFactoryProps, ITypescriptDesignPatterns02AbstractFactoryState> {

    constructor(props: ITypescriptDesignPatterns02AbstractFactoryProps, state: ITypescriptDesignPatterns02AbstractFactoryState) {
        super(props);
        this.setInitialState();
    }

    public render(): React.ReactElement<ITypescriptDesignPatterns02AbstractFactoryProps> {
        switch (this.props.datasource) {
            case "Sharepoint":
                let sharepointlistdaofactory: SharepointListDAOFactory = DAOFactory.getDAOFactory(DataSources.SharepointList);
                let customerSharepointDAO: ICustomerDAO = sharepointlistdaofactory.getCustomerDAO();
                this.state = {
                    items: customerSharepointDAO.listCustomers()
                };
                break;
            case "JSON":
                let jsondaofactory: SharepointListDAOFactory = DAOFactory.getDAOFactory(DataSources.JsonData);
                let customerJsonDAO: ICustomerDAO = jsondaofactory.getCustomerDAO();
                this.state = {
                    items: customerJsonDAO.listCustomers()
                };
                break;
        }
        return null;
    }

    public setInitialState(): void {
        this.state = {
            items: []
        };
    }
}
DAOFactory.ts
import ICustomerDAO from "./ICustomerDAO";
import SharepointListDAOFactory from "./SharepointListDAOFactory";
import JsonDAOFactory from "./JsonDAOFactory";
import DataSources from "./DatasourcesEnum";

abstract class DAOFactory {

    public static SHAREPOINTLIST: number = 1;
    public static REMOTEJSON: number = 2;

    public abstract getCustomerDAO(): ICustomerDAO;

    public static getDAOFactory(whichFactory: DataSources): DAOFactory {
        switch (whichFactory) {
            case DataSources.SharepointList:
                return new SharepointListDAOFactory();
            case DataSources.JsonData:
                return new JsonDAOFactory();
            default:
                return null;
        }
    }
}

export default DAOFactory;
ICustomerDAO.ts
import Customer from "./Customer";

interface ICustomerDAO {
    insertCustomer(): number;
    deleteCustomer(): boolean;
    findCustomer(): Customer;
    updateCustomer(): boolean;
    listCustomers(): Customer[];
}

export default ICustomerDAO;
JsonCustomerDAO.ts
import ICustomerDAO from "./ICustomerDAO";
import Customer from "./Customer";

class JsonCustomerDAO implements ICustomerDAO {

    public insertCustomer(): number {
        return 1;
    }

    public deleteCustomer(): boolean {
        return true;
    }

    public findCustomer(): Customer {
        return new Customer();
    }

    public updateCustomer(): boolean {
        return true;
    }

    public listCustomers(): Customer[] {
        let c1 = new Customer();
        let c2 = new Customer();
        let list: Array<Customer> = [c1, c2];
        return list;
    }
}

export default JsonCustomerDAO;
SharepointCustomerDAO.ts
import ICustomerDAO from "./ICustomerDAO";
import Customer from "./Customer";

class SharepointCustomerDao implements ICustomerDAO {

    public insertCustomer(): number {
        return 1;
    }

    public deleteCustomer(): boolean {
        return true;
    }

    public findCustomer(): Customer {
        return new Customer();
    }

    public updateCustomer(): boolean {
        return true;
    }

    public listCustomers(): Customer[] {
        let c1 = new Customer();
        let c2 = new Customer();
        let list: Array<Customer> = [c1, c2];
        return list;
    }
}

export default SharepointCustomerDao;
SharepointListDaoFactory.ts
import DAOFactory from "./DAOFactory";
import ICustomerDAO from "./ICustomerDAO";
import SharepointCustomerDao from "./SharepointCustomerDAO";

class SharepointListDAOFactory extends DAOFactory {
    getCustomerDAO(): ICustomerDAO {
        return new SharepointCustomerDao();
    }
}

export default SharepointListDAOFactory;
Json DAO Factory
import DAOFactory from "./DAOFactory";
import ICustomerDAO from "./ICustomerDAO";
import JsonCustomerDAO from "./JsonCustomerDAO";

class JsonDAOFactory extends DAOFactory {
    getCustomerDAO(): ICustomerDAO {
        return new JsonCustomerDAO();
    }
}

export default JsonDAOFactory;
Answer: render() should know nothing about the source of the data. It only has to know about ICustomerDAO.listCustomers().
If props are known in the constructor, the DAO objects must be set there too.
I'm not that into TypeScript, but it'd be something like the following (I've put in commented code to show how this would evolve when you need more data access objects):
import * as React from 'react';
import styles from './TypescriptDesignPatterns02AbstractFactory.module.scss';
import { ITypescriptDesignPatterns02AbstractFactoryProps } from './ITypescriptDesignPatterns02AbstractFactoryProps';
import { escape } from '@microsoft/sp-lodash-subset';
import { ITypescriptDesignPatterns02AbstractFactoryState } from './ITypescriptDesignPatterns02AbstractFactoryState';
import SharepointListDAOFactory from './Factory/SharepointListDAOFactory';
import DAOFactory from './Factory/DAOFactory';
import ICustomerDAO from './Factory/ICustomerDAO';
import DataSources from './Factory/DatasourcesEnum';
import Customer from './Factory/Customer';

export default class TypescriptDesignPatterns02AbstractFactory extends React.Component<ITypescriptDesignPatterns02AbstractFactoryProps, ITypescriptDesignPatterns02AbstractFactoryState> {

    private customerDao: ICustomerDAO;
    // private userDao: IUserDAO;

    constructor(props: ITypescriptDesignPatterns02AbstractFactoryProps, state: ITypescriptDesignPatterns02AbstractFactoryState) {
        super(props);
        this.setInitialState();
        this.setDaos(props.datasource); // TODO: Inject DAO with InversifyJS or similar tools
    }

    public render(): React.ReactElement<ITypescriptDesignPatterns02AbstractFactoryProps> {
        this.state = {
            items: this.customerDao.listCustomers(),
            // users: this.userDao.listUsers()
        };
        return null;
    }

    // TODO: If you are not using this method anywhere else, make it private
    public setInitialState(): void {
        this.state = {
            items: []
        };
    }

    private setDaos(datasource): void {
        // With only two data sources, you don't need a switch statement.
        const data: any = datasource == "Sharepoint" ? DataSources.SharepointList : DataSources.JsonData;
        this.customerDao = DAOFactory.getDAOFactory(data).getCustomerDAO();
        // this.userDao = DAOFactory.getDAOFactory(DataSources.SharepointList).getUserDAO();
    }
} | {
"domain": "codereview.stackexchange",
"id": 28831,
"tags": "react.js, typescript, sharepoint, abstract-factory"
} |
If we use yellow light on a blue paper, what will the paper look like? | Question:
If we use yellow light on a blue paper, what will the paper look like?
Basically I have two different conflicting ideas in my head:
If the light has only one specific wavelength then only this specific wavelength can be reflected at all. So if I use light of any color on the object, the object will always appear in that specific color.
But on the other hand a colored object has that color because it absorbs light of any other wavelength and only reflects the part which gives it the respective color. This means that if I use any light on an object that has a different color, the object must appear black.
Probably neither of those is true and it's something different? For example, I found an interesting image here which shows that red light makes a green paprika appear red. Can someone explain what exactly is happening?
Answer: If by "yellow light", you mean a light with wavelength between 560 and 590 nm, and by "blue paper", you mean paper that reflects light only if its wavelength is between 450 and 490 nm, then yes, if the only illumination for a piece of blue paper is yellow light, it will appear black. But "yellow light" is generally light that has a mixture of wavelengths such that the average wavelength is between 560 and 590 nm, not light such that every photon has a wavelength in that range. That's why red and green light together looks yellow: when our eyes detect both red and green light, our brain perceives it as yellow.
Similarly, most "blue" paper does not reflect just blue light. Rather, it reflects light most strongly in the "blue" range of wavelengths, and/or reflects light whose wavelengths average out to be in that range.
So when mostly-yellow-but-also-has-a-little-bit-of-other-wavelengths light hits mostly-reflects-blue-but-also-weakly-reflects-other-colors paper, some light will be reflected. What color it looks like will depend on the exact composition of the light and the exact reflective properties of the paper. In the example you gave, the pepper reflects a significant amount of red light, but still looks darker than the red parts of the playing card. In green light, the pepper is bright green, while the red parts of the playing card are almost black. This implies that the pepper reflects green light more than it reflects red light, but still reflects some red light, while the red parts of the playing card reflect very little green light. This makes sense: the pepper is a natural object, while the playing card has an artificial dye specifically designed to be a particular color. | {
"domain": "physics.stackexchange",
"id": 55568,
"tags": "optics, visible-light"
} |
How is the $d=2$ Weyl invariance different from the generic Conformal $SO(2,d)$ invariance? | Question: First of all, I read this answer and I understand that Weyl transformations are transformations of the metric and Conformal transformations are transformations of the coordinates that "Weyl-transform" the metric, but from what I see Weyl and Conformal invariance are pretty much the same thing: doing something that only changes the metric by a positive rescaling.
In my String Theory course, we talked about the $d=2$ worldsheet of the string being Weyl-invariant, and how this invariance (particularly in $d=2$) is important.
Later on, we moved to AdS/CFT, and while studying the properties of a CFT we mentioned that it is conformally invariant (duh), but since it generally lives in $d>2$ it won't be "as nice as" the $d=2$ invariance of string theory. Why is that? What changed?
Answer: Conformal invariance in two dimensions is special because the symmetry group is much, much larger in D = 2. In higher dimensions the conformal group is SO(D, 2), which is finite-dimensional. But in D = 2 you find that conformal transformations are just holomorphic and anti-holomorphic coordinate transformations, which gives you an infinite set of local coordinate transformations. This places many more constraints on D = 2 conformal theories.
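Concretely (an added illustration, using standard CFT conventions): the infinitesimal holomorphic maps $z \to z - \epsilon_n z^{n+1}$ are generated by the operators
\begin{equation}
\ell_n = -z^{n+1}\partial_z, \qquad [\ell_m, \ell_n] = (m-n)\,\ell_{m+n}, \qquad n \in \mathbb{Z},
\end{equation}
which form the infinite-dimensional Witt algebra (the Virasoro algebra, after its central extension), one copy for $z$ and one for $\bar z$; a finite-dimensional symmetry group has no room for infinitely many independent generators like these.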
The subject of 2 dimensional conformal field theories is extremely rich and complex, I recommend Di Francesco's book on CFTs as the subject is much too large to explain here.
Basically, the symmetry properties of 2D CFTs mean that your observables (correlation functions, for example) must obey a much larger set of relations (because of the larger symmetry group). This means that, given some initial data (like the scaling dimensions of your fields/operators), one can fix an extremely large set of the observables without having to do essentially any calculations. | {
"domain": "physics.stackexchange",
"id": 79864,
"tags": "symmetry, string-theory, conformal-field-theory"
} |
Manipulate original DOM in React.js | Question: As an introduction, I would like to have my inputs validated with the built-in HTML5 validation attributes, e.g. using pattern for a text input. This is my sample code that implements that.
Is it considered best practice, while handling an event, to access event.target (which is the real DOM) and do something with it, rather than dealing with React's virtual DOM?
Please let me know if there are more improvements that can be made. I'm using React.js 0.14.7. Thanks.
NodeList.prototype.forEach = Array.prototype.forEach;

class InputSet extends React.Component {
    render() {
        return (
            <div>
                <input type="text" defaultValue={this.props.firstName} data-field="firstName" required/>
                <input type="text" defaultValue={this.props.lastName} data-field="lastName" required/>
            </div>
        );
    }
}

class FormWithValidation extends React.Component {
    handleSubmit(event) {
        event.preventDefault();
        console.log(event.target);
        let childNodes = Array.prototype.slice.call(event.target.childNodes);
        childNodes = childNodes.filter((elm) => elm.tagName === 'DIV');
        const values = childNodes.map((div) => {
            const value = {};
            div.childNodes.forEach((input) => {
                const field = input.getAttribute("data-field");
                value[field] = input.value;
            });
            return value;
        });
        // Do something with values.
        console.log(values);
    }

    render() {
        const inputSets = [
            {firstName: "Donald", lastName: "Duck"},
            {firstName: "Bill", lastName: "Williams"}
        ];
        return (
            <form action="" onSubmit={this.handleSubmit} >
                {inputSets.map((obj, index) => {
                    return <InputSet {...obj} key={index} />;
                })}
                <input type="submit" value="Save"/>
            </form>
        );
    }
}

ReactDOM.render(<FormWithValidation/>, document.getElementById('container'))
Answer: It is fine to read event.target in React. In many cases (e.g. finding out which item was clicked) it's the only way to get the data your code needs.
It is generally considered best practice not to change the DOM directly, but to let React have a monopoly on DOM changes -> you're good here: you do not change the DOM, you only read from it.
Two things that could be improved:
You store the field name in data-field, and later on read it to find out which div is which field. This is not such clean React code.
You read the entire form from event.target, filtering out all divs.
It may work, and it is efficient: React is only triggered after submit is clicked.
But the code is very dependent on the structure of your form in the real DOM. If there is one div too many in your real DOM, your code breaks down. And to debug it, one also needs to understand the tricky real-DOM traversal methods (e.g. that a DOM node is not the same as a DOM element).
I would advise to refactor to a react-only solution, making the following changes:
Give your FormWithValidation component state, which stores an array of inputset values (firstname and lastname)
Pass down a this.valueUpdate method, which individual fields can invoke to update state. The method would receive an index (of the input set), a fieldname, and a new value.
Make a single component for a single input field, which can receive the fieldname as a prop.
Give the input field an onBlur handler, which calls this.props.valueUpdate.
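A framework-free sketch of the state-update step (hypothetical names and state shape; the React wiring with props and onBlur is omitted):

```javascript
// Pure update helper: returns a new array of input-set values instead of
// mutating state, which is what React expects from setState-style updates.
function valueUpdate(state, index, fieldName, newValue) {
    return state.map((set, i) =>
        i === index ? Object.assign({}, set, {[fieldName]: newValue}) : set
    );
}

const before = [{firstName: "Donald", lastName: "Duck"}];
const after = valueUpdate(before, 0, "firstName", "Daisy");
console.log(after[0].firstName);   // "Daisy"
console.log(before[0].firstName);  // "Donald" -- original state untouched
```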
This refactoring has two disadvantages:
it is a lot more code for these simple tasks
React will do a lot more cycles: every time an input field changes, React will update state and re-render the form (efficiently, applying only diffs, of course)
The big advantage is that all your code and all your logic lives inside the react structure and lifecycle. Which will make your code easier to maintain. | {
"domain": "codereview.stackexchange",
"id": 20016,
"tags": "javascript, dom, react.js"
} |
Energy and Speed of Electrons in a Circuit | Question:
I really wonder about the relationship between potential and kinetic energy in a circuit. At the positive terminal of the battery, there will be high potential energy. The difference in electric potential between the two terminals will cause the electrons to move from the positive to the negative terminal. The electrons thus move with increasing speed as their potential energy decreases, being converted to kinetic energy. When they pass the bulb, the decrease in potential energy will be converted to heat, leading to even less kinetic energy.
But this seems wrong to me, since as the electrons move closer to the positive terminal, the force from the positive terminal will increase their speed.
However, I have learned that the current in a series circuit is constant, so the drift speed of electrons in this circuit is the same everywhere, contradicting what I previously said.
I am totally confused with different concepts. Could you explain what exactly happens to the electrons and the energy of them in the circuit?
Answer: As a result of the conservation of charge, the current is always constant in a circuit (continuity equation).
We know by Ohm’s law that the current density is proportional to the applied electric field
$$\mathbf j=\sigma \mathbf E$$ where $\sigma$ is called the conductivity. Since we apply a constant voltage, the electric field is the same across the circuit. (Think of it this way: if the electron is nearer to the positive terminal, it feels a greater attraction to it but also a lesser repulsion from the negative one.) But if the conductivity changes, the electric field need not be the same, as we will see later. Due to scattering off atoms, the electrons are slowed down, which makes their average speed constant (this is what makes up the resistance) and explains the proportionality in Ohm's law. From this we get that the drift velocity is proportional to the applied electric field.
We can write the current through the circuit as
$$I=nev_dA$$ where $n$ is the charge carrier density per volume, $e$ the elementary charge, $v_d$ the drift velocity and $A$ the cross-sectional area. As the current is the same everywhere in the circuit, the drift velocity can indeed differ along it, through changes in the cross-sectional area $A$ and the charge carrier density $n$. For example, if you halve the cross-sectional area, the drift velocity doubles. You can think of the analogy of a water circuit: the analogue of a resistor (e.g. the bulb) is a narrowing. If the area is halved in the narrowing, the water flows twice as fast there, but after the narrowing it flows with the same velocity as before it.
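To put numbers on $v_d = I/(neA)$ (an added illustration; the textbook copper value $n \approx 8.5\times10^{28}\,\mathrm{m^{-3}}$ and the wire dimensions are assumptions, not from the original answer):

```python
def drift_velocity(I, n, A, e=1.602e-19):
    """Drift velocity v_d = I / (n e A), rearranged from I = n e v_d A."""
    return I / (n * e * A)

# 1 A through a 1 mm^2 copper wire: the drift is only a fraction of a mm/s.
v1 = drift_velocity(1.0, 8.5e28, 1e-6)    # ~7e-5 m/s
# Halving the cross-sectional area doubles the drift velocity.
v2 = drift_velocity(1.0, 8.5e28, 0.5e-6)
```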
As mentioned above, the electric field need not stay the same across the circuit. If we revisit Ohm's law (using absolute values instead of vectors) and use the definition $j=\frac I A$, we get
$$\frac I A =\sigma E.$$ For simplicity let's assume the area $A$ is the same for the wires and the bulb. Since the bulb is a resistor, it has a smaller conductivity $\sigma$. (Note that the conductivity is a material constant and does not depend on the area or the length.) But this means the electric field inside the bulb has to be greater, because the current is the same across the circuit. This complies with Kirchhoff's voltage law, which states that the voltages around a loop sum to zero. Usually we treat the wires as perfect conductors with infinite conductivity and assume that the resistance can be modeled as one discrete element; hence Ohm's law gives us that all of the battery's voltage drops at the bulb. But Ohm's law gives us more: if there are two or more resistors in a circuit, we can calculate the voltage drops from their individual conductivities (the smaller electric field is at the smaller resistor).
You are right, the electrons lose kinetic energy and convert it to heat in the resistor, but at the same time the bigger electric field accelerates them again and compensates for their energy loss by converting potential energy back into kinetic energy. From an energy point of view, you have a constant voltage source making sure there is always the same potential difference. This means if you lose heat at the bulb, the voltage source will compensate for that.
There is another way to look at how energy gets transferred from the battery to the resistor, which comes in handy when considering high frequencies. (I guess it is also more correct.) We can look at the electric and magnetic fields created by the current. The Poynting vector $\mathbf S$ measures the energy flux and is defined as $$\mathbf S = \mathbf E \times \mathbf H.$$ If you do this for your circuit, you will see that the Poynting vector points from the battery to the lamp, as in the picture below from Wikipedia. That is to say, the energy flows from the battery to the bulb. Look also at this great post by @wbeaty.
Source: https://en.m.wikipedia.org/wiki/File:Poynting_vectors_of_DC_circuit.svg | {
"domain": "physics.stackexchange",
"id": 51223,
"tags": "electrostatics, electric-circuits, potential-energy"
} |
Looking for a reference use of PoseWithCovariance: any pkg using it? | Question:
Hi all,
PoseWithCovariance has a 6x6 covariance matrix which, according to REP 103, corresponds to these variables in this order: (x, y, z, rotation about X axis, rotation about Y axis, rotation about Z axis), with rotations about fixed axes, unlike the various "common" yaw/pitch/roll conventions.
I'm working on a package for covariance transformations (pose_cov_ops), so it would be great if I could compare my numerical results with any other existing ROS packages that output PoseWithCovariance at present. Checks would go in the package unit tests.
More explicitly: I'm looking for some "stable" ROS package which computes covariances in 6D, so I can peek at its source code and make sure I'm following the right convention.
Any recommendation?
If there are none, I'll assume the 3x3 rotation matrix is as follows:
R = R_x(roll) * R_y(pitch) * R_z(yaw)
R = R_z(yaw) * R_y(pitch) * R_x(roll)
where the matrices are in the same order as the yaw-pitch-roll convention.
PS: wouldn't it have made more sense to keep a 7x7 covariance for the unit quaternion form? In general, equations for uncertainty propagation in this form are far simpler than for Euler angles. Is there room for a PoseWith7x7CovarianceStamped yet?
Originally posted by Jose Luis Blanco on ROS Answers with karma: 288 on 2012-03-30
Post score: 6
Original comments
Comment by Jose Luis Blanco on 2012-04-11:
Just for the records: the numerical values of yaw/pitch/roll in the dynamic-axes (rotating) convention are exactly the same as those in the roll/pitch/yaw fixed-axes convention. So a conversion of covariance matrices between both formats becomes just a permutation. Hope it may help someone else.
Answer:
AFAIK there is no ROS package which makes the transform between two poses with uncertainty (perhaps http://ros.org/wiki/bfl, but I don't think so). Because of this, the package pose_cov_ops seems unique. I believe that the MRPT ([[mrpt-ros-pkg]]), on which the pose_cov_ops package is based, looks very powerful for handling poses with uncertainty, and ROS should take advantage of this fact.
I think that this package should be integrated in the generic http://ros.org/wiki/tf2 package as a plugin, in a similar way that the kdl package was integrated (see https://kforge.ros.org/geometry/experimental/file/40be50217595/tf2_geometry_msgs/include/tf2_geometry_msgs/tf2_geometry_msgs.h). Perhaps the tf2 package should be refactored to support the inverse transformation more explicitly, or at least the unary inverse transform.
Edit:
Definitely I think you can find what you are looking for in the source code of the package http://ros.org/wiki/robot_pose_ekf
Regards.
Originally posted by Pablo Iñigo Blasco with karma: 2982 on 2012-06-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8811,
"tags": "transform"
} |
Snakes and Letters | Question: This is the "Clean up the words" challenge from CodeEval:
Challenge
Given a list of words mixed with extra symbols. Write a program that will clean up the words from extra numbers and symbols.
Specifications
The first argument is a path to a file.
Each line includes a test case.
Each test case is a list of words.
Letters are both lowercase and uppercase, and mixed with extra symbols.
Print the words separated by spaces in lowercase letters.
Constraints
The length of a test case together with extra symbols can be in a range from 10 to 100 symbols.
The number of test cases is 40.
Input Sample
(--9Hello----World...--)
Can 0$9 ---you~
13What213are;11you-123+138doing7
Output Sample
hello world
can you
what are you doing
In a previous question someone joked the program would be much shorter / simpler in Python. I accepted this as a challenge and excuse to practice Python.
Solution:
import sys
import re
def sanitized(line):
sanitized_line = re.sub("[^a-zA-Z]+", " ", line)
return sanitized_line.lower().strip()
def main(file):
with open(file, 'r') as input_file:
for line in input_file:
print(sanitized(line))
if __name__ == "__main__":
try:
file = sys.argv[1]
main(file)
except:
print("No argument provided.")
It's so succinct I'm unsure there's enough for a review, but this site has surprised me in the past.
Answer: The code is pretty clear and clean. I will have to do some nit-picking, but here goes an attempt:
try - except without the actual exception(s). Here, you print an error when the exception raised is an IndexError (to sys.argv), but consider that you get the same error message if the file can't be properly opened (or is simply non-existent).
Change it to except IndexError, and let any IO/OSError just bubble up, since the corresponding error message is often clear enough (e.g., 'Is a directory', 'File does not exist', etc).
You're printing the error message to stdout. You could consider exiting using sys.exit("No argument provided."), which will exit with an exit code of 1, and print the message to stderr. Or raise an equivalent exception to the one you're catching.
'r'ead mode is the default opening mode. While perhaps clearer with the extra argument, with open(file) as ... is more standard.
In case of Python 2: don't use file as a variable name, since it shadows the built-in file function. Use e.g. filename instead.
Nits:
Do you want it to be Python 3 only? Otherwise, include a from __future__ import print_function at the top. print will still work as it is now in Python 2, but once you use more arguments, you're printing a tuple in Python 2, instead of a series of concatenated str-ified elements. (Thanks to JoeWallis in the comments for pointing out the mistake that I was thinking "tuple"-wise where it was just a parenthesized string.)
doc-strings to the program and functions, if you want to go all-out. (A linter would complain about that, even if it's pretty unnecessary here, and not to the point of the exercise.) | {
"domain": "codereview.stackexchange",
"id": 20431,
"tags": "python, beginner, strings, programming-challenge, error-handling"
} |
Will the Ford-Fulkerson algorithm always find the max flow if we start from a valid flow? | Question: I stumbled across this question and answer (source):
Question: Suppose someone presents you with a solution to a max-flow problem on some network. Give a linear time algorithm to determine whether the solution does indeed give a maximum flow.
Answer: First, verify that the solution is a valid flow by comparing the flow on each edge to the capacity of each edge, for cost O(|E|). If the solution is a valid flow, then compose the residual graph (O(|E|)) and look for an augmenting path, which using BFS is O(|V| + |E|). The existence of an augmenting path would mean the solution is not a maximum flow.
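That verification might be sketched as follows (a hypothetical edge-dictionary representation; conservation of flow at internal nodes is not checked, for brevity):

```python
from collections import deque

def is_max_flow(capacity, flow, source, sink):
    """capacity, flow: dicts mapping edge (u, v) -> value. True iff no augmenting path."""
    nodes = {u for u, _ in capacity} | {v for _, v in capacity}
    # Build residual capacities: leftover forward capacity plus cancellable back-flow.
    residual = {}
    for (u, v), c in capacity.items():
        f = flow.get((u, v), 0)
        if f < 0 or f > c:
            raise ValueError("not a valid flow on edge (%r, %r)" % (u, v))
        residual[(u, v)] = residual.get((u, v), 0) + (c - f)
        residual[(v, u)] = residual.get((v, u), 0) + f
    # One BFS on the residual graph: reaching the sink means an augmenting path exists.
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == sink:
            return False  # not maximum: an augmenting path exists
        for v in nodes:
            if v not in seen and residual.get((u, v), 0) > 0:
                seen.add(v)
                queue.append(v)
    return True
```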
Does this mean that the Ford-Fulkerson algorithm will reach max flow if given any valid flow as input, instead of initializing all edges to 0 at the start?
Answer: Yes. If the flow is not maximum, then there is an augmenting path. If there's an augmenting path, Ford-Fulkerson will find it (and continue to find them until the flow is maximum). Starting from a different initial flow does not change this. | {
"domain": "cs.stackexchange",
"id": 5947,
"tags": "algorithms, network-flow, ford-fulkerson"
} |
Question about interpreting probabilities in QM | Question: For the example of an infinite square well, $\psi(x)=0$ for $x$ outside the well/interval, and we are to interpret this as saying the particle cannot be found outside the well because $|\Psi(x,t)|^2=0$ in these regions. But a probability of $0$ does not necessarily imply that an event is impossible, so I'm a bit confused as to why you can say that it's impossible for the particle to be in this or that region just based off of this. Can anyone clarify?
Answer: When dealing with the infinite square well, we must be clear that it is a limit of the finite square well case. But even though for the finite case we have as Hilbert space $L^2(\Bbb{R})$ (that is, the particle can have non-zero probability of being found in any region of non-zero measure), for the infinite case the limit forces the condition of working with the Hilbert space $L^2[0,a]$. In this case the domain of the wavefunction $\psi(x)$ is $[0,a]$, so it is indeed impossible to find the particle outside the well because these events are not acceptable: any region outside $[0,a]$ is not an element of the space of possible events (or, in more precise language, of the $\sigma$-algebra), and not because they have zero probability. | {
"domain": "physics.stackexchange",
"id": 17631,
"tags": "quantum-mechanics, wavefunction, probability, born-rule"
} |
Why Black Hole is maximally chaotic? | Question: I understand intuitively that black holes are chaotic. However, people say black holes are not just chaotic, they are "maximally chaotic". What is the quantitative definition of "maximally chaotic," and how can we know that black holes are maximally chaotic?
Answer: A measure of chaos in a system is given by the so-called Lyapunov exponent $\lambda$. In a classical chaotic system, nearby trajectories typically diverge exponentially fast in time:
$$
\{q(t),p(0)\}\sim e^{\lambda t}
$$
One can show that the correct way to implement this in a quantum system is by considering the behaviour of thermal (i.e. at finite temperature $T=1/\beta$) 4-point functions of two operators, say $W$ and $V$. What one wants to compute is the object:
$$
\langle V(0)W(t)V(0)W(t) \rangle_{\beta}
$$
Notice that they have alternating time $t$. Such correlators are called out-of-time-ordered. How do we compute them? A way to do this is provided by the AdS/CFT correspondence.
The near-horizon region of near-extremal black holes is given by the AdS$_2$ geometry. In this particular spacetime, one can define a particular theory of gravity by coupling the metric to a field called dilaton (gravity alone is non-dynamical in 2 dimensions). In addition to that, one can add some matter fields. In the AdS$_{d+1}$/CFT$_d$ correspondence, such matter fields are supposed to be dual to conformal primary operators living on the boundary of AdS$_2$. More technically, the partition function of the gravity theory is identified with the generating functional of the CFT. If $\phi$ is a matter field with boundary value $\phi_0$ dual to an operator $\mathcal O$ with conformal dimension $\Delta$, then we have
$$
Z_{\text{AdS}}[\phi_0]=\int\limits_{\lim_{z\to0} \phi z^{\Delta-d}=\phi_0} D\phi\, e^{-I[\phi]}\equiv Z_{CFT}[\phi_0]= \bigg\langle \exp \left(\int_{\partial \Omega}d^dx\, \mathcal{O}\phi_0\right)\bigg\rangle
$$
By applying the above equation, we are able to compute the four-point functions of the conformal operators.
In the simplest case without supersymmetry, one finds $[1]$
$$
\langle V(0)W(t)V(0)W(t) \rangle_{\beta} \sim \beta\, e^{\frac{2\pi}{\beta}t}
$$
meaning that the Lyapunov exponent is $\lambda=\frac{2\pi}{\beta}$.
Maldacena showed that this is the maximum value allowed for Lyapunov exponents $[2]$, so these non-supersymmetric black holes are maximally chaotic.
In supersymmetric black holes there are matter fields which are actually spinors, and for them it turns out that the Lyapunov exponent is less than the maximal value $[3]$.
References
$[1]$ Maldacena J. Conformal symmetry and its breaking in two dimensional Nearly Anti-de-Sitter space. PTEP 2016, 12C104 (2016).
$[2]$ Maldacena, J. A bound on chaos. J. High Energy Phys. 08, 106 (2016).
$[3]$ Campos Delgado, R. and Foerste, S. Lyapunov exponents in N=2 supersymmetric Jackiw-Teitelboim gravity. Phys.Lett.B 835, 137550 (2022). | {
"domain": "physics.stackexchange",
"id": 93774,
"tags": "thermodynamics, black-holes, quantum-information, quantum-gravity, chaos-theory"
} |
Mapping between numbers and symbolic representations | Question: I am not a physicist but applying symbolic dynamics for information coding in signal processing. Is there any mapping between symbols and numbers?
Answer: While I am not familiar with signal processing, the question of the 1-to-1 mapping may be easy to answer - assuming your alphabet of symbols is finite.
Suppose your symbol alphabet is $A = \{ a_0, a_1, \dots , a_{n-1} \} $. First define a mapping $$ f : A \rightarrow \{ 0, 1, \dots, n-1 \} : a_i \mapsto i.$$
Then for a sequence of symbols $$b = (b_1, b_2, \dots, b_m )$$ define a mapping $$ b \mapsto f(b_1) + (f(b_2) \times n) + (f(b_3) \times n^2) + \dots + (f(b_m) \times n^{m-1}).$$
In other words, the sequence $b$ maps to a base $n$ expansion where the $i'th$ element of the sequence is the $(m-i)'th$ digit of the expansion. As such, it is clearly a bijection.
You can then biject between the base $n$ expansion and the binary (base 2) expansion.
I hope I haven't misunderstood your question.
EDIT : As per your comment below, here is a less mathematical description.
First, number your alphabet of symbols, say $a_1, a_2, ... a_n$. The order in which you number them is not important, however it is essential that both the sender and receiver number the alphabet in the same way. Next, convert the string of symbols into a numerical value written in base $n$, where $n$ is the size of your set of possible symbols. You will treat each symbol in the sequence as a digit in a number written in base $n$. Once you have your symbol sequence converted into a base $n$ number, you will need to convert this base $n$ number into binary (base 2) for transmission.
To go the other way, start with a binary string (base 2), convert this into base $n$, then use the digits of the base $n$ number to read off the symbol sequence in your message.
For convenience, let's suppose your alphabet has 10 symbols, $\{ +, -, =, 0, 1, x, y, z, <, > \} $. Then we could convert the message $x + 1 = y$ as follows:
The first symbol in our sequence (x) is the 6th symbol in our alphabet, so the first digit would be 6. The + is the second symbol of the sequence and the 1st symbol of our alphabet, so the second digit would be a 1. Continue to obtain the numerical encoding of 61537. Finally, convert this number into binary for transmission. In our example, this would be 1111000001100001.
Conversely, if you receive a signal (1111000001100001), then you can convert (decode) it into a symbol string by following the above procedure in reverse. First convert the binary to decimal, and then read off the symbol corresponding to each digit.
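The encode/decode procedure can be sketched in code. Note that this sketch numbers symbols from 0, as in the mathematical description above, so every digit stays below the base $n$; the resulting numeric value therefore differs from the 1-based walk-through:

```python
def encode(message, alphabet):
    """Treat each symbol as a digit of a base-n number, first symbol most significant."""
    n = len(alphabet)
    index = {s: i for i, s in enumerate(alphabet)}  # 0-based digits
    value = 0
    for symbol in message:
        value = value * n + index[symbol]
    return value

def decode(value, length, alphabet):
    """Invert encode(). The message length must be transmitted too,
    since leading symbol-0 digits would otherwise be lost."""
    n = len(alphabet)
    symbols = []
    for _ in range(length):
        value, digit = divmod(value, n)
        symbols.append(alphabet[digit])
    return symbols[::-1]

alphabet = ['+', '-', '=', '0', '1', 'x', 'y', 'z', '<', '>']
message = ['x', '+', '1', '=', 'y']
value = encode(message, alphabet)
print(format(value, 'b'))  # the binary string to transmit
assert decode(value, len(message), alphabet) == message
```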
If you are unfamiliar with converting numbers between different bases, then here is a good place to start: [Wikipedia](http://en.wikipedia.org/wiki/Binary_number). Good luck. | {
"domain": "physics.stackexchange",
"id": 16125,
"tags": "discrete, signal-processing, chaos-theory, complex-systems"
} |
Is there a term for head-on collisions that don't seem head-on in the lab frame? | Question: I have found that generally, you can speak of 'Head-on' collisions if they are head-on (in any frame of reference) and 'glancing', 'oblique', or 'non-head-on' collisions if they are not (the impact parameter is larger than 0). But is there a term for collisions that are not head-on in the lab frame, but are in the center-of-mass frame of reference (meaning the absolute velocities in the lab frame don't point toward each other, but the relative velocity is along the line of impact)?
Answer: As long as one of the particles is initially stationary in the lab frame, then head-on is a thing that is agreed between lab frame and CoM frame. i.e. glancing is also agreed between lab frame and CoM frame.
If you want things to be more general than that, then the concept itself becomes nebulous. There is a definition of head-on for CoM frame alone, and maybe you want to use that to define it for the general case. | {
"domain": "physics.stackexchange",
"id": 95738,
"tags": "terminology, collision"
} |
Range check operator | Question: I'd like to improve this operator code if I can. I don't know enough Raku
to write idiomatically (or any other way for that matter), but suggestions
along those lines would be good also.
use v6d;
sub circumfix:<α ω>( ( $a, $b, $c ) ) {
if ( $a.WHAT ~~ (Int) ) {
so ( $a >= $b & $a <= $c );
}
else {
my @b;
for $a { @b.push( ( $_ >= $b & $_ <= $c ).so ) };
so (False) ∈ @b;
}
};
if (α 5, 0, 10 ω) {
"Pass".say;
}
else {
"Fail".say;
}
if (α ( 5, 7, 11 ), 0, 10 ω) {
"Pass".say;
}
else {
"Fail".say;
}
This does a range check on a variable or variables, between/bookended by $b (lower) and $c (upper). I wonder about using some sort of signature check paired with multi as a cleaner approach?
Answer: Regarding this code:
my @b;
for $a { @b.push( ( $_ >= $b & $_ <= $c ).so ) };
so (False) ∈ @b;
I'm sure that Raku has some form of the any operator that would make this code a one-liner. Having a three-liner is definitely not idiomatic Raku. :)
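I can't offer a tested Raku one-liner, but for comparison, the same scalar-or-list range check collapses to a one-liner around all() in Python (shown only to illustrate the idea, assuming the intended semantics are "true when every element lies in the range"):

```python
def in_range(x, lo, hi):
    """True when x (a scalar, list or tuple) lies entirely within [lo, hi]."""
    values = x if isinstance(x, (list, tuple)) else (x,)
    return all(lo <= v <= hi for v in values)

print(in_range(5, 0, 10))           # True
print(in_range((5, 7, 11), 0, 10))  # False, 11 is out of range
```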
Did you intentionally use the & operator instead of &&? I'd think that && is more efficient, but that may only be my knowledge from other programming languages. I don't have any experience with Raku. From what I read about junctions, there should be an easier way to express this.
Your code is hard to read since you use the meaningless variable names a, b and c. It would have been better to call two of them min and max. The third can then go by any other name.
I would have expected the operator to be α $min $x $max ω, but that's not what you chose. In that case, I would have probably named the operator in_range and made it an infix operator in the form $x in_range ($min .. $max), if that's possible at all. | {
"domain": "codereview.stackexchange",
"id": 37284,
"tags": "overloading, raku"
} |
Choosing between rosbuild and catkin | Question:
I am very interested in installing ROS and starting to work on some cool projects (probably starting of with Lego NXT stuff).
I am a little confused on which distribution I should go with; I wanted to make sure that even if I go with Groovy and learn the 'catkin' build system, I will be able to walk through all the tutorials (even the ones without a catkin-specific section). I also want to make sure that all the packages (for lego NXT for example) can easily be used with catkin as well.
I'm also open to hearing any other advice you guys might have on trying to pick this up from scratch!
*I'm planning on installing Ubuntu on a partition of a Macbook Pro
Thanks!
Originally posted by asriraman93 on ROS Answers with karma: 75 on 2013-06-02
Post score: 2
Answer:
For a new project I recommend starting out with Groovy. It supports more recent OS releases, and you'll be able to run it with Ubuntu Precise LTS for quite a long time.
Even though much of the Groovy base was converted to catkin, nothing keeps you from using rosbuild on top of that. Catkin is harder to learn than rosbuild, but worthwhile in the long run. The catkin documentation is getting better, and still evolving. Here is the current Groovy catkin how-to doc. You might find it helpful.
If your dependencies have all been catkinized, use catkin yourself. If not, you'll need to stick with rosbuild until catkin versions are available. Many more packages are being converted for Hydro, something to consider when making that decision.
Also, there is a new catkin_simple package which promises to provide most of the simplicity of rosbuild in a form that is compatible with catkin workspaces. It is still experimental and not fully documented, but you are welcome to give it a try.
Originally posted by joq with karma: 25443 on 2013-06-02
This answer was ACCEPTED on the original site
Post score: 10 | {
"domain": "robotics.stackexchange",
"id": 14396,
"tags": "nxt, catkin, ros-fuerte, ros-groovy, rosbuild"
} |
Error building cv_bridge on RaspberryPi | Question:
I am trying to build some libfreenect drivers (for the kinect) and I do a rosmake and it fails in cv_bridge complaining about some image formats (e.g., CV_YUV2RGB_UYVY, see below for more).
RaspberryPi (ARM)
Raspbian Linux - Debian for RaspberryPi
OpenCV 2.3 installed
ROS fuerte
What am I doing wrong??
[kevin@raspberrypi freenect_stack]$ CC="distcc arm-unknown-linux-gnueabi-gcc" CXX="distcc arm-unknown-linux-gnueabi-g++" rosmake freenect_stack
[ rosmake ] rosmake starting...
[ rosmake ] Packages requested are: ['freenect_stack']
[ rosmake ] Logging to directory /home/kevin/.ros/rosmake/rosmake_output-20121103-133147
[ rosmake ] Expanded args ['freenect_stack'] to:
['freenect_camera', 'freenect_launch', 'libfreenect']
[rosmake-0] Starting >>> roslang [ make ]
[rosmake-0] Finished <<< roslang No Makefile in package roslang
[rosmake-0] Starting >>> rospy [ make ]
[rosmake-0] Finished <<< rospy No Makefile in package rospy
[rosmake-0] Starting >>> roscpp [ make ]
[rosmake-0] Finished <<< roscpp No Makefile in package roscpp
[rosmake-0] Starting >>> rosservice [ make ]
[rosmake-0] Finished <<< rosservice No Makefile in package rosservice
[rosmake-0] Starting >>> dynamic_reconfigure [ make ]
[rosmake-0] Finished <<< dynamic_reconfigure [PASS] [ 45.26 seconds ]
[rosmake-0] Starting >>> geometry_msgs [ make ]
[rosmake-0] Finished <<< geometry_msgs No Makefile in package geometry_msgs
[rosmake-0] Starting >>> sensor_msgs [ make ]
[rosmake-0] Finished <<< sensor_msgs No Makefile in package sensor_msgs
[rosmake-0] Starting >>> rosbuild [ make ]
[rosmake-0] Finished <<< rosbuild No Makefile in package rosbuild
[rosmake-0] Starting >>> roslib [ make ]
[rosmake-0] Finished <<< roslib No Makefile in package roslib
[rosmake-0] Starting >>> rosconsole [ make ]
[rosmake-0] Finished <<< rosconsole No Makefile in package rosconsole
[rosmake-0] Starting >>> pluginlib [ make ]
[rosmake-0] Finished <<< pluginlib [PASS] [ 17.47 seconds ]
[rosmake-0] Starting >>> message_filters [ make ]
[rosmake-0] Finished <<< message_filters No Makefile in package message_filters
[rosmake-0] Starting >>> image_transport [ make ]
[rosmake-0] Finished <<< image_transport [PASS] [ 17.48 seconds ]
[rosmake-0] Starting >>> polled_camera [ make ]
[rosmake-0] Finished <<< polled_camera [PASS] [ 15.69 seconds ]
[rosmake-0] Starting >>> common_rosdeps [ make ]
[rosmake-0] Finished <<< common_rosdeps [PASS] [ 2.14 seconds ]
[rosmake-0] Starting >>> camera_calibration_parsers [ make ]
[rosmake-0] Finished <<< camera_calibration_parsers [PASS] [ 14.33 seconds ]
[rosmake-0] Starting >>> opencv2 [ make ]
[rosmake-0] Finished <<< opencv2 [PASS] [ 0.09 seconds ]
[rosmake-0] Starting >>> cv_bridge [ make ]
[ rosmake ] Last 40 lines
{-------------------------------------------------------------------------------
[rosbuild] Including /opt/ros/fuerte/share/roscpp/rosbuild/roscpp.cmake
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
CMAKE_TOOLCHAIN_FILE
-- Build files have been written to: /home/kevin/ros/vision_opencv/cv_bridge/build
cd build && make -l1
make[1]: Entering directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
make[2]: Entering directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
make[3]: Entering directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
make[3]: Leaving directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
[ 0%] Built target rospack_genmsg_libexe
make[3]: Entering directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
make[3]: Leaving directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
[ 0%] Built target rosbuild_precompile
make[3]: Entering directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
make[3]: Leaving directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
make[3]: Entering directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
[100%] Building CXX object CMakeFiles/cv_bridge.dir/src/cv_bridge.o
distcc[7294] ERROR: compile /home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp on arch failed
distcc[7294] (dcc_build_somewhere) Warning: remote compilation of '/home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp' failed, retrying locally
distcc[7294] Warning: failed to distribute /home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp to arch, running locally instead
/home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp: In function ‘std::map<std::pair<cv_bridge::Format, cv_bridge::Format>, std::vector<int> > cv_bridge::getConversionCodes()’:
/home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp:179:47: error: ‘CV_YUV2GRAY_UYVY’ was not declared in this scope
/home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp:180:46: error: ‘CV_YUV2RGB_UYVY’ was not declared in this scope
/home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp:181:46: error: ‘CV_YUV2BGR_UYVY’ was not declared in this scope
/home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp:182:47: error: ‘CV_YUV2RGBA_UYVY’ was not declared in this scope
/home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp:183:47: error: ‘CV_YUV2BGRA_UYVY’ was not declared in this scope
distcc[7294] ERROR: compile /home/kevin/ros/vision_opencv/cv_bridge/src/cv_bridge.cpp on localhost failed
make[3]: *** [CMakeFiles/cv_bridge.dir/src/cv_bridge.o] Error 1
make[3]: Leaving directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
make[2]: *** [CMakeFiles/cv_bridge.dir/all] Error 2
make[2]: Leaving directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/kevin/ros/vision_opencv/cv_bridge/build'
[ rosmake ] Output from build of package cv_bridge written to:
[ rosmake ] /home/kevin/.ros/rosmake/rosmake_output-20121103-133147/cv_bridge/build_output.log
[rosmake-0] Finished <<< cv_bridge [FAIL] [ 49.97 seconds ]
[ rosmake ] Halting due to failure in package cv_bridge.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 18 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/kevin/.ros/rosmake/rosmake_output-20121103-133147
Originally posted by Kevin on ROS Answers with karma: 2962 on 2012-11-03
Post score: 3
Answer:
It just happened to me as well and I thought I would share the solution.
Your OpenCV version is too old. You probably downloaded opencv using the prebuilt binaries which will not work. You will need to compile from source.
Originally posted by kalectro with karma: 1554 on 2013-02-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by prarobo on 2013-05-28:
I am also having errors installing cv_bridge on raspbian. About installing opencv from source, are you referring to vision_opencv that comes with ros or the opencv from debian repositories?
Comment by kalectro on 2013-05-29:
back then there was no openCV debian ;)
You should use the debian package now and everything should work without problems | {
"domain": "robotics.stackexchange",
"id": 11614,
"tags": "ros, opencv, raspberrypi, cv-bridge, raspbian"
} |
Can I calculate the amount (mol or mass) of water in a gas, in a heat exchanger, if I know the partial pressure of the water? | Question: Is it possible to calculate the amount of water (mol, mass...) in a gas, in a heat exchanger, if I know the partial pressure of the water? The stream through the tubes is mainly benzene and hydrocarbons. Partial pressure of the water and the temperature is assumed constant.
I first tried to use the ideal gas law with the compressibility factor, but most people told me this equation is meant for closed systems (like in a tank) and not open systems like the one described. (If I'm wrong and it can be used, give me a good reason why so I understand).
Extra info: I need to calculate how much water is in the gas that will cause a relative humidity of 10 %. After taking the coldest temperature in the heat exchanger (because the RH would be highest there) as a constant, with the formula for relative humidity I can calculate the max partial pressure of water in the tubes that would cause an RH of 10 %. But I need the amount (mass, mol) that would be in the gas.
If you have any more questions, please ask.
Answer: To your challenge: Most people tell you ... is not a sound working principle for analysis. Molar volume (the inverse of molar density) does not care whether the gas is in an open system or a closed system. Recast your thought process in terms of either a molar volume (density) or take a basis of a certain (enclosed) volume of gas.
To your main question:
Use Dalton's law $y_{H_2O} = p_{H_2O}/p_T $
Assume a volume of the heat exchanger tubing or TAKE A BASIS of volume $V_T$
Assume ideal gases: $m_{H_2O} = y_{H_2O}(p_T V_T/RT)M_{H_2O}$
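A quick numerical sketch of those steps (all values here are illustrative assumptions, including the 1 m^3 basis volume; they are not from the question):

```python
R = 8.314         # universal gas constant, J/(mol*K)
M_H2O = 0.018015  # molar mass of water, kg/mol

def water_mass(p_h2o, v_basis, temp_k):
    """Mass of water vapour (kg) in a basis volume v_basis (m^3),
    given its partial pressure p_h2o (Pa) and temperature (K)."""
    moles = p_h2o * v_basis / (R * temp_k)  # ideal gas, per component (Dalton)
    return moles * M_H2O

# Illustrative only: 1 m^3 basis volume at 300 K with p_H2O = 1000 Pa.
print(water_mass(1000.0, 1.0, 300.0))  # about 0.007 kg of water
```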
Yes, you will not be able to determine $m_{H_2O}$ without a basis volume. That is however not the same as saying that you cannot use a gas law in an open system. | {
"domain": "engineering.stackexchange",
"id": 3385,
"tags": "fluid-mechanics, thermodynamics, chemical-engineering, gas, steam"
} |
Peskin and Schroeder's QFT eq. (9.14): Gaussian momentum field integration of phase space path integral | Question: On Peskin and Schroeder's QFT book page 282, the book considered functional quantization of scalar field.
First, begin with
$$\left\langle\phi_b(\mathbf{x})\left|e^{-i H T}\right| \phi_a(\mathbf{x})\right\rangle=\int \mathcal{D} \phi \mathcal{D} \pi \exp \left[i \int_0^T d^4 x\left(\pi \dot{\phi}-\frac{1}{2} \pi^2-\frac{1}{2}(\nabla \phi)^2-V(\phi)\right)\right] $$
Since the exponent is quadratic in $\pi$, the book evaluates the $\mathcal{D}\pi$ integral and obtains
$$\left\langle\phi_b(\mathbf{x})\left|e^{-i H T}\right| \phi_a(\mathbf{x})\right\rangle=\int \mathcal{D} \phi \exp \left[i \int_0^T d^4 x \mathcal{L}\right]. \tag{9.14} $$
There should be some complicated coefficients here, but the book omits them.
I am puzzled about how this integral is carried out, i.e.
$$\int \mathcal{D} \pi \exp \left[i \int _ { 0 } ^ { T } d ^ { 4 } x \left(\pi \dot{\phi}-\frac{1}{2} \pi^2\right)\right] $$
Since the integration variable is now $\pi$, which is a function, how should I understand its upper and lower limits? Also, there is a term $i \int_0^T d^4 x$ inside the exponent. So how should I understand this integral?
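In fact the $\pi$ integral is an ordinary (functional) Gaussian: completing the square at each spacetime point gives, as a sketch with the normalization suppressed,

```latex
\pi \dot{\phi} - \frac{1}{2}\pi^{2}
  = -\frac{1}{2}\left(\pi - \dot{\phi}\right)^{2} + \frac{1}{2}\dot{\phi}^{2} ,
```

so shifting $\pi \to \pi + \dot{\phi}$ (the measure is translation invariant) decouples a field-independent Gaussian constant, and the leftover $\frac{1}{2}\dot{\phi}^{2}-\frac{1}{2}(\nabla \phi)^{2}-V(\phi)$ is exactly the $\mathcal{L}$ of eq. (9.14).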
Answer: Briefly, there are 2 issues:
It is safest to Wick rotate $t_E=it_M$ to make the Gaussian integrals exponentially damped rather than oscillatory. (NB: Don't also Wick rotate the momentum field $\pi_M=i\pi_E$, cf. my related Phys.SE answer here.)
Truncate spacetime to a finite large box and discretize it. The result of the Gaussian integrations will be a number ${\cal N}$ that doesn't depend on any physically important parameters, but diverges in the continuum limit. Define the path integral measure ${\cal D}\pi$ to contain the reciprocal constant ${\cal N}^{-1}$. | {
"domain": "physics.stackexchange",
"id": 91001,
"tags": "quantum-field-theory, field-theory, path-integral, regularization, normalization"
} |
DFA that rejects $a^{23}$ but accepts $\{a^i|i\geq 24\}$ | Question: Construct a DFA $M$ with $\Sigma = \{a\}$ and max. 11 states so that $a^{23}\not\in L(M)$ but $\{a^i|i\geq 24\}\subset L(M)$.
I don't see how it is possible? Because it's a DFA and the alphabet only contains $a$, shouldn't I only be able to look at modulo 11? I could make 1 mod 11 not accepting but then $a^{34}$ would not be accepted as well.
EDIT: The prof confirmed it was a typo. We are looking for a NFA. Thanks everyone!
Answer: This is not possible. A DFA on a one symbol alphabet is a directed graph with out-degree 1. The walk one follows from the start state consists of a (possibly zero length) path followed by a cycle. As all sufficiently long strings are accepted, all states in the cycle must accept. If the DFA has at most 11 states, a walk of length 23 must be in the cycle, so it must accept.
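A quick way to see the path-plus-cycle argument concretely: a unary DFA is just a function with out-degree 1, so by pigeonhole the walk from the start state becomes periodic within $k$ steps. A small Python sketch (the random transition table is an arbitrary example; the assertion holds for any 11-state table):

```python
import random

random.seed(1)
k = 11
delta = [random.randrange(k) for _ in range(k)]   # out-degree 1: one successor per state

# Walk from state 0 until a state repeats (pigeonhole: within k steps).
seen, s, t = {}, 0, 0
while s not in seen:
    seen[s] = t
    s, t = delta[s], t + 1
period = t - seen[s]          # cycle length; the cycle is entered at step seen[s] <= k-1 < 23

def state_after(n):
    s = 0
    for _ in range(n):
        s = delta[s]
    return s

# Steps 23 and 23 + period land on the same state, so a^23 and a^(23+period)
# are both accepted or both rejected -- no 11-state DFA can separate them.
same = state_after(23) == state_after(23 + period)
```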
More generally, if a DFA on a one-symbol alphabet rejects $a^n$ but accepts all $a^m$, $m>n$, then it must have at least $n+2$ states: it starts with a path of length $\ge n$ whose $n$th state rejects, followed by a cycle of accepting states of length $\ge 1$. | {
"domain": "cs.stackexchange",
"id": 12639,
"tags": "formal-languages, automata, finite-automata"
} |
What are the units of Odometry/orientation.z/w and Twist.angular.z fields? | Question:
I am new to ROS and am trying to understand the units of the values in the Odometry.orientation.w and z fields, and what they represent. I am trying to rotate a turtlebot by a specific number of degrees. Is there a way to achieve this, as I am able to set only angular velocities, whose units I don't know either?
Originally posted by fayazvf on ROS Answers with karma: 21 on 2016-03-01
Post score: 1
Answer:
Orientation is expressed as a quaternion, not Euler angles. You can check http://answers.ros.org/question/220333/what-do-x-y-and-z-denote-in-mavros-topic-mavrosimudata_raw/#220356
For unit conventions you can check REP-0103. As you can see there, angular velocity is in rad/s if the code you use conforms to REP-0103.
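For what it's worth, for a planar robot (x = y = 0 in the orientation quaternion) the heading can be recovered from just the z and w components; this is the yaw part of what utilities like tf's euler_from_quaternion compute. A minimal sketch of the math:

```python
import math

def yaw_from_quaternion(z, w):
    # Pure rotation about the vertical axis: w = cos(yaw/2), z = sin(yaw/2)
    return 2.0 * math.atan2(z, w)   # yaw in radians

# Example: a quarter turn to the left
z, w = math.sin(math.pi / 4), math.cos(math.pi / 4)
yaw_deg = math.degrees(yaw_from_quaternion(z, w))   # 90 degrees
```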
Originally posted by Akif with karma: 3561 on 2016-03-01
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 23952,
"tags": "navigation, odometry"
} |
Finding hash of a substring $[i, j]$ in $O(1)$ using $O(|S|)$ pre computation | Question: Given a string $S$ of length $n$ characters, is it possible to calculate the hash of its substring $[i, j]$ (from index $i$ to $j$, both inclusive) in $O(1)$ using some form of precomputation? Can we use a modification of the rolling hash?
The following is a similar problem to my question. In it the string was given in a compressed form. Example: if the string is "aaabccdeeee" then the compressed form is:
3 a
1 b
2 c
1 d
4 e
The data was stored in an str[] array as :
str[] = [{'a','3'}, {'b','1'}, {'c','2'}....]
Note that hashing was used in the solutions, which checked whether the given substring is a palindrome. Given a substring of string $S$ as $(i, j)$, they computed the hash of substring $[i, (i + j) / 2]$ and the reverse hash of substring $[(i+j+2)/2, j]$ and checked whether they were equal. So to check whether, in the string S = "daabac", the substring $[1, 5]$ is a palindrome, they computed the following:
h1 = forward_hash("aa")
h2 = reverse_hash("ba")
h1 == h2
Code for the hashing concept
The hash precomputation was done as follows:
/* Computing the Prefix Hash Table */
pre_hash[0] = 0;
for(int i = 1;i <= len(str); i++)
{
pre_hash[i] = pre_hash[i - 1] * very_large_prime + str[i].first;
pre_hash[i] = pre_hash[i] * very_large_prime + str[i].second;
}
/* Computing the Suffix Hash Table */
suff_hash[0] = 0;
for(int i = 1;i <= len(str); i++)
{
suff_hash[i] = suff_hash[i - 1] * very_large_prime + str[K - i + 1].first;
suff_hash[i] = suff_hash[i] * very_large_prime + str[K - i + 1].second;
}
And then the hash was computed using the following functions :
/* Calculates the Forward hash of substring [i,j] */
unsigned long long CalculateHash(int i, int j)
{
if(i > j)
return -1;
unsigned long long ret = pre_hash[j] - POW(very_large_prime, 2 * (j - i + 1)) * pre_hash[i - 1];
return ret;
}
/* Calculates the reverse hash of substring [i,j] */
unsigned long long CalculateHash_Reverse(int i, int j)
{
unsigned long long ret = suff_hash[j] - POW(very_large_prime, 2 * (j - i + 1)) * suff_hash[i - 1];
return ret;
}
What I am trying to do
I am looking for a general approach to the above concept. Given a Pattern $P$, I want to check if the pattern $P$ is present in a string $S$. I know the index $i$ to check where it may be present. And I also know the length of pattern $P$ represented as $|P|$. In short I want to check if hash of $S[i, i + |P|]$ and hash of $P$ match or not in $O(1)$ using some form of precomputation on $S$. (Ignoring the time taken to compute hash of $P$ else it would be $O(1+|P|)$.)
Answer: Yes, you can solve your problem, roughly speaking, by precomputing prefix sums.
In particular, if you are willing to do $O(n)$ precomputation and to use $O(n)$ space, we can solve your problem, for most standard rolling hashes. We just need the rolling hash to have one extra property, the inverse property. Most rolling hashes do have this property, including the well-known Rabin-Karp rolling hash.
Let me explain the details, below.
Some notation: let $S[i..j]$ denote the substring of $S$ from index $i$ to $j-1$ (note: it doesn't include index $j$; it is a substring of length $j-i$). Let $H(S[i..j])$ denote the rolling hash of that substring.
The intuition
Imagine that we used a very simple hash, which is just the sum of the bytes modulo 256:
$$H(S[i..j]) = S[i] + S[i+1] + S[i+2] + \dots + S[j-1] \bmod 256.$$
This is a crummy hash function (it has lots of collisions), but let's go with it, since the purpose is just to get some intuition. This is a rolling hash; given the hash of a prefix of the string, it is easy to extend it by one more byte (just add that byte to the hash value you've got so far).
Now with this hash function, we could easily solve your problem. During the precomputation, we'd fill in an array $A[]$ so that $A[j] = S[0] + S[1] + \dots + S[j-1] \bmod 256$ for each $j$; this can be done in $O(n)$ time. Now, we can compute $H(S[i..j])$ via the simple relation
$$H(S[i..j]) = A[j] - A[i] \bmod 256.$$
This can be computed in $O(1)$ time, given the precomputed array $A$.
Of course, you'd never use this in practice, because it's such a crummy hash function. But the same idea can be generalized to work with many other rolling hash functions, which you would want to use. (Intuitively, that's because they are based upon a binary operation that is associative and has an inverse, which is all you need for this trick to work.) I'll describe the details next.
The inverse property
All rolling hashes have the following property:
The accumulation property: Given $H(S[i..j])$ and $S[j]$, we can compute $H(S[i..j+1])$.
We need one additional property from the rolling hash:
The inverse property: Given $H(S[i..j])$ and $H(S[i..k])$, where $j < k$, it should be possible to derive $H(S[j..k])$.
Many standard rolling hashes do have this inverse property. For instance, the Rabin-Karp rolling hash has the inverse property. The same is true for hashing with cyclic polynomials (Buzhash).
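For concreteness, here is a sketch of a Rabin-Karp-style prefix-hash table with this inverse property (the base and modulus are arbitrary choices, not anything prescribed by the problem):

```python
MOD, BASE = (1 << 61) - 1, 131   # arbitrary large prime modulus and base

def precompute(s):
    """Prefix hashes H[j] = hash of S[0..j], plus powers of the base."""
    n = len(s)
    H = [0] * (n + 1)
    P = [1] * (n + 1)                      # P[k] = BASE**k mod MOD
    for k in range(n):
        H[k + 1] = (H[k] * BASE + ord(s[k])) % MOD
        P[k + 1] = P[k] * BASE % MOD
    return H, P

def substring_hash(H, P, i, j):
    """Hash of S[i..j] (characters i..j-1) in O(1) via the inverse property."""
    return (H[j] - H[i] * P[j - i]) % MOD

s = "abracadabra"
H, P = precompute(s)
# S[0..4] and S[7..11] are both "abra", so their hashes agree:
same = substring_hash(H, P, 0, 4) == substring_hash(H, P, 7, 11)
```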
How to use the inverse property for your problem
Create an array $A[]$ of $n$ elements. During precomputation, we will fill in the elements of $A$ so that $A[j] = H(S[0..j])$. This precomputation can be done in $O(n)$.
Now, at some later point, you would like to compute the rolling hash of $S[i..j]$, i.e., to compute $H(S[i..j])$. How do you do that? It turns out it is easy. First, look up $H(S[0..i])$ and $H(S[0..j])$ using the array; this involves reading the value of $A[i]$ and $A[j]$. Next, use the inverse property to compute $H(S[i..j])$ from these two values. Done! That's all there is to it. | {
"domain": "cs.stackexchange",
"id": 3062,
"tags": "data-structures, strings, hash, rolling-hash"
} |
How can temperature be calculated given relative humidity and dew point? | Question: While looking into the better indicator of how miserable it feels outside, either dew point or relative humidity, I came across this statement:
The optimum combination for human comfort is a dewpoint of about 60°F
and a RH of between 50 and 70% (this would put the temperature at
about 75°F).
Source: http://www.theweatherprediction.com/habyhints/190/
This led me to this calculator that will calculate the temperature given relative humidity and dew point - for example a dew point of 70°F and a relative humidity of 90% results in a temperature of 73.11°F. The web site for this calculator says the values are based on the August-Roche-Magnus approximation and gives the following equation to calculate temperature:
$T =243.04 \Large \frac{\frac{17.625\ TD}{243.04+TD}-\ln\left(\frac{RH}{100}\right)}{17.625+\ln\left(\frac{RH}{100}\right)-
\frac{17.625\ TD}{243.04+TD}}$
Given the equation, I'm still having a hard time figuring out how the temperature is being calculated. Can somebody please explain how the temperature is being calculated in simple terms?
Answer: To understand that formula, it's better to start from the more intuitive dependence of dew point with temperature and relative humidity, as illustrated by the following graph from Wikipedia:
As for any solvent with a solute, the higher the temperature of the solvent the more solute it can hold. That's why hot water can dissolve more sugar than cold water. Or, for a given quantity of sugar, hot water dissolves it faster than cold water.
The figure above says basically the same: Hotter air dissolves more water. Condensation happens when the amount of water in the air is more that the amount the air can actually hold. The dew point is the temperature at which condensation appears. Therefore, for a given relative humidity, the hotter the air, the higher is the dew point.
As you can see, in the Magnus approximation, the relationship is just a straight line, with a slope and constant changing for different values of the relative humidity.
I'll assign letters to each constant for simplicity, so that
$b=17.625$
and
$c=243.04$
With that, the dew point can be calculated as
$TD= \large c \LARGE \frac{\ln\left(\frac{RH}{100}\right)+\frac{b T}{c+T}}{b-\ln\frac{RH}{100}-\frac{b T}{c+T}}$
(see Calculating the dew point)
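For illustration, the same August-Roche-Magnus arithmetic in Python (coefficients $b$ and $c$ as above; by construction, feeding the computed temperature back into the dew-point formula returns the original dew point exactly):

```python
import math

B, C = 17.625, 243.04   # Magnus coefficients, for temperatures in deg C

def temperature(td, rh):
    """Air temperature (deg C) from dew point td (deg C) and relative humidity rh (%)."""
    gamma = B * td / (C + td)
    ln_rh = math.log(rh / 100.0)
    return C * (gamma - ln_rh) / (B + ln_rh - gamma)

def dew_point(t, rh):
    """Magnus dew point (deg C), used here to check the round trip."""
    g = math.log(rh / 100.0) + B * t / (C + t)
    return C * g / (B - g)

td_c = (70.0 - 32.0) * 5.0 / 9.0                    # dew point of 70 F, in deg C
t_f = temperature(td_c, 90.0) * 9.0 / 5.0 + 32.0    # about 73.11 F, as in the example
```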
I won't do the algebra, but if you rearrange that formula so that $T$ is written as function of $TD$ and $RH$, you will go back to the formula you presented. | {
"domain": "earthscience.stackexchange",
"id": 1518,
"tags": "meteorology"
} |
Updating an inventory with R using apply functions | Question: I have created an inventory that works via QR codes. Briefly: the QR code codes for an email with the book/student checking it out. The email is downloaded into R using the gmailR package (code not shown). The info from the email is taken and added to a table which is compared to a master table (the inventory) and the changes are then made accordingly to update the master table.
The updating works by looking to see if the book is already IN or OUT and then simply flipping it to the opposite. And if it is being checked in the student and the date are erased (switched back to NA).
My original approach to updating the table was to use a function from the apply family, but the problem I ran into was that I do not want to change ALL rows in the master table, only the ones that need to be updated. I could not figure out a way to do this without using a for loop. Is there a way to write this code more efficiently, perhaps using apply or any other vectorized functions? Also, any other suggestions about my general design strategy/approach for this inventory would be appreciated.
(By the way, I know I should probably use the date class for the date column but don't worry about that for now).
## Sample data.
# new = List of books to update, the date, and student name.
# master = The inventory
new <- structure(list(book = structure(1:3, .Label = c("Almost moon", "Ava my story", "Catching fire"), class = "factor"), date = structure(c(1L, 1L, 1L), .Label = "8/23/15", class = "factor"), student = structure(1:3, .Label = c("John", "Mary", "Sue"), class = "factor")), .Names = c("book", "date", "student"), row.names = c(NA, -3L), class = "data.frame")
master <- structure(list(book = c("A trick I learned from dead men", "Almost moon", "Austin monthly july 2013", "Ava my story", "Becoming jane austen", "Bossypants", "Catching fire", "Cold mountain", "Comfort food", "Confessions of a jane austen addict"), author = c("Aldridge", "Sebold", "Various", "Gardner", "Spence", "Fey", "Collins", "Frazier", "Jacobs", "Rigler"), status = c("IN", "IN", "IN", "IN", "IN", "IN", "IN", "IN", "IN", "IN"), student = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA), date = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, NA)), .Names = c("book", "author", "status", "student", "date"), row.names = c(NA, 10L), class = "data.frame")
## Update inventory
if(sum(new[,1] %in% master$book) == length(new[,1])) {
matches <- which(master$book %in% new[,1])
for(i in matches) {
if(master[i, 3] == "IN") {
master[i, 3] <- "OUT"
master[i, 4] <- as.character(new[new$book == master[i, 1], "student"])
master[i, 5] <- as.character(new[new$book == master[i, 1], "date"])
} else if(master[i, 3] == "OUT") {
master[i, 3] <- "IN"
master[i, 4] <- NA
master[i, 5] <- NA
}
}
} else {
stop("At least one book not found in database")
}
Answer: You can indeed use vectorization. Create two vectors of indices (or booleans like I did) to identify the rows of the master that need to be updated. To find the corresponding row from the new, you can use the match function like I did.
## Update inventory
if (all(new$book %in% master$book)) {
idx.map <- match(master$book, new$book)
is.match <- master$book %in% new$book
checking.out <- is.match & master$status == "IN"
checking.in <- is.match & master$status == "OUT"
master$status[checking.out] <- "OUT"
master$status[checking.in ] <- "IN"
master$student[checking.out] <- as.character(new$student[idx.map[checking.out]])
master$student[checking.in ] <- NA
master$date[checking.out] <- as.character(new$date[idx.map[checking.out]])
master$date[checking.in ] <- NA
} else {
stop("At least one book not found in database")
}
You will notice another improvement: everywhere I am referring to columns using their name, for example, master$student instead of their index like master[..., 3]. It makes the code more readable and easier to maintain (Imagine what would happen if suddenly a column was inserted at the beginning of master: you would have to modify all your indices...) | {
"domain": "codereview.stackexchange",
"id": 15434,
"tags": "r, vectorization"
} |
Injectable logging with NLog and Ninject | Question: I have been using NLog for logging purposes in my web applications, but it was not injectable. More precisely, each class using logging declared the logger like this:
private static Logger logger = LogManager.GetCurrentClassLogger();
Since my logging is very simple, I have defined some extension methods to easily log any message and/or exception information:
public static class NLogExtensions
{
public static void LogEx(this Logger logger, LogLevel level, String message)
{
logger.Log(level, message);
}
public static void LogEx(this Logger logger, LogLevel level, String format, params object[] parameters)
{
logger.Log(level, format, parameters);
}
public static void LogEx(this Logger logger, LogLevel level, IList<String> list)
{
String output = String.Join("; ", list);
LogEx(logger, level, output);
}
public static void LogEx(this Logger logger, LogLevel level, String message, Exception exc)
{
try
{
GlobalDiagnosticsContext.Set("FullExceptionInfo", exc.ToString());
logger.Log(level, message, exc);
}
finally
{
GlobalDiagnosticsContext.Remove("FullExceptionInfo");
}
}
public static void LogEx(this Logger logger, LogLevel level, String format, Exception exc, params object[] parameters)
{
try
{
GlobalDiagnosticsContext.Set("FullExceptionInfo", exc.ToString());
logger.Log(level, format, parameters);
}
finally
{
GlobalDiagnosticsContext.Remove("FullExceptionInfo");
}
}
}
It is clear that everything is static and I cannot replace logging while automatic testing takes place, for example. So, I thought about injecting the logging mechanism.
First, I read this article, but it looks quite complicated for my needs, so I thought of giving it a try on my own.
The service
public class LoggingService : ILoggingService
{
private static Logger logger = LogManager.GetCurrentClassLogger();
public void Log(LogLevel level, String message)
{
logger.Log(level, message);
}
public void Log(LogLevel level, String format, params object[] parameters)
{
logger.Log(level, format, parameters);
}
public void Log(LogLevel level, IList<String> list)
{
String output = String.Join("; ", list);
Log(level, output);
}
public void Log(LogLevel level, String message, Exception exc)
{
try
{
GlobalDiagnosticsContext.Set("FullExceptionInfo", exc.ToString());
logger.Log(level, message, exc);
}
finally
{
GlobalDiagnosticsContext.Remove("FullExceptionInfo");
}
}
public void Log(LogLevel level, String format, Exception exc, params object[] parameters)
{
try
{
GlobalDiagnosticsContext.Set("FullExceptionInfo", exc.ToString());
logger.Log(level, format, parameters);
}
finally
{
GlobalDiagnosticsContext.Remove("FullExceptionInfo");
}
}
}
The configuration
<targets>
<target name="database" type="Database">
<connectionString>
Data Source=dbinstance;Initial Catalog=database;User Id=userid;Password=userpass;Application Name=TheLogger
</connectionString>
<commandText>
insert into dbo.nlog
(log_date, log_level_id, log_level, logger, log_message, machine_name, log_user_name, call_site, thread, exception, stack_trace, full_exception_info)
values(@timestamp, dbo.func_get_nlog_level_id(@level), @level, NULL /*@logger*/, @message, @machinename, @username, NULL /*@call_site */, @threadid, @log_exception, @stacktrace, @FullExceptionInfo);
</commandText>
<parameter name="@timestamp" layout="${longdate}"/>
<parameter name="@level" layout="${level}"/>
<parameter name="@logger" layout="${logger}"/>
<parameter name="@message" layout="${message}"/>
<parameter name="@machinename" layout="${machinename}"/>
<parameter name="@username" layout="${windows-identity:domain=true}"/>
<parameter name="@call_site" layout="${callsite:filename=true}"/>
<parameter name="@threadid" layout="${threadid}"/>
<parameter name="@log_exception" layout="${exception}"/>
<parameter name="@stacktrace" layout="${stacktrace}"/>
<parameter name="@FullExceptionInfo" layout="${gdc:FullExceptionInfo}"/>
</target>
</targets>
It is clear that I do not have class information anymore, since the logger is defined in a single place, but my custom field FullExceptionInfo gets me relevant information for exceptions.
Is this a good approach, or can it lead to trouble in the future?
Answer: Your implementation looks all right. It should be possible to inject it with any DI tool, as you work with abstractions.
But I have a couple of improvement notes:
Logger type:
private static Logger logger = LogManager.GetCurrentClassLogger();
This would generate yournamespace.LoggingService as the logger.
I believe a key part of your log file is identifying where the log entry originated/was written. Therefore, create a logger based on the caller's type (the class where the logger is being used).
Self-explanatory methods:
Instead of having overloaded methods, think of having self-explanatory methods, so consumers of your logger have a better understanding of which method is appropriate for a given situation.
e.g Info, Debug, Exception, LogException, LogExceptionWithParameters etc.
Think about giving meaningful names to the following methods in your implementation.
void Log(LogLevel level, String format, params object[] parameters)
void Log(LogLevel level, IList<String> list)
void Log(LogLevel level, String message, Exception exc)
void Log(LogLevel level, String format, Exception exc, params object[] parameters)
As a side note:
If I were you, I would implement a logger factory to create loggers based on the caller's type.
Eg.
Factory
public interface ILoggerFactory
{
ILogger Create<T>() where T : class;
}
public class LoggerFactory : ILoggerFactory
{
public ILogger Create<T>() where T : class
{
return new Logger(typeof(T));
}
}
Logger
public interface ILogger
{
string Name { get; }
void Debug(string message);
}
public class Logger : ILogger
{
private readonly NLog.Logger _logger;
public Logger(Type type)
{
if (type == null)
throw new ArgumentNullException("type");
// Name the logger after the caller's type rather than this wrapper class
_logger = LogManager.GetLogger(type.FullName);
}
public string Name
{
get { return _logger.Name; }
}
public void Debug(string message)
{
_logger.Debug(message);
}
}
Usage
var logger = loggerFactory.Create<CallerClass>();
logger.Debug("some debug message"); | {
"domain": "codereview.stackexchange",
"id": 32702,
"tags": "c#, logging, dependency-injection, ninject"
} |
When does a star rise with the sun? | Question:
Problem: Suppose an observer's latitude is 45 S and a star's RA/DEC is
3h15min/41S. On what date will the sun set with the star.
I have been stuck on this problem for a while.
I found out that the time of setting of a star is found using the formula:
$$\alpha+\arccos(-\tan(\delta).\tan(\phi))$$
$\implies3\frac{1}{4}^h+\arccos(-\tan(-41)\tan(-45)) = \alpha_{sun}+\arccos(-\tan(\delta_{sun})\tan(-45))$
$\implies 47.8 = \alpha_{sun}+arccos(tan(\delta_{sun}))$
I am not able to solve this any further.
Is there something wrong in my approach? Something I am missing?
If not how do I solve the above equation further?
Thanks in anticipation!
Answer: The formula you gave is to find the hour angle of the star while setting, not the setting time itself. Suppose the hour angle of the star is $HA_\star$, and $RA=\alpha$; then the Local Sidereal Time is given by $LST=HA_\star+\alpha$. At the time of setting, $HA_\star=10^h1^m30.2^s$. Thus, $LST=13^h16^m30.2^s$ at the time of setting.
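As a numerical check of that LST (plain Python; angles in degrees, times in hours):

```python
import math

ra_h, dec_deg, lat_deg = 3.25, -41.0, -45.0   # star at RA 3h15m, dec -41; observer at -45
# Hour angle at setting: cos(HA) = -tan(dec) * tan(latitude)
cos_ha = -math.tan(math.radians(dec_deg)) * math.tan(math.radians(lat_deg))
ha_h = math.degrees(math.acos(cos_ha)) / 15.0   # about 10h01m
lst_h = (ra_h + ha_h) % 24.0                    # LST = RA + HA, about 13.275 h = 13h16m
```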
Now, $RA_\odot=6^h$ on $\text{June }21^{st}$ and $HA_\odot =\arccos(\tan(23.43^{\circ}))=4^h17^m17^s$ at sunset. Thus the star sets with the sun after $\text{June }21^{st}$. Since $RA_\odot$ increases by approximately $2^h$ every month, we can assume that the star sets with the sun near $\text{July }21^{st}$. Let it set after $n$ more days. On $\text{July }21^{st}$, $\delta_\odot = 20.09^{\circ}$, which gives us $HA_\odot=4^h34^m11^s$ and $RA_{\odot}=8^h9^m44^s$. The $HA_\odot$ at sunset is approximately the same for nearby dates. Since $\delta_\odot$ is also near its maximum, we can approximate $\Delta RA_\odot= \omega_\odot n$. Thus,
$$13^h16^m30.2^s=8^h9^m44^s + \omega_\odot n + 4^h34^m11^s$$
$$ \omega_\odot n =0.5433^h$$
$$n=8$$
On $\text{July }29^{th}$, $\delta_\odot=18.23^{\circ}$, $HA_\odot= 4^h43^m6^s$, $RA_\odot=8^h42^m12^s$, which gives us $LST=13^h25^m18^s$, a bit more than expected. Trial and error for a few days earlier gives us that on $\text{July }26^{th}$ the sun sets at LST $13^h15^m9^s$, which would be the most appropriate answer.
Thus, the sun sets along with the star at $\text{July } 26^{th}$ | {
"domain": "astronomy.stackexchange",
"id": 4213,
"tags": "star, the-sun, declination, right-ascension"
} |
How can you test what color different people perceive? | Question: If I would show someone a yellow object and ask them, "is this object yellow?"
That person would say "yes".
But I could never know if my perception of the color yellow is the same as that other person's.
Because he or she could actually be seeing, what I know to be the color green.
But then tells me that its the color yellow because that has been taught to him or her from young age.
So how can you test if people are really seeing the same color?
(originally posted this question on physics.stackexchange but was advised to try it here)
https://physics.stackexchange.com/questions/48731/how-can-you-test-what-color-different-people-perceive
Answer: One way we can get evidence qualia are the same or very similar for different people is by reactions to it, beyond just the word.
For example, beyond the word "pain", we have other strong reactions to pain. So nobody
suspects other people might experience pain as pleasure and vice versa. Obviously not!
There are no obvious signs for colour qualia, so it makes sense to suspect some people experience red as blue and vice versa.
Still, there are weak reactions to colours; for example, certain colours are sometimes associated with certain emotions. But this is very weak evidence and could easily be mismeasured, or be bound up with culture rather than biology.
We could test it: take pairs of identical twins at birth (so that they haven't learned words for colours yet), re-wire optical nerves of one of each pair so that red and blue receptors were switched (and perform a placebo operation on the other one, so they don't know who is who), and see whether there is any statistically significant change in the attitudes to different colours or the like as they grow up.
The experiment has various technological and ethical problems :-) But it could be done in principle, so to me this shows the question isn't meaningless, just that it's hard to find the answer. | {
"domain": "biology.stackexchange",
"id": 1676,
"tags": "perception"
} |
When can a deterministic finite-state-automaton (DFSA) along with its input sequence be said to be a part of another DFSA? | Question: For a Finite State Automaton / Finite State Machine (FSM) $F$ that has an input alphabet, a set of possible states, an initial state, a set of possible final states, and a state transition function, let a finite input sequence $S$ be given such that at the end of this sequence the FSM enters a final state and stays in that state.
Can this FSM $F$ along with the input sequence $S$ be considered a separate FSM $F'$?
Analogous to this, can a Turing machine $T$ along with a finite tape $P$ be considered a separate Turing machine $T'$?
What are the conditions, if any, for this to be true assuming it is true?
Note: I expect a formal proof, or a reference/outline to a formal proof that proves that either of this can or cannot be done. Some theory related to this is also welcome.
My research:
Closely related topic:
R. T. G. TAN (1979) Hardware and software equivalence, International Journal of Electronics, 47:6, 621-622, DOI: 10.1080/00207217908938690
I am aware of the principle of hardware and software equivalence, which states that a given task can be performed using hardware or software, i.e. digital hardware and software are equivalent models of computation. But I think my question is different from this one.
Motivation:
From this question (
Is there code below microcode? ) I think we can consider an FSM with its input sequence (microcode) to be a part of another FSM (the digital computer), but of course much more circuitry like Arithmetic and Logical Unit (ALU) and datapath is needed to make a computer. Microcode is used only for the control circuit.
This answer claims the data in the RAM of a computer along with the CPU can be considered to be a part of a bigger circuit.
To quote:
The circuit is fixed (it is the gates in the processor) and part of its input is data that depends upon the program you are executing (which is stored in the RAM of the computer). However, you could consider this a larger circuit where part of it is hardcoded (i.e., the program part of the input is hardcoded); then you can view a computer running a program as a big circuit with part that is universal and identical for all programs (the gates of the processor) and part that depends on the program (the hardcoded input), and this immediately gives a mapping from programs to circuits. The mapping is implemented by a compiler.
Answer: It's not clear to me how to interpret "can be considered", so I'm going to identify one technical question that can be answered.
Given a FSM $F$ and an input sequence $S$, it is possible to build another FSM $F'$ so that the execution of $F'$ on the empty input is in one-to-one correspondence with the execution of $F$ on $S$ (and in particular, both end at the same end state(s); e.g., either $F$ accepts on $S$ and $F'$ accepts on the empty input; or $F$ rejects on $S$ and $F'$ rejects on the empty input).
The proof is a straightforward application of the product construction: we construct one FSM $F_0$ that outputs the fixed sequence $S$, and then compute the parallel composition of $F_0$ with $F$.
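A concrete sketch of that product construction in Python (transition tables as dicts; the names are illustrative): the states of $F'$ are pairs $(i, q)$, where $i$ counts how much of $S$ the fixed-output machine $F_0$ has emitted and $q$ is the state of $F$, so $F'$ needs no external input at all.

```python
def hardcode_input(delta, start, accepting, seq):
    """Product of F = (delta, start, accepting) with a machine F0 that emits seq."""
    prime_delta, q_states = {}, set(delta)
    for i, sym in enumerate(seq):
        for q in q_states:
            prime_delta[(i, q)] = (i + 1, delta[q][sym])
    return prime_delta, (0, start), {(len(seq), q) for q in accepting}

# F: parity of 1s in a binary string; S = "1101" has an odd number of 1s.
delta = {"even": {"0": "even", "1": "odd"}, "odd": {"0": "odd", "1": "even"}}
pd, state, acc = hardcode_input(delta, "even", {"even"}, "1101")
for _ in "1101":                 # run F' on *no* input: just follow its steps
    state = pd[state]
result = state in acc            # same verdict as running F on "1101" (reject)
```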
The following is also true: given a Turing machine $T$ and a fixed input $P$ (i.e., initial state of the tape $P$), then it is possible to construct another Turing machine $T'$ such that execution of $T$ on input $P$ has the same result as execution of $T'$ on any input.
Formal proofs with Turing machines are often tedious and uninformative, so it's easier to see how this is true by considering a real-world program. For instance, suppose we have Python code that defines some function t():
def t(x):
...
and we have some fixed string p. Consider the following Python function t_prime (an apostrophe is not valid in a Python identifier, so t' is spelled t_prime):
def t(x):
...
def t_prime(x):
return t(p)
Then it is easy to see that the behavior of t_prime on any input is equivalent to the behavior of t on input p. (Here we have hard-coded a lexical constant string in the place indicated with p above.) You can do the same thing with Turing machines, where the machine first writes $P$ on the tape, and then starts executing $T$, to define a Turing machine $T'$ that proves the claim above. | {
"domain": "cs.stackexchange",
"id": 17857,
"tags": "turing-machines, finite-automata, computer-architecture"
} |
Understanding navigation stack | Question:
Hi,
I have trouble of understanding how navigation stack works. After days of debugging code to work with navigation stack I have concluded that its either problem in tf or I dont really understand how navigation stack work. Specifically I have a couple of questions/problems:
When I change tf from my initial parameters to some reasonable parameters that should work, I get that my laser sensor is out of range, specifically at some negative x or y positions. Does navigation work only with positive x and y?
My simulation is left-oriented, so this might be a problem in getting it to work with navigation. But whether I leave x and y signed positive or negative as I get them from my odometry, or change x to -x to compensate for the left-right orientation, my robot's footprint in rviz always moves in the same direction. Shouldn't that be in the opposite direction? Is navigation right-oriented? What problems might I get with left-right orientations?
My main problem is that when the robot is in, let's say, position (x,y,orient)=(5,5,0) in the /map frame, and I send a goal from rviz in the /map frame to (6,5,0), the robot gets a positive x command velocity from navigation, but when the goal point is reached the robot keeps moving in the +x direction; the output from navigation (cmd_vel) remains the same as before reaching the goal. Same problem with the y direction, both +y and -y.
Some help needed,
thanks.
Originally posted by Jack Sparrow on ROS Answers with karma: 83 on 2011-04-19
Post score: 1
Answer:
To add a little bit of clarification on item #1:
The navigation stack will work with negative values for coordinates in a given frame depending on the configuration of its costmaps. Is sounds like perhaps your robot is not localized properly, causing the laser sensor to be outside of the specified map. If you're using AMCL, have you made sure to set the "initialpose" of the robot?
Originally posted by eitan with karma: 2743 on 2011-04-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Jack Sparrow on 2011-04-28:
Yes, 1. was a problem with localization :) | {
"domain": "robotics.stackexchange",
"id": 5404,
"tags": "navigation"
} |
How can I find the zenith over a uk location on a previous date | Question: Does any one know how I can find out what was the position of the zenith over a uk location, for example, at midday on the 13th of January 2013?
Answer: How accurate does it need to be? The zenith has a declination equal to the latitude. RA = 7h30m is approximately overhead at midnight on Jan 13th, so add 12 hours for midday. So 19h20m, +53 would be about right for a UK location on Jan 13th.
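If you want a number rather than a planetarium, a common low-precision fit for local sidereal time gives the zenith RA directly, since the RA of the zenith equals the LST (the constants below are from the usual Greenwich-sidereal-time approximation, good to a few minutes; longitude 3 deg W is an assumed "UK location"):

```python
from datetime import datetime

def lst_hours(dt_utc, lon_deg):
    """Approximate local sidereal time in hours (low-precision fit)."""
    d = (dt_utc - datetime(2000, 1, 1, 12)).total_seconds() / 86400.0  # days since J2000
    ut = dt_utc.hour + dt_utc.minute / 60.0 + dt_utc.second / 3600.0
    return ((100.46 + 0.985647 * d + lon_deg + 15.0 * ut) % 360.0) / 15.0

# Midday UTC on 2013-01-13 at longitude 3 deg W:
ra_zenith_h = lst_hours(datetime(2013, 1, 13, 12, 0, 0), -3.0)   # about 19.3 h = 19h20m
```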
Planetarium software like Stellarium allows you to set your location and look at the sky on any date/time. | {
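For a rough check without planetarium software, the same estimate can be computed from a low-precision sidereal-time formula. The GMST constants below are the standard low-precision approximation; the latitude/longitude are just an illustrative UK location, and the minute-level TT/UT distinction is ignored:

```python
from datetime import datetime, timezone

def zenith_radec(when_utc, lat_deg, lon_deg):
    """Approximate RA/Dec of the zenith: Dec equals the latitude,
    RA equals the local sidereal time (low-precision GMST formula)."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    d = (when_utc - j2000).total_seconds() / 86400.0   # days since J2000.0
    gmst_deg = (280.46061837 + 360.98564736629 * d) % 360.0
    lst_deg = (gmst_deg + lon_deg) % 360.0             # east longitude positive
    return lst_deg / 15.0, lat_deg                     # (RA in hours, Dec in degrees)

ra_h, dec = zenith_radec(datetime(2013, 1, 13, 12, tzinfo=timezone.utc),
                         lat_deg=53.0, lon_deg=-1.5)   # rough UK location
# ra_h comes out near 19.4 h, dec = 53, close to the estimate above
```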
"domain": "physics.stackexchange",
"id": 32770,
"tags": "astronomy"
} |
How do tracking detectors in particle accelerators create the pretty pictures we see? | Question: I have read several sources about tracking detectors used in particle accelerators like the LHC, but still have not found a detailed source that can be understood by a layperson like myself. I am looking at CERN's article, "How a detector works". I am hoping to learn more details about this part:
Tracking devices reveal the paths of electrically charged particles
as they pass through and interact with suitable substances. Most
tracking devices do not make particle tracks directly visible, but
record tiny electrical signals that particles trigger as they move
through the device. A computer program then reconstructs the recorded
patterns of tracks.
My core question is this: With the uncertainty principle and the observer effects in mind, how do these tracking/tracing devices measure both the position and momentum of particles with the kind of accuracy that they seem to get with the beautiful color pictures you see of particle traces coming out of a collision?
Do they use some kind of charged gas that emits light when a charged particle, such as an electron, passes through them? Can electrons be tracked, or just certain heavier particles?
Answer: First of all, the uncertainty principle and observer effects are completely irrelevant. The tracking devices in modern detectors are large enough to be firmly in the realm of classical physics. Any uncertainty in the detector's wavefunction is negligible compared to the size and energy of the device itself, and the effect of detected particles on the tracker is not more than the loss of a few electrons here and there. Granted, over trillions of collisions, this could become a problem, but trackers are built to resist this kind of damage. They have electrical connections to replenish lost electrons, and they are made of dense materials that will retain their structure even if the occasional atomic nucleus gets transmuted into another one due to radiation.
As for how these tracking devices actually work: there are several different types. Each of them records a particular type of information, and is sensitive to only certain particles. The trackers are arranged around the beamline (the path through the center of the detector, where the incoming particles go) in a way that allows scientists to identify the signature of a particular particle by cross-checking the outputs of different types of trackers. It looks basically like this picture, from Wikipedia:
(that's the ATLAS detector).
A typical detector includes the following types of components, working from the inside out:
A silicon tracker consists of small "panels" of silicon arranged in concentric layers around the beamline. A charged particle produced in a collision will pass through one of these panels and knock a few electrons off the conduction band of the silicon (via the electromagnetic interaction), creating an electrical signal. Each panel is connected to its own dedicated wire, and the other end of that wire runs to the detector's readout circuit (an interface between the detector itself and the CERN computers), so the computer knows exactly which panels were exposed to outgoing particles, and to some extent, how much.
Silicon trackers don't measure the momentum of a particle, but they don't change it very much either. They're more focused on accurately measuring position. Since the individual silicon panels are quite small - maybe a few centimeters on a side - the computer gets access to precise information about the location of the particle as it passed through this tracker. And with six or seven concentric layers of silicon, spaced a few centimeters apart, you can reconstruct the path of the particle pretty well. You can see a visualization of the information received from the silicon tracker in the center of this image from CMS, the red blocks in the middle:
At this stage, it's impossible to know what kind of particle the tracker is seeing, but only charged particles interact with the silicon, so anything that leaves a track has to be charged: probably an electron, muon, or light hadron.
Next up are the calorimeters, which are massive blocks of metal designed to absorb certain particles and measure their energies and momenta. There are usually two kinds: electromagnetic calorimeters, which absorb light particles that interact electromagnetically (electrons, and photons), and hadronic calorimeters, which absorb particles that interact via the strong force (almost everything else).
Calorimeters are shaped into thin "wedges" that are pointed toward the interaction point, as you can kind of see from the first picture on this page (see the yellow layer). Each particle deposits its energy into one wedge of the calorimeter, corresponding to the direction in which it exited the silicon tracker. But the calorimeters don't detect individual particles; they can only identify how much energy was deposited into a particular wedge, and thereby get a distribution of the directions in which energy came out of the collision. The amount of energy deposited can be determined by measuring how hard the cooling system has to work to maintain the calorimeter at a constant temperature.
If you were to look at the data collected by the calorimeters only, you'd get something like the yellow blocks in this image:
Outside the calorimeters, modern detectors include a muon spectrometer, which operates a bit like the silicon tracker but on a much larger scale, using crossed strips of metal instead of silicon. The muon spectrometer records the tracks of muons by checking which strips receive electrical signals as the muons pass through them, and it can determine their momenta because the entire detector is inside a magnetic field, which makes the muons' paths curve. The radius of curvature tells you how much momentum the particle had.
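The momentum measurement mentioned above rests on a simple relation: a particle of charge q moving perpendicular to a field B on a circle of radius r has momentum p = qBr, which in accelerator units reduces to the well-known rule of thumb p[GeV/c] ≈ 0.3 · B[T] · r[m]. A minimal sketch (the 3.8 T value is just an illustrative number of the order of CMS's solenoid field):

```python
def momentum_gev(b_tesla, radius_m, charge_e=1):
    """Transverse momentum in GeV/c from track curvature: p = q * B * r.

    The constant 0.299792458 converts (elementary charges * tesla * metres)
    to GeV/c; numerically it is the speed of light divided by 1e9.
    """
    return 0.299792458 * charge_e * b_tesla * radius_m

# a singly charged track curving with radius 10 m in a 3.8 T field:
p = momentum_gev(3.8, 10.0)  # about 11.4 GeV/c
```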
At this point, everything except neutrinos has been detected, and there's nothing you can do about the neutrinos, so we just let them go.
As I mentioned before, the electrical signals from the components get fed into readout circuits, which convert them into digital signals that are then passed on to the computer. A detector sees thousands of collisions per second and collects an enormous amount of data on each one, so it can't all be stored. Instead, the signals get sent through several levels of triggering systems. The first level simply combines the readings from different parts of the detector and throws out any detections which are "boring" - for example, none of the trackers got any readings, or the readings don't exceed a certain threshold, or whatever the detector team decides is not important. (They go through a long process of analysis to decide what is not important.) After that, anything which hasn't been eliminated is sent to the CERN computer cluster for a more sophisticated analysis. What comes out at the end are sets of numbers giving the signal strength measured by each of the detector components, but only when all those signal strengths together constitute an interesting event.
If you have access to these signal strengths, you can feed them into a computer program which will produce an image of the detector and plot the corresponding signals on top of it. That's where the particle traces you've seen come from: the detector press team (or others who have access to these raw measurements) will pull out the best-looking ones and release computer-generated "pictures" that show the measurements. | {
"domain": "physics.stackexchange",
"id": 22358,
"tags": "heisenberg-uncertainty-principle, large-hadron-collider, particle-detectors"
} |
Identifying an insect looking like a crumble | Question: Recently a little insect crawled over the carpet. At first I did not notice it, as its appearance is quite similar to the coloring of the carpet. Then I understood that it was not a rolling crumble, as the window and the door were closed.
I took it on an ordinary sheet of DIN A4 paper, but I have not the slightest idea what kind of insect it is. It looks as if it has taken a shower of water and then of dust.
What's that insect called?
The second photo gives an idea of the size, as it shows a drinking glass turned upside down.
Note: The place this insect was encountered is in eastern Germany
Answer: Looks like the nymph of a masked hunter.
They camouflage themselves in dust and sand.
"domain": "biology.stackexchange",
"id": 3747,
"tags": "entomology, species-identification"
} |
Peskin and Schroeder QFT problem 3.5 (c) | Question: On Peskin and Schroeder's QFT book, page 75, on problem 3.5 (c) (supersymmetry),
The book first asks us to prove that the following Lagrangian is supersymmetric:
$$\begin{aligned}
&\mathcal{L}=\partial_\mu \phi_i^* \partial^\mu \phi_i+\chi_i^{\dagger} i \bar{\sigma} \cdot \partial \chi_i+F_i^* F_i \\
&+\left(F_i \frac{\partial W[\phi]}{\partial \phi_i}+\frac{i}{2} \frac{\partial^2 W[\phi]}{\partial \phi_i \partial \phi_j} \chi_i^T \sigma^2 \chi_j+\text { c.c. }\right),
\end{aligned} \tag{A}$$
To prove this, we need to show that the variation of this Lagrangian can be arranged into a total divergence, using the transformation rules from problem 3.5 (a):
$$\begin{aligned}
\delta \phi &=-i \epsilon^T \sigma^2 \chi \\
\delta \chi &=\epsilon F+\sigma \cdot \partial \phi \sigma^2 \epsilon^* \\
\delta F &=-i \epsilon^{\dagger} \bar{\sigma} \cdot \partial \chi
\end{aligned} \tag{B}$$
I can show that $\delta\mathcal{L}$ really can be arranged into a total divergence, so this Lagrangian is indeed supersymmetric under (B).
What really troubles me is the book's following argument: "For the simple case $n=1$ and $W=g\phi^3/3$, write out the field equations for $\phi$ and $\chi$ (after elimination of $F$)"
If I use the above condition, I will get
$$\mathcal{L}=\partial_\mu \phi^* \partial^\mu \phi+\chi^{\dagger} i \bar{\sigma}^\mu \partial_\mu \chi+F^* F+\left(g F \phi^2+i g\phi \chi^T \sigma^2 \chi+\text { c.c. }\right) \tag{C}.$$
But now why can we get the E.O.M. of $F$, and then of $\phi$ and $\chi$? Previously, we knew that we could get the E.O.M. by varying the Lagrangian, but that required $\delta F$, $\delta \phi$ and $\delta \chi$ to be independent.
But now the situation is different: these quantities are not independent; they are connected via (B).
Actually, in my understanding, (B) already plays the role of the E.O.M., since it is what lets the variation reduce to a total divergence, so I am really lost about the book's logic.
Answer: Frankly, I am lost about your logic trail, not the book's. You seem to somehow connect the (super)symmetry variations (B) to the dynamical variations yielding the EOM, and predicate one on the other? Nothing of the sort is even implied in that book.
(A) is invariant under (B), as you confirmed. Its equations of motion need not take cognizance of (B), and hold whether you've "noticed" the symmetry (B) or not.
Your next step asks you to particularize (A) to (C), as I've corrected it. It, too, has the supersymmetry (B), but you are not asked to consider, or even recognize, that, up front. You are asked to find its equations of motion, and observe how "easy" elimination of F is in them, reducing three such to two, slightly messier ones. At no point have you connected to (B); it is a merely "optional" observation, basically at a "right angle" to the EOM. (Later on, you would use these EOM to confirm the on-shell conservation of the supercharge, but this is not apparent at the step you are working on.) | {
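For concreteness, here is a sketch of the "elimination of $F$" step, with conventions as in (C) above: since $F$ enters (C) without derivatives, its Euler-Lagrange equation is purely algebraic,

$$\frac{\partial \mathcal{L}}{\partial F^*}=F+g^* \phi^{* 2}=0 \quad \Longrightarrow \quad F=-g^* \phi^{* 2}, \qquad F^*=-g\, \phi^2 .$$

Substituting back, $F^* F+\left(g F \phi^2+\text {c.c.}\right)=|g|^2|\phi|^4-2|g|^2|\phi|^4=-|g|^2|\phi|^4$, so the remaining Lagrangian contains only $\phi$ and $\chi$, with the scalar potential $V=|g|^2|\phi|^4$ and the Yukawa-type term $i g \phi \chi^T \sigma^2 \chi+\text{c.c.}$ One then varies $\phi$ and $\chi$ independently in this reduced Lagrangian to obtain their field equations.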
"domain": "physics.stackexchange",
"id": 90811,
"tags": "quantum-field-theory, lagrangian-formalism, supersymmetry"
} |
Help understanding the proof of the definition of Big-Theta based on limits | Question: I was reading Kleinberg and Tardos's book (specifically, this one) and, on page 38, these authors define the Big-Theta notation the following way:
Let $f$ and $g$ be two functions such that $\lim_{n\to\infty}f(n)/g(n)$ exists and is equal to some number $c>0$. Then $f=\Theta(g)$.
Then, they provide a proof that connects the classic defition based on sets of functions to this definition based on the limit of the ratio of two functions. This proof goes as:
We will use the fact that the limit exists and is positive to show that $f=O(g)$ and $f=\Omega(g)$, as required by the definition of $\Theta(\cdot)$. Since $\lim_{n\to\infty}f(n)/g(n)=c>0$, it follows from the definition of a limit that there is some $n_0$ beyond which the ratio is always between $c/2$ and $2c$. Thus, $f(n)\leq 2cg(n)$ for all $n\geq n_0$, which implies that $f=O(g)$; and $f(n)\geq \frac{c}{2}g(n)$ for all $n\geq n_0$, which implies $f=\Omega(g)$. $\blacksquare$
However, I'm struggling to follow this proof; in particular, I fail to see how or why the authors chose the constants $c/2$ and $2c$ "from the definition of the limit" (as they say). I consulted Leithold's book on calculus (this one) and, on page 250, I found the following definition of limits at infinity (translation is mine):
Let $f$ be a function defined for all numbers within some open interval $(a,\infty)$. The limit of $f(x)$ when $x$ grows indefinitely is $L$, which is denoted as $\lim_{x\to\infty}{f(x)}=L$, if, for any $\varepsilon >0$ (no matter how small this number is), there exists a number $n>0$ such that: if $x>n$, then $|f(x)-L|<\varepsilon$.
Applying this definition to Kleinberg and Tardos's proof, I understand that $\lim_{n\to\infty}f(n)/g(n)=c$ implies that, no matter what $\varepsilon$ I choose, there's always an $n_0$ beyond which $|f(n)/g(n)-c|<\varepsilon$ (in other words, the ratio $f(n)/g(n)$ differs from $c$ by less than any given margin $\varepsilon$). But then, I fail to see how this fact implies that "the ratio is always between $c/2$ and $2c$", as Kleinberg and Tardos claimed.
I think that the definition of infinite limits allows us to choose (at least in this particular case) any $\varepsilon$ that we find convenient, since $n_0$ will always exist no matter what we choose. Following this idea, I would suspect that Kleinberg and Tardos simply chose a value for $\varepsilon$ for which $|f(n)/g(n)-c|<\varepsilon$ would imply that $c/2\leq f(n)/g(n)\leq 2c$ is always true. However, I don't think such $\varepsilon$ can be chosen under these circumstances, since the distance from $c/2$ to $c$ is not the same as the distance from $c$ to $2c$.
Can somebody please help me understand what step I'm missing from this proof?
Answer: By the definition of the limit, for any $\epsilon$ we can always establish
$$c-\epsilon<\frac{f(n)}{g(n)}<c+\epsilon$$
by choosing $n$ sufficiently large.
In fact, the proof stops here, as this reads
$$c_0 g(n)\le f(n)\le c_1 g(n).$$
The author preferred to work with $\epsilon=\dfrac c2$ on the left and $\epsilon=c$ on the right (this is more "constructive"), but that does not make a big difference.
Technical note: $c_0$ must be positive, so we must choose $\epsilon<c$ on the left (hence the $\frac12$). There is no such constraint on the right, but the author might as well have kept the same $\epsilon$ for clarity.
The reciprocal is not true. As a counter-example, consider
$$f(n)=(2+\sin n)n.$$
We do have
$$n\le (2+\sin n)n\le 3n\implies f(n)=\Theta(n)$$ but the limit does not exist. | {
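A quick numerical check of this counter-example (plain Python; the bounds 1 and 3 hold exactly because $\sin n \in [-1, 1]$):

```python
import math

# Numerical check of the counter-example f(n) = (2 + sin n) * n:
# the ratio f(n)/n stays pinned between 1 and 3 (so f = Theta(n)),
# yet it keeps oscillating and never converges to a limit.
def f(n):
    return (2 + math.sin(n)) * n

ratios = [f(n) / n for n in range(1, 10001)]
assert all(1 <= r <= 3 for r in ratios)   # n <= f(n) <= 3n for every n
late = ratios[9000:]
assert max(late) - min(late) > 1.5        # still swinging widely at large n
```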
"domain": "cs.stackexchange",
"id": 20720,
"tags": "big-o-notation"
} |
Accounting for computation time in Counterfactual Computers | Question: After researching some stuff about Turing Machines and Automaton Theory I stumbled upon the concept of "counterfactual computers". Having little experience with quantum physics (and usually keeping out of it for exactly that reason) I tried doing my own research and - as one would expect - failed (The tensor products got me *wink).
The thing I was wondering is this: after at least somewhat understanding the Elitzur-Vaidman bomb experiment, there are supposed to be computers using this effect to produce computation results without ever being run.
At first glance that seems as sensible as can be; but after thinking about it for a while a slight issue came to mind: In most publications about that topic, it is stated that the computation begins on arrival of the photon (or any other particle), which is either emitted or not emitted depending on the computation's result.
However, the part of the wave function propagating along the interferometer's other path goes to the detector directly and does not await the computation's solution. How is it possible, then, for the two parts to still meet each other at the second beam splitter, even in a negative-result case where "normal" interference occurs? The one side of the wave function should have long passed? Or rather cannot have passed, because then not everything would have had a chance to reach the detector. So my current belief is that the wave function's propagation is somehow delayed.
My first thought was along the lines of time dilation; my second thought was something about the computation result being intrinsic to the probability as well. Both seem far-fetched, and the second explanation should at least break down if one introduces some fixed waiting time for the output of the result.
Needless to say, I am somewhat inexperienced in the subject and come mostly from an Information Theory perspective, so please be kind to my mistakes. Greetings and have a good day.
Answer: Bowden's original presentation dealt with this by using a Mach-Zehnder interferometer and requiring the computation time to be a multiple of the inverse frequency of the light. Specifically, his "computation" was light navigating a maze, and the maze was made of square cells whose side length was the wavelength of the light, so regardless of the unknown length of the solution path, the escaping light would have a known phase. That means, though, that you have to wait for the worst-case computation time (maximum path length) before doing your measurements, so this trick doesn't save any time: in fact it costs time, both because you can't bail out early and because to have a good chance of a counterfactual outcome, you have to repeat it many times.
I think this extends to the general case: not doing the computation can't use less of any resource—such as time, or energy (work), or wear and tear on the components of the quantum computer—than doing it, so there is never any advantage to it.
The general setup is that you have a quantum circuit that computes the answer to some decision problem, and is equivalent to a CNOT gate if the answer is yes, or a no-op if the answer is no. You initialize the control bit to $|0\rangle$ and then repeatedly rotate it one $n$th of the way toward $|1\rangle$, do the decision-problem computation, then measure the output bit to see if it's flipped.
If the circuit is equivalent to a no-op, then the control and output bits are uncorrelated, so measuring the output bit does nothing, and after $n$ rotations the control bit is guaranteed to be $|1\rangle$.
If the circuit is equivalent to a CNOT, then measuring an unflipped output bit means the control bit was $0$ (you don't know that, but the universe does), so the system collapses back to its initial state, and after $n$ repetitions where you measure an unflipped output, the control bit's final measured value is its initial value of $0$, meaning the answer must be yes, but the computation never happened.
If you ever measure a flipped output bit, then the computation did happen (the "bomb explodes" case), but the chance of that can be made arbitrarily small by choosing $n$ large.
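The repeated-measurement protocol above is easy to simulate classically with a single real-amplitude qubit. This is a sketch (the function name and structure are mine, not from any standard library); the "yes" branch models the projective measurement of an unflipped output bit:

```python
import math

def counterfactual_run(answer_is_yes, n):
    """Simulate n rounds of (small rotation + output measurement).

    Returns (final_control_bit, probability) for the history in which
    the output bit is never observed flipped (the "bomb never explodes").
    """
    theta = math.pi / (2 * n)   # each round rotates the control 1/n of the way to |1>
    amp0, amp1 = 1.0, 0.0       # control-qubit amplitudes (real is enough here)
    prob_no_flip = 1.0          # probability of never seeing a flipped output
    for _ in range(n):
        amp0, amp1 = (amp0 * math.cos(theta) - amp1 * math.sin(theta),
                      amp0 * math.sin(theta) + amp1 * math.cos(theta))
        if answer_is_yes:
            # circuit acts as CNOT: observing "no flip" projects the control onto |0>
            prob_no_flip *= amp0 ** 2
            amp0, amp1 = 1.0, 0.0
        # if the answer is no, the circuit is a no-op and the measurement reveals nothing
    return (1 if amp1 ** 2 > 0.5 else 0), prob_no_flip

# answer "no": the rotations accumulate and the control ends in |1>
assert counterfactual_run(False, 50)[0] == 1
# answer "yes": control ends in |0>, and the no-flip probability tends to 1 as n grows
bit, p_survive = counterfactual_run(True, 50)
assert bit == 0 and p_survive > 0.9
```

This exhibits the quantum Zeno behaviour described above: for large `n`, the "yes" answer is obtained with high probability from histories in which the computation never visibly ran.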
Note that if the answer is no, the computation must be exactly equivalent to a no-op: it must not use any more time, power, etc. if the control bit is $1$ than if it's $0$, since that would leak information, causing a collapse, and the final state of the control bit would be $|0\rangle$ with high probability.
If the answer is yes, there is no problem with the circuit using more resources when the control bit is $1$, since that only leaks information that we learn anyway when we measure the output bit. But to know that you can safely do that, you have to know that the answer is yes, which means you've already finished the whole computation and there is no longer anything to spend resources on.
So the counterfactual computation can't be made cheaper. In fact, it's more than $n$ times more expensive. | {
"domain": "physics.stackexchange",
"id": 91635,
"tags": "quantum-mechanics, quantum-information, quantum-computer, interactions, information"
} |
Somewhat esoteric if statement in a paginated feed | Question: I am working on the home page of a website that will have a paginated feed (much like a blog's home page).
One of the requirements is that when a user navigates to a non-existent page, he or she will be redirected to the last available page. For example, when the user navigates to a non-existent page such as page number 500, they will be redirected to the highest (last available) page number which might be something sensible like 10.
The code to fulfill this requirement is easy enough; it looks like this:
private const int PageSize = 5;
public ActionResult Index(int pageNumber = 1)
{
IEnumerable<Post> posts = repository.All();
PagedList<Post> page = posts.Paginate(pageNumber, PageSize);
if (page.Count == 0 && pageNumber != 1)
{
return RedirectToAction("Index", new { pageNumber = page.TotalPageCount });
}
return View(page);
}
My problem with this code is the following expression:
page.Count == 0 && pageNumber != 1
Personally I find this code hard to reason about (and I am the person who wrote it!). How can I make the meaning of this expression clearer? Is there an alternative, more readable way of implementing this logic perhaps?
The condition begins by evaluating whether the requested page has any posts - if the contents of the page are empty, the page effectively does not exist. The condition then checks whether the user is attempting to access the home page (the first page) or not. This check is important because if the database is empty, the user still needs to be able to access the first page - so that the view can display a nice error message.
Answer: You're checking that "page does not contain any posts and page number is not one".
Reversing the condition is often the easiest way to clarify it. The inverse condition would be "page contains at least one post, or page number is one".
I would make it look like this:
if (pagePosts.Any() || pageNumber == 1)
{
return View(pagePosts);
}
else
{
return RedirectToAction("Index", new { pageNumber = pagePosts.TotalPageCount });
}
Notice page renamed to pagePosts, also for clarity: it's not a page, it's the posts on the page that was requested. pagePosts is less ambiguous I find. | {
"domain": "codereview.stackexchange",
"id": 9227,
"tags": "c#, .net, asp.net-mvc"
} |
Axis direction Clav DOF | Question:
Could the developers show the Axis direction of this DOF?
Best!
Originally posted by mariof4 on Gazebo Answers with karma: 1 on 2013-05-09
Post score: 0
Original comments
Comment by scpeters on 2013-05-10:
I assume you're talking about Atlas in drcsim. l_clav and r_clav are links, that are connected by joints named [lr]_arm_usy and [lr]_arm_shx. Which joint are your curious about?
Answer:
Use the GUI to turn on joint visualization (View -> Joints). The axis of rotation for a joint is highlighted with a circle.
Originally posted by nkoenig with karma: 7676 on 2013-07-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3278,
"tags": "gazebo"
} |
roswtf communication error bug in Ubuntu 11.04 | Question:
Hi there,
I am using ros-diamondback on Ubuntu 11.04-64bit.
I am launching the following launch file:
<launch>
<node pkg="tf" type="static_transform_publisher" name="blub" args="0 0 0 0 0 0 blub blub2 100" />
</launch>
In another terminal I run roswtf
Loaded plugin tf.tfwtf
No package or stack in context
================================================================================
Static checks summary:
No errors or warnings
================================================================================
Beginning tests of your ROS graph. These may take awhile...
analyzing graph...
... done analyzing graph
running graph rules...
... done running graph rules
running tf checks, this will take a second...
... tf checks complete
Online checks summary:
Found 2 error(s).
ERROR Communication with [/blub] raised an error:
ERROR Communication with [/rosout] raised an error:
I tried to set ROS_IP and ROS_HOSTNAME by hand, but still the same error. The funny thing is that there is no error. Everything works as expected. I can listen to the tf topic and receive data.
Can anybody tell me, why roswtf thinks that there is an error?
Thx
Originally posted by kluessi on ROS Answers with karma: 73 on 2011-08-01
Post score: 0
Answer:
No activity in > 1 month, closing
Originally posted by kwc with karma: 12244 on 2011-09-02
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 6312,
"tags": "ros, roswtf, 64bit"
} |
A Start button handler | Question: I was told my code contains a lot of force unwrapping. I thought it's okay to do that if I am sure that the value being operated on won't be nil:
private var x: Int?
private var y: Int?
@IBAction func startButtonPressed(_ sender: UIButton) {
guard let numberOfRooms = selectedRooms.text, !numberOfRooms.isEmpty else {
return selectedRooms.placeholder = "type it, dude"
} //if user typed something
let rooms = Int(numberOfRooms)
x = Int(ceil(sqrt(Double(rooms!))))
y = x //grab some values from user input
maze = MazeGenerator(x!, y!) //generate a maze
hp = 2 * (x! * y!) //get hp value depending on user input
hpLabel.text = "hp: \(hp)"
currentX = getRandomX(x!) //get random value in 0...x
currentY = getRandomY(y!)
currentCell = maze?.maze[currentX!][currentY!] //game starts in a random part of the maze
for item in items.indices { //create and show some imageViews
items[item].location = (getRandomX(x!), getRandomX(y!))
createItems(imageName: items[item].imageName)
if items[item].location == (currentX!, currentY!) {
reveal(index: item)
} else {
hide(index: item)
}
}
refreshButtons() //refresh UI
maze!.display() //print maze scheme in debug console
}
Does it look fine? If not, what should be done?
Answer: Code review:
1. Use guard statements to transform optional values into non-optional values.
private var x: Int?
private var y: Int?
@IBAction func startButtonPressed(_ sender: UIButton) {
guard let numberOfRooms = selectedRooms.text, !numberOfRooms.isEmpty else {
return selectedRooms.placeholder = "type it, dude"
} //if user typed something
let rooms = Int(numberOfRooms)
x = Int(ceil(sqrt(Double(rooms!))))
y = x //grab some values from user input
guard let x = x, let y = y else {
return
}
guard let maze = MazeGenerator(x, y) else { //generate a maze
return
}
hp = 2 * (x * y) //get hp value depending on user input
hpLabel.text = "hp: \(hp)"
guard let currentX = getRandomX(x), let currentY = getRandomY(y) else {
return
}
currentCell = maze.maze[currentX][currentY] //game starts in a random part of the maze
for item in items.indices { //create and show some imageViews
items[item].location = (getRandomX(x), getRandomX(y))
createItems(imageName: items[item].imageName)
if items[item].location == (currentX, currentY) {
reveal(index: item)
} else {
hide(index: item)
}
}
refreshButtons() //refresh UI
maze.display() //print maze scheme in debug console
}
2. If possible, define properties as non-optional
private var x: Int = 0
private var y: Int = 0
@IBAction func startButtonPressed(_ sender: UIButton) {
guard let numberOfRooms = selectedRooms.text, !numberOfRooms.isEmpty else {
return selectedRooms.placeholder = "type it, dude"
} //if user typed something
let rooms = Int(numberOfRooms)
x = Int(ceil(sqrt(Double(rooms!))))
y = x //grab some values from user input
guard let maze = MazeGenerator(x, y) else { //generate a maze
return
}
hp = 2 * (x * y) //get hp value depending on user input
hpLabel.text = "hp: \(hp)"
guard let currentX = getRandomX(x), let currentY = getRandomY(y) else {
return
}
currentCell = maze.maze[currentX][currentY] //game starts in a random part of the maze
for item in items.indices { //create and show some imageViews
items[item].location = (getRandomX(x), getRandomX(y))
createItems(imageName: items[item].imageName)
if items[item].location == (currentX, currentY) {
reveal(index: item)
} else {
hide(index: item)
}
}
refreshButtons() //refresh UI
maze.display() //print maze scheme in debug console
}
3. @IBAction methods should call other methods instead of executing logic code directly. Example: one would trigger a game-start method and an analytics event once a button is tapped.
private var x: Int?
private var y: Int?
@IBAction func startButtonPressed(_ sender: UIButton) {
gameStartButtonPressed()
Analytics.logAction("startButtonPressed")
}
private func gameStartButtonPressed() {
guard let numberOfRooms = selectedRooms.text, !numberOfRooms.isEmpty else {
return selectedRooms.placeholder = "type it, dude"
} //if user typed something
let rooms = Int(numberOfRooms)
x = Int(ceil(sqrt(Double(rooms!))))
y = x //grab some values from user input
guard let x = x, let y = y else {
return
}
guard let maze = MazeGenerator(x, y) else { //generate a maze
return
}
hp = 2 * (x * y) //get hp value depending on user input
hpLabel.text = "hp: \(hp)"
guard let currentX = getRandomX(x), let currentY = getRandomY(y) else {
return
}
currentCell = maze.maze[currentX][currentY] //game starts in a random part of the maze
for item in items.indices { //create and show some imageViews
items[item].location = (getRandomX(x), getRandomX(y))
createItems(imageName: items[item].imageName)
if items[item].location == (currentX, currentY) {
reveal(index: item)
} else {
hide(index: item)
}
}
refreshButtons() //refresh UI
maze.display() //print maze scheme in debug console
}
4. Separate the model from the controller
struct Model {
var x: Int
var y: Int
}
private var position: Model = .init(x: 0, y: 0)
// instead of:
private var x: Int?
private var y: Int?
Additional insights:
Force unwrapping is useful in the following cases (this list is not exhaustive):
failable initializers
functions or methods returning an optional type instead of throwing an error
IBOutlet attribute
Example of a failable initializers:
The URL type has a failable initializer init?(string:) which returns an optional URL object. In the case of the string "https://www.google.pl" one can be sure that the string results in a correct URL object and not nil. In that case force unwrapping is the way to express confidence that, instead of obtaining an optional URL, one expects a URL object.
//failable initializer creates an optional URL object
let url: URL? = URL(string: "https://www.google.pl")
//force unwrapping leads to a non-optional URL object
let url: URL = URL(string: "https://www.google.pl")! | {
"domain": "codereview.stackexchange",
"id": 39329,
"tags": "swift, optional"
} |
Resonant Frequency & Opera Singers | Question: Would it be possible, within the mathematics of string theory, to determine the frequency of each string's vibrations? And in doing so, together with the technique opera singers use to shatter glass with their voice, would it be possible to destroy/agitate a string accordingly?
Answer: According to the Wikipedia article on String theory,
At sufficiently high energies, the string-like nature of particles would become obvious. There should be heavier copies of all particles, corresponding to higher vibrational harmonics of the string. It is not clear how high these energies are. In most conventional string models, they would be close to the Planck energy, which is $10^{14}$ times higher than the energies accessible in the newest particle accelerator, the LHC, making this prediction impossible to test with any particle accelerator in the near future.
The maximum energy of the LHC is about 7 TeV, so by that ratio the string vibrations would be around $7 \times 10^{14}$ TeV, i.e. roughly 700 YeV (yotta-electron-volts).
We, as a race, cannot produce the energy required to access these energies even mechanically, so it would be quite impossible for a single human to access (and subsequently destroy) a string with their voice alone. | {
"domain": "physics.stackexchange",
"id": 15036,
"tags": "string-theory, waves, acoustics, resonance, string"
} |
Simulation with a simple car model in Gazebo | Question:
Hi,
I'm currently planning to set up a Gazebo simulation for a default Gazebo simple car model. I haven't had any luck understanding how to go about making the robot move via external commands (keyboard presses, etc.).
The documentation seems to gloss over this, or at least I can't find where it's described. How would I go about making a robot move in a Gazebo simulation?
Thanks
Originally posted by mangoya on ROS Answers with karma: 53 on 2011-07-04
Post score: 1
Answer:
If I am not wrong, you need to implement a gazebo plugin that basically converts the velocity commands of the keyboard teleop to gazebo joint torques or speeds. For that purpose, I would recommend to have a look at:
http://www.ros.org/wiki/gazebo_plugins
You will find some gazebo plugins available. I think that GazeboRosDiffdrive is especially interesting for you. It implements a gazebo plugin for a differential-drive robot. You would have to do something similar, but for a car-like steering model.
Finally, to add Gazebo extensions to urdf files (where I guess you have modeled your robot), have a look at:
http://www.ros.org/wiki/urdf/Tutorials/UnderstandingPR2URDFGazeboExtension
I hope this helps.
Originally posted by gazkune with karma: 219 on 2011-07-04
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by mangoya on 2011-07-06:
Hi @gazkune,
first of all, thanks for your rapid answer. I've been looking in gazebo_plugins and I've found gazebo_ros_diffdrive.cpp, but I don't understand it. Where could I find more documentation or examples about it? I'm a novice.
Thanks. | {
"domain": "robotics.stackexchange",
"id": 6028,
"tags": "gazebo, simulation"
} |
Multiple ApplyWrenchDialog, storage data structure | Question:
Hi,
Can you please tell me what is happening when ApplyWrenchDialog is clicked for multiple links.
In which data structure are the names of all those links stored? (Here, by links I mean the links for which the apply force/torque dialog is open.)
Relevant source code : gazebo/gui/ModelRightMenu.cc : https://bitbucket.org/osrf/gazebo/src/d3b06088be22a15a25025a952414bffb8ff6aa2b/gazebo/gui/ModelRightMenu.cc?at=default&fileviewer=file-view-default
Originally posted by meha on Gazebo Answers with karma: 13 on 2016-03-18
Post score: 0
Answer:
All the link names get stored in gazebo::gui::ApplyWrenchDialog::linksComboBox. See the implementation for details.
Originally posted by chapulina with karma: 7504 on 2016-03-21
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 3890,
"tags": "gazebo"
} |
Attempting a Strategy design pattern in JS | Question: I'm going through the Head First Design Patterns book and I want to check whether I'm understanding some aspects of the first chapter. Does the code below program correctly to interfaces, encapsulate changeable behavior, and employ composition in a reasonable manner?
// trying to enforce some design pattern habits
// create a duck, that prints a line to the page as text
class DuckAbilities {
constructor() {
this.flying = new FlyBehavior()
}
addToPage() {
this.statement = document.createElement("p")
this.statement.innerHTML=this.flying.fly()
document.body.append(this.statement)
}
}
//interface for behaviors
class FlyBehavior {
fly() {
null
}
}
class DoesFly{
fly() {
return "I'm flying"
}
}
class DoesNotFly {
fly() {
return "not flying"
}
}
class Mallard {
constructor() {
this.abilities = new DuckAbilities()
this.abilities.flying = new DoesFly()
}
}
class Rubber{
constructor() {
this.abilities = new DuckAbilities()
this.abilities.flying = new DoesNotFly()
}
}
window.onload = ()=> {
let mallard = new Mallard()
mallard.abilities.addToPage()
let rubber = new Rubber()
rubber.abilities.addToPage()
}
Answer: There are some issues with your class, but I don't think that they are so specific to implementing a strategy pattern.
addToPage() {
this.statement = document.createElement("p")
this.statement.innerHTML=this.flying.fly()
document.body.append(this.statement)
}
This assumes there is a global document variable, and that statement is part of the abilities object. If such a method is present, I would expect document to be a parameter of the method and statement to be a local variable. But even that doesn't seem right because you would be mixing representation (a paragraph "p") and a data class.
//interface for behaviors
class FlyBehavior {
fly() {
null
}
}
Now that doesn't seem right. If you have JS ducktyping then you don't need this class. Furthermore, specifying null is just asking for null pointer exceptions at a later stage.
this.abilities = new DuckAbilities()
OK, so now we have a DuckAbilities object but without a valid state, just null, which we then adjust in the next call. There seem to be two ways of resolving this issue:
having the flying behavior as a parameter to the DuckAbilities constructor;
removing the DuckAbilities altogether and just assigning the various flybehaviors to a field.
So since we're using classes anyway, let's implement it using those.
I've created an "abstract" class Duck because we require inheritance there. I don't like to create an interface for the strategies because JavaScript's duck-typing should be sufficient.
'use strict'
class Duck { // this is the context
constructor(flyAbility) {
// ES2015 only, avoid instantiation of Duck directly
// if (new.target === Abstract) {
// throw new TypeError("Cannot construct Abstract instances directly");
// }
this.flyAbility = flyAbility;
}
// this is the operation, returning the flyBehavior as a string
showFlyBehavior() {
return "This duck " + this.flyAbility.flyBehavior();
}
}
// Strategy interface is missing due to JS ducktyping
// Strategy #1
class DoesFly {
// with algorithm #1
flyBehavior() {
return "flies";
}
}
// Strategy #2
class DoesNotFly {
// with algorithm #2
flyBehavior() {
return "doesn't fly";
}
}
// Context #1
class Mallard extends Duck {
constructor() {
super(new DoesFly());
}
}
// Context #2
class Rubber extends Duck {
constructor() {
super(new DoesNotFly());
}
}
let duck = new Mallard();
console.log(duck.showFlyBehavior());
duck = new Rubber();
console.log(duck.showFlyBehavior());
Sorry about using NodeJS, but in principle only console.log is NodeJS specific ... I hope. | {
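One benefit of this structure worth demonstrating is that the strategy can be swapped at runtime without touching the context class. A minimal standalone sketch (re-declaring trimmed-down versions of the classes from the example above so it runs on its own; reassigning the flyAbility field directly is just one way to do the swap):

```javascript
'use strict'

// Minimal re-declaration of the refactored classes, for illustration only.
class Duck {
  constructor(flyAbility) {
    this.flyAbility = flyAbility;
  }
  showFlyBehavior() {
    return "This duck " + this.flyAbility.flyBehavior();
  }
}

class DoesFly {
  flyBehavior() { return "flies"; }
}

class DoesNotFly {
  flyBehavior() { return "doesn't fly"; }
}

const duck = new Duck(new DoesFly());
console.log(duck.showFlyBehavior());   // "This duck flies"

// Swap the strategy at runtime (e.g. the duck gets injured):
duck.flyAbility = new DoesNotFly();
console.log(duck.showFlyBehavior());   // "This duck doesn't fly"
```

The point is that Duck never needs to change when a new behavior is added; only a new strategy class is introduced.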
"domain": "codereview.stackexchange",
"id": 37414,
"tags": "javascript, design-patterns"
} |
Calculate torque of a bolt based on grade, size, and lubricant | Question: To calculate torque I need to look up a value from a table which I've hardcoded based on three conditions: grade, size, and thread. Then I need to multiply it by coefficients associated with the selected lubricants.
I've written the code to be a web-app, but it will also just run as an html file sitting on a company drive, not connected to the internet. Because it's offline, I needed to hardcode the values instead of looking them up from a data file, because of same-origin policy and JavaScript security settings. It's not ideal style, but I couldn't think of an alternative.
It works, but I want to make sure it's really robust and easy for anyone to maintain after my internship is done. It's one of the first programs I've written in this language and I don't get easily offended, so please be thorough! I'm not sure what's considered good style or efficient. Also, this needs to be run on IE8 so some things I've done have reflected the limitations of that browser.
JSFiddle
JavaScript
var size;
var grade;
var thread;
var value;
document.getElementById("message").innerHTML = "Select grade.";
function fetch_grade_sizes() {
// get grade from select menu
var grade_menu = document.getElementById("grade_menu");
grade = grade_menu.options[grade_menu.selectedIndex].value;
// get thread from radio buttons
var radios = document.getElementsByName("radio");
for (var i = 0, length = radios.length; i < length; i++) {
if (radios[i].checked) {
thread = radios[i].value;
break;
}
}
// tell user to select any missing values
if( grade === undefined || grade == ""){
document.getElementById("message").innerHTML = "Select grade.";
return;
}
else if ( thread === undefined || thread === null){
document.getElementById("message").innerHTML = "Select thread.";
return;
}
else
document.getElementById("message").innerHTML = "Select size.";
// get the sizes corresponding to that grade and thread
var size_keys = [];
for (var key in torque_table[grade][thread]) {
size_keys.push(key);
}
    // populate the drop down select menu with those sizes
create_menu("size_menu_location", "size_menu", size_keys);
}
function calculate(form_data) {
document.getElementById("message").innerHTML = "Select lubricant.";
// get selected size
var size_menu = document.getElementById("size_menu");
size = size_menu.options[size_menu.selectedIndex].value;
// if anything hasn't been selected yet, prompt the user
if( grade === undefined || grade == ""){
document.getElementById("message").innerHTML = "Select grade.";
return;
}
else if ( thread === undefined || thread === null){
document.getElementById("message").innerHTML = "Select thread.";
return;
}
else if (size === undefined || size == "" ){
document.getElementById("message").innerHTML = "Select size.";
return;
}
// look up the torque
var torque = torque_table[grade][thread][size];
// multiply each checked value together and
// create a string of the values separated by multiplication signs
var check_number = 1;
var check_string = "";
for (i = 0; i < form_data.check_menu.length; i++) {
if (form_data.check_menu[i].checked) {
check_string += " × " + form_data.check_menu[i].value;
check_number *= form_data.check_menu[i].value;
// clear the command to select something after a lubricant is selected
document.getElementById("message").innerHTML = "<br/>";
}
}
// round the resulting value to one decimal place
value = Math.round((torque * check_number) * Math.pow(10, 1)) / Math.pow(10, 1);
var value_in = Math.round((torque * check_number * 12) * Math.pow(10, 1)) / Math.pow(10, 1);
var value_SI = Math.round((torque * check_number * 1.35581795) * Math.pow(10, 1)) / Math.pow(10, 1);
// display the multiplication string and the resulting value
document.getElementById("number").innerHTML = value + " ft-lbs ";
document.getElementById("string").innerHTML = torque + " ft-lbs " + check_string + " = ";
document.getElementById("additional_info").innerHTML = "<br>Other units: " + value_in + " in-lbs, " + value_SI + " Nm";
}
function create_menu(div_id, sel_id, menu_options) {
var myDiv = document.getElementById(div_id);
// wipe what's already in the div we're putting this menu
myDiv.innerHTML = "";
// create select menu and size
var selectList = document.createElement("select");
selectList.setAttribute("id", sel_id);
selectList.setAttribute("onclick", "calculate(this.form);");
myDiv.appendChild(selectList);
// make an unselectable placeholder
var option = document.createElement("option");
//option.setAttribute("disabled","true");
option.innerHTML = "Select size";
option.setAttribute("value", "");
selectList.appendChild(option);
// populate the menu with the options from the array
for (var i = 0; i < menu_options.length; i++) {
var option = document.createElement("option");
option.setAttribute("value", menu_options[i]);
option.innerHTML = menu_options[i];
selectList.appendChild(option);
}
};
grade_1 = {
'coarse': {
'1/4"': 3.28,
'5/16"': 6.75,
'3/8"': 12.0,
'7/16"': 19.2,
'1/2"': 29.3,
'9/16"': 42.2,
'5/8"': 58.3,
'3/4"': 103,
'7/8"': 167,
'1"': 250,
'1-1/8"': 354,
'1-1/4"': 500,
'1-3/8"': 655,
'1-1/2"': 869
},
'fine': {
'1/4"': 3.75,
'5/16"': 7.49,
'3/8"': 13.6,
'7/16"': 21.4,
'1/2"': 33.0,
'9/16"': 47.1,
'5/8"': 66.0,
'3/4"': 115,
'7/8"': 184,
'1"': 273,
'1-1/8"': 397,
'1-1/4"': 553,
'1-3/8"': 746,
'1-1/2"': 978
}
};
//grade_5, grade_8 = .... same things
var torque_table = {
"grade_1": grade_1,
"grade_5": grade_5,
"grade_8": grade_8,
"stainless": stainless,
"monel": monel,
"ASTM_A325": ASTM_A325
};
HTML
<font size="5"><b>Torque Calculator
</b></font>
<FORM NAME="calculation">
<br/>
<select id="grade_menu" onchange="fetch_grade_sizes();">
<option value="" disabled selected>Select grade</option>
<option value="grade_1">Grade 1</option>
<option value="grade_5">Grade 5</option>
<option value="grade_8">Grade 8</option>
<option value="ASTM_A325">ASTM A325</option>
<option value="stainless">316 Stainless</option>
<option value="monel">Monel</option>
</select>
<INPUT TYPE="radio" name="radio" id="radio_menu" onclick="fetch_grade_sizes();" VALUE="coarse" />coarse
<INPUT TYPE="radio" name="radio" id="radio_menu" onclick="fetch_grade_sizes();" VALUE="fine" />fine
<br/>
<div id="size_menu_location">
<select id="size_menu">
<option value="" disabled selected>Select size</option>
</select>
</div>
<br><b>Check lubricant used:</b> <br>
<br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="1.00" />Steel (clean, dry, non-plated)<br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="1.00" />Black oxide<br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.90" />Silver grade anti-seize<br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.80" />WD-40 <br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.75" />Grease with nickel and graphite flakes <br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.70" />Motor oil with cadmium plating <br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.65" />Nickel based anti-seize <br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.45" />Never-seize<br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.80" />243<br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.85" />246<br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="0.95" />248<br/>
<INPUT TYPE="Checkbox" id="check_menu" onclick="calculate(this.form);" VALUE="1.00" />277
</FORM>
<br/>
<div id="message"></div>
<table>
<td>
<div id="string"></div>
</td>
<td>
<div id="number"></div>
</td>
</table>
<div id="additional_info"></div>
<font size='1'><br>About:<br>
Calculated by multiplying standard dry torque with a stress of 70% minimum <br>
tensile strength or 75% the proof strength by a coefficient representing <br>
the effect of anti-seize compounds, lubricants, platings, coatings, etc. <br>
For steel, this coefficient is equal to the torque coefficient (K) times 5.0 <br>
<br>Values are from <i>Pocket Ref</i> by Thomas J. Glover, 4th ed., 2011. pg 213</font>
Answer: A few things caught my eye here.
Firstly, since you need to store all the data in the files (which, I agree, is the best way to go about this, given the constraints), why not store all the data there? By which I mean build the lubricant list and grades in JS too. Or flip it around and keep everything in the HTML (hiding and showing the appropriate dropdowns as needed). Either way, I think it'd be better to keep all the source data in one place, rather than split it up. This will make the code a lot more maintainable.
And if you include the "select only one type of lubricant" constraint you've talked about in the comments, everything can be done as single-selection dropdowns, making the work of building the UI simpler. You just need a buildDropdown function.
I'd probably choose to define the values in a separate JS file and load that in the html. Loading a local JS file with a <script src=... is allowed even with same-origin constraints.
Second, a little more separation in the JS would be nice. For instance, the calculate function does more than just calculate: it also reads the form, validates it, provides feedback to the user, etc. Just pass it the given values when you're ready, and let it perform the math - nothing more. Move the rest of the logic elsewhere, breaking it into smaller chunks as you go. This will also make the code more maintainable.
For instance, your validation/feedback code is duplicated in two places, which is a sure sign that it can be extracted.
I'd also avoid calling document.getElementById so many times. For instance, you can just store the "message" element in a variable after you've gotten it once.
And there's no need to set an onclick attribute on an element you've created or fetched from the DOM. onclick is simply a property on that object. So instead of this
selectList.setAttribute("onclick", "calculate(this.form);");
you can just do
selectList.onclick = function () { calculate(this.form) };
Then again, since there's only the one form, it's a little redundant to pass that to the calculate function. Especially right now when calculate does so much other DOM work already. So, if we imagine that calculate (after more or less refactoring) knows to get the form, you can simply do:
selectList.onclick = calculate;
Of course, you should maybe use the more modern addEventListener function instead of directly assigning something to onclick. But for a simple, contained page like this, it's not really necessary.
I'd also advise you to use the usual JS camelCase naming convention for functions etc., and to avoid polluting the global scope. But again, for a single-purpose, self-contained page like this, it matters somewhat less than for a big site. Still, it's good hygiene.
Anyway, you've got all the basic building blocks already. Rework the structure a bit, and it should be quite neat! I'll try to work up a basic example of all of this.
FYI, I might choose to use "unrolled" dropdowns (i.e. select elements with a size attribute higher than 1), to make it quicker to drill down the "tree".
Update: Here's a basic attempt at a very different structure. It's not perfect, but hopefully you can glean something from it. I've skipped including the code here, because it's really not relevant, review-wise, since it's so different. I'm not saying you should necessarily go this way, though, but I think it's cleaner.
Basically, what I've done is declare the relationship between dropdowns using a data-* attribute (also used one to declare the validation message). So picking a grade populates the thread list and picking a thread populates the size list, but the code is identical for all the dropdowns, including the lubricants list. In other words, I've tried to the keep the JS mostly generic, and keep the data separated.
A few more things I noticed, going through the code:
Don't rely on the order of items in objects. It usually works out, but the order of properties of objects is not guaranteed. So options may not show up in dropdowns in the same order as you defined them.
I've also used objects, so the same warning applies there.
Don't mess around with multiplying and dividing and rounding to get a specific number of decimals. Just use number.toFixed(numberOfDecimals) | {
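To illustrate that last point, here is a side-by-side sketch (the sample torque and coefficient values are taken from the question's tables: grade 1 coarse 1/2" and the Never-seize factor; note that toFixed returns a string, so wrap it in Number() if further arithmetic is needed):

```javascript
// Sample values from the question's data: grade 1, coarse, 1/2" bolt,
// multiplied by the 0.45 Never-seize lubricant coefficient.
const torque = 29.3;
const coefficient = 0.45;
const raw = torque * coefficient;

// The original multiply/round/divide approach:
const rounded = Math.round(raw * Math.pow(10, 1)) / Math.pow(10, 1);
console.log(rounded);        // 13.2

// The simpler equivalent; note that toFixed returns a *string*,
// which is actually convenient when assigning to innerHTML:
console.log(raw.toFixed(1)); // "13.2"

// Convert back with Number() if you still need to calculate with it:
console.log(Number(raw.toFixed(1))); // 13.2
```

The behavior of the two forms is the same here; toFixed just says what it does directly.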
"domain": "codereview.stackexchange",
"id": 17761,
"tags": "javascript, beginner, html"
} |
controller_spawner node missing | Question:
I want to simulate many UAVs in Gazebo. When I launched 20 UAVs in Gazebo, I found the 17th-20th UAVs couldn't be controlled. I checked the rqt_graph and found that the node /gazebo had subscribed to all UAVs' topics (/pose_action) except the last four. So I want to know: is there any limitation on the number of topics a node can subscribe to?
update1:
I found the 17th-20th UAVs didn't have the topic "cmd_vel". On the other hand, I checked the rqt_graph and found that the node /gazebo had subscribed to all UAVs' topics (/command/pose) except the last four. I checked the log and found that the four UAVs' logs had the following warning:
Controller Spawner couldn't find the expected controller_manager ROS interface.
But all the other UAVs correctly load the controller_manager and load the controller. I have tried many times, and sometimes 5 were wrong, sometimes 3, etc. Why can't some of them?
update2:
I also found that the four UAVs' controller_spawner nodes were not created.
Originally posted by nm46nm on ROS Answers with karma: 1 on 2018-04-09
Post score: 0
Answer:
There aren't any hard limits in ROS on the number of subscribers, but your operating system may have some limits that indirectly limit the number of publishers or subscribers. For example the maximum number of open file descriptors limits the number of open TCP connections that a process can have.
On most common versions of Ubuntu Linux, for example, the maximum number of open file descriptors is 1024. I suspect your subscriber is nowhere near this limit, but if gazebo is publishing and subscribing to many topics per uav, it may be close to this limit.
You should be able to find the PID (process ID) for gazebo through ps; something like ps a | grep gazebo will probably help. Once you have the process ID, you can show all of the open file descriptors for gazebo with lsof -p {PID} (where {PID} is the process ID for gazebo), or you can count the number of open file descriptors with lsof -p {PID} | wc -l.
Originally posted by ahendrix with karma: 47576 on 2018-04-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2018-04-10:
Additionally: it might be good to check whether the (Gazebo) plugins used impose any (artificial) limits. 20 - 4 == 16. That is too nice a number to be a coincidence.
Comment by nm46nm on 2018-04-10:
I had changed the ulimit by ulimit -n 20480 before, it seemingly didn't work. Besides, when I try lsof -P {PID} , it says "No such file or directory", why?(I had replaced the pid)
Comment by ahendrix on 2018-04-10:
oops; that should be a lowercase p: lsof -p {PID}
Comment by nm46nm on 2018-04-11:
Thank you, but it told me "lsof: WARNING: can't stat() tracefs file system /sys/kernel/debug/tracing, Output information may be incomplete."
When I put sudo in front of the lsof, it said "lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs ,Output information may be incom..."
Comment by nm46nm on 2018-04-11:
@gvdhoorn, it is not always 16. If we want to simu 30 uavs, there are just 9 or 14 or 11 etc uavs connected. | {
"domain": "robotics.stackexchange",
"id": 30595,
"tags": "microcontroller, gazebo, ros-kinetic"
} |
Generating all directed acyclic graphs with constraints | Question: I am interested in listing all the unlabeled¹ acyclic digraphs with n vertices which satisfy some additional constraints, such as (a) the resulting graph is connected and (b) except for R identified root vertices, all vertices have two incoming edges and zero or one outgoing edges.
There may also be additional domain-specific constraints, but they are probably easy enough to implement by simply generating and rejecting graphs that do not meet the constraints. In principle, conditions (a) and (b) above could be satisfied in this way, but at least in the case of (b) it seems very advantageous to consider them directly in the generation process, since the output is likely to be quite sparse (i.e., a large majority of graphs will fail to satisfy (b), and this ratio increases with increasing n).
My problem is twofold:
Even ignoring the constraints part, I have not been able to determine how to efficiently generate DAGs. I know there are a lot of them, so I'll mostly be dealing with smallish n, let's say n < 15. There seem to be some promising papers, although many of them are focused on the closed-form counting of the number of graphs rather than actual generation, and the remaining ones are paywalled.
I want to apply at least enough of the constraints during generation to avoid the case where nearly all generated graphs are simply rejected. That is, I hope the generation of the constrained graphs is considerably faster than the generation of the much larger unconstrained set.
You can find lots of information on random generation of such graphs, but not much on exhaustive generation (which seems like the easier of the two problems).
¹ That is, I want to exclude generation of isomorphic graphs.
Answer: For small $n$, the easiest solution might be to download a list of all non-isomorphic graphs and then filter them according to your condition.
You might take a look at Brendan McKay's collection, constructed using geng as part of the Nauty graph isomorphism package.
See Enumerate all non-isomorphic graphs of a certain size for more details and citations. | {
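For very small n, the generate-then-filter idea can also be sketched by brute force. The sketch below enumerates DAGs as upper-triangular adjacency relations (every DAG is isomorphic to one whose edges all respect a fixed vertex order) and filters for condition (a); it does not deduplicate isomorphic copies, which is the part a canonical-labeling tool such as nauty handles, and the predicate for condition (b) would slot into the same filter loop:

```javascript
// Enumerate DAGs on n vertices as subsets of the "upward" edge pairs
// (i, j) with i < j. Every DAG is isomorphic to one of these, so the
// 2^(n(n-1)/2) subsets cover all isomorphism classes (with duplicates).
function* dags(n) {
  const pairs = [];
  for (let i = 0; i < n; i++)
    for (let j = i + 1; j < n; j++) pairs.push([i, j]);
  for (let mask = 0; mask < (1 << pairs.length); mask++) {
    yield pairs.filter((_, k) => (mask >> k) & 1);
  }
}

// Condition (a): the underlying undirected graph is connected.
function isConnected(n, edges) {
  const adj = Array.from({ length: n }, () => []);
  for (const [u, v] of edges) { adj[u].push(v); adj[v].push(u); }
  const seen = new Set([0]);
  const stack = [0];
  while (stack.length) {
    for (const w of adj[stack.pop()])
      if (!seen.has(w)) { seen.add(w); stack.push(w); }
  }
  return seen.size === n;
}

// Example: count the connected labeled DAGs (in this upper-triangular
// form) on 3 vertices.
let count = 0;
for (const edges of dags(3)) if (isConnected(3, edges)) count++;
console.log(count); // 4
```

This is only feasible for n up to around 7 or 8, which is exactly why pregenerated non-isomorphic lists or nauty's geng become attractive for larger n.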
"domain": "cs.stackexchange",
"id": 7757,
"tags": "algorithms, graphs, graph-isomorphism, graph-algorithms"
} |
Too fast circling during the localization | Question:
When I give the robot a localization estimate in rviz, it starts by circling very fast. It is too fast, and may be dangerous. So how can I decrease the speed?
I tried to reduce the acc_lim_th and max_rotational_vel values in the base local planner parameters, but it doesn't work.
Originally posted by Moda on ROS Answers with karma: 133 on 2014-07-23
Post score: 1
Original comments
Comment by dornhege on 2014-07-24:
What exactly are you doing? The "2D Pose Estimate" button in rviz should not move the robot at all.
Comment by Moda on 2014-07-24:
I'm doing 2D Pose Estimate and 2D Nav Goal in rviz
Answer:
Just some sanity checks: Set the rotational recovery to false and see if that rotation stops. Also, try to manually move the robot around to check if it localizes in the map properly.
PS - I have also noticed that if you reduce the max x and theta velocities below a certain limit, the robot tends to rotate. I do not know what causes this behavior.
Originally posted by 2ROS0 with karma: 1133 on 2014-08-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 18734,
"tags": "ros, localization, navigation, acceleration, velocity"
} |
List all the git branches | Question: I have written a toy script to list all the git branches in cwd. I am interested in knowing how can I make this code more modular and flexible.
For example I would like to have features like:
List only local branches.
List only remote branches.
Etc.
const os = require('os');
const cp = require('child_process');
const REGEX_STAR = /^\*\s*/g;
const promiseExec = (cmd) =>
new Promise((resolve, reject) => {
cp.exec(cmd, (err, stdout, stderr) => {
if (err) {
return reject(err);
}
resolve(lines(`${stdout}`));
});
});
const lines = (str) =>
str.trim().split(os.EOL).map((line) =>
line.trim().replace(REGEX_STAR, '')
);
const branches = () =>
promiseExec('git branch -a')
.then((res) => {
console.log(res);
});
branches();
Answer: Just return the data and leave it to the consumer what to do with it. That way, branches becomes a generic data grabber. If you want to have printer functions, keep them separate from the raw command.
Your Promise wrapper appears to expect the output of the branches command. Move the use of lines to the branches function instead.
Also, arrow functions with single arguments can omit the parens.
const promiseExec = (cmd) => new Promise((resolve, reject) => {
cp.exec(cmd, (err, stdout, stderr) => {
if (err) reject(err);
else resolve(stdout);
});
});
const branchLines = (str) => str.trim().split(os.EOL).map(line => line.trim().replace(REGEX_STAR, ''));
const branches = () => promiseExec('git branch -a').then(res => branchLines(res));
const printBranches = () => branches().then(res => console.log(res));
// Print manually
branches().then(res => console.log(res));
// or have a printer function do it
printBranches();
"domain": "codereview.stackexchange",
"id": 25367,
"tags": "javascript, node.js, git"
} |
Why is the phase gate a member of the universal gate set? | Question: According to the Solovay-Kitaev theorem it is possible to approximate any unitary quantum gate by a sequence of gates from a small set of other gates. The approximation can be done with an arbitrary accuracy $\epsilon$.
One of such set of gates is composed of Hadamard gate, phase gate ($S$), $\pi/8$ gate ($T$) and CNOT gate.
However, it is also true that $S=T^2$, because the $T$ gate is a rotation around the $z$ axis by $\pi/4$ and the $S$ gate is a rotation by $\pi/2$ around the same axis.
Since the $S$ gate can be composed of two $T$ gates, why do we add the $S$ gate to the set? It seems that a set containing only $H$, $T$ and CNOT is equivalent. What am I missing?
Answer: You might want a small set of gates, but it doesn't necessarily mean that you want the smallest set possible. When you talk about a fault-tolerant quantum computer, what you really want to do is minimise the number of $T$ gates (typically the thing that is hard to implement). Other gates from, for example, the Clifford group, are (relatively) easy to implement, so you would much rather implement $S$ than $T^2$.
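The identity $S = T^2$ that the question relies on is also easy to check numerically. A quick sketch, representing the diagonal entries of the (diagonal) gates as {re, im} pairs:

```javascript
// Complex multiplication: (a+bi)(c+di) = (ac-bd) + (ad+bc)i.
const mul = (a, b) => ({
  re: a.re * b.re - a.im * b.im,
  im: a.re * b.im + a.im * b.re,
});

// T and S are diagonal, so tracking the diagonal entries suffices:
// T = diag(1, e^{i*pi/4}), S = diag(1, i).
const T = [{ re: 1, im: 0 },
           { re: Math.cos(Math.PI / 4), im: Math.sin(Math.PI / 4) }];
const S = [{ re: 1, im: 0 }, { re: 0, im: 1 }];

// T^2 squares each diagonal entry.
const T2 = T.map(z => mul(z, z));

// Maximum entrywise deviation of T^2 from S:
const err = Math.max(...T2.map((z, k) =>
  Math.hypot(z.re - S[k].re, z.im - S[k].im)));
console.log(err < 1e-12); // true
```

So the two gate sets generate the same group; the question is purely about which generators are cheap to realize fault-tolerantly.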
"domain": "quantumcomputing.stackexchange",
"id": 1512,
"tags": "quantum-gate, solovay-kitaev-algorithm"
} |
best kinect object detection in indigo | Question:
I've seen this question before, but since 2011 the world has advanced. I know for one that roboearth has simply stopped, splitting into two separate projects. Their latest release seems to be in fuerte.
I also know that find_object_2d has a new 3d feature which I aim to test out sometime soon. This DOES exist in indigo. Does anybody else know of any other packages that do 3d object tracking? possibly better than either of these? Does anybody already have experience with find_object_2d's 3D features?
(The reason I mention Indigo specifically is that I plan to use rtabmap in conjunction with whatever object detection I do, and I would like to avoid needing to set up a new ubuntu, groovy, and rtabmap install)
Originally posted by ThaHypnotoad on ROS Answers with karma: 33 on 2015-04-19
Post score: 1
Answer:
There is a demo on rtabmap integrating find_object_2d (with 3D feature). You can take a look at the bottom of the referred demo launch file. For live object detection using a Kinect, there is an example launch file find_object_3d.launch in the find_object_2d project too.
Cheers
Originally posted by matlabbe with karma: 6409 on 2015-04-19
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by ThaHypnotoad on 2015-04-19:
Thanks! Totally skipped over that. Really convenient stuff. Just about to try it out. | {
"domain": "robotics.stackexchange",
"id": 21467,
"tags": "slam, navigation, kinect, 3d-object-recognition, roboearth"
} |
Logic for changing entity navigational properties States | Question: The question might seem a little bit long, but I will be very glad if you have read it till the end and told me your opinions.
Let's say I have such entity:
public class FirstEntity
{
public int Id { get; set; }
...
public virtual SecondEntity SecondEntity { get; set; }
public int SecondEntityId { get; set; }
public virtual ICollection<ThirdEntity> ThirdEntities { get; set; }
}
And here is the SecondEntity and ThirdEntity:
public class SecondEntity
{
public int Id { get; set; }
...
public virtual FirstEntity FirstEntity { get; set; }
}
public class ThirdEntity
{
public int Id { get; set; }
...
public virtual FirstEntity FirstEntity { get; set; }
}
While inserting entity I am setting it's state to Added:
Context.Entry(entity).State = EntityState.Added;
In this case, all child entities states are changing to Added also.
The problem is I must not insert some entities if they already exist in the database. So, for example, if some SecondEntity exists in the database, then I must change its state to Detached, but I must also set the SecondEntityId value to the Id of the existing SecondEntity in the database.
IMHO, it is pretty common problem. But, I can't find any question like this in SE.
I have solved this problem. But I wonder, is my algorithm any good?
I have created two attributes: [MustBeChecked] and [ModifyState("SecondEntityId")]
The logic is that [MustBeChecked] marks an entity containing navigational properties which must be checked, and those properties are decorated with [ModifyState], together with the name of the property that must be set to the key value of the existing row in the database.
For example, FirstEntity will look like that:
[MustBeChecked]
public class FirstEntity
{
public int Id { get; set; }
...
[ModifyState("SecondEntityId")]
public virtual SecondEntity SecondEntity { get; set; }
public int SecondEntityId { get; set; }
[ModifyState]
public virtual ICollection<ThirdEntity> ThirdEntities { get; set; }
}
Also, I have one type per entity, each of which has a DefineState method. I call it via reflection at runtime. It checks whether the entity exists in the database and then sets the property value on the parent object. For example, one of them:
public class SecondEntityState : IStateDefiner<SecondEntity>
{
    public SecondEntity DefineState(DbContext context, SecondEntity model, object parent, string propertyName)
{
var entity = context.Set<SecondEntity>().AsQueryable().FilterByUniqueProperties(model).SingleOrDefault();
if (entity != null)
{
if (!String.IsNullOrEmpty(propertyName))
{
parent.GetType().GetProperty(propertyName).SetValue(parent, entity.Id);
}
context.Entry(model).State = System.Data.EntityState.Detached;
}
return model;
}
}
And I am iterating over entity properties recursively to find such navigational properties and process them.
public void ModifyState<TEntity>(TEntity entity)
{
var properties = entity.GetType()
.GetPropertiesWithAttribute<MustBeCheckedAttribute>();
// Iterate over attribute details
foreach (var prop in properties)
{
// Check if the property is collection
if (typeof(IEnumerable).IsAssignableFrom(prop.PropertyType))
{
var collection = entity.GetType()
.GetProperty(prop.Name)
.GetValue(entity);
foreach (var collectionElement in collection as List<object>)
{
ModifyState(collectionElement);
ProcessOverProperties(collectionElement);
}
}
else
{
ModifyState(entity);
ProcessOverProperties(entity);
}
}
}
/// <summary>
/// Will find properties which have been decorated with the ModifyState attribute,
/// and will then select the property and the IdContainerPropertyName value for this attribute.
/// We are going to use this information for detaching the navigational property
/// and setting the corresponding property's value
/// </summary>
/// <typeparam name="TEntity">Entity type</typeparam>
/// <param name="entity">Entity object</param>
private void ProcessOverProperties<TEntity>(TEntity entity)
{
var modifyStateAttributes = entity.GetType()
.GetPropertiesWithAttribute<ModifyStateAttribute>()
.Select(prop =>
new
{
Property = prop,
IdContainer = (Attribute.GetCustomAttribute(prop, typeof(ModifyStateAttribute)) as ModifyStateAttribute).IdContainerPropertyName
});
// Iterate over attribute details
foreach (var attributeInformation in modifyStateAttributes)
{
var stateDefinerInstance = EntityStateFactory.CreateStateDefiner(attributeInformation.Property.PropertyType);
MethodInfo methodInfo = stateDefinerInstance.GetType().GetMethod("DefineState");
methodInfo.Invoke(stateDefinerInstance, new object[]
{
this,
entity.GetType().GetProperty(attributeInformation.Property.Name).GetValue(entity),
entity,
attributeInformation.IdContainer
});
}
}
Is my logic any good?
Answer: I'm going to answer in the context of comparing this to GraphDiff since it was mentioned in the comments.
First, make sure you have a need to update objects in this fashion. I'm assuming you do.
Implementation-wise, the main difference between GraphDiff and this approach is that GraphDiff builds a graph object and then visits it, while this approach uses recursion, which is more procedural.
As for the use of reflection, I think it's required to solve this problem. I can't see a way around it.
I see this as a valid approach. It requires custom attributes on entities and some boilerplate code in places, but short of writing a library that extends EF objects, I think you kind of have to do something like this.
As long as you can verify that you won't get into
circular reference issues
stack overflows from recursing too deeply
then I would say it's a valid approach. It's clean, and it appears easy enough to maintain.
Two suggestions:
I would suggest storing the objects you get from reflecting on types in a static dictionary, so you only have to use reflection the first time rather than every time.
Consider making an abstract or at least a parent class for a base implementation of your State classes. I think there's some code in your DefineState methods that you can avoid duplicating in every method definition by pulling it out into a base implementation. Can you possibly use generic parameters and/or one generic method?
Overall though, let performance be the judge. As long as your app isn't taking a significant hit from this implementation, I say you should congratulate yourself on finding a solution to a rather complex problem. | {
"domain": "codereview.stackexchange",
"id": 15420,
"tags": "c#, entity-framework"
} |
Meaning of source here | Question: In graph theory, a source of a directed graph $D = (V(D), E(D))$ is a vertex of it whose in-degree is zero.
The book CLRS makes these statements:
Given a graph $G = (V, E)$ and a distinguished source vertex $s$, breadth-first
search systematically explores the edges of $G$ to “discover” every vertex that is
reachable from $s$.
I know this is an amateur question, but does source have the same meaning here (at least when the graph is directed), or is it just some word the book uses without any particular reason? Maybe by source it means a root.
Answer: In that context source is just a way to give a specific name to the vertex $s$.
It makes sense to use that word since it is the vertex from which all shortest-paths computed using BFS emanate. | {
"domain": "cs.stackexchange",
"id": 19011,
"tags": "graphs, graph-traversal, breadth-first-search"
} |
Why are quarks confined? Why can't they be found in unbound states? | Question: No one has ever observed a "free" quark, i.e. a quark in an unbound state. According to one paper I read, $p\bar{p}$ collisions produce unbound $t\bar{t}$ pairs which quickly decay into other particles. But some people argue that the produced $t\bar{t}$ pairs decay so quickly that they have no time to bind! And I agree with this idea.
So why can one not find quarks in unbound states?
Answer: This is because the force-versus-distance law for quarks is such that the farther away from one another you pull a pair of quarks, the harder they attract one another. It is as if they were attached to each other with a rubber band (a very stiff one!). If you pull them far enough apart, there's enough energy stored in the system to create a new pair of quarks (i.e., the rubber band snaps) which pair up with the others and instead of getting two "free" quarks, you get a pair of mesons with two quarks inside each one. | {
"domain": "physics.stackexchange",
"id": 69820,
"tags": "particle-physics, quantum-chromodynamics, quarks, confinement"
} |
Do we come to know which allele is dominant by seeing the family generation tree only? | Question: I know that a gene has alleles (variants) and that one is dominant over the other, i.e. the other is recessive.
Then I got to wondering how we can tell whether an allele is dominant or recessive. I came across this site while Googling, and it says we can tell by observing patterns in the family generation tree.
So, my question is: can we only characterize an allele by observing such patterns? Is there any other method, such as looking at the molecular level?
Please extend this "How can we tell whether an allele is dominant" to a special case where a gene has 3 alleles: can we have 2 dominant alleles?
Answer: Addressing Your First Question
We can tell whether an allele is dominant or recessive based on patterns in family trees, that is true, and it is very helpful! However, that is not the only way, since by looking at the molecular function of the alleles, the dominant and recessive relationship between alleles can be assessed without needing to look at family trees!
I think a deeper understanding of what it means to be dominant versus recessive would be helpful, because usually biology isn't just that simple! In most scenarios where there is a distinct recessive and dominant trait, it is because the dominant trait causes some specific activity/functioning protein while the recessive trait does not. Let's look at an example of this:
Let's choose eye color:1
There are multiple genes that affect eye color but let's just look at one: the one that codes for the brown pigment (melanin) to be produced in the iris [specifically the HERC2 gene]. As you probably already know, brown eyes are dominant and blue eyes are recessive.
Let's call the dominant allele B and the recessive allele b. The only way for blue eyes to be the phenotype is for the allele combination to be bb. Now think about why this would occur on a molecular level. Alleles are just forms of genes, which in turn code for some protein. Think back to what we said earlier about how most of the time dominant alleles cause some certain activity within the cell and recessive alleles do not. We can now use that knowledge to uncover the molecular basis of this eye color scenario: The B allele codes for the brown pigment to be produced in the iris, while the b allele codes for a dysfunctional protein that does not lead to melanin being produced, leading to the blue-ish color we see. So, if the B allele causes the production of melanin and the b allele does not, it makes sense that the B allele would be dominant in this case. In the case of Bb, melanin is still being produced (even if it is because of just one allele), so there still would be brown eyes! Therefore it makes sense that bb is the only allele combination that results in the blue-eye phenotype, because it is the only combination with no allele still producing melanin. This is why brown eyes are dominant to blue eyes, because blue eyes can only exist without the presence of any B allele.
So, by knowing how an allele actually functions on a molecular level, that can help us to understand if it is in fact dominant or recessive to another allele. We should also acknowledge that genes do not necessarily need to be in this black-and-white relationship of recessive or dominant. There is also incomplete dominance and co-dominance!
Now to Your Second Question
When talking about three alleles, it is important to understand that allele dominance only is used when comparing one allele to another allele. So with three alleles, the dominance, recessiveness, co-dominance, or incomplete dominance is between two of the alleles.2 So if we are talking about alleles A, B, and C. We could say things such as:
A is recessive to B, B is dominant to C, C has in-complete dominance with A, etc...
However, it really comes down to the molecular function of each of the alleles to determine how those interactions would take place. But to answer your question of "can we have 2 dominant alleles" directly, the answer is sort of. Allele B and allele C can both be dominant to allele A, but that says nothing about the interaction between allele B and allele C.
1 This is a simplification of the actual molecular mechanism of eye color, but at a simplified level this works to illustrate our point. You can read more into the genetic/molecular basis of eye color here.
2 This article is very good at discussing alleles and clarifying the point that dominance is only between two alleles. | {
"domain": "biology.stackexchange",
"id": 10904,
"tags": "genetics, dna, chromosome, biotechnology, allele"
} |
Comparing two multi-fasta files of the same set of proteins with parser - to find and count mutations after treatment | Question: My task is to count the mutations that occurred in several proteins after a treatment. The sequences are all present in the two files in the same order. I opened both files with the FASTA parser (SeqIO.parse) in Biopython and got all the proteins listed (separated into before and after treatment).
How can I zip the parsers together to count the mutations?
How can I count the mutations that occurred after the treatment?
from Bio import SeqIO
for normal_samples in SeqIO.parse("/data/statistic/normal_samples", "fasta"):
print(normal_samples.id)
print(repr(normal_samples.seq))
print(len(normal_samples))
for treated_samples in SeqIO.parse("/data/statistic/with_treatment", "fasta"):
    print(treated_samples.id)
    print(repr(treated_samples.seq))
    print(len(treated_samples))
dict_n_t = dict(zip(normal_samples & treated_samples))
Answer: from Bio import SeqIO
for normal, treated in zip(SeqIO.parse("/data/statistic/normal_samples", "fasta"),
SeqIO.parse("/data/statistic/with_treatment", "fasta")):
... do stuff...
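For the "... do stuff ..." part, one simple way to count mutations is to compare the two sequences position by position. The helper below is our own sketch (not part of Biopython), and it assumes each record pair really is the same protein before and after treatment, aligned one-to-one:

```python
from itertools import zip_longest

def count_mutations(normal_seq, treated_seq):
    """Count positions where two aligned sequences differ.

    If one sequence is longer than the other, the extra positions
    are counted as mutations (insertions/deletions) as well.
    """
    return sum(a != b for a, b in zip_longest(str(normal_seq), str(treated_seq)))

# Plain strings standing in for SeqRecord.seq objects:
print(count_mutations("MKTAYIA", "MKTSYIA"))  # 1 (one substitution)
print(count_mutations("MKTAYIA", "MKTAYI"))   # 1 (one trailing deletion)
```

Inside the loop you would call count_mutations(normal.seq, treated.seq) and, for example, store the result in a dict keyed by normal.id.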
That's generally how you zip iterators together in python. | {
"domain": "bioinformatics.stackexchange",
"id": 408,
"tags": "python, fasta, proteins, statistics, mutations"
} |
Why isn't conditional probability sufficient to describe causality? | Question: I read these comments from Judea Pearl saying we don't have causality, physical equations are symmetric, etc. But conditional probability is clearly not symmetric and captures directed relationships.
How would Pearl respond to someone saying that conditional probability already captures all we need to show causal relationships?
Answer: Perhaps the shortest answer to this question is that Bayes' Theorem itself allows us to easily change the direction of a conditional probability:
$$
P(A|B) = \frac{P(B|A)P(A)}{P(B)}
$$
So if you have $P(B|A)$, $P(A)$, and $P(B)$, we can determine $P(A|B)$, and similarly you can determine $P(B|A)$ from $P(A|B)$, $P(B)$ and $P(A)$. Just by looking at $P(B|A)$ and $P(A|B)$, it is therefore impossible to tell what the causal direction is (if any).
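A quick numeric sketch of that inversion (the probabilities below are made up purely for illustration):

```python
# Made-up numbers: P(B|A) = P(symptom | disease), P(A) = prior on the
# disease, P(B) = marginal probability of the symptom.
p_b_given_a = 0.9
p_a = 0.01
p_b = 0.05

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 6))  # 0.18

# And we can invert again to recover P(B|A) from P(A|B), P(B), P(A):
p_b_given_a_again = p_a_given_b * p_b / p_a
print(round(p_b_given_a_again, 6))  # 0.9
```

Since either direction is recoverable from the other, the pair of conditionals alone carries no information about which variable causes which.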
In fact, probabilistic inference usually works the other way round: When there is a known causal relation, say from diseases $A$ to symptoms $B$, we usually have $P(B|A)$, and are interested in the diagnostic reasoning task of determining $P(A|B)$ from that. (The only other thing we need for that is the prior probability $P(A)$ since $P(B)$ is just a normalization factor.) | {
"domain": "ai.stackexchange",
"id": 1902,
"tags": "philosophy, comparison, conditional-probability, causation"
} |
What's the name and purpose of this specific shape of hollows in a reinforced concrete slab? | Question: Below is a stack of reinforced concrete slabs with hollows (camera is positioned along the hollows' axis). Such slabs are used for floors - you build walls of the next floor, then lay such slabs such that they rest onto walls with the end shown on the picture and the opposite end. The dark round dots are steel rebars covered in rust.
I've seen slabs with round hollows and with oval hollows but this shape is new to me. It's something like a rounded hexagon with an extra cavity on the bottom.
What's the name of this shape and why is it used for hollows in a reinforced concrete slab?
Answer: It looks like a typical precast concrete "hollow-core" or "voided" slab. A quick Google image search reveals the variety of void shapes employed by manufacturers.
The image on the left (from Oldcastle Precast, incidentally) nicely shows the variation of void shape with increasing slab thickness.
As noted in the other answers, we care about the concrete profile more than the void profile. The voids are simply introduced to minimize concrete area where it is not structurally necessary.
In theory, a rectangular void would minimize concrete. However, the corners are better off rounded to reduce the stress concentrations (and therefore minimize cracking). Hence, the oblong voids. In thinner slabs, the voids may indeed approach a circular shape. One possible explanation for the irregular void shape in your posted question, is that the irregular void can set proper concrete cover for the rebar. | {
"domain": "engineering.stackexchange",
"id": 1697,
"tags": "structural-engineering, civil-engineering, concrete, reinforced-concrete"
} |
Condition for closed orbits | Question:
I'm working on a central force problem in which the potential is
$$ U(r)=-\frac{\alpha}{r}\left( 1+ \frac{\beta}{r} \right) $$
I'm asked what condition has to be satisfied for the orbit to be closed.
I'm aware that Bertrand's theorem suggests that the form of the potential allows closed orbits and in other books like Marion's they have a condition that goes like:
$$ \Delta{\phi}=2\int_{ r_{min}}^{r_{max}}\frac{\frac{L}{r^{2}}\text{d}r}{\sqrt{2\mu \left[E-\frac{L^{2}}{2\mu r^{2}}+\frac{\alpha}{r}\left(1+\frac{\beta}{r}\right) \right]}}=2\pi\frac{a}{b} $$
with $a$ and $b$ being natural numbers.
I already have the condition I'm asked (It's not a homework). But I don't understand where that $\frac{a}{b}$ having to be rational comes from. Is that a geometrical reasoning? It has to do with Bertrand's theorem? It looks somewhat like Lissajous curves and it may be something simple that I don't know.
Answer: The orbit is in 2d and "oscillates" between a minimum and a maximum $r$. The position in the plane is given by $(r(t),\phi(t))$, but here $t$ has been eliminated and you have $r(\phi)$.
As you go once from $r_{min}$ to $r_{max}$, the body will advance along the orbit by an angular distance $\Delta \phi$. As you go from $r_{min}$ to $r_{max}$ and back to $r_{min}$, you advance by an angle $2\Delta \phi$.
To get a closed orbit, you must eventually come back to your starting point, meaning you must make an integer number $b$ of trips between $r_{min}$ and $r_{max}$ while advancing by an integer multiple $a$ of $2\pi$. This is the geometrical origin of the $2\pi a/b$ factor.
Edit: In answer to a comment, two situations are illustrated below. In both cases $r_{min}=1$ and $r_{max}=3$, and these values are shows as red thick lines. These values restrict the orbits to a ring of inner radius $1$ and outer radius $3$. The radius oscillates between $1$ and $3$ with some frequency $\omega_r$, as can be seen by the black lines in the figures.
The parametric equations for the figures on the left and the right are, respectively,
$$
r(\phi)=2+\cos\left(\sqrt{3}\phi\right)\, ,\qquad
\hbox{and}\qquad r(\phi)=2+\cos\left(\phi\right)
$$
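A quick numerical check of these two forms (the tolerance below is arbitrary): after one full turn in $\phi$, the radius returns to its starting value only when the frequency ratio is rational:

```python
import math

def radius(phi, k):
    # r(phi) = 2 + cos(k * phi), oscillating between r_min = 1 and r_max = 3
    return 2 + math.cos(k * phi)

# Commensurate case (k = 1): one full turn brings r back to r_max = 3.
closed = abs(radius(2 * math.pi, 1.0) - radius(0.0, 1.0)) < 1e-9
# Incommensurate case (k = sqrt(3)): r does not return after a full turn.
open_orbit = abs(radius(2 * math.pi, math.sqrt(3)) - radius(0.0, math.sqrt(3))) < 1e-9

print(closed)      # True
print(open_orbit)  # False
```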
In the first case, the ratio $\omega_\phi/\omega_r$ is not commensurate since $\sqrt{3}$ is irrational, and the orbit does not close. The best way to see this is to note that the start of the parametric curve is at $r=3,\phi=0$ but, at the end of the curve, $r\ne 3$. Because the ratio $\omega_\phi/\omega_r$ is irrational, the orbit would eventually densely fill the ring.
In the second case, on the other hand, the ratio $\omega_\phi/\omega_r$ is commensurate, and one can show (if we follow the curve through its $\phi$ evolution) that in fact it goes from $r_{min}\to r_{max}\to r_{min}$ exactly once when $\phi$ goes from $0\to 2\pi$. | {
"domain": "physics.stackexchange",
"id": 41758,
"tags": "classical-mechanics, orbital-motion"
} |
Reference request - Nomenclature of organophosphorus compounds? | Question: Can someone please provide me a good reference for the nomenclature of organophosphorus compounds such as pesticides and chemical warfare agents? Something from the IUPAC would be great.
Thanks.
Answer: The usual substitutive nomenclature for organic compounds was extended to various other elements including phosphorus. The current rules for name construction of organic compounds containing phosphorus are included in Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book). These recommendations cover typical organophosphorus compounds that are used as pesticides and chemical warfare agents. However, they do not include the nomenclature for organometallic compounds, coordination compounds, polymers, and some natural products. | {
"domain": "chemistry.stackexchange",
"id": 5354,
"tags": "nomenclature, reference-request, organophosphorus-compounds"
} |
What is the EMF in a bar rotating around any point? | Question: The problem of a bar (or rod) of length $l$ rotating around one of its ends in a magnetic field of magnitude $B$ is very well known. The induced emf is $\frac{1}{2}B\omega l^2$, where $\omega$ is the angular frequency. And in the case where the bar is rotating around its center then the induced emf is $\frac{1}{2}B\omega (\frac{l}{2})^2$, which is the same voltage induced when a bar of length $\frac{l}{2}$ is rotating around one of its ends.
But my question is:
What is the induced emf when the bar is rotating around any other point inside the bar?
The research I have done so far:
The video Motional EMF and Rotating Rods says that it does not matter how many rods there are, as long as they are attached to the same central hinge, the induced emf (voltage) will be the same. The video also says that that happens because the two rods can be considered as two voltage sources connected in parallel.
The Quora post What would happen if two batteries connected in parallel says:
If the two cells connected in parallel are not identical, a few things can happen, as addressed in other answers. Firstly, the voltage of the cells will be balanced.
From that, my guess is that the axis (point) of rotation will move "automatically" to the middle of the bar, but this would generate other questions (which I can discuss in the comments/answers).
I would also like to know if anyone has found this exercise in any book. I have looked at Purcell's and Griffith's but have not found it there.
Answer: The induced EMF exists between the axis (stationary point) and the free end of the rod.
If two rods are joined to the same axis or hinge, then whether or not they are in the same straight line an EMF exists between the the axis and each free end.
In the same way, for a single rod with axis at its midpoint the free ends will be at the same potential difference relative to the midpoint. If the axis is not at the midpoint, so that one arm is longer than the other, the potentials at the two free ends will be different. They PD between the centre and each free end will be in proportion to the squared distances of the ends from the axis. | {
"domain": "physics.stackexchange",
"id": 53977,
"tags": "electromagnetism, magnetic-fields, rotational-dynamics, electric-current, rotational-kinematics"
} |
How to detect loop closure after scan matching? | Question:
Hello,
How to detect loop closures from the scan matched 2D data?
Thanks
Originally posted by Raksha Murthy on ROS Answers with karma: 11 on 2017-10-04
Post score: 1
Answer:
This is a non-trivial problem; two approaches to it are implemented in open_karto and cartographer. For the latter, you can read about the clever branch-and-bound approach used in this paper.
In gmapping, there is no explicit detection of loop closures, but map hypotheses with "badly" matched parts get a low weight and get sampled out.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2017-10-05
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Raksha Murthy on 2017-10-09:
May I know how to solve this using open_karto? Could you please elaborate the steps? What are the changes to be made in the launch files to run my own datasets? | {
"domain": "robotics.stackexchange",
"id": 29001,
"tags": "laser"
} |
What happens when the output length in the brevity penalty is zero? | Question: The brevity penalty is defined as
$$bp = e^{(1- r/c)},$$
where $r$ is the reference length and $c$ is the output length.
But what happens if the output length becomes zero? Is there any standard way of coping with that issue?
Answer: Division by zero is not mathematically defined.
A usual or standard way of dealing with this issue is to raise an exception. For example, in Python, the exception ZeroDivisionError is raised at runtime if you happen to divide by zero.
If you execute the following program
zero = 0
numerator = 10
numerator / zero
You will get
Traceback (most recent call last):
File "main.py", line 3, in <module>
numerator / zero
ZeroDivisionError: division by zero
However, if you want to avoid this runtime exception, you can check for division by zero and deal with this issue in a way that is appropriate for your program (without needing to terminate it).
In the paper BLEU: a Method for Automatic Evaluation of Machine Translation that introduced the BLEU (and brevity penalty) metric, the authors defined the brevity penalty as
\begin{align}
BP =
\begin{cases}
1, & \text{if } c > r\\
e^{(1- r/c)} & \text{if } c \leq r\\
\end{cases} \label{1} \tag{1}
\end{align}
This definition does not explicitly take into account the division by zero.
The Python package nltk does not raise an exception, but it (apparently, arbitrarily) returns zero when c == 0. Note that the BLEU metric ranges from 0 to 1. For example, if you execute the following program
from nltk.translate.bleu_score import brevity_penalty, closest_ref_length
reference1 = list("hello") # A human reference translation.
references = [reference1] # You could have more than one human reference translation.
# references = [] Without a reference, you will get a ValueError.
candidate = list() # The machine translation.
c = len(candidate)
r = closest_ref_length(references, c)
print("brevity_penalty =", brevity_penalty(r, c))
You will get
brevity_penalty = 0
In the example above, the only human reference (translation) is reference1 = list("hello") and the only candidate (the machine translation) is an empty list. However, if references = [] (you have no references), then you will get the error ValueError: min() arg is an empty sequence, where references are used to look for the closest reference (the closest human translation) to the candidate (the machine translation), given that there could be more than one human reference translation, and one needs to be chosen to compute the brevity penalty, with respect to your given candidate.
In fact, in the documentation of the brevity_penalty function, the following comment is written
# If hypothesis is empty, brevity penalty = 0 should result in BLEU = 0.0.
where hypothesis is a synonym for candidate (the machine translation) and the length of the candidate is $c$ in the formula \ref{1} (and c in the example above).
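A minimal standalone sketch of the piecewise definition above, adopting the same (somewhat arbitrary) nltk-style convention of returning 0 for an empty candidate; the guard clause is our addition, not part of the paper's formula:

```python
import math

def brevity_penalty(r, c):
    """Brevity penalty from the BLEU paper, with an explicit guard for an
    empty candidate (c == 0) instead of raising ZeroDivisionError."""
    if c == 0:
        return 0.0  # nltk-style convention: forces BLEU = 0
    if c > r:
        return 1.0
    return math.exp(1 - r / c)

print(brevity_penalty(5, 0))   # 0.0 (empty candidate, no exception)
print(brevity_penalty(5, 7))   # 1.0 (candidate longer than reference)
print(brevity_penalty(10, 5))  # exp(1 - 2), roughly 0.3679
```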
To answer your second question more directly, I don't think there's a standard way of dealing with the issue, but I've not fully read the BLEU paper yet. | {
"domain": "ai.stackexchange",
"id": 1497,
"tags": "natural-language-processing, machine-translation, bleu"
} |
2D Uniform Flow Inclined Plane - Reynolds Averaging: Leads to no Turbulence? | Question: We're looking at a fully developed flow along an inclined plate, the $x$ coordinate is along the plate and the $z$ coordinate is perpendicular to it.
In 2D, uniform flow I end up with the continuity equation reduced to:
$$\frac{\partial w}{\partial z} = 0$$
Now if I apply the Reynolds ensemble averaging decomposition I have :
$$\frac{\partial \bar{w}}{\partial z} + \frac{\partial w'}{\partial z} = 0$$
Now if I apply the ensemble averaging equation to that equation I end up with simply:
$$\frac{\partial \bar{w}}{\partial z} = 0$$
And injecting in the previous equation I deduce also:
$$\frac{\partial w'}{\partial z} = 0$$
Those two values are then constant along $z$. Applying bottom boundary condition I actually deduce :
$$ \bar{w} = w' = 0 $$
Now if I look at my Reynold's Stresses in my momentum equations I just have:
$ \overline{u' w'}$ in the $x$ equation and $\overline{w'^2}$ in the $z$ equation.
But knowing that $w' = 0$, all of my Reynolds stresses disappear... How is that possible ? All the material I see about it suggest that I should still have those if I want to use a turbulent model (such as Prandtl Mixing Length).
What am I missing ?
Answer: The answer is that turbulence is always a 3-dimensional phenomenon and never 2-dimensional. In other words, while the mean flow may be 2-dimensional, flow fluctuations exist in all 3 dimensions. This means that if $(u,v,w)$ is the total flow velocity field, then it is to be decomposed as $(\bar{u}+u',v',w')$ because $\bar{v}=\bar{w}=0$. So Reynolds stress will be present.
"domain": "physics.stackexchange",
"id": 46704,
"tags": "flow, turbulence, navier-stokes"
} |
ROS Lunar on Ubuntu 17.10? | Question:
Has anyone successfully run ROS Lunar on Ubuntu 17.10? If so, I'd be grateful for instructions on how to do it. Many thanks.
Originally posted by peebz on ROS Answers with karma: 1 on 2017-11-30
Post score: 0
Original comments
Comment by gvdhoorn on 2017-12-01:
Related questions: #q262187 (discusses Kinetic on 17.04, but conceptually the same), #q274615.
Answer:
Officially this is not supported, so there are no binary packages available (but you are probably already aware of this, see also REP-3).
A source install could be possible, but is not guaranteed to work, as dependencies may have changed between Zesty and Artful. As for instructions: see wiki/lunar/Installation/Source (replace occurences of zesty in commands with artful where needed).
Edit: as I was curious myself, I just did this in an Artful Docker container. I basically followed the Lunar source instructions (so for 17.04), but had to make some minor changes to some commands.
As rosdep doesn't have any artful entries yet, pretend it's really zesty (under Resolving dependencies) (added the --skip-keys to avoid trying to reinstall rosdep again):
rosdep install --from-paths src -i --rosdistro=lunar --os ubuntu:zesty -y --skip-keys="python-rosdep"
To be able to install shiboken2 and related pkgs, add the OSRF repositories following the instructions (artful appears to be missing dirmngr, which made apt-key fail, so install it dirmngr with apt (see also #q275824 and #q264654)).
After that I could follow the regular instructions again.
The build is still running, so I don't know whether it works, but it would seem to at least compile for now.
Edit2: build just finished, no really significant problems afaict. roscore started, but haven't tested anything else.
Edit3: after building, you probably want to add export ROS_OS_OVERRIDE=ubuntu:17.04:zesty to your .bashrc (or similar file). At least until rosdep gets its rules updated for artful.
Originally posted by gvdhoorn with karma: 86574 on 2017-12-01
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by peebz on 2017-12-01:
Many thanks for your answer and for testing it out. I will give it a go. It will give me an excuse to finally do something with Docker too.
Comment by gvdhoorn on 2017-12-01:
Dockerfile that worked for me: gist.
Comment by nile649 on 2018-01-23:
Hey, Hi it is giving me opencv error, how can I exclude opencv from make, it is really urgent
Comment by gvdhoorn on 2018-01-24:
This is a regular "ROS from source" installation - just on an unsupported OS - so excluding packages would be done the same way as always.
Comment by nile649 on 2018-01-24:
I was able to build but the final error before exit is on this lines
COPY ./ros_entrypoint.sh \
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
Comment by gvdhoorn on 2018-01-24:
ros_entrypoint.sh is standard file in all ROS Docker images, see osrf/docker_images/ros/indigo/ubuntu/trusty/ros-core/ros_entrypoint.sh.
Comment by nile649 on 2018-01-24:
Thanks, will have a look | {
"domain": "robotics.stackexchange",
"id": 29491,
"tags": "ros"
} |
Observed Effects of Gravitational Lensing? | Question: If gravitational lensing is real, why can't we see the Sun during a solar eclipse? Why doesn't the Sun's light follow the curve of space created by the Moon and become visible to us?
Answer: The light is gravitationally lensed - by a very small amount. The fact that eclipses happen proves nothing one way or another; even if gravitational lensing did not occur, all that means is that the Moon could be slightly further away from the Earth and still block out the light from the Sun at a spot on the Earth.
A ray of light from the Sun passing close to the limb of the Moon is deflected by about $10^{-10}$ radians (roughly 26 microarcseconds). | {
"domain": "physics.stackexchange",
"id": 33968,
"tags": "gravitational-lensing"
} |
Counting words in files | Question: I made a simple program for my study that counts the words in text files and prints every word, with the number of times it is repeated, across the files.
How can I improve this code?
#include <future>
#include <iostream>
#include <fstream>
#include <unordered_map>
#include <string>
#include <vector>
#include <algorithm>
#include <exception>
typedef std::unordered_map<std::string, std::size_t> Words;
Words wordsInFile(const std::string& fileName)
{
std::ifstream file(fileName);
if (!file.is_open())
{
        throw "Can't open the file " + fileName;
}
Words loadFromFile;
for (std::string word; file >> word;)
{
++loadFromFile[word];
}
return loadFromFile;
}
int main(int argc, char *argv[])
{
std::vector<std::future<Words>> futures;
for (int i = 1; i < argc; ++i)
{
futures.push_back(std::async([=] { return wordsInFile(argv[i]); }));
}
Words words;
for (auto& i : futures)
{
const auto results = i.get();
for (const auto& j : results)
{
words[j.first] = j.second;
}
}
std::cout << "Word\tRepeated Times\n-------------------------\n";
for (const auto& i : words)
std::cout << i.first << "\t\t" << i.second << '\n';
}
test.txt
File system is huge subject need more work out. file is plain text file
This is some junk words. simple program for words counting in text file. 1 2 3
2 2 2 2
@ % & ^ *
all so good
this test text file
Output:
Word Repeated Times
-------------------------
File 1
out. 1
1 1
system 1
2 5
is 3
need 1
huge 1
subject 1
work 1
more 1
% 1
file 3
plain 1
text 3
This 1
some 1
junk 1
program 1
words. 1
simple 1
for 1
words 1
counting 1
in 1
file. 1
so 1
3 1
@ 1
& 1
^ 1
this 1
* 1
all 1
good 1
test 1
Answer: I see some things that may help you improve your code.
Simplify your code
You don't need a lambda at all. Use this instead:
futures.push_back(std::async(wordsInFile,argv[i]));
Also, instead of if (!file.is_open()) use the more idiomatic if (!file).
Fix the counting
If we have two files which both contain the word fox exactly once then I would expect the result of this program to print fox 2 but in fact, it would falsely claim a count of 1. The problem is this line:
words[j.first] = j.second;
This overwrites the count instead of accumulating it. This is what you need instead:
words[j.first] += j.second;
Use reserve to prevent reallocations
This code already knows how many futures are going to be created. The code can be made a little more efficient by using this:
futures.reserve(argc-1);
With large numbers of files, this eliminates the re-allocation that would be required by dynamically resizing the vector.
Check for exceptions
If any of the files can't be read (that is, if a file can be opened but not read, as with a directory under Linux), this code will throw an error that isn't caught and therefore crashes. Since future::get() will throw any error that was stored, it's that part of the code that should check for exceptions. One alternative would be this:
Words results;
try {
results = i.get();
}
catch(std::exception &err)
{
std::cerr << "ERROR: " << err.what() << '\n';
}
catch(std::string &err)
{
std::cerr << "ERROR: " << err << '\n';
}
for (const auto& j : results)
{
words[j.first] += j.second;
}
Note that this will simply skip the offending file name but continue to process the rest.
Measure your results
Create a non-parallel version of this code and compare the timing to see if it's faster or slower than this version. I have found that it's often true that the overhead of the implicit (as in async) or explicit (as in thread) thread creation swamps the savings in time by making things parallel. The only way to know for sure is to time it. | {
"domain": "codereview.stackexchange",
"id": 11065,
"tags": "c++, c++11, multithreading, file"
} |
Cancellation of surface integrals (involving Maxwell's equation) | Question: In our physics class today, we wanted to derive ${curl}(\bar B) =\mu_0 \bar j $ from the Maxwell equation $$\oint_C \bar B(\bar r).d\bar l = I_{net}\mu_0$$
We did this using Stokes theorem, and in the last step we used the step:
$$\iint_S {curl}(\bar B).\bar n dA = \iint_S \mu_0\bar{j}.\bar n dA \Rightarrow {curl}(\bar B) = \mu_0\bar j$$
I'm quite convinced this cancellation is not mathematically legit, but maybe I'm overseeing a physical argument or something like that so I can justify this step anyway?
Answer: No physics, just a purely mathematical argument.
Stokes's theorem: $\oint_{C} {\vec{v}}\cdot d{\vec l} = \int\int_{S} curl\vec{v} \cdot \vec{dS}$, where $\vec v$ is any differentiable vector field and $C$ any simple (piecewise) smooth loop that bounds a simple smooth (etc.) surface $S$. So if Maxwell's equation is written as $\oint_{C} {\vec{B}}\cdot d{\vec l} = \int\int_{S} \mu_0\vec{J} \cdot \vec{dS}$ then one has $ \int\int_{S} curl\vec{B} \cdot \vec{dS} = \int\int_{S} \mu_0\vec{J} \cdot \vec{dS}$ for any such surface $S$, even for an infinitesimally small one. Take limit and you get pointwise the differential form $curl\vec{B}= \mu_0 \vec{J}$ of Maxwell's equation. | {
"domain": "physics.stackexchange",
"id": 39199,
"tags": "maxwell-equations, integration, magnetostatics"
} |
Work and change of kinetic energy | Question: An object with mass 10 kg lies still on a frictionless table. A force that goes evenly from 50 N to 0 in 2 seconds is then applied to the object. What is the object's speed after 2 seconds?
So I first calculated the work done on the object to be 50 J. Then I used the fact that the work done is equal to the change in kinetic energy, $1/2*m*v^2 = W$, and solved for v: $v=\sqrt{(2W)/m}$. I then got 3.2 m/s, which was wrong.
I did eventually solve the problem using another method, but why did this method not work?
Answer: How did you compute the work?
The force is changing with time evenly and by that I think they mean linearly as:
$$F(t)=F_0-{F_0/2}t$$
so that at $t=0$ we have $F(0)=F_0=50N$ while at $t=2$ we have $F(2)=0N$.
So we have, using Newton's law:
$$m{dv\over dt}=F_0-{F_0\over2}t$$
where $v$ is the velocity, meaning
$${dv\over dt}={F_0-{F_0\over 2}t\over m}$$
which can be easily solved separating variables as:
$$dv={F_0\over m}dt-{F_0\over 2m}tdt$$
and by integrating from 0 to $t$:
$$v(t)=v(0)+{F_0\over m}t - {F_0\over 4m}t^2$$
(note that $v(0)=0$).
Thus $v(2)={50N\over 10kg}2s-{50N\over 4s\,10kg}4s^2=10m/s-5m/s=5m/s$
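As a sanity check, the answer's $v(t)$ and the work integral $W=\int_0^2 F(t)\,v(t)\,dt$ can be evaluated numerically in plain Python. Note that $\int_0^2 F\,dt = 50\ \mathrm{N\,s}$ is the impulse, not the work, which is presumably where the 50 in the question came from; the work itself comes out to $\tfrac12 m\,v(2)^2 = 125\ \mathrm{J}$:

```python
m, F0 = 10.0, 50.0                          # kg, N
F = lambda t: F0 - (F0 / 2.0) * t           # force in N, t in seconds
v = lambda t: (F0 / m) * t - (F0 / (4.0 * m)) * t ** 2  # velocity from above

print(v(2.0))                               # 5.0 m/s

# work W = integral of F(t) * v(t) dt over [0, 2], via the midpoint rule
n = 100_000
dt = 2.0 / n
W = sum(F((i + 0.5) * dt) * v((i + 0.5) * dt) * dt for i in range(n))
print(round(W, 3))                          # 125.0 J

print(0.5 * m * v(2.0) ** 2)                # 125.0 J, matching the kinetic energy
```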
To compute work you should do (since $dx=vdt$):
$$W=\int_0^{x(2)} F(t)dx=\int_0^2F(t)v(t)dt$$ | {
"domain": "physics.stackexchange",
"id": 35171,
"tags": "homework-and-exercises, newtonian-mechanics, work"
} |
How did early radar determine range/ distance precisely? | Question: Wikipedia talks about precise timing of the returned radar pulse, with an animation of a clock.
But they didn't have atomic clocks and such before or during WWII.
So how did they determine distances and (possibly) velocities back then?
Answer: Early radar uses analog signals and displays.
For example, the WWII Chain Home system would send each pulse at the same instant that an electron beam started across a CRT. The beam would be deflected by any received signal, and the left-right offset gave the distance from the return time:
Chain Home display showing several target blips between 15 and 30 miles distant from the station.
Later, the familiar circular display was created. The beam starts at the center and runs outward in a direction corresponding to where the antenna points at that instant, brightening when an echo is received:
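The conversion behind these figures is just the round-trip time $t = 2r/c$; a quick sketch:

```python
C = 299_792_458.0          # speed of light, m/s

def echo_delay_us(range_m):
    """Round-trip delay, in microseconds, for a target at the given range (metres)."""
    return 2.0 * range_m / C * 1e6

print(echo_delay_us(1_000))            # ~6.7 us for a target 1 km away
print(echo_delay_us(100 * 1609.344))   # ~1070 us (about a millisecond) for 100 miles
```

Delays in the microsecond-to-millisecond range are easy to display with an analog sweep circuit, which is why no precise clock is needed.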
Although the radar pulse moves out and back at the speed of light, radar is used over long distances. A target a kilometer away is a six microsecond echo; 100 miles is just over a millisecond. The tracking signals involved are MHz or less, often much less. | {
"domain": "physics.stackexchange",
"id": 64013,
"tags": "electromagnetism, history, radar"
} |
Is there a published upper limit on the electron's electric quadrupole moment? | Question: I understand an electric quadrupole moment is forbidden in the standard electron theory. In this paper considering general relativistic corrections (Kerr-Newman metric around the electron), however, there is a claim that it could be on the order of $Q=-124 \, \mathrm{eb}$. That seems crazy large to me, but I can't find any published upper limits to refute it. Surely someone has tested this? Maybe it's hidden in some dipole moment data? If not, is anyone planning to measure it soon?
Answer: I think that the paper is completely wrong and the conclusions are preposterous. The paper argues that when one models the vicinity of the electron as a rotating black hole, he will get new effects.
However, the black hole corresponding to the electron mass – which is much lighter than the Planck mass – would have a much smaller radius than the Planck length. It really means that the Einstein-Hilbert action can't be trusted and all the quantum corrections are important. It also implies that the typical distance scale in any hypothetical electric quadrupole moment of the electron would be much shorter than the Planck scale – surely not a femtometer. Also, the black holes with masses, charges, and spins similar to those of electrons would heavily violate the extremality bound – something that isn't a problem because the classical general theory of relativity can't be trusted for such small systems.
The facts in the previous paragraphs are just different perspectives on the universal facts that gravity may be neglected in any observable particle physics, a fact that the author of the paper tries to deny.
Proof of the vanishing of the quadrupole moment
More seriously, one may prove from quantum mechanics that the quadrupole moment for an electron, a spin-1/2 particle, has to vanish because of the rotational symmetry. The quadrupole moment is a traceless symmetric tensor and because the electron's spin is the only quantum number of the particle that breaks the rotational symmetry, one would have to express the quadrupole moment as a function of the spin, i.e. as
$$ Q_{ij} = \gamma\cdot (3S_i S_j+3S_j S_i - 2S^2 \delta_{ij}) $$
However, in the rest frame, $S_i$ simply act as multiples of Pauli matrices (with respect to the up/down basis vectors of the electron's spin) and the anticommutator $\{S_i,S_j\}$ above – needed for the symmetry of the tensor – is nothing else than the multiple of the Kronecker delta symbol, so it cancels against the last term. $Q_{ij}=0$ for all spin-1/2 objects (and similarly, of course, for all spin-0 objects). Only particles (nuclei) with the spin at least equal to $j=1$ (the case of deuteron) may have a nonzero electric quadrupole moment. This simple group-theoretical selection rule is the reason why you won't find any experiments trying to measure the electron's (or proton's or neutron's or other spin-1/2 particles') electric quadrupole moment. Such experiments would be as nonsensical as the paper quoted by the OP.
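The cancellation can be checked directly with the Pauli matrices, $S_i = \sigma_i/2$ (a quick numeric sketch):

```python
import numpy as np

# spin-1/2 operators S_i = sigma_i / 2
S = [np.array([[0, 1], [1, 0]], dtype=complex) / 2,
     np.array([[0, -1j], [1j, 0]]) / 2,
     np.array([[1, 0], [0, -1]], dtype=complex) / 2]
S2 = sum(s @ s for s in S)                  # = (3/4) * identity for spin 1/2

# Q_ij = gamma * (3{S_i, S_j} - 2 S^2 delta_ij) vanishes component by component
for i in range(3):
    for j in range(3):
        Q_ij = 3 * (S[i] @ S[j] + S[j] @ S[i]) - 2 * S2 * (i == j)
        assert np.allclose(Q_ij, 0)
print("Q_ij vanishes identically for spin 1/2")
```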
Note that unlike the case of the electron's dipole moment, one doesn't have to rely on any C, P, or CP-symmetry (which are broken) to show that the quadrupole vanishes. To deny the vanishing, one would have to reject the rotational symmetry.
Let me wrap by saying that the quadrupole moment may always be interpreted as some "elliptical shape" of the object or particle. This ellipsoid would be stretched along some axes and shrunk along other axes. However, the electron's spin-up and spin-down state really pick the same preferred axis in space – the sign doesn't matter for the quadrupole – so they can't have different values of the quadrupole moment. In other words, the quadrupole moment doesn't depend on the spin, and because the spin is the only rotational-symmetry-breaking quantum number that the electron has, the quadrupole moment has to be zero. (A Pauli-matrix-free proof.) | {
"domain": "physics.stackexchange",
"id": 5332,
"tags": "specific-reference, electrons, research-level"
} |
Identifying different parts of a cave | Question: Hopefully this is on topic here - I'm wondering if there are names for the different "biomes" that are present within a cave.
Some of these "biomes" include:
Open (no) ceiling, sunlit, potentially with trees growing (grotto?)
Dark, little to no natural light ("standard" cave)
Containing an underground stream or lake, potentially supporting aquatic life
I know little about caves, but they (can) have diverse areas that give rise to all sorts of interesting creatures. Are there any specific names given to the different parts of a cave?
Please let me know if I can edit this question to make it clearer or more on-topic.
Answer: The karstic aquifer is vertically divided into several zones. These zones host caves that are typical for them and are also named after them. The zone closest to the surface is called the epikarst, which is commonly riddled with karren and is also the most affected by outside conditions. Below it lies the vadose zone, which is, from a hydrological standpoint, unsaturated and hosts mostly shafts and vertical caves. The epiphreatic zone is flooded occasionally, and caves in this part are mostly horizontal; some have water, some do not. The lowest is the phreatic zone, which is flooded all the time. Passages of caves here jump up and down in vertical profile along fissures and bedding planes.
This is only a general explanation; to find out more I would suggest this book:
http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470849967.html | {
"domain": "earthscience.stackexchange",
"id": 1062,
"tags": "climate, geography"
} |
Night Louder Than The Day | Question: Have you ever felt that it is quieter at night compared to the day? I'm aware of the fact that the reduction in people's activity makes the night quiet... but is there something else which in a way amplifies sound waves at night?
Answer: The difference between day and night can be pretty big, up to tens of decibels. But most of this is likely due to different activity levels - less traffic, industry, voices/animal sounds etc.
Outdoor sound propagation depends on a number of factors, but their impact in day and night will be variable.
In particular, there is a temperature dependency in how fast sound intensities are attenuated in air: as it gets colder attenuation goes up somewhat. So one might think that as the day cools off the range of sounds decrease. However, there are complications here like humidity changes (dampens some frequencies but not others).
Temperature also affects the speed of sound, making temperature gradients refract sounds in the direction of lower sound velocity (that is, lower temperature). In the night a temperature inversion is common, with the ground and the low air colder than the upper air, and this tends to make sound travel longer distances since it is focused along the ground rather than radiated upward.
A further issue is wind, which can amplify sound in some directions, add turbulent damping, and of course cause noise.
In short, there are various attenuation effects that could play a role. But I suspect the main cause is just less noise sources. | {
"domain": "physics.stackexchange",
"id": 49712,
"tags": "waves, acoustics, atmospheric-science"
} |
Simultaneity in General Relativity | Question: Take the following situation:
An astronomer is on the surface of the Sun (assume he's not rotating around the Sun). He observes two stars, at two different locations in the universe, exploding. Both stars exploded 3000 years ago. Now, the astronomer goes somewhere else, but he remains stationary relative to the Sun (perhaps outside the event horizon of a Schwarzschild black hole that is neither moving towards nor away from the Sun). Will the astronomer still measure the stars exploding at the same time?
I read that the concept of relativity of simultaneity in general relativity is kind of meaningless, but isn't my question in the above situation valid? Does the concept of relativity of simultaneity hold in General Relativity?
There seems to be a bit of confusion on my description (as can be seen in the comment section):
Essentially I am asking: if we take into account the light travel time (time the astronomer "saw" it minus the time for the light to travel to the observer), will the explosion still be simultaneous?
Answer: I'll assume that you do a good job of using various clues (the time you see the light, your location when you see it, the direction of the light, and some estimate of the distance to the star) and correctly work out more or less where each explosion took place in spacetime.
In this case, no matter where you are, no matter your speed, and no matter anything else about you, you will derive the same spacetime locations for the explosions, because the locations are an objective property of the external world and we're assuming that you measured them correctly.
There are a lot of different ways you could write down the locations. Those ways are called coordinate systems. Some coordinate systems have a coordinate called "t" in them, and depending on the coordinate system, the t coordinates of the two explosions might be the same or might be different. This isn't a property of the explosions, but of the arbitrary choice of coordinates.
The choice of coordinates is really up to you. In introductions to special relativity it's common to assume that everybody picks an "egocentric" coordinate system (a term I just made up for coordinates in which they're at rest at the origin). If everyone does that, then people moving at different speeds are likely to disagree about the equality of the t coordinates of various things. But (if they're good scientists) they won't disagree about the objective locations of those things, because they'll understand that their choice of coordinates doesn't influence the actual locations. And also (if they're good scientists) they'll understand that they don't need to choose egocentric coordinates, and especially if they're collaborating with someone else they'd be better off agreeing on a common coordinate system to communicate their results.
As I said in my other recent answer, in general relativity the choice of useful coordinate systems tends to be more restricted because most spacetimes have fewer symmetries than the Minkowski spacetime of special relativity. You can still use any coordinates you want, but most will be inconvenient because the metric will have an unnecessarily complicated form. In particular, it tends to be inconvenient to use egocentric coordinates.
When you're talking about the universe on a large scale, only one time/"t" coordinate, called cosmological time, is convenient, because it's the only one that respects the large-scale spacetime symmetries of the universe we live in. When you see a "time since the big bang" in articles about cosmology or astronomy, it's cosmological time.
When you work out the coordinates of the two stars, you'll probably end up with the same t coordinates as someone else working independently on the same problem, because you both will pick the most convenient t coordinate, and that's cosmological time. It doesn't matter where you are or how fast you're moving, since it's dictated by the objective "shape of the universe" which everyone agrees on in principle, if they have accurate enough equipment to measure it well. You could pick a different coordinate system and disagree with the other experimenter, but that doesn't say anything profound about the nature of objective reality, it just says that you picked a different coordinate system. | {
"domain": "physics.stackexchange",
"id": 61395,
"tags": "general-relativity, spacetime"
} |
Wave packets and amplitude | Question: If a wave packet is given by:
My question is basically: how do we choose the right $A(k)$ to fit the particle we are looking at, or does it not matter (as my textbook seems to imply), which seems counterintuitive?
Answer: If you solve the Schrodinger equation for a free particle the solutions are plane waves, and any sum of plane waves is also a solution. Since any wave packet profile can be constructed by summing plane waves then your equation with any $A(k)$ is also a solution of the Schrodinger equation.
The $A(k)$ is not determined by the Schrodinger equation but rather it's a boundary condition. You choose the $A(k)$ that matches the system you're trying to describe. For example if your particle is completely delocalised the $A(k)$ is a delta function, which physically means there is a precise momentum but the position is completely unknown. The other extreme would be if you pinpoint the particle's position precisely, in which case $A(k) = 1$ so the momentum is completely uncertain.
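For concreteness, here is a numerical sketch of such a superposition of plane waves, using a Gaussian $A(k)$ with arbitrarily chosen parameters (all the numbers and grids here are illustrative assumptions, not from the question):

```python
import numpy as np

k = np.linspace(-10, 10, 2001)            # wavenumber grid
dk = k[1] - k[0]
k0, sigma_k = 3.0, 1.0                    # centre and width of A(k), chosen freely
A = np.exp(-(k - k0) ** 2 / (2 * sigma_k ** 2))

# psi(x) = sum over k of A(k) * exp(i k x) * dk  (discretised integral)
x = np.linspace(-5, 5, 501)
psi = np.array([np.sum(A * np.exp(1j * k * xi)) * dk for xi in x])

# a Gaussian A(k) of width sigma_k gives |psi|^2 an rms width of 1/(sqrt(2)*sigma_k):
prob = np.abs(psi) ** 2
width_x = np.sqrt(np.sum(x ** 2 * prob) / np.sum(prob))
print(round(float(width_x), 2))           # ~0.71 for sigma_k = 1
```

The reciprocal relationship between the width of $A(k)$ and the width of $|\psi(x)|^2$ is the uncertainty trade-off described above: a sharper momentum profile gives a more spread-out position profile.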
In practice we'd probably usually choose an $A(k)$ that is Gaussian, because Gaussians are easy to work with. In that case the width of the Gaussian would correspond to the uncertainty in momentum. | {
"domain": "physics.stackexchange",
"id": 15198,
"tags": "quantum-mechanics, wavefunction"
} |
Why checking the distribution of data is needed before calculating Gower distance? | Question: I read this article (Clustering datasets having both numerical and categorical variables) to learn how to perform clustering on datasets with more than just numerical variables.
Before calculating the Gower distance, the distributions of the data are plotted and the positively skewed distribution is log-transformed (the one in the top right corner).
Anyone knows the reason of doing that? Can you explain in an easy way? Thanks!
Answer: Log transformation is necessary to avoid the data being too sparse or having high variability. In other words, the log has the role of compressing the data and producing cleaner distributions, as you can see in the top right corner.
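A small illustration of what the log does to a skewed variable (synthetic lognormal data, assumed purely for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # positively skewed data

def skew(a):
    """Sample skewness: third standardised moment."""
    a = np.asarray(a, dtype=float)
    return float(np.mean((a - a.mean()) ** 3) / a.std() ** 3)

print(skew(x))          # strongly positive: a long right tail
print(skew(np.log(x)))  # close to 0: the log makes the distribution symmetric
```

With the tail compressed, distances computed on this variable are no longer dominated by a handful of extreme values.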
If you try to calculate the Gower distance without the log, you will see that the distances cannot be meaningful between each other. | {
"domain": "datascience.stackexchange",
"id": 9705,
"tags": "machine-learning, r, clustering, similarity"
} |
Vector math of applying an X-gate on an $|i\rangle$ basis state | Question: It is well known that the X-gate will apply a rotation about the x-axis on the bloch sphere.
Knowing this, the $|i\rangle$ state should be converted to the $|-i\rangle$ state on the application of this gate.
To be clear I define these states as: $|i\rangle$ = ${1 \over \sqrt{2}}(|0\rangle + i|1\rangle)$ and $|-i\rangle$ = ${1 \over \sqrt{2}}(|0\rangle - i|1\rangle)$
When trying to do the vector math with $X|i\rangle$ I get:
$\begin{bmatrix}0&1\\1&0\end{bmatrix}\frac{1}{\sqrt{2}}\begin{bmatrix}1\\i\end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix}i\\1\end{bmatrix}$
But I expect to be getting the $|-i\rangle$ state: $\frac{1}{\sqrt{2}}\begin{bmatrix}1\\-i\end{bmatrix}$
What am I doing wrong, am I missing some intrinsic property of quantum theory?
Answer: In your calculations you are getting the state
$$
|\psi \rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} i \\ 1 \end{pmatrix}
$$
instead of what you are expecting
$$
|\phi \rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix}.
$$
Well, it turns out that in quantum theory these two states are considered the same! This is because they only differ by a global phase. That is, there is an $\alpha \in \mathbb{C}$ with $|\alpha|=1$ such that $|\phi\rangle = \alpha |\psi\rangle$. In this case $\alpha = -i$.
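A quick numerical check of this relation (a sketch using NumPy):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 1j]) / np.sqrt(2)        # |i>
phi = np.array([1, -1j]) / np.sqrt(2)       # |-i>

result = X @ psi                            # = [i, 1]/sqrt(2), the state computed above
assert np.allclose(phi, -1j * result)       # |phi> = alpha * (X|psi>) with alpha = -i
# identical measurement probabilities in the computational basis:
assert np.allclose(np.abs(result) ** 2, np.abs(phi) ** 2)
print("same state up to the global phase -i")
```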
Global phase is considered irrelevant in quantum theory as it is undetectable. Any measurement protocol you apply to one state will give the exact same probabilities for the other state. | {
"domain": "quantumcomputing.stackexchange",
"id": 2972,
"tags": "quantum-gate, mathematics, textbook-and-exercises, matrix-representation, bloch-sphere"
} |
Getting data from different services | Question: I have a piece of code that tries to get some data from a different services, falling back to the next one if previous fails.
The code looks a little bit ugly, though. Any suggestions on how to rewrite it in a more concise way?
(defn get-data [query]
(let [data-cached (get-data-from-cache query)
data-service-1 (if (nil? data-cached)
(get-data-service-1 query)
data-cached)
data-service-2 (if (nil? data-service-1)
(get-data-service-2 query)
data-service-1)
data (if (nil? data-service-2)
(get-data-service-3 query)
data-service-2)]
data))
Answer: Perhaps the or macro could be used here:
(defn get-data [query]
(or (get-data-from-cache query)
(get-data-service-1 query)
(get-data-service-2 query)
(get-data-service-3 query)))
or short-circuits, so the first match will return and the rest will not be evaluated | {
"domain": "codereview.stackexchange",
"id": 12271,
"tags": "clojure"
} |
K fold cross validation reduces accuracy | Question: I am working on a machine learning classifier, and when I arrive at the moment of dividing my data into a training set and a test set, I want to compare two different approaches. In one approach I just split the dataset into a training set and a test set, while with the other approach I use k-fold cross-validation.
The strange thing is that with cross-validation the accuracy decreases: if I have 0.87 with the first approach, with cross-validation I have 0.86.
Shouldn't cross-validation increase my accuracy? Thanks in advance.
Answer: Chance plays a big role when the data is split. For example maybe the training set contains a particular combination of features, maybe it doesn't; maybe the test set contains a large proportion of regular "easy" instances, maybe it doesn't. As a consequence the performance varies depending on the split.
Let's imagine that the performance of your classifier would vary between 0.80 and 0.90:
In one approch I just split the dataset into training set and test set
With this approach you throw the dice only once: maybe you're lucky and the performance will be close to 0.9, or you're not and it will be close to 0.8.
while with the other approch I use k fold cross validation.
With this approach you throw the dice k times, and the performance is the average across these $k$ runs. It's more accurate than the previous one, because by averaging over several runs the performance is more likely to be close to the mean, i.e. the most common case.
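The dice analogy can be simulated. Here the accuracy measured on a single split is assumed, purely for illustration, to land uniformly in $[0.80, 0.90]$:

```python
import random
import statistics

random.seed(1)

def one_estimate():
    # hypothetical: the accuracy measured on one random train/test split
    # lands anywhere in [0.80, 0.90] depending on the split
    return random.uniform(0.80, 0.90)

singles = [one_estimate() for _ in range(10_000)]
kfolds = [statistics.mean(one_estimate() for _ in range(10))
          for _ in range(10_000)]

print(statistics.mean(singles), statistics.stdev(singles))  # mean ~0.85, sd ~0.029
print(statistics.mean(kfolds), statistics.stdev(kfolds))    # mean ~0.85, sd ~0.009
```

Both estimators have the same mean, but the 10-fold average is about $\sqrt{10}$ times less variable, which is exactly the sense in which cross-validation is "more accurate".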
Conclusion: k-fold cross-validation isn't meant to increase performance, it's meant to provide a more accurate measure of the performance. | {
"domain": "datascience.stackexchange",
"id": 9272,
"tags": "machine-learning, classification, cross-validation, accuracy"
} |
Lost Ability to Regenerate Body Parts during the Transition from Amphibians to Mammals | Question: Why have higher-order animals lost the ability to regenerate body parts during evolution? Wouldn't it be better for survival? What is the evolutionary theory behind it?
Answer: Regeneration of limbs in amphibians is an adaptation where new limbs are generated by dedifferentiated cells. This process is tightly linked to the embryonic program which, in most animal cells, is a difficult program to access once terminal differentiation has occurred (but it's possible, e.g. induced pluripotent stem cells).
Of note, amphibians have a unique life cycle that includes metamorphosis. It is thought that organisms with more diverse stages in their development may have increased potential for regeneration. Mammals, after birth, do pass through multiple stages of development, but these stages are largely continuous.
So the quick answer is: mammalian limbs are made up of terminally differentiated cells (and specialized stem cells) with high barriers of reprogramming. They're difficult to reprogram because there hasn't been strong selective pressure to do so. Limb regeneration might seem like a great adaptation, but it doesn't seem to be that important for the success of mammals. Additionally, selection for "maintaining a full set of functional limbs" could evolve many different ways. For example, primates may have evolved behaviors or reinforced anatomy that reduces their risk of injuring a limb. Also, larger animals are probably less likely to have limbs removed just given their size. And if they do, it may have little effect on their reproductive fitness. A legless amphibian probably finds few, if any, mates.
Yun Gates, and Brock. Regulation of p53 is critical for vertebrate limb regeneration, PNAS 2013 | {
"domain": "biology.stackexchange",
"id": 2502,
"tags": "evolution, healing"
} |
What happens if we vacuum out the air inside a closed system? | Question: Excuse me, I have a closed system and want to vacuum out the air inside it. What happens to the temperature inside that system? Does it increase or decrease?
Answer: Based on Nashwan's last clarification that he is simply removing air (and assuming no heat transfer is involved), I suggest the following answer:
Since temperature is a measure of the average translational kinetic energy of the air molecules in the system, if we remove some of the molecules we also remove the kinetic energy possessed by those molecules. Ergo the temperature of the remaining gas should be less.
Hope that helps. | {
"domain": "physics.stackexchange",
"id": 51172,
"tags": "thermodynamics, pressure, temperature, vacuum"
} |
Finding the best poker hand in a connected grid structure where order matters | Question: I am trying to find the best poker hand in a connected grid, where order matters. An illustration is the best way of explaining the situation.
This grid has 12 random cards, in random positions. Each card is connected to the vertically, horizontally and diagonally adjacent cards.
The hands are standard poker hands but no extra cards are allowed – only the cards which constitute the hand. So, a pair is only two cards connected, four of a kind is just four cards connected, etc.
The best hand for this grid highlighted:
Order matters, so a straight must be in the correct order (e.g. 10, J, Q, K, A is valid but 10, A, J, Q, K is not). There is no straight in this particular grid; the 10s are not connected to the Jack directly.
I am looking for an algorithm that finds the best hand for a random grid. The grid isn’t ever going to be much larger than this grid of 12.
I am also looking for an algorithm that finds out if there are no moves at all – this would alert the player to this fact to save them searching for too long…
Answer: I recommend you use breadth-first search (BFS) through the state space.
Start by trying to find (partial) straights that are ascending. For each possible starting position (there are 12 possibilities), use BFS to search for a path that starts at that position and makes an ascending straight (i.e., the numbers increment by one each time you move forward). This will find you the longest ascending straight.
Next search for descending (partial) straights in the same way. This will find you the longest descending straight.
You can also search for straight flushes in this way, too.
Next for x-of-a-kinds in the same way. This will find you the longest sequence that are all of the same rank (e.g., two-of-a-kind, three-of-a-kind, four-of-a-kind, five-of-a-kind).
Next search for flushes in the same way. This will find you the longest sequence that are all of the same suit.
You can look for two-pair and full-house separately, with a different method tailored to that problem. Enumerate all pairs (i.e., all pairs of adjacent positions where the two cards have the same rank). Now check whether any two of those pairs are adjacent; if so, you have found a two-pair. Similarly, enumerate all three-of-a-kinds (i.e., paths of length 3 where the three cards have the same rank). Now check whether any pair is adjacent to any three-of-a-kind; if so, you have found a full-house.
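A minimal sketch of the BFS straight-search in Python, on a small hypothetical grid (not the one pictured); the same pattern extends to descending straights, flushes, and x-of-a-kinds by changing the step condition:

```python
from collections import deque

# Hypothetical 3x4 grid of (rank, suit); ranks 2..14 (11=J, 12=Q, 13=K, 14=A).
grid = {
    (0, 0): (10, 'h'), (0, 1): (11, 's'), (0, 2): (12, 'd'), (0, 3): (13, 'c'),
    (1, 0): (2, 'h'),  (1, 1): (9, 's'),  (1, 2): (14, 'd'), (1, 3): (3, 'c'),
    (2, 0): (7, 'h'),  (2, 1): (8, 's'),  (2, 2): (4, 'd'),  (2, 3): (5, 'c'),
}

def neighbours(pos):
    """All horizontally, vertically, and diagonally adjacent occupied cells."""
    r, c = pos
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and (r + dr, c + dc) in grid:
                yield (r + dr, c + dc)

def longest_ascending_straight():
    """BFS over paths where each step moves to an adjacent cell one rank higher."""
    best = []
    queue = deque([pos] for pos in grid)   # every cell is a possible start
    while queue:
        path = queue.popleft()
        if len(path) > len(best):
            best = path
        last = path[-1]
        for nxt in neighbours(last):
            if nxt not in path and grid[nxt][0] == grid[last][0] + 1:
                queue.append(path + [nxt])
    return [grid[p][0] for p in best]

print(longest_ascending_straight())   # [7, 8, 9, 10, 11, 12, 13, 14]
```

The strictly-ascending step condition prunes the search heavily, so even though the state space is all simple paths, on a 12-card grid this runs essentially instantly.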
In this way you can find the best poker hand in the grid. I expect that this will be very efficient. | {
"domain": "cs.stackexchange",
"id": 10687,
"tags": "algorithms, search-algorithms, square-grid"
} |
Can quarks and antiquarks exist in an atom without annihilating? | Question: Elaborating on antimatter, where atoms consist of antiprotons, antineutrons, and positrons
Is it possible to have something in between, with antiprotons, positrons, and regular neutrons?
Can such an atom exist in a stable mode without the anti-upquarks annihilating the upquarks and
likewise for the antidowns and downs?
Taking, for simplicity, an example analogous to deuterium, an atom consisting of an antiproton, a neutron and a positron. Is then the reaction$$\bar{u}\bar{u}\bar{d}+udd\rightarrow\bar{u}d+Energy$$inevitable? Or can these quarks/antiquarks exist safely confined within their respective particle in the nucleus to make a stable atom?
Answer: I don't think it's annihilation you have to worry about. It's my understanding that at typical energies adjacent (anti)baryons in a cluster confine their respective valence (anti)quarks too tightly for them to annihilate those in another (anti)baryon.
The real stability issue for a neutron/antiproton nucleus is that beta-decay of one or more neutrons to protons will reduce the electrostatic potential. Having said that, it's possible some specific numbers of neutrons & antiprotons are especially stable; you'd just need to deduce their equivalent of the whole of neutron-proton physics: magic numbers, drip lines etc. That's well beyond the scope of this question.
(A thought on terminology: if a cluster of nucleons leads to nuclear physics, we may as well call the antinucleon case antinuclear physics, while your mixed case would be sesquinuclear physics.) | {
"domain": "physics.stackexchange",
"id": 82607,
"tags": "particle-physics, antimatter"
} |
Simple neural network in c++ | Question: I have implemented a neural network in C++, but I'm not sure whether my implementation is correct. My code for the implementation is given below.
As an inexperienced programmer, I welcome any and all insights to improve my skill.
#include "csv.h"
using namespace rapidcsv;
using namespace std;
class Neuron;
struct connection{
connection(int i){
weight=a_weight=0;
id=i;
}
void weight_val(double w){
weight=w;
}
void weight_acc(double a){
a_weight+=a;
}
void reset(){
a_weight=0.0;
};
void move(double m,double alpha,double lambda){
weight=weight-alpha*a_weight/m-lambda*weight;
}
double weight,a_weight;
int id=0;
};
typedef vector <Neuron> layer;
class Neuron{
public:
Neuron(int idx,int nxt_layer_size){
n_id=idx;
for(int i=0;i<nxt_layer_size;i++){
n_con.push_back(connection(i));
n_con[i].weight_val(rand()/double(RAND_MAX));
}
set_val(0.0);
is_output_neuron=false;
}
void hypothesis(layer &prev_layer){
double sm=0;
for(int i=0;i<prev_layer.size();i++){
sm+=prev_layer[i].get_val()*prev_layer[i].get_con(n_id).weight;
}
set_val(sigmoid(sm));
if(is_output_neuron){
cost+=target*log(get_val())+(1-target)*log(1-get_val());
}
}
void calc_delta(layer next_layer={}){
if(is_output_neuron||next_layer.size()==0){
delta=get_val()-target;
}else{
double sm=0;
delta=delta_dot(next_layer)*sigmoid_prime(get_val());
}
}
void calc_grad(layer &nxt_layer){
for(int i=0;i<nxt_layer.size()-1;i++){
n_con[i].weight_acc(get_val()*nxt_layer[i].get_delta());
}
}
double flush_cost(){
double tmp=cost;
cost=0;
return tmp;
}
double get_delta(){
return delta;
}
void set_target(double x){
target=x;
is_output_neuron=true;
}
double get_val(){
return a;
}
void set_val(double x){
a=x;
}
void update_weight(double m,double alpha,double lambda){
for(int i=0;i<n_con.size();i++){
n_con[i].move(m,alpha,lambda);
n_con[i].reset();
}
}
connection get_con(int idx){
return n_con[idx];
}
private:
int n_id;double a;
vector <connection> n_con;
static double sigmoid(double x){
return 1.0/(1+exp(-x));
}
static double sigmoid_prime(double x){
return x*(1-x);
}
double delta_dot(layer nxt_layer){
assert(nxt_layer.size()-1==n_con.size());
double sm=0;
for(int i=0;i<n_con.size();i++){
sm+=n_con[i].weight*nxt_layer[i].get_delta();
}
return sm;
}
double target,delta,cost=0;bool is_output_neuron;
};
class Network{
public:
Network(vector <int> arch){
srand(time(0));
for(int i=0;i<arch.size();i++){
int nxt_layer_size=i==arch.size()-1?0:arch[i+1];
layer tmp;
for(int j=0;j<=arch[i];j++){
tmp.push_back(Neuron(j,nxt_layer_size));
}
tmp.back().set_val(1.0);
n_layers.push_back(tmp);
}
}
vector <double> feed_forward(vector <double> in,bool output=false){
vector <double> ot;
assert(in.size()==n_layers[0].size()-1);
for(int i=0;i<in.size();i++){
n_layers[0][i].set_val(in[i]);
}
for(int i=1;i<n_layers.size();i++){
for(int j=0;j<n_layers[i].size()-1;j++){
n_layers[i][j].hypothesis(n_layers[i-1]);
}
}
if(output) {
for(int i=0;i<n_layers.back().size()-1;i++){
ot.push_back(n_layers.back()[i].get_val());
}
}
return ot;
}
void feed_backward(vector <double> ot){
assert(ot.size()==n_layers.back().size()-1);
for(int i=0;i<ot.size();i++){
n_layers.back()[i].set_target(ot[i]);
}
for(int i=0;i<n_layers.back().size()-1;i++){
n_layers.back()[i].calc_delta();
}
for(int i=n_layers.size()-2;i>=0;i--){
for(auto &a:n_layers[i]){
a.calc_delta(n_layers[i+1]);
a.calc_grad(n_layers[i+1]);
}
}
}
void done(double m){
for(unsigned i=0;i<n_layers.size();i++){
for(unsigned j=0;j<n_layers[i].size();j++){
n_layers[i][j].update_weight(m,alpha,lambda);
}
}
}
double calc_cost(){
for(int i=0;i<n_layers.back().size()-1;i++){
cost_acc+=n_layers.back()[i].flush_cost();
}
return cost_acc;
}
double get_cost(double m){
double tmp=cost_acc;
cost_acc=0;
return -tmp/m;
}
void set_hyper_params(double alpha,double lambda){
this->alpha=alpha;
this->lambda=lambda;
}
private:
vector <layer> n_layers;
double cost_acc=0,alpha,lambda;
};
int main() {
Network net({4,5,3});
net.set_hyper_params(0.1,0.0);
Document doc("../dataset.csv");
vector <double> x1=doc.GetColumn<double>("x1");
vector <double> x3=doc.GetColumn<double>("x3");
vector <double> x4=doc.GetColumn<double>("x4");
vector <double> x2=doc.GetColumn<double>("x2");
vector <double> y=doc.GetColumn<double>("y");
vector <double> lrc;
for(int i=0;i<10000;i++){
for(int j=0;j<x1.size();j++){
net.feed_forward({x1[j],x2[j],x3[j],x4[j]});
vector <double> ot;
ot.push_back(y[j]==0);
ot.push_back(y[j]==1);
ot.push_back(y[j]==2);
net.feed_backward(ot);
net.calc_cost();
}
double cst=net.get_cost(x1.size());
lrc.push_back(cst);
if(i%100==0) cout<<"Cost="<<cst<<"/i="<<i<<endl;
net.done(x1.size());
}
return 0;
}
Rapid Csv Iris dataset
Answer: Looks plausible. The two biggest pieces of advice I have for you are:
Format your code consistently and idiomatically! One easy way to do this is to use the clang-format tool on it. A more tedious, but rewarding, way is to study other people's code and try to emulate their style. For example, you should instinctively write vector<T>, not vector <T>.
It sounds like you're not sure if your code behaves correctly. For that, you should use unit tests. Figure out what it would mean — what it would look like — for a small part of your code to "behave correctly," and then write a small test that verifies that what you expect is actually what happens. Repeat many times.
Stylistically: Don't do using namespace std;. Every C++ programmer will tell you this. (Why not? There are reasons, but honestly the best reason is because everyone agrees that you shouldn't.)
Forward-declaring class Neuron; above struct connection is strange because connection doesn't actually need to use Neuron for anything.
connection(int i) defines an implicit constructor, such that the following line will compile and do an implicit conversion:
connection conn = 42;
You don't want that. So mark this constructor explicit. (In fact, mark all constructors explicit, except for the two that you want to happen implicitly — that is, copy and move constructors. Everything else should be explicit.)
weight_val and weight_acc look like they should be called set_weight and add_weight, respectively. Use noun phrases for things that are nouns (variables, types) and verb phrases for things that are verbs (functions). Also, avd unnec. abbr'n.
...Oooh! weight_val and weight_acc actually modify different data members! That was sneaky. Okay, from the formula in move, it looks like we've got a sort of an "alpha weight" and a "lambda weight"? I bet these have established names in the literature. So instead of weight_val(x) I would call it set_lambda_weight(x) (or whatever the established name is); instead of weight_acc(x) I would call it add_alpha_weight(x); and instead of reset I would call it set_alpha_weight(0).
Further down, you use get_val() and set_val(x) to get and set a member whose actual name is a. Pick one name for one concept! If its proper name is a, call the methods get_a() and set_a(a). If its proper name is val, then name it val.
void done(double m){
for(unsigned i=0;i<n_layers.size();i++){
for(unsigned j=0;j<n_layers[i].size();j++){
n_layers[i][j].update_weight(m,alpha,lambda);
}
}
}
Again, the name of this method doesn't seem to indicate anything about its purpose. x.done() sounds like we're asking if x is done — it doesn't sound like a mutator method. Seems to me that the function should be called update_all_weights.
The body of this function can be written simply as
void update_all_weights(double m) {
for (Layer& layer : n_layers) {
for (Neuron& neuron : layer) {
neuron.update_weight(m, alpha, lambda);
}
}
}
Notice that to distinguish the name of the type Layer from the name of the variable layer, I had to uppercase the former. You already uppercased Neuron, so uppercasing Layer should be a no-brainer.
weight=weight-alpha*a_weight/m-lambda*weight;
This formula is impossible to read without some whitespace. Look how much clearer this is:
weight = weight - alpha*a_weight/m - lambda*weight;
And then we can rewrite it as:
weight -= ((alpha/m) * a_weight) + (lambda * weight);
I might even split that up into two subtractions, if I knew I wasn't concerned about floating-point precision loss.
weight -= (alpha/m) * a_weight;
weight -= lambda * weight;
double weight,a_weight;
clang-format will probably do this for you (I hope!), but please: one declaration per line!
double weight;
double a_weight;
That should be enough nitpicking to give you something to do. | {
"domain": "codereview.stackexchange",
"id": 38936,
"tags": "c++, machine-learning, neural-network"
} |
Deriving the action and the Lagrangian for a free massive point particle in Special Relativity | Question: My question relates to
Landau & Lifshitz, Classical Theory of Field, Chapter 2: Relativistic Mechanics, Paragraph 8: The principle of least action.
As stated there, to determine the action integral for a free material particle, we note that this integral must not depend on our choice of reference system, that is, must be invariant under Lorenz transformations. Then it follows that it must depend on a scalar. Furthermore, it is clear that the integrand must be a differential of first order. But the only scalar of this kind that one can construct for a free particle is the interval $ds$, or $ads$, where a is some constant. So for a free particle the action must have the form $$
S = -a\int_a^b ds\,.
$$
where $\int_a^b$ is an integral along the world line of a particle between the two particular events of the arrival of the particle at the initial position and at the final position at definite times $t_1$ and $t_2$, i.e. between two given world points; and $a$ is some constant characterizing the particle.
To me these statements seem imprecise. (Maybe it's because I have little background in maths and physics.)
Why should the action be invariant under Lorentz transformations? Is this a postulate, or is it known from experiments? If this invariance follows from the special theory of relativity, then how? Why should the action have the same value in all inertial frames? The analogue of the action in non-relativistic Newtonian mechanics is not invariant under Galilean transformations, if I am not mistaken. See e.g. this Phys.SE post.
It is stated "But the only scalar of this kind that one can construct for a free particle is the interval." Why? Can't the action be, for example,
$$
S = -a\int_a^b x^i x_i ds\,,
$$
which is also invariant.
The derivation of the Lagrangian for a non-relativistic free point particle was more detailed, I think. See e.g. this Phys.SE post. Does the relativistic case need similar detail?
Answer:
Yes, the invariance of the action follows from special relativity – and special relativity is right (not only) because it is experimentally verified. All the equations of motion may be derived from the condition $\delta S = 0$, the action is stationary (which usually means it has the minimum value on the allowed trajectory/history among all trajectories/histories with the same initial and final conditions). If $S$ depended on the inertial system, so would the terms in the equations $\delta S =0$, and these laws of motion couldn't be Lorentz-covariant (note how this Lorentz is spelled; Lorenz also existed but it was a different physicist). Quite generally, you shouldn't think about "derivation of the action". When we work with the action at all, we are doing so because we view the action as the most fundamental expression – and we derive everything else out of it. In that context, we pretty much define a Lorentz-invariant theory as a theory determined by a Lorentz-invariant action.
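As a hedged aside (standard textbook material, not part of the quoted answer): once Poincaré invariance fixes $S = -a\int ds$, expanding in an inertial frame in the slow-motion limit pins down the constant $a$. With $ds = c\sqrt{1 - v^2/c^2}\,dt$,

```latex
S = -a \int_{t_1}^{t_2} c \sqrt{1 - \frac{v^2}{c^2}} \, dt ,
\qquad
L = -ac \sqrt{1 - \frac{v^2}{c^2}} \;\approx\; -ac + \frac{a v^2}{2c} .
```

Matching the $v^2$ term to the Newtonian $L = \tfrac{1}{2}mv^2$ gives $a = mc$, hence $L = -mc^2\sqrt{1 - v^2/c^2}$; the constant $-mc^2$ does not affect the equations of motion.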
Your integral is Lorentz-invariant but it is not translationally invariant under $x^\mu \to x^\mu + a^\mu$. So it's not Poincaré-invariant (the Poincaré symmetry unifies the Lorentz transformations and spacetime translations) and due to this violation, we also say that it disagrees with the laws of special relativity. You could also create other expressions, e.g. replace $x_\mu x^\mu$ in the integral by some extrinsic curvature invariant of the world line etc. Those terms could be made Poincaré-invariant. So the right claim is that the proper length of the world line is the only Poincaré-invariant functional that doesn't depend on any higher derivatives of the coordinates $x^\mu(\tau)$. | {
"domain": "physics.stackexchange",
"id": 8053,
"tags": "classical-mechanics, lagrangian-formalism, symmetry, relativity, action"
} |
Intelligence and Entropy | Question: Is intelligence an entropy transformer?
And is the difference between a lower and a higher intelligence its efficiency?
Answer: Entropy has various uses in Physics. It was first used as a state variable for thermodynamics, connected to energy and heat. Later, it was revealed that entropy is a measure of the disorder of a system. By extension, it is a measure of the observer's ignorance or lack of expectation about the precise, microscopic state of a system. This in turn is related to the modern connection between entropy and information: the higher the entropy of a string of data, the higher the information that can be transmitted.
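To make the entropy-information connection above concrete (my addition, an information-theory illustration rather than a physics claim), the Shannon entropy of a symbol string measures the average surprise per symbol:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy in bits per symbol of a string (or bytes)."""
    counts = Counter(data)
    n = len(data)
    # + 0.0 normalizes the -0.0 that arises for a single-symbol string
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) + 0.0

print(shannon_entropy("aaaa"))  # 0.0 -> fully ordered, no surprise
print(shannon_entropy("abcd"))  # 2.0 -> maximally mixed over 4 symbols
```

A perfectly ordered string carries no information per symbol; a maximally mixed one carries the most — the same low-entropy/high-entropy contrast the answer describes for physical systems.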
In turn, there is no precise definition of "intelligence" in physics. Thus your idea of intelligence as entropy transformer is very speculative. There is much that you can read from here, probably in both information theory and computer science. Maybe you can start from this amazing book. | {
"domain": "physics.stackexchange",
"id": 30815,
"tags": "energy, entropy"
} |
Water under high pressure | Question: If you were to sink a container to the bottom of a deep ocean and seal it there, then bring it up to the surface, would it retain its pressure?
The answer for a gas is obviously yes, but what about for a liquid like water, which is incompressible? Once the crushing weight of the water column above is removed, does the water retain its quality of "pressurizedness" or return to normal water? I guess a clear way to test this would be to bottle up a deep water fish and bring it up to the surface and see if it explodes.
While we're at it, what about a solid? Barring any elasticity and incidental temperature change, will a solid object break a non-sealed glass container which is exactly fitted to it and then placed in vacuum?
Answer: Water is slightly compressible, so it will hold its pressure as long as the container does not stretch.
But since it's only slightly compressible, if the container bursts under pressure it will probably not be an explosive failure. This is because at the time of failure, unlike a gas, the water does not push for a long enough time on the failing part of the container to generate much speed. This is why pressure containers are often pressure tested with water or oil instead of air or other gasses.
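Rough numbers behind that claim (my own illustrative figures — bulk modulus of water ≈ 2.2 GPa, pressure at ~4 km depth ≈ 40 MPa; neither value is from the answer):

```python
K = 2.2e9       # Pa, bulk modulus of water (approximate)
P = 40e6        # Pa, pressure at roughly 4 km ocean depth
strain = P / K  # fractional volume change that stores the pressure
# Only ~1.8%: a sealed rigid container keeps its pressure with a tiny
# stored compression, so a burst releases very little energy.
print(f"volume strain: {strain:.3%}")
```

Compare a gas, which would have to be compressed to a small fraction of its surface volume to hold 40 MPa; that stored expansion energy is what makes a gas-vessel failure explosive.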
If a solid is slightly compressible, it will retain pressure inside a container. In practice, if an incompressible solid is enclosed in a pressurized container, there will usually be some gas or liquid mixed in with it that will retain the pressure. | {
"domain": "physics.stackexchange",
"id": 13096,
"tags": "fluid-dynamics, water, pressure"
} |
Real, non-constant scalar field with special properties in class of 4-dimensional spacetimes | Question: David Deutsch (Oxford University) asked the following question which I think is an interesting one:
In what class of 4-dimensional spacetimes does there exist a real, non-constant scalar field φ with the following properties:
It obeys the wave equation: $\Box\varphi = 0$
Its gradient is everywhere null: $\nabla\varphi \cdot \nabla\varphi = 0$
Deutsch would "like the answer to be 'almost none'" but I am really not sure...
Answer: There are many such spacetimes. Already the Minkowski space, $g_{\mu\nu}={\rm const}$, has a non-constant solution $\varphi$ (in either interpretation 1 or interpretation 2 of the question(v1), cf. Muphrid's comment). The wave eq. in a curved spacetime reads
$$\sum_{\mu,\nu=0}^3\partial_{\mu}\sqrt{-g}g^{\mu\nu}\partial_{\nu}\varphi~=~0.$$
If e.g. the metric $g_{\mu\nu}$ is of the form
$$
g_{\mu\nu}~=~\left[ \begin{array}{cc} -1 & 0 \\ 0 &g_{ij}(x^1,x^2,x^3) \end{array} \right], \qquad \mu,\nu=0,1,2,3,\qquad i,j=1,2,3,
$$
and
$$ \sum_{i,j=1}^3(\partial_{i}\varphi)g^{ij}(\partial_{j}\varphi)~=~0, $$
then we can pick an affine function in time
$$\varphi(x)~=~ ax^0+b, $$
as Nick Kidman suggests in a comment.
If e.g. the metric $g_{\mu\nu}$ is on light-cone form
$$
g_{\mu\nu}~=~\left[ \begin{array}{cc} 0 & -1 & 0 \\-1 & 0 & 0 \\ 0 & 0 &g_{ij}(x^2,x^3) \end{array} \right], \qquad \mu,\nu=+,-,2,3,\qquad i,j=2,3,
$$
and
$$ \sum_{\mu,\nu}(\partial_{\mu}\varphi)g^{\mu\nu}(\partial_{\nu}\varphi)~=~0, $$
then we can e.g. pick an arbitrary function
$$\varphi(x)~=~ f(x^+), $$
where we have used light-cone coordinates $x^{\pm}=\frac{1}{\sqrt{2}}(x^0\pm x^1)$.
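A quick numerical sanity check of the light-cone family above (my addition, plain Python): in flat Minkowski space restricted to the $(t,x)$ plane with metric $\mathrm{diag}(-1,+1)$, take $f = \sin$ and verify both conditions by finite differences.

```python
import math

def phi(t, x):
    # phi = f(x^+), with x^+ = (t + x)/sqrt(2) and f = sin (any f works)
    return math.sin((t + x) / math.sqrt(2))

h = 1e-4              # finite-difference step
t0, x0 = 0.3, 0.7     # arbitrary test point

def d1(var, t, x):    # central first derivative
    return ((phi(t + h, x) - phi(t - h, x)) / (2 * h) if var == "t"
            else (phi(t, x + h) - phi(t, x - h)) / (2 * h))

def d2(var, t, x):    # central second derivative
    return ((phi(t + h, x) - 2 * phi(t, x) + phi(t - h, x)) / h**2 if var == "t"
            else (phi(t, x + h) - 2 * phi(t, x) + phi(t, x - h)) / h**2)

box_phi = -d2("t", t0, x0) + d2("x", t0, x0)            # wave operator
null_norm = -d1("t", t0, x0)**2 + d1("x", t0, x0)**2    # gradient norm
print(box_phi, null_norm)  # both vanish up to discretization error
```

Both quantities cancel term by term exactly as in the analytic argument; the numerics only confirm there is no sign slip.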
NB: It is possible that David Deutsch's claim in simplified terms essentially boils down to the following. Put some measure $\mu$ on the space ${\cal M}$ of all metrics $g_{\mu\nu}$ on, say, spacetime $\mathbb{R}^4$, and consider the subset ${\cal N}\subseteq{\cal M}$ of metrics $g_{\mu\nu}$ that admit a non-constant solution for $\varphi$. David Deutsch's phrase almost none should then be understood as that the subset ${\cal N}$ has measure zero, $\mu({\cal N})=0$. If OP's actual question is whether $\mu({\cal N})$ is zero or not in that sense, then my above answer is insufficient. | {
"domain": "physics.stackexchange",
"id": 5414,
"tags": "spacetime, differential-geometry"
} |
rosserial with arduino NO node! | Question:
I have searched for the solution to my problem; however, I am still puzzled.
Below is my problem:
1. I downloaded the examples from the Arduino IDE onto my Uno 238P.
2. I executed: roscore, then rosrun rosserial_python serial_node.py /dev/ttyACM0
output:
[INFO] [WallTime: 1344165710.415424] [0.000000] ROS Serial Python Node
[INFO] [WallTime: 1344165710.434731] [0.000000] Connected on /dev/ttyACM0 at 57600 baud
But when I list the topics:
/rosout
/rosout_agg
and service list:
/rosout/get_loggers
/rosout/set_logger_level
I have found that other people's terminals show:
[INFO] [WallTime: 1342164886.822488] ROS Serial Python Node
[INFO] [WallTime: 1342164886.825278] Connected on /dev/ttyACM0 at 57600 baud
[INFO] [WallTime: 1342164888.953740] Note: publish buffer size is 512 bytes
[INFO] [WallTime: 1342164888.974499] Starting service client, waiting for service 't
So, how can I solve it? Thanks!
Originally posted by bobliao on ROS Answers with karma: 46 on 2012-08-05
Post score: 0
Answer:
I solved it. I don't know why; I just restarted my computer and it works.
By the way: remember to edit the ttyACM0 in serial_node.py!
Originally posted by bobliao with karma: 46 on 2012-08-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 10481,
"tags": "ros, rosseral-arduino"
} |
ROS Answers SE migration: 2D mapping | Question:
Is there a way (or any example) to build a 2D map from Laser sensor and odometry_msgs without using SLAM? I have found only SLAM references.
Thank you
Originally posted by beto.leonardo on ROS Answers with karma: 1 on 2013-03-27
Post score: 0
Original comments
Comment by joq on 2013-03-27:
SLAM stands for Simultaneous Localization and Mapping. What do you want that is different?
Answer:
I believe octomap_server may do what you're looking for. This package does not try to do the localization portion of SLAM -- it only builds maps based on incoming scan data.
Octomap_server requires you to publish a tf transform between the static world frame and your sensor frame. So, you might need to write a node to re-publish your odometry messages as tf transforms, if you really only have raw odometry messages.
You may also need to create a node to convert raw laser_scan data into 3D point_cloud data for octomap_server. The laser_pipeline package provides some good tutorials on how to do this. In particular, the scan_to_cloud_filter_chain node may provide what you need out-of-the-box.
Once running, octomap_server should build a 3D map from the laser scan data. If you only want 2D maps, you can pull those out using the projected_map topic.
Keep in mind that raw odometry data (e.g. from wheel encoders or IMUs) often has a tendency to drift or slip over time. That's why many mapping solutions use SLAM to correct for errors in the raw odometry data during the mapping process. However, if you have a good source of odometry data, then full SLAM may be overkill and the octomap_server may be just what you're looking for!
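The "re-publish odometry as tf" node mentioned above reduces to converting each pose (position plus unit quaternion) into a transform. A dependency-free sketch of that conversion (the function name and layout are illustrative, not a ROS API):

```python
def pose_to_matrix(px, py, pz, qx, qy, qz, qw):
    """4x4 homogeneous transform from a position and unit quaternion —
    the same math a tf broadcaster applies to an Odometry pose."""
    # Standard quaternion -> rotation matrix expansion.
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw),     px],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw),     py],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy), pz],
        [0.0, 0.0, 0.0, 1.0],
    ]
```

In a real node, this transform — stamped with the odometry message's time — is what a tf broadcaster sends between the static world frame and the sensor frame.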
Originally posted by Jeremy Zoss with karma: 4976 on 2013-03-27
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 13566,
"tags": "navigation, mapping"
} |
Capacitive touch input robot to remote access iPad | Question: I'd like to buy a capacitive touch input robot in order to remotely access my iPad, but I'm having trouble describing the right kind of robot.
I would like to keep lag down to an additional 60 ms so that it is still a high-quality interface.
I would like a robotic arm equipped with a capacitive pen that moves to places on the iPad screen based on the mouse, or an array of capacitive pens that emulates the touch of a user.
I guess I'd use Squires software reflect and the mirror function but I'm open to using an SHD camera with the robotic arm and a pixel sensor array with the array of capacitive pens.
Does this make sense? How could I improve the design? What materials would I need to build it myself, assuming a ready-built arm? How could I build an array of capacitive touch micro pens?
Answer: Since the screen of an iPad is your desired workspace, and all the arm will have to lift is a stylus, this will not have to be a large arm. There are many "hobby" arms built with hobby airplane servos that will probably work for you. I won't list any makes / models because there are many out there.
I am not sure what your comment about lag is about, but if you want a fast arm, you should consider a Delta robot arm configuration, because the "standard" serial arms mentioned above are not very fast. These are harder to come by; the only hobby-level delta arm I have seen was the Robot Army Starter Kit on Kickstarter. | {
"domain": "robotics.stackexchange",
"id": 334,
"tags": "robotic-arm"
} |
Trying for an efficient and readable heap-sort implementation | Question: I'm looking for any sort of optimization and/or conventional practice tips, such as using the appropriate data types and naming my variables appropriately.
#include <iostream>
#include <vector>
#include <cstdlib>
#include <ctime>
constexpr short MIN_ELEMS = 3;
constexpr short MAX_ELEMS = 15;
constexpr short LARGEST_ELEM_INT = 99;
void fill_vector(std::vector<int> &v, short MAX_ELEMS, short LARGEST_ELEM_INT);
void print_vector(const std::vector<int>::iterator start, const std::vector<int>::iterator end);
void max_heap(const std::vector<int>::iterator elem, const std::vector<int>::iterator start, std::vector<int>::iterator end);
void heap_sort(const std::vector<int>::iterator start, const std::vector<int>::iterator end);
int main() {
srand(time(NULL));
short random_range = rand() % MAX_ELEMS + 1;
short num_of_elems = random_range < MIN_ELEMS ? MIN_ELEMS : random_range; // prevents num_of_elems from being less than MIN_ELEMS
std::vector<int> v;
fill_vector(v, num_of_elems, LARGEST_ELEM_INT);
print_vector(v.begin(), v.end());
heap_sort(v.begin(), v.end());
print_vector(v.begin(), v.end());
}
void fill_vector(std::vector<int> &v, short num_of_elems, short LARGEST_ELEM_INT) {
short i = 0;
while (++i, i <= num_of_elems) {
v.push_back(rand() % LARGEST_ELEM_INT);
}
}
void print_vector(const std::vector<int>::iterator start, const std::vector<int>::iterator end) {
for (auto elem = start; elem != end; ++elem) {
std::cout << *elem << ' ';
}
std::cout << '\n';
}
void max_heap(const std::vector<int>::iterator elem, const std::vector<int>::iterator start, const std::vector<int>::iterator end) {
auto biggest = elem;
auto left_child = start+((elem-start)*2+1);
auto right_child = start+((elem-start)*2+2);
if (left_child < end && *left_child > *biggest) {
biggest = left_child;
}
if (right_child < end && *right_child > *biggest) {
biggest = right_child;
}
if (elem != biggest) {
auto val = *biggest;
*biggest = *elem;
*elem = val;
max_heap(biggest, start, end);
}
}
void heap_sort(const std::vector<int>::iterator start, const std::vector<int>::iterator end) {
// sort vector to max heap
for (auto i = start+((end-start)/2)-1; i >= start; --i) {
max_heap(i, start, end);
}
// sort vector in ascending order
for (auto i = start; i != end-1; ++i) {
auto val = *start;
*start = *(start+(end-i-1));
*(start+(end-i-1)) = val;
max_heap(start, start, start+(end-i-1));
}
}
Answer:
Sort your includes. That way, you can keep track even if there are more of them.
Writing a test-program is a good idea. Though print the seed-value, and allow overriding from the command-line for reproducibility.
In line with that, add a method to test whether a range is ordered, print that result and use it for the exit-code too.
I would expect a function named print_vector() to, you know, print a vector. Not an iterator-range from a vector. Also, encoding the type of an argument in the function-name hurts usability, especially in generic code.
fill_vector() is a curious interface. I would expect get_random_data() which returns the vector.
Know your operators. ++i, i <= num_of_elems is equivalent to ++i <= num_of_elems.
Anyway, that should be a for-loop, or you could omit i and just count the argument down to zero.
Kudos for using constexpr to avoid preprocessor-constants where not needed. Still, ALL_CAPS_AND_UNDERSCORES identifiers are generally reserved for preprocessor-macros. They warn/assure everyone that preprocessor-rules apply. Fix the naming too.
The C++ headers <cxxx> modelled on the C headers <xxx.h> only guarantee to put their symbols into ::std. Don't assume they are also in the global namespace.
max_heap() will often try to create pointers far beyond the passed range. Creating such a pointer invokes undefined behavior.
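The out-of-range-pointer pitfall disappears if the heap walks indices instead of iterators. A hedged sketch of the same algorithm, index-based (written in Python for brevity, so not a drop-in replacement for the C++):

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at `start`,
    using indices so no out-of-range pointers are ever formed."""
    root = start
    while True:
        child = 2 * root + 1              # left child index
        if child >= end:
            return
        if child + 1 < end and a[child + 1] > a[child]:
            child += 1                    # pick the larger child
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build the max heap
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):       # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)
    return a
```

An index like `2 * root + 1` is just an integer until it is used to subscript, so the "child beyond the range" case is a plain comparison rather than undefined behavior.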
For simple and correct code, better use indices. | {
"domain": "codereview.stackexchange",
"id": 41406,
"tags": "c++, sorting, heap"
} |
What are details of this claim of Penrose about gravity and QFT being at odds with each other? | Question: Question
Can someone flesh out the details of the argument Roger Penrose makes in this video of a lecture he gave on twistors (starting around 1:25:15) or recommend me the appropriate literature (preferably not behind a paywall)?
Answer: The relevant paper is I believe On the Gravitization of Quantum Mechanics 1: Quantum State Reduction (Penrose, 2014). This is open access.
Penrose actually talks about two types of incompatibility between GR and quantum theory.
One of them is his explicit point that two different vacua can not "legally" be superposed, which I think is mostly a limitation QFT would put, on a conceptual level, upon the validity of using QM to describe a superposed system with a gravitational component. It is discussed in the above paper, and also in chapter 4.2 of Penrose's book 'Fashion, Faith and Fantasy in the New Physics of the Universe' (2016).
The second one is more implicit in the video (and perhaps not even present), and does not involve QFT but plain QM, and it is the fact that when gravity is to be taken into account in QM from the Einsteinian perspective, a quantum superposition actually has to be a superposition of spacetimes, which is not easily handled by the status of time in the QM formalism. While this point is not exposed in the lecture itself, it is developed by Penrose in chapters 30.10 and 30.11 of his book 'The road to reality' (2004) so I believe it is an important part of his thinking about the incompatibilities between GR and QM.
Keep in mind that the overall perspective of Penrose is to keep the GR conceptual framework as is, and 'gravitize quantum mechanics' instead of 'quantizing gravity' (which is by far the most popular approach). | {
"domain": "physics.stackexchange",
"id": 58717,
"tags": "quantum-field-theory, general-relativity, gravity, quantum-gravity"
} |
How to fuse GPS and lidar odometry by using robot_localization? | Question:
Hi, I want to fuse my lidar odometry and GPS by using robot_localization. But the installation offset between the lidar and the GPS is 20 cm, 20 cm, 10 cm. Maybe we will change this mounting location later. So when I use robot_localization, how can I compensate for the mounting offset between the lidar and the GPS?
Originally posted by JACKLiuDay on ROS Answers with karma: 13 on 2022-09-27
Post score: 1
Answer:
I think if your tf's are defined correctly (i.e. they account for this 20 cm, 20 cm, 10 cm translation), then robot_localization should work correctly. What that package does is take the messages from the topics which you specify and try to fuse them. But in order to fuse the information, it needs to know the tf between both sensors' frames. So, first you should set your tf correctly and then use the robot_localization package. I hope it helps!
Originally posted by dvy with karma: 52 on 2022-09-28
This answer was ACCEPTED on the original site
Post score: 1
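For completeness, a hedged sketch of what the comments below describe: publishing the 20 cm, 20 cm, 10 cm mounting offset as a static transform in a ROS1 launch file (the frame names `lidar` and `gps` are illustrative):

```xml
<launch>
  <!-- args: x y z yaw pitch roll parent_frame child_frame -->
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="lidar_to_gps_broadcaster"
        args="0.2 0.2 0.1 0 0 0 lidar gps" />
</launch>
```

Changing the mount later then only means editing these six numbers; robot_localization picks the transform up through tf.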
Original comments
Comment by JACKLiuDay on 2022-09-29:
Thank you so much for your reply. You mean that the installation location information between multiple sensors is obtained through the frame_id in the topics. Through the tf transformation information of each frame_id, the corresponding installation offset information can be queried, so that the robot_localization can work normally. So I'll pass the 20cm 20cm 10cm mount offset to robot_localization via a static tf transform. Thank you again.
Comment by dvy on 2022-09-30:
@JACKLiuDay : You got the idea. However, please note that you need to publish a static tf only if the sensors are not defined in your robot urdf model which I doubt (otherwise, how would you get the sensor data). To check the tf between two frames, you can do: ros2 run tf2_ros tf2_echo velodyne base_link (where velodyne is first frame_id and base_link is another; replace them with your frame_ids) on ROS2 or tf2_tools echo.py <frame_id_1> <frame_id_2> on ROS1. Btw, how did you get odometry information from lidar?
Comment by JACKLiuDay on 2022-10-08:
@dvy Hi, I just finished my vacation. The tf between the two sensors is obtained via robot_state_publisher, which publishes tf messages from sensor.xacro; we can define different sensor links in the xacro file, or publish the transforms between coordinate systems through a static tf transform, as you said. The lidar odometry is obtained by using LIO-SAM: https://github.com/TixiaoShan/LIO-SAM
It can fuse GPS and lidar.
Comment by dvy on 2022-12-20:
@JACKLiuDay : If this answers your question, can you please accept it as correct? | {
"domain": "robotics.stackexchange",
"id": 38000,
"tags": "gps, ros-melodic"
} |
Water in vacuum (or space) and temperature in space | Question:
So, water in vacuum will boil first and then freeze, but I don't understand how the freezing happens. As the pressure drops toward zero, what happens to the freezing point? (I know heat is carried off by the vapor and the water cools down, but I don't think it would get that cold, would it?) In vacuum the boiling point is so low that water shouldn't need as much heat to boil as it does at normal pressure, which means vapor actually takes more heat away at normal pressure than in vacuum, so water at normal pressure would end up cooler? (I'm guessing.)
And temperature comes from the motion of molecules (I guess), so in vacuum there is no temperature?
What happens when I heat up a vacuum tube?
Does heat need a medium to "travel"?
Answer: Conventionally, though with some justification, space is said to begin at the Kármán line, which is 100 km above the Earth's surface, i.e., still pretty close. The atmospheric pressure at this altitude drops to about 0.032 Pa (Wikipedia), which is still a lot more than in outer space (less than $10^{-4}$ Pa, according to Wikipedia).
The phase diagram of water shows that, at this pressure level, water can exist only as a solid or as vapor, depending on temperature, but not as a liquid. The phase transition between solid and gas at that low pressure takes place near 200 K (around −73 °C), which is not that cold.
So, if you drop a blob of water at room temperature and pressure into space, it will instantly start to evaporate (boil) and decompress.
Here I am not sure about what happens. There are accounts from astronauts on the web explaining that the water (actually urine) will first vaporise and then desublimate into tiny crystals, but with no explanation of the actual physical phenomena that drive it.
My own reconstruction of what could happen (before I saw these sites)
is the following.
First the loss of pressure propagates very fast in the liquid (speed of
sound?) while loss of temperature (heat) propagates slowly (as all beer lovers
know from their fridge). So the boiling will essentially take place uniformly in the whole liquid. The phase transition from liquid to gas absorbs heat, and that is what cools the water very quickly as it evaporates.
My guess is also that the energy loss will cool the water down to
sublimation temperature (solid-gas transition) before it all
evaporates, so that some parts of the liquid may be cooled down to
freezing before they have time to evaporate. But as boiling takes
place everywhere, it actually breaks the remaining water into tiny
fragments that cristallize, and possibly also collect some of the
vapor to grow.
Anyway, you apparently get snow.
But the cooling is due to evaporation, which is very fast, much more so than to radiation, which has hardly any time to take place.
Numerical evaluation
We analyze what becomes of the available heat to understand whether some water freezes directly. This is a very rough approximation, as the figures used actually vary somewhat with temperature, but I cannot find the actual values for the extreme temperatures and pressures being considered.
The specific latent heat of evaporation of water is 2270 kJ/kg. The specific heat of water is 4.2 kJ/(kg·K). Hence, evaporating 1 gram of water can cool 2270/4.2 ≈ 540 grams of water by 1 K, or 5.4 grams by 100 K, which is about the difference between room temperature and water's (de)sublimation temperature in space. So my hypothesis that there is not enough heat available to vaporize all the water is correct, as only about one sixth of the water can be vaporized with the available heat.
Out of 5.4 g of water, 1 g will evaporate, though it may cool down to just above the sublimation temperature of 200 K, while the remaining 4.4 g will be cooled to the sublimation temperature without vaporizing, yet. That remaining 4.4 g cannot stay liquid; hence, one part freezes, thus freeing some latent heat for the other part to vaporize. The ratio between the two parts is inversely proportional to the respective specific latent heats of freezing and vaporizing.
Latent heat for freezing is 334 kJ/kg.
The sum of both latent heats is 2270 + 334 = 2604 kJ/kg. These figures are very approximate. As a sanity check, the latent heat of sublimation of water is approximately 2850 kJ/kg (Wikipedia), which shows that the figures are probably correct to within a 10% approximation.
The ratio divides the remaining 4.4 g into approximately 3.8 g that freezes and 0.6 g that evaporates, making a total of about 1.6 g of vaporized water.
So, after this quick calculation, we find that about 70% of the water freezes into some kind of snow, while the remaining 30% is vaporized. And it all happens rather quickly.
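The bookkeeping above can be replayed numerically; a rough sketch using the same temperature-independent figures quoted in the answer:

```python
L_vap = 2270.0   # kJ/kg, latent heat of vaporisation
L_fus = 334.0    # kJ/kg, latent heat of fusion (freezing)
c_w   = 4.2      # kJ/(kg*K), specific heat of liquid water
dT    = 100.0    # K, room temperature down to the ~200 K sublimation threshold

# Stage 1: evaporating 1 g cools L_vap / (c_w * dT) ~ 5.4 g of water by 100 K.
m_total = L_vap / (c_w * dT)            # grams of water per gram vaporised (~5.4)
m_vap1, m_rest = 1.0, m_total - 1.0     # ~4.4 g reach the threshold still liquid

# Stage 2: the rest splits so that freezing supplies the latent heat of vaporisation.
m_frozen = m_rest * L_vap / (L_vap + L_fus)
m_vap2   = m_rest * L_fus / (L_vap + L_fus)

frac_frozen = m_frozen / m_total
frac_vapor  = (m_vap1 + m_vap2) / m_total
print(f"frozen: {frac_frozen:.0%}, vaporised: {frac_vapor:.0%}")  # frozen: 71%, vaporised: 29%
```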
I was actually uneasy about these astronauts' stories of water boiling and then desublimating at once, because that would leave us with all the heat to get rid of very quickly. How? Does anyone have a better account?
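As a numeric aside before the last remark below — that even water starting at the critical temperature cannot vaporise itself completely — the comparison is a one-liner with the same rough figures:

```python
L_vap = 2270.0   # kJ/kg, latent heat of vaporisation
c_w   = 4.2      # kJ/(kg*K), specific heat of liquid water

required_drop  = L_vap / c_w     # ~540 K of self-cooling needed to vaporise everything
available_drop = 650.0 - 200.0   # critical point (~650 K) down to sublimation (~200 K)
print(round(required_drop), required_drop > available_drop)  # 540 True
```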
A last remark is that there will always be some part of the water that gets frozen. I thought initially that very hot water might provide enough heat to vaporize itself completely at low pressure. But the critical point of liquid water is at 650 K (with a much higher pressure than you care to create in space: 22 MPa), which is only 450 K above the sublimation temperature, whereas the water would have to cool by 540 K to provide enough heat to evaporate completely. So the water temperature will drop to the sublimation threshold before enough heat can be supplied to evaporate it completely. This is probably a very simplistic analysis, though. I leave the rest to specialists. | {
"domain": "physics.stackexchange",
"id": 35825,
"tags": "thermodynamics, water, temperature, vacuum, molecules"
} |
Is there a difference between impact force and force coming from Newton's second law? | Question: My friend and I were discussing today why our legs should break if we jump from a taller height, like 20 meters, but not from a mere meter. We both agreed that our legs should break from the taller height, but we couldn't determine the reason behind it. I mean, it doesn't matter whether I drop from 20 meters or only 1 meter, the acceleration due to gravity will be the same in both cases, right? So from Newton's second law, i.e. $F=ma$, my weight should be the same regardless of which height I jump from. Now, my weight is not going to change even after hitting the ground, so shouldn't the reaction force I get from the ground be equal to my weight, since that is the force I am applying on the ground during impact?
If not, why not? Moreover, if the impact force is not the same as my weight (presumably higher), then why do I apply only my weight as a force on the ground when standing on it, while the force becomes different on impact?
Answer: The normal force from the ground must be large enough to give you an upward acceleration, that is, large enough so that you don't sink into the ground. First of all, this means that the normal force will be time dependent: while you are in the air, the normal force is zero; then, as you hit the ground, the normal force grows very rapidly, producing an acceleration that slows you down; finally, the normal force decreases again so that, once your velocity is zero, it equals your weight and you remain on the ground.
When you are jumping from 1 meter, your velocity when you hit the ground is small, and the acceleration needed to make you stand still is small. Hence the maximum normal force does not need to be extremely large.
When you are jumping from 20 meters, your velocity when you hit the ground is large, and the acceleration needed to make you stand still is large. Hence the maximum normal force needs to be large*.
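To put hedged, illustrative numbers on this — the 70 kg mass and the 50 ms stiff-legged stopping time below are assumptions, not figures from the question — the impulse–momentum theorem gives the average normal force during landing:

```python
import math

g = 9.81        # m/s^2
m = 70.0        # kg, assumed body mass
t_stop = 0.05   # s, assumed stopping time (same stiff-legged landing in both cases)

def avg_normal_force(height):
    v = math.sqrt(2 * g * height)     # impact speed after free fall from `height`
    return m * v / t_stop + m * g     # impulse-momentum theorem, plus supporting weight

for h in (1.0, 20.0):
    print(f"{h:>4} m drop: ~{avg_normal_force(h) / 1000:.1f} kN")  # ~6.9 kN vs ~28.4 kN
```

Standing still, the same person exerts only $mg \approx 0.7$ kN, which is why the impact force dwarfs the static weight.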
*Actually, the impulse must be large, which is the time integral of the normal force over the period of landing, from first contact to stand-still. This impulse is equal to the change in momentum. | {
"domain": "physics.stackexchange",
"id": 84799,
"tags": "newtonian-mechanics, forces, collision, free-body-diagram"
} |
Redshift time relation | Question: I was reading that the relation
$dt=\frac{dz}{H(z)(1+z)} $
between time and redshift holds, where $H(z)$ is the Hubble parameter.
I don't understand this. I thought the relation between time and redshift is
$z=H(t-t_0)\Rightarrow dz=H dt$
What am I doing wrong?
Answer: $$dt = \frac{dt}{da}da \frac{a}{a}$$
and we know that $1/a = 1+z$, $da = -(1+z)^{-2}\,dz$, and $\frac{\dot a}{a} = H_0E(z)$ (where $E(z)\equiv H(z)/H_0$),
so we obtain
$$dt = \frac{1}{H_0E(z)} \times \frac{-dz}{(1+z)^2}\times(1+z)$$
$$dt = -\frac{dz}{(1+z)H_0E(z)} \equiv -\frac{dz}{(1+z)H(z)} $$ | {
"domain": "physics.stackexchange",
"id": 73584,
"tags": "cosmology, space-expansion, redshift"
} |
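A numerical footnote to the redshift–time record above: integrating $dt = -\frac{dz}{(1+z)H(z)}$ for a flat $\Lambda$CDM model with $E(z)=\sqrt{\Omega_m(1+z)^3+\Omega_\Lambda}$ gives the lookback time to redshift $z$. The parameter values below ($H_0 = 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_m = 0.3$) are common illustrative choices, not values from the original post:

```python
import math

def lookback_time_gyr(z_max, H0=70.0, Om=0.3, OL=0.7, steps=100_000):
    """Integrate dt = dz / ((1+z) H(z)) with H(z) = H0 * E(z), flat LambdaCDM."""
    hubble_time = (3.0857e19 / H0) / 3.156e16       # 1/H0 in Gyr (Mpc in km, s per Gyr)
    E = lambda z: math.sqrt(Om * (1 + z) ** 3 + OL)
    dz = z_max / steps
    # midpoint rule for the integral of dz / ((1+z) E(z))
    total = sum(dz / ((1 + (i + 0.5) * dz) * E((i + 0.5) * dz)) for i in range(steps))
    return hubble_time * total

print(f"{lookback_time_gyr(1.0):.1f} Gyr")  # ~7.7 Gyr of lookback time to z = 1
```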
Why can UV light initiate a reaction between hydrogen and chlorine gas? | Question: Can someone explain to me how UV light helps combine chlorine gas and hydrogen to produce hydrochloric acid?
$$\ce{Cl2(g) + H2(g) -> 2HCl(g)}$$
Answer: Before going into the mechanism of this reaction, I suggest you look up the free-radical mechanism, as this reaction proceeds through one.
The $\ce{Cl-Cl}$ bond in $\ce{Cl2}$ is weak enough to be broken by mere UV rays (present in sunlight), and hence it undergoes homolytic cleavage (the resultant products are $\ce{Cl}$ atoms, not ions) to form two $\ce{Cl}$ free radicals. These $\ce{Cl}$ free radicals are extremely reactive (owing to the unpaired electron, which they can share in a covalent bond) and hence attack hydrogen gas.
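A quick check of "weak enough to be broken by mere UV rays": converting bond dissociation energies into the longest photon wavelength that can break each bond. The values used (≈243 kJ/mol for $\ce{Cl-Cl}$, ≈436 kJ/mol for $\ce{H-H}$) are typical textbook figures, not from the original answer:

```python
h_planck = 6.626e-34   # J*s, Planck constant
c_light  = 2.998e8     # m/s, speed of light
N_A      = 6.022e23    # 1/mol, Avogadro constant

def max_wavelength_nm(bde_kj_per_mol):
    """Longest wavelength whose photon energy matches the bond dissociation energy."""
    e_photon = bde_kj_per_mol * 1e3 / N_A          # J per single bond
    return h_planck * c_light / e_photon * 1e9     # metres -> nanometres

print(f"Cl-Cl: {max_wavelength_nm(243):.0f} nm")   # ~492 nm: near-UV/visible light suffices
print(f"H-H:   {max_wavelength_nm(436):.0f} nm")   # ~274 nm: needs much harder UV
```

This is why sunlight dissociates $\ce{Cl2}$ but leaves $\ce{H2}$ intact, so the chain starts at chlorine.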
Now consider the energetics. The $\ce{H-Cl}$ bond is a lot stronger than the $\ce{H-H}$ bond, and hence formation of $\ce{H-Cl}$ is favored over $\ce{H-H}$. Therefore, a $\ce{Cl}$ free radical attacks $\ce{H2}$ gas to break the bond and form $\ce{HCl}$. The $\ce{H}$ atom thus liberated attacks another $\ce{Cl2}$ molecule, forming $\ce{HCl}$ and regenerating a $\ce{Cl}$ radical, which propagates the chain. | {
"domain": "chemistry.stackexchange",
"id": 7528,
"tags": "organic-chemistry, photochemistry, radicals, electromagnetic-radiation"
} |